Performant parsing of HTML pages with Node.js and XPath
I think Osmosis is what you're looking for.
- Uses native libxml C bindings
- Supports CSS 3.0 and XPath 1.0 selector hybrids
- Sizzle selectors, Slick selectors, and more
- No large dependencies like jQuery, cheerio, or jsdom
HTML parser features
- Fast parsing
- Very fast searching
- Small memory footprint
HTML DOM features
- Load and search ajax content
- DOM interaction and events
- Execute embedded and remote scripts
- Execute code in the DOM
Here's an example:
var osmosis = require('osmosis');

var count = 0;

// "url" is the page to scrape; "assert" comes from the test
// framework used in osmosis's own examples.
osmosis.get(url)
    .find('//div[@class]/ul[2]/li')
    .then(function () {
        count++;
    })
    .done(function () {
        assert.ok(count == 2);
        assert.done();
    });
You can do so in several steps.

1. Parse the HTML with parse5. The bad part is that the result is not a DOM, though it's fast enough and W3C-compliant.
2. Serialize it to XHTML with xmlserializer, which accepts the DOM-like structure of parse5 as input.
3. Parse that XHTML again with xmldom. Now you finally have a DOM.
4. The xpath library builds upon xmldom, allowing you to run XPath queries. Be aware that XHTML has its own namespace, so queries like //a won't work.
Finally, you get something like this:
const fs = require('mz/fs');
const xpath = require('xpath');
const parse5 = require('parse5');
const xmlser = require('xmlserializer');
const dom = require('xmldom').DOMParser;

(async () => {
    const html = await fs.readFile('./test.htm');
    const document = parse5.parse(html.toString());
    const xhtml = xmlser.serializeToString(document);
    const doc = new dom().parseFromString(xhtml);
    const select = xpath.useNamespaces({"x": "http://www.w3.org/1999/xhtml"});
    const nodes = select("//x:a/@href", doc);
    console.log(nodes);
})();
Note that you have to prepend every single HTML element in a query with the x: prefix. For example, to match an a inside a div, you would need:

//x:div/x:a
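If typing the prefix by hand gets tedious, a small helper can rewrite a plain path into a namespaced one. This is just an illustrative sketch (prefixXhtml is a made-up name, and the regex only handles simple element and attribute steps, not functions or element names inside predicates):

```javascript
// Prefix each element step of a simple XPath expression with the
// XHTML namespace prefix "x:". Attribute steps (@href) and numeric
// predicates ([2]) are left untouched.
function prefixXhtml(xpathExpr) {
    return xpathExpr.replace(/(^|\/)(?!@)([A-Za-z][\w-]*)/g, '$1x:$2');
}

console.log(prefixXhtml('//div/a'));    // //x:div/x:a
console.log(prefixXhtml('//a/@href'));  // //x:a/@href
```

The resulting string can then be passed to the select function created with xpath.useNamespaces.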
I have just started using htmlstrip-native (npm install htmlstrip-native), which uses a native implementation to parse and extract the relevant HTML parts. It claims to be 50 times faster than the pure JS implementation (I have not verified that claim).

Depending on your needs you can use htmlstrip-native directly, or lift the code and bindings to make your own C++ module like the one used internally in htmlstrip-native.

If you want to use XPath, then use the wrapper already available here: https://www.npmjs.org/package/xpath
Libxmljs is currently the fastest implementation (according to benchmarks), since it consists only of bindings to the libxml C library, which supports XPath 1.0 queries:
var libxmljs = require("libxmljs");
var xmlDoc = libxmljs.parseXml(xml);
// xpath queries
var gchild = xmlDoc.get('//grandchild');
However, you need to sanitize your HTML first and convert it to proper XML. For that you could either use the HTMLTidy command line utility (tidy -q -asxml input.html), or if you want to keep it Node-only, something like xmlserializer should do the trick.