Which CSS selectors or rules can significantly affect front-end layout / rendering performance in the real world?
The first thing that comes to mind here is: how clever is the rendering engine you're using?
That, generic as it sounds, matters a lot when questioning the efficiency of CSS rendering/selection. For instance, suppose the first rule in your CSS file is:
.class1 {
/*make elements with "class1" look fancy*/
}
So when a very basic engine sees that (and since this is the first rule), it goes and looks at every element in your DOM, checking for the existence of class1 in each. Better engines probably map class names to a list of DOM elements and use something like a hashtable for efficient lookup.
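To make that concrete, here's a minimal sketch of the two strategies - purely illustrative types and names, not any real engine's internals:

type El = { classes: Set<string> };

// Naive engine: scan every element for every rule - O(n) per class lookup.
function naiveLookup(dom: El[], cls: string): El[] {
  return dom.filter(el => el.classes.has(cls));
}

// Smarter engine: build a class-name -> elements index once, after which
// each class lookup is a single hash access.
function buildClassIndex(dom: El[]): Map<string, El[]> {
  const index = new Map<string, El[]>();
  for (const el of dom) {
    for (const cls of el.classes) {
      const bucket = index.get(cls) ?? [];
      bucket.push(el);
      index.set(cls, bucket);
    }
  }
  return index;
}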
Now consider a rule that combines two classes:
.class1.class2 {
/*make elements with both "class1" and "class2" look extra fancy*/
}
Our example "basic engine" would go and revisit each element in DOM looking for both classes. A cleverer engine will compare n('class1')
and n('class2')
where n(str)
is number of elements in DOM with the class str
, and takes whichever is minimum; suppose that's class1
, then passes on all elements with class1
looking for elements that have class2
as well.
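Continuing the illustrative sketch from above (same hypothetical El shape), that min-list strategy might look like:

type El = { classes: Set<string> };

// Match ".class1.class2" by scanning only the smaller candidate list,
// i.e. the class with min(n('class1'), n('class2')) elements.
function compoundLookup(index: Map<string, El[]>, a: string, b: string): El[] {
  const listA = index.get(a) ?? [];
  const listB = index.get(b) ?? [];
  const [smaller, otherCls]: [El[], string] =
    listA.length <= listB.length ? [listA, b] : [listB, a];
  return smaller.filter(el => el.classes.has(otherCls));
}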
In any case, modern engines are clever (way more clever than the example discussed above), and shiny new processors can do millions (tens of millions) of operations a second. It's quite unlikely that you have millions of elements in your DOM, so the worst-case performance for any selection, O(n), won't be too bad anyhow.
Update:
To get some actual practical illustrative proof, I've decided to do some tests. First of all, to get an idea about how many DOM elements on average we can see in real-world applications, let's take a look at how many elements some popular sites' webpages have:
Facebook: ~1900 elements (tested on my personal main page).
Google: ~340 elements (tested on the main page, no search results).
Google: ~950 elements (tested on a search result page).
Yahoo!: ~1400 elements (tested on the main page).
Stackoverflow: ~680 elements (tested on a question page).
AOL: ~1060 elements (tested on the main page).
Wikipedia: ~6000 elements, 2420 of which aren't spans or anchors (tested on the Wikipedia article about Glee).
Twitter: ~270 elements (tested on the main page).
Averaging those, we get roughly 1500 elements per page. Now it's time to do some testing. For each test, I generated 1500 divs (nested within some other divs for some tests), each with appropriate attributes depending on the test.
The tests
The styles and elements are all generated using PHP. I've uploaded the PHP files I used and created an index, so that others can test locally: little link.
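For reference, a rough in-browser equivalent of one such test might look like the sketch below. The class and title values are hypothetical, and note that this times querySelectorAll matching only, rather than the full style-and-render pass the original pages measure:

// Generate 1500 divs, then time a selector query against them.
for (let i = 0; i < 1500; i++) {
  const div = document.createElement("div");
  div.className = "test-class";   // hypothetical class name
  div.title = "ttl";
  document.body.appendChild(div);
}

const start = performance.now();
const matches = document.querySelectorAll('.test-class[title="ttl"]');
console.log(`${matches.length} matches in ${performance.now() - start}ms`);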
Results:
Each test is performed 5 times on three browsers (the average time is reported): Firefox 15.0 (A), Chrome 19.0.1084.1 (B), Internet Explorer 8 (C):
                                                                      A       B       C
1500 class selectors (.classname)                                     35ms    100ms   35ms
1500 class selectors, more specific (div.classname)                   36ms    110ms   37ms
1500 class selectors, even more specific (div div.classname)          40ms    115ms   40ms
1500 id selectors (#id)                                               35ms    99ms    35ms
1500 id selectors, more specific (div#id)                             35ms    105ms   38ms
1500 id selectors, even more specific (div div#id)                    40ms    110ms   39ms
1500 class selectors, with attribute (.class[title="ttl"])            45ms    400ms   2000ms
1500 class selectors, more complex attribute (.class[title~="ttl"])   45ms    1050ms  2200ms
Similar experiments:
Apparently other people have carried out similar experiments; this one has some useful statistics as well: little link.
The bottom line:
Unless you care about saving a few milliseconds when rendering (1ms = 0.001s), don't bother giving this too much thought. On the other hand, it's good practice to avoid using complex selectors to select large subsets of elements, as that can make a noticeable difference (as we can see from the test results above). All common CSS selectors are reasonably fast in modern browsers.
Suppose you're building a chat page, and you want to style all the messages. You know that each message is in a div which has a title and is nested within a div with the class .chatpage. It is correct to use .chatpage div[title] to select the messages, but it's also bad practice efficiency-wise. It's simpler, more maintainable, and more efficient to give all the messages a class and select them using that class.
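As a quick illustrative sketch (hypothetical markup: each message div is assumed to carry a .message class), you can compare the two approaches directly:

// Time the attribute-based selector against a dedicated class.
function time(selector: string): number {
  const start = performance.now();
  document.querySelectorAll(selector);
  return performance.now() - start;
}

console.log("attribute:", time(".chatpage div[title]"), "ms");
console.log("class:    ", time(".chatpage .message"), "ms");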
The fancy one-liner conclusion:
Anything within the limits of "yeah, this CSS makes sense" is okay.
Most answers here focus on selector performance as if it were the only thing that matters. I'll try to cover some spriting trivia (spoiler alert: sprites are not always a good idea), CSS used-value performance, and the rendering of certain properties.
Before I get to the answer, let me get an IMO out of the way: personally, I strongly disagree with the stated need for "evidence-based data". It simply makes a performance claim appear credible, while in reality the field of rendering engines is heterogeneous enough to make any such statistical conclusion inaccurate to measure and impractical to adopt or monitor.
As original findings quickly become outdated, I'd rather see front-end devs have an understanding of foundation principles and their relative value against maintainability/readability brownie points - after all, premature optimization is the root of all evil ;)
Let's start with selector performance:
Shallow, preferably one-level, specific selectors are processed faster. Explicit performance metrics are missing from the original answer, but the key point remains: at runtime, an HTML document is parsed into a DOM tree containing N elements with an average depth D, and that tree has a total of S CSS rules applied. To lower the computational complexity O(N*D*S), you should:
Have the right-most key match as few elements as possible - selectors are matched right-to-left for individual rule eligibility, so if the right-most key does not match a particular element, there is no need to process the selector further and it is discarded.
It is commonly accepted that the * selector should be avoided, but this point should be taken further. A "normal" CSS reset does, in fact, match most elements - when this SO page is profiled, the reset is responsible for about 1/3 of all selector matching time, so you may prefer normalize.css (still, that only adds up to 3.5ms - the point against premature optimisation stands strong).
Avoid descendant selectors, as they require up to ~D elements to be iterated over. This mainly impacts mismatch confirmations - for instance, a positive .container .content match may only require one step for elements in a parent-child relationship, but the DOM tree will need to be traversed all the way up to html before a negative match can be confirmed (see the sketch after this list).
Minimize the number of DOM elements, as their styles are applied individually (worth noting, this gets offset by browser logic such as reference caching and recycling styles from identical elements - for instance, when styling identical siblings).
Remove unused rules, since the browser ends up having to evaluate their applicability for every element rendered. Enough said - the fastest rule is the one that isn't there :)
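To illustrate the descendant-selector point above, here is a simplified, purely illustrative model of right-to-left matching for a two-key selector like .container .content (real engines add bloom filters, caches and much more):

type TreeNode = { classes: Set<string>; parent: TreeNode | null };

// Right-most key first: a cheap rejection without touching ancestors.
// Otherwise walk up - a negative match can cost up to ~D steps.
function matchesDescendant(el: TreeNode, ancestor: string, key: string): boolean {
  if (!el.classes.has(key)) return false;
  for (let p = el.parent; p !== null; p = p.parent) {
    if (p.classes.has(ancestor)) return true;
  }
  return false;
}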
These will result in quantifiable (but, depending on the page, not necessarily perceivable) improvements from a rendering-engine performance standpoint; however, there are always additional factors such as traffic overhead and DOM parsing.
Next, CSS3 properties performance:
CSS3 brought us (among other things) rounded corners, background gradients and drop-shadow variations - and with them, a truckload of issues. Think about it: by definition, a pre-rendered image performs better than a set of CSS3 rules that has to be rendered first. From the webkit wiki:
Gradients, shadows, and other decorations in CSS should be used only when necessary (e.g. when the shape is dynamic based on the content) - otherwise, static images are always faster.
If that's not bad enough, gradients etc. may have to be recalculated on every repaint/reflow event (more details below). Keep this in mind until the majority of users can browse a css3-heavy page like this without noticeable lag.
Next, spriting performance:
Avoid tall and wide sprites, even if their traffic footprint is relatively small. It is commonly forgotten that a rendering engine cannot work with gif/jpg/png directly; at runtime, all graphical assets are handled as uncompressed bitmaps. At least it's easy to calculate: this sprite's width times height times four bytes per pixel (RGBA) is 238*1073*4 ≅ 1MB. Use it on a few elements across different simultaneously open tabs, and it quickly adds up to a significant value.
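The arithmetic is easy to wrap in a helper (a trivial sketch, nothing engine-specific):

// Decoded in-memory size of an image: width * height * 4 bytes (RGBA),
// regardless of how small the compressed gif/jpg/png is on the wire.
function decodedSizeBytes(width: number, height: number): number {
  return width * height * 4;
}

console.log(decodedSizeBytes(238, 1073)); // 1021496 bytes, i.e. ~1MB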
A rather extreme case of this was picked up on mozilla webdev, but it is not at all unexpected when questionable practices like diagonal sprites are used.
An alternative to consider is individual base64-encoded images embedded directly into CSS.
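A minimal Node sketch of that alternative (hypothetical file and class names):

import { readFileSync } from "fs";

// Inline a small image into CSS as a base64 data URI instead of
// packing it into a sprite sheet ("icon.png" is a placeholder).
const png = readFileSync("icon.png");
const dataUri = `data:image/png;base64,${png.toString("base64")}`;
console.log(`.icon { background-image: url("${dataUri}"); }`);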
Next, reflows and repaints:
It is a misconception that a reflow can only be triggered by JS DOM manipulation - in fact, any application of a layout-affecting style will trigger one, affecting the target element, its children, and the elements that follow it. The only way to prevent unnecessary iterations is to try to avoid rendering dependencies. A straightforward example of this is the rendering of tables:
Tables often require multiple passes before the layout is completely established because they are one of the rare cases where elements can affect the display of other elements that came before them on the DOM. Imagine a cell at the end of the table with very wide content that causes the column to be completely resized. This is why tables are not rendered progressively in all browsers.
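Reflow cost is easiest to observe from script, even though, as noted above, script is not required to trigger one. A minimal sketch using standard DOM APIs (the .box elements are hypothetical):

// Layout thrashing: each offsetHeight read forces the engine to flush
// pending style changes and recompute layout synchronously.
const boxes = document.querySelectorAll<HTMLElement>(".box");
boxes.forEach(box => {
  box.style.width = "100px";     // invalidates layout...
  console.log(box.offsetHeight); // ...and this read forces a reflow
});
// Batching all writes first, then all reads, avoids the repeated reflows.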
I'll make edits if I recall something important that has been missed. Some links to finish with:
http://perfectionkills.com/profiling-css-for-fun-and-profit-optimization-notes/
http://jacwright.com/476/runtime-performance-with-css3-vs-images/
https://developers.google.com/speed/docs/best-practices/payload
https://trac.webkit.org/wiki/QtWebKitGraphics
https://blog.mozilla.org/webdev/2009/06/22/use-sprites-wisely/
http://dev.opera.com/articles/view/efficient-javascript/
While it's true that computers were way slower 10 years ago, you also have a much wider variety of devices capable of accessing your website these days. And while desktops/laptops have come on in leaps and bounds, devices in the mid- and low-end smartphone market in many cases aren't much more powerful than the desktops we had ten years ago.
But having said that, CSS selection speed is probably near the bottom of the list of things you need to worry about in terms of providing a good experience to as broad a device range as possible.
Expanding upon this, I was unable to find specific information about more modern browsers or mobile devices struggling with inefficient CSS selectors, but I was able to find the following:
http://www.stevesouders.com/blog/2009/03/10/performance-impact-of-css-selectors/
Quite dated now (IE8, Chrome 2), but makes a decent attempt at establishing the efficiency of various selectors in some browsers, and also tries to quantify how the number of CSS rules impacts page rendering time.
http://www.thebrightlines.com/2010/07/28/css-performance-who-cares/
Again quite dated (IE8, Chrome 6), but goes to extremes with deliberately inefficient CSS selectors such as
* * * * * * * * * { background: #ff1; }
to establish performance degradation.