Scalable Bitmaps
For over a year now I’ve been keeping an eye on the ongoing responsive images debates & thinking about how images can, do, and should “work” on the web. I’ve written some words on the subject here and there; I’ve recently put some code out into the world which embodies those thoughts: scalables.js.
What are you thinking!?
A couple of core ideas:
- HTML is for describing content; CSS is for laying it out.
- The rendered size of an image on a page is primarily a function of the page’s layout, not a file’s intrinsic dimensions.
Allow me to digress.
My boss is firmly rooted in print design. Whenever he asks me how “big” a digital image is, I have to tell him two things: a dpi and a physical size in inches. “1200 × 900” might be a concise and true description of the file’s data, but when speaking with the boss I have to translate to an arbitrary physical size: “it’s 4˝ × 3˝ at 300 dpi.” Of course nothing about the file is intrinsically 4˝ × 3˝, and it doesn’t “have” 300 dpi; it could just as well render at 6˝ × 4½˝ at 200 dpi, 1˝ × ¾˝ at 1200 dpi, etc.
Substitute “CSS-pixels” for “inches” and some “@2x” declarations for the dpi numbers and that’s a paragraph about responsive images and much of the confusion surrounding them.
Back in the bad old days, when one CSS-pixel equaled one device-pixel and we all designed fixed-width layouts, it made perfect sense to tailor an image’s data-pixels to the size that we knew that image would render at on our layout. We baked a presentational size into the image file itself such that the number of data-pixels we were sending across the wire equaled the number of device-pixels we knew we would have to fill. Data-pixels == CSS-pixels == device-pixels. We delivered perfectly crisp images with maximum efficiency.
As I see it, the “responsive-images problem” is this: we no longer know how many device-pixels we’ll have to fill.
One reason: hi-DPI screens have caused the CSS-pixel/device-pixel ratio to become variable. srcset="" began as an attempt to solve this by allowing authors to specify different files to use at different ratios.
But the number of CSS-pixels we will be filling with a given image has also become variable. The web is being viewed on bigger and smaller devices than ever before and the resulting movement towards fluid layouts means that, in addition to not knowing how many device-pixels make up a CSS-pixel, we now have no idea how many CSS-pixels a given <img> will occupy within our layout.
Our three pixels — data, CSS, and device — have decoupled. A pixel is not a pixel is not a pixel.
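A quick, hedged illustration of that decoupling — the three counts can only be reconciled at runtime, in the browser. The snippet below uses plain DOM APIs and a placeholder querySelector; nothing here is specific to any library:

var img = document.querySelector('img');                        // any image in the layout
var cssPixels = img.getBoundingClientRect().width;              // decided by the layout
var devicePixels = cssPixels * (window.devicePixelRatio || 1);  // decided by the screen
// The "right" number of data-pixels to send is roughly devicePixels —
// a number the author can't know when writing the markup or saving the file.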
Everyone seems to agree that until browsers learn to deftly deal with a new format that either contains an image at multiple resolutions or abandons traditional notions of pixels altogether (aside: is anybody working on this?), supplying multiple source files to accommodate this newfound range of possibilities is the way to go.
But creating a mechanism for deciding which of these sources to download — sending an appropriate number of data-pixels over the wire — has proved tricky.
<picture> and srcset="" both attempt to solve the second, fluid-layout problem by placing viewport queries in HTML. This is fundamentally wrong-headed, as it requires authors to tailor those queries to a specific layout, baking (invisible! complex! error-prone!) presentational information into their HTML. I wrote a lot more about this on the WHATWG mailing list. Briefly:
Markup shouldn’t depend on layout. I shouldn’t have to work from the viewport size in — thinking through every aspect of the layout that affects the image’s size — when writing my markup. My markup shouldn’t break when I add a new breakpoint, tweak a max-width, or adjust some padding.
I shouldn’t have to declare that the source that I want to load for 800-pixel-wide viewports on 1x devices is also the source that I want to load for 400-pixel-wide viewports on 2x devices. I shouldn’t fear having to add more markup in the future, to support device-pixel-ratios greater than 2.
These are symptoms — the root problem is that we’re explicitly associating presentational information with our <img>. An image source doesn’t “have” an intrinsic device-pixel-ratio, or a viewport max-width, any more than it has 4˝, or 300 dpi. It’s just a bunch of data — a bunch of pixels.
So in markup, let me simply describe those pixels.
Then, I’ll size the <img> via CSS however I please, and leave it to somebody else (maybe, someday, browsers? for now: scalables.js) to evaluate how many device-pixels the image occupies on-screen — and let that somebody else figure out which source to load for me. Instead of this:
<img src="200.jpg" srcset="400.jpg 448w,
800.jpg 512w,
200.jpg 896w,
400.jpg 1x,
800.jpg 512w 2x,
400.jpg 896w 2x,
800.jpg 2x" />
or this:
<picture>
<source media="(min-width: 56em)" srcset="400.jpg 1x, 800.jpg 2x">
<source media="(min width: 32em)" srcset="200.jpg 1x, 400.jpg 2x">
<source media="(min-width: 28em)" src="800.jpg">
<source srcset="400.jpg 1x, 800.jpg 2x">
<img src="200.jpg">
</picture>
(which are equivalent, and depend on the specific [pretty simple!] layout I outlined in my WHATWG-email example), with scalables.js you write this:
<div data-scalable>
<img src="200.jpg" data-width="200" data-height="133">
<p>View image:
<a href="800.jpg" data-width="800" data-height="533">full size (800 x 533)</a> /
<a href="400.jpg" data-width="400" data-height="266">half-size (400 x 266)</a>
</p>
</div>
…which will work on any layout.
Fallbacks & future friend-ifying
Another idea:
- Progressive enhancement, done well, goes both ways: we begin from bedrock and extend to the stars.
Let’s break that markup down. A “scalable” consists of a parent element with the data-scalable attribute, containing a thumbnail <img> and references to a range of source files — not in <source> tags or in a srcset="" attribute, but using good, old-fashioned anchors — they’re just links! The thumbnail and each link have data-width and data-height attributes which describe the dimensions of the data therein.
Thus, there is no separate markup for fallback content, as there is with the <object>, <video>, <audio>, and proposed <picture> elements. There’s just the content: a thumbnail and a bunch of options to view or download the image at a range of resolutions, marked up like it’s 1993.
From there, we enhance. If you’re CSS-capable, we’ll scale that thumbnail up to fit the layout. If you can run JavaScript, we’ll pick an appropriately-sized source and swap it in for the now-probably-fuzzy thumbnail. From there, authors can choose to hide the alternate-version links, though in certain contexts, it might make sense to keep them around.
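To make that JavaScript step concrete, here’s a rough sketch of the kind of selection logic involved — not scalables.js itself, just the general idea applied to the markup shown above: measure how many device-pixels the CSS-scaled thumbnail occupies, then pick the smallest linked source whose data-width covers it.

function pickSource(scalable) {
  var img = scalable.querySelector('img');
  var needed = img.getBoundingClientRect().width * (window.devicePixelRatio || 1);

  // Gather the linked sources, smallest first.
  var candidates = Array.prototype.map.call(
    scalable.querySelectorAll('a[data-width]'),
    function (a) {
      return { href: a.href, width: parseInt(a.getAttribute('data-width'), 10) };
    }
  ).sort(function (a, b) { return a.width - b.width; });

  // Smallest source wide enough to fill the space; failing that, the biggest available.
  for (var i = 0; i < candidates.length; i++) {
    if (candidates[i].width >= needed) { return candidates[i].href; }
  }
  return candidates.length ? candidates[candidates.length - 1].href : null;
}

The “smallest source that covers the space” rule is just one reasonable policy; an implementation might round less aggressively on very dense screens to save bandwidth.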
How to size the source files? As Jason Grigsby has pointed out, this is not a simple question. I am of the opinion that, thus freed from the need to serve one-size-fits-all source files, authors should supply as wide a range as possible. In the context of my photography work, I’ve been thinking that means full-sized, 14-megapixel images at the top end and 100-pixel-long-side thumbnails at the bottom. If you can’t justify storing giant files that will be accessed by a rounding-error-esque percentage of users, or if you don’t want anyone printing or otherwise redistributing full-resolution versions of your images, or if you think it’s insane to present five or six different versions of the same image to people who end up seeing the links, you’ll temper the top end of that range accordingly.
But for me there is something pleasingly definitive about it. This is my content: all of it. Unabridged. Supplying full-resolution files also means never having to re-render a resource for the web again.
(Until I decide to switch to WebP or JPEG 2000)
How to fill the space between the extremes? Hierarchical JPEGs, tiled-pyramid TIFFs, and .icns files — all bitmap image formats designed to accommodate scaling — use halving/doubling schemes; seems reasonable.
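As a sketch of what that might look like, here’s a hypothetical helper that builds a halving ladder of widths from a full-resolution original down to a thumbnail floor (the 4608-pixel starting width is just an example — roughly a 14-megapixel 3:2 frame):

// Halve the width repeatedly until we drop below the thumbnail floor.
function halvingLadder(fullWidth, floor) {
  var widths = [];
  for (var w = fullWidth; w >= floor; w = Math.round(w / 2)) {
    widths.push(w);
  }
  return widths;
}

halvingLadder(4608, 100); // [4608, 2304, 1152, 576, 288, 144]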
Speed, double downloads, & preloaders
Okay, back to strong opinions. Last big idea:
- Perceived speed is the most important measure of performance.
Thus far, any proposal for a “responsive images solution” that does what scalables.js does, and what I’m explicitly advocating for — waiting for layout information before deciding which source file to load — has been considered dead on arrival. Waiting breaks preloading, and preloading images is a big deal.
“I think preloading is the single biggest performance improvement browsers have ever made.” — Steve Souders
As far as I can tell, in the minds of browser makers, that huge preloading performance win trumps any ideological notions of content/presentation separation (as well as concerns about markup maintainability and ease of authorship); viewport queries are the only way to let the browser know enough about the layout (before it knows anything about the layout) to start preloading an appropriate source — they’re the only way to have our responsive cake and eat delicious preloading performance too.
“…it doesn’t have to be ideal — all we need is that it be possible. … Browser vendors have indicated that they cannot wait for layout before they have to fetch the image.” — Ian Hickson
(Jason Grigsby wrote a particularly excellent post about this.)
And this whole party really got started when Matt Marquis wrote a seminal A List Apart article about the need for new markup to deal with the responsive images problem — specifically due to the fact that if authors dynamically switch <img> sources after the preloader has begun its work, users end up downloading the image twice.
Brazenly, scalables.js waits for layout and usually ends up downloading images twice. Is it slow? No. Pages appear faster. If used properly, the bandwidth hit isn’t even all that bad. Let me explain.
(And let me preface this by saying that I got most of these ideas from Jeremy Keith’s excellent post and Paul Robert Lloyd’s eye-opening example [whose code I stole and started wildly hacking at months ago to kick this whole thing off].)
The key is having a very-small thumbnail. That few-kB thumbnail gets pre-loaded far faster than a larger image would — it is then sized up & styled as the rest of the page loads & paints around it. The user gets a useable page (with fuzzy but identifiable images) ASAP — at least as soon and probably sooner than they would with <picture> or srcset="".
Only then does scalables.js start figuring out which higher-res sources to load. Loading the final, crisp image takes longer than it would have otherwise — and we’ve incurred a few extra kilobytes and HTTP requests along the way — but we rendered a useable, meaningful page before anybody else; in my book, that counts as a win. And preloaders — far from being circumvented with 1px transparent GIFs, <img>s with no src="", or no <img> element at all — carried us to that finish line.
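A minimal sketch of that sequence, assuming a pickSource() helper like the hypothetical one above: the preloader fetches the tiny thumbnail in the <img> as usual; then, once layout has happened, the chosen high-res source is fetched off-screen and swapped in only when it has finished loading, so the fuzzy-but-useable page is never blocked.

window.addEventListener('load', function () {
  var scalables = document.querySelectorAll('[data-scalable]');
  Array.prototype.forEach.call(scalables, function (scalable) {
    var img = scalable.querySelector('img');
    var href = pickSource(scalable);            // hypothetical helper sketched earlier
    if (!href || href === img.src) { return; }

    var hiRes = new Image();
    hiRes.onload = function () { img.src = hiRes.src; };  // swap only once it's ready
    hiRes.src = href;                                     // the "second download" starts here
  });
});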
The “art direction” use-case
So it appears that I’ve written 2,000 words but there’s this thing I haven’t mentioned. Art direction.
Scalables.js doesn’t do it. Achieving such things in markup blurs that same old line between presentation and content — a line that I am trying to paint bright red.
Whatever happened to content-parity? Would you crop the Mona Lisa?
Would you display:none; paragraphs out of your blog posts for on-the-go mobile users?
If so — I bet you’d use CSS!
*steps off of soapbox*
The code is on GitHub. By day and by trade I am a printer and sometimes-photographer; scalables.js is the first code I’ve ever published. Even just attempting to contribute to the direction of something as big and important as The Web feels wondrous & a bit scary. If any of this is good, it’s probably the bits I stole from all of the smart people I’ve been reading for the past year and have linked to throughout.
A post like this makes me sad that I don’t have comments on this site, but I do have a twitter and an email.