Responsive Images: The Incomplete <picture>

If there's one thing an ample number of articles have been telling front-end developers they must adopt over the last five years, it's responsive images. We're all aware that more and more users are viewing our websites on mobile devices, and standard-definition image assets simply won't do for today's retina displays. Thankfully, the W3C and WHATWG standards bodies have done a great job of taking feedback and giving front-end developers simple but powerful tools for serving images at a wide range of resolutions across all the devices we now have to target. However, while eager web developers have been pushing everyone to add these new tools to their toolkit, recent projects have taught me that there are quite a few caveats the considerate developer should be aware of before they start serving responsive images.

First, let's discuss what tools currently exist and when to use them. The two main responsive image solutions added to HTML are the srcset attribute on <img> and the <picture> element. <picture> is a container for a set of image <source>s, where each <source> provides a srcset attribute with a path to the image you want to show and a media attribute with a standard media query breakpoint. You can list as many <source>s as you have images, and each srcset can also offer both 1x and 2x versions of an image for retina screens. Finally, a normal <img> tag goes at the end of your <source>s with all the standard src and alt attributes you'd normally include. Regardless of what resolution you're viewing the page at, the <img> tag is the only rendered element; all the <source>s do is swap out the image the <img> tag displays. And in the case that the browser doesn't support responsive images, the <img> tag is a safe fallback. If you're going to do any extra styling on your picture, just apply it to the <img> tag, as in the sketch below.
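To make that concrete, here's a minimal sketch of both approaches. The file names, the 600px breakpoint, and the alt text are placeholders of my own, not anything the spec mandates:

<picture>
  <source media="(min-width: 600px)" srcset="hero-wide.jpg 1x, hero-wide@2x.jpg 2x">
  <source srcset="hero-narrow.jpg 1x, hero-narrow@2x.jpg 2x">
  <img src="hero-narrow.jpg" alt="Description of the image">
</picture>

The standalone srcset variant is even shorter, for when you only need resolution switching and no art direction:

<img src="photo.jpg" srcset="photo.jpg 1x, photo@2x.jpg 2x" alt="Description of the image">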

The Pendulum Swings

Sometime around 2001, I made a personal technology prediction. It was a time when blogs were the latest thing to change the face of the web. Everyone influential was starting their own weblog, LiveJournal was still a little ways off from reaching critical mass, and Blogger was years away from becoming a Google property. I'd been running my own website about my interests for years, and I'd installed Apache on my family desktop. Microsoft had just dropped its latest, greatest operating system, Windows XP, and Apple was turning nerds' heads for the first time in a while with a stable release of OS X.

Putting all this together, I started to envision a future of the web where just having a computer gave everyone the ability to participate online with their own blog. Microsoft's and Apple's next operating systems should ship every copy with Apache bundled in at the OS level. Users could flip a switch and publish their blog right from their desktop, running over their home internet connection. Tossing photos into your Pictures folder would create an image gallery. Text files in your Documents folder would be shared as blog entries. All you had to do was give someone your IP address, or a domain name that resolved to it, and everyone and their mother could have their very own cutting-edge weblog.

It sounds like one of those crazy dot-com-fueled ideas that nobody would really want. Apple did, in fact, ship OS X with a built-in version of Apache, enabled through a Personal Web Sharing toggle, until the feature was put to rest in 10.8. However, even stripped down, Apache still required writing HTML files, so the feature never caught on, even as LiveJournal peaked and users migrated to more complicated platforms like MySpace. Why? There was clearly user interest, so why did so few people publish their own websites, yet flock to place their content on other people's?

Okay, there are many technical reasons this didn't take off. Broadband was still exceedingly rare in the early 2000s. Most people weren't comfortable leaving their computer on 24 hours a day. And it would have been a huge undertaking for any operating system maker to educate users about why they might want to share information from their computer and how to do it safely. Let's also put aside that by the time Microsoft's next operating system shipped, pretty much anyone who thought they wanted a blog already had one. But the idea of self-publishing didn't go away just because this implementation never developed. So what did happen?

Press X to Express Yourself

Who do you want to be? Any player who sits down in front of a game will ask themselves this within the first few minutes of starting. Some games dictate the answer: "Well, for the next several hours, you're a kickass soldier who lets nothing stand in his way," and whether this resonates with the player or not will determine, to some degree, whether they become immersed in the narrative. Other games offer a choice that gives the player some agency in what their experience will be: "Here, you can be a fighter, a thief, or a magic user; which do you like most?" We usually call these role-playing games, but the role the player has, beyond the way they approach the designed gameplay challenges, is usually not considered.

The games that do consider it, that ask the player "what do you really want your role to be in this story?", too often reduce the choice to a binary good-vs-evil delineation that just doesn't ring true to the choices people really make. Much was said about BioShock's moral choice system prior to its release, about how the decision to save or harvest the defenseless Little Sisters for their precious Adam would test a player's capacity for selfishness or selflessness, but in the end it was presented as an A-or-B button prompt that was only disturbing the first time you saw the canned animation, and it left no lasting impression. Games that can give a player a choice that feels meaningful, even outside the context of the game, are extremely rare, and when they successfully immerse a player in the decisions they ask them to make, those decisions will be the things the player talks about when they remember the experience of playing that game.

I'm going to talk about two games that had this effect on me and how they succeeded, and two others that, for me, stumbled in the attempt. I'll try to stay on topic, but truthfully, I could talk endlessly about these games, and I fear I still won't communicate the impression playing them left on me, or why I wish more games could affect players the way these did. But here goes, anyway.

Modern Architecture

Web development has undergone major shifts in the last five to ten years, both structurally and technologically. The days when making a webpage meant hosting some HTML files on a server are long gone. So are the days of a business employing a single web developer to run an enterprise-level website: mid-to-large companies now have internal development teams of at least a dozen engineers across various disciplines, or more often draw on outside development shops with experience producing high-quality deliverables. The classic role of development as a branch of IT is less relevant as web frameworks have taken on a larger scope, project management techniques have been redefined as creative processes, and development tools and resource usage have shifted radically.

The Semantic Web

The web utilizes a rapidly increasing number of programming languages, but HTML, the first and most basic building block of the web, is not one of them. Most developers would say, with some condemnation, that HTML is not a programming language, and while they are correct that it is not programming, it is a language. A programming language provides instructions using computer analogues of all the basic constructions of human language: nouns, verbs, descriptors, punctuation, and rules of grammar. A markup language (the last two letters of HTML, to give you a hint) is like a written language that consists solely of adjectives; it only describes the content of another language. To illustrate, HTML tags are the adjectives that go before, after, or between the English-language content that makes up a complete webpage. The tag <a> describes the following piece of text as a link, and the attribute href further describes where that link leads. So, for example, the line:

<a href="http://www.google.com">Clicking this text will take you to Google</a>

is HTML's way of ascribing purposeful intent to the text that goes on the webpage. The dictionary for HTML contains 109 adjectives in its most recent publication, and the job of every current browser is to read their meaning in as close to the same way as possible, though each browser infers the intent of those words slightly differently. However, unless your goal in life is to learn all the ins and outs of HTML, there is no reason to cover any more of these fundamentals.

Instead, I'd like to talk about the part of HTML that makes me enjoy my job: semantic markup. It's a purposeful effort by the people who use and design HTML to make their adjectives give better context to what they actually describe. For example, the tag <b> has been the method for describing text that should be bold since the very early days of HTML. However, <b> is poor vocabulary in the context of text, because it doesn't actually give meaning to the words it describes. If what you want to convey is the importance of the text, you should use the tag that describes it as <strong>. <strong> renders identically to <b>, but decouples the meaning from the presentation. After all, the text you want to stand out as strong may not actually appear bold in your design, so why describe it that way?
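As a quick illustration, here are the two tags side by side; the sentence itself is just filler of my own:

<p>Our hours changed: we are <b>closed</b> on Sundays.</p>
<p>Our hours changed: we are <strong>closed</strong> on Sundays.</p>

Both render the word in bold by default, but only the second tells the browser, and any other software reading the markup, that "closed" actually carries importance. Your stylesheet is then free to present that importance however the design calls for.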

The New Marketplace

Open-source champion Eric Raymond famously described his preferred method of software development as a bazaar: a marketplace of ideas where every person was free to contribute to and improve on a software project. The idea that many developers working on a problem, each with their own agenda and motivations, could produce a better product than a top-down approach was only starting to see adoption at the peak of the first internet boom. But the strength of the idea outlasted the projects and companies that were its early adopters. For those of us who work as web developers, it would be hard to imagine a day's work that didn't rely on tools hosted on GitHub. The transparent nature of HTML and JavaScript makes it easy to see the weaknesses and inefficiencies in our code, and we frequently have them pointed out to us. The bazaar is so pervasive that the concept no longer needs to describe the democratization of development practices the internet went through two decades ago. It has shifted to a new paradigm.

With the accumulation of wealth in a developing nation, the healthy open-air bazaars become supermarkets and malls. Web developers are now blessed with a wealth of tools and frameworks born out of the need to improve our working lives. Just as enough of us realize there must be a better way to structure our code or reach our users, we find competing brands, and brand champions, for the best way to solve a problem we had only just discovered existed. Having too many brands of laundry detergent to choose from is truly a first-world problem, but it underlies a stressful part of our jobs: our clients trust us to make the best decisions for them, yet most web developers don't feel like experts in their domain. For those of us who have been doing this since more-or-less the beginning, keeping up is like daily boot camp, and many of us only get through with the support of our peers at work or in the development community. For novices and those who work in isolation, it's like looking up at Mount Everest. In that situation, the only thing to do is take the first step, as I recently did.
