Adventures in font loading

More and more of the sites I’m building recently are using webfonts, and some of these fonts are very heavy. I care about load time and page weight, so I set about finding ways to ensure I could use webfonts while minimising some of the associated problems, viz. FOUT and missing content.

I’m making some assumptions here that you may or may not agree with, so you have an early opportunity to get to the comments to tell me I’m wrong without having to read the whole post. Firstly, webfonts are a good thing. I like what they say and do. Secondly, consideration for people on low bandwidth is of vital importance. This is the web and it’s for everyone. Thirdly, content should be available as soon as possible. Fourthly, FOUT is a feature, not a bug.

Before I go on I just want to mention one thing. I work in a busy agency, so there isn’t a lot of time for experimentation. What this effectively means is that I have been trying different things out on each new website rather than trying out lots of things on one website or a standalone demo. My method for font loading is still evolving, I’m still learning and I’m sure there are plenty of things I haven’t thought of and plenty of mistakes in what I’m doing. Please let me know in the comments.

Do nothing

Before I started thinking too hard about webfonts I just used a plain @font-face declaration and included the font in the font stack of various selectors throughout my CSS.

For high bandwidth scenarios this is great. Nothing gets in the way of the fonts and they are usually on screen with no noticeable delay. However this approach really falls apart when you throttle bandwidth. The screen stays blank for a very long time. The type of sites I build have text in them so this is unacceptable.

Keep webfonts out of smaller viewports

The first thing I went for was to use a media query on a link element to keep the fonts out of small viewports.

<link rel="stylesheet" href="fonts.css" media="(min-width:20em)">

This however is an approach that shows I wasn’t thinking too hard about the problem and should have known better. There is absolutely no relationship between viewport size and bandwidth—sometimes people with laptops struggle to get a signal with a mifi, sometimes people with phones use fast wifi. There is no logic behind this approach.

Use Webfont Loader

WebFont Loader is a Google/Typekit collaboration that adds different classes to the html element depending on when fonts are available. It works with a range of font services and can be configured to work with self-hosted fonts, which is what I did.

There are two basic ways to use it. Firstly it can be a broad on/off switch, where the fallback font is displayed while the fonts are loading, and when the .wf-active class is added all the fonts update at once.

The second way to use it is to be more granular with individual font weights. The website I tried WebFont Loader on uses the Avenir family, and I could swap out the default for Avenir Light as soon as it was available, then if Avenir Book became available a few moments later, load it in. The script adds classes like wf-avenirbook-n4-active to give you this control.
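As a rough illustration, a custom-module configuration for self-hosted fonts might look something like this. The family names and CSS URL here are placeholders, not the real project's:

```javascript
// Sketch of a Web Font Loader config for self-hosted fonts.
// The families and URL are made up for the example.
var webFontConfig = {
    custom: {
        families: ['Avenir Light', 'Avenir Book'],
        urls: ['/css/fonts.css']
    },
    fontactive: function (familyName, fvd) {
        // fires per font as it becomes available,
        // e.g. when wf-avenirbook-n4-active is added
    }
};

// WebFont is the global added by the loader script
if (typeof WebFont !== 'undefined') {
    WebFont.load(webFontConfig);
}
```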

The idea behind this is to get each font showing in the browser as soon as possible, without having to wait for the last one and having one big flash of restyled text. It does seem smoother, less snappy.

I still wasn’t happy though. WebFont Loader is a right lump of JavaScript and glancing at it there is a lot of UA detection going on which I’m not comfortable with, especially when I don’t know exactly what it’s doing. In testing a plain page with just a heading and a couple of paragraphs I was getting load times in the 150-170ms range over the wired connection at work.

Another big problem was that when throttled there was a double FOUT. Perhaps I was doing it wrong, but it was really janky below 256kb/s.

Finally, and specific to the website I was working on, Avenir is a system font on iOS6 and above. It shouldn’t have to wait for class names to be added to the document.

Cut the mustard, stylesheet injection, Network Information API

That brings us up to now, and the website I’m working on at the minute. I’m using a combination of the BBC’s cutting the mustard, a modified version of Scott Jehl’s fonts.js, and the Network Information API as an enhancement for supporting browsers.

First of all I get rid of less capable browsers by testing for 'addEventListener' in win && 'localStorage' in win && 'querySelector' in doc. I, and more importantly the owner of the site I’m doing this on, am ok with this. YMMV.

If the browser cuts the mustard I have a function that injects a stylesheet link element with any specified href into the document head.
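The injection function is nothing fancy. A minimal sketch (the function name is mine, and the document is passed in rather than referenced globally, which also makes it easy to test):

```javascript
// Minimal stylesheet injector sketch; `injectStylesheet` is my own
// name for it, and `doc` stands in for the page's document.
function injectStylesheet(doc, href) {
    var link = doc.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    doc.getElementsByTagName('head')[0].appendChild(link);
    return link;
}
```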

The next thing is to test bandwidth using the Network Information API. Currently only older versions of Android have a useful version of the API, and it has always been non-standard. Nonetheless it allows me to not load the fonts over 2g and 3g. Again, this is a judgement call, but in my opinion the loading spinner goes on for too long over those connections and I’m happy to keep webfonts out of any older Androids that slip through the mustard cutting.

The code I’m using is in this gist.

I load the page with content in the fallback system font, check that we're in a decent browser, check whether we can detect a 2g or 3g connection, and if not, load the font CSS.
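In outline the logic looks something like this. The function names are mine, and the window, document and navigator objects are passed in as arguments:

```javascript
// Sketch of the decision flow described above. The numeric values
// cover older Androids, which return numbers from the Network
// Information API (3 = 2g, 4 = 3g).
function cutsTheMustard(win, doc) {
    return 'addEventListener' in win &&
        'localStorage' in win &&
        'querySelector' in doc;
}

function connectionIsSlow(nav) {
    var connection = nav.connection || {};
    var type = connection.type;
    return type === '2g' || type === '3g' || type === 3 || type === 4;
}

// In the page it would be used along these lines:
// if (cutsTheMustard(window, document) && !connectionIsSlow(window.navigator)) {
//     load the font CSS here
// }
```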

This is the best way I’ve found so far to ensure that content is there from the earliest possible point, if FOUT is perceptible it’s as soon as possible, and where possible bandwidth is directly taken into account.

In an earlier version of the loading script I also estimated bandwidth for browsers that don’t support the Network Information API by calculating the length of time it takes an image to download. It was probably worth a try just to see how bad an idea it is, but it definitely won’t make it into production. Calculating bandwidth is not a problem designers and developers should concern themselves with in my opinion.

On a more practical level, without the image download page load was usually between 70ms and 100ms. The image download added about 50ms on to that. Another problem is that accuracy increases with image size, indeed small images produce wildly inaccurate results. Waiting for an image to download before deciding on bandwidth creates a paradox.

Summary

There are two main opinions I have formed around font loading. The first is that the weight of fonts should be considered as part of the overall weight of a page. Jeremy Keith, Chris Coyier, Tim Kadlec and Brad Frost have all talked about performance budgets, where you set targets for page weight and HTTP requests, balancing the different components against each other. I think this is a good idea: it gives us well defined and measurable constraints to design against.

The other main point is that I think bandwidth is a problem for browsers to solve. Looking back now I realise piddling about with image downloads had me on a hiding to nothing. I hope we have a reliable way to measure bandwidth soon, although I appreciate it’s a very difficult problem to solve.

Please do chip in with a comment here or on twitter. I’m nowhere near the end of this adventure so I’m really keen to hear from other people how they’re tackling this.

How callback functions in JavaScript work

I’m building a thing that’s not quite finished yet, and it uses the geolocation API to get latitude and longitude. I wanted to put them into an object and use it for calculations.

My first attempt was:


var loc = {
    get_latlong: function() {
        var self = this,
            update_loc = function(position) {
                self.latitude = position.coords.latitude;
                self.longitude = position.coords.longitude;
            };

        win.navigator.geolocation.getCurrentPosition(update_loc);
    }
};

What I thought this would do was add two properties — latitude and longitude — to the loc object that I could pick up and run with straight away.

It seemed to work because console.log(loc) gave me the object with the two properties added (and their values), but console.log(loc.latitude) was undefined.

I explained my problem on Stack Overflow and got a good answer from Oleg that got me on the road to solving my problem.

It was CBroe’s comment on my question that led me to understand how things were working though.

The getCurrentPosition method is asynchronous, and for good reason. Looking up a location can take some time, so we don’t want to hold everything up while that happens. I was asking a question that geolocation hadn’t answered yet, which is why it always came back undefined.

What I needed was some way to say “get the location then do the calculations,” and that is a callback function.

Here’s a basic use of a callback function:


// define the function
var some_function = function(arg, callback) {
    // do something here e.g.
    var square = arg * arg;

    callback(square);
};

// call the function
some_function(5, function(param) {
    // do what it says in some_function definition, then do this
    console.log(param);
});

Running that will log 25 to the console.

The easiest way to think of it is to see it in two steps. First of all some_function is called to do its thing on the number 5, in this case square it.

Next up is the callback function. some_function is expecting a second argument, and it’s expecting it to be a function with one parameter — callback(square).

All we need to do is say what the callback does, in this case log the argument it received, square, to the console.

Another way of putting it: the function call is saying “run some_function and do what you have to do, then when you’re finished run the anonymous function that has been passed as the second argument, with square as an argument.”

In my example I need to wait for the location information to be returned by the geolocation API then pass the loc object to the callback function to perform calculations on the latitude and longitude.

It’s subject to change, but right now it looks like:


var loc = {
    get_latlong: function(callback) {
        var self = this,
            update_loc = function(position) {
                self.latitude = position.coords.latitude;
                self.longitude = position.coords.longitude;

                callback(self);
        };

        win.navigator.geolocation.getCurrentPosition(update_loc);
    }
};

loc.get_latlong(function(loc) {
    // loc.latitude and loc.longitude are now available
});

Here are a couple of articles I found useful when trying to get my head round this:

Review of jQuery Hotshot by Dan Wellman

JavaScript is something I have been trying to get better at for a couple of years now, and I’ve been concentrating on writing vanilla JS while paying very little attention to libraries such as jQuery. However I realise jQuery isn’t going to go away. I will have to deal with it in other people’s code and I still use it to cover my ass for things like AJAX, where I wouldn’t be confident that I could write good cross browser vanilla JS.

When Dan Wellman asked on twitter if anyone would like to review his latest book on jQuery I took the opportunity for selfish reasons as much as anything else. If I could take a brief interlude from vanilla JS to brush up on jQuery, why not.

jQuery Hotshot is nothing like what I was expecting, and in a good way. There is only the briefest of introductions to how jQuery works then straight into a tour de force of some pretty impressive real world examples of what can be done with jQuery. From a simple game, through UI enhancements, advanced Google Maps API developments, jQuery Mobile, the HTML5 file API and plenty more.

I really like Dan’s writing style. To me it seems relaxed and comfortable and I was able to follow along with the code and explanations without any bother.

It’s obvious Dan knows what he’s talking about and the first few pages of the first chapter will convince you if you have your doubts about his expertise. Absolutely top drawer.

The thing that impressed me most about the book was the constant refrain of best practices in the background, and not just with regard to jQuery. Yes, there was a chapter dedicated to the best way to write a jQuery plugin — and if you write a lot of jQuery plugins this chapter might just be worth the price of the book in itself — but Dan also talks about good practice when writing CSS (with a nod to CSS Lint), he points to articles on general JavaScript development, and gives a nod to accessibility.

One thing I wasn’t so keen on was the spy astronaut headings in each chapter. I thought they went just that little bit too far past fun and quirky into annoying and distracting, but they’re only headings so didn’t get in my way too much and may well make the book more readable for other people.

Another thing that wasn’t so hot was that the download to accompany the book didn’t work for me at all. I tried a few times over a couple of weeks in different browsers, but no dice unfortunately. Hopefully the publishers will have that sorted soon.

Overall I would definitely recommend this book for anyone looking to use jQuery. It’s a real eye-opener and I’d be surprised if you didn’t learn plenty about the library and its capabilities, and indeed plenty about web development in general. If you would like to buy jQuery Hotshot you can get it on Amazon at http://www.amazon.co.uk/jQuery-Hotshot-ebook/dp/B00BFQ61GU/ or the publisher Packt at http://www.packtpub.com/jquery-hotshot/book (those aren’t affiliate links).

Finally I’d like to point out that apart from a free copy of the book as an ebook I didn’t get paid for this review. It’s my honest opinion of the book and my only connection to Dan is the very occasional conversation in the public timeline on twitter.

Using AJAX with WordPress for conditional loading

AJAX has become a big part of responsive design for me. I use it to load secondary content into larger viewports to make it easier to find/view than if it is behind a small link in the footer or somewhere like that.

In WordPress it’s really easy to do, but this is one of those situations where I couldn’t find a definitive guide to how it’s done, so I’ve written this that will hopefully fix that.

Two steps:

  1. Build a PHP function that creates whatever it is you need.
  2. Build an AJAX request that goes and gets it.

WordPress handles everything else using /wp-admin/admin-ajax.php

As an example let’s say we want the titles of the latest five posts to show up in a sidebar on each single post. It’s not vital content and can easily be accessed by going to the top level page that shows the latest five, but it might be nice to have for some people and we have the room.

The first thing to do is create a PHP function in our theme’s functions.php with a WordPress loop that creates the list.


function get_latest() {
	$args = array(
		'posts_per_page'  => 5,
		'category'        => 1,
	);

	$posts_array = get_posts($args);

	echo '<nav role="navigation">';

	echo '<h2>Latest News</h2>';

	echo '<ul>';

	foreach ($posts_array as $post):
		setup_postdata($post);
		echo '<li><a class="side-section__link" href="' . esc_url(get_page_link($post->ID)) . '">' . esc_html($post->post_title) . '</a></li>';
	endforeach;

	wp_reset_postdata();

	echo '</ul>';

	echo '</nav>';

	die();
}
// creating Ajax call for WordPress
add_action( 'wp_ajax_nopriv_get_latest', 'get_latest' );
add_action( 'wp_ajax_get_latest', 'get_latest' );

It’s a straightforward WordPress function — it could be anything at all, even something as simple as echo '<a href="https://twitter.com/intent/user?user_id=123456789">follow me on twitter</a>';. There are two things worth noting however.

The die() at the end is necessary to stop any further PHP processing in /wp-admin/admin-ajax.php, which would otherwise output a 0 after our list.

The other thing to note is the block of two add_action() functions. This will not work without them.

Now to the front end. We need to create a JavaScript function that calls /wp-admin/admin-ajax.php and tells it which PHP function to run.


jQuery.ajax({
	type: 'POST',
	url: '/wp-admin/admin-ajax.php',
	data: {
		action: 'get_latest' // the PHP function to run
	},
	success: function(data, textStatus, XMLHttpRequest) {
		jQuery('#latest-news').html(''); // empty an element
		jQuery('#latest-news').append(data); // put our list of links into it
	},
	error: function(XMLHttpRequest, textStatus, errorThrown) {
		if(typeof console === "undefined") {
			console = {
				log: function() { },
				debug: function() { }
			};
		}
		if (XMLHttpRequest.status == 404) {
			console.log('Element not found.');
		} else {
			console.log('Error: ' + errorThrown);
		}
	}
});

All we need to do in here is tell the function which PHP function to run and where to put the output, in this case in a container element with id latest-news.

You can wrap the jQuery function in a matchMedia test or use a technique that uses JavaScript to test a CSS property value that only applies to larger viewports.
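For example, a matchMedia gate might look like this. The function name and the 50em breakpoint are just for illustration, and the window object is passed in for testability:

```javascript
// Sketch of a viewport test to wrap the AJAX call in.
function isLargeViewport(win) {
    return !!(win.matchMedia && win.matchMedia('(min-width: 50em)').matches);
}

// if (isLargeViewport(window)) { run the jQuery.ajax call }
```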

That’s all there is to it. It’s quick and easy but a reliance on jQuery might not be your bag. However AJAX is one thing jQuery is very good at getting working cross browser painlessly.

Flexbox vertical ordering

The only times I’ve had cause to use flexbox in anger is for content re-ordering, or as Jordan Moore more eloquently puts it, content choreography. Even at that I’ve only ever used vertical re-ordering and that’s all I’ll be talking about in this post. Other more comprehensive resources are listed at the end.

A project I am currently working on is a large content site with gazillions of pages and sub pages. We decided to keep the main navigation simple (6 or so items) and display a list of sub pages in each category.

In smaller viewports the list is in a block just before the page footer and on larger viewports we decided to move it under the main navigation at the top of the page and visually tie it to its parent menu item using colour.

Demo one is a page with the basic markup and style that roughly reflects the project before flexbox is used, and for clarity I’ve left out the page header and footer.

From top to bottom it’s:

  1. a div containing the main content block and a secondary block
  2. another secondary block
  3. the block containing the list of sub pages.

It’s these three blocks that we’ll be reordering using flexbox.

The web designer’s web designer Chris Coyier recently wrote about the best way to get flexbox working in as many browsers as possible so we’ll use that as the baseline for our vertical ordering.

The boxes that are to be re-ordered need to be wrapped in a container with the display set to flex.


.l-flex {
	display: -moz-box;
	display: -webkit-box;
	display: -webkit-flex;
	display: -ms-flexbox;
	display: flex;
}

The default display is horizontal so to change that we’ll add the declarations to make it vertical.


.l-flex {
	display: -moz-box;
	display: -webkit-box;
	display: -webkit-flex;
	display: -ms-flexbox;
	display: flex;
	-moz-box-orient: vertical;
	-webkit-box-orient: vertical;
	-webkit-flex-flow: column;
	-ms-flex-direction: column;
	flex-flow: column;
}

Now to reorder the boxes we just need to add a declaration for the order of each box.


.l-flex-1 {
	-moz-box-ordinal-group: 1;
	-webkit-box-ordinal-group: 1;
	-webkit-order: 1;
	-ms-flex-order: 1;
	order: 1;
}

.l-flex-2 {
	-moz-box-ordinal-group: 2;
	-webkit-box-ordinal-group: 2;
	-webkit-order: 2;
	-ms-flex-order: 2;
	order: 2;
}

.l-flex-3 {
	-moz-box-ordinal-group: 3;
	-webkit-box-ordinal-group: 3;
	-webkit-order: 3;
	-ms-flex-order: 3;
	order: 3;
}

Demo two has the flexbox included and uses media queries so it only happens above 50em. Now the list of sub pages is displayed at the top of the document.

One thing to note at this point is that Firefox doesn’t support percentage widths on the ordered boxes. Demo three has 50% width declared on all the boxes and it has no effect in Firefox (screenshot). Your options are to either add an extra element inside the box and give it a percentage width, or remove the -moz- prefixes and serve the less enhanced layout. This is a three year old bug, which could mean it’s not high priority or could mean it’s close to the top of the fix-me pile. I have no idea how these things work.

Demo four has the main content floated to the left and the first aside floated to the right, a pattern I use in this project.

This is fine in all browsers except Chrome, which goes completely buck mad with disappearing content, overlapping content and huge spaces (screenshot). Somehow the floats in one box throw grenades all over the rest of the page.

Fortunately there are two ways to prevent this.

The easiest method is to clear the floats using overflow:auto or overflow:hidden as in demo five. The clearfix method currently in HTML5 Boilerplate doesn’t help.

The second way is to replace the floats with inline-block elements, as shown in demo six.

Opera, IE10 and Safari display things as intended with no surprises, Opera being the only one that works without prefixes. That makes flexbox one area of web standards that will take a step back if WebKit don’t squash the bugs and un-prefix before Opera switches rendering engine.

For me, a side effect of this brief foray into flexbox is an extra bit of weight for the argument in favour of vendor prefixes. I’m thankful flexbox is prefixed in Gecko and WebKit as it is buggy. That pretty much explains the purpose of vendor prefixes: they’re experimental and need to be thoroughly tested before the prefix comes off.

I am ambivalent though. I use vendor prefixes occasionally and there are plenty that apparently don’t have any bugs but remain prefixed.

When I was working on the project that spurred me to write this I initially chose to only use flexbox in Opera and IE10. Firefox and Chrome were broken, time was short, the prototype needed to be sent to the client, and I didn’t figure out the fix for Chrome until writing this post and creating the stripped back demos. After giving it some thought I settled for changing display:flex to display:table;caption-side:top and changing order:3 to display:table-caption which works all the way back to IE8. If you don’t know what I’m talking about Jeremy Keith explains it better than I ever could in his Re-tabulate post.

These are just a few things I have encountered in a narrow use case of a small aspect of flexbox. My favourite comprehensive article on the subject is Chris Mills’s opus Flexbox — fast track to layout nirvana? in which he smashes Betteridge’s Law into tiny pieces, and I recommend you read it. When flexbox is widely supported and bug free it will revolutionise web layout.

All the demos are on Github.

Resources

Notes on the classList API

For me the classList API is one of the most useful parts of HTML5. Manipulating classes is an everyday part of JavaScript on the web, but it was a cowpath that required a sure foot to tread before getting the full treatment from the pavers in the form of classList.

The basic syntax is element.classList.method where method is one of the following (strictly speaking length is a property rather than a method):

  • add
  • remove
  • toggle
  • contains
  • item
  • length
  • toString

classList.add()

If all you need to do is add a class to an element use classList.add(). Let’s say you have a div with an id of box and you want to add a class of “highlight”; it would work like so:


var box = document.getElementById('box');
box.classList.add('highlight');

The class is now added in the DOM and declarations in CSS for .highlight will be applied to the element.

More classes can be added the same way, so you can go ahead with box.classList.add('highlight--sidebar') or box.classList.add('l-wide') or any other classes you like.

You can only add one class per call, so box.classList.add('class-1','class-2') doesn’t add both classes; only the first one makes it in.
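If you do need several classes at once, a small loop does the job. The helper name here is my own:

```javascript
// Hypothetical helper: add several classes one call at a time.
function addClasses(el, classNames) {
    for (var i = 0; i < classNames.length; i += 1) {
        el.classList.add(classNames[i]);
    }
}

// Usage: addClasses(box, ['class-1', 'class-2']);
```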

classList.remove()

If we want to remove a class from the list it’s just as simple:

box.classList.remove('highlight');

The class has been removed in the DOM and any side effects of that will be applied.

classList.toggle()

The toggle method is useful for things like show/hide interfaces where the same action has opposite effects.

If there’s a button in our markup we can listen for clicks on it and use it to switch the highlight class on and off.


var button = document.querySelector('button'),
    toggleBox = function() { box.classList.toggle('highlight'); };

button.addEventListener('click', toggleBox, false);

classList.contains()

If we need to check whether or not a particular class name is in the list we can use classList.contains() to return true or false.

It could be used to check if an image in a gallery has the class that makes it bigger than the rest or to check if a div has the class that makes it visually hidden for example.


if (box.classList.contains('highlight')) {
    // do something
} else {
    // do something else
}

classList.item()

This method returns the value of the item in the list at the index passed as an argument. That probably doesn’t make sense, so I’ll illustrate it with an example. If the HTML is <div class="highlight highlight--sidebar l-wide"> the following are all true:


box.classList.item(0) === 'highlight';
box.classList.item(1) === 'highlight--sidebar';
box.classList.item(2) === 'l-wide';

The number passed represents the position of the class name in the list, starting to count from 0.

Again it’s dead simple, although I have caught myself using square brackets as if it’s a normal array.
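Where item() earns its keep is looping over the whole list. A made-up helper to show the idea:

```javascript
// Hypothetical helper: collect every class name using item().
function listClasses(el) {
    var names = [], i;
    for (i = 0; i < el.classList.length; i += 1) {
        names.push(el.classList.item(i));
    }
    return names;
}
```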

classList.length

This property does what it says on the tin and will be familiar to you if you have done any JavaScript before. It counts and returns how many classes are in the list.

In the example above console.log(box.classList.length) returns 3 as we currently have 3 class names on the element, highlight, highlight--sidebar and l-wide.

classList.toString()

If you ever need to turn the list of classes into a string use this method. console.log(box.classList.toString()) will log "highlight highlight--sidebar l-wide", and typeof box.classList.toString() returns "string".

Personally I find I’m using the classList API on pretty much every project now. If you’re not too familiar or comfortable with JavaScript APIs its simplicity makes it a great place to start learning and its usefulness means you’ll probably find a real world application on your current project.

Browser support is good and there are a couple of excellent polyfills I know of if you need to support older browsers. Bear in mind that JavaScript should be an enhancement in web pages so you should have a fallback in HTML and CSS that suffices for older browsers. Ask yourself do you need a polyfill before doing a copy/paste.

Resources

Quick git tip: stash and stash pop

I’ll be moving these quick tips into their own section of the site when I get the time/can be bothered, but until then they’ll be showing up here.

Here’s the scenario: you work on a branch called something like offline, commit the work, then spot an issue that can be fixed with CSS. Acting like you’re a magpie who’s seen something shiny you forget you’re on the offline branch and save a bunch of work.

Now when you go to commit you’re in a branch that doesn’t make sense for the work you’re doing, and if you try to checkout a different branch you can’t.

Here’s what to do:

  1. git stash
  2. git checkout dev
  3. git checkout -b css-fix
  4. git stash pop

And you’re in a more appropriate branch with your CSS changes unstaged.

What the stash command does is take your uncommitted changes out of the working directory and keep them safe until you ask for them back again with stash pop. Simple and very useful.

git stash --help has much more information if you want a closer look.

Why I think we shouldn’t use CSS viewport in IE10 for now

Together with my colleague Toby I’ve been looking at the problems discovered and highlighted by Matt Stow regarding IE10 and responsive design.

The story goes back to Tim Kadlec’s post on IE10 snap mode, a feature of desktop (tablet as well, anyone?) IE10 that allows you to drag the browser window to the left or right of the screen and it snaps to 320px wide.

From our point of view as designers and developers it was a bit of a bummer because websites are scaled when snap mode is activated, whether they are responsive or not. Basically they look like non-responsive sites do on a phone or iPod.

What Tim discovered was that you can add @-ms-viewport{width:device-width} to your CSS and that fixes it. For some reason <meta name="viewport" content="width=device-width"> doesn’t work, but the CSS viewport does.

So anyway, everything was fine until Matt noticed that sites using @-ms-viewport{width:device-width} look terrible on Windows Phone 8.

What is happening is that the CSS viewport rule is causing Mobile IE10 to set the viewport to device pixels instead of CSS pixels and everything is a lot smaller than it should be. On the Nokia Lumia 920 this sets the viewport to 768 pixels but with the meta viewport only it’s 320 pixels.

Some screenshots would best illustrate the problem. I don’t have IE10 desktop but I do have a Nokia Lumia 920 running Windows Phone 8 so we can look at it, and you’ll have to take my word for it on the desktop browser.

The BBC news site uses @-ms-viewport{width:device-width} in http://static.bbci.co.uk/news/1.4.3-440/stylesheets/core.css (you’ll need to do Cmd/Ctrl + F as it’s minified CSS) and that fixes the desktop snap mode problem. However on the phone it looks pretty rough as you can see.

Screenshot of BBC News on a Lumia 920
The BBC news site on a Lumia 920. Fiddly.

The BBC home page and sport site on the other hand don’t use the CSS viewport so they look good and are easy to read like they are on any other small device.

Screenshot of BBC Sport on a Lumia 920
The BBC Sport site on a Lumia 920
Screenshot of the BBC home page on a Lumia 920
The BBC home page on a Lumia 920

Another site that is affected by this is Northern Ireland animal charity the USPCA (full disclosure: I did a lot of work on this site in my last job, including adding @-ms-viewport to the CSS). As you can see it looks less than optimal on the 920.

Screenshot of the USPCA home page on a Lumia 920
The USPCA home page on a Lumia 920

So what to do about all this?

Should we be thinking mobile first and figuring out a way of adding the CSS viewport on desktop only? I think we probably should, but not because of blind adherence to “mobile first”.

There may already be more desktop IE10 users than Windows Phone 8 users so it’s not because of numbers. The thing is, the vast majority of sites out there aren’t responsive, and the vast majority of responsive sites in all likelihood don’t have @-ms-viewport, so chances are most times IE10 users use snap mode they’ll wonder why they bothered. Mobile IE10 users on the other hand don’t have the choice. We can either give them an optimal layout or we can make everything look half the size it’s supposed to.

The solution Microsoft recommend is to wrap the viewport in a media query like so:


@media (max-width:25em) {
    @-ms-viewport {
        width: 320px;
    }
}

I don’t think that’s sustainable for two reasons. Firstly, how do we pick our breakpoint? It doesn’t matter if it’s in pixels or ems; if a device comes out that is outside the max-width we’re back to square one.

The second reason is the other side of the same coin. Device sizes are a sliding scale now, not a few fixed sizes from 320 and up, so when a device with a screen size around 6 inches with more than 320 CSS pixels is released we’re changing the media query to match and creating a new @viewport rule. It’s a bit like keeping a database of UA strings up to date – always playing catch up and potentially missing some devices.

For those reasons my preference is to leave out @-ms-viewport altogether. By all means use @-o-viewport as it behaves like we would expect, but until someone figures out a way to detect the features of IE10 desktop that will enable us to identify its capability to support CSS viewport I will be omitting the IE version.

If you’re really stuck use a UA sniff, but be aware it’s not a good long-term solution. The general idea is a small script in your language of choice that checks the UA string for MSIE, makes sure IEMobile isn’t in there, and then adds the CSS.
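As a rough illustration only, the UA check described above might look something like this (a hypothetical sketch, not a recommendation – the function name and the decision to inject a style element are my own):

```javascript
// Hypothetical sketch of the UA sniff described above: true only for
// desktop IE10, i.e. MSIE is present but IEMobile is not.
function isDesktopIE10(ua) {
  return ua.indexOf("MSIE 10") !== -1 && ua.indexOf("IEMobile") === -1;
}

// In the browser we could then add the @-ms-viewport rule for desktop
// IE10 only (guarded so the sketch is safe outside a browser too).
if (typeof document !== "undefined" && isDesktopIE10(navigator.userAgent)) {
  var style = document.createElement("style");
  style.textContent = "@-ms-viewport { width: device-width; }";
  document.head.appendChild(style);
}
```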

Perhaps the wider lesson is that we should avoid using vendor prefixes, but that’s a whole other bag of mad snakes.

I’d love to hear what you think about all that, so please leave a comment or ping me on twitter.

Credits

Toby Osbourn for saying things that make me thinky.

A quick look at the Network Information API

UPDATE: the W3C working draft has been updated and the categories (unknown, ethernet, wifi, 2g, 3g, 4g and none) are gone. For now it’s bandwidth in megabytes per second as detailed below, but doubts have been expressed about the viability of that, with categories of very-slow, slow, fast and very-fast mooted as a possible replacement.

If you were to ask developers with an interest in mobile today what one feature they would like above all others, there’s a good chance a lot of them would say bandwidth detection.

The Network Information API represents the first steps down that road, and there is a working draft at the W3C dated June 2011. As we can see from Maximiliano Firtman‘s excellent mobile support tables, it is supported on Android 2.2 and up, and on the Kindle Fire’s Silk browser.

As APIs go it couldn’t be much simpler — we just get the value of navigator.connection.type which will be one of unknown, ethernet, wifi, 2g, 3g, 4g and none. In practice Android deviates from the specification and returns 0, 1, 2, 3 or 4. (I don’t know what Silk does, but it’s a custom Android so it’s as likely as not that it’s the same).

Value    Connection type
0        UNKNOWN
1        ETHERNET
2        WIFI
3        CELL_2G
4        CELL_3G
Fig. 1 The values returned by navigator.connection.type and their corresponding network types in the Android BOM.
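For illustration, Android’s numeric values could be mapped back to the draft’s names with a tiny helper (a hypothetical sketch; the function name is my own):

```javascript
// Map Android's numeric navigator.connection.type values (Fig. 1) back
// to the names used in the W3C working draft. Anything out of range is
// treated as unknown.
function connectionTypeName(value) {
  var names = ["unknown", "ethernet", "wifi", "2g", "3g"];
  return names[value] || "unknown";
}
```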

So a simple alert(navigator.connection.type); will give us something like the screenshot shown in figure 2. Demo (Android only)

A screenshot from an Android device showing an alert box with the number 2
Fig. 2 An Android device on a wifi connection.

That’s the current version of the Network Information API as it exists in the wild today, and although not totally useless it doesn’t give us much to go on. We can really only make inferences from CELL_2G and NONE. We know 2g is definitely slow — ethernet, wifi and 3g can be fast or slow. As far as NONE is concerned we could maybe listen for clicks on links/buttons and present our own “Network down” message instead of a browser or router error page if there is no network.
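The “Network down” idea above might be sketched like this (hypothetical; it uses the draft’s string value "none" rather than Android’s numeric scheme, and the message wording is mine):

```javascript
// True when the connection type reports no network at all. The working
// draft uses the string "none"; Android's numeric values aren't handled
// in this sketch.
function isOffline(connectionType) {
  return connectionType === "none";
}

// In the browser, intercept link clicks and show our own message rather
// than letting the user hit a browser or router error page.
if (typeof document !== "undefined" && navigator.connection) {
  document.addEventListener("click", function (e) {
    if (e.target.tagName === "A" && isOffline(navigator.connection.type)) {
      e.preventDefault();
      alert("Network down. Please try again when you're back online.");
    }
  }, true);
}
```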

Personally, I wouldn’t be comfortable drawing any conclusions about network speed if the returned value represented ethernet, wifi or 3g but I might consider something like:


if(navigator.connection && navigator.connection.type !== 3){ //not 2g
    document.write('<link rel="stylesheet" href="hi-res-backgrounds.css" media="only screen and (-webkit-min-device-pixel-ratio:1.5),only screen and (min--moz-device-pixel-ratio:1.5),only screen and (-o-device-pixel-ratio:3/2),only screen and (min-device-pixel-ratio:1.5), only screen and (min-resolution:1.5dppx)">');
}

Faced with developer apathy and little interest from vendors a new direction was sought, and an Editor’s Draft is now up at the W3C which looks more promising.

Like the old version it uses the navigator.connection object but the type attribute is gone. Instead we have the rather exciting bandwidth and be-nice-to-your-users metered attributes.

There is a third attribute, onchange, an event handler that fires for change (when the bandwidth changes), online and offline events.

Reading navigator.connection.bandwidth returns the current bandwidth in megabytes per second. (Note how this differs from how ISPs report their bandwidth achievements, in megabits per second.) It is specified to return Infinity if the bandwidth is unknown, and 0 if there is no connection.

The metered attribute is a boolean that is true if the user is on something like a pre-pay or capped data plan. The spec suggests that the information necessary to know if metered is true or false should come from the ISP, carrier or the user.

There are a couple of examples using the change event in the draft spec: one to update navigator.connection.bandwidth in the console as it changes, the other that uses it in conjunction with the metered attribute to prevent automatic polling.
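The polling idea might be sketched like this (a hedged sketch in the spirit of the spec’s example, not the spec’s actual code; the function name and thresholds are mine):

```javascript
// Decide whether automatic background polling is appropriate, given a
// connection object shaped like the draft's navigator.connection, with
// bandwidth (MB/s) and metered attributes.
function shouldPoll(connection) {
  if (!connection) return true;          // API unavailable: poll as normal
  if (connection.metered) return false;  // don't burn through a capped plan
  return connection.bandwidth !== 0;     // 0 means no connection at all
}
```

A change listener could then re-run this check whenever the bandwidth changes, starting or stopping the polling timer accordingly.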

On to implementation. Unsurprisingly it’s even less widespread than type. Mozilla have a prefixed version using mozConnection that’s in Firefox 12 and up, but I’m getting fairly useless results in Firefox 13 on Windows 7. On my home wifi the bandwidth is reported as Infinity, which according to the spec means it’s unknown. I can assure you the kids’ Buzz Lightyear toys are far closer to infinity than my home broadband will ever be.

WebKit also has a prefixed implementation, webkitConnection, but it’s undefined in the Canary and Chromium builds I’ve tested.

Either way I’ve put up a demo using the Mozilla and WebKit prefixed versions (run in Firefox with the console open).
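The prefixed versions can be normalised with something like this (a hypothetical helper; only the attribute names mozConnection, webkitConnection and connection come from the implementations mentioned above):

```javascript
// Pick up whichever connection object the browser provides:
// mozConnection in Firefox 12+, webkitConnection in WebKit builds,
// or the unprefixed name if it ever ships.
function getConnection(nav) {
  return nav.mozConnection || nav.webkitConnection || nav.connection || null;
}
```

In the browser we’d call it as `var connection = getConnection(navigator);` and then read bandwidth and metered from whatever comes back, if anything.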

The fact that Mozilla and WebKit are quick off the mark already has them ahead of IE and Opera. It could well be in the next version of Chrome (don’t hold your breath for Safari) and I’m looking forward to making good use of it in production.

I’m glad the type attribute isn’t being used any more — it’s likely to be misunderstood and misused. Misunderstood in that some devs may make assumptions about network speed based on the network type; misused in that some devs may make assumptions about users’ context.

Nevertheless it’s still going to be hard to use bandwidth sensibly. Here are a few potential problems I can think of.

  • There will be a lot of judgement calls to make e.g. where do we draw the line between fast and slow? Can you tell if your connection drops below 1MB/s? I know I would struggle with that.
  • Even though we (probably) won’t be using type there is still room for baseless assumptions. For example a dev may decide that people with a connection below a particular level don’t need certain content.
  • The point in time at which bandwidth is returned on a fluctuating network is very important and may be totally inappropriate for the majority of the session.
  • What would people think if they saw an interesting (but not essential) photo on a news site then returned to the story later to find the photo cropped or absent because the Network Information API was used to adjust image downloads over a slow connection?

There are probably many other issues that I haven’t thought of but I’m sure the benefits will outweigh them, and I’m looking forward to working out the best approaches and seeing what sort of consensus the community arrives at when the spec stabilizes.

Credits

  • Jordan Moore for piquing my interest.
  • Mathias Bynens for linking to obscure documents about WebKit that I would have gone the rest of my natural without ever knowing about.
  • Robin Berjon for answering my questions on twitter.

Form with two submit buttons using HTML5

Let’s say we have a registration form with the choice to join the site, service or whatever as a free or paid member.

We can set up a simple form like so:

<form id="register" action="register.php">
    <label for="username">Username</label>
    <input id="username" name="username" type="text" pattern="[A-Za-z]{0,20}[0-9]{0,2}" placeholder="No twitter spamhandles" required>

    <label for="password">Password</label>
    <input id="password" name="password" type="password" required>

    <fieldset id="membership-type">
        <label for="free">Free</label>
        <input id="free" type="radio" name="membership" value="free">

        <label for="paid">Paid</label>
        <input id="paid" type="radio" name="membership" value="paid">
    </fieldset>

    <input id="submit" type="submit" value="Join">
</form>

All fairly straightforward and some nice new HTML5 attributes to help the user along the way.

We can however help the user even further by removing the click where they select the type of membership they want, and progressively enhancing the form using a second submit button with the formaction attribute that was introduced with HTML5. The formaction attribute allows us to override the default action attribute and submit the form to a different script on the server.

<form>
    <!-- form controls -->
    <input type="submit" value="Join Free" formaction="free.php">
    <input type="submit" value="Join Premium" formaction="premium.php">
</form>

Browser support isn’t great so we can’t just dive in and add the new button. We will have to use feature detection and insert the element using JavaScript in browsers that support formaction, leaving non-supporting browsers and non-JavaScript users with the original (perfectly usable) form.

Before we take a look at the JavaScript, a warning: I am crap at JavaScript. Ergo I would appreciate any feedback from folk who know more about it and would advise those who aren’t too sure to read any comments that may be added. So onwards. The first thing we do is a feature detection:

var supportsFormaction = function() {
    var input = document.createElement("input"); // create an input element in memory
    return "formAction" in input; // true if the browser supports the formaction attribute
};

In a production setting it would be better to write a more general supportsAttribute function and pass in arguments for the element and the attribute, but what we have here will do for this example.
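Such a generalised helper might look like this (a sketch; the optional doc parameter is only there so it’s easy to exercise outside a browser):

```javascript
// Generalised attribute support check: create the element and see if the
// corresponding DOM property exists on it. The doc parameter lets the
// sketch be tested with a stand-in document object.
function supportsAttribute(elementName, attributeName, doc) {
  var element = (doc || document).createElement(elementName);
  return attributeName in element;
}
```

With that in place, the detect above becomes `supportsAttribute("input", "formAction")`.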

Next it’s a simple matter of manipulating the DOM to accommodate our new buttons.

if(supportsFormaction()) {
    var premium = document.createElement("input"),
        free = document.getElementById("submit"),
        registration_form = document.getElementById("register");

    // set up the premium submit button
    premium.setAttribute("formaction","premium.php");
    premium.type = "submit";
    premium.value = "Join Premium";

    registration_form.appendChild(premium); // add the new button to the form

    // change the original submit button into a free submit button
    free.setAttribute("formaction","free.php");
    free.value = "Join Free";

    // hide the radio buttons
    document.getElementById("membership-type").style.display = "none";
}

There’s a gist on github of the completed example and a working demo on this site.

I tested on all major desktop browsers plus Opera Mini, Opera Mobile, Android, Mobile Safari, Symbian, Blackberry, Fennec and Windows Phone emulator, and didn’t get any false positives on the feature detection. If you find a browser that applies the DOM manipulation but submits to the regular script please let me know in the comments.