Lighthouse accessibility score is unreliable

Every so often somebody at work wants to adopt a service for a specific need their team or department has, and asks my team to measure how well it stacks up against our internal standards for inclusion.

Today I did it myself, which is unusual, and I started with accessibility. The sample page they gave us had the following issues:

- insufficient colour contrast
- skipped heading levels, with an <h5> as the first heading on the page
- missing landmarks
- non-descriptive link text

The Lighthouse accessibility score for that is 95, inside a lovely passmark-green circle.

The only things it picked up were the colour contrast and the skipped heading levels. In my view it should at least have highlighted the missing landmarks, the <h5> coming first, and the non-descriptive link text. Both the axe DevTools and ARC Toolkit browser extensions flagged them, so there's no reason Lighthouse couldn't. It puzzles me, because Lighthouse's accessibility audits are powered by axe-core and weighted by axe's user impact assessments, and landmark errors are in there, marked as moderate.
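
For illustration, here is a minimal sketch of the kind of markup that produces the findings Lighthouse missed (not the actual sample page; the content is made up):

```html
<!-- No <header>, <nav>, or <main> landmark anywhere on the page -->
<body>
  <div class="page">
    <!-- The first heading on the page is an <h5> -->
    <h5>Latest updates</h5>
    <p>Our quarterly report is now available.</p>
    <!-- Non-descriptive link text that means nothing out of context -->
    <a href="/reports/q3.pdf">Click here</a>
  </div>
</body>
```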

This points to a broader issue: automated accessibility testing is only useful for the lowest-hanging fruit, the fruit that gently brushes the ground on a breezy day. It's impossible for any automated tool to check whether an image is presentational or not, whether its alt text is appropriate, whether keyboard navigation works well, how a page will sound to a screen reader user, whether the focus trap in a custom dialog works properly, whether visual order matches DOM order, whether a calendar widget can be used by keyboard-only users, and much more.
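
To make the alt text point concrete: an alt attribute that is present but meaningless will typically sail through automated checks, because a tool can only verify that some text exists, not whether it actually describes the image. A made-up example:

```html
<!-- Typically passes automated checks: the alt attribute exists and is not empty.
     A human reviewer would flag it, because it describes nothing. -->
<img src="q3-results-chart.png" alt="chart">

<!-- What a human review would ask for instead -->
<img src="q3-results-chart.png" alt="Bar chart of quarterly revenue, with Q3 up on Q2">
```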

So take this post as a plea to developers to get a little better at accessibility every day. After all, accessible websites are better for everybody.
