Automated Testing’s Strength Comes From Efficiency

Today, via Facebook’s “Memories” feature, my good friend and mentor Mike Paciello shared this tweet from Steve Faulkner:

These thoughts mirror my own and, in my opinion, may even be a tad optimistic.  Regardless, “complete coverage” from automated testing of any kind is a bad goal to have. Whether the domain is SEO, security, or accessibility, trusting a tool to do all the work for you is a terrible strategy.

Understanding Efficiency

In physics, efficiency is a measure that compares energy input to work performed. Simply put, a machine is considered efficient if it can do more work with less energy. We can apply the same type of measure to understand the real benefit of automated accessibility testing.

Cost of human work

As I started writing this blog post, I asked a large number of accessibility experts how long it would take to do the following task:

  • Load a page in a browser
  • Find an image on that page
  • Test whether it has an alt attribute
  • Log a bug about that missing alt attribute

Across the board, the prevailing answer was “around 5 minutes”.
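The machine version of that same task takes milliseconds. As a rough illustration (not Tenon’s actual implementation, and deliberately simplified), here is the kind of check a tool automates — scanning markup for images that lack an alt attribute entirely, using only Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag being opened
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "(no src)"))

html = '<p><img src="logo.png"><img src="hero.jpg" alt="Hero image"></p>'
checker = MissingAltChecker()
checker.feed(html)
print(checker.missing)  # → ['logo.png']
```

Note that this only flags images where the alt attribute is missing outright; whether an alt value that *is* present is actually meaningful is exactly the kind of judgment that still needs a human.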

What does that cost? This can be tracked very easily by using the fully loaded personnel cost of the person doing the testing and logging of the issue.  Naturally that depends on a lot of factors, so I’ll use the typical salary for a Software Quality Assurance Analyst.  For our purposes, let’s assume a fully loaded cost of $60 an hour.  In that scenario, it costs $5 to find and log that issue for an image missing a text alternative.  That’s not bad.
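The arithmetic is simple enough to sketch, using the assumed figures above ($60/hour fully loaded, roughly 5 minutes per issue):

```python
# Back-of-the-envelope cost of one manually found-and-logged issue.
hourly_cost = 60.00        # assumed fully loaded personnel cost, USD per hour
minutes_per_issue = 5      # the prevailing estimate from the experts I asked

cost_per_issue = hourly_cost * minutes_per_issue / 60
print(f"${cost_per_issue:.2f} per issue")  # → $5.00 per issue
```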

Cost of machine work

It is a no-brainer to understand that a computer can do work faster than a human. That is, after all, what they were created to do. But how much more accessibility testing can a machine do than a human?  Here are some metrics to consider.

  • In its default configuration, Tenon can accept and process 300 API calls per second. That’s 18,000 pages per minute.
  • Tenon currently has 188 accessibility tests in production which, combined, can assess a web page against over 3,000 distinct failure conditions.
  • According to Tenon Research, Tenon finds approximately 252 errors per page.

Given the above, Tenon can find 4,536,000 accessibility errors per minute. In the same 5 minutes that it takes a skilled human to manually inspect and log one issue, Tenon can find 22,680,000 issues.
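Those big numbers fall straight out of the metrics above:

```python
# Reproducing the throughput math from the metrics above.
pages_per_minute = 300 * 60   # 300 API calls/second → 18,000 pages per minute
errors_per_page = 252         # average, per Tenon Research

errors_per_minute = pages_per_minute * errors_per_page
print(errors_per_minute)      # → 4536000
print(errors_per_minute * 5)  # → 22680000, in the time a human logs one issue
```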

It is also worth noting that this number includes tons of things that take far longer for a human to test and log, such as issues with forms, tables, color contrast, document structure, proper use of ARIA, and more.

A good tool makes a good human better

Put into the above perspective, it would be easy to assume that I’m trying to make the case for replacing humans with a tool.  No. Never.  As I mentioned at the start of this post, there are a lot of accessibility best practices that a tool – any tool – cannot test for.  But an automated tool is very fast at finding what it can find.  When leveraged properly, the right tool can offer you an instant return on investment, but it isn’t a replacement for human expertise.

A robust and effective accessibility program is one that takes into consideration the results of multiple types of testing, each deployed at the right time and by the right people, with a focus on what each testing type is best at finding. Automation should be used early and often to exploit its high level of efficiency, so that the humans involved can be faster and more accurate.

Start your free trial of Tenon today!