High-quality software is not expensive. High-quality software is faster and cheaper to build and maintain than low-quality software, from initial development all the way through total cost of ownership.
Capers Jones
One of the challenges posed by using an automated testing tool is dealing with issue volume and with issues that arrive out of context. During a manual review, a skilled reviewer can find a problem and intuit, based on experience, how severe it is. Tenon has created a proprietary Priority algorithm that produces a normalized score for each issue. The algorithm was developed in collaboration with a number of accessibility experts, using the Delphi Method to weigh the various factors that must be considered during prioritization. This prioritization methodology has been implemented successfully at a large financial institution.
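Tenon's Priority algorithm and its Delphi-derived weights are proprietary, so the sketch below is only a rough illustration of the general approach: each issue carries several factors, each factor gets an expert-assigned weight, and the weighted sum is normalized to a 0-100 score. The factor names and weights here are invented for the example.

```python
# Hypothetical illustration of a weighted, normalized priority score.
# Factor names and weights are invented; Tenon's real algorithm is proprietary.

# Expert-assigned weights for each factor (hypothetical values that sum to 1.0).
WEIGHTS = {
    "user_impact": 0.4,  # how badly the issue blocks users
    "certainty": 0.3,    # how confident the test is that the issue is real
    "reach": 0.3,        # how widely the issue affects users or pages
}

def priority_score(factors: dict[str, float]) -> float:
    """Combine 0-100 factor ratings into a single normalized 0-100 score."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# Example: a high-impact, high-certainty issue with moderate reach.
issue = {"user_impact": 90, "certainty": 100, "reach": 60}
print(priority_score(issue))  # 84.0
```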
As it turns out, a majority of the issues that can be uncovered with automation have a high priority score and, although the priority scores Tenon returns are normalized (in this case, ranked against one another), the volume of issues can seem quite daunting. Tenon finds an average of about 50 issues per page. But that number can be deceiving: how “bad” is a page with 50 errors? Is a page with only 25 errors only half as bad? As a measure of performance, raw issue count isn’t very useful. What about pages that are significantly more complicated or have more content? All other things being equal, an extremely simple page with 50 issues is much worse than a complicated page with 50 issues. Although they have the same volume of issues, the simpler page packs more issues into less space: it has a higher issue density.
For Tenon’s density measure, we use kilobytes of document source. This is our own rough means of normalizing each document to provide a comparative estimate for each one, and it is inspired by Function Point analysis. Given our lack of insight into the actual back-end code of a web page (or set of pages on the same site), this is the closest we can get to judging each page against the others, and it gives us a way to estimate the relative performance of each page. In other words, a page with 80 errors in 100 KB of source has 80% density. A page with 80 errors in 80 KB of source has 100% density. While both pages have the same number of errors, the smaller page has more errors per KB of source and is, therefore, “worse”.
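To make the arithmetic concrete, here is a small sketch (our own illustration, not part of Tenon’s API): the density is simply the issue count divided by the document size in kilobytes, expressed as a percentage.

```python
def issue_density(issue_count: int, source_size_kb: float) -> float:
    """Issues per kilobyte of document source, expressed as a percentage."""
    if source_size_kb <= 0:
        raise ValueError("source size must be positive")
    return issue_count / source_size_kb * 100

# The examples from the text:
print(issue_density(80, 100))  # 80.0  -> 80% density
print(issue_density(80, 80))   # 100.0 -> 100% density
```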
To understand how this lends itself to judging the performance of an entire site, the chart above shows an example of the Density Distribution chart within Tenon’s dashboard. It is organized by the number of pages within each 10% bucket: 0%, 1-10%, 11-20%, and so on, up to 100%+. This provides an at-a-glance view of performance. Put simply, the taller the bars on the left, the better the performance. Sites with very high bars on the far right of the graph are poor performers.
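As a sketch of how such a distribution can be built (again, our own illustration rather than Tenon’s code), each page’s density is mapped to one of the chart’s buckets and the pages are then counted per bucket:

```python
import math
from collections import Counter

def density_bucket(density: float) -> str:
    """Map a density percentage to the chart's buckets: 0%, 1-10%, ..., 91-100%, 100%+."""
    if density <= 0:
        return "0%"
    if density > 100:
        return "100%+"
    upper = math.ceil(density / 10) * 10  # round up to the next multiple of 10
    return f"{upper - 9}-{upper}%"

def density_distribution(page_densities: list[float]) -> Counter:
    """Count pages per bucket; tall counts in the low buckets mean a healthier site."""
    return Counter(density_bucket(d) for d in page_densities)

print(density_distribution([0, 4, 8, 15, 37, 82, 120]))
# Counter({'1-10%': 2, '0%': 1, '11-20%': 1, '31-40%': 1, '81-90%': 1, '100%+': 1})
```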
As Tenon grows, we hope to provide our users with features that help them better understand their website’s quality and user experience, and to add to and improve Tenon’s reporting features to deliver these sorts of insights. If you have any questions or suggestions, don’t hesitate to contact us; you can find our details on our homepage.