Tenon was founded on a simple principle: to provide tools and services that make websites honestly more accessible. We're not here to help you tick the boxes. We're not going to make exaggerated claims about what percentage of errors our API-centered testing tool can find, or minimize how much work there still is to do on a website. We're here to face the facts about meeting the real needs of people with disabilities.
The backbone of that fact-finding is doing real-world research and acting on the results. We've talked to you before about our research, but we aren't resting on our laurels. Here are a few ways that we're growing our research practice at Tenon.
Test Verification Program
Not every automated accessibility test is perfect in every situation, especially when that test is first introduced. Although every one of our tests is backed by unit tests, one of the most tedious and demoralizing realities of automated testing is encountering false positives: something that's reported as an accessibility failure when it actually isn't. A false positive costs developers time to verify or address, and can even make them doubt their own knowledge of accessibility. False positives can also undermine a user's confidence in the tool.
That's why we've added our new test verification feature. This allows a user to flag a reported issue as a false positive. Whenever a participating user reports a false positive, our system creates a support ticket and we'll look into it.
Because tackling false positives helps us improve our service, we'll be offering free API credits to users for verifying results. If we have enough people participate, and have quality results, we'll institute this crowdsourcing as our second line of Quality Assurance for our tests (after internal review, of course!).
If you're a current Tenon user and you want to help us help you, log into your account and activate this feature today!
To activate, just go to Settings > Account Settings, scroll to the bottom to "Would you like to participate in our test verification program?", select "Yes," and click the "Submit Settings" button.
Trust and verify!
There's a Russian proverb: "Trust, but verify." We're turning that around a bit, trusting you to verify testing results, and not to reveal details of any real-world HTML samples you might see. Once you sign up for the verification program, you can start trading time for free API credits by helping us verify real test results.
It's very simple. You just go to the Test Verifier page and look at the sample of HTML that contains the error, along with the test description. If the test was accurate, select "Yes"; if not, select "No" and tell us what's wrong with it; if you're not sure, select "Not sure (or skip this issue)" and move on. Click "Submit," and another test result pops up. Rinse and repeat. Each test should only take you a few seconds.
For example, the screencap below shows a common error: the test description says the <html> element doesn't have a lang (language) attribute (in this case, it should be <html lang="en">), so the screen reader can't be certain how to pronounce the text content and will default to the user's preference (which isn't always right). I looked at the code, saw that the <html> element was indeed missing its lang attribute, clicked "Yes," and hit "Submit." Easy!
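The missing-lang check described above can be sketched in a few lines. This is a simplified illustration using Python's standard html.parser, not Tenon's actual test implementation:

```python
from html.parser import HTMLParser

class LangAttributeCheck(HTMLParser):
    """Flags an <html> element that lacks a usable lang attribute."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "html":
            lang = dict(attrs).get("lang")
            if not lang or not lang.strip():
                self.issues.append(
                    "The <html> element has no lang attribute, so assistive "
                    "technology must guess the document language."
                )

checker = LangAttributeCheck()
checker.feed("<html><head><title>Demo</title></head><body>Hi</body></html>")
print(checker.issues)  # reports one issue: the missing lang attribute
```

A real test would also need to validate that the attribute's value is a well-formed language tag, which is where false positives tend to creep in.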
Free API credits
What's in it for you? A sense of satisfaction with a job well done, an opportunity to put machines in their place by catching their mistakes, and, oh yeah, free API credits!
For each test you verify, you get one free API call. It's that easy.
If you're short on Tenon API credits but need to test your 10-page site for accessibility, just hop on the Test Verifier, confirm 10 test results, and boom! You now have enough boost credits to test your site. Pretty sweet deal, right?
Of course, if you don't have the time to verify test results, you can always buy boost credits for USD$0.05 apiece, or subscribe to our Pay-Along metered service. But we hope you'll help us improve our service for everyone.
Based partly on our test verification against false positives, and building on our growing hoard of data (see below), we hope to eventually generate probability scores for our tests with a statistically significant sample size, confidence level, and confidence interval. What does this mean, and what are the implications? We'll drill into this in a future blog post, where we'll share details on some upcoming API changes that will make this probability scoring even more powerful.
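To give a flavor of what such scoring could involve (the formula choice and the numbers here are illustrative assumptions, not Tenon's actual method), a Wilson score interval turns a batch of verified results into an accuracy estimate with a confidence interval:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion; z=1.96 gives ~95% confidence."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

# Hypothetical: 180 of 200 verified results confirmed the test was accurate.
low, high = wilson_interval(180, 200)
print(f"Estimated accuracy: 90.0% (95% CI {low:.1%} to {high:.1%})")
```

The more verifications a test accumulates, the narrower that interval gets, which is exactly why crowdsourced verification matters for probability scoring.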
But the short answer is, by being objective and truthful with ourselves about the effectiveness of each and every test, we can steadily improve and ensure that Tenon remains the most useful accessibility API for our current and future customers.
For the past several years, we've used our API to record data and look for patterns in that data. We publish the raw numbers publicly on our site (with permission) so that anyone else can discover patterns themselves, because that's how open science is done.
As with any big-data project, we aren't sure what patterns we'll find. We aren't certain that any of it will improve our product, or lead to a new product, or even have a positive impact on digital accessibility. We want to emphasize that correlation does not imply causation, so any patterns we find may not even be meaningful.
But this eyes-open approach already seems to indicate at least one intriguing result: issue density (the number of accessibility errors on a page relative to the amount of content) is a key indicator of the general accessibility of a content management system (CMS). You can read more about this in our first research blog post, The best & worst of content management systems.
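As a minimal sketch of the metric (the exact normalization Tenon uses isn't spelled out here, so this assumes errors per kilobyte of content):

```python
def issue_density(error_count, content_kb):
    """Accessibility errors per kilobyte of page content.

    Assumed normalization for illustration; a real metric might
    normalize by DOM node count or rendered text length instead.
    """
    if content_kb <= 0:
        raise ValueError("content size must be positive")
    return error_count / content_kb

# Hypothetical comparison: two CMS-generated pages with the same error count.
print(issue_density(12, 48))   # small page: higher density
print(issue_density(12, 240))  # large page: lower density
```

Normalizing by content size is what makes the comparison fair: a sprawling page with twelve errors is in better shape than a tiny page with the same twelve.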
So far, we've scanned over 11,000 technologies on over 22,000 unique domains. That seems like a lot of data, but until we reach 50,000 domains, we regard our observations as tentative.
But that doesn't mean we can't sample for initial patterns to watch for, and that's what we're doing now. We'll post our initial hypotheses here, and then verify (or invalidate) them in follow-up posts.
Future Research & Data Sharing
As mentioned above, future API changes will make the data we gather even more accurate and the tests we perform more reliable.
But just because a test returns an accurate result doesn't mean that the test is relevant; a test is only a good test if it produces an actionable issue for our customers that leads to a meaningful improvement in their users' experience. We're taking steps to track the relevance of each of our tests. This will help us correlate issues with specific technologies more accurately and sharpen the probability scoring.
In the long term, we plan to share these results as well, and will publish periodic updates of flattened data output.
Ultimately, we want the data we collect to be helpful for people with disabilities, for our customers, and for our service, so if you find other interesting uses for our data, have suggestions on how we could use it, or even have ideas for improved tests, let us know!