Rapid remediation with Tenon. Beat bad code with a Mallet

Courseware remediation

In November 2017, a new customer approached Tenon to remediate four online courses. The customer is migrating scores of courses to a new platform for a large government agency. Because this agency is part of the Executive Branch of the Federal Government, it is required to comply with Section 508 of the Rehabilitation Act.

We were given access to the course content, each course containing hundreds of individual HTML files. Time to get to work.


The customer would place each course in a git repository and send us an e-mail, after which we’d take over.

We would create a new remediation branch and set up boilerplate configuration for our tooling, such as eslint and jsonlint. We used Grunt to hook remediation into tasks, and set up the Tenon Grunt plugin to test via the Tenon API as we worked.


We’d configure Grunt to walk through all of the course’s HTML files, sending each to the API for testing. Testing an entire course could take several minutes, with the API spending an average of 2 seconds processing each file.
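A minimal sketch of that kind of Grunt wiring — the `tenon` task name, the plugin name, and the option names here are illustrative assumptions, not the plugin’s actual API:

```javascript
// Gruntfile.js — illustrative wiring only; the `tenon` task name,
// plugin name, and option names are assumptions, not the real plugin API.
module.exports = function (grunt) {
  grunt.initConfig({
    tenon: {
      options: { apiKey: process.env.TENON_API_KEY }, // assumed option name
      course: { src: ['course/**/*.html'] }           // every HTML file in a course
    }
  });

  grunt.loadNpmTasks('grunt-tenon');        // hypothetical plugin name
  grunt.registerTask('default', ['tenon']);
};
```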

Screenshot of Terminal output showing that there were 752 files with errors

After testing, we would be armed with a list of pages that had errors, and we’d start fixing problems. In general, the more files a course contained, the more issues we’d see getting logged – but often there’d be a clear pattern of issues being repeated throughout the course. These could efficiently be corrected by using find-and-replace patterns in our code editors:

Screenshot of IntelliJ IDEA's Replace in Path window being used to add a lang attribute to all opening html tags

We’d locate a specific item from Tenon’s output, open a page listed in the terminal output, find the element(s) with those errors, replace them with the code needed to fix the problem, and commit the fixes. We’d run the Grunt task from time to time to keep track of progress and find new patterns.
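The lang-attribute replacement shown in the screenshot above can be expressed as a single regular expression. A minimal JavaScript sketch, assuming the courses are in English (hence `lang="en"`):

```javascript
// Add lang="en" to any opening <html> tag that doesn't already
// declare a language; tags that already have one are left untouched.
const addLang = html =>
  html.replace(/<html(?![^>]*\blang=)([^>]*)>/gi, '<html lang="en"$1>');

console.log(addLang('<html class="course">'));
// → <html lang="en" class="course">
console.log(addLang('<html lang="fr">'));
// → <html lang="fr">  (already declared, unchanged)
```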

Combined with manual testing, we’d average 10 hours of work per course, remediating 4 courses in just over 40 hours.

Great! Here’s more.

Our process worked very well on the initial run of 4 courses. Then the client said they had over 100 more to do! Fortunately, we had our notes and an informative git history – we could just repeat our workflow, right?

The problem with the above process was that generic find-and-replace doesn’t hold up well when the same issue appears thousands of times with tiny variations. For instance, we found a ton of cases where links had title attributes containing the same content as the link text:

<a title="the link text" href="/foo/">the link text</a>

Patterns like this would litter the HTML, all just different enough from each other to defeat a generic find-and-replace, yet similar enough that fixing them manually was inefficient and repetitive: sometimes the links had other attributes, and other times those attributes appeared in a different order.

This meant that finding and replacing each individual instance was less efficient than it could be. If we could access the DOM, we could use the DOM itself to determine whether a link’s title attribute matched its text, regardless of the specific values involved.
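As a sketch of that DOM-level check, using an htmlparser2-style node shape (`{ name, attribs, children }`) like the one Mallet’s fixers operate on — the function names and the hand-built node literal are illustrative:

```javascript
// Recursively collect the text content of a node, as a DOM would.
function textContent(node) {
  if (node.type === 'text') return node.data;
  return (node.children || []).map(textContent).join('');
}

// A link title is redundant when it's non-empty and identical to the
// link's text — regardless of attribute order or other attributes.
function hasRedundantTitle(node) {
  const title = (node.attribs.title || '').trim();
  return title !== '' && title === textContent(node).trim();
}

// Hand-built node for the demo, mirroring the example above.
const link = {
  type: 'tag',
  name: 'a',
  attribs: { title: 'the link text', href: '/foo/' },
  children: [{ type: 'text', data: 'the link text' }]
};

console.log(hasRedundantTitle(link)); // → true
```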

There were plenty of other things we found that needed to be fixed, too. Some of them were real accessibility issues and some of them were code style issues, such as:

  • Spacer images without alt attributes
  • Redundant title attributes on links
  • Empty frames without title attributes
  • html element without a language attribute
  • tabindex attributes applied on non-actionable elements
  • tabindex attributes with a value greater than zero

We were lucky in that the issues we found were pervasive throughout the material. As is often the case, the courses’ original developers used the same practices over and over. Developers tend to do things the same way every time they create things, which means that when accessibility issues occur they often have the same cause and underlying code.

What we really had were repeating issues in the context of HTML markup. What if we built an automated remediation tool that operated within this context?


We ended up creating Mallet: a tool that takes raw HTML, parses it into DOM context, applies DOM-aware fixers, and returns the remediated HTML.

Designing Mallet

We designed Mallet knowing that we wanted to use it in a wide variety of remediation work – hence we built Mallet as a robust core library that we can build other tools on top of.

We realised that Mallet should only do two things:

  1. Correct problems that are verifiably incorrect, such as an <img> missing an alt, or a <ul> as a direct child of another <ul>
  2. Report on structure that might require manual review, such as usage of the role attribute

These two cases are handled by what we called fixers and matchers.


A fixer consists of:

  • a short name, short, used in reporting
  • a title, for documentation purposes and internal test reporting
  • a tag, which determines the type of node it is applied to
  • a matches function, which determines whether the node should be remediated
  • a fix function, which fixes the node in place

A fixer that forces an image node to contain an alt attribute, unless the image has an explicit presentational role, would be:

  short   : 'FORCE_ALT_ON_IMG',
  title   : 'Add empty alt attribute when not present on an image',
  tag     : 'img',
  matches : (node =>
              !has(node, 'alt') &&
              !['none', 'presentation'].some(_ => hasExplicitRole(node, _))),
  fix     : (node => node.attribs.alt = '')
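For illustration, the fixer above could be exercised like this — the `has` and `hasExplicitRole` helpers are sketched here and may differ from Mallet’s real implementations:

```javascript
// Sketched helpers — Mallet's actual implementations may differ.
const has = (node, attr) =>
  Object.prototype.hasOwnProperty.call(node.attribs, attr);
const hasExplicitRole = (node, role) => node.attribs.role === role;

const forceAltOnImg = {
  short   : 'FORCE_ALT_ON_IMG',
  tag     : 'img',
  matches : node =>
    !has(node, 'alt') &&
    !['none', 'presentation'].some(_ => hasExplicitRole(node, _)),
  fix     : node => { node.attribs.alt = ''; }
};

// A spacer image without an alt attribute gets an empty one added.
const img = { name: 'img', attribs: { src: 'spacer.gif' } };
if (forceAltOnImg.matches(img)) forceAltOnImg.fix(img);
console.log(img.attribs); // → { src: 'spacer.gif', alt: '' }

// An image with an explicit presentational role is left alone.
const decorative = { name: 'img', attribs: { role: 'presentation' } };
console.log(forceAltOnImg.matches(decorative)); // → false
```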


A matcher is really just a fixer without a fix function. Matchers have an optional data function that determines any relevant extra information to return as part of the match.

A matcher that matches any element with a non-empty role attribute:

  short   : 'ELEMENT_WITH_ROLE',
  title   : 'Match any element with a role defined',
  tag     : '*',
  matches : (node => is(node, 'role', (_ => _.length > 0))),
  data    : (node => ({ type: node.name, role: node.attribs.role }))

Mallet is designed to compose larger functionality from small, single-purpose functions, following the Unix philosophy of doing one thing and doing it well. This makes Mallet easy to test and easy to extend.

Running Mallet

At its most basic level, Mallet is used (within projects) as:

Mallet(fixes: List<Fix>)(html: string): Promise<string>

Optionally, callbacks can be provided as hooks, firing when a matcher matches or a fixer is applied. Mallet applies matchers and fixers to the DOM nodes in recursive-descent fashion.
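That recursive descent can be sketched as follows — not Mallet’s actual internals, just an illustration of applying fixers down a node tree with an optional hook:

```javascript
// Sketch of recursive-descent fixer application — illustrative only.
// Visits each element node, applies every fixer whose tag and matches()
// accept it, fires the hook, then descends into the children.
function applyFixes(node, fixes, onFix = () => {}) {
  if (node.name) { // skip text nodes
    for (const f of fixes) {
      if ((f.tag === '*' || f.tag === node.name) && f.matches(node)) {
        f.fix(node);
        onFix(f.short, node); // hook fired when a fixer is applied
      }
    }
  }
  for (const child of node.children || []) applyFixes(child, fixes, onFix);
  return node;
}

const fixes = [{
  short: 'FORCE_ALT_ON_IMG',
  tag: 'img',
  matches: n => !('alt' in n.attribs),
  fix: n => { n.attribs.alt = ''; }
}];

const tree = {
  name: 'div', attribs: {},
  children: [{ name: 'img', attribs: { src: 'x.gif' }, children: [] }]
};

applyFixes(tree, fixes, short => console.log('applied', short));
// → applied FORCE_ALT_ON_IMG
```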

Building around Mallet

Mallet as a library isn’t very useful by itself, so we’ve built several tools around it:

  • mallet, a command-line tool which takes a filename as argument and writes remediated HTML back to the same file
  • mallet-reporter, a command-line tool which takes one or more filename(s) as argument(s) and reports back detailed results as JSON
  • various report-* commands which can read mallet-reporter results and generate reports in various formats

This makes sure that Mallet remains a solidly-tested, easily-extended library that we can apply in various auditing and remediation projects.

By generating reports as Mallet is applied, we end up with the basic remediations applied, and we can afterwards use the raw reporting data to generate any report we need to assist in manual remediation. This means we only have to run Mallet once, reducing the time it spends processing data.

Effective Mallet usage

Mallet has reduced the average time remediating a course from 10 hours to 3 hours, processing around three thousand files per minute (depending on the host system) and generating useful manual remediation reports along the way.

YouTube video showing Mallet in action

As we identify more repeating patterns throughout the courses, we add fixers and matchers as we go, making sure Mallet lets us work as efficiently as possible.


One thing we often have to remind people of is that there’s a limit to what automated testing can find. Automated testing tools work by subjecting websites to a series of generic heuristic checks. Even though Tenon has been proven to find the most issues, there are still limits to what can be found, because writing tests that can be applied reliably to all websites is different from writing specific tests for one site. In addition, some things are too complicated or too subjective to fix automatically.

Mallet is subject to the same limitations; it was designed to be an extendable companion tool to take care of “boilerplate accessibility correction” and assist in manual remediation. And in that regard, it’s working exactly as it should, getting more useful as we identify more patterns to hammer into shape.
