First, let me say that I am not an expert in web accessibility. Unfortunately, I think most web developers aren't. Worse, evaluating development work to understand how people actually interact with it is not only hard, it is often relegated to a "508 checklist" rather than being a core piece of how website and application development happens. Further, the tools that help evaluate a site for accessibility often catch only the more glaring issues, still requiring evaluation by people.
...human inspection is necessary when it comes to semantics, for instance to assess if the page title properly describes the page, if a particular element has either not been marked up at all or with a too generic element, or if elements which belong logically together can be grouped inside a proper element.
The report shows once more that human inspection is crucial to achieve a high degree of web accessibility, and that a dedicated effort must be made to develop a more modern generation of checkers suitable for the latest standards and recommendations, and tailored for the needs of today's testers, developers, and site owners.
And that's the reality: people are still a fundamental part of the process. However, the tools that we do have now can provide a helpful first pass in evaluating our work. We have an opportunity to bring some accessibility questions to the fore during the development process.
My interest in linting is exactly this: it is an easy way for teams to integrate accessibility evaluation throughout the entire lifecycle of a project. By making it part of the pull request process, these tools, as imperfect as they may be, can catch some concerns early, which relieves pressure if significant alterations are needed. Discovering that an Ajax widget poses accessibility challenges is much easier to deal with early in the project than in a quality assurance sprint at the end. When accessibility problems make code unmergeable, teams can take responsibility for issues they might otherwise not discover until very late in the process. Scheduling regular time for assessment by people is of course still needed, but the hope is that those reviews will be much more effective since many obvious concerns have already been addressed.
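To make the idea concrete, here is a toy sketch of the kind of mechanical check these linters automate: flagging img elements that have no alt attribute at all. This is my own illustration, not any particular tool's implementation; the class name and sample markup are invented for the example.

```python
# Toy illustration of an automated accessibility check: find <img> tags
# with no alt attribute. Real linters cover many more rules than this.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Records the position of every <img> start tag lacking an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; an empty alt="" still counts
        # as present, which is correct for decorative images.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the offender

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="ok.png" alt="Logo"></p>')
print(checker.violations)  # one violation: the first img has no alt text
```

A check this small obviously cannot judge whether the alt text is meaningful; that is exactly the kind of semantic question the quoted report reserves for human inspection.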
So how can this be brought into a continuous integration workflow? For some first-pass testing, I have used access_lint, which relies on Google's Accessibility Developer Tools, to run checks on CircleCI. Here's an example project which shows how to implement linting as a test. It's basically three lines of code, two of which don't really count. Note that I'm using a specific branch of access_lint which handles the return status from the tests so that CircleCI can evaluate it. You can see the test result here.
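For reference, the CircleCI side of such a setup might look roughly like the sketch below. This is a hypothetical config, not the example project's actual file: the gem installation step, the audit subcommand, and the local URL are assumptions, so check the access_lint README for the exact invocation.

```yaml
# circle.yml (CircleCI 1.x style) — a hypothetical sketch. Assumes the built
# site is served locally before the audit runs.
dependencies:
  pre:
    - gem install access_lint
test:
  override:
    - (python -m SimpleHTTPServer 8000 &) && sleep 2  # serve the static build
    - access_lint audit http://localhost:8000          # non-zero exit fails the build
```

The important part is the exit status: if the audit command returns non-zero on failures, CircleCI marks the build red, which is what makes the pull request unmergeable.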
The reality is that it's simple to integrate rudimentary tests for accessibility concerns. It does not absolve a team from doing a full audit, but it does make accessibility an integral part of our process. As the tools continue to mature, this process will only become more useful and effective.