Accessibility Is a Code Quality Issue
When teams think about accessibility, they usually think about design: colour contrast ratios, font sizes, screen reader compatibility. These concerns are important, but they are only part of the picture.
A significant number of accessibility failures originate in code. Missing alt attributes on images, form inputs without labels, focus outlines removed in CSS, icon buttons with no accessible name. These issues are not design decisions. They are code patterns that can be detected, measured, and prevented automatically.
Treating accessibility as a code quality concern rather than a design concern changes when and how teams address it. Instead of catching problems during manual audits or user testing (both of which happen late in the development cycle), teams can catch them at the point where they are cheapest to fix: before the code is merged.
Common Accessibility Patterns in Code
The most frequent accessibility failures in codebases are also the most predictable. They follow consistent patterns that static analysis can identify reliably.
Images Without Alt Text
Every image element needs an alt attribute. If the image conveys information, the alt text should describe that information. If the image is purely decorative, the alt attribute should be present but empty (alt=""), which tells screen readers to skip it.
In practice, missing alt attributes account for a large share of accessibility violations. They are easy to introduce because the image renders perfectly without them. The developer sees the correct visual result and moves on. A screen reader user encounters an unlabelled image that might be critical to understanding the page.
This pattern is straightforward to detect automatically. Any <img> tag without an alt attribute is a violation. Any image component that does not enforce an alt prop is a risk.
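As a rough illustration of how simple this check is, here is a minimal sketch using only Python's standard-library HTML parser. The function name and output format are illustrative, and a production tool (axe-core, eslint-plugin-jsx-a11y) handles far more cases, such as framework components and spread attributes.

```python
# Minimal sketch: flag every <img> tag that lacks an alt attribute.
# An empty alt ("") is valid for decorative images, so only a missing
# attribute is reported.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each <img> missing alt

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

def check_img_alt(html: str):
    checker = ImgAltChecker()
    checker.feed(html)
    return checker.violations

print(check_img_alt('<img src="logo.png"><img src="bg.png" alt="">'))
# → [(1, 0)]  — only the first image is flagged
```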
Form Inputs Without Labels
An input field without an associated label is unusable for screen reader users. They hear "edit text" with no indication of what the field is for. Sighted users might work it out from the surrounding context or placeholder text, but placeholder text disappears once the user starts typing and is not announced consistently by assistive technology.
Labels must be programmatically associated with their input, either by wrapping the input in a <label> element or by using the for/id attribute pair. A label that is visually positioned near an input but not programmatically linked does not count.
This is another pattern that automated analysis handles well. For every <input>, <select>, and <textarea>, the tool checks for an associated <label>. Missing associations are flagged with the file and line number.
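A sketch of that check, under the simplifying assumption that a label either wraps the field or points at its id via for; a real tool would also resolve aria-labelledby and skip hidden inputs.

```python
# Sketch: every <input>/<select>/<textarea> must either sit inside a
# <label> or have an id referenced by some label's for attribute.
from html.parser import HTMLParser

FORM_TAGS = {"input", "select", "textarea"}

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_depth = 0        # > 0 while inside a <label>
        self.label_targets = set()  # ids named by <label for="...">
        self.fields = []            # (id_or_None, wrapped, line)

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label":
            self.label_depth += 1
            if a.get("for"):
                self.label_targets.add(a["for"])
        elif tag in FORM_TAGS:
            self.fields.append((a.get("id"), self.label_depth > 0,
                                self.getpos()[0]))

    def handle_endtag(self, tag):
        if tag == "label":
            self.label_depth -= 1

def unlabelled_fields(html: str):
    """Return the line numbers of fields with no associated label."""
    c = LabelChecker()
    c.feed(html)
    return [line for field_id, wrapped, line in c.fields
            if not wrapped and field_id not in c.label_targets]

html = '''<label for="email">Email</label>
<input id="email" type="email">
<input type="text" placeholder="Name">'''
print(unlabelled_fields(html))  # → [3]  — placeholder alone is not a label
```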
Focus Outlines Removed in CSS
This one is remarkably common. A developer or designer sees the default browser focus outline (that blue or dotted ring around focused elements) and considers it ugly. They add outline: none or outline: 0 to their CSS. The visual result is cleaner. The accessibility impact is severe.
Keyboard users rely on focus indicators to know where they are on the page. Removing focus outlines makes the interface effectively unusable for anyone who navigates with a keyboard, which includes many users with motor impairments, power users who prefer keyboard navigation, and anyone whose mouse has just stopped working.
The fix is not to keep the default browser outline. It is to replace it with a custom focus style that fits the design system. But the pattern of removing outlines entirely is detectable in CSS analysis, and it should be flagged every time.
Icon Buttons Without Labels
An icon button is a button that contains only an icon, with no visible text. A magnifying glass for search, a hamburger menu icon, a close button with an X. These are visually intuitive for sighted users but completely opaque to screen readers.
Without an accessible name, a screen reader announces "button" with no indication of what the button does. The solution is an aria-label attribute that describes the action: "Search", "Open menu", "Close dialogue".
Detecting this pattern requires checking whether button elements that contain only image or SVG content also have an aria-label, aria-labelledby, or visible text fallback. It is more nuanced than checking for alt text, but it is still a well-defined pattern.
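A simplified sketch of that nuance: track open button elements and check whether each one accumulates either an ARIA name or visible text before it closes. This treats any aria-labelledby as sufficient without resolving its target, and counts text inside an SVG as visible text, both of which a real tool would handle more carefully.

```python
# Sketch: flag <button> elements that contain no text and carry no
# aria-label or aria-labelledby.
from html.parser import HTMLParser

class IconButtonChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []       # open <button> frames: [line, has_name, has_text]
        self.violations = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "button":
            has_name = bool(a.get("aria-label") or a.get("aria-labelledby"))
            self.stack.append([self.getpos()[0], has_name, False])

    def handle_data(self, data):
        if self.stack and data.strip():
            self.stack[-1][2] = True  # visible text counts as a name

    def handle_endtag(self, tag):
        if tag == "button" and self.stack:
            line, has_name, has_text = self.stack.pop()
            if not (has_name or has_text):
                self.violations.append(line)

def unnamed_icon_buttons(html: str):
    c = IconButtonChecker()
    c.feed(html)
    return c.violations

html = ('<button><svg viewBox="0 0 24 24"></svg></button>'
        '<button aria-label="Search"><svg></svg></button>')
print(unnamed_icon_buttons(html))  # → [1]  — only the unnamed button
```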
Links Without Accessible Text
Similar to icon buttons, links that contain only images or use generic text like "click here" or "read more" provide no context for screen reader users, who often navigate by listing all links on a page. A list of twelve links that all say "read more" is not helpful.
Links need descriptive text that makes sense out of context. "Read our accessibility guide" is useful. "Click here" is not. While automated tools cannot always judge whether link text is descriptive enough, they can detect links with no text content at all and links that rely solely on image content without alt text.
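The clear-cut cases can be caught mechanically, as in this sketch. The list of generic phrases is an illustrative assumption, and a real checker would also look for alt text on images inside the link before declaring it empty.

```python
# Sketch: flag links whose text is empty or generic ("click here" etc.).
# Judging whether non-generic text is descriptive still needs a human.
from html.parser import HTMLParser

GENERIC = {"click here", "read more", "more", "here", "link"}

class LinkTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []       # open <a> frames: [line, collected_text]
        self.violations = []  # (line, reason)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.stack.append([self.getpos()[0], ""])

    def handle_data(self, data):
        if self.stack:
            self.stack[-1][1] += data

    def handle_endtag(self, tag):
        if tag == "a" and self.stack:
            line, text = self.stack.pop()
            text = " ".join(text.split()).lower()
            if not text:
                self.violations.append((line, "no text content"))
            elif text in GENERIC:
                self.violations.append((line, "generic link text"))

def check_links(html: str):
    c = LinkTextChecker()
    c.feed(html)
    return c.violations

print(check_links('<a href="/guide">Read our accessibility guide</a>'
                  '<a href="/more">read more</a>'))
# → [(1, 'generic link text')]
```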
Why These Issues Persist
If these patterns are so predictable, why do they keep appearing in codebases?
The primary reason is that accessibility violations are invisible to the developer who introduces them. The page looks correct. The tests pass (because most test suites do not check accessibility). The code review focuses on logic and architecture, not HTML semantics. The violation ships and nobody notices until an accessibility audit or, worse, a user complaint.
The secondary reason is tooling gaps. Linting tools like eslint-plugin-jsx-a11y catch some issues in JSX, but they miss CSS-based violations, template languages, and cross-file patterns. Most accessibility testing happens at the rendered page level (using tools like Axe or Lighthouse), which means the feedback arrives long after the code was written.
Bridging this gap requires analysis that operates at the code level, before the page is rendered: scanning templates for missing alt text, checking stylesheets for removed focus outlines, and verifying that interactive components have accessible names. This approach catches issues at the same stage as linting and type checking, which means faster feedback and cheaper fixes.
Building Accessibility Into Quality Checks
The most effective approach treats accessibility violations the same way teams treat security vulnerabilities: as code-level issues that should be detected early, reported clearly, and tracked over time.
Detect at the Code Level
Run accessibility pattern checks as part of your analysis pipeline. Check templates, components, and stylesheets for the patterns described above. Report findings with file paths, line numbers, and clear explanations of why the pattern is a problem and how to fix it.
Set Thresholds That Make Sense
Not every team needs to achieve WCAG AAA compliance immediately. Starting with a baseline score and preventing regressions is a reasonable first step. As the team addresses existing issues, the threshold can be raised progressively.
The key is that the threshold exists and is enforced. Without a quality gate, accessibility work tends to be deprioritised in favour of feature delivery. A gate ensures that new code does not make the situation worse while the team works on improving the existing codebase.
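One way to implement a ratcheting gate of this kind is sketched below: the build fails if the current score drops below a stored baseline, and the baseline rises whenever the score improves. The file name, score scale, and function name are illustrative assumptions, not a prescribed format.

```python
# Sketch of a ratcheting accessibility quality gate.
# CI would call enforce_gate(score) and fail the build when it returns False.
import json
import pathlib

BASELINE_FILE = pathlib.Path("a11y-baseline.json")  # hypothetical location

def enforce_gate(current_score: float) -> bool:
    """Return True if the gate passes; raise the baseline on improvement."""
    baseline = 0.0
    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())["score"]
    if current_score < baseline:
        print(f"FAIL: score {current_score} below baseline {baseline}")
        return False
    if current_score > baseline:
        # Ratchet: new code may not regress below this improved score.
        BASELINE_FILE.write_text(json.dumps({"score": current_score}))
    print(f"PASS: score {current_score} (baseline {baseline})")
    return True
```

The ratchet means the team never has to fix everything at once: the gate only demands that each merge leaves the score no worse than the best it has ever been.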
Track Progress Over Time
Accessibility is not a one-time fix. New code introduces new patterns, component libraries get updated, and design systems evolve. Tracking accessibility scores over time shows whether the team is improving, maintaining, or regressing.
A trend line also makes the business case tangible. When a team can show that their accessibility score has improved from 55 to 78 over six months, it is easier to justify the continued investment.
Frequently Asked Questions
Can automated tools catch all accessibility issues?
No. Automated analysis catches structural patterns reliably: missing alt text, unlabelled inputs, removed focus outlines. But some accessibility requirements need human judgement, such as whether alt text is actually descriptive or whether the tab order makes logical sense. Automated checks are a strong first line of defence, not a complete solution.
Which accessibility issues should block a pull request?
Start with the most impactful and clearly detectable patterns: images without alt text, inputs without labels, and focus outlines removed in CSS. These are unambiguous violations that affect real users. More nuanced issues can be reported as advisory findings.
Is accessibility only relevant for public-facing applications?
No. Internal tools are used by employees, some of whom may have disabilities. Legal requirements in many jurisdictions apply to internal software as well. Beyond compliance, accessible code tends to be better-structured code, which benefits maintainability for the whole team.
How does accessibility relate to code quality scoring?
Accessibility is one domain of codebase health. It measures whether the code produces interfaces that are usable by people with different abilities. Like security or architecture, it can be scored based on the presence or absence of known patterns and tracked over time.
Accessibility Is for Everyone
The most common misconception about accessibility is that it benefits only a small group of users. In reality, accessible code benefits everyone. Keyboard navigation helps power users. Clear labels help users on mobile devices. Proper heading structure helps search engines. Alt text helps users on slow connections where images fail to load.
Building accessibility into code quality checks is not about compliance. It is about building software that works well for the widest possible audience. The patterns are well understood, the detection is reliable, and the fixes are usually straightforward. The only missing ingredient for most teams is the tooling to catch these issues before they ship.