The Div That Thought It Was a Button

Somewhere, right now, a <div> with an onClick handler is pretending to be a button. It looks right. It works with a mouse. It passed QA because QA tested it with a mouse. And it is unreachable for anyone navigating by keyboard, and announced as plain text, not a control, to anyone using a screen reader.

We see this in almost every React codebase we audit. Not some. Almost every single one.

The fun part about component architecture is that when a component is accessible, every instance of it is accessible. The less fun part is that when a component is broken, every instance of it is broken. We audited an application last year where a single non-semantic dropdown pattern created 43 identical findings. Forty-three. Same component, same bug, same finding, 43 times. The audit report was longer than the codebase.

This post is the reference we wish existed when we started building accessible React components. Real patterns, real code, and the stuff that most "accessible React" tutorials skip because aria-label is easier to explain than focus management. If you have ever read one of those tutorials and thought "okay, but what about the hard parts," this is for you.

Semantic HTML: The Cheat Code Nobody Uses

Before reaching for ARIA, exhaust native HTML. This is the most impactful accessibility habit you can build, and also the one developers resist the most because it feels too simple. Surely accessibility is harder than "use a <button> instead of a <div>."

It is not. Not at this level. A <button> already handles keyboard events, focus management, and screen reader announcements. A <select> already manages option navigation. A <dialog> opened with showModal() already moves focus into itself and makes everything behind it inert. These elements come with decades of built-in accessibility behavior. Free of charge.

// We find this in every codebase. We are not exaggerating.
// It is always a div. It always has an onClick. It never has a role.
<div className="btn" onClick={handleClick}>
  Submit
</div>
 
// This does the same thing, except it actually works.
<button className="btn" onClick={handleClick}>
  Submit
</button>

The <div> version requires you to manually add role="button", tabIndex={0}, and onKeyDown handlers for both Enter and Space. Even then, it still misses native details like form submission and proper disabled semantics. The <button> gives you all of that for free. Zero additional effort. Full keyboard and screen reader support. But somehow the <div> persists, because somebody five years ago decided that styling a <button> was too hard and wrote a <div> instead, and now it is in every component in the system.
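For the record, here is roughly what the <div> version has to reimplement by hand. This is a sketch, not a recommendation; the point is that every line of it comes free with <button>.

```typescript
// Everything a <div role="button"> must reimplement by hand.
// Native <button> activates on both Enter and Space, so the faux
// version needs a keydown handler that checks for both.
function isActivationKey(key: string): boolean {
  return key === "Enter" || key === " ";
}

// The markup then needs, at minimum:
//   role="button"   so assistive tech announces it as a button
//   tabIndex={0}    so it is keyboard-focusable at all
//   onKeyDown       calling the click handler when isActivationKey(e.key)
// ...and it will still miss native details like form submission
// and proper disabled semantics.
```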

We have feelings about this.

Keyboard Navigation Patterns

Every interactive component needs a keyboard contract. Not "it would be nice if keyboard worked." A contract. A documented set of key interactions that your component guarantees. Here are the patterns we follow on every custom web application we build.

  • Tab to focus
  • Enter or Space to activate
  • Visible focus indicator (we use a marching ants SVG ring, but anything visible works. The bar is genuinely that low.)

Composite Widgets (Menus, Tabs, Toolbars)

This is where most tutorials wave goodbye and wish you luck. Single controls are simple. Composite widgets are where keyboard navigation gets interesting, which is developer shorthand for "confusing."

  • Tab into the group, Tab out of the group
  • Arrow keys to navigate within the group
  • Home / End to jump to first/last item
  • Escape to close/dismiss

The pattern is called "roving tabindex." Tab gets you in and out. Arrow keys move within. That is the whole mental model. Once you understand it, you will notice that every well-built component library in existence follows this exact pattern. And you will start noticing which ones do not.

// getFocusableItems is assumed to return the group's items in DOM order,
// e.g. Array.from(groupRef.current.querySelectorAll('[role="menuitem"]')).
const handleKeyDown = (e: KeyboardEvent) => {
  const items = getFocusableItems();
  const currentIndex = items.indexOf(document.activeElement as HTMLElement);
 
  switch (e.key) {
    case "ArrowDown": {
      e.preventDefault();
      const nextIndex = (currentIndex + 1) % items.length;
      items[nextIndex]?.focus();
      break;
    }
    case "ArrowUp": {
      e.preventDefault();
      const prevIndex = (currentIndex - 1 + items.length) % items.length;
      items[prevIndex]?.focus();
      break;
    }
    case "Home":
      e.preventDefault();
      items[0]?.focus();
      break;
    case "End":
      e.preventDefault();
      items[items.length - 1]?.focus();
      break;
  }
};

Notice the modulo wrapping. When you hit the last item and press down, focus wraps to the first. When you hit the first and press up, it wraps to the last. Small detail. The kind of detail that separates "we thought about keyboard users" from "we added tabIndex={0} and called it a day."
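The other half of the pattern is the tabindex bookkeeping the handler above leaves implicit: exactly one item in the group carries tabIndex={0} at a time, every other item carries tabIndex={-1}, so Tab enters the group at the active item and leaves from anywhere. A minimal sketch of that bookkeeping as pure functions (the names are ours):

```typescript
// Roving tabindex: exactly one item sits in the page's tab order.
// Returns the tabIndex value for each item in the group.
function rovingTabIndexes(itemCount: number, activeIndex: number): number[] {
  return Array.from({ length: itemCount }, (_, i) =>
    i === activeIndex ? 0 : -1
  );
}

// Wrap-around arrow-key movement, same modulo math as the handler above.
function stepIndex(current: number, count: number, delta: 1 | -1): number {
  return (current + delta + count) % count;
}
```

Render each item with the tabIndex that rovingTabIndexes gives it, update activeIndex on arrow keys via stepIndex, and the Tab-in/arrow-within contract falls out for free.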

Focus Management in React

This is the section that most React accessibility guides pretend does not exist, and it is the one responsible for the most real-world bugs we find. If this section feels long, that is because this problem deserves more attention than it gets.

React's reconciliation will destroy your focus state without hesitation or remorse. When the DOM updates, the element that had focus might be removed and re-created. The user was mid-task, tabbing through a filtered list, and suddenly their focus is gone. Dumped to <body>. They have no idea where they are on the page. Meanwhile, visually, nothing looks wrong. The list re-rendered. The items are there. Everything looks fine if you are using a mouse. But the keyboard user is lost.

This happens with:

  • Lists that re-render after filtering (the most common culprit by a wide margin)
  • Modals that open or close
  • Dynamic forms that add or remove fields
  • Basically anything that changes the DOM while someone is navigating it

The Pattern: Restore Focus with useRef

import { useEffect, useRef, useState } from "react";

function FilterableList({ items }: { items: Item[] }) {
  const [filter, setFilter] = useState("");
  const previousFocusRef = useRef<HTMLElement | null>(null);
 
  const handleFilterChange = (value: string) => {
    // Remember where focus was before we blow up the DOM
    previousFocusRef.current = document.activeElement as HTMLElement;
    setFilter(value);
  };
 
  useEffect(() => {
    // If the thing that had focus no longer exists, pick somewhere sensible
    if (previousFocusRef.current && !document.body.contains(previousFocusRef.current)) {
      document.getElementById("filter-input")?.focus();
      previousFocusRef.current = null; // only restore once per change
    }
  });
 
  return (
    <div>
      <input id="filter-input" value={filter} onChange={(e) => handleFilterChange(e.target.value)} />
      {/* filtered items */}
    </div>
  );
}

The key insight: you need to decide where focus goes when the current target disappears. There is no universal right answer. It depends on the component. But you need to make a deliberate decision, because if you do not, the browser decides for you. Its decision is always <body>. That is never the right answer, but the browser does not care. It has other things to worry about.
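For the filtered-list case specifically, one deliberate choice is to land on the nearest surviving item rather than always bouncing back to the input. A sketch of that decision as a pure function (the name is ours):

```typescript
// After a re-render removes items, pick the closest surviving index
// to the one that had focus. Returns null when nothing is left, in
// which case focus should go to a stable anchor like the filter input.
function fallbackFocusIndex(previousIndex: number, newLength: number): number | null {
  if (newLength === 0) return null;
  return Math.min(previousIndex, newLength - 1);
}
```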

Screen Reader Announcements

Here is a thing that surprises people the first time they hear it: screen readers cannot see your UI. Visual changes mean nothing to them. A filtered list updating from 50 items to 3? Silent. A toast notification sliding in from the corner? Invisible. A loading spinner replacing the entire page content? The screen reader has absolutely no idea anything happened.

You have to tell it. Out loud. With ARIA.

Live Regions

{/* This div is invisible, but screen readers are listening to it */}
<div role="status" aria-live="polite" aria-atomic="true" className="sr-only">
  {`Showing ${filteredCount} of ${totalCount} results`}
</div>

  • aria-live="polite" waits for the user to finish what they are doing before announcing. Polite, as the name suggests.
  • aria-atomic="true" reads the entire region content, not just the part that changed.
  • className="sr-only" hides it visually but keeps it in the accessibility tree. Present but invisible. Like a good roadie.

When to Announce

  • Filter results change count
  • Form validation errors appear
  • Toast notifications fire
  • Loading states begin and end
  • Dynamic content updates

The general rule: if something changed that a sighted user would notice, a screen reader user needs to know about it too. If you are unsure whether to announce something, announce it. Nobody has ever complained that a screen reader told them too much relevant information.
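When several of these fire at once, routing everything through one helper keeps the live region from turning into word salad. A framework-free sketch (the queue itself is our convention, not a standard API; whatever consumes it writes one message at a time into the live region above):

```typescript
type Politeness = "polite" | "assertive";

// A tiny announcement queue: one live region, many callers.
// Assertive messages (validation errors) jump ahead of polite
// ones (result counts, loading states).
class AnnouncementQueue {
  private polite: string[] = [];
  private assertive: string[] = [];

  push(message: string, politeness: Politeness = "polite"): void {
    (politeness === "assertive" ? this.assertive : this.polite).push(message);
  }

  next(): string | undefined {
    return this.assertive.shift() ?? this.polite.shift();
  }
}
```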

Reduced Motion: The 15-Minute Fix Nobody Ships

Some users experience motion sickness from animations. Some have seizure disorders. Some just find your parallax scroll distracting. The prefers-reduced-motion media query lets you respect that preference, and it takes about 15 minutes to implement across your entire application.

We say "15 minutes" because we have timed it. Multiple times. On multiple projects. It is always about 15 minutes. And yet most applications do not do it.

/* Your beautiful entrance animation */
.card-entrance {
  animation: entrance 0.9s cubic-bezier(0.25, 1, 0.5, 1) both;
}
 
/* The version that does not make anyone nauseous */
@media (prefers-reduced-motion: reduce) {
  .card-entrance {
    animation: fade-in 0.6s ease-out both;
    animation-delay: 0s !important;
  }
}

The principles:

  1. Never remove the animation entirely. A gentle opacity fade provides visual continuity without causing discomfort.
  2. Remove transforms. No translateY, no scale, no rotate. These are the ones that cause problems.
  3. Remove blur. Even filter transitions can trigger motion sensitivity.
  4. Remove stagger delays. Everything appears at once. Staggered entrances are fun. Staggered entrances that make someone dizzy are not fun.
  5. Keep durations reasonable. 0.6 seconds of opacity. Comfortable for virtually everyone.
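CSS covers most of this, but JS-driven animation (Framer Motion, GSAP, hand-rolled requestAnimationFrame) needs the same preference read in script. A sketch via matchMedia, guarded so it is safe where window does not exist:

```typescript
// Reads the user's motion preference for JS-driven animations.
// Defaults to full motion where window/matchMedia are unavailable
// (e.g. during server-side rendering).
function prefersReducedMotion(): boolean {
  if (typeof window === "undefined" || !window.matchMedia) return false;
  return window.matchMedia("(prefers-reduced-motion: reduce)").matches;
}
```

Check it once per animation, not once per app load; users can flip the OS setting while your page is open.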

Fifteen minutes. No good reason to skip it. And yet.

Testing: The Part Everyone Plans to Do Later

Automated tools catch about 30% of accessibility issues. That is not a rough estimate. We have tracked this across hundreds of accessibility audits, and 30% is actually generous. Some of the most critical issues (keyboard traps, focus management bugs, missing screen reader announcements) are completely invisible to automated scanners. Lighthouse will give you a 95 and your modal will still trap keyboard users with no escape. Ask us how we know.

What each method actually catches:

  • axe-core / Lighthouse: Missing ARIA, contrast failures, missing alt text. The easy stuff.
  • Keyboard-only navigation: Focus traps, missing interactions, that input you can tab to but not tab out of.
  • Screen reader (NVDA/VoiceOver): Announcement gaps, wrong reading order, the button that says "submit submit blank".
  • Zoom to 200%: Layout breakage, text truncation, the horizontal scrollbar of shame.
  • Reduced motion toggle: Animation compliance, or more likely, the complete absence of reduced motion support.

Build all five into your development process. Not as a final gate before launch, because by then every fix involves working around six months of code that was built on top of the broken pattern. A 5-minute fix during development becomes a 3-hour archaeology expedition after launch. We have done enough of both to know which one we prefer.

The Payoff

Accessible React components are not harder to build. They are built with a wider definition of "user" in mind. Semantic HTML, keyboard navigation, focus management, screen reader announcements, motion preferences. None of these are advanced concepts. They are just concepts that most tutorials skip because they require more thought than aria-label="close".

The result: components that work for everyone, pass any audit, and honestly just feel better to use. Not because you added accessibility on top. Because you thought about every interaction path from the start, not just the mouse-and-monitor demo that looks great in a standup but falls apart the moment a real user touches it.

Build it right the first time and you never have to go back and fix it. Build it wrong and you will absolutely go back and fix it, except now it costs five times as much and your PM is asking why accessibility remediation was not in the original estimate. It was not in the estimate because nobody brought it up. Now everyone is bringing it up. That is usually how it goes.