Data visualization and storytelling


New screen-reader settings to make your visualizations more accessible

You can now add a text-based alternative to the visual aspect of your Flourish graphic

This year has highlighted the critical role that data visualization can play in informing society. Most of us have been consuming charts, maps and tables about the coronavirus crisis daily with our cup of coffee, improving our data literacy skills along the way. But as visualization becomes ever more important, so too does the need to make sure the graphics we publish are usable by as many people as possible.

Online accessibility is a huge area, and lots of organizations build high accessibility standards into their web publishing. But in the world of interactive data-driven content, accessibility is a relatively new field. We’re determined to innovate in this area, and today, we’re excited to announce a step on that journey: screen-reader settings to provide a text alternative to visual elements.

What a sighted user sees

What a screen reader sees

How it works

All our core visualization templates now come with built-in screen-reader descriptions to offer a better alternative to the visual components. By default, titles, subtitles, and footers are already screen-reader friendly. They’re read aloud either upon user interaction or when assistive technology navigates through a webpage featuring our embedded graphics.

If you set your main visualization container to be readable, screen readers will attempt to read the content inside it. Importantly, the screen-reader description for the main visual container will always be read out before the content inside the container is accessed. Setting a screen-reader description and also making the main visualization container readable might be a suitable option for text-based visuals like a Cards grid, or for visualizations with a limited number of data points that are well labeled with adequate aria-labels and focus features. You can find a list of compatible templates that meet these criteria here.
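To illustrate what “well labeled with adequate aria-labels and focus features” can mean in practice, here is a minimal sketch of a readable SVG bar chart. The markup, element names, and label text are illustrative assumptions, not Flourish’s actual output:

```html
<svg role="group" aria-label="Bar chart of daily cases">
  <!-- Each data point is keyboard-focusable and carries a
       human-readable label for screen readers -->
  <rect tabindex="0" aria-label="Monday: 120 cases"
        x="0" y="40" width="20" height="60"></rect>
  <rect tabindex="0" aria-label="Tuesday: 95 cases"
        x="30" y="55" width="20" height="45"></rect>
</svg>
```

With markup like this, a screen reader can move focus from bar to bar and announce each label, rather than skipping the graphic entirely.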

Set a screen-reader description for the main visual container

We’ve added new functionality that enables a dual experience: a readable and keyboard-navigable main graphic along with screen-reader descriptions. This gives you the flexibility to make your data points and axis labels accessible, while also providing a separate screen-reader description that supplements the visual information.

We’ve enhanced the user experience further by incorporating additional aria attributes, refining how screen readers interpret Flourish visualizations.

With these updates, you have more control over the accessibility of your visual content, ensuring it’s not just visually appealing but also universally accessible.

Under the hood, the text is added to the page as a hidden paragraph, which is exposed to screen readers via an aria-describedby attribute on the main visual container, while the contents of that container get an aria-hidden attribute.
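The resulting markup looks roughly like the following sketch. The ids and class names here are illustrative assumptions rather than Flourish’s actual generated output:

```html
<!-- Main visual container points at the hidden description -->
<div id="visualization" aria-describedby="sr-description">
  <!-- The visual contents are hidden from assistive technology -->
  <div aria-hidden="true">
    <!-- chart SVG, legends, tooltips, etc. -->
  </div>
</div>

<!-- Visually hidden paragraph, still readable by screen readers -->
<p id="sr-description" class="visually-hidden">
  Line chart of daily coronavirus cases, showing a sharp peak in April
  followed by a gradual decline.
</p>
```

Because the paragraph is hidden with CSS rather than with aria-hidden or display: none, sighted users never see it, but assistive technology reads it when the container receives focus.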

As part of this update, we’ve also added a title attribute to the iframes that visualizations are embedded in, to differentiate their content as “visual or interactive content”.
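An embed with such a title might look like this (the embed URL is a placeholder, not a real visualization):

```html
<iframe src="https://flo.uri.sh/visualisation/XXXX/embed"
        title="Interactive or visual content"></iframe>
```

The title gives screen-reader users a hint about what kind of content the frame contains before they navigate into it.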

How to write a good screen-reader description

Whether you’re designing for sighted or non-sighted users, the main title and subtitle should usually be descriptive, summarizing the main point of the graphic rather than just describing what data is being visualized. As your titles and subtitles in Flourish visualizations will always be read out by screen readers, it doesn’t make sense to repeat these in the screen-reader description.

Instead, use the screen-reader text to provide an alternative to the visual aspect of the graphic specifically. What type of chart is it? Which aspects or values are the most important and stand out? Try to include all the key details while keeping the message as short as possible.

Lots more to do

Screen-reader text is only one part of the accessibility picture. There’s lots more to consider, from text sizing and color contrast to keyboard navigability and giving access to the underlying data for non-visual exploration, as shown in the example above via a simple link in the footer.

We have a long way to go, and as we work more in this area we’d love to hear your suggestions and feedback. Feel free to get in touch via Twitter or