Author Archives: Charles Myers

Accessibility Metadata in action at Teachers’ Domain

As the proposal receives more attention, there have been numerous questions about the utility of mediaFeature and accessMode, both for searching for accessible content and for making use of it. These questions are best answered with a demonstration. It turns out that WGBH built a repository of learning resources tagged with LRMI and Accessibility Metadata information (the database was built with the original AccessForAll tags). Madeleine Rothberg, Project Director for the WGBH National Center for Accessible Media, recorded a short video (5:19) that shows the accessibility metadata in use. You can watch the video (yes, it is captioned, and there is a transcript at the end of this page) and see the following major points:

Accessibility Metadata at WGBH Teachers' Domain

  • 00:30 An example search, where the accessibility properties (mediaFeature) are displayed.
  • 01:20 A new situation is introduced in which the user can't hear, whether because the computer lab has no speakers or because of deafness. Preferences are set to reflect this.
  • 01:57 Now that the profile is known, the search results show whether the content is accessible to that user or not.
  • 03:10 Shows that an animation with no audio is flagged as accessible to this user even though it doesn’t have captions.
  • 04:15 Search filters let you search on specific mediaFeatures.

I think you’ll agree that this information makes accessible content easier to find.

Their plans for future implementation would allow similar searches without saved preferences, by offering search filters that replicate the kind of personalized search the preferences allow: for example, finding resources that either have no audio content or have all auditory content adapted to other accessModes, so that only those resources would display. The magic that makes this happen is the accessibility metadata. I've included code snippets below showing what these tags would be if the data were tagged today. This demo was done with encodings that were predecessors to LRMI and Accessibility Metadata; I have translated the syntax and names to our proposed names.


<div itemscope="" itemtype=""><meta itemprop="accessMode" content="visual" />
<meta itemprop="accessMode" content="auditory" />
<meta itemprop="mediaFeature" content="captions" />
<span itemprop="name">The Structure of DNA</span>
<meta itemprop="about" content="DNA" />
<meta itemprop="keywords" content="National K-12 Subject" />
<meta itemprop="keywords" content="Science" />
<meta itemprop="keywords" content="Life Science" />
<meta itemprop="keywords" content="Genetics and Heredity" />
<meta itemprop="keywords" content="Molecular Mechanisms of DNA" />
<meta itemprop="learningResourceType" content="Movie" />
<meta itemprop="inLanguage" content="en-us" />
<meta itemprop="typicalAgeRange" content="14-18+" /></div>

For the silent video, you can see that the accessMode is now only visual, and that captions is no longer listed as an adaptation (mediaFeature), since there was no auditory accessMode to adapt; the third and fourth lines of the sample tags disappear.

<div itemscope="" itemtype=""><meta itemprop="accessMode" content="visual" />
<span itemprop="name">DNA Animation</span>
<meta itemprop="about" content="DNA" />
<meta itemprop="keywords" content="National K-12 Subject" />
<meta itemprop="keywords" content="Science" />
<meta itemprop="keywords" content="Life Science" />
<meta itemprop="keywords" content="Genetics and Heredity" />
<meta itemprop="keywords" content="Molecular Mechanisms of DNA" />
<meta itemprop="learningResourceType" content="Movie" />
<meta itemprop="inLanguage" content="en-us" />
<meta itemprop="typicalAgeRange" content="14-18+" /></div>
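To make the matching concrete, here is a minimal Python sketch of the accessible/inaccessible decision the demo makes. The function name and the adaptation table are my own illustrative assumptions; only the property values (accessMode, mediaFeature) come from the proposal.

```python
# Sketch of the accessibility check shown in the demo (a hypothetical
# helper, not part of the Accessibility Metadata proposal itself).
# A resource suits a user when every accessMode the user cannot
# consume is either absent or covered by an adaptation (mediaFeature).

# mediaFeatures that can adapt a given accessMode (illustrative subset)
ADAPTATIONS = {
    "auditory": {"captions", "transcript", "signLanguage"},
    "visual": {"audioDescription", "longDescription", "alternativeText"},
}

def is_accessible(access_modes, media_features, unusable_modes):
    """True if each accessMode the user can't use has an adaptation."""
    for mode in access_modes:
        if mode in unusable_modes:
            if not ADAPTATIONS.get(mode, set()) & set(media_features):
                return False
    return True

# "The Structure of DNA": has audio, but captioned -> accessible
print(is_accessible({"visual", "auditory"}, {"captions"}, {"auditory"}))  # True
# "DNA Animation": no auditory content at all -> accessible as-is
print(is_accessible({"visual"}, set(), {"auditory"}))  # True
# An uncaptioned video with audio would be flagged as inaccessible
print(is_accessible({"visual", "auditory"}, set(), {"auditory"}))  # False
```

This is why the silent animation above passes the check without captions: there is no auditory accessMode left unmet.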

The transcript of the above video can be found below:

This is a demonstration of accessibility features in Teachers’ Domain.  Teachers’ Domain is a digital media library for teachers and students.  It’s being transitioned to a new site, called PBS Learning Media, where the accessibility features may be a little bit different.  So I wanted to show you how these accessibility features work right now in Teachers’ Domain.

If I do a search in Teachers’ Domain on a general topic, like DNA, I get back hundreds of results. Some of them have accessibility information available, and some do not.  So, for example, here is a video that offers captions, so it’s labeled with having the accessibility feature of captions.  And if I play that video, I’d be able to turn the captions on.  Here are a lot of other resources that don’t have any accessibility information. Here’s one. It’s a document that includes the accessibility feature long description.  So that means that images that are part of the document have been described in text thoroughly enough that a person who can’t see the images can make use of the document.

So this amount of information is really useful.  If you’re scanning a bunch of results, you can look and see which accessibility features are offered for which videos.  But it doesn’t give you the full story.  For example, if you are a person who can’t hear, or you’re using video in a classroom without speakers, you might think that this video, called “Insect DNA Lab,” wouldn’t be useful to you because it doesn’t list that it contains captions.  But what you can’t tell from this piece of information is that that video doesn’t have any audio at all, so it’s perfectly suited for use without audio, because there isn’t any.  So in order to extract that kind of detail about how well different resources meet the needs of a particular teacher or students, we can set the features in the profile.  So we go to My Profile, and scroll to bottom where there are Accessibility Settings. And right now none of the accessibility settings are set, and that is why we aren’t getting custom information. I set my accessibility preferences to indicate that I need captions for video and audio, and when transcripts are available, that I’d like those too. Now I’ve got a set of accessibility preferences that match the needs of a person who can’t hear, or can’t hear well, and might also match the needs of a school where there are not speakers on the computers in the computer lab.

So now if I repeat that same search for material on DNA, my resources that don’t have any Accessibility Metadata look just the same.  But the resources that have Accessibility Metadata start to pop up with more information.  So this resource, “DNA Evidence” is an audio file, and there are no captions or transcripts available. It’s labeled as inaccessible to me.  This video, which we had already noted has captions, now has a green check mark and says that it’s accessible.  Similarly, this interactive activity that doesn’t have any audio in it is fine. This DNA animation video that doesn’t have any audio is listed as accessible to me.  So now with a combination of the metadata on the resources and my own personal preferences recorded in my profile, I can actually get much better information as I look through the set of search results about which resources will suit me best. And when it works properly in the player, you can also use this feature to serve up the video automatically with the right features turned on.  So for example, if you look at this video about the structure of DNA, we know that it has captions and when we view the video those captions should come on automatically.

Another way that you can use the information in Teachers’ Domain is with the search filters. Here there’s a set of accessibility features that are listed for my search results.  So in all of the search results about DNA, for which there are 202, I can see quickly here that five of those offer audio description, which is an additional audio track for use by people who are blind or visually impaired who can’t see the video, but can learn a lot from the audio. There are 76 resources that have captions, 13 with transcripts, and 8 that offer long description of images, so that those static images can be made accessible to people who can’t see them.  So the faceted search is another way to quickly find resources if you know you are looking for a particular kind of resource, like one that has captions, or one that has descriptions.  But the additional benefit of the accessibility checkmarks is that they alert you to resources that are accessible to you whether because they have a feature you especially need, like captions, or because they don’t have any audio in the first place and they don’t pose a barrier to your use.

Join the discussion in progress concerning this Accessibility Metadata proposal

There’s nothing like a return from summer and a Dublin Core metadata conference, where people are face to face in the same location, to get people reading and paying attention to our specification.

Once people read the specification and avail themselves of the more detailed Resources section and other content on the website, comments come to mind. Until now, many people have kept those comments to themselves. No longer! The email traffic on our Google group spiked yesterday, as did the W3C public vocabs list. It began with an email from George Kerscher of DAISY and IDPF, which sparked emails by Charles McCathie Nevile, which became public on the list in a reply from Gerardo Capiel. Notes of support from CEDS and ISO helped increase interest as well. You can join the thread from there.

Get involved now. If you've been dozing on the sidelines over the summer, or had an opinion but thought "I don't know if anyone cares," this is the time to be heard. Join the discussion on our Google group or the W3C public vocabs list.

Your critical skills and support are appreciated in advance. If you want to tell others about this, use the shortlink

Erlend Øverby of ISO/IEC JTC 1/SC 36 sends support of Accessibility Metadata to the W3C

[Image: the letter from ISO/IEC, including logo and signature]

As work on the accessibility metadata vocabulary finalizes and our submission to schema.org receives more attention, notable individuals, companies, and standards groups are expressing their support. This paper letter from the chair of the technical committee for information technology for learning, education and training at ISO/IEC was sent to one of our working group members, Jutta Treviranus, to forward to the appropriate W3C mailing list. This blog gives a place for this information to be held and referenced in the email to the public vocabs list on behalf of Mr. Øverby. The letter follows.

Dear W3C members,

As Chair of ISO/IEC JTC 1/SC 36 I wish to express my strong support for the proposal to include accessibility metadata based on the AccessForAll or ISO/IEC 24751 standard within schema.org. This highly successful multi-part standard is currently undergoing its first revision. The metadata expressed in the standard fills a critical gap for resource discovery and identification by anyone requiring alternative access systems or alternative resource presentations. The fact that the proposal is based on an established international standard supports interoperability and avoids fragmentation. We look forward to working with W3C to ensure that the proposal is successful and that the proposed additions meet their important goal.

Erlend Øverby
Chair ISO/IEC JTC 1/SC 36

The scanned copy of this letter, as we received it, is available in PDF format.
The ISO/IEC 24751 standard can be found at the Fluid project website.

This letter can be seen in the W3C public vocabs list as well.

Bookshare tags over 195,000 titles with accessibility metadata

Bookshare, the world’s largest online library of accessible eBooks for people with print disabilities, has just added accessibility metadata to its full library of over 195,000 titles. “We’ve had the information on content type, age ranges and available media forms locked away in our database, only useful to the most skilled internet searchers or through the search via our website or API. Now, with the draft accessibility metadata semantic tags added, powered by microdata and schema.org, we can make this available in a manner that search engines can more easily find,” said Gerardo Capiel, VP of Engineering at Benetech. For example, the proof-of-concept Google Custom Search query below for history textbooks can be constrained to titles whose images have been enhanced with rich descriptions by Bookshare volunteers:

[Screenshot: Google Custom Search Engine]

You can do this yourself with two searches. The first searches Bookshare for any titles involving history: it returns over 7,000 hits. The second, constrained by mediaFeatures to just the titles that have alternative text for images or, better yet, long descriptions, returns just 14 higher-value items.
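For a sense of how these tags become machine-findable, here is a small Python sketch that reads the accessibility meta tags out of a Bookshare-style page using the standard library's html.parser. It is an illustration only; real search engines use full microdata extractors, and the page fragment here is invented.

```python
# Illustrative microdata reader (not how Google actually indexes pages):
# collect itemprop/content pairs from <meta> tags in an HTML fragment.
from html.parser import HTMLParser

class MetadataReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.props = {}  # itemprop name -> list of content values

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and "itemprop" in a and "content" in a:
            self.props.setdefault(a["itemprop"], []).append(a["content"])

# A made-up fragment in the style of the Bookshare pages described above
page = """
<meta itemprop="accessMode" content="textual"/>
<meta itemprop="mediaFeature" content="alternativeText"/>
<meta itemprop="mediaFeature" content="longDescription"/>
"""
reader = MetadataReader()
reader.feed(page)

# A search restricted to titles with described images would keep this one:
print("longDescription" in reader.props.get("mediaFeature", []))  # True
```

Once the properties are pulled out like this, constraining a search to a given mediaFeature is a simple membership test.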

It was easy to scale this metadata conversion across all of Bookshare’s titles. Because these are database-generated web pages, no editorial work on each page was required; the task was simply adding a few more tags or attributes around existing content. For example, for “World History: Ancient Civilizations” above, elements like the title went from

<h1>World History: Ancient Civilizations</h1>

to

<h1 itemprop="name">World History: Ancient Civilizations</h1>

Other tags, such as

<meta itemprop="accessMode" content="textual"/>
<meta itemprop="mediaFeature" content="structuralNavigation"/>
<meta itemprop="mediaFeature" content="alternativeText"/>

were also added to describe the specific accessibility features of the book. More information on how to add accessibility metadata to your accessible content can be found on the Resources page.
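As a rough sketch of that database-driven approach, the following Python generates the marked-up fragments from a record. The record's field names (title, access_modes, media_features) are hypothetical and not Bookshare's actual schema; the point is that the microdata is emitted mechanically from data already in the database.

```python
# Hypothetical template helper: wrap existing content with itemprop
# attributes and emit extra <meta> tags from fields in the record.
from html import escape

def microdata_for(book):
    lines = [f'<h1 itemprop="name">{escape(book["title"])}</h1>']
    for mode in book["access_modes"]:
        lines.append(f'<meta itemprop="accessMode" content="{mode}"/>')
    for feature in book["media_features"]:
        lines.append(f'<meta itemprop="mediaFeature" content="{feature}"/>')
    return "\n".join(lines)

book = {
    "title": "World History: Ancient Civilizations",
    "access_modes": ["textual"],
    "media_features": ["structuralNavigation", "alternativeText"],
}
print(microdata_for(book))
```

Running this over every row of the catalog is what made tagging 195,000 titles a one-time engineering task rather than an editorial one.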

Accessibility Metadata Best Practices Guide draft available

A few major milestones have been achieved in the past week. Our best practices guide, weighing in at 33 pages in its v.5 version, is now available on the Resources page. If you have a question on markup methods for the accessibility metadata, this is the definitive resource. Questions or requests for clarification should go to our Google Group list, available on the Discuss page. I’ll note that the document is quite complete and extensive, but it remains in draft primarily because the specification to schema.org is still in draft.

As we get closer to our first real implementations and available data, we do have some live examples coming online. The first example, from Bookshare, can be seen on the live examples page.