Short Paper
A common vocabulary to facilitate the exchange of web accessibility evaluation results
Web accessibility evaluations differ significantly from document validations: they are not based on determining conformance with formal grammar specifications but typically follow rule-based methodologies that encompass sequences of atomic tests to determine conformance to guidelines. Some of these tests cannot be automated by current computer technology and require human judgment to determine their results, and even for tests that are automatable, evaluation tools often vary considerably in their coverage and reliability. These differences in tool performance are probably unavoidable in practice; however, they cause notable discrepancies in the efficiency and quality of conformance evaluations. This paper discusses the practical implications of the Evaluation and Report Language (EARL), a vendor-neutral vocabulary that facilitates the aggregation of test results generated by different evaluation tools in order to maximize the benefit from their respective features. The paper also highlights the importance of this language to the objectives of the Dublin Core Metadata Initiative.
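To make the idea of a vendor-neutral result vocabulary concrete, the following is a minimal sketch of what a single evaluation result might look like when expressed as an EARL assertion, built here with the Python rdflib library. The term names (earl:Assertion, earl:assertedBy, earl:outcome, and so on) follow the published W3C EARL schema; the tool, page, and test URIs are invented placeholders for illustration, and the paper itself may target an earlier draft of the vocabulary.

```python
# Minimal sketch of one EARL assertion: a hypothetical tool reports
# that an example page fails one atomic accessibility test.
from rdflib import Graph, Namespace, URIRef, BNode
from rdflib.namespace import RDF

EARL = Namespace("http://www.w3.org/ns/earl#")

g = Graph()
g.bind("earl", EARL)

assertion = BNode()  # one result reported by one tool
result = BNode()

g.add((assertion, RDF.type, EARL.Assertion))
g.add((assertion, EARL.assertedBy, URIRef("http://example.org/tools/checker")))   # placeholder tool
g.add((assertion, EARL.subject, URIRef("http://example.org/page.html")))          # placeholder page
g.add((assertion, EARL.test, URIRef("http://example.org/tests/text-equivalents")))# placeholder test
g.add((assertion, EARL.mode, EARL.automatic))
g.add((assertion, EARL.result, result))
g.add((result, RDF.type, EARL.TestResult))
g.add((result, EARL.outcome, EARL.failed))

print(g.serialize(format="turtle"))
```

Because each assertion is plain RDF, reports produced by different tools can be aggregated by a simple graph merge, which is the interoperability property the abstract alludes to.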
DOI: 10.23106/dcmi.952108192