Monitoring Accessibility: Large Scale Evaluations at a Geo-Political Level

Silvia Mirri
Ludovico A. Muratori
Paola Salomoni
Department of Computer Science, University of Bologna
Via Mura Anteo Zamboni 7, 40127 Bologna (BO), Italy
+39 0547 33813
[email protected]
ABSTRACT
Once we assume that Web accessibility is a right, we implicitly state the necessity of governing it. Beyond any regulation, institutions must equip themselves with suitable tools to control and support accessibility in typically large scale scenarios of content and resources. The economic impact and effectiveness of these tools undoubtedly affect the resulting accessibility level. In this paper we propose an application to effectively monitor Web accessibility from a geo-political point of view, by relating resources to the specific (categories of) institutions in charge of them and to the geographical areas they address. Snapshots of such a macro-level, spatial and geo-political analysis can be used to focus investments and skills where they are actually needed.
Categories and Subject Descriptors
H.5.4 [Information Interfaces and Presentation]: Hypertext/Hypermedia – User issues; K.4.2 [Computers and Society]: Social Issues – Assistive technologies for persons with disabilities; Handicapped persons/special needs.
General Terms
Measurement, Design, Human Factors.
Keywords
Web Accessibility, Automated Evaluation, Accessibility Evaluation, Monitoring Accessibility.
1. INTRODUCTION
Nowadays the dynamism and breadth of the Web are critical parameters whenever content and services become the target of some control or a regulation has to be applied to them. This is the case of Web accessibility compliance with the national laws some countries have enacted since the late '90s. The number of Web pages and services to be evaluated may actually be prohibitive, even when the assessment is restricted to a particular public institution. Many evaluation activities have been conducted in recent years, focusing on different realms, from the Riga Dashboard (involving 34 European countries [7]) to local monitoring actions. All these evaluations have typically been performed sporadically, on a sample of Web sites small enough to be checked manually. Macro-level analysis of Web site accessibility and of its geo-political localization requires a different approach. It is worth noting that these points of view are strategic with respect to investments and resource savings by public administrations and institutions.
To actually measure the accessibility of a Web page, an in-depth evaluation is required, which must also include manual controls. Although guidelines and regulations about accessibility have been available worldwide for many years, sites are still afflicted with automatically detectable errors, which can completely compromise their accessibility. Automatable assessments save time and resources, thereby opening the evaluation to larger sets of URLs and allowing more frequent controls. Moreover, automatic monitoring of Web accessibility, performed over time and according to some classification (e.g. spatial, geo-political or by role) of the controlled institutions, would greatly support a deeper manual evaluation. A Web site accessibility monitoring application can be really effective in such a context, since it supports human operators in evaluating a large amount of Web pages.
In order to be effectively used in a wide range of monitoring actions, an automatic monitoring application needs to be tailored along different dimensions, such as: (i) the sets of URLs (and their related classification criteria) to be controlled; (ii) the guidelines and requirements (or subsets of them) to be checked; (iii) the frequency of evaluations; (iv) the spatial and geo-political reports and analysis of results.
The design and development of such a tool raise some open issues. First of all, evaluations must be as exhaustive and in-depth as possible. This means that the monitoring system should be based on an accurate evaluation tool which is not prone to false positives and negatives. In turn, the evaluation tool should provide controls based on the largest variety of accessibility guidelines and regulation requirements. Finally, it should maximize the number of checks which can be conducted automatically. Indeed, available automatic validation tools do not perform some complex checks. As an instance, none of the existing tools
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ASSETS'11, October 24–26, 2011, Dundee, Scotland, UK. Copyright 2011 ACM 978-1-4503-0919-6/11/10...$10.00.
completely analyzes XHTML and CSS code in order to verify the color contrast of a whole Web page by exploiting the cascading characteristics of style sheets and the inheritance of their rules. In fact, available color contrast analyzers usually evaluate single pairs of background and foreground colors, which have to be manually specified by users. Further complex checks, such as the correct use of the heading structure, should also be performed by automatic tools.
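As a reference for what a single contrast check computes: WCAG 2.0 defines the contrast ratio between two colors in terms of their relative luminance. The sketch below is ours, for illustration only; it is not taken from any of the tools discussed in this paper.

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (0-255 channels), per WCAG 2.0."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG 2.0 contrast ratio, ranging from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Success criterion 1.4.3 (level AA) requires at least 4.5:1 for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

The hard part, as noted above, is not this formula but resolving which foreground/background pair actually applies to each textual element once CSS cascading and inheritance are taken into account.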
On the one hand, many efforts have been devoted to evaluate-and-repair tools focusing on single URLs or pages. On the other hand, an approach giving a snapshot of an arbitrary set of URLs, according to suitable criteria, has not been pursued yet. This macro-level point of view could be useful to Web site commissioners and public institutions in managing investments and resources. Summing up, a "resolution-independent" system – i.e. one integrating a big picture with in-depth analysis capabilities – would be the best solution in such a context.
In this paper we present an application (named AMA, which stands for Accessibility Monitoring Application) designed and implemented to gather the accessibility status of large collections of Web sites according to different guidelines and regulations (including WCAG 2.0 [18], U.S. Section 508 [15] and the Italian Stanca Act Requirements [10]). The tool is part of the VaMoLà project [1], [12], born from a collaboration between the Emilia-Romagna Region and the University of Bologna. Both institutions are Italian Public Administrations and their Web sites have to comply with the Stanca Act [11]. This law imposes that Italian public Web services and information be accessible, that employees with disabilities be provided with adequate assistive technologies and accessible applications, and that public procurement of ICT goods and services always take accessibility into account [11].
AMA periodically validates a set of URLs (by means of different automatic evaluation tools) and reports the accessibility level of the Web sites they point to. Obviously, only a subset of the guidelines and requirements is fully automatable. In the current official version of AMA, checks that require manual controls and human judgment are not taken into account. Nevertheless, a prototype devoted to collecting, recording and reporting manual evaluation results has been designed and developed and is going to be integrated into AMA; some of its features are described in the following. AMA gathers data which clearly depict a significant portion of accessibility barriers, based on certain errors (e.g. missing alternatives to images, controls without labels, insufficient color contrast). Hence, our monitor provides a meaningful and structured screening of the accessibility level and trend, by showing the number of detected errors. In fact, AMA is not meant to evaluate and repair single Web sites: its goal is to capture a snapshot of a situation. It is worth mentioning that URLs which fail automatic evaluation are certainly inaccessible, while those which pass it may be accessible; hence automated controls provide an optimistic accessibility evaluation. Currently, the Emilia-Romagna Region and other public institutions are using AMA for their periodical Web site monitoring activities.
In the field of accessibility monitoring some tools are available, but all of them are devoted to monitoring single Web sites. Moreover, most of these applications are paid services provided by software houses. The different aim (evaluating and repairing a single Web site vs. providing accessibility evaluation from macro-level and geo-political points of view) and the different level of result presentation (a report about a single Web site vs. reports about large scale accessibility evaluations) make our system novel and unique in this context.
The remainder of this paper is organized as follows. Section 2 (Background and related work) introduces the main related work, with the aim of comparing it with our system. Section 3 (Design issues) describes the main goals of the project and the system features, while Section 4 (Architecture) depicts the whole system, its components and configuration issues. Section 5 (System at a glance) presents the main features of an instance of the system at work (analyzing some European public institution Web sites) and some evaluation results. Finally, Section 6 (Conclusions) closes the paper with some final considerations and future work.
2. BACKGROUND AND RELATED WORK
Thus far, several national and international studies on Web accessibility have been conducted [2]. Some of them are devoted to identifying and monitoring the accessibility level of different kinds of Web sites (e.g. governmental, public administration, higher education) according to national or international guidelines. Such studies are mainly based on accessibility monitoring activities conducted by human operators (supported by automatic and/or semi-automatic evaluation tools). Hence they are time-consuming activities that cannot involve a large set of Web pages and cannot be performed often and regularly. In the following we present some international works dealing with the accessibility evaluation and monitoring of public administration and governmental Web sites, summarizing their main results. In 2006, the United Nations commissioned the "United Nations Global Audit of Web Accessibility" [13]. In that study 100 Web site homepages were evaluated, in order to give an indication of the existing level of Web accessibility. The homepages were chosen from twenty countries around the world, five for each country; one of them was from the public administration (very often the main governmental Web site home page was selected). The study was conducted by testing only the homepage of each Web site, using both automated and manual techniques, according to the WCAG 1.0 AAA conformance level [17]. Each checkpoint was given a pass, fail or not-applicable status. The resulting report presents a detailed analysis for each country and shows that only 3% of the analyzed Web sites (all from the European public sector) met WCAG 1.0 level A, while none of the sites was WCAG 1.0 level AA or AAA compliant. A tool like our system would make a similar but wider and deeper study possible.
In particular, our application can check a larger set of homepages and allows wider and more detailed evaluations, by automatically controlling other pages of each Web site as well. Its Web interface can provide the same kind of comparisons shown in [13], from a geographical point of view, and can also show how they change over time. In addition, it provides evaluations based on different guidelines, not only WCAG 1.0.
In 2007, the "Assessment of the Status of eAccessibility in Europe" study was commissioned by the European Community [5]. The main aim of this work (called MeAC, which stands for "Measuring progress of eAccessibility in Europe") is to follow up on previous research and support the future
development of European policy in the field of eAccessibility. A certain number of Web sites was chosen from each EU member state, plus the USA, Canada and Australia. Overall, 336 public and private Web sites were evaluated using a combination of automated and manual testing procedures to assess WCAG 1.0 level A. In particular, 25 pages were collected from each site with an automated retrieval process and then analyzed both automatically and manually. In a few countries the majority of the tested public Web sites met the standards, while in most of them none of the sites did. In order to track the evolution of accessibility, a similar follow-up study was conducted one year later, involving only 10 of the original 28 countries (9 EU member states plus the USA) [4]. The results show that the situation was generally improving, but only the accessibility of Web sites in the USA had significantly increased. Another study was conducted in 2009 with the aim of analyzing the level of compliance with WCAG 2.0 in the European countries [3]. Its results report that none of the analyzed Web sites achieved WCAG 2.0 compliance, and none of them achieved full WCAG 1.0 compliance (taking into account both automatic and manual testing). Also in this case, the use of AMA would make a similar study possible, with some additional features, such as comparisons of results along both geographical and temporal dimensions. Our system would also allow evaluations to be conducted on the basis of different guidelines and requirements.
Another study about accessibility monitoring of European Web sites was conducted by the European Internet Accessibility Observatory (EIAO) [2] in 2008. The EIAO is an implementation of the automated monitoring application scenario of UWEM (Unified Web Evaluation Methodology) [16]. UWEM describes methods for the collection of evaluation samples, test procedures according to WCAG 1.0 level AA, and several different reporting options. In [2] a comparison between the EIAO results and the MeAC ones is presented: there is a positive trend in the overall level of accessibility of European governmental Web sites, even if it remains rather poor.
Finally, in [14] a large-scale study of Web accessibility is presented. The study was conducted over a set of Portuguese Web sites with the aim of assessing Web page quality according to 39 checkpoints from WCAG 1.0 (priorities 1 and 2). It confirmed that simpler and smaller Web pages tend to have a higher accessibility quality, even if the communication of accessibility, as well as the diffusion of an accessibility culture, must be improved. The authors describe this study as ongoing work, since they plan to evaluate the same set of URLs in the next years (in order to study the evolution of compliance with accessibility guidelines through temporal comparisons) and to report results for some specific Web site categories.
With respect to the studies mentioned above, our system provides accessibility evaluation from macro-level and geo-political points of view. This could be strategic for Web site commissioners and public institutions in managing investments and resources. In addition, our system provides more features, allowing evaluations on the basis of different guidelines and requirements, and comparisons of results based on different criteria, such as geographical and temporal dimensions. This means that, for instance, the same evaluation of the same set of URLs can be conducted in two different periods and the results compared in order to show accessibility improvements or worsening. In other words, our system supports different kinds of accessibility evaluation (e.g. of a large set of URLs, in-depth evaluation of a single Web site, comparisons from different points of view, and so on), with great benefits in terms of effective interventions by public institutions.
3. DESIGN ISSUES
The main goal of AMA is storing and providing data related to the accessibility evaluation of large sets of URLs, in order to support macro-level, spatial and geo-political analysis. Public administrations and institutions ought to periodically perform large scale accessibility evaluations of their Web sites, which may fall short in terms of depth, breadth and frequency because of the large amount of resources required. In order to effectively support this process, AMA repeatedly performs automatic checks of the URLs stored in its database and presents analysis results by means of a Web based interface. In such an application:
A. Automatic evaluation and storage of data are performed periodically by the system, without any human activity: the system checks all the pages in the database and stores the results into it.
B. Quantitative syntheses of evaluation results can be extracted from the database, either by browsing the Web interface or through direct queries.
Periodical evaluations (A) make it possible to outline the macro-level accessibility trend for each URL in the database. Thanks to automatic storage, AMA can evaluate pages against several sets of guidelines or requirements, including WCAG 1.0 [17], WCAG 2.0, Section 508 and the Stanca Act. We have chosen WCAG 2.0 because they are the most recent W3C guidelines. Evaluations according to the Stanca Act Requirements have been included because AMA is currently used by some Italian Public Administrations (in particular, by the Emilia-Romagna Region) to monitor Web sites under their authority (e.g. Provinces, Municipalities, schools, public health institutions). Through the Web interface users can also choose to check against the WCAG 1.0 guidelines and the Section 508 regulation. Evaluations according to other regulations (e.g. the German BITV) could be added, since AMA is extensible. Obviously, a wide check requires a strong computational effort and can consequently take a long evaluation time for each URL. Moreover, the larger the set of evaluated guidelines, the bigger the database storing the results.
Complexity and database size are also related to the number of evaluated Web sites. Furthermore, database size depends on the frequency of evaluations, which is configured by the system administrator (by means of a suitable Web interface). Finally, for each URL, AMA can evaluate one or more pages, going in depth from the home page. A configuration file (defined by the system administrator) sets the level of the pages AMA has to evaluate (the first level is the home page, the second level consists of all the URLs directly linked from the home page, and so on). Through the configuration file it is also possible to set a maximum number of pages to evaluate per Web site.
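The depth and page-cap configuration just described amounts to a bounded breadth-first crawl. The sketch below is illustrative (AMA's actual page-selection code is not published); `get_links` stands for any function returning the URLs linked from a page.

```python
from collections import deque

def pages_to_evaluate(home_url, get_links, max_level=1, max_pages=100):
    """Collect URLs breadth-first: level 0 is the home page, level 1 the
    pages it links to, and so on, stopping at max_level or max_pages."""
    seen = {home_url}
    queue = deque([(home_url, 0)])
    selected = []
    while queue and len(selected) < max_pages:
        url, level = queue.popleft()
        selected.append(url)
        if level < max_level:
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, level + 1))
    return selected
```

With `max_level=0` only the home page is evaluated; the `max_pages` cap mirrors the per-site limit settable in the configuration file.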
For each evaluation (A), the system saves the following data into the database:
the evaluation date;
the evaluated URL;
for each evaluated guideline: the number of errors generated by the evaluation system, the number of manual controls to perform (as suggested by the evaluation system), the number of potential errors detected by its heuristics, and the number of checks performed for each kind of error.
The system supports geo-politically referenced URLs, with the aim of spatially exploring (B) the data in the database. This feature is specifically designed to support quantitative synthesis in geographically distributed administrations. URLs and their geographical and geo-political data have to be manually inserted into the database by admin users through a specific Web interface. URLs can be categorized into groups of interest, such as government, municipality, region, province, university, health board, etc. AMA is a multi-user application, so users access it by logging in with username and password. End-users can create and exploit accessibility reports (B) according to options they set. Setting options is a fundamental element of the AMA interface. The Options Menu allows setting some preferences (related to the Web sites to evaluate and to the evaluation itself) so as to customize the reports. In particular, it is possible to select:
the guidelines and requirements against which the conformity of the checked URLs is verified;
the categories and sub-categories of the Web sites to evaluate;
a previous date, in order to compare the corresponding accessibility level with the current one;
the geographical area to monitor.
Once the above elements have been set, a complex set of queries can be performed on the database. Evaluation data are provided through tables and through a mash-up with Google Maps.
The data presented to users through the Web interface are the percentage of pages without any error (for the former and the current evaluation) and the average number of errors detected per page (for the former and the current evaluation). In addition, for each analyzed group of sites, the AMA Web interface shows the number of checked pages, the number of checked pages without any error, the percentage of pages without any error, the total number of errors and the average number of errors detected per page.
AMA has been developed in PHP, and the component which interacts with end-users generates XHTML 1.0 Strict Web pages. Data are stored in a MySQL database. An instance of AMA is available at this URL: http://polo-rer.polocesena.unibo.it/v3ama_bif (username: guest, password: guest). It has been customized as follows:
it uses a database storing the main URLs related to the main European public institutions (154 Web sites, about 5 for each European country plus some European Union ones, including the Web sites of National Parliaments, Governments, Presidents and Prime Ministers); for each URL the system can provide accessibility monitoring results for the home page (level 0), for all the pages directly linked from the home page (level 1) and for all the pages directly linked from the level 1 pages, according to the end-user's selection;
it supports access to data through a geo-referenced interface;
it validates WCAG 1.0 and WCAG 2.0 checkpoints, Section 508 rules and the Stanca Act Requirements;
it provides a Web interface in Italian and in English;
it has been recording evaluation results since July 2010.
Fig. 1 - A screenshot of an AMA instance
Figure 1 shows the home page of this instance of the application. The AMA source code is distributed under the EUPL open source license. We have chosen this license because of a local regulation which binds the Emilia-Romagna Region in granting this kind of permissions about intellectual property.
4. ARCHITECTURE
AMA is composed of two main components:
(i) an Evaluation Manager, based on a script in charge of storing data from periodical evaluations in a database;
(ii) a Results Manager, which lets users exploit and navigate accessibility reports.
In addition, a simple Users Manager provides support for user access. The whole system architecture is depicted in Figure 2.
The Evaluation Manager selects a set of sites from the URLs-DB and evaluates them by using (through Web Services) an external accessibility validator, called Accessibility Validation Application (AVA). This system outputs the results of an analysis of the actual and potential barriers it has found, on the basis of several guidelines and requirements (currently the Stanca Act, WCAG 1.0, WCAG 2.0 and Section 508 [15]). Moreover, AVA exploits some external evaluation systems to validate HTML and CSS code and to verify color contrast. AVA is open source software, derived from AChecker [9], available both as a Web service and as a Web based application which end users can operate through its Web interface. More details on AVA design issues and implementation can be found in [1] and [12]. AChecker has been chosen because at the moment it is the only automatic accessibility evaluation tool which also checks WCAG 2.0 and some other national requirements (including the Italian Stanca Act
and the German BITV). This means that it is not possible to provide a comparison between AChecker and other automatic accessibility evaluation tools, because none of them checks Web resources against the WCAG 2.0 success criteria. AChecker is open source software and it is extensible, thus it is possible to add new guidelines and requirements in order to provide a more complete monitoring application. More details about AChecker and the accuracy of its evaluations can be found in [9] and [8].
Fig. 2 - System Architecture
Colors are verified by means of an application called Co2 Validator (COlor COntrast Validator), which we have designed and implemented to estimate contrasts. Since color differences between foreground and background must be measured for every textual element, a recursive, crossed control has been provided. In fact, the cascading characteristics of style sheets (which are imposed precisely to separate presentation from content) imply suitable inferences on XHTML pages and CSS documents, allowing for inheritance and exceptions. Cross-browser features (implicitly required by the law) further restrict the possibility of omitting text/background colors, which always have to be identified and compared. Due to the intrinsic complexity of these crossed measures, parallelization of the process, as well as other optimization techniques, has been taken into account.
Let us notice that such an approach to color contrast assessment is independent of the specific Italian regulations and can be assumed as a general principle for accessibility. Results from AVA are stored by the Evaluation Manager into the Results-DB. Checks are performed periodically, according to the frequency settings previously declared in the Config-DB.
Data stored in the Results-DB can be (i) browsed through the Web-based interface of the Results Manager, which permits selecting the main syntheses and produces results in tabular form (available also as a map-based graphical representation in the case of geo-referenced sites); (ii) directly queried, in order to obtain more detailed or more specific syntheses.
Further configurations are related to the maps and shapes to be used in case of geo-referenced analysis and to the guidelines and requirements to be considered in the evaluation. All configurations are set by the administrator of the AMA instance, who is also responsible for user management.
5. SYSTEM AT A GLANCE
In this section we describe the main features of the AMA system, referring to the instance described at the end of Section 3 and depicted in Figure 1.
5.1 Options Selections
The AMA Web interface provides an Options Menu which the user can exploit to choose some options and customize the monitoring activities, as well as the presentation of report results. The user can choose the guidelines which will drive the evaluation (e.g. WCAG 1.0, WCAG 2.0, Section 508 and Stanca Act). For each set of guidelines it is possible to select a level (where available, e.g. level A, AA or AAA) and one or more requirements or success criteria. This means that evaluations can target specific accessibility issues, such as the presence of alternatives to non-textual content, the correct use of labels in forms, etc.
Moreover, the user can choose the categories of Web sites to evaluate (e.g. municipalities, provinces, states, governments, education, public health and so on). The user can also set a previous date, in order to compare the most recent evaluation results with previous ones along the temporal dimension, as well as the depth of the evaluations. In particular, in this AMA instance, the user can choose to check the accessibility level of home pages (level 0), of all the URLs directly linked from the home pages plus the previous level pages (level 1), or of all the URLs directly linked from the level 1 pages plus the previous level pages (level 2).
Finally, the user can choose the spatial and geo-political information, in order to view accessibility evaluation results related to a specific area. In this AMA instance, the default configuration shows results for European Web sites.
Details about the Web sites of all the European countries are available, and the related accessibility evaluation results can be browsed through this menu section and through the map. The AMA system can provide and manage results at the following geo-political levels: continents, countries, states, regions, provinces and municipalities.
Figure 3 depicts the Options Menu related to our AMA instance.
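The selections made in the Options Menu ultimately parameterize the report queries run against the results database. A hedged sketch of how such filters could translate into SQL follows; the table and column names are illustrative, not AMA's actual schema.

```python
def build_report_query(guideline, category=None, area=None, since=None):
    """Compose a parameterized report query from Options Menu selections.
    Returns (sql, params) suitable for a DB-API cursor.execute()."""
    # Hypothetical schema: an 'evaluations' table with one row per checked URL.
    sql = "SELECT url, errors FROM evaluations WHERE guideline = %s"
    params = [guideline]
    if category is not None:          # e.g. 'municipality', 'government'
        sql += " AND category = %s"
        params.append(category)
    if area is not None:              # geo-political area, e.g. 'IT'
        sql += " AND area = %s"
        params.append(area)
    if since is not None:             # lower bound for temporal comparisons
        sql += " AND eval_date >= %s"
        params.append(since)
    return sql, params
```

Keeping the filter values as bound parameters (rather than interpolating them into the SQL string) is the standard way to make such user-driven report queries safe.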
Fig. 3 - Options Menu
5.2 Browsing and presenting report results through maps
In the AMA system the URLs-DB is geo-referenced. Consequently, data in the Results-DB can be exploited through a map, providing a macro-level, spatial and geo-political accessibility point of view. Figure 4 shows the map related to the evaluation of the main European public institution Web sites according to the Level A success criteria of WCAG 2.0. Different colors in the map highlight different accessibility levels of the monitored Web sites: the lighter the color of an area in the map, the higher the level of accessibility of the Web sites in that area (expressed as the number of evaluated pages without errors according to the selected guidelines and requirements). On the contrary, a dark color means a low percentage of Web sites with no errors according to the selected guideline(s). A script based on a map "mouseover" event shows a tooltip with the percentage of Web sites without errors in an area. Moreover, clicking on a specific country area lets the user inspect detailed accessibility report results about that country, browsing to a different geo-political level of the monitoring application.
Fig. 4 - Screenshot of the map which shows accessibility evaluation results
5.3 Detailed results and temporal comparisons
Although a map is a good way to provide an easy-to-read spatial and geo-political report, it is obviously not enough to offer detailed data about accessibility evaluations and, it is worth mentioning, it is not accessible to users with visual disabilities. To overcome this, the AMA Web interface provides detailed data through tables. A summary table of the main analyzed geo-political entity (in our instance the European continent) is shown before the map, providing the number of evaluated Web sites, the number of Web sites without any error, the total number of errors and the average number of errors per Web site. The map is followed by a table which reports summarized data for every country of the continent: for each country it shows the percentage of Web sites without any error and the average number of errors per Web site. The table is navigable (as is the map), and it is possible to access specific country data by clicking on the country name. More detailed results are available for each country at the end of the page, showing: the number of evaluated Web sites, the number of Web sites without any error, the percentage of Web sites without any error, the total number of errors and the average number of errors per Web site. Moreover, for each evaluated requirement or
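The summary figures AMA reports for a set of sites (sites checked, error-free sites and their percentage, total and average errors) reduce to a simple aggregation over the stored per-site error counts; a minimal sketch:

```python
def summarize(error_counts):
    """Given per-site error counts, return the summary figures shown in the
    AMA tables: sites checked, error-free sites, their percentage, total
    errors and average errors per site."""
    n = len(error_counts)
    clean = sum(1 for e in error_counts if e == 0)
    total = sum(error_counts)
    return {
        "sites": n,
        "error_free": clean,
        "pct_error_free": 100.0 * clean / n if n else 0.0,
        "total_errors": total,
        "avg_errors": total / n if n else 0.0,
    }
```

Grouping such rows by country (or by any other geo-political level) yields exactly the per-area entries rendered in the tables and on the map.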
Moreover, for each evaluated requirement or success criterion, a table is shown which provides, for each Web site, information about certain (i.e., automatically detectable) errors and the number of evaluated Web pages; each Web site can be opened directly from the table. An example of these detailed tables is shown in Figure 5.
To obtain results related to the barrier impact evaluations, users can select "Barriers (WCAG 2.0)" as guidelines in the Options Menu of the AMA Web interface. Users can select the WCAG 2.0 conformance level (Level A, Level AA or Level AAA) and one or more types of assistive technology/disability to be investigated. When the user chooses to evaluate Web sites according to the Barriers Impact Factor, two values are added to the results table: the total barrier impact factor and the average barrier impact factor. The Barrier Impact Factor provides more significant results and better sketches the accessibility level of a large sample of Web sites. Note that the BIF results for screen reader/blindness are very high, as expected: barriers which impact screen readers are both more frequent and automatically detectable, without requiring a manual evaluation.
Fig. 5 Table showing detailed accessibility evaluation results for Italy
5.5 Recording manual evaluations
When monitoring Web site accessibility on a large scale, from a macro-level and spatial-geo-political point of view, evaluating certain (automatically detectable) errors captures a snapshot of the situation and effectively identifies those sites which are surely inaccessible (because they do not pass the automatic evaluations); our application is not an evaluate-and-repair tool. Obviously, this is not enough to offer a view of the accessibility level which also includes manual checks and results about warnings that must be verified by a human operator. To overcome this lack, we have designed and developed a prototype which allows users to manage and store manual evaluation results. The prototype downloads the Web page code and lets the user decide whether an error is actually present whenever a warning occurs while evaluating the accessibility of a specific URL (see Figure 6). For each warning, the user can browse the downloaded Web page, which shows a marker corresponding to that warning. Currently, the Web interface of the manual evaluation recording prototype is available only in Italian. The prototype is going to be integrated into the AMA system and translated into English, so as to allow a more complete monitoring activity.
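The record kept when a human reviewer resolves a warning could be sketched as below; the field names and structure are hypothetical simplifications, not AMA's actual schema.

```python
# Hypothetical sketch of the record stored when a reviewer resolves a
# warning raised by the automatic check: the warning either becomes a
# confirmed error or is dismissed. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class WarningResolution:
    url: str
    success_criterion: str   # e.g. "WCAG 2.0 1.1.1"
    marker_id: int           # the marker shown in the downloaded page
    is_error: bool           # the reviewer's verdict
    reviewer: str

def confirmed_errors(resolutions):
    """Keep only the warnings confirmed as actual errors, ready to be
    merged with the automatic results."""
    return [r for r in resolutions if r.is_error]

log = [
    WarningResolution("http://example.org", "WCAG 2.0 1.1.1", 1, True, "op1"),
    WarningResolution("http://example.org", "WCAG 2.0 1.4.3", 2, False, "op1"),
]
print(len(confirmed_errors(log)))  # -> 1
```

Storing the dismissals as well as the confirmations (rather than only the confirmed errors) keeps the manual review auditable and avoids re-asking the operator about the same warning in later evaluation runs.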
All these tables can show a temporal comparison between the last evaluation and a previous one, if the user selects this option in the Options Menu. All the data shown are available in the instance described in Section 3, "Design issues". It is worth noting that the Netherlands has the highest percentage of Web sites without any error (66.67%), while Ireland has the lowest average number of errors per page (2.00), according to WCAG 2.0 Level A (evaluating all the success criteria).
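The aggregation behind the per-country summary tables can be sketched as follows; the record layout is a hypothetical simplification of AMA's Results-DB, used only to illustrate how the reported figures are derived.

```python
# Sketch: derive the per-country summary table (sites evaluated, % of
# error-free sites, total errors, average errors per site) from raw
# evaluation records. The (country, site, error_count) layout is an
# assumption, not AMA's actual database schema.
from collections import defaultdict

def summarize_by_country(evaluations):
    """evaluations: iterable of (country, site, error_count) tuples,
    one tuple per evaluated Web site."""
    stats = defaultdict(lambda: {"sites": 0, "error_free": 0, "errors": 0})
    for country, _site, errors in evaluations:
        s = stats[country]
        s["sites"] += 1
        s["errors"] += errors
        if errors == 0:
            s["error_free"] += 1
    table = {}
    for country, s in stats.items():
        table[country] = {
            "sites_evaluated": s["sites"],
            "error_free_pct": round(100.0 * s["error_free"] / s["sites"], 2),
            "total_errors": s["errors"],
            "avg_errors_per_site": round(s["errors"] / s["sites"], 2),
        }
    return table

sample = [("NL", "a.nl", 0), ("NL", "b.nl", 0), ("NL", "c.nl", 3),
          ("IE", "d.ie", 2), ("IE", "e.ie", 2)]
print(summarize_by_country(sample)["NL"]["error_free_pct"])  # -> 66.67
```

With this toy sample, the Netherlands shows 66.67% error-free sites and Ireland an average of 2.00 errors per site, matching the shape of the figures reported above (the sample data itself is invented).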
5.4 Evaluating technological barriers
Several metrics have been defined to measure accessibility levels. Some are based on the number of errors per page; others require complex manual evaluations. A short summary of the main metrics can be found in [12]. With the aim of measuring the impact of accessibility barriers in a feasible and effective way on a large-scale sample of Web sites, we have defined and added a new metric to our system. The BIF (Barriers Impact Factor) metric is computed as follows (on the basis of WCAG 2.0):
BIF(i) = Σ_error #error(i) × weight(i)
where i represents the assistive technology/disability affected by the detected errors; BIF(i) is the Barrier Impact Factor affecting the i-th assistive technology/disability; error(i) represents the number of detected errors which affect the i-th assistive technology/disability; and weight(i) represents the weight assigned to the i-th assistive technology/disability. The lowest possible BIF value is 0, which represents the absence of barriers; the higher the BIF value, the higher the impact of a certain barrier on a specific type of assistive technology/disability. Barriers have been grouped into 7 sets, which impact the following assistive technologies and disabilities: screen reader/blindness; screen magnifier/low vision; color blindness; input device independence/movement impairments; deafness; cognitive disabilities; photosensitive epilepsy. AMA also shows evaluation results based on the aBIF, which is the average barriers impact factor over the whole set of barriers. For the sake of simplicity, the weights have been defined as follows: each error which requires a manual check weighs 0; each certain Level AAA error weighs 1; each certain Level AA error weighs 2; each certain Level A error weighs 3. Different weights could be computed on the basis of experiments with users.
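The BIF computation described above can be sketched as follows; the error-record format is an assumption, and the weights are the ones stated in the text (manual check = 0, AAA = 1, AA = 2, A = 3).

```python
# Minimal sketch of the BIF metric: each certain error is weighted by its
# WCAG 2.0 level and errors are grouped by the assistive technology or
# disability they affect. The (group, level) pair format is an assumption.

WEIGHTS = {"A": 3, "AA": 2, "AAA": 1, "manual": 0}

def bif_per_group(errors):
    """errors: iterable of (group, wcag_level) pairs, one per detected
    error. Returns {group: BIF(group)}, i.e. the weighted error sum for
    each assistive technology/disability group."""
    bif = {}
    for group, level in errors:
        bif[group] = bif.get(group, 0) + WEIGHTS[level]
    return bif

def average_bif(bif, all_groups):
    """aBIF: average BIF over the whole set of barrier groups; groups
    with no detected errors count as 0."""
    return sum(bif.get(g, 0) for g in all_groups) / len(all_groups)

GROUPS = ["screen reader/blindness", "screen magnifier/low vision",
          "color blindness", "input device independence",
          "deafness", "cognitive disabilities", "photosensitive epilepsy"]

detected = [("screen reader/blindness", "A"),
            ("screen reader/blindness", "AA"),
            ("color blindness", "AAA")]
bif = bif_per_group(detected)
print(bif["screen reader/blindness"])  # -> 5
```

Since warnings weigh 0, only certain (automatically detectable) errors contribute, which is consistent with the observation above that the BIF is highest for the screen reader/blindness group.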
Fig. 6 The manual evaluation recording prototype Web interface
5.6 The Emilia-Romagna Region case study
Currently, some Italian public administrations are using AMA instances to monitor their public Web sites. In particular, the Emilia-Romagna Region [6] is monitoring the Regional, Provincial and Municipal Web sites. The Emilia-Romagna Region AMA instance regularly evaluates a group of 376 Web sites (about 4,000 pages) and periodically monitors their accessibility level.
[4] Cullen, K., Kubitschke, L., and Meyer, I. eAccessibility status follow-up 2008. Available at: http://ec.europa.eu/information_society/activities/einclusion/docs/meac_study/meac_follow-up_2008.pdf, 2008.
The instance is available at http://polo-rer.polocesena.unibo.it/vamolamonitor/ (username: guest; password: guest). It validates the WCAG 1.0 and WCAG 2.0 checkpoints and the Stanca Act requirements (storing evaluation results since July 2010). Evaluation reports can be browsed on the basis of the following geo-political entities: Provinces and Municipalities. Analyzing Stanca Act compliance results for the last year (from July 2010 to July 2011), we can observe a generally positive trend: the percentage of Web sites without any error increased (from 2.64% to 4.51%) and the average number of errors per Web site decreased (from 147.87 to 143.30).
[5] Cullen, K., Kubitschke, L., and Meyer, I. Assessment of the status of eAccessibility in Europe. Available at: http://ec.europa.eu/information_society/activities/einclusion/library/studies/meac_study/index_en.htm, 2007.
[6] Emilia-Romagna Region. ERMES. Available at: http://www.regione.emilia-romagna.it, 2010.
[7] European Commission, DG Information Society and Media. Measuring progress in e-Inclusion: Riga Dashboard 2007. Available at: http://ec.europa.eu/information_society/activities/einclusion/docs/i2010_initiative/rigadashboard.pdf, 2007.
6. CONCLUSIONS
We designed and developed AMA to provide a macro-level, spatial and geo-political accessibility monitoring application. Its aim is to support Web site commissioners and public institutions in managing investments and resources and in facing issues and shortcomings, in order to increase the accessibility level of their Web pages, rather than to provide an evaluate-and-repair tool. In our application, a set of parameters is customizable according to end-users' expectations and preferences. The presented case study shows that the report results provided by AMA represent a suitable cue for targeted interventions, with consequent resource savings (from both a human and an economic point of view). We have designed and developed a prototype which allows users to manually add checked errors to the Results-DB (URL by URL), thereby offering support for an integrated automatic/manual monitoring action. An integration phase is planned, so as to provide a more complete monitoring application. Our system is currently used by the Emilia-Romagna Region and some other Italian public institutions to support the periodical evaluation action formally established by these institutions.
[8] Gay, G.R., and Li, C. AChecker: Open, Interactive, Customizable, Web Accessibility Checking. In Proceedings of the 7th ACM International Cross-Disciplinary Conference on Web Accessibility (W4A 2010), Raleigh (North Carolina, USA), April 2010. ACM Press, New York, 2010.
[9] IDI, Ontario College of Art and Design. AChecker. Available at: http://www.atutor.ca/achecker/index.php, 2011.
[10] Italian Parliament. Decreto Ministeriale 8 luglio 2005 - Annex A: Technical assessment and technical accessibility requirements of Internet technology-based applications. Available at: http://www.pubbliaccesso.it/normative/DM080705-A-en.htm, 2005.
[11] Italian Parliament. Law nr. 4 - 01/09/2004. Official Journal nr. 13 - 01/17/2004, January 2004.
[12] Mirri, S., Muratori, L., Roccetti, M., and Salomoni, P. Metrics for Accessibility: Experiences with the Vamolà Project. In Proceedings of the 6th ACM International Cross-Disciplinary Conference on Web Accessibility (W4A 2009), Madrid (Spain), April 2009. ACM Press, New York, 2009, pp. 142-145.
7. ACKNOWLEDGMENTS
The authors would like to thank Giovanni Grazia and Jacopo Deyla (from the Emilia-Romagna Region), Marina Vriz and Ennio Paiella (from ASPHI) and Gregory R. Gay (from IDI, Ontario College of Art and Design).
[13] Nomensa. United Nations global audit of Web accessibility. Available at: http://www.un.org/esa/socdev/enable/documents/fnomensarep.pdf, 2006.
8. REFERENCES
[1] Battistelli, M., Mirri, S., Muratori, L.A., Salomoni, P., and Spagnoli, S. Avoiding to dispense with accuracy: A method to make different DTDs documents comparable. In Proceedings of the 25th Symposium On Applied Computing (SAC 2010), Sierre (Switzerland), March 22-26, 2010. ACM Press, 2010.
[14] Rui, L., Gomes, D., Carrico, L. Web not for all: a Large Scale Study of Web Accessibility. In Proceedings 7th ACM International Cross-Disciplinary Conference on Web Accessibility (W4A 2010) Raleigh (North Carolina, USA), April 2010, ACM Press, New York, 2010.
[2] Bühler, C., Heck, H., Nietzio, A., Goodwin Olsen, M., and Snaprud, M. Monitoring Accessibility of Governmental Web Sites in Europe. In Proceedings of the International Conference on Computers Helping People with Special Needs (ICCHP 2008) (Linz, Austria, July 9-11, 2008). Springer, Lecture Notes in Computer Science, 2008, 410-417.
[15] U.S. Rehabilitation Act Amendments. Section 508. Available at: http://www.webaim.org/standards/508/checklist, 1998.
[16] Web Accessibility Benchmarking Cluster. Unified Web Evaluation Methodology (UWEM 1.2 Tests). Available at: http://www.wabcluster.org/uwem1_2/UWEM_1_2_TESTS.pdf, 2007.
[3] Cullen, K., Kubitschke, L., Boussios, T., Dolphin, C., and Meyer, I. Web accessibility in European countries: level of compliance with latest international accessibility specifications, notably WCAG 2.0, and approaches or plans to implement those specifications. Available at: http://ec.europa.eu/information_society/activities/einclusion/library/studies/docs/access_comply_main.pdf, 2009.
[17] World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 1.0. Available at: http://www.w3.org/TR/WCAG10/, 1999.
[18] World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.0. Available at: http://www.w3.org/TR/WCAG20/, 2008.