Assessing Readiness to Provide Services in Health Facilities: Examples of Different Methods Providing Different Results
• Recommended best practices for collecting valid and reliable information:
»» Ensure good sampling methods are used and that they are well documented.
»» Well-trained interviewers provide a stronger basis for assuming comparability.
»» Observation (in each relevant service area) to validate service-specific conditions and practices is preferable.
»» Observing practices may be impractical in terms of time and finance.
• Reported practices (from interviews) are not acceptable for comparisons, but may be acceptable for comparing policy standards known by staff.
• The respondent reporting practices should be the staff member most familiar with day-to-day services and practices.
»» If written records on training received by staff are not available, the next best source of information on training subject, content, and timing is to interview individual staff.
[Figure: Facilities offering indicated service where soap and running water are in the service area, Kenya 2010 (percent of facilities)]
[Figure: Percent of facilities by number of beds (no beds, 1-9, 10-19, 20-49, 50-99, 100+ beds), Kenya 2010]
[Figure: Public facilities classified "clinic" or "dispensary": percent of facilities by number of beds; series: Kenya 2010 (clinic), Kenya 2010 (dispensary), Namibia 2009 (clinic), Uganda 2007 (HC II)]
• Greater availability of valid, reliable, and comparable data will provide:
»» an evidence base for drawing conclusions about cost and efficiency across different systems for service delivery and resource allocation
»» evidence of change that can be linked with inputs
About IHFAN
[Figure: Percent of facilities by number of rooms serving the major services offered in the facility]
[Figure: Availability of water in facilities, Uganda, 2007; series: onsite safe water source, running water in all service rooms]
• Decisions on which approach to use will be influenced by the resources and time available for collecting and processing data; however:
»» Using observation to collect information improves reliability.
»» Observing service-specific items in the service area improves validity.
»» Ensuring a direct link between the data collected and the definition applied improves validity.
»» Using facility size rather than country classification for grouping data improves comparability.
»» Consistency in the availability of items in all relevant service sites (e.g., hand-washing capacity) decreases as the number of service sites increases. Be aware that availability of items in service sites depends on systems as much as on supply "availability."
[Figure residue: series and axis labels including soap in stock, soap in service area, soap in all service areas, running water in service area; x-axes: number of rooms serving the major services, number of service sites for eligible key services, service offered by facility, number of beds; country series: Kenya 2010, Namibia 2009, Uganda 2007 (HC III, HC IV)]
• There is no "correct" or "best" answer; the choice depends on:
»» the objective of the assessment, and
»» how the information is to be used.
Recommendation for Improving HFA Methods to Strengthen Validity, Reliability, and Comparability
[Figure: Percent of facilities with the indicated number of different service sites for key services]
[Figure: Availability of soap in facilities, Uganda, 2007]
• Differences in number of service sites for key services
• Findings:
»» Cannot assume that the percentages for each service come from the same facilities.
»» For any given service, around 40% of service sites are missing soap and/or water for hand washing.
»» No pattern in which services lack soap and water.
[Figure: Public facilities classified "health center": percent of facilities by number of beds]
• There is no consistency in the size of "health centers" or of "dispensaries/clinics."
»» Similar findings hold for NGO/FBO "health centers" and "dispensaries/clinics."
• Findings:
»» The number of service sites varies by country, reflecting different service provision systems.
• Data collection methods: interview, observation, or a combination of the two. The following are experience-based findings for the different methods of data collection:
»» One person is not necessarily the most informed on service conditions and events; reliability may vary.
»» Knowledge of training received by other staff is not reliable.
»» Reports of service conditions and availability of staff often reflect the "norm" or the "expected," rather than the current situation.
»» Time needed to validate information through observation is not the major factor influencing the number of facilities surveyed per day.
• Sampling methods and degree of precision
[Figure: Public facilities classified "Hospital": percent of facilities by number of beds]
• Conditions in one service site may not reflect conditions throughout a facility.
• Kenya public "hospitals" range from 1-9 beds up to 100+ beds, while in Namibia most public "hospitals" have 50+ beds, and in Uganda, 100+ beds.
Findings are based on experiences from SPA and Service Availability Mapping (SAM) surveys.
Data Collection: Factors Influencing Reliability of Information
Data Collection Sites and Indicator Definitions: Factors Affecting Comparability of Results
[Figure: Facilities offering indicated service where soap and running water are in the service area, Namibia 2009]
[Figure: Percent of facilities with indicated number of beds]
IHFAN aims to highlight a few methodological issues and how they may influence interpretation of results and, when feasible, to suggest ways to improve on existing practices. Data findings presented are based on the following:
• Data source: Service Provision Assessment (SPA), ICF Macro
• Countries: Kenya 2010; Namibia 2009; Uganda 2007
• Facilities: all facility types offering basic maternal/child/reproductive health and HIV services
• Managing authorities: public, private non-profit (NGO/FBO), and private for-profit sector facilities
• Samples: nationally representative samples of public and private non-profit facilities offering MCH/RH/HIV services (databases often include selected for-profit and other government (police/military) facilities)
Data Findings—Comparisons of indicators for size of facilities (using beds) within facilities with the same/similar classification
Attempts are being made to standardize methods and definitions, but there remains a need to better understand the experiences and data that provide evidence about the factors influencing validity, reliability, and comparability of information from health facilities.
The following are issues in using national facility classification as the basis for comparisons:
• Most countries have policies that describe standard staffing, services, levels of diagnostics, and medications available within specific types of public facilities. However, these policies differ across countries, leading to:
»» inconsistent health facility classifications between countries,
»» different standards between countries for the types and levels of service within each classification, and
»» inconsistent rationales for service levels.
• Within a country, facility classification often reflects the levels of services and staffing planned, but this is not the reality on "the ground."
• Limited information exists on private sector facilities, and many countries have weak regulation of private sector facilities.
Limited consensus exists on "best practices" in HFA methodologies because different HFA approaches employ different methods, which influence results. Even where the same methodologies are used, different indicator definitions and nonspecific terminology in describing results can lead to confusion and to the comparison of unlike items.
HFA data are often presented using national facility classifications. This facilitates interpretation by those using the data to inform decisions, but only if country classification actually provides a meaningful frame of reference.
• Recommendation for improving comparability of facility-level findings across time and geography:
»» Developing a classification of facilities using measurable and objective criteria (e.g., number of beds, staff, and/or availability of services) provides a better frame of reference for interpreting results and will improve the evidence base for linking resources, health systems, and outcomes.
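As an illustration of classification by measurable criteria, the sketch below groups facilities into the bed-count bands used in this poster's charts. The facility records, and the mixing of classifications within one list, are hypothetical; the band cut-offs are the ones shown on the charts.

```python
from collections import Counter

# Bed-count bands as used in the charts: (low, high, label),
# with high=None meaning "no upper bound".
BED_BANDS = [
    (0, 0, "no beds"),
    (1, 9, "1-9 beds"),
    (10, 19, "10-19 beds"),
    (20, 49, "20-49 beds"),
    (50, 99, "50-99 beds"),
    (100, None, "100+ beds"),
]

def bed_band(n_beds: int) -> str:
    """Return the objective size band for a facility with n_beds beds."""
    for low, high, label in BED_BANDS:
        if n_beds >= low and (high is None or n_beds <= high):
            return label
    raise ValueError(f"invalid bed count: {n_beds}")

# Hypothetical records: (national classification, number of beds).
facilities = [
    ("dispensary", 0),
    ("health center", 12),
    ("hospital", 4),      # a "hospital" can land in the 1-9 beds band
    ("hospital", 150),
]

by_band = Counter(bed_band(beds) for _, beds in facilities)
print(by_band)
```

Grouping by bed count rather than by labels such as "dispensary" or "hospital" makes distributions from Kenya, Namibia, and Uganda directly comparable, since the same band means the same thing in every country.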
Data that are comparable across time and geography and that provide reliable and valid measures of readiness to provide health services, quality of care, and health system performance can strengthen Health System Strengthening (HSS) efforts; availability of such data is currently limited. These data provide evidence of change, or lack of change, in the context of inputs that have occurred, and a context for interpreting findings related to infrastructure and service delivery patterns. However, health systems differ by country, and there is little evidence on outcomes related to different systems: comparable information will improve the evidence base for strategic decision making.
Data Analysis: Factors Affecting Comparability of Information
Health Facility Assessment (HFA) Methodologies and Data Use: Why HFA Methods Matter
• Findings:
»» Availability of water onsite or soap in stock does not translate into presence in all relevant service areas.
»» The assumption that conditions in one service site are indicative of overall conditions is not true.
»» The gap between any availability onsite and availability of soap and/or water in all service sites becomes greater when there are more service sites.
»» To measure the system for ensuring supplies, all relevant service sites need to be checked.
• The number of different service sites varies by size of facility and is associated with consistency of supplies in all service sites.
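The any-site versus all-sites distinction above can be computed directly from observation data. The sketch below uses hypothetical facility records to show why "soap somewhere onsite" overstates readiness compared with "soap in every relevant service area."

```python
# Hypothetical observation data: for each facility, whether soap was
# observed in each relevant service area.
facilities = {
    "facility A": {"ANC": True,  "delivery": True,  "OPD": True},
    "facility B": {"ANC": True,  "delivery": False, "OPD": True},
    "facility C": {"ANC": False, "delivery": False},
}

def percent(flags):
    """Percentage of True values in an iterable of booleans."""
    flags = list(flags)
    return 100.0 * sum(flags) / len(flags)

# "Any-site" availability: soap observed in at least one service area.
any_site = percent(any(areas.values()) for areas in facilities.values())

# "All-sites" availability: soap observed in every relevant service area.
all_sites = percent(all(areas.values()) for areas in facilities.values())

print(f"soap in at least one service area: {any_site:.0f}%")   # 67%
print(f"soap in all relevant service areas: {all_sites:.0f}%")  # 33%
```

The gap between the two indicators widens as facilities have more service sites, which is why validating each relevant site, rather than one storeroom, is needed to measure the supply system.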
Nancy Fronczak, PhD, Social Sectors Development Strategies (SSDS)/IHFAN, [email protected]
Paul Ametepi, MD, MPH, MEASURE DHS, ICF Macro/IHFAN, [email protected]
Shanti Noriega Minichiello, MPH, FHI 360/IHFAN, [email protected]
Bolaji Fapahunda, PhD, John Snow Inc. (JSI)/IHFAN, [email protected]
Natasha Kanagat, MPH, John Snow Inc. (JSI)/IHFAN, Coordinator, IHFAN Secretariat, [email protected]
www.Ihfan.org
MEASURE Evaluation is funded by the United States Agency for International Development (USAID) through Cooperative Agreement GHA-A-00-08-00003-00 and is implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill, in partnership with Futures Group International, ICF Macro, John Snow, Inc., Management Sciences for Health, and Tulane University. The views expressed in this poster do not necessarily reflect the views of USAID or the United States government.