Lyn Paleo, MPA, DrPH Cand, School of Public Health, U.C. Berkeley, 6405 Regent St., Oakland, CA 94618, 510/967-6792, paleo@igc.org
Occupational health and safety programs increasingly use participatory evaluation to assess interventions' effectiveness. However, more is known about the process of conducting community-based participatory research (CBPR) and evaluation than about the criteria for assessing the quality or “truth” of the results. The procedures and principles for conducting CBPR (such as equalizing power between researchers and participants and promoting co-learning among all collaborators) are well established; questions remain, however, about how to judge the results. Should the conventional evaluation standards of objectivity, reliability, generalizability, and validity be applied to participatory evaluation? Should different, but parallel, criteria be used? Or do we need wholly different criteria or procedures for assessing the quality of research conclusions? How do we avoid the relativistic position that “all findings are good findings” as long as the proper steps were taken to ensure thorough worker participation in all stages of the research? This presentation outlines the key considerations, presents findings from in-depth interviews with program staff, participants, and researchers of three well-known CBPR projects, and concludes with practical observations on addressing bias and rigor in participatory evaluation of occupational health and safety interventions.
Learning Objectives:
Keywords: Credible Science, Participatory Research
Presenting author's disclosure statement:
I do not have any significant financial interest/arrangement or affiliation with any organization/institution whose products or services are being discussed in this session.