Description:
Evaluations should always be based on robust evidence. Identifying and synthesizing existing evidence is therefore a key requirement for impactful evaluations. AI can help summarize and assess research outputs and other evidence in the public domain. The workshop will explore the role of AI in evidence synthesis. It is tailored for professionals looking to navigate the complexities of integrating artificial intelligence into evidence-based evaluations.
The workshop will introduce participants to the nuanced capabilities and limitations of AI in aggregating and analyzing data and provide an overview of how to balance innovation with reliability. Participants will learn the 'ins and outs' of prompt engineering, a key skill for the successful use of AI, through practical exercises designed to foster confidence in utilizing AI technologies and ensuring the delivery of transparent, trustworthy results to clients. Workshop participants will discover an array of AI tools, learning to select and apply the most effective solutions for their evaluation needs, enhancing efficiency and insight.
The workshop will be useful for mid-level to senior professionals with foundational knowledge of evaluation methodologies who are seeking to enhance their skills with advanced tools and techniques: evaluation specialists and managers, academics, researchers, and professionals directly involved in designing, conducting, and managing evaluations of public sector and healthcare programs, as well as policy makers and advisors.
The workshop assumes a basic understanding of evaluation principles and some familiarity with AI concepts. The content bridges the gap between traditional evaluation methodologies and the innovative application of AI, focusing on practical applications, ethical considerations, and how to interpret and communicate AI-driven insights.
Learning Outcomes:
Session I: Foundations of AI in Evidence Synthesis
1. Understanding PRISMA Guidelines: Participants will gain a comprehensive understanding of the PRISMA guidelines and their application in ensuring the rigor, transparency, and reliability of systematic reviews, especially when integrating AI into evidence synthesis.
2. Introduction to AI and LLMs: Participants will develop a foundational knowledge of AI, particularly Large Language Models (LLMs), including how they function, their capabilities, and their limitations in the context of evidence synthesis for evaluations.
3. Practical Application of PRISMA with AI: Through interactive group exercises, participants will apply PRISMA guidelines to a hypothetical AI-assisted evidence synthesis project, learning to reconcile traditional methodological standards with AI-driven processes.
4. Critical Analysis of AI in Evidence Synthesis: Participants will engage in group discussions to analyze real-world examples of AI successes and failures in evidence synthesis, fostering a critical perspective on the integration of AI in evaluation work.
5. Introduction to Prompt Engineering: Participants will be introduced to the basics of prompt engineering within AI-driven evidence synthesis, learning how to formulate queries that yield relevant and reliable results.
Session II: Advanced AI Techniques and Ethical Considerations
1. Advanced Prompt Engineering: Participants will deepen their skills in prompt engineering for LLMs, learning to create precise, contextually appropriate queries that align with PRISMA guidelines to ensure methodological integrity in evidence synthesis.
2. Exploring AI Tools for Evidence Synthesis: Participants will gain hands-on experience with various AI tools and platforms tailored for systematic reviews, understanding how to select and utilize these tools effectively in their evaluation practices.
3. Ensuring Reliability and Validity of AI Outputs: Participants will learn strategies for assessing and enhancing the reliability, validity, and rigor of AI-generated evidence synthesis outputs, ensuring they meet the high standards required for evaluations.
4. Ethical Considerations in AI Use: Through discussions and reflections, participants will explore the ethical implications of using AI in evidence synthesis, focusing on trust, transparency, and the responsibility of maintaining integrity in evaluation practices.
5. Applying AI Insights in Evaluation: Participants will reflect on and plan how to integrate the knowledge and skills gained from the workshop into their own evaluation practices, ensuring that AI-driven insights are applied effectively and ethically.
Session I Agenda:
Introduction (15 mins)
Part 1: PRISMA Guidelines as a Gold Standard in Evidence Synthesis (30 mins)
Part 2: Understanding AI in Evidence Synthesis (30 mins)
Wrap-Up and Closing (15 mins)
Session II Agenda:
Introduction (15 mins)
Part 1: Prompt Engineering in LLMs for Evidence Synthesis (30 mins)
Part 2: Trust and Reliability in AI Outputs (30 mins)
Wrap-Up and Closing (15 mins)
Closing Remarks:
Motivation for applying workshop learnings to enhance evidence synthesis in
evaluation practices.
This workshop is aligned to AEA’s Competencies and Guiding Principles as follows:
The workshop, titled 'Who is afraid of … AI? How to use AI in evidence synthesis for evaluations', aligns closely with the AEA Competencies and Guiding Principles by providing participants with advanced methodological tools to enhance their evaluation practices. Specifically, it addresses the following domains and principles:
1. Professional Practice (Domain 1.0) - The workshop equips participants with the skills to integrate AI into evidence synthesis, adhering to the AEA's foundational documents by ensuring evaluations are rooted in robust, ethical practices that enhance professional competency.
2. Methodology (Domain 2.0) - The workshop's focus on AI technologies directly enhances participants' methodological capabilities. By introducing AI's role in systematic inquiry, the workshop empowers evaluators to effectively harness quantitative, qualitative, and mixed methods for evidence synthesis, thus aligning with the systematic inquiry and competence principles.
3. Context (Domain 3.0) - Through discussions on the limitations and ethical considerations of AI in evidence synthesis, the workshop prepares participants to navigate diverse evaluation contexts, ensuring cultural competence and integrity in their practices.
4. Planning & Management (Domain 4.0) - The workshop covers practical aspects of using AI tools, thereby helping evaluators to better plan, manage, and execute evaluation studies with an emphasis on methodological rigor and transparency, consistent with the principle of integrity.
5. Interpersonal (Domain 5.0) - By engaging participants in interactive discussions and group activities, the workshop fosters strong interpersonal skills, such as communication and collaboration, essential for effective evaluation practice and aligned with the principle of respect for people.
Overall, the workshop encourages participants to critically assess the reliability and validity of AI-generated outputs, ensuring that the common good and equity are upheld in evaluation practices. This alignment with AEA's competencies and guiding principles not only enhances the technical expertise of evaluators but also promotes ethical, contextually aware, and socially responsible evaluation practices.
Technological Requirements:
To get the most out of this course, it is recommended that participants have access to ChatGPT (GPT-3.5 or higher), ChatPDF, or Microsoft Edge Copilot.
Presenter:
Prof. Axel Kaehne
Axel Kaehne is Professor of Health Services Research and Director of the Unit for Evaluation & Policy Analysis at Edge Hill University Medical School, as well as Visiting Professor at the University of Eastern Finland. As a former Cochrane reviewer, he has in-depth expertise in evidence synthesis and its applicability in evaluation work. His scholarly work spans program evaluations, multiagency service integration, and complex adaptive systems in healthcare, underpinned by both quantitative and qualitative research methodologies. He is editor-in-chief of the Journal of Health Organization and Management and the Journal of Integrated Care (both Emerald Publishing).
Facilitation Experience:
Axel has been lecturing and facilitating teaching and postgraduate training for professionals in various learning contexts since 2005. He is recognized for his dynamic classroom teaching techniques in both face-to-face and online delivery modes. He employs a blend of traditional and innovative methods, including interactive discussions, group projects, and the use of digital platforms to enhance learning and engagement. By incorporating problem-based learning and case studies from his extensive research, Axel ensures that theoretical concepts are grounded in practical, real-world scenarios. This approach not only facilitates deeper understanding but also prepares workshop participants to apply their knowledge effectively in their evaluation practice.
Dates:
Thursday, October 3, 1:00 PM - 2:30 PM ET
Thursday, October 10, 1:00 PM - 2:30 PM ET
Notes:
Registration is limited to 40 participants for this eStudy.
Once you purchase the eStudy, you must register for each session. Recordings will be made available to all registrants, including those unable to attend live, for 90 days.