Saturday, March 21, 2026

Talking It Out: How to Listen and Be Heard

By David Stanton Robinson
March 21, 2026

Hey everyone! Have you ever noticed how sometimes, talking about important stuff with others feels more like a fight than a chat? It seems like people are always yelling or getting mad instead of actually listening. But what if I told you there's a simple, secret tool that can fix a lot of this? It’s not some fancy new law or a complicated computer program. It's just your personal story and how well you listen to other people's stories.


Way back when I was only eight years old, my dad moved our whole crew—me, Mom, and Sis—about two hours away from our very first home. During those long drives back to visit Mom's side of the family, my father would get me totally hooked on debates about every topic under the sun. He'd kick things off by asking for my take on some big, controversial issue—like the death penalty. "What do you think about the death penalty?" he'd ask. I'd give my two cents, he'd offer his own take, and then he'd ask what I thought of his response. This back-and-forth never really ended; it just started right back up every single trip! I'm pretty sure that's where my internal researcher and love for asking questions first began. But looking back, I know I could still be a much better listener to other people's personal stories.


Think of your story as the invisible glue that holds a healthy talk together. When we talk about big ideas, we often use complicated words and facts. But if we share the personal, true-life lessons—the "heart" stuff—that made us believe what we believe, people listen way better. It makes us real, and it makes people trust us more than just using a bunch of charts and numbers.

This whole idea of better talking is built on two main things:

  1. Owning your story: Telling your own truth about why you care.

  2. Deep listening: Hearing someone else without judging them.

When we share our lives, it makes us less angry at people who disagree with us. We learn to separate the person we're talking to from their ideas. Listening to their story helps us "step into their shoes" and feel what they feel. This is called "civic empathy," and it changes a grumpy argument into a real attempt to understand.

So, how do we actually do this? We need some rules! The "My Neighbor's Voice" program has some awesome ones. We're going to use the "Brain Sponge" to really listen. We'll use "Zip It" to stop ourselves from cutting people off. We'll even do "Heavy Lifting" to try hard to understand, and "No Fix-It" to stop giving advice when people just want to be heard. Let's build a better "Story of Us" together!

Your Amazing Personal Story

Your personal story is the most important part of this whole idea. It's how you see yourself, what you value, and what you choose to share. It's the foundation for any good, healthy conversation.

Sharing our stories does some serious magic:

  • It stops the hate: When you tell your story, people see you as a person, not just a label. This helps stop those bad, angry feelings we get toward people on the other side of an argument. We learn to "separate the person from their position."

  • It lets everyone play: You don't have to be a big-shot expert or a genius with numbers to join the conversation. Your life experience is your expert badge! Stories are the main way that people who don't usually talk politics can jump in and be heard.

  • It grows your heart: Listening to someone's story makes you see things from their side. This "perspective-taking" helps us be more caring and active citizens. It’s about listening to do something, not just listening to argue.

  • It builds "Our Story": Beyond your "Story of Self," we need a "Story of Us." This is the shared feeling of what we all care about and what we want to do together in our neighborhood or town.

  • It’s the best way to convince people: If you share the personal experiences that led to your belief, you look more honest and trustworthy to people who don't agree with you. Your "heart lessons" are usually much stronger than just "head lessons."

Knowing and sharing your own story is a step toward feeling truly free and helping to make our democracy better for everyone.

Five Simple Ways to Have Better Talks

If we want to have better talks, we need to remember a few key things:

1. Start with Real Life.

Forget the complicated facts and start with what you've actually lived through. Your work, your family, and your life experiences should be the beginning of the talk. When you share the personal stuff that shaped you, it builds trust and is way more convincing than just throwing out a statistic.

2. Make it Fair and Fun.

A good talk is like a game of catch—you have to "give and take." Everyone must have an equal chance to talk, and everyone needs to try and help the others join in. When people aim to learn from each other—and are ready to admit someone else might have a better idea—trust grows even deeper.

3. Put on Your Empathy Glasses.

When you hear someone's story, try to see the world through their eyes. This makes you a more active listener and less likely to jump into a fight. It helps you see the person, not just the opposing idea. Don't throw out an idea just because you don't like who said it; truly think about whether the idea is good or not.

4. Dream Big About Tomorrow.

Sometimes, we should take a break from worrying about all the problems we have right now. Instead, let's focus on what a great future could look like! Everyone can use their own hopes and ideas to build this shared dream. The main goal is to take your personal story ("Story of Self") and weave it together with everyone else's to create a common dream ("Story of Us") that makes us all want to act.

5. Talk is Good, Action is Better.

A strong conversation can handle some disagreements. But to keep people committed, the talk has to lead to action! You have to turn those words into real decisions and then do something. If you plan an action but nothing happens, people get frustrated and won't want to talk next time. Making words turn into practical steps is super important for keeping trust alive.

The Superpower of Deep Listening

I know, it’s hard. We’re all busy, and we all want to jump in and share our own amazing thoughts. But really listening to someone without judging them is one of the most important things you can do to build trust in your community. It sets the stage for actually solving problems together.

Here are the six awesome rules for becoming a super-listener, inspired by great programs like "My Neighbor's Voice":

  • The "Zip It" Rule (Quiet Time!): When someone is sharing their truth, your mouth needs to go on a total vacation. That means no cutting in, no comments, no questions, and zero pushback! This makes the speaker feel totally safe and honest.

  • The "Brain Sponge" Rule (Soak it Up): Stop planning your clever answer! Take off your "answering hat" and just try to absorb what the speaker is saying, like a sponge soaking up water. You are listening to understand, not to reply.

  • The "No Fix-It" Rule (Be a Helper, Not a Fixer): Even if you have the world's greatest advice, hold it in! People usually just need a chance to talk and find their own answer. Let your inner repair person take a long nap.

  • The "Heavy Lifting" Rule (Empathy is Work!): Feeling what someone else feels isn't easy—it takes real mental muscle. Be ready to use your brain power to really try and step into their shoes. It's the most important work you can do.

  • The "Timer Magic" Rule (Fair Share): Use a strict clock, like three minutes for everyone. The timer makes sure the spotlight gets shared equally, so one person can't hog the whole conversation all night long.

  • The "Treasure Hunt" Rule (It’s a Gift): Everyone's life is a beautiful, complex puzzle. When someone shares the personal story of how they got their opinion, treat that story like a valuable treasure. It’s the key to building real empathy.

Let's Get Started!

If we can all start using our stories as our main tools for talking, and if we can all practice these deep listening rules, we can stop fighting so much. We can start seeing each other as real people, even when we disagree. It takes work, but by changing our daily conversations, we can build the strong, trusting communities that we all desperately want. Let’s go out there and start talking and listening better today!


Sunday, July 27, 2025

From Good to Great: Enhancing Program Evaluation in Your Community

By David S. Robinson, EdD
July 27, 2025
www.evaluationhelp.com

Program evaluation in community organizations is a crucial process that assesses the effectiveness of programs and initiatives designed to enhance community well-being. However, program evaluators often overlook essential considerations during the planning and implementation phases, which can significantly impact the quality and relevance of their evaluations. 


Understanding these commonly forgotten aspects is crucial for ensuring that evaluations effectively address community needs and foster stakeholder engagement. Among the top five considerations evaluators frequently neglect are early stakeholder engagement, clarification of evaluation purposes, addressing power dynamics, providing adequate training and resources, and establishing ongoing communication and feedback mechanisms. 

Engaging stakeholders from the outset fosters a sense of ownership and ensures diverse perspectives are included, leading to more relevant evaluation outcomes.  Meanwhile, clearly defining the purpose of an evaluation helps maintain focus and allows for the development of actionable insights that stakeholders can utilize effectively. Furthermore, addressing power imbalances between external evaluators and community members is vital for promoting collaboration and trust throughout the evaluation process.

Another critical consideration is the need to train and provide resources for stakeholders unfamiliar with evaluation methods. Empowering community members with the knowledge and skills to participate meaningfully enhances the overall quality of evaluations. Lastly, establishing a transparent process for ongoing communication and feedback ensures that stakeholders remain engaged and informed throughout the evaluation, thereby enhancing its overall impact.

By addressing these often-overlooked considerations, program evaluators can improve the effectiveness of their evaluations and better support community organizations in achieving their goals, ultimately leading to more successful and sustainable community initiatives.

Monday, May 26, 2025

Bridging the Gap: Understanding the Differences Between Program Evaluation and Academic Research



By David S. Robinson, EdD (www.evaluationhelp.com)


Introduction

In the world of inquiry and analysis, two distinct yet interconnected approaches shape our understanding of effectiveness and knowledge: academic research and program evaluation. While both involve systematic investigation, they serve different purposes, employ different methodologies, require different social-emotional skills (emotional intelligence, or EQ), and cater to different audiences. Understanding these distinctions is crucial for researchers, evaluators, educators, and policymakers alike, enabling more effective collaboration and ultimately fostering more impactful real-world initiatives and advances across fields. This blog aims to clarify these distinctions and highlight how the two approaches complement each other in advancing knowledge and improving practical applications.


Academic Research: The Pursuit of Knowledge

Academic research is driven by the quest for new knowledge, theories, and insights. Researchers formulate hypotheses, conduct experiments, and analyze data to contribute to the broader academic discourse. This process is often theoretical, hypothesis-driven, and rigorously peer-reviewed, ensuring findings are thoroughly scrutinized by experts in the field before being published. Common methodologies include experimental studies, longitudinal research, and qualitative analysis. For example, a university study investigating the long-term effects of early childhood education on cognitive development would follow strict research protocols, aiming to contribute to educational psychology literature. The social-emotional skills (EQ) required involve emotional regulation, resilience, patience, critical reflection, and empathy for study participants. While providing foundational knowledge and rigorous insights, the primary output is often peer-reviewed publications, and the direct application of findings might take time or require further steps beyond the research itself.


Program Evaluation: Measuring Impact and Effectiveness

Program evaluation, on the other hand, is applied and practical, aiming to directly assess the effectiveness of specific programs or interventions and to provide actionable recommendations for improvement. Evaluators work closely with stakeholders to determine whether a program meets its intended goals and how it can be improved. Methodologies in program evaluation include surveys, interviews, performance metrics, and cost-benefit analysis. The social-emotional (EQ) skills required involve interpersonal and social awareness, adaptability, comfort with ambiguity, conflict management, and persuasive communication. For instance, a government-funded literacy program might undergo evaluation to assess its impact on students' reading skills. Disagreements may arise among program implementers and management, and evaluators must adapt to changing priorities. The findings would inform policymakers on whether to continue, modify, or expand the initiative. While providing actionable recommendations for immediate decision-making by stakeholders, evaluation findings are often context-specific, intended primarily to guide practical improvements within a particular program rather than to contribute to broad theoretical frameworks.
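As a small illustration of one methodology mentioned above, a cost-benefit comparison can be reduced to a single ratio. The sketch below uses invented figures and a hypothetical helper function; it is not drawn from any real literacy program.

```python
# Hypothetical sketch of a benefit-cost ratio calculation.
# All figures below are invented for illustration only.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return the ratio of estimated program benefits to program costs."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Invented figures for a literacy program over one evaluation period:
costs = 120_000.0     # staffing, materials, facilities
benefits = 180_000.0  # estimated monetary value of improved reading outcomes

ratio = benefit_cost_ratio(benefits, costs)
print(f"Benefit-cost ratio: {ratio:.2f}")
```

A ratio above 1.0 suggests benefits exceed costs, though in practice evaluators also weigh non-monetary outcomes that resist this kind of quantification.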





Key Differences and Intersections

While both approaches rely on systematic inquiry, they differ in several key aspects:


  • Purpose: Academic research seeks to expand theoretical knowledge, while program evaluation assesses real-world effectiveness.

  • Audience: Academic research targets scholars and researchers, whereas program evaluation informs policymakers and practitioners.

  • Methodology: Academic research follows rigorous scientific protocols, while program evaluation adapts methods to practical needs.

  • Outcome: Academic research contributes to literature, while program evaluation leads to actionable recommendations.

  • EQ skills: Emotional intelligence demands may differ between academic researchers and program evaluators, based on their roles and objectives.


Despite their differences, academic research and program evaluation are deeply interconnected and play complementary roles. Academic research provides the theoretical foundations and methodological rigor that evaluators can use to design effective programs and evaluations. Conversely, program evaluation generates rich empirical data from real-world settings that researchers can analyze to refine theories and models, identify new research questions, and test the applicability of academic findings in practice. Collaboration between researchers and evaluators, perhaps through joint projects or shared datasets, strengthens both fields, ensuring that theoretical insights translate efficiently into practical improvements and that real-world data robustly informs academic inquiry.


Figure 1 illustrates these differences and the points where complementary interconnections give academic research and program evaluation an advantage.


Figure 1. Different but Complementary Interconnections of Research and Evaluation




Conclusion

Understanding the distinctions between academic research and program evaluation is crucial for researchers, evaluators, and policymakers alike. While academic research expands knowledge by pursuing foundational truths, program evaluation ensures that initiatives achieve their intended impact by focusing on practical effectiveness. By bridging the gap between these approaches and fostering collaboration, we can leverage the strengths of both to tackle complex societal challenges more effectively, leading to both meaningful advancements in theory and demonstrable improvements in the real world.


Monday, March 17, 2025

Local Wisdom, Lasting Change: Empowering Communities Through Grounded Program Evaluation

 



Abstract

A local craftsman built a pavilion from local storm-felled trees, demonstrating resourcefulness and sustainability. This pavilion serves as a metaphor for grounding program evaluation in local resources and knowledge. Utilizing local stakeholders in program evaluation taps into contextual knowledge often missed by external evaluators. Empowering community members to conduct evaluations fosters ownership and engagement with the process. Effective evaluations, like the pavilion, acknowledge limitations and build on lessons learned. Strategies for enhancing program evaluation include engaging stakeholders, utilizing local resources, and focusing on sustainability. Building evaluation capacity within communities creates a lasting impact and ongoing improvement. The post advocates for valuing local wisdom and resilience in program evaluation, leading to more meaningful and sustainable outcomes.


Introduction

During a recent visit to the Low Country of South Carolina, I stumbled upon an unexpected lesson in resourcefulness and sustainability. Amidst the moss-draped oaks and tidal marshes, I met a local polymath—an engineer by training but a craftsman by passion. On his property stood a magnificent pavilion, its wooden beams rising gracefully toward the sky. What made this structure remarkable wasn't just its aesthetic appeal, but its origin story: the entire pavilion was constructed from storm-felled trees, local timber milled on-site, and assembled with ingenuity born from necessity.

As I admired the pavilion's elegant design and sturdy construction, I couldn't help but see parallels to my work in program evaluation. Too often, we rely on imported frameworks and external expertise when the most sustainable and impactful solutions might be growing right around us. This pavilion—born from local resources, skills, and knowledge—offers a powerful metaphor for how we might approach program evaluation in a more grounded, sustainable, and ultimately more effective way.

The Pavilion Story – A Lesson in Ingenuity and Sustainability

The polymath brother (as his family affectionately called him) wasn't just any engineer. With degrees in mechanical engineering and architecture, and years spent working on complex infrastructure projects, he approached problems with both technical precision and creative vision. When Hurricane Matthew swept through the area in 2016, leaving dozens of mature hardwood trees scattered across a popular local park, he saw not devastation but opportunity.

Instead of hiring contractors to remove the fallen trees and import materials for a new gathering space, he set up a portable sawmill and began transforming chaos into creation. Oak became support beams, cypress was milled for weather-resistant flooring, and various hardwoods were carefully selected for different structural elements based on their natural properties. The pavilion took shape over months, designed to withstand future storms while honoring the character of the landscape it came from. Local neighbors came together to collaborate on the project and help build the pavilion.

What struck me most was how this approach—using what's available, respecting local conditions, and applying appropriate expertise—created something more harmonious with its environment than any prefabricated structure could have been. The pavilion wasn't just built on the land; it was built of the land, embodying a profound lesson in working with, rather than imposing upon, the existing environment.

Drawing Parallels to Program Evaluation

This pavilion-building approach mirrors what the most effective program evaluations strive to achieve. Instead of imposing standardized evaluation frameworks that often fail to capture local nuances, what if we invested in developing evaluation skills among program participants and community members?

When we empower local stakeholders to design and conduct evaluations, we tap into invaluable contextual knowledge. These individuals understand the subtle cultural factors, historical contexts, and community dynamics that external evaluators might miss. Just as the polymath brother knew which wood would resist local insects and which would withstand humidity, community members know which questions matter most and how to interpret responses correctly.

Building evaluation capacity from the ground up also fosters ownership. When participants see evaluation as their tool—not something imposed from outside—they're more likely to engage meaningfully with the process and act on the findings. The evaluation becomes not just an assessment but an integral part of the program's growth and development, much like the pavilion became not just a structure but an expression of resilience and adaptation.

Lessons from the Pavilion Builder – Accomplishments, Errors, and Reflections

The finished pavilion stands as a testament to what locally-sourced solutions can accomplish. It has weathered subsequent storms, hosted countless gatherings, and become a landmark that visitors admire. Its functionality exceeds what commercially available options might have provided, precisely because it was designed with intimate knowledge of local needs and conditions.

Yet the builder was quick to point out his missteps: "I didn't account for how the different woods would expand at different rates," he admitted, showing me where he'd made subsequent adjustments. "And this section took twice as long as necessary because I was learning the technique as I went."

These admissions weren't signs of failure but rather reflections of a healthy learning process—one that parallels effective evaluation practices. Good evaluations acknowledge limitations, document lessons learned, and build on previous experiences. They recognize that perfection isn't the goal; improvement is.

Perhaps most insightful was his reflection on the process: "The best part wasn't finishing it—it was figuring out how each unique piece of wood could contribute to the whole." This perspective mirrors the value of inclusive evaluation approaches that recognize how diverse perspectives combine to create a more complete understanding of program impacts.

Practical Strategies for Enhancing Program Evaluation Using Local Knowledge

How can we bring this pavilion-building metaphor to program evaluation? Several strategies stand out:

Engage Stakeholders Meaningfully: Involve program participants not just as data sources but as evaluation designers and analysts. Their questions often lead to the most relevant insights.

Utilize Local Resources: Map existing skills and knowledge within your community before seeking external expertise. Sometimes the perfect evaluation "timber" is already in your backyard.

Focus on Sustainability: Design evaluation practices that can be maintained beyond initial implementation. Simple, repeatable methodologies often yield more consistent insights than complex approaches that collapse without expert guidance.

Embrace Flexibility: Allow your evaluation framework to adapt to changing circumstances, just as the pavilion builder adjusted his techniques to work with the unique properties of each tree.

Building Something That Lasts

The pavilion stands as a physical manifestation of sustainable design—a structure that will serve its purpose for generations because it was built with deep understanding of local conditions and materials. Similarly, evaluations built on local knowledge and skills create lasting impact.

When we invest in building evaluation capacity within communities and organizations, we don't just get better data for one assessment—we create an evaluation mindset that continues to generate insights long after external consultants have moved on. Program participants empowered with evaluation skills become agents of ongoing improvement and adaptation.

Like the storm-felled trees transformed into a beautiful pavilion, challenges in programs can become opportunities for growth when viewed through an evaluative lens that values local wisdom and resilience.

Conclusion

The polymath brother's pavilion reminds us that our best resources often lie right beside us in the neighborhood—we need only the vision to recognize them and the skills to transform them. In program evaluation, this means looking first at the knowledge, experiences, and capabilities within the communities we serve before importing external frameworks or expertise.

By building evaluation capacity locally, tailoring approaches to context, and valuing diverse perspectives, we create evaluations that are not just more accurate but more meaningful and sustainable. Like a well-built pavilion, these evaluations stand the test of time, providing shelter for better decision-making and program improvement for years to come.

The next time you approach program evaluation, ask yourself: What trees have already fallen that I might build with? What local knowledge might I mill into something useful? The answers might just lead to evaluation practices as beautiful and enduring as a hand-crafted pavilion rising from the South Carolina lowlands.


Saturday, March 01, 2025

When High Emotional Intelligence Meets Program Evaluation: A Double-Edged Sword

By David S. Robinson, EdD

March 1, 2025

As a program evaluator, high emotional intelligence (EQ) can be both a blessing and a challenge. Although it enables deeper connections with stakeholders and better communication of findings, it can also create unexpected hurdles in delivering objective assessments. Let's explore this complex intersection of emotional intelligence and program evaluation.

EQ Advantage: What Makes It Valuable?

High emotional intelligence brings several powerful assets to the table. People with high EQ excel at understanding both themselves and others, making them natural communicators and builders of relationships. They can:

- Read the room effectively during stakeholder meetings

- Navigate complex interpersonal dynamics

- Manage their own emotions during stressful situations

- Build trust with program participants

- Motivate teams toward common goals

These skills are invaluable for gathering sensitive data, conducting interviews, or presenting potentially challenging findings to stakeholders.

 

The Hidden Challenges

However, these emotional strengths can sometimes work against a program evaluator's primary mission. High-EQ evaluators often face some unexpected challenges.

1. The Objectivity Dilemma

While empathy helps evaluators understand program participants, it can also cloud judgment. High-EQ evaluators may find themselves emotionally invested in a program's success, potentially compromising their ability to provide unbiased assessments.

2. The Feedback Paradox

Delivering constructive criticism becomes particularly challenging. Heightened awareness of others' feelings can make high-EQ evaluators hesitant to present negative findings, even when necessary. This reluctance can lead to the following:

- Softened feedback that doesn't convey the full scope of problems

- Delayed delivery of critical information

- Overcautious recommendations

3. The Burnout Risk

Constant management of both their own and others' emotions can lead to emotional exhaustion. High-EQ evaluators often carry the emotional weight of:

- Program stakeholders' anxieties

- Participants' personal stories

- Team members' concerns

- Their own professional pressures

 

Finding the Balance: Best Practices

To leverage the benefits of high EQ while maintaining professional effectiveness, evaluators should:

1. Establish Clear Boundaries

   - Set emotional boundaries with stakeholders

   - Create structured feedback frameworks

   - Maintain professional distance when necessary

2. Implement Objective Measures

   - Use standardized evaluation tools

   - Rely on data-driven metrics

   - Document decision-making processes

3. Practice Self-Care

   - Schedule regular breaks

   - Seek peer supervision

   - Maintain work-life balance
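The "Implement Objective Measures" practice above can be made concrete with a data-driven metric. As a minimal sketch, the code below computes a simple standardized effect size (Cohen's d) on invented pre/post participant scores; the numbers and the helper function are hypothetical, shown only to illustrate letting the data, rather than feelings about the program, drive the assessment.

```python
# Minimal sketch: an objective pre/post comparison using a simple
# standardized effect size. All scores are invented for illustration.
from statistics import mean, stdev

def cohens_d(pre: list[float], post: list[float]) -> float:
    """Standardized mean difference between post and pre scores."""
    pooled = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return (mean(post) - mean(pre)) / pooled

# Hypothetical assessment scores before and after a program:
pre_scores = [62, 58, 71, 65, 60, 68]
post_scores = [70, 66, 75, 72, 69, 74]

d = cohens_d(pre_scores, post_scores)
print(f"Effect size (Cohen's d): {d:.2f}")
```

A documented, repeatable calculation like this gives the evaluator something to anchor feedback to, regardless of how invested they feel in the program's success.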

 

The Path Forward

The key to success lies in finding a sweet spot between emotional intelligence and professional objectivity. High-EQ evaluators should view their emotional intelligence as a tool in their professional toolkit, one that should be used thoughtfully and in conjunction with other evaluation skills.

Consider developing these complementary skills:

- Strong analytical abilities

- Data interpretation expertise

- Project management capabilities

- Clear documentation practices

 

Conclusion

High emotional intelligence in program evaluation is like a powerful lens through which to view and understand program dynamics. However, similar to any other tool, it must be used wisely. The most effective evaluators learn to harness their emotional intelligence while maintaining professional distance and objectivity.

Success comes from recognizing when to lean into your emotional intelligence, such as during stakeholder interviews or team conflicts, and when to step back and let the data drive the process. By maintaining this balance, high-EQ evaluators can deliver thorough objective assessments while building strong, trust-based relationships with their stakeholders.

The goal isn't to suppress emotional intelligence but to channel it productively, creating evaluations that are both rigorous and emotionally intelligent. After all, the best program evaluations don't just measure success—they help build it.

Sunday, January 19, 2025

Equity, Evaluation, and AI Assistance: Can It Be Useful?

I've been using AI to increase equity in my evaluation and consulting. AI has revolutionized my workflow, so let me share my journey with you.

At the beginning of each project, it is essential to understand the problem accurately. AI has been helpful in synthesizing the perspectives of diverse community members and partners by helping me review online meeting notes from different collaborative participants. I worked on a project focusing on substance use disorders and behavioral health emergencies in a rural community. I added special instructions to ChatGPT ("Whenever possible, suggest ways for my questions and your response to be made more equitable"). The AI's responses made the equity issues in my evaluation methods more obvious by reminding me to review each method and dataset for diverse respondent demographics. I have also used AI suggestions when editing vision and mission statements, acting as an editor to ensure accuracy and readability for diverse readers (Lex-page). AI summarizes some of my worksheet data, leaving more time to explain results to diverse communities (Numerous.ai).
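The "special instructions" idea can also be applied programmatically: prepend a standing equity instruction to every prompt before it reaches a chat model. The sketch below is a hypothetical helper, not part of any AI vendor's API, and the step that actually sends the prompt to a model is omitted.

```python
# Hypothetical sketch: attach a standing equity instruction to each
# question before it is sent to a chat model. wrap_prompt is an
# illustrative helper, not a real library function; the model call
# itself is intentionally left out.

EQUITY_INSTRUCTION = (
    "Whenever possible, suggest ways for my questions "
    "and your response to be made more equitable."
)

def wrap_prompt(question: str) -> str:
    """Prepend the standing equity instruction to a user question."""
    return f"{EQUITY_INSTRUCTION}\n\nQuestion: {question}"

prompt = wrap_prompt("Summarize the meeting notes from our rural health partners.")
print(prompt.splitlines()[0])  # the instruction always leads the prompt
```

Centralizing the instruction in one place means every request, not just the ones you remember to annotate, carries the equity lens.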

AI has also been invaluable as a research assistant for the YMCA, helping to incorporate substance abuse prevention and mental health education into their youth programs (ChatGPT), and it has helped edit our reports (Canva, Scalenut, ResearchRabbit, Scholarcy), simplifying them so that a fifth grader can understand them (Guidde for videos). AI played a significant role in enhancing our surveys and interviews, recommending more precise language and additional questions to engage underserved groups (ChatGPT), as well as broadening our data collection approaches for patients from diverse backgrounds (ResearchRabbit, Scholarcy).

AI has revolutionized my work, making it more efficient and effective at increasing equity, one small step at a time. Always be aware, though, that AI results can incorporate biases or be untruthful, reflecting biases in the underlying training data. Carefully review all results from AI tools (@MushtaqBilalPhD, ChatGPT). AI applications are best used as collaborators and research assistants.

Lessons Learned

By leveraging AI in my evaluation and consulting work, I have experienced a transformative shift in my workflow that has significantly contributed to increasing equity. AI has proven invaluable at the beginning of each project, helping me synthesize multiple perspectives and gain a clearer understanding of complex issues. Furthermore, AI has played a crucial role in diversifying our data and ensuring that our surveys and interviews are comprehensive and inclusive. I always review the results of AI searches and AI suggestions for bias and credibility. Overall, the integration of AI has revolutionized my work, making it more efficient and effective in promoting equity.

Rad Resources

Conducting Equitable Evaluations by Katrina Bledsoe and Rucha Londhe, https://oese.ed.gov/files/2022/11/Conducting-Equitable-Evaluations.pdf

How we incorporate diversity and inclusion in evaluation by Eyerusalem Tessara, https://www.evalacademy.com/articles/how-can-we-incorporate-diversity-equity-and-inclusion-in-evaluation

Follow Mushtaq Bilal on X (formerly Twitter) here for sound advice on using AI for writing, revising and editing drafts of blogs and reports https://twitter.com/MushtaqBilalPhD

AI links to aid equity in evaluation: ChatGPT for generating ideas, reviewing drafts, and creating outlines. Microsoft Bing and Google Bard are useful too.

Note taking for meetings and saving websites and ideas, https://get.mem.ai/

Otter and Fellow for transcribing meetings, meeting note-taking, and summarizing transcripts: otter.ai and/or fellow.ai.

The role of AI in Diversity, Equity and Inclusion