Professional Development Program Evaluation for the Win

How to Sleep Well at Night Knowing Your Professional Learning Is Effective.

By Sheila B. Robinson, Ed.D.

Wouldn’t it be great if you knew when your professional learning programs were successful? What if you knew more than just the fact that teachers liked the presenter, were comfortable in the room or learned something new? Wouldn’t it be better to know that teachers made meaningful changes in teaching practice that resulted in increased student learning?

You can ascertain all of this and more by conducting program evaluation.

Every day you engage in random acts of evaluation — multiple times per day, in fact. When you get dressed in the morning, you implicitly ask yourself a set of questions and gather data to answer them.

Of course, getting dressed is pretty low stakes. At worst, you might find yourself too warm or cold, or under- or overdressed for an occasion. Buying a new car, however, is a higher stakes proposition. You could end up with a lemon that costs you a lot of money, or even worse, is unsafe. When you evaluate, you are more or less systematic about it depending on the context. For the car, you may create a spreadsheet and collect data on different models, their price, performance, safety features and gas mileage. Or, at the very least, you would read up on this information and note it in your head.

But what about your professional learning programs? What does it mean to evaluate a program?

What is Program Evaluation?

Program evaluation is applying systematic methods to collect, analyze, interpret and communicate data about a program to understand its design, implementation, outcomes or impacts. Simply put, program evaluation is gathering data to understand what’s going on with a program, and then using what you learn to make good decisions.

Program evaluation gives you key insights into important questions you have about your professional learning programs that help inform decisions about them. For example, you may want to know whether a program is being implemented as intended, whether teachers are changing their practice, or whether student learning is improving.

Part of the innate beauty of program evaluation lies in its abundant flexibility. First, there are numerous forms and approaches, and second, evaluation can be conducted both before and during a program, as well as after the program ends.

Systematic Methods

What do I mean by systematic methods? Much like a high-quality lesson or unit plan, program evaluation is the result of good thinking and good planning. It’s knowing what you want your programs to accomplish and what types of assessment will help you determine if you are successful. Being systematic means deciding up front what you want to learn, what data will answer those questions, and how, when and from whom you will collect it.

There are myriad strategies for collecting data. Surveys, interviews or focus group interviews, and observations or walkthroughs are common methods. You can also look at student achievement data, student work samples, lesson plans, teacher journals, logs, video clips, photographs or other artifacts of learning. The data you collect will depend on the questions you ask.

5 Reasons to Evaluate Professional Learning Programs

Learning Forward offers a set of standards: elements essential to educator learning that lead to improved practice and better results for students. The Data Standard in particular calls for professional learning programs to be evaluated using multiple sources of data. While adhering to a set of standards offers justification for action, there are also specific advantages of program evaluation that substantiate the need for it:

  1. Evaluating professional learning programs allows leaders to make data-informed decisions about them. When leaders have evaluation results in hand they can determine the best course of action for program improvement. Will the program be expanded, discontinued, or changed?
  2. Evaluating professional learning programs allows all stakeholders to know how the program is going. How well is it being implemented? Who is participating? Is it meeting participants’ learning needs? How well is the program aligned to the Every Student Succeeds Act (ESSA) definition of professional learning?
  3. Evaluation serves as an early warning system. It allows leaders to peek inside and determine the degree of progress toward expected outcomes. Does it appear that program goals will be achieved? What’s going well? What’s going poorly? Evaluation uncovers problems early on so that they can be corrected before the program ends.
  4. Program evaluation helps you understand not only if the program has been successful (however “success” is defined) but also why the program is or is not successful. It allows you to know what factors influence success of the program.
  5. Program evaluation allows you to demonstrate a program’s success to key stakeholders such as boards of education and community members, or potential grant funders. Evaluation results allow you to document accomplishments and help substantiate the need for current or increased levels of funding.

All Evaluation is NOT the Same

The word “evaluation” can strike fear into the hearts of teachers and administrators alike. People naturally squirm when they think they are being evaluated. Although personnel or employee evaluation shares some characteristics with program evaluation — such as collecting and analyzing data, using rubrics, assigning a value or score and making recommendations — the two serve entirely different purposes.

Program evaluation focuses on program data, not on an individual’s personal performance. The focus of the evaluation is on how the program performs. In education, it’s important to take great care not to let program evaluation results influence personnel evaluation. And remember the example about buying a car? That’s product evaluation, and it too shares traits with program evaluation but serves a different purpose.

Are you convinced that program evaluation will help you generate insights that inspire action to improve professional learning in your school or district?

What’s Involved in Evaluating a Professional Development Program?

Evaluating a program consists of five interdependent phases that don’t necessarily occur in a perfectly linear fashion.

Phase 1: Engagement and Understanding

Program evaluation is most successful when stakeholders are involved and work collaboratively. Who should be involved? Administrators and teachers connected to the program, along with instructors and participants, will likely be called upon to offer or collect data, and all of them should be included in planning the evaluation.

Think of a professional learning program in your school or district. Is there a clearly articulated description of the program? Are there stated and measurable outcomes? Does everyone involved with the program know what participants are expected to learn, how they might change their practice, and what student outcomes are expected as a result? Does everyone agree on these? Don’t worry! It’s quite common to answer “no” to one or all of these questions.

In the next section, you will learn about the importance of program descriptions and logic models. I’ll share how these tools can be easily created to promote shared understanding of your professional learning programs and how this sets you up to conduct high quality evaluation.

Phase 2: Evaluation Questions

Developing evaluation questions is foundational to effective program evaluation. Evaluation questions form the basis for the evaluation plan and drive data collection.

Conducting evaluation is much like conducting a research study. Every research study starts with one or a few broad questions to investigate. These questions inform how and from whom you collect data. You’ll see examples of the types of questions you might pursue for your professional learning programs later in this article.

Phase 3: Data Collection

Data on professional learning programs is collected to answer your evaluation questions, and all decisions about which data collection strategies to use rest squarely on those questions.

In the section “Collecting the Data” you’ll find a more in-depth look at the advantages and disadvantages of specific data collection strategies, along with ideas for exploring more innovative data sources.

Phase 4: Data Analysis and Interpretation

This is the phase that scares people the most. People often think they need to understand statistics or have advanced spreadsheet skills to do data analysis. They worry that their datasets aren’t perfect or that they haven’t collected data in a “scientific” enough way. They are concerned about whether their data is reliable and valid, especially if it is qualitative and perceptual data, such as answers to open-ended questions from surveys or interviews.

These concerns are understandable, but in truth, there’s no need to get worked up. Later in this article, we will put to rest all of these fears!

Given the types of data used to evaluate professional learning programs, you rarely need statistics beyond simple frequencies and averages. And datasets are seldom perfect. When answering surveys, for example, some people don’t answer certain questions. Others misinterpret questions, or it’s clear they make mistakes answering them.

On one feedback form after a very well-received workshop, a participant checked “Strongly disagree” for every statement when it was clear that “Strongly agree” was the intended answer. How did I know this? Between the statements were comment boxes filled with glowing praise about how much the participant enjoyed the workshop, valued the materials and loved the instructor. It was clear the person mistook “Strongly disagree” for “Strongly agree” based on the location of those responses on the sheet.

Phase 5: Reporting and Use

Evaluation should be conducted with an emphasis on use. Once you interpret the data and glean insights that can inform future decisions about a program, you need to consider how to report new learning to key stakeholders. The formula for effective reporting includes:

  1. identifying appropriate audiences for evaluation reports,
  2. understanding their information needs, and
  3. knowing how they consume information.

Are you reporting to a Board of Education? A superintendent? A group of administrators and teachers? Do they need all the details, just a few key data points, or a brief summary of results? Knowing your audience and how to engage them informs how you create reports, and reports can come in a wide variety of formats (more on this in the reporting section at the end of this eBook).

Program evaluation is an iterative process, and should be conducted with an emphasis on use.

Evaluation as an Iterative Process

Earlier, I mentioned that these phases aren’t necessarily linear. Think of them as a cycle in which Reporting and Use points back to Engagement and Understanding. As you complete an evaluation for a program and make decisions about its future, you may enter another evaluation cycle. Also, as you collect data, you may analyze and report on it even as the evaluation work continues, thus revisiting Phases 3, 4 and 5 multiple times in one evaluation cycle.


3 Surprising Ways to Engage Stakeholders in Evaluating Professional Development (and Ensuring a Quality Evaluation!)

At Greece Central School District in Rochester, NY, we hired certified trainers to facilitate the 8-day Cognitive Coaching® seminar for all of our teacher leaders. We followed up with monthly collegial circle meetings for them to reflect on their learning, share how coaching sessions were going and practice scenarios with one another to refine their skills. We invited the trainers back for yearly refresher sessions. This program was designed to influence changes in teacher practice and ultimately impact student learning.

But, ask each of our teacher leaders to describe what changes in practice might look like and how coaching would impact student learning, and you’ll get almost as many answers as participants.

How do you evaluate a professional learning program if everyone has a different idea of what the program does, who it serves or what success looks like? How might you generate the right questions to ask, identify appropriate expected outcomes and determine what to measure?

Part of evaluating a program is understanding the program and what we expect it to do. And part of a successful evaluation effort is getting stakeholders — teacher participants, principals, district office administrators, Board of Education members, etc. — on board to support the work.

In the previous section, I outlined five phases of program evaluation, with the first being engagement and understanding. Here, I’ll describe three evaluation-related practices: creating a program description, developing a logic model and articulating a program theory.

These can be used to engage stakeholders, build a common understanding of professional learning programs and set up for a successful program evaluation.

Why spend time on crafting these elements? There’s nothing worse in program evaluation than collecting and analyzing data only to realize that the results aren’t useful. They don’t help you answer questions, or inform decisions you have to make about the program. Let’s take a look at these three elements, and how they lay the foundation for successful program evaluation.

Program Description

Why is a program description so important to program evaluation? A program description promotes clarity and contributes to a shared and comprehensive understanding of what the program is, who it is intended to reach and what it expects to accomplish.

A thorough description also identifies why the program was developed. What is the need or problem the program addresses? It’s worth gathering a group of key people to craft a few brief paragraphs to answer these questions, even when the program has been developed by someone else.

It’s OK if your description isn’t the same as another district might come up with. For example, maybe your district held the Cognitive Coaching® seminar for administrators, not teachers, and for a different reason than my district did. Our descriptions of need, target audiences and expected outcomes will look different, even when the program itself may be delivered identically in both places.

Logic Model

A logic model is a graphic representation — a concept map of sorts — of a program’s inputs, activities, outputs and outcomes.

Outputs and outcomes are easily confused. Just remember that outputs are program data, and outcomes are people data! For example, “twelve coaching sessions delivered” is an output, while “teachers using new coaching skills in their practice” is an outcome. The Tearless Logic Model 1 describes an interactive, collaborative (even fun!) process for creating a logic model that is certain to appeal to educators.

Program Theory

When you create professional learning programs, purchase professional learning curriculum or hire consultants to facilitate learning, it’s easy to think you have high quality professional development, but how do you really know? Programs may meet certain characteristics that make them likely to be high quality (e.g., ongoing, job-embedded, data-driven). But how can you connect the dots between what the teachers are learning and how their students will benefit?

Recently, I led a collegial book study on Culturally Responsive Teaching and the Brain by Zaretta Hammond. My team and I had teachers read the chapters and participate in online discussions. But how did we expect that teachers reading a book and writing about their thoughts would lead to improvement for students?

This is where program theory comes in. Program theory describes how the program is supposed to work. Some might call this “theory of change.” The program theory blends elements from the program description and information outlined in the logic model. Most importantly, a program theory articulates the linkages among the components of the logic model.

A key reflective question for articulating a program theory is this: What makes us think that this program, the way it is designed, and these particular program activities will lead to those expected outcomes we identified? A simple program theory for my book study might start with the idea that teachers who read and discuss each chapter will deepen their understanding of culturally responsive teaching.

A few bullet points later, we might articulate how teachers will change their practice, and eventually, there will be a connection to the specific areas of student learning and achievement we want to improve.

The investment you make in program evaluation should result in more clarity and shared understandings of how professional learning is expected to produce results.

The idea is that we’re identifying how we expect our professional learning programs to work. Once we do this, we can identify where we want to ask questions for the evaluation. Do we want to know if teachers are, in fact, deepening their learning? Or do we want to investigate whether teacher learning is resulting in changes in practice? A program theory helps us know where to look and what to look for in a program evaluation.

Creating a program description, developing a logic model and articulating a program theory need not take a great deal of time. The investment, however, is sure to result in more clarity around your professional learning programs and shared understandings of how your professional learning programs are expected to produce results. They lay the foundation for you to identify relevant evaluation questions and set you up to collect the right data for your program evaluation.

The Linchpin: Evaluation Questions

W. Edwards Deming, famous American management consultant, once quipped, “If you do not know how to ask the right question, you discover nothing.”

Evaluation questions form the cornerstone of professional development program evaluation. They both frame and focus the work, pointing you in the right direction for data collection. After all, how would you know what data to collect, and from whom, if you haven’t settled on the questions you’re asking? Crafting the right questions for a particular evaluation project is critical to an effective evaluation effort.

What Are Evaluation Questions?

Think of evaluation questions as research questions. They are broad questions that usually cannot be answered with a simple yes or no, and they require collecting and analyzing data to answer. Most importantly, these are not the individual questions you would ask someone on a survey (I’ll get to those later!).

Evaluation questions are big picture questions that get at program characteristics and are evaluative. That is, the answers to these questions will help you understand the importance, the quality or the value of your programs.

Imagine you are evaluating a professional development program. What do you need to investigate? To answer this, let’s take a quick step back. The previous section showed how to engage stakeholders and generate shared understanding of how your professional learning programs work by creating program descriptions, logic models, and a program theory. Now, you’ll see what you can do with these products to focus your evaluation!

Identifying Information Needs and Decisions

First, consider what you need to know about your professional learning program. This may depend on what decisions you (or others) have to make about it. Do you need to decide whether to continue offering the program? Offer it to an expanded audience or at multiple sites? Eliminate it altogether? Try a different program to address the problem at hand (e.g., improving middle school writing skills)?

Once you’ve identified decisions that need to be made, revisit those three products — program description, logic model and program theory. What are the implicit or explicit assumptions being made about the program? For example, does the program theory state that the professional learning will change teachers’ thinking? Encourage them to use new strategies or resources in their teaching practice? Does the logic model identify certain expected outcomes for students?

Determining the Questions

You may be thinking at this point, “Well, we just need to know if our program is working.” To that I would ask, “What do you mean by working?”

“Well,” you might say, “We want to know if the program is effective.” And I would answer with another question: “What do you mean by effective?”

And so it would go until you can define and describe exactly what you are looking for.

Again, revisit your three documents. As you review the program description and program theory, what clues do you have about what it should look like if the program is working or effective? Try to describe this scenario in as much detail as possible.

Here’s an example:

If your professional learning program on writing instruction for middle school teachers is working (or effective, or successful…) you would hear teachers saying that they’ve tried the new strategies they learned, and are now using them in their daily practice. They would be able to show you how they are now teaching writing using the new templates. They would be able to show you “before” and “after” examples of student writing, and be able to describe in specific ways how students’ writing has improved.

Once you have defined success, you can turn these ideas into evaluation questions. One of my favorite tricks for doing this is to ask “To what extent…” questions. For the example above, these questions might look like this: “To what extent are teachers using the new writing strategies and templates in their daily practice?” and “To what extent has students’ writing improved?”

Posing these questions may also inspire some sub-questions, such as which strategies teachers are using most often or which aspects of students’ writing show improvement.

As you can see, your list of evaluation questions can grow quite long! In fact, you may be able to identify dozens of potential evaluation questions. To keep the evaluation feasible, prioritize these and settle on perhaps just 1-2 big questions, especially if each has a couple of sub-questions.

Need More Inspiration?

Here are generic examples of the types of evaluation questions you may need to ask. Some questions might be formative in nature. That is, they may be used to inform potential changes in the program. Think of these as process questions or implementation questions: How well is the program being implemented? Who is participating? Is it meeting participants’ learning needs?

Other questions might be summative in nature. These questions ask about outcomes, impacts or changes that we believe can be attributed to the program: To what extent are teachers changing their practice? To what extent is student learning improving?

Using a checklist may be helpful in determining whether your questions are appropriate and will be effective.


Evaluation Questions Lead to Data Collection

As I wrote previously, you can conceive of program evaluation as occurring in five phases: Engagement and Understanding, Evaluation Questions, Data Collection, Data Analysis and Interpretation, and Reporting and Use.

As you can see from the above examples, evaluation questions point you to where and from whom to collect data. If your question is, “To what extent are teachers using the new resources?” then you know you need to collect data from teachers. If your question is, “Are students’ writing skills improving?” you know you will likely need student work samples as evidence.

In each case, you will have to determine the feasibility of using different data collection strategies such as surveys, interviews, focus groups, observations or artifact reviews (e.g., looking at lesson plans or student work samples). Each of these strategies features a set of distinct advantages and disadvantages and requires different resources.

Collecting the Data: Using Surveys, Interviews, Focus Groups and Observations

Once you have settled on a small set (about 1-3) of evaluation questions, set your sights on how to collect data to answer them.

There are a multitude of ways to collect data to answer evaluation questions. Surveys (aka questionnaires), interviews, focus groups and observation are the most commonly used, and each has distinct advantages and disadvantages.

You’ll choose data collection strategies based on these trade-offs, along with which strategies align best with your evaluation questions.

Let’s take a quick look at each strategy:

Surveys

A survey is “an instrument or tool used for data collection composed of a series of questions administered to a group of people either in person, through the mail, over the phone, or online.” 2 Surveys tend to have mostly closed-ended items — questions that have a question stem or statement, and a set of predetermined response options (answer choices) or a rating scale. However, many surveys also include one or more open-ended questions that allow respondents — the people taking the survey — the opportunity to write in their own answers.

Many surveys are still administered on paper, but they’re conducted more frequently now in online environments. Professional development management systems, such as Frontline Professional Growth, allow feedback forms to be attached to professional development courses, and also feature the ability to construct and administer follow-up surveys.

The main advantage of a survey is that it can reach a large number of respondents. With electronic platforms, one click can send a survey to hundreds or thousands of respondents. Survey data is also relatively easy to analyze, and allows for easy comparison of answers across groups (such as elementary vs. high school teachers, or different cohorts of participants). The main disadvantage is that you lack the opportunity to ask respondents follow-up questions, and quantitative survey data is rarely as rich and detailed as the data that result from interviews and focus groups.

Interviews

An interview is a set of questions asked in person or over the phone to one individual at a time. It’s essentially a conversation between interviewer and respondent. In contrast to surveys, interviews are largely composed of open-ended questions with the interviewer taking notes or recording respondents’ answers for later analysis. Interviewers can use “probes” to elicit more detailed information from respondents. Probes are specific follow-up questions based on how a respondent answers, or they can be more generic, such as, “Can you say more about that?”

An interview’s main advantage is that it allows you to deeply understand a respondent’s perspective and experience. An interview can give you a strong sense of how someone experienced new learning from professional development, and how that learning plays out in their teaching practice. The main disadvantage is that you usually don’t have time to interview more than a handful of people, unlike the hundreds of responses you can collect with surveys. Interview data is also qualitative, and thus a bit time-consuming to analyze.

Focus Groups

A focus group is simply a group interview. Typically a small group of people (ideally about 6-8) are brought together and asked a set of questions as a group. While one focus group member may answer a question first, others then chime in and offer their own answers, react to what others have said, agree, disagree, etc. The focus group functions like a discussion. It’s best to have both an interviewer and a notetaker and to video record for later review and analysis.

The main advantage of a focus group is that when people respond to questions in a group setting, they build off each other’s answers. Often, the conversation inspires respondents to think of something they may not have remembered otherwise. Also, focus groups allow you to interview more people than individual interviews. The main disadvantage is the same as with interviews — you can still reach only a small number of people, and since the resulting data is qualitative, it can take time to analyze.

Observations

Observing teachers and students in action can be one of the best ways to capture rich data about how teacher professional learning plays out in the classroom. Typically, observers use a protocol informed by the evaluation questions that outlines what the observer is looking for and what data to collect during the classroom visit.

The main advantage of observations is in witnessing first-hand how curriculum is being implemented, how instructional strategies are being used and how students are responding. The main disadvantage is in the potential for conflict, especially if positive relationships and trust aren’t a strong part of the school culture. While many teachers willingly invite observers into their classrooms, there can be tensions among colleagues and with unions who want to ensure that program evaluation does not influence teacher evaluation. It is critical to clearly communicate that data collected for professional development program evaluation is not to be used for teacher evaluation.

What to Do With All That Data

I remember struggling through my college statistics class. Just the term “data analysis” made me cringe. After all, I was going to be a teacher, not a scientist! Now that I’ve spent years collecting and analyzing data, I’ve learned I don’t need to be a statistician to do professional development program evaluation.

Whether you love or hate the idea of analyzing data, you probably don’t have loads of time on your hands — but you still need answers. You need actionable knowledge in order to report out results that inform smart decisions about professional learning. Good news! In this section, I’ll share strategies for analyzing and interpreting data in a painless way that doesn’t require unlimited time or advanced skills.

What is Data Analysis?

Data analysis and interpretation is about taking raw data and turning it into something meaningful and useful, much in the same way you turn sugar, flour, eggs, oil and chocolate into a cake! Analyzing data in service to answering your evaluation questions will give you the actionable insights you need. It’s important to remember that these questions drive data collection in the first place.

Since you’re generally not running experiments with randomly sampled study participants and control groups, you don’t need advanced statistical calculations or models to learn from professional development data. You mainly need to analyze basic survey data, and to do that, you will look at descriptive statistics.

Summarizing the Data

Raw data rarely yields insights. It’s simply too overwhelming to scan rows and rows of numbers or lines and lines of text and make meaning of it without reducing it somehow. People analyze data in order to detect patterns and glean key insights from it.

First, it’s helpful to understand the proportion of professional learning participants who complete a survey. This is your survey response rate. Your response rate is simply the number of people who completed the survey divided by the total number eligible to take the survey.

Example:
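Suppose, hypothetically, that 80 teachers were invited to take a follow-up survey and 60 completed it. Here’s a minimal Python sketch of the calculation (the numbers are illustrative, not from a real program):

```python
# Hypothetical numbers: 60 completed surveys out of 80 eligible participants.
completed = 60
eligible = 80

response_rate = completed / eligible
print(f"Response rate: {response_rate:.0%}")  # prints "Response rate: 75%"
```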

Next, use descriptive statistics, including percentages, frequencies and measures of central tendency, to summarize the data. Measures of central tendency — the mean, median and mode — are summary statistics that describe the center of a given set of values, while the range describes how spread out those values are.

Summary Data

STATISTIC | DEFINITION | EXAMPLE / NOTES
Percentage / Frequency | A proportion of the whole group | 73% of participants felt they learned a great deal from this session. 27 high school teachers and 3 elementary teachers attended this session.
Mean | The statistical average of a set of values | The mean score on the post-test was 87%.
Median | The midpoint of a set of values | The median years’ teaching experience among participants was 16.
Mode | The most common value in a set of values | Not often reported
Range | The difference between the highest and lowest values in a set of values | Not often reported
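
If your survey data lives in a spreadsheet export, Python’s built-in statistics module can compute all of these summaries. Here’s a minimal sketch with hypothetical ratings (illustrative data, not from a real program):

```python
from statistics import mean, median, mode

# Hypothetical 1-5 agreement ratings from a feedback survey (illustrative).
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 5, 4]

# Percentage / frequency: how many participants agreed (rated 4 or 5)?
agree = sum(1 for r in ratings if r >= 4)
print(f"Agreed: {agree} of {len(ratings)} ({agree / len(ratings):.0%})")

print(f"Mean:   {mean(ratings):.1f}")            # 4.2
print(f"Median: {median(ratings)}")              # 4
print(f"Mode:   {mode(ratings)}")                # 5
print(f"Range:  {max(ratings) - min(ratings)}")  # 3 (5 minus 2)
```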

Choosing the Right Statistic to Report

How do you know when to use mean vs. median? When you know you have outliers, use the median. Here’s an example of how these measures can differ greatly in the same dataset. Let’s say you want to describe a group of 16 professional learning participants in terms of how much teaching experience they have.

Which summary statistic best describes this group of participants? The mean can be very sensitive to outliers, while the median is not. Suppose the mean of the dataset is 9 years but the median is only 3. This means that half of the participants have 3 or fewer years’ experience. In this case, knowing that half of the participants were novice teachers may give you greater insight, and better inform future decisions about professional development, than knowing the group’s average years of teaching experience.
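
Here’s a minimal Python sketch using one hypothetical set of sixteen values chosen to be consistent with those numbers:

```python
from statistics import mean, median

# Hypothetical years of teaching experience for 16 participants
# (illustrative values chosen to give a mean of 9 and a median of 3).
years = [1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 5, 6, 25, 27, 28, 29]

print(f"Mean:   {mean(years)}")    # 9 -- pulled upward by the four veteran outliers
print(f"Median: {median(years)}")  # 3.0 -- half the group has 3 or fewer years
```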

Next, you may want to cross-tabulate results. Cross-tabulating means looking at your dataset by subgroup to compare how different groups answered the questions.

For example:
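Suppose, hypothetically, that elementary and high school teachers answered the same feedback item. A minimal sketch using the pandas library (the column names and responses are illustrative):

```python
import pandas as pd

# Hypothetical survey responses (illustrative data, not from a real program).
df = pd.DataFrame({
    "school_level": ["Elementary", "High School", "Elementary",
                     "High School", "Elementary", "High School"],
    "learned_a_great_deal": ["Agree", "Strongly agree", "Strongly agree",
                             "Disagree", "Agree", "Agree"],
})

# Compare how each subgroup answered the question.
print(pd.crosstab(df["school_level"], df["learned_a_great_deal"]))
```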

Most online survey programs make cross-tabulation easy with built-in features, but you can also use pivot tables if your dataset is in a spreadsheet.

Descriptive Statistics Have Limits

Caution: when survey respondents haven’t been randomly sampled and aren’t required to respond, these types of analyses cannot be used to generalize to all teachers who participated in the professional learning. It’s always possible that more satisfied participants completed the survey and more dissatisfied participants did not, or that more novice teachers completed the survey than veteran teachers.

Descriptive statistics are helpful for telling what happened, but they can’t determine causality. They can’t tell you why something happened. You may know that 87% of participants feel they learned a great deal from participating in professional learning, but you won’t know what caused them to learn. That’s where qualitative data can help fill in the blanks.

Qualitative Data Analysis

Surveys may include some open-ended questions, or you may have conducted individual interviews or focus groups as part of professional development program evaluation. Crafting these questions carefully can help you understand why people experienced professional learning the way they did.

But what do you do with all of these answers, the words people write or say in response to these open-ended questions? Rigorous qualitative data analysis involves significant study to develop the needed skills, but you can still take a few easy steps to make sense of qualitative data in a credible way that will give you insight into participants’ experiences in professional learning.
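
For example, one easy step is to read each response, assign it one or more short codes, and tally the codes to surface recurring themes. A minimal Python sketch with hypothetical responses and hand-assigned codes:

```python
from collections import Counter

# Hypothetical codes assigned by hand after reading each open-ended answer
# (the answers and codes are illustrative).
coded_answers = [
    ["modeling", "strategies"],   # "The modeling of strategies was most useful..."
    ["time", "strategies"],       # "I wanted more time to practice the strategies..."
    ["time", "collaboration"],    # "More time to collaborate with my team..."
]

# Tally the codes to surface recurring themes across respondents.
theme_counts = Counter(code for answer in coded_answers for code in answer)
print(theme_counts.most_common())  # e.g., [('strategies', 2), ('time', 2), ...]
```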

Interpreting the Data

Interpreting data is attaching meaning to it. For example, let’s say that 37% of professional learning participants indicated they learned something new. At first glance, that doesn’t sound like a particularly good outcome, does it? Too often, people view raw data like this in either a positive or negative light without taking the time to fully understand what’s really going on.

What if I told you this was a refresher course for people who had already learned the material? In that case you might then interpret it as a positive outcome that more than one third picked up new learning.

Numbers don’t have inherent meaning. It’s up to us to put them in context to make sense of them.

What About Statistical Significance?

People like to ask about this, and most likely, what they’re really asking is, “Are the results you’re reporting on important? Are the differences we are seeing meaningful to us in any way?” Statistical significance is a technical term that has to do with whether the results of an experiment reflect a real effect or are more likely due to chance. In evaluating professional learning programs, you are not likely to use the statistical analyses that produce measures of statistical significance.

What If I Have a Small Sample?

You may be wondering, “I only surveyed 20 people — is that really enough data to give me an accurate picture of what’s really going on?” Absolutely! Remember, it’s about answering your evaluation questions to inform future professional learning programs. Even with what might seem like low response rates, you can still gain valuable insights, and make smart decisions for your school or district.

How to Create Effective Reports for Communicating Professional Development Program Evaluation Results

Wouldn’t it be great if you knew when your professional learning programs were successful? What if you knew more than just the fact that teachers liked the presenter, were comfortable in the room or learned something new?

Wouldn’t it be better to know that teachers made meaningful changes in teaching practice that resulted in increased student learning?

I posed these questions in the introduction to this eBook. Let’s say you have moved through the first four of the five phases of program evaluation: Engagement and Understanding, Evaluation Questions, Data Collection, and Data Analysis and Interpretation.

At this point, you have a solid understanding of program outcomes. You have a perspective on what teachers learned, if they’re using what they learned in the classroom, and perhaps even how students are responding to changes in teaching practice. Now what? Most likely, you’re not the only one in your district who needs this information. How do you share evaluation results and with whom?

The fifth phase in the cycle of program evaluation is reporting and use of results. In this phase, consider who your audiences are, what information they need and how they consume information.

Most importantly, though, consider why you will create and share evaluation reports. The answers to these questions form your communication plan.

Kylie Hutchinson, author of A Short Primer on Innovative Evaluation Reporting 3, offers this insight:

The reason to report and disseminate results is tied to key decisions that need to be made about professional development programming. To be meaningful, evaluation reports need to be used to determine whether a program should be continued, expanded, eliminated or changed in specific ways to improve outcomes.

Reporting and sharing results is vital to making key decisions around professional development programming.

What Belongs in a Professional Development Program Evaluation Report?

The most comprehensive form of an evaluation report might include all the details: a description of the program, the evaluation questions, the data collection and analysis methods, and the findings, conclusions and actionable recommendations.

While this list may appear logical and sequential, the order also makes for a less engaging report. To ensure the use of evaluation results, many evaluators now encourage beginning reports with the exciting part — the findings and conclusions, and actionable recommendations.

Match the Report to the Audience

If you take only one lesson from this section, let it be this: Match your report to your audience. Consider who needs to know the answers to your evaluation questions and understand your findings from data analysis. Who needs to use the information you share to make key decisions about professional learning? Who might be interested in results because they are in a position to support professional learning programs?

Creating evaluation reports to meet the needs of specific audiences involves three key steps:

  1. Identify your audiences. Are they administrators? Teachers? Board of Education members? Parents? Community members?
  2. Understand their information needs. What is important to them with regard to professional learning in general or the specific topic? How does the professional learning program connect to their work and responsibilities?
  3. Know what actions they will take with the information in your report. Are they decision-makers? Do they sit at the table with decision-makers? Are they likely to share the information with others? Are they potential supporters or detractors?

Consider Multiple Forms of a Report

On the surface, it may sound like a lot of work to create multiple reports, but with careful planning it’s quite manageable. Creating different versions of reports for different audiences can be an enjoyable and rewarding part of the evaluation process and contributes to deepening your own learning as you dive into the data and help others make sense of it. Think about what would hold the most appeal for your stakeholders.

Do you have an audience who needs all the details? Another who needs only a few key data points or a brief summary?

Choose the audience who needs the highest level of detail and create that report first. Then, work to strip away details the other audiences don’t need. You can always make the more comprehensive forms of the report available if they want access to them.

Beware TL;DR

Few people I know love spending endless hours writing long reports. But if you’re one of those people, here is another reason to carefully consider your audience and their information needs. “TL;DR” is internet slang for “too long; didn’t read.” Part of the problem isn’t necessarily the length of some reports, but the length combined with a report that isn’t visually appealing. It just doesn’t draw the reader in and keep them there.

Fortunately, there are many ways to avoid TL;DR in evaluation reporting by creating different versions for different audiences, using creative or innovative formats, and embedding visual elements.

Creating different versions of reports for different audiences will help get the right information to the right people.

Think Outside of the Document

A written report is far from the only way to communicate evaluation results, and it’s perfectly OK to think flexibly and creatively here. I’m not necessarily suggesting a song and dance routine, but believe me, it has been done!

Alternatives to written reports or presentations range from one-page summaries and infographics to slide decks, dashboards and short videos.

Make It Visual

No matter the style, size or length of your report, be sure to include visuals to engage your audience. Use relevant photos, icons, or illustrations along with charts or graphs to draw the audience’s attention to the main points. There are many, many websites where you can find free stock photography, but consider taking your own photos. It isn’t that difficult, requires nothing more than a smartphone and brings a stronger sense of ownership and connection to the report and to the program. Your audiences will see your teachers in your classrooms doing the real work involved in professional learning, and that is more likely to inspire engagement with the report.

Most audiences also want to see data. They want to quickly and easily understand key findings. Charts or graphs can be efficient and powerful ways to communicate data, and they don’t need to be sophisticated or complex to have impact. Simple bar, line, or pie graphs can communicate meaningful data. I’ve been actively honing my data visualization skills in my spare time by simply reading blogs and books and experimenting. Little by little, I acquire new skills and attach them to prior knowledge to build a robust toolbox and solid repertoire of visualizations I can now use to communicate program evaluation results.
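
If you work in Python, a simple chart takes only a few lines with the matplotlib library. A minimal sketch with hypothetical survey results (the item wording and percentages are illustrative):

```python
import matplotlib.pyplot as plt

# Hypothetical survey results (illustrative percentages).
labels = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]
percents = [4, 9, 41, 46]

fig, ax = plt.subplots()
ax.barh(labels, percents)  # a simple horizontal bar chart
ax.set_xlabel("% of participants")
ax.set_title('"I learned strategies I can use in my classroom"')
fig.tight_layout()
fig.savefig("survey_results.png")
```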

Program Evaluation Is Essential

Professional development remains a critical component of school success. It is essential to continue to create and implement high quality professional learning programs within the constant constraints of budgets and time. A rigorous program evaluation process will help you deeply understand how programs are performing in your school environment and is key to educator professional growth and the continuous improvement of our schools.


About Sheila B. Robinson

Sheila B. Robinson, Ed.D., of Custom Professional Learning, LLC, is an educational consultant and program evaluator with a passion for professional learning. She designs and facilitates professional learning courses on program evaluation, survey design, data visualization, and presentation design. She blogs about education, professional learning, and program evaluation at www.sheilabrobinson.com. Sheila spent her 31-year public school career as a special education teacher, instructional mentor, transition specialist, grant coordinator, and program evaluator. She is an active member of the American Evaluation Association, where she is Lead Curator and content writer for its daily blog on program evaluation and Coordinator of the Potent Presentations Initiative. Sheila has taught graduate courses on program evaluation and professional development design and evaluation at the University of Rochester Warner School of Education, where she received her doctorate in Educational Leadership and a Program Evaluation Certificate. Her book Designing Quality Survey Questions was published by Sage Publications in 2018.

1 Lien, A. D., Greenleaf, J. P., Lemke, M. K., Hakim, S. M., Swink, N. P., Wright, R., & Meissen, G. (2011). Tearless Logic Model. Global Journal of Community Psychology Practice, 2(2). Retrieved November 9, 2018, from https://www.gjcpp.org/pdfs/2011-0010-tool.pdf.
2 Robinson, S. B., & Leonard, K. F. (2019). Designing Quality Survey Questions. Thousand Oaks, CA: Sage.
3 Hutchinson, K. S. (2017). A Short Primer on Innovative Evaluation Reporting. Gibson, BC: Community Solutions Planning & Evaluation.
