CITRAL helps faculty and departments create assessments that improve teaching and learning, and interpret data that can contribute to assessment and improvement. Contact us to learn more about:

  • Program learning outcome assessment associated with UCSB's institutional accreditation processes
  • "Data About Data," a project that helps the institution to identify where and how programs can use data to drive research and decisions about teaching and learning
  • Helping programs, departments, or faculty design assessments

How can CITRAL help with your evaluation?

Creating a meaningful evaluation design requires iteration, thought, and maybe even a few attempts. The more tightly the program’s purpose and evaluation questions are woven together, the easier it will be to generate an evaluation design that leads to meaningful results. In other words, if your program does not have an explicit purpose that connects theory to action, you will not have an evaluation ready to go the next day. That said, we can help you get there!
 
The CITRAL team is happy to work with your department, program, or office to design and undertake studies of questions associated with learning or other University-related outcomes.

Examples of possible assessment-related topics include:

  • Understanding how learning occurs in a particular context
  • Evaluating the role of an activity or intervention on learning
  • Designing formative and/or summative assessments of learning
  • Understanding how your program is working

This brief guide outlines some preliminary steps that can help us help you with your assessment. If you are able, please make a few notes about these items before meeting with a CITRAL staff member about your assessment. If anything in this guide is unfamiliar (terminology, ideas, etc.), not to worry! We can help with that, as well.

Part I: Why Do You Want to Do an Evaluation?

Evaluation can serve multiple purposes. It can be formative, i.e., providing information about what is happening while the program is in progress so that you can make changes as you go. It can also be summative, i.e., providing a judgment about the program based on student outcomes or results. These are just two of the main purposes often connected to assessment and evaluation, though there are others as well, such as purely descriptive forms of evaluation.

Consider what kinds of questions you want to ask. Evaluations can ask many kinds of questions. The CDC, for instance, lists possible purposes of evaluations as “implementation assessment, accountability, continuous program improvement, generating new knowledge, or some other purposes.” For different types of assessment or evaluation questions, see “Example Assessment Questions” below.

Consider what resources you may have to answer your questions. What sources of data might help? If you’re not sure, this is a great time to contact CITRAL.

Example Assessment Questions

  • Descriptive: Used to help you learn about what’s going on. For instance, you might ask, “What are students (participants in my program) doing in my program?” or the harder-to-answer “What is my program causing students to do?” This type of question is more exploratory.
  • Based on intensity: “To what extent are students enrolled in the program doing something?”
  • Formative: “How are students doing in the program compared to students in a different program?”
  • Summative: A form of assessment usually associated with an evaluation question about program effectiveness. It is a final program check: “What was the effect of the program?”
  • Effectiveness: Do learning/student outcomes reflect the goals of the program, and to what extent? Is it because of the program or something else?

Part II: How Can CITRAL Help with Your Evaluation?

Program Design: If you are designing your program, we can help you figure out how to develop or utilize theories and connect them to program interventions, lessons, or actions.

Program in Progress: If your program is in progress or has already happened, we can consult with you on how to explore the program and develop theories of action that can be turned into research questions.

Development of valid measures: CITRAL can help develop or create surveys or measures that answer the questions you may have in a way that allows data to be aggregated. This will require that you already have a strong theory and program design; otherwise, we’ll work with you to establish what your program does and how it interacts with students. In some cases, a program evaluation may require multiple measures. (For instance, if multiple people are responsible for a part of the program, you may want to generate a measure of “fidelity,” or the extent to which each person implements the program as designed.)

Gathering and analysis of quantitative and qualitative evidence:

  • Data analytics and visualization: If you have data that you need analyzed, we can help you analyze it (remember, data without theory won’t tell you much!). Methods may be as simple as descriptive statistics or as complicated as latent variable models.
  • Qualitative analysis: Speaking with participants can be a great way to develop theories or answer questions, and we can help. Observation, focus groups, and interviews are great ways to make meaning of data.
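
As a rough illustration of the simpler end of that spectrum, the sketch below computes descriptive statistics for a handful of made-up, survey-style records. The column names and values are hypothetical placeholders, not a real CITRAL dataset.

    # A minimal sketch of descriptive analysis for program-evaluation data (Python/pandas).
    # All column names and values are hypothetical placeholders.
    import pandas as pd

    responses = pd.DataFrame({
        "participated": [True, True, True, False, False, False],  # took part in the program?
        "pre_score":    [62, 55, 70, 64, 58, 71],                 # hypothetical pre-program score
        "post_score":   [74, 69, 82, 66, 60, 73],                 # hypothetical post-program score
    })

    # Simple descriptive statistics: counts, means, spread, and quartiles.
    print(responses.describe())

    # Average score change for participants vs. non-participants.
    responses["change"] = responses["post_score"] - responses["pre_score"]
    print(responses.groupby("participated")["change"].mean())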

Part III: Before You Begin...

1. Write one sentence to answer the question: What do I want to know? Use one of these starters or create your own:

  • How are students learning about <this thing> in <this place>? or How do students understand <this idea> in <this context>?
  • What is the role of <this activity, assignment, intervention, etc.> in the development of <this aspect of student learning>?

2. Write 2-3 sentences to answer the question: What are the student/person/program outcomes of interest that would be worth investigating to answer the question above? Use statements that are concrete (not just “students learned more”).

Use this starter or create your own: The purpose of <this thing> is to <lead to this result>. This will occur because <theory or idea about students, learning, assignment, etc. that informs why you believe this will take place.>

3. Given the student outcomes that you’re interested in, what are the mechanisms that you think are responsible for these outcomes?

4. Write a few sentences describing the ultimate goal of the context in which the program is situated, and connect the activity, assignment, etc. to that goal. Use this starter or create your own:

  • The goal of this <course, program, etc.> is to <do this thing>. Ideally, a participant will leave the program <learning to do or doing something that they didn’t do prior to the program>. This <mechanism of the program/what aspect of the program you’re focusing on> will help because it <how it contributes>. <This is how I think it contributes.>

Part IV: What Helps and What Hurts the Meaningfulness of an Evaluation?

Programs operate in the real world, and we understand that not everything about a program’s design will be ideally constructed for meaningful program evaluation. However, it is worth understanding which program design elements lead to more meaningful inferences from a program evaluation and which features may hinder them. Typical design elements that help or hinder a meaningful evaluation are listed below. An ideal program evaluation will begin with clear connections between the program design, the sources of evidence, the program outcomes, and the research questions.

No program design is perfect, and even the most thoroughly designed programs will have design elements in both the “Helps” and “Hinders” categories. However, it is worth noting what program designers may need to have ready before even starting an evaluation. Ultimately, program evaluation is all about design and theory.

Helps with Program Evaluation:

  1. Clear program goals
  2. A clear question about the program goal(s)
  3. A logical, or systematic, approach to answering this question
  4. Logical and strong sources of evidence for outcomes and program indicators

Hinders Program Evaluation:

  1. No clear program goals
  2. No aspect of the program clearly connects the goals to the program’s implementation
  3. Questions that don’t match the program or are opaque and non-specific
  4. No logic to how evidence is gathered

Part V: Evaluation Process

A good program evaluation actually starts with the design of the program itself. However, once the program has been implemented and you want to run an evaluation, meeting with an evaluator and carrying out the evaluation will look something like the list below:

  1. Consider and articulate the program goals.
  2. Why do you want to do an evaluation? How would you like to use the results?
  3. Consider what outcomes would be indicative of the program’s success.
  4. Consider what indicators might be necessary to improve the meaningfulness of this evaluation (for instance, a measure of program fidelity – to what extent was the program implemented?)
  5. Develop a logic of inquiry: What forms of evidence will serve as indicators/outcomes? How does this connect to the program goals and design?
  6. Develop the measures/surveys/interview protocols necessary to answer the questions.
    1. Decide whether you want to quantify.
    2. Start with what you know about the students/program participants and what they do in your program.
    3. Use observational information (including simple program observation or interviews) to develop the questions you want to ask on your measures.

Part VI: Logic Models and Evaluation Design: What’s the Best Way to Answer Your Question?

“Logic model” is a commonly used term in the field of evaluation. It is essentially a visualization of how the whole program, and the evaluation of it, is set up. A logic model usually includes elements such as the design and description of the program, the theory behind it, the people involved and their roles, and the program’s intended effect. By visualizing the connections between these facets and the intended effect, the logic model helps clarify the evaluation design and the claims that can be made based on that design.
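
As a loose, hypothetical sketch of the idea, a logic model can be written down as a simple mapping from inputs and activities to outputs, outcomes, and the theory connecting them. Every program element named below is a placeholder, not an actual CITRAL or UCSB program.

    # A minimal, hypothetical sketch of a logic model as a plain data structure (Python).
    # Every program element below is a placeholder used only to show the shape of a logic model.
    logic_model = {
        "inputs":     ["two peer tutors", "weekly one-hour sessions", "practice problem sets"],
        "activities": ["guided problem solving", "short written reflections"],
        "outputs":    ["sessions attended", "reflections submitted"],
        "outcomes":   ["improved problem-solving scores", "greater confidence reported on surveys"],
        "theory":     ["structured practice with feedback improves problem-solving skill"],
    }

    # Print the model so the chain from inputs to intended effects is easy to read and critique.
    for facet, items in logic_model.items():
        print(f"{facet}: {'; '.join(items)}")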

Some Types of Evaluation Designs:

Experimental or Randomized Controlled Trial: The hallmark of evaluation design is often an experiment of some sort, typically in the form of a randomized controlled trial (RCT). In this ideal, program subjects are randomly assigned to take part in a program. There are various methods for improving the “internal validity” of experimental designs like these; internal validity describes the extent to which conditions are met such that causal claims can be made under the experimental conditions. (A minimal sketch of random assignment and a simple outcome comparison follows the list below.) Some common threats to internal validity are:

  1. The group that does not take part in the program still somehow garners the effect of the program through interaction with it.
  2. No randomization: students opt in or opt out, so you cannot eliminate causes of student outcomes external to the program. It should be noted that sometimes this selection of students into programs may be of interest in itself.
  3. There is no control group, so there is no information about what would have happened if program participants had not received the program treatment.
  4. No pre-and-post design when one would expect change over time. Differences between the groups may simply be an artifact of pre-existing differences between the two groups; randomization may help with this, though.
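
Here is the minimal sketch mentioned above: random assignment to a hypothetical program followed by a simple comparison of group means. All data are simulated placeholders, and the effect size is invented purely for illustration.

    # A minimal sketch of the RCT logic described above (Python): randomly assign participants,
    # then compare average outcomes between the program and control groups.
    # All data are simulated placeholders, not results from any real program.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200

    # Random assignment: roughly half of the participants take part in the program.
    in_program = rng.random(n) < 0.5

    # Simulated outcome: a hypothetical score, with a small invented program effect for the treated group.
    outcome = rng.normal(loc=70, scale=10, size=n) + np.where(in_program, 3, 0)

    # With random assignment, the difference in group means estimates the program effect.
    t_stat, p_value = stats.ttest_ind(outcome[in_program], outcome[~in_program])
    print("difference in means:", round(outcome[in_program].mean() - outcome[~in_program].mean(), 2))
    print("t =", round(float(t_stat), 2), "p =", round(float(p_value), 3))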

Quasi-Experimental: There is also a vast set of tools available when an experiment is not possible.

  1. A natural experiment. For instance, there may be some cutoff (test score, grade, GPA) such that students/program participants just below or above it are required to participate or not participate in the program. The control group would be students within a narrow range on the other side of the cutoff (a minimal sketch of this kind of cutoff-based comparison appears after this list).
  2. Model-based methods: These are a bit more controversial in terms of making causal claims (in econometrics, they may include fixed-effects models and propensity score matching).
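
Below is a rough sketch of the cutoff-based comparison described in the first item. The cutoff, bandwidth, and data are hypothetical placeholders; a full regression discontinuity analysis would also model the trend in the running variable rather than stopping at a simple difference in means.

    # A rough sketch of a cutoff-based (natural experiment) comparison (Python).
    # The cutoff, bandwidth, and all data below are hypothetical placeholders.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 500
    cutoff = 70

    # Hypothetical running variable (e.g., a placement test score); students below the cutoff
    # are required to take the program.
    df = pd.DataFrame({"test_score": rng.normal(75, 10, n)})
    df["in_program"] = df["test_score"] < cutoff

    # Hypothetical outcome with a small invented program effect for participants.
    df["outcome"] = 0.5 * df["test_score"] + rng.normal(0, 5, n) + np.where(df["in_program"], 4, 0)

    # Keep only students within a narrow band around the cutoff, where the two groups
    # should be roughly comparable, and take a simple difference in means as a first look.
    # (A full analysis would also adjust for the remaining trend in test_score.)
    bandwidth = 3
    near = df[(df["test_score"] > cutoff - bandwidth) & (df["test_score"] < cutoff + bandwidth)]
    effect = (near.loc[near["in_program"], "outcome"].mean()
              - near.loc[~near["in_program"], "outcome"].mean())
    print("local difference in means near the cutoff:", round(effect, 2))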

Observational: When there is no way to construct the counterfactual, i.e., “What would have happened if participants had not received the treatment of the program?”, you are in the realm of an observational study. This may be what you want to do anyway, if you’re asking a descriptive question.

Important Terms You May Hear

Effectiveness evaluation: An evaluation or assessment whose primary goal is to examine participant outcomes against the program’s goals in order to determine whether, and how well, the program accomplished those goals.

Exploratory Analysis: A mode of research in which one has few or no a priori hypotheses about the subject of research. It is a way to see just what’s happening.

Formative Evaluation/Assessment: An evaluation conducted during the program, or after an early version of it, to better understand learning (or other program participant) outcomes, with the primary goal of modifying the program to better accommodate needs and goals.

Implementation Evaluation: An evaluation whose goal is to understand whether, and to what extent, the program’s design is actually used and applied.

For a more thorough primer on program evaluation, the US Centers for Disease Control and Prevention (CDC) has some great resources: https://www.cdc.gov/eval/approach/index.htm