Analyze and Interpret Findings

One of the more difficult aspects of assessment is making sense of the data collected to evaluate student learning. To do this, it is necessary to look for patterns in the data, differences within it, and relationships among the data points. These can uncover important information about the connections among program goals, assessment measures, and student learning, as well as help identify the program's collective strengths that should continue to be developed and areas of need that should be addressed. In addition, the analysis may reveal the limits of the data and cause you to question whether the chosen assessment tool is an accurate measure of student learning.

It is also important to compare your results against any targets you set to see whether there are meaningful differences between the targets and the actual findings. This is a good time to evaluate the quality of your assessment strategies: are they valid measures of student learning, that is, accurate, representative, and useful for deciding on actions to improve the program? For example, if a sample of student work is used, can you assume the sample accurately reflects the group from which it was drawn? If test items were used, how well does each item discriminate between high and low scorers? How difficult is each item? The questions listed below raise additional issues to keep in mind when analyzing and interpreting the data. Depending on the answers to these questions, you may want to rethink or modify the assessment plan.
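For instance, item difficulty is commonly expressed as the proportion of students who answered an item correctly, and discrimination as the difference in that proportion between high-scoring and low-scoring students. The sketch below is a minimal, hypothetical Python example (not tied to any particular testing platform; the upper/lower 27% grouping is one common convention, used here as an assumption) showing how these two statistics might be computed from a simple score matrix. Items answered correctly by nearly everyone or almost no one, or items with low or negative discrimination, may warrant a closer look.

    # Illustrative sketch only: classical item-analysis statistics
    # (difficulty and discrimination) from a 0/1 score matrix.
    # Variable names and the 27% grouping rule are assumptions for
    # the example, not a prescribed method.

    def item_analysis(responses, group_fraction=0.27):
        """responses: one row per student; 1 = correct, 0 = incorrect."""
        n_students = len(responses)
        n_items = len(responses[0])

        # Item difficulty: proportion of students answering each item correctly.
        difficulty = [
            sum(row[i] for row in responses) / n_students for i in range(n_items)
        ]

        # Rank students by total score, then compare upper and lower groups.
        ranked = sorted(responses, key=sum, reverse=True)
        group_size = max(1, int(round(n_students * group_fraction)))
        upper, lower = ranked[:group_size], ranked[-group_size:]

        # Discrimination index: proportion correct in the upper group minus
        # proportion correct in the lower group (ranges from -1 to 1).
        discrimination = [
            (sum(row[i] for row in upper) - sum(row[i] for row in lower)) / group_size
            for i in range(n_items)
        ]
        return difficulty, discrimination

    # Hypothetical data: 6 students, 3 test items.
    scores = [
        [1, 1, 1],
        [1, 1, 0],
        [1, 0, 1],
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 0],
    ]
    difficulty, discrimination = item_analysis(scores)
    print("Difficulty:", difficulty)          # first item answered correctly by 4 of 6 students
    print("Discrimination:", discrimination)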

Summaries and analyses of program assessment data are of little value unless they are shared with the program, department, and school and used to improve teaching and learning. Sharing the data and discussing the findings with the program or department can lead to changes in the curriculum or in individual courses that provide a more enriching experience for students in the program.

Questions to consider:

  • What is the data telling you about what and how well students are achieving the learning outcomes for the program?
  • Do the findings make sense?
  • Are the findings relevant to the outcome(s) being assessed?
  • Are they representative of students' work or the situation being assessed?
  • What additional information is needed?
  • In what areas do students often have difficulty in the program?
  • In what areas are students excelling?
  • How consistent is student learning across multiple sections of the same course?
  • How might the timing of the assessment (e.g., Year Two) affect your interpretation of the findings?