Academic Assessment

Overview

All academic units at the University of New Mexico must assess their degree-granting programs.

A stepped chart illustrates this flow: assessment rolls up from the course level to the program level, then to colleges, schools, and branches, and finally to UNM overall.

Academic units collect and share evidence of how classroom assessment at the course level ‘rolls up’ to provide evidence of teaching and learning for assessment at the program level. The Office of Assessment & Academic Program Review (OAAPR) synthesizes these data into department-level and institution-wide reports to build an understanding of UNM as a whole.

The UNM Academic Unit Assessment Manual describes these processes in more detail.

Timeline

Academic units observe an annual assessment cycle based on academic years, collecting data throughout the Spring and Fall semesters and submitting a final report for that academic year by January 31 of the next year.

Note: January 31 is the deadline for all college/school/branch assessment coordinators to submit their program plans and reports to the OAAPR. Internal deadlines will be much earlier (between October and December), so please submit assessment documents according to your unit's internal procedures and timelines.

Templates

Please use the following documents for academic unit assessment unless an alternative version is provided for a particular college, school, or branch.


Submissions

The UNM Digital Repository is the central location for all assessment documents.

If you are the person in your unit responsible for uploading these documents, please see this detailed guide on uploading to the repository. If you are a representative from an academic unit/program within a college/school/branch, please submit your assessment documents to your Assessment Coordinator or CARC Chair according to your internal submission requirements, not to the repository.

If you are already familiar with the uploading process, you can go directly to the repository submission form.

Reports

Academic Years: 2012-2013 | 2013-2014 | 2014-2015 | 2015-2016 | 2016-2017 | 2017-2018 | 2018-2019 | 2019-2020 | 2020-2021 | 2021-2022

These reports are based on individual program assessment plans and reports as well as the Academic Program Assessment Maturity Rubric, which is administered annually to evaluate the assessment maturity of each academic program and to determine the overall state of assessment in each college, school, and branch.

In 2023, the OAAPR began providing individualized academic assessment reports by college, school, and branch to deans, associate deans, and assessment coordinators. Contact your assessment leadership to see your area’s results. 

Best Practices

The OAAPR has developed a one-page guide on developing student learning outcomes that includes definitions, guidance, and expectations. For examples of specific parts of the assessment process, see the sections below.

Good Assessment Examples

  • Students will develop a breadth and depth of expertise appropriate to their career goals in the languages, literatures, and cultures of the Luso-Hispanic world.

  • Students will make a significant contribution to the knowledge in their field and will present that in oral and written form.

  • Students will participate formally in professional dialogues beyond UNM.

  • Students will find and participate in professional activities such as colloquia, conferences, and workshops.

  • Students will exhibit critical thinking skills to address diverse business challenges and opportunities.
  • Students engage in scholarly or professional communities through attendance at or leadership in workshops, talks, or other events related to their area of study.

  • Students will evaluate relevant literature within a research topic.

  • Students contextualize primary texts or data within a broad knowledge of Iberian, Latin American or Southwest Hispanic literature and culture or Hispanic language and linguistics. 

  • Students will be able to identify and analyze an issue or problem and provide recommendations. 
  • Students will intervene and stabilize patients while in transport to an advanced care facility.

  • MA Comprehensive Examinations were scored by committees of three or more faculty members using a standardized 4-point descriptive rubric: High Pass, Pass, Low Pass, Fail.

  • Graduate Student Annual Review (GSAR) Forms were completed by students and discussed with and approved by faculty mentors; the Director of Graduate Studies then compiled information from these forms.

  • All course-related assessments use a three-scale rubric as follows: 3 = Exemplary, 2 = Satisfactory, 1 = Unsatisfactory. 

*Please contact the Office of Assessment at assess@unm.edu if you would like assistance in creating a rubric.

  • Our criterion for success is attainment of an 8.0 or higher average on our rubric rating assessment for all students (Rubric A).
  • Our criterion for success is attainment of an 8.0 or higher average on all criteria assessed on the comprehensive Written and Oral exams (Rubric B).
  • The Comprehensive Examination/Portfolio is either Passed with Distinction, Passed, or Failed. 100% of students successfully complete the 590 assessment.
  • 60% of students will achieve a passing grade of 70% on each course exam.
  • 75% of students will submit journals that demonstrate reflection and critical thinking as described in the journal rubric.
  • 70% of students will achieve a passing grade of 75% on the comprehensive final exam.
  • 75% of students will achieve an average minimum score of 35 points (out of 50) on the presentation rubric.
  • 75% of students will achieve an overall passing score on in-class and homework assignments.
  • Target levels for outcomes attainment have been established as 75% of students achieving an outcome of 2 or better.
  • In reviewing the outcomes, it was noted that while we want students to attend events and workshops related to their academic program and professional goals, the exercise of having students turn in a list of events attended—on the Graduate Student Annual Review—was not effective as a data-gathering tool, since the students understood the criteria for inclusion in different ways. We need to promote a culture of professionalism including attendance at these events, but the assessment may not be the right tool.

  • In response to the Academic Program Assessment Maturity Rubric and communications between faculty in the department, the Assessment Faculty Member thoroughly revised the assessment plan.

Assessment Mistakes to Avoid

Mistake: Creating unattainable or overly complex goals.
Example: Students will obtain a professional internship or job post-graduation in an affiliated discipline.
Reason: The goal as stated is difficult to control and track, and it requires capturing data after students have left the program.
 
Mistake: Creating goals that are so narrowly focused that they are student outcomes rather than program goals.
Example: Students will illustrate understanding of the role of ethics in social justice.
Reason: “Understanding of the role of ethics” describes a specific student learning outcome, not a program goal. (A proper overarching goal would be “Students will increase their knowledge in social justice.”)

Mistake: Measuring more than one behavior in each outcome.
Example: Students will effectively communicate in an oral and written manner.
Reason: There are two types of communication to be assessed: “oral” and “written.” This outcome requires the student to perform well in both to demonstrate effective communication. When two behaviors are written into one outcome, it is difficult to isolate and assess each behavior.
              
Mistake: Using verbs/student behaviors that are unattainable or unmeasurable.
Example: Students will emphasize the importance of their opinion in conversation.
Reason: Emphasizing the “importance of their opinion” is difficult to measure because it rests on affective components.

Mistake: Constructing outcomes that do not align with the targeted population or sample of students.
Example: Students in 1000-level courses will demonstrate mastery in advanced research methods.
Reason: Students in introductory courses cannot reasonably be expected to demonstrate mastery of advanced research methods; the outcome does not match the population being assessed.

Mistake: Developing outcomes that assess something other than student learning (e.g., employment or graduation).
Example: X% of students will graduate from the program.
Reason: Outcomes should be focused on student learning, not on potential program outcomes.

Mistake: Using the same set of outcomes for a department or program that includes multiple majors, minors, or certificates.
Example: The same outcomes are stated for the Master's and Ph.D. programs.
Reason: Outcomes should differ for each level of learning, since student populations have distinct learning behaviors and expectations at each level.

Mistake: Writing learning outcomes whose behavior/verb focuses on the delivery of the knowledge/skill/responsibility rather than on the student's attainment of it.
Example: Students will be taught to …, Students will be exposed to …
Reason: Outcomes should describe what students learn, not how instruction is delivered.

Mistake: Using assignment grades, course grades, and/or GPA as a measure of student behavior.
Example: The final course grade will illustrate the students’ attainment of this SLO.
Reason: Overall grades (for an assignment or course) and GPA are not measures of specific student behavior; because a grade is summative by design, it is difficult to isolate whether a particular SLO has been achieved.

Mistake: Misalignment between the assessment measures and what students are learning.
Example: Teaching understanding of a theory and then assessing the application of the theory.
Reason: Understanding and applying a theory are two different levels of thinking.

Mistake: Using only one exam/quiz item to represent student attainment of an outcome.
Example: Within a quiz, item number 5 will represent the attainment of an outcome.
Reason: At least two or three exam/quiz items are needed to reliably assess a specific student learning behavior.

Mistake: Not considering pre-existing data collection streams and methods.
Example: A new survey was designed for data collection when an existing one already collected the same data.
Reason: Duplicating an existing data stream adds work and survey fatigue without adding information.

Mistake: Using vague names for measures.
Example: Various homework assignments and exams will be assessed.
Reason: Not identifying specific measures introduces variability into the process.

Mistake: Keeping the same benchmarks year after year even though they are always being met or surpassed.
Reason: If benchmarks are met every year without being raised, the assessment process may not lead to continuous improvement.

Mistake: Assessing “for the sake of assessment.”
Example: Collecting data but not analyzing, sharing, or using it.
Reason: If results are stored away in a “black box,” the assessment process will not lead to continuous improvement and the assessment loop will not be closed.

Mistake: Not sharing unfavorable data.
Example: Data that are unfavorable do not make it into the assessment report or get communicated out to faculty/staff in the department.
Reason: Unfavorable data still provide good insight into where improvement efforts and resources should be focused.