Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the assignment as a whole?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all of the criteria (such as clarity, organization, and mechanics) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check each criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them (see the short scoring sketch after this list)
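
If you do weight criteria, it can help to check the arithmetic before grading real work. The short sketch below is a hypothetical illustration, not part of the original guidance; the criterion names, weights, and ratings are made up. It shows one common approach: multiply each criterion’s rating by its weight and report the total as a percentage of the maximum possible rating.

```python
# Minimal illustrative sketch: combining weighted criteria into one score.
# Criterion names, weights, and ratings below are hypothetical placeholders.

rubric = {
    "Thesis and focus":     {"weight": 0.40, "rating": 3},  # rated on a 1-4 scale
    "Sequencing of ideas":  {"weight": 0.35, "rating": 4},
    "Grammar and spelling": {"weight": 0.25, "rating": 2},
}

MAX_RATING = 4  # highest level on the rating scale

# Weighted average rating, then expressed as a percentage of the maximum.
weighted = sum(c["weight"] * c["rating"] for c in rubric.values())
percent = 100 * weighted / MAX_RATING

for name, c in rubric.items():
    print(f"{name}: {c['rating']}/{MAX_RATING} (weight {c['weight']:.0%})")
print(f"Overall score: {percent:.1f}%")
```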

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (For example: 5, 4, 3, 2, 1 and/or Exceeds Expectations, Accomplished, Proficient, Developing, Beginning.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven useful for creating rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs absence, complete vs incomplete, many vs none, major vs minor, consistent vs inconsistent, always vs never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: RubiStar, iRubric.
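
If you prefer to draft the grid outside of a word processor, a few lines of scripting can lay it out for you. The sketch below is illustrative only: the criteria, level names, and file name are placeholder assumptions, it relies only on Python’s standard csv module, and it produces a drafting aid rather than a Moodle import file; you would still type the finished text into Moodle.

```python
# Illustrative sketch: drafting a rubric grid as a CSV file that opens in
# Excel or Google Sheets. Criteria, levels, and descriptions are placeholders.
import csv

levels = ["Needs improvement (1)", "Developing (2)", "Sufficient (3)", "Above Average (4)"]
criteria = {
    "Thesis":     ["...", "...", "...", "..."],  # one description per level
    "Sequencing": ["...", "...", "...", "..."],
    "Mechanics":  ["...", "...", "...", "..."],
}

with open("rubric_draft.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion"] + levels)        # header row
    for name, descriptions in criteria.items():
        writer.writerow([name] + descriptions)     # one row per criterion
```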

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

Additional tips for refining your rubric:

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying “uses excellent sources,” describe what makes a source excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Criterion: Thesis supported by relevant information and ideas

  • Above Average (4): The central purpose of the student work is clear and supporting ideas are always well-focused. Details are relevant and enrich the work.
  • Sufficient (3): The central purpose of the student work is clear and ideas are almost always focused in a way that supports the thesis. Relevant details illustrate the author’s ideas.
  • Developing (2): The central purpose of the student work is identified. Ideas are mostly focused in a way that supports the thesis.
  • Needs improvement (1): The purpose of the student work is not well-defined. A number of central ideas do not support the thesis. Thoughts appear disconnected.

Criterion: Sequencing of elements/ideas

  • Above Average (4): Information and ideas are presented in a logical sequence which flows naturally and is engaging to the audience.
  • Sufficient (3): Information and ideas are presented in a logical sequence which the reader can follow with little or no difficulty.
  • Developing (2): Information and ideas are presented in an order that the audience can mostly follow.
  • Needs improvement (1): Information and ideas are poorly sequenced. The audience has difficulty following the thread of thought.

Criterion: Correctness of grammar and spelling

  • Above Average (4): Minimal to no distracting errors in grammar and spelling.
  • Sufficient (3): The readability of the work is only slightly interrupted by spelling and/or grammatical errors.
  • Developing (2): Grammatical and/or spelling errors distract from the work.
  • Needs improvement (1): The readability of the work is seriously hampered by spelling and/or grammatical errors.

Example of a holistic rubric for a final paper

  • Above Average (4): The audience is able to easily identify the central message of the work and is engaged by the paper’s clear focus and relevant details. Information is presented logically and naturally. There are minimal to no distracting errors in grammar and spelling.
  • Sufficient (3): The audience is easily able to identify the focus of the student work, which is supported by relevant ideas and supporting details. Information is presented in a logical manner that is easily followed. The readability of the work is only slightly interrupted by errors.
  • Developing (2): The audience can identify the central purpose of the student work with little difficulty, and supporting ideas are present and clear. The information is presented in an orderly fashion that can be followed with little difficulty. Grammatical and spelling errors distract from the work.
  • Needs improvement (1): The audience cannot clearly or easily identify the central ideas or purpose of the student work. Information is presented in a disorganized fashion, causing the audience to have difficulty following the author’s ideas. The readability of the work is seriously hampered by errors.

Single-Point Rubric

Column headings: Advanced (evidence of exceeding standards) | Criteria described at a proficient level | Concerns (things that need work)

  • Criteria #1: Description reflecting achievement of proficient level of performance
  • Criteria #2: Description reflecting achievement of proficient level of performance
  • Criteria #3: Description reflecting achievement of proficient level of performance
  • Criteria #4: Description reflecting achievement of proficient level of performance

Scoring: Advanced = 90-100 points; Proficient = 80-90 points; Concerns = <80 points

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

Center for Teaching Innovation


Using rubrics

A rubric is a type of scoring guide that assesses and articulates specific components and expectations for an assignment. Rubrics can be used for a variety of assignments: research papers, group projects, portfolios, and presentations.  

Why use rubrics? 

Rubrics help instructors: 

  • Assess assignments consistently from student-to-student. 
  • Save time in grading, both short-term and long-term. 
  • Give timely, effective feedback and promote student learning in a sustainable way. 
  • Clarify expectations and components of an assignment for both students and course teaching assistants (TAs). 
  • Refine teaching methods by evaluating rubric results. 

Rubrics help students: 

  • Understand expectations and components of an assignment. 
  • Become more aware of their learning process and progress. 
  • Improve work through timely and detailed feedback. 

Considerations for using rubrics 

When developing rubrics consider the following:

  • Although it takes time to build a rubric, time will be saved in the long run as grading and providing feedback on student work will become more streamlined.  
  • A rubric can be a fillable pdf that can easily be emailed to students. 
  • They can be used for oral presentations. 
  • They are a great tool to evaluate teamwork and individual contribution to group tasks. 
  • Rubrics facilitate peer-review by setting evaluation standards. Have students use the rubric to provide peer assessment on various drafts. 
  • Students can use them for self-assessment to improve personal performance and learning. Encourage students to use the rubrics to assess their own work. 
  • Motivate students to improve their work by letting them use rubric feedback to revise and resubmit. 

Getting Started with Rubrics 

  • Start small by creating one rubric for one assignment in a semester.  
  • Ask colleagues if they have developed rubrics for similar assignments or adapt rubrics that are available online. For example, the AAC&U has rubrics for topics such as written and oral communication, critical thinking, and creative thinking. RubiStar helps you to develop your rubric based on templates. 
  • Examine an assignment for your course. Outline the elements or critical attributes to be evaluated (these attributes must be objectively measurable). 
  • Create an evaluative range for performance quality under each element; for instance, “excellent,” “good,” “unsatisfactory.” 
  • Avoid using subjective or vague criteria such as “interesting” or “creative.” Instead, outline objective indicators that would fall under these categories. 
  • The criteria must clearly differentiate one performance level from another. 
  • Assign a numerical scale to each level. 
  • Give a draft of the rubric to your colleagues and/or TAs for feedback. 
  • Train students to use your rubric and solicit feedback. This will help you judge whether the rubric is clear to them and will identify any weaknesses. 
  • Rework the rubric based on the feedback. 

Institutional Research, Assessment and Planning

What is a rubric and how do you develop one?

Rubrics are assessment tools developed to help evaluate qualitative data or assignments by providing a specific set of criteria to be rated and specific details about what is needed to achieve each level of performance for each criterion. Rubrics typically have numeric ratings (for example, 1 to 4) with labels (unacceptable to excellent or undeveloped to mastered).

There are many rubrics that have already been developed for various learning goals and outcomes and that are publicly available. Your program might want to start with an established rubric already being used in your discipline, but then alter the rubric to fit your specific program. Another good place to start is to check out the Association of American Colleges & Universities' (AAC&U's) VALUE (Valid Assessment of Learning in Undergraduate Education) Rubrics, which have been widely vetted. The rubrics can be downloaded at http://www.aacu.org/value-rubrics. Again, these rubrics can be altered to fit the needs of your specific program.

For assistance starting a rubric from scratch, see RubiStar.

The following book is a good introduction to rubrics:

Stevens, D. D. & Levi, A. J. (2005).  Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning . Sterling, VA: Stylus Publishing.


University Writing Program


Written by Arthur Russell

Just about every discussion of rubrics begins with a caveat: writing rubrics are not a substitute for writing instruction. Rubrics are tools for communicating grading criteria and assessing student progress. Rubrics take a variety of forms, from grids to checklists, and measure a range of writing tasks, from conceptual design to sentence-level considerations.

As with any assessment tool, a rubric’s effectiveness is entirely dependent upon its design and its deployment in the classroom. Whatever form rubrics take, the criteria for assessment must be legible to all students—if students cannot decipher our rubrics, they are not useful.  

When effectively integrated with writing instruction, rubrics can help instructors clarify their own expectations for written work, isolate specific elements as targets of instruction, and provide meaningful feedback and coaching to students. Well-designed rubrics will draw program learning outcomes, assignment prompts, course instruction and assessment into alignment. 

Starting Points

Course rubrics vs. assignment rubrics.

Instructors may choose to use a standard rubric for evaluating all written work completed in a course. Course rubrics provide instructors and students a shared language for communicating the values and expectations of written work over the course of an entire semester. Best practices suggest that establishing grading criteria with students well in advance helps instructors compose focused, revision-oriented feedback on drafts and final papers and better coach student writers. When deploying course rubrics in writing-intensive courses, consider using them to guide peer review and self-evaluation processes with students. The more often students work with established criteria, the more likely they are to respond to and incorporate feedback in future projects.

At the same time, not every assignment needs to assess every aspect of the writing process every time. Particularly early in the semester, instructors may develop assignment-specific rubrics that target one or two standards. Prioritizing a specific learning objective or writing process in an assignment rubric allows instructors to concentrate time spent on in-class writing instruction and encourages students to develop targeted aspects of their writing processes.  

Developing Evaluation Criteria

  • Establish clear categories. What specific learning objectives (e.g., critical and creative thinking, inquiry and analysis) and writing processes (e.g., summary, synthesis, source analysis, argument and response) are most critical to success for each assignment?
  • Establish observable and measurable criteria of success. For example, consider what counts for “clarity” in written work. For a research paper, clarity might attend to purpose: a successful paper will have a well-defined purpose (thesis, takeaway), integrate and explain evidence to support all claims, and pay careful attention to purpose, context, and audience.
  • Adopt student-friendly language. When using academic terminology and discipline-specific concepts, be sure to define and discuss these concepts with students. When in doubt, VALUE rubrics are excellent models of clearly defined learning objectives and distinguishing criteria.

Sticking Points: Writing Rubrics in the Disciplines  

Even the most carefully planned rubrics are not self-evident. The language we have adopted for writing assessment is itself a potential obstacle to student learning and success. What we count for “clarity” or “accuracy” or “insight” in academic writing, for instance, is likely shaped by our disciplinary expectations and measured by the standards of our respective fields. What counts for “good writing” is more subjective than our rubrics may suggest. Similarly, students arrive in our courses with their own understanding and experiences of academic writing that may or may not be reflected in our assignment prompts.

Defining the terms for success with students in class and in conference will go a long way  toward bridging these gaps. We might even use rubrics as conversation starters, not only as an occasion to communicate our expectations for written work, but also as an opportunity to demystify the rhetorical contexts of discipline-specific writing with students.

Helpful Resources  

For a short introduction to rubric design, the Creating Rubrics guide developed by Louise Pasternack (2014) for the  Center for Teaching  Excellence and Innovation is an excellent resource.  The step-by-step tutorials developed by North Carolina State University and DePaul Teaching Commons are especially useful for instructors preparing rubrics from scratch.  On the use of rubrics for writing instruction and assignments in particular, Heidi Andrade’s “Teaching with Rubrics: The Good, the Bad, and the Ugly” provides an instructive overview of the benefits and drawbacks of using rubrics.  For a more in-depth introduction (with sample rubrics), Melzer and Bean’s “Using Rubrics to Develop and Apply Grading Criteria” in  Engaging Ideas  is essential reading. 

Cited and Recommended Sources

  • Andrade, Heidi Goodrich. “Teaching with Rubrics: The Good, the Bad, and the Ugly.” College Teaching , vol. 53, no. 1, 2005, pp. 27–30, http://www.jstor.org/stable/27559213  
  • Athon, Amanda. “Designing Rubrics to Foster Students’ Diverse Language Backgrounds.” Journal of Basic Writing , vol. 38, No.1, 2019, pp. 78–103, https://doi.org/10.37514/JBW-J.2019.38.1.05  
  • Bennett, Cary. “Assessment Rubrics: Thinking inside the Boxes.” Learning and Teaching: The International Journal of Higher Education in the Social Sciences , vol. 9, no. 1, 2016, pp. 50–72,  http://www.jstor.org/stable/24718020  
  • Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing . University Press of Colorado, 2003. https://doi-org.proxy1.library.jhu.edu/10.2307/j.ctt46nxvm  
  • Melzer, Dan, and John C. Bean. Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom . 3rd ed., Jossey-Bass, 2021 (esp. pp. 253-277), https://ebookcentral-proquest-com.proxy1.library.jhu.edu/lib/jhu/detail.action?docID=6632622  
  • Pasternack, Louise. “Creating Rubrics,” The Innovative Instructor Blog , Center for Teaching Excellence and Innovation, Johns Hopkins University, 21 Nov. 2014.  
  • Reynders, G., et al. “Rubrics to assess critical thinking and information processing in undergraduate STEM courses.” International Journal of STEM Education vol. 7, no. 9, 2020. https://doi.org/10.1186/s40594-020-00208-5  
  • Turley, Eric D., and Chris W. Gallagher. “On the ‘Uses’ of Rubrics: Reframing the Great Rubric Debate.” The English Journal , vol. 97, no. 4, 2008, pp. 87–92, http://www.jstor.org/stable/30047253  
  • Wiggins, Grant. “The Constant Danger of Sacrificing Validity to Reliability: Making Writing Assessment Serve Writers.” Assessing Writing , vol. 1, no. 1, 1994, pp. 129-139, https://doi.org/10.1016/1075-2935(94)90008-6  

Rubric Design

Articulating your assessment values

Reading, commenting on, and then assigning a grade to a piece of student writing requires intense attention and difficult judgment calls. Some faculty dread “the stack.” Students may share the faculty’s dim view of writing assessment, perceiving it as highly subjective. They wonder why one faculty member values evidence and correctness before all else, while another seeks a vaguely defined originality.

Writing rubrics can help address the concerns of both faculty and students by making writing assessment more efficient, consistent, and public. Whether it is called a grading rubric, a grading sheet, or a scoring guide, a writing assignment rubric lists criteria by which the writing is graded.

Why create a writing rubric?

  • It makes your tacit rhetorical knowledge explicit
  • It articulates community- and discipline-specific standards of excellence
  • It links the grade you give the assignment to the criteria
  • It can make your grading more efficient, consistent, and fair as you can read and comment with your criteria in mind
  • It can help you reverse engineer your course: once you have the rubrics created, you can align your readings, activities, and lectures with the rubrics to set your students up for success
  • It can help your students produce writing that you look forward to reading

How to create a writing rubric

Create a rubric at the same time you create the assignment. It will help you explain to the students what your goals are for the assignment.

  • Consider your purpose: do you need a rubric that addresses the standards for all the writing in the course? Or do you need to address the writing requirements and standards for just one assignment?  Task-specific rubrics are written to help teachers assess individual assignments or genres, whereas generic rubrics are written to help teachers assess multiple assignments.
  • Begin by listing the important qualities of the writing that will be produced in response to a particular assignment. It may be helpful to have several examples of excellent versions of the assignment in front of you: what writing elements do they all have in common? Among other things, these may include features of the argument, such as a main claim or thesis; use and presentation of sources, including visuals; and formatting guidelines such as the requirement of a works cited.
  • Then consider how the criteria will be weighted in grading. Perhaps all criteria are equally important, or perhaps there are two or three that all students must achieve to earn a passing grade. Decide what best fits the class and requirements of the assignment.

Consider involving students in Steps 2 and 3. A class session devoted to developing a rubric can provoke many important discussions about the ways the features of the language serve the purpose of the writing. And when students themselves work to describe the writing they are expected to produce, they are more likely to achieve it.

At this point, you will need to decide if you want to create a holistic or an analytic rubric. There is much debate about these two approaches to assessment.

Comparing Holistic and Analytic Rubrics

Holistic scoring.

Holistic scoring aims to rate overall proficiency in a given student writing sample. It is often used in large-scale writing program assessment and impromptu classroom writing for diagnostic purposes.

General tenets to holistic scoring:

  • Responding to drafts is part of evaluation
  • Responses do not focus on grammar and mechanics during drafting and there is little correction
  • Marginal comments are kept to 2-3 per page with summative comments at end
  • End commentary attends to students’ overall performance across learning objectives as articulated in the assignment
  • Response language aims to foster students’ self-assessment

Holistic rubrics emphasize what students do well and generally increase efficiency; they may also be more valid because scoring includes the authentic, personal reaction of the reader. But holistic scores won’t tell a student how they’ve progressed relative to previous assignments and may be rater-dependent, reducing reliability. (For a summary of advantages and disadvantages of holistic scoring, see Becker, 2011, p. 116.)

Here is an example of a partial holistic rubric:

Summary meets all the criteria. The writer understands the article thoroughly. The main points in the article appear in the summary with all main points proportionately developed. The summary should be as comprehensive as possible and should read smoothly, with appropriate transitions between ideas. Sentences should be clear, without vagueness or ambiguity and without grammatical or mechanical errors.

A complete holistic rubric for a research paper (authored by Jonah Willihnganz) can be  downloaded here.

Analytic Scoring

Analytic scoring makes explicit the contribution to the final grade of each element of writing. For example, an instructor may choose to give 30 points for an essay whose ideas are sufficiently complex, that marshals good reasons in support of a thesis, and whose argument is logical; and 20 points for well-constructed sentences and careful copy editing.
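
The arithmetic behind such a scheme is simply additive. The sketch below is a hypothetical illustration: only the 30-point and 20-point elements echo the example above, and the remaining elements, maximum values, and awarded points are invented.

```python
# Hypothetical sketch of additive analytic scoring. Only the 30- and 20-point
# weightings come from the example in the text above; the other elements and
# all awarded points are invented for illustration.
max_points = {
    "complexity of ideas, reasons, and logic of the argument": 30,
    "well-constructed sentences and careful copy editing": 20,
    "use and presentation of sources": 30,   # assumed additional element
    "organization and transitions": 20,      # assumed additional element
}
awarded = {  # what a grader might assign on one paper (made up)
    "complexity of ideas, reasons, and logic of the argument": 24,
    "well-constructed sentences and careful copy editing": 18,
    "use and presentation of sources": 22,
    "organization and transitions": 16,
}

total_max = sum(max_points.values())      # 100 in this example
total_awarded = sum(awarded.values())     # 80 in this example
print(f"Grade: {total_awarded}/{total_max} ({100 * total_awarded / total_max:.0f}%)")
```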

General tenets to analytic scoring:

  • Reflect emphases in your teaching and communicate the learning goals for the course
  • Emphasize student performance across criteria, which are established as central to the assignment in advance, usually on an assignment sheet
  • Typically take a quantitative approach, providing a scaled set of points for each criterion
  • Make the analytic framework available to students before they write  

Advantages of an analytic rubric include ease of training raters and improved reliability. Meanwhile, writers often can more easily diagnose the strengths and weaknesses of their work. But analytic rubrics can be time-consuming to produce, and raters may judge the writing holistically anyway. Moreover, many readers believe that writing traits cannot be separated. (For a summary of the advantages and disadvantages of analytic scoring, see Becker, 2011, p. 115.)

For example, a partial analytic rubric for a single trait, “addresses a significant issue”:

  • Excellent: Elegantly establishes the current problem, why it matters, to whom
  • Above Average: Identifies the problem; explains why it matters and to whom
  • Competent: Describes topic but relevance unclear or cursory
  • Developing: Unclear issue and relevance

A  complete analytic rubric for a research paper can be downloaded here.  In WIM courses, this language should be revised to name specific disciplinary conventions.

Whichever type of rubric you write, your goal is to avoid pushing students into prescriptive formulas and limiting thinking (e.g., “each paragraph has five sentences”). By carefully describing the writing you want to read, you give students a clear target, and, as Ed White puts it, “describe the ongoing work of the class” (75).

Writing rubrics contribute meaningfully to the teaching of writing. Think of them as a coaching aide. In class and in conferences, you can use the language of the rubric to help you move past generic statements about what makes good writing good to statements about what constitutes success on the assignment and in the genre or discourse community. The rubric articulates what you are asking students to produce on the page; once that work is accomplished, you can turn your attention to explaining how students can achieve it.

Works Cited

Becker, Anthony.  “Examining Rubrics Used to Measure Writing Performance in U.S. Intensive English Programs.”   The CATESOL Journal  22.1 (2010/2011):113-30. Web.

White, Edward M.  Teaching and Assessing Writing . Proquest Info and Learning, 1985. Print.

Further Resources

CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” November 2006 (Revised March 2009). Conference on College Composition and Communication. Web.

Gallagher, Chris W. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration 34.1 (2010): 10-32. Web.

Huot, Brian.  (Re)Articulating Writing Assessment for Teaching and Learning.  Logan: Utah State UP, 2002. Print.

Kelly-Reilly, Diane, and Peggy O’Neil, eds. Journal of Writing Assessment. Web.

McKee, Heidi A., and Dànielle Nicole DeVoss, eds. Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.

O’Neill, Peggy, Cindy Moore, and Brian Huot.  A Guide to College Writing Assessment . Logan: Utah State UP, 2009. Print.

Sommers, Nancy.  Responding to Student Writers . Macmillan Higher Education, 2013.

Straub, Richard. “Responding, Really Responding to Other Students’ Writing.” The Subject is Writing: Essays by Teachers and Students. Ed. Wendy Bishop. Boynton/Cook, 1999. Web.

White, Edward M., and Cassie A. Wright.  Assigning, Responding, Evaluating: A Writing Teacher’s Guide . 5th ed. Bedford/St. Martin’s, 2015. Print.

Berkeley Graduate Division


Examples of Rubric Creation

Creating a rubric takes time and requires thought and experimentation. Here you can see the steps used to create two kinds of rubric: one for problems in a physics exam for a small, upper-division physics course, and another for an essay assignment in a large, lower-division sociology course.

Physics Problems

In STEM disciplines (science, technology, engineering, and mathematics), assignments tend to be analytical and problem-based. Holistic rubrics can be an efficient, consistent, and fair way to grade a problem set. An analytic rubric often gives a clearer picture of where a student should direct their future learning efforts. Since holistic rubrics try to label overall understanding, they can lead to more regrade requests than analytic rubrics with more explicit criteria. When starting to grade a problem, it is important to think about the relevant conceptual ingredients in the solution. Then look at a sample of student work to get a feel for student mistakes. Decide what rubric you will use (e.g., holistic or analytic, and how many points). Apply the holistic rubric by marking comments and sorting the students’ assignments into stacks (e.g., five stacks if using a five-point scale). Finally, check the stacks for consistency and mark the scores. The following is a sample homework problem from a UC Berkeley Physics Department undergraduate course in mechanics.

Homework Problem

Learning objective.

Solve for position and speed along a projectile’s trajectory.

Desired Traits: Conceptual Elements Needed for the Solution

  • Decompose motion into vertical and horizontal axes.
  • Identify that the maximum height occurs when the vertical velocity is 0.
  • Apply kinematics equation with g as the acceleration to solve for the time and height.
  • Evaluate the numerical expression.

A note on analytic rubrics: If you decide you feel more comfortable grading with an analytic rubric, you can assign a point value to each concept. The drawback to this method is that it can sometimes unfairly penalize a student who has a good understanding of the problem but makes a lot of minor errors. Because the analytic method tends to have many more parts, the method can take quite a bit more time to apply. In the end, your analytic rubric should give results that agree with the common-sense assessment of how well the student understood the problem. This sense is well captured by the holistic method.

Holistic Rubric

A holistic rubric, closely based on a rubric by Bruce Birkett and Andrew Elby:

  • The student clearly understands how to solve the problem. Minor mistakes and careless errors can appear insofar as they do not indicate a conceptual misunderstanding. [a]
  • The student understands the main concepts and problem-solving techniques, but has some minor yet non-trivial gaps in their reasoning.
  • The student has partially understood the problem. The student is not completely lost, but requires tutoring in some of the basic concepts. The student may have started out correctly, but gone on a tangent or not finished the problem.
  • The student has a poor understanding of the problem. The student may have gone in a not-entirely-wrong but unproductive direction, or attempted to solve the problem using pattern matching or by rote.
  • The student did not understand the problem. They may have written some appropriate formulas or diagrams, but nothing further. Or they may have done something entirely wrong.
  • The student wrote nothing or almost nothing.

[a] This policy especially makes sense on exam problems, for which students are under time pressure and are more likely to make harmless algebraic mistakes. It would also be reasonable to have stricter standards for homework problems.

Analytic Rubric

The following is an analytic rubric that takes the desired traits of the solution and assigns point values to each of the components. Note that the relative point values should reflect the importance in the overall problem. For example, the steps of the problem solving should be worth more than the final numerical value of the solution. This rubric also provides clarity for where students are lacking in their current understanding of the problem.

Decomposing the motion:

  • Student decomposes the velocity (a vector quantity) into its vertical component.
  • Student realizes that the motion should be decomposed, but does not arrive at the correct expression.
  • No attempt at decomposing the 2D motion into its vertical component.

Identifying the maximum-height condition:

  • Student successfully translates the physical question (the highest point of the ball) into an equation that can be used to help solve the motion.
  • Student identifies the maximum height condition with minor mistakes.
  • Incorrect or missing identification of the maximum height condition.

Applying the kinematic equations:

  • Applies the kinematic equations to yield a correct expression for the height in terms of the given variables. The solution uses the fact that the vertical motion has a constant downward acceleration due to gravity. The sequence of steps clearly demonstrates the thought process. Most likely, the solution includes solving for the time it takes to reach the top and then uses that time to see how far up the ball traveled.
  • Mostly correct application with minor errors (e.g., algebraic mistakes or incorporating extraneous equations).
  • Equations include relevant parameters from the problem, but the student does not isolate the relevant variables being solved for (such as time or distance).
  • Some kinematics formulas are written down, but they are not connected with the information in the problem.
  • No attempt.

Evaluating the numerical answer:

  • Correct numerical answer with appropriate units.
  • Mostly correct answer with a few minor errors; still a physically sensible answer (e.g., units and numerical values are reasonable).
  • No attempt, or a physically unreasonable answer (e.g., a negative maximum height or reporting the height in units of seconds).

Try to avoid penalizing multiple times for the same mistake by choosing your evaluation criteria to be related to distinct learning outcomes. In designing your rubric, you can decide how finely to evaluate each component. Having more possible point values on your rubric can give more detailed feedback on a student’s performance, though it typically takes more time for the grader to assess.

Of course, problems can, and often do, feature the use of multiple learning outcomes in tandem. When a mistake could be assigned to multiple criteria, it is advisable to check that the overall problem grade is reasonable with the student’s mastery of the problem. Not having to decide how particular mistakes should be deducted from the analytic rubric is one advantage of the holistic rubric. When designing problems, it can be very beneficial for students not to have problems with several subparts that rely on prior answers. These tend to disproportionately skew the grades of students who miss an ingredient early on. When possible, consider making independent problems for testing different learning outcomes.

Sociology Research Paper

An introductory-level, large-lecture course is a difficult setting for managing a student research assignment. With the assistance of an instructional support team that included a GSI teaching consultant and a UC Berkeley librarian [b], sociology lecturer Mary Kelsey developed a research paper assignment on the relationship between social factors and educational opportunity.

This was a lengthy and complex assignment worth a substantial portion of the course grade. Since the class was very large, the instructor wanted to minimize the effort it would take her GSIs to grade the papers in a manner consistent with the assignment’s learning objectives. For these reasons Dr. Kelsey and the instructional team gave a lot of forethought to crafting a detailed grading rubric.

Desired Traits

  • Argument
  • Use and interpretation of data
  • Reflection on personal experiences
  • Application of course readings and materials
  • Organization, writing, and mechanics

For this assignment, the instructional team decided to grade each trait individually because there seemed to be too many independent variables to grade holistically. They could have used a five-point scale, a three-point scale, or a descriptive analytic scale. The choice depended on the complexity of the assignment and the kind of information they wanted to convey to students about their work.

Below are three of the analytic rubrics they considered for the Argument trait and a holistic rubric for all the traits together. Lastly you will find the entire analytic rubric, for all five desired traits, that was finally used for the assignment. Which would you choose, and why?

Five-Point Scale

5 Argument pertains to relationship between social factors and educational opportunity and is clearly stated and defensible.
4 Argument pertains to relationship between social factors and educational opportunity and is defensible, but it is not clearly stated.
3 Argument pertains to relationship between social factors and educational opportunity but is not defensible using the evidence available.
2 Argument is presented, but it does not pertain to relationship between social factors and educational opportunity.
1 Social factors and educational opportunity are discussed, but no argument is presented.

Three-Point Scale

Argument pertains to relationship between social factors and educational opportunity and is clearly stated and defensible.
Argument pertains to relationship between social factors and educational opportunity but may not be clear or sufficiently narrow in scope.
Social factors and educational opportunity are discussed, but no argument is presented.

Simplified Three-Point Scale, numbers replaced with descriptive terms

Argument pertains to relationship between social factors and educational opportunity and is clearly stated and defensible      

For some assignments, you may choose to use a holistic rubric, or one scale for the whole assignment. This type of rubric is particularly useful when the variables you want to assess just cannot be usefully separated. We chose not to use a holistic rubric for this assignment because we wanted to be able to grade each trait separately, but we’ve completed a holistic version here for comparative purposes.

  • “A” paper: The paper is driven by a clearly stated, defensible argument about the relationship between social factors and educational opportunity. Sufficient data is used to defend the argument, and the data is accurately interpreted to identify each school’s position within a larger social structure. Personal educational experiences are examined thoughtfully and critically to identify significance of external social factors and support the main argument. Paper reflects solid understanding of the major themes of the course, using course readings to accurately define sociological concepts and to place the argument within a broader discussion of the relationship between social status and individual opportunity. Paper is clearly organized (with an introduction, transition sentences to connect major ideas, and conclusion) and has few or no grammar or spelling errors. Scholarly ideas are cited correctly using the ASA style guide.
  • “B” paper: The paper is driven by a defensible argument about the relationship between social factors and public school quality, but it may not be stated as clearly and consistently throughout the essay as in an “A” paper. The argument is defended using sufficient data, reflection on personal experiences, and course readings, but the use of this evidence does not always demonstrate a clear understanding of how to locate the school or community within a larger class structure, how social factors influence personal experience, or the broader significance of course concepts. Essay is clearly organized, but might benefit from more careful attention to transitional sentences. Scholarly ideas are cited accurately, using the ASA style sheet, and the writing is polished, with few grammar or spelling errors.
  • “C” paper: The paper contains an argument about the relationship between social factors and public school quality, but the argument may not be defensible using the evidence available. Data, course readings, and personal experiences are used to defend the argument, but in a perfunctory way, without demonstrating an understanding of how social factors are identified or how they shape personal experience. Scholarly ideas are cited accurately, using the ASA style sheet. Essay may have either significant organizational or proofreading errors, but not both.
  • “D” paper: The paper does not have an argument, or is missing a major component of the evidence requested (data, course readings, or personal experiences). Alternatively, or in addition, the paper suffers from significant organizational and proofreading errors. Scholarly ideas are cited, but without following ASA guidelines.
  • “F” paper: The paper does not provide an argument and contains only one component of the evidence requested, if any. The paper suffers from significant organizational and proofreading errors. If scholarly ideas are not cited, paper receives an automatic “F.”

Final Analytic Rubric

This is the rubric the instructor finally decided to use. It rates five major traits, each on a five-point scale. This allowed for fine but clear distinctions in evaluating the students’ final papers.

Argument

  • 5: Argument pertains to relationship between social factors and educational opportunity and is clearly stated and defensible.
  • 4: Argument pertains to relationship between social factors and educational opportunity and is defensible, but it is not clearly stated.
  • 3: Argument pertains to relationship between social factors and educational opportunity but is not defensible using the evidence available.
  • 2: Argument is presented, but it does not pertain to relationship between social factors and educational opportunity.
  • 1: Social factors and educational opportunity are discussed, but no argument is presented.

Use and interpretation of data

  • 5: The data is accurately interpreted to identify each school’s position within a larger social structure, and sufficient data is used to defend the main argument.
  • 4: The data is accurately interpreted to identify each school’s position within a larger social structure, and data is used to defend the main argument, but it might not be sufficient.
  • 3: Data is used to defend the main argument, but it is not accurately interpreted to identify each school’s position within a larger social structure, and it might not be sufficient.
  • 2: Data is used to defend the main argument, but it is insufficient, and no effort is made to identify the school’s position within a larger social structure.
  • 1: Data is provided, but it is not used to defend the main argument.

Reflection on personal experiences

  • 5: Personal educational experiences are examined thoughtfully and critically to identify significance of external social factors and support the main argument.
  • 4: Personal educational experiences are examined thoughtfully and critically to identify significance of external social factors, but relation to the main argument may not be clear.
  • 3: Personal educational experiences are examined, but not in a way that reflects understanding of the external factors shaping individual opportunity. Relation to the main argument also may not be clear.
  • 2: Personal educational experiences are discussed, but not in a way that reflects understanding of the external factors shaping individual opportunity. No effort is made to relate experiences back to the main argument.
  • 1: Personal educational experiences are mentioned, but in a perfunctory way.

Application of course readings and materials

  • 5: Demonstrates solid understanding of the major themes of the course, using course readings to accurately define sociological concepts and to place the argument within a broader discussion of the relationship between social status and individual opportunity.
  • 4: Uses course readings to define sociological concepts and place the argument within a broader framework, but does not always demonstrate solid understanding of the major themes.
  • 3: Uses course readings to place the argument within a broader framework, but sociological concepts are poorly defined or not defined at all. The data is not all accurately interpreted to identify each school’s position within a larger social structure, and it might not be sufficient.
  • 2: Course readings are used, but paper does not place the argument within a broader framework or define sociological concepts.
  • 1: Course readings are only mentioned, with no clear understanding of the relationship between the paper and course themes.

Organization, writing, and mechanics

  • 5: Clear organization and natural “flow” (with an introduction, transition sentences to connect major ideas, and conclusion) with few or no grammar or spelling errors. Scholarly ideas are cited correctly using the ASA style guide.
  • 4: Clear organization (introduction, transition sentences to connect major ideas, and conclusion), but writing might not always be fluid, and might contain some grammar or spelling errors. Scholarly ideas are cited correctly using the ASA style guide.
  • 3: Organization unclear or the paper is marred by significant grammar or spelling errors (but not both). Scholarly ideas are cited correctly using the ASA style guide.
  • 2: Organization unclear and the paper is marred by significant grammar and spelling errors. Scholarly ideas are cited correctly using the ASA style guide.
  • 1: Effort to cite is made, but the scholarly ideas are not cited correctly. (Automatic “F” if ideas are not cited at all.)

[b] These materials were developed during UC Berkeley’s 2005–2006 Mellon Library/Faculty Fellowship for Undergraduate Research program. Members of the instructional team who worked with Lecturer Kelsey in developing the grading rubric included Susan Haskell-Khan, a GSI Center teaching consultant and doctoral candidate in history, and Sarah McDaniel, a teaching librarian with the Doe/Moffitt Libraries.

University of Texas


Information Literacy Toolkit

Assignment design rubric for research assignments

Assessment Resource Description

Undergraduates learn best from assignments that provide concrete and specific guidance on research methods. Librarians can help you design assignments that will guide your students toward effective research, and this rubric is one tool we use to do that.

Apply the assignment design rubric to your assignment to ensure that it has:

  • Clear expectations about source requirements
  • A clear rationale and context for resource requirements
  • Focus on the research process
  • Library engagement


Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea about what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., “Exceeds Expectations” does x, y, and z; “Meets Expectations” does only x and y or y and z; “Developing” does only x, y, or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-review of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of formative assessment, they can help students understand what is expected of them (both overall and for each specific criterion) and at what level, and then decide where their current work stands in order to inform revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

Provide students with feedback that is clear, directed and focused on ways to improve learning.

Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

Focus their efforts on completing assignments in line with clearly set expectations.

Self- and peer-reflect on their learning, making informed changes to achieve the desired learning level.

Developing a Rubric

During the process of developing a rubric, instructors might:

Select an assignment for your course - ideally one you identify as time-intensive to grade or one that students report as having unclear expectations.

Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

Identify the markers of quality on which you feel comfortable evaluating students’ level of learning - often along with a numerical scale (i.e., "Accomplished," "Emerging," "Beginning" for a developmental approach).

Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.
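
For instructors who keep their rubrics in a spreadsheet or a grading script, the steps above map onto a very simple data structure: criteria, labeled markers of quality, and (optionally) points per level. The sketch below is a minimal, hypothetical Python illustration; the criterion names, labels, and point values are invented placeholders, not drawn from any rubric cited on this page.

```python
# A minimal, hypothetical sketch of the steps above: criteria as keys, markers
# of quality as labeled levels, and point values per level. The criterion
# names, labels, and points are invented placeholders, not a prescribed format.

RUBRIC = {
    "Thesis and argument": {
        "Accomplished": (4, "Clear, arguable thesis sustained throughout."),
        "Emerging": (3, "Thesis present but unevenly supported."),
        "Beginning": (2, "Thesis unclear or missing."),
    },
    "Use of evidence": {
        "Accomplished": (4, "Sources are integrated and cited correctly."),
        "Emerging": (3, "Sources used but loosely connected to claims."),
        "Beginning": (2, "Little or no credible evidence."),
    },
}

def render_for_students(rubric: dict) -> str:
    """Format the rubric as a plain-text handout to share before the assignment."""
    lines = []
    for criterion, levels in rubric.items():
        lines.append(criterion)
        for label, (points, descriptor) in levels.items():
            lines.append(f"  {label} ({points} pts): {descriptor}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_for_students(RUBRIC))
```

Printing (or exporting) the same structure is one low-effort way to give students the rubric ahead of time while keeping the handout and the grading tool in sync.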

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 

Sample Rubrics

Examples of holistic and analytic rubrics: see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types: see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics”: see the Association of American Colleges and Universities’ set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy.

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13–18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

Examples of Rubrics

Here are some rubric examples from different colleges and universities, as well as the Association of American Colleges and Universities (AACU) VALUE rubrics. We would also like to include examples from Syracuse University faculty and staff. If you would be willing to share your rubric with us, please click here.

  • Art and Design Rubric (Rhode Island University)
  • Theater Arts Writing Rubric (California State University)

Class Participation

  • Holistic Participation Rubric (University of Virginia)
  • Large Lecture Courses with TAs (Carnegie Mellon University)

Doctoral Program Milestones

  • Qualifying Examination (Syracuse University)
  • Comprehensive Core Examination (Portland State University)
  • Dissertation Proposal (Portland State University)
  • Dissertation (Portland State University)

Experiential Learning

  • Key Competencies in Community-Engaged Learning and Teaching (Campus Compact)
  • Global Learning and Intercultural Knowledge (International Cross-Cultural Experiential Learning Evaluation Toolkit)

Humanities and Social Science

  • Anthropology Paper (Carnegie Mellon University)
  • Economics Paper (University of Kentucky)
  • History Paper (Carnegie Mellon University)
  • Literary Analysis (Minnesota State University)
  • Philosophy Paper (Carnegie Mellon University)
  • Psychology Paper (Loyola Marymount University)
  • Sociology Paper (University of California)

Media and Design

  • Media and Design Elements Rubric (Samford University)

Natural Science

  • Physics Paper (Illinois State University)
  • Chemistry Paper (Utah State University)
  • Biology Research Report (Loyola Marymount University)

Online Learning

  • Discussion Forums (Simmons College)

Syracuse University’s Shared Competencies

  • Ethics, Integrity, and Commitment to Diversity and Inclusion rubric (*pdf)
  • Critical and Creative Thinking rubric (*pdf)
  • Scientific Inquiry and Research Skills rubric (*pdf)
  • Civic and Global Responsibility rubric (*pdf)
  • Communication Skills rubric (*pdf)
  • Information Literacy and Technological Agility rubric (*pdf)

Writing

  • Journal Reflection (The State University of New Jersey)
  • Reflection Writing Rubric  and  Research Project Writing (Carnegie Mellon University)
  • Research Paper Rubric (Cornell College)
  • Assessment Rubric for Student Reflections

AACU VALUE Rubrics

VALUE (Valid Assessment of Learning in Undergraduate Education) is a national assessment initiative on college student learning sponsored by AACU as part of its Liberal Education and America’s Promise (LEAP) initiative.

Intellectual and Practical Skills

  • Inquiry and Analysis (*pdf)
  • Critical Thinking (*pdf)
  • Creative Thinking (*pdf)
  • Written Communication (*pdf)
  • Oral Communication (*pdf)
  • Reading (*pdf)
  • Quantitative Literacy (*pdf)
  • Information Literacy (*pdf)
  • Teamwork (*pdf)
  • Problem Solving (*pdf)

Personal and Social Responsibility

  • Civic Engagement (*pdf)
  • Intercultural Knowledge and Competence (*pdf)
  • Ethical Reasoning (*pdf)
  • Foundations and Skills for Lifelong Learning (*pdf)
  • Global Learning (*pdf)

Integrative and Applied Learning

  • Integrative Learning (*pdf)

Assessing Institution-Wide Diversity

  • Self-Assessment Rubric For the Institutionalization of Diversity, Equity, and Inclusion in Higher Education

Introduction to Rubrics

A rubric is an assessment tool that provides information on performance expectations for students. Essentially, a rubric divides an assessment into smaller parts (criteria) and then provides details for the different levels of performance possible for each part (Stevens and Levi 2013). Because rubrics are used to assess performance-based activities, they provide a method for grading a wide range of assessments, including discipline-specific skills (playing an instrument, using a microscope, repairing a transmission, executing a specific dance technique, etc.), student-created products (written reports, constructed objects, works of art, concept maps, models, etc.), or specific student behaviors (presentation skills, peer review of student writing, discussions, group evaluations, etc.; Brookhart 2013, Stevens and Levi 2013).

Rubrics are constructed as a matrix (table) with different levels of performance explained for each specific criterion within the matrix (Table 1). A rubric differs from a grading sheet in that the rubric provides details for each performance level of each criterion (Allen and Tanner 2006, Felder and Brent 2016) instead of just stating the criteria with point designations (total points possible for each criterion). Most rubrics are designed with the criteria for an assessment as rows and the different performance levels as columns (Table 1).

Table 1. General layout of a rubric, with criteria as rows and performance levels as columns.

Criteria | Excellent | Average | Limited
Criterion #1 | Details for Criterion #1 at the highest performance level | Details for Criterion #1 at the mid performance level | Details for Criterion #1 at the lowest performance level
Criterion #2 | Details for Criterion #2 at the highest performance level | Details for Criterion #2 at the mid performance level | Details for Criterion #2 at the lowest performance level
Criterion #3 | Details for Criterion #3 at the highest performance level | Details for Criterion #3 at the mid performance level | Details for Criterion #3 at the lowest performance level
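
As a purely illustrative companion to Table 1, the sketch below encodes the same row-and-column layout in Python: each criterion (row) maps each performance level (column) to the descriptor that would sit in that cell. The level names, point values, and helper functions are assumptions made for the example, not a required format.

```python
# A purely illustrative encoding of the Table 1 layout: each criterion (row)
# maps each performance level (column) to the descriptor in that cell. Level
# names and point values here are assumptions for the sake of the example.

LEVELS = ["Excellent", "Average", "Limited"]            # column headings, in order
POINTS = {"Excellent": 3, "Average": 2, "Limited": 1}   # assumed point values

TABLE = {
    "Criterion #1": {
        "Excellent": "Details for Criterion #1 at the highest performance level",
        "Average": "Details for Criterion #1 at the mid performance level",
        "Limited": "Details for Criterion #1 at the lowest performance level",
    },
    "Criterion #2": {
        "Excellent": "Details for Criterion #2 at the highest performance level",
        "Average": "Details for Criterion #2 at the mid performance level",
        "Limited": "Details for Criterion #2 at the lowest performance level",
    },
}

def descriptor(criterion: str, level: str) -> str:
    """Return the text of a single cell of the matrix."""
    return TABLE[criterion][level]

def total_score(selected_levels: dict) -> int:
    """Sum the points for the level selected for each criterion."""
    return sum(POINTS[level] for level in selected_levels.values())

if __name__ == "__main__":
    print(descriptor("Criterion #1", "Average"))
    print(total_score({"Criterion #1": "Excellent", "Criterion #2": "Limited"}))  # 3 + 1 = 4
```

Storing the matrix this way also makes it straightforward to render the table for students or to total scores consistently once a level has been chosen for each criterion.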

There are two main types of rubrics: holistic and analytical rubrics. Holistic rubrics provide criteria and performance levels but use generic statements for each level regardless of the criterion being discussed (Allen and Tanner 2006, Wormeli 2006). For example, all criteria in the “Excellent” performance level would be “Demonstrated mastery of the skill” or “Shows deep understanding of the concept without any errors.” Alternatively, an analytical rubric provides specific statements for how criteria are met at each performance level (Allen and Tanner 2006, Wormeli 2006). For instance, “Demonstrated ability to create a microscope slide and use a microscope to draw different types of bacteria” or “Explains the main factors that culminated in the start of World War I with details (names, dates, etc.) and referencing at least two sources” would be possible statements for the “Excellent” performance level of different rubrics. Holistic rubrics are faster to create and score, but they do not explicitly communicate what information is necessary to meet the criteria and therefore lack the level of detail needed for rubrics to be helpful to students (Brookhart 2013). Therefore, it is recommended that educators use analytical rubrics for most assessments, especially formative assessments.

In addition to the basic layout and type of rubric (analytical vs. holistic), there are specific steps to building a well-designed rubric that are explained on the webpage “Designing Effective Rubrics.”

Rubrics can be used for a variety of purposes in a course. A rubric can be used to aid in the development of an assessment since working through the steps to design a rubric allows for deeper thought on the requirements and alignment of different aspects of the assessment (Brookhart 2013). Rubrics can also be provided to students as a guide for planning and creating a project. In this way, students are provided with the rubric prior to completing an assessment to better explain the expectations for an assessment (and the rubric may or may not be used in the grading process; Allen and Tanner 2006, Brookhart 2013). Often, rubrics allow for streamlining of grading by providing a framework that allows for quicker scoring of student work while simultaneously providing feedback to students (Allen and Tanner 2006, Stevens and Levi 2013, Felder and Brent 2016, Francis 2018). When more than one instructor is teaching a course, the use of rubrics allows for standardization of grading (Allen and Tanner 2006), but only when the group of instructors use calibration practices to ensure the rubric is being used consistently among instructors (Feldman 2019). Rubrics can also be used to track student improvement when used repeatedly for similar assignments or when students are developing a skill (Allen and Tanner 2006, Stevens and Levi 2013). Additionally, rubrics can be used to encourage self-reflection and self-regulation in students (Allen and Tanner 2006, Stevens and Levi 2013, Panadero and Romero 2014, Zhao et al. 2021). Overall, rubrics can increase the clarity and transparency of grading for students (Francis 2018) especially when the assessment criteria are well aligned to the learning objectives (Burton 2006).

Well-designed rubrics can benefit students by reducing aspects of the “hidden curriculum” that many students experience (especially first-generation students or students from historically marginalized groups) by making assessment expectations clearer (Allen and Tanner 2006, Wolf et al. 2008, Stevens and Levi 2013). Rubrics have been shown to increase student performance on assessments; however, increased scores were often found only when students were required to use the rubric before completing the assessment (Felder and Brent 2016, Francis 2018). For example, when researchers provided students in different class sections of the same course with different levels of engagement with a rubric (no-rubric control group, rubric provided, rubric explained in class, rubric explained in class plus access to additional resources), results showed no difference in scores between the no-rubric and rubric-provided groups, indicating that simply giving students a rubric does not positively influence their grades on the assessment (Francis 2018). Yet when the rubric was explicitly explained during a class session (regardless of the availability of additional resources), significant improvements were seen in these students’ scores on the assessment (Francis 2018). In a separate study on self-assessment, researchers found that students with access to a rubric (compared to a control group) reported higher levels of self-regulation, increased performance on the assessment, and higher self-accuracy (self-assessment scores better matched the earned grade) for the assessment (Panadero and Romero 2014). Thus, instructors must provide students with opportunities to meaningfully engage with a rubric for it to benefit student performance.

One method to achieve the level of engagement needed for rubrics to be helpful is to create assignments that require students to assess an example or conduct a peer review using the rubric (Francis 2018). Alternatively, involving students in creating the rubric can also provide the needed level of engagement (Panadero and Romero 2014, Francis 2018). In fact, student-instructor co-generated rubrics (usually created during a whole-class activity) increased student engagement with the rubric while also reducing resistance to using the rubric for grading and promoting the development of skills such as negotiation, self-regulation, and self-reflection (Zhao et al. 2021). It is also noteworthy that peer-generated rubrics for grading group dynamics or group projects are often highly aligned with what an instructor would have created for students to use (Zhao et al. 2021).

Students are not the only ones to benefit from rubrics. Rubrics can assist instructors by reducing the time required to grade assignments, providing timely feedback to students, and allowing for more consistency in grading (Allen and Tanner 2006, Stevens and Levi 2013, Felder and Brent 2016, Francis 2018). Designing a rubric can help instructors develop a better assessment and provides a framework for grading when multiple instructors are teaching different sections of a course (Allen and Tanner 2006, Brookhart 2013). Thus, rubrics can benefit instructors once they are constructed and aligned to the learning objectives and assessment.

Not all educators find rubrics to be useful for their courses. Many find that the time and effort needed to develop a well-designed rubric does not result in higher performance from students, or that the rubric does not accurately reflect students’ actual understanding of the topic being assessed (Wormeli 2006, Felder and Brent 2016). Although some pre-made rubrics are available, most analytic rubrics (those that best assist students) require either substantial adaptation of these pre-existing rubrics or construction of new ones (Allen and Tanner 2006). These rubrics also require updating and re-evaluation to determine whether the criteria and descriptions for each level of performance accurately reflect student learning (Allen and Tanner 2006, Wormeli 2006). Thus, rubrics can require a large investment of time from instructors and may not result in increased learning by students.

Others find that using rubrics (and grades in general) in courses causes unintended negative consequences that undermine learning (Kohn 2006). When students are graded (especially using a rubric), they are less inclined to learn the material deeply (instead doing just the minimum to meet the requirements) or to take risks (including being creative and innovative), and they often lose interest in learning and adopt a more fixed mindset (Kohn 2006). Grading in general impedes critical thinking (Nieminen 2020), reduces student motivation (Schinske and Tanner 2014, Chamberlin et al. 2018, Schwab et al. 2018), and widens inequalities (Feldman 2019, Link and Guskey 2019). By taking something subjective (assessment) and trying to make it more quantitative (using rubrics, standardized tests, and similar approaches), instructors rationalize and legitimize the grades they give students rather than gaining any real understanding of what students are learning (Kohn 2006). Additionally, the underlying reason for student improvement when rubrics are provided may not reflect the rubric’s “pedagogical value” but instead a reduction in student anxiety and stress (due to having a better idea of assessment requirements) that allows students to focus and create better assignments (Francis 2018). If this is the case, then any method that gives students a better understanding of the assessment requirements will improve learning, and rubrics may not be the best method. Thus, it is up to the instructor to determine the best way to explicitly convey assessment criteria and expectations while also providing students with timely feedback on assessments.

Allen, D. and K. Tanner (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE – Life Sciences Education 5: 197-203.

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD, Alexandria, VA, USA.

Burton, K. (2015). Continuing my journey on designing and refining criterion-referenced assessment rubrics. Journal of Learning Design 8: 1-13.

Chamberlin, K., M. Yasue, and I. A. Chiang (2018). The impact of grades on student motivation. Active Learning in Higher Education DOI: https://doi.org/10.1177/1469787418819728

Felder, R. M., and R. Brent (2016). Teaching and Learning STEM: A practical guide. Jossey-Bass, San Francisco, CA, USA.

Feldman, J. (2019). Grading for Equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin, Thousand Oaks, CA, USA.

Francis, J. E. (2018). Linking Rubrics and academic performance: an engagement theory perspective. Journal of University Teaching and Learning Practice 15: DOI: http://ro.uow.edu.au/jutlp/vol15/iss1/3

Kohn, A. (2006). The trouble with rubrics. English Journal 95:1-5.

Link, L. J., and T. R. Guskey (2019). How traditional grading contributes to student inequities and how to fix it. Educational, School, and Counseling Psychology Faculty Publications 53: https://uknowledge.uky.edu/edp_facpub/53

Nieminen, J. H. (2020). Disrupting the power relationships of grading in higher education through summative self-assessment. Teaching in Higher Education DOI: https://doi.org/10.1080/13562517.2020.1753687

Panadero, E. and M. Romero (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice 21: DOI https://doi.org/10.1080/0969594X.2013.877872

Schwab, K., B. Moseley, and D. Dustin (2018). Grading grades as a measure of student learning. SCHOLE: A Journal of Leisure Studies and Recreation Education 33: 87-95.

Schinske, J., and K. Tanner (2014). Teaching more by grading less (or differently). CBE – Life Science Education 13:159-166.

Stevens, D. D., and A. J. Levi (2013). Introduction to Rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, Sterling, VA, USA.

Wolf, K., M. Connelly, and A. Komara (2008). A tale of two rubrics: improving teaching and learning across the content areas through assessment. Journal of Effective Teaching 8: 21-32.

Wormeli, R. (2006). Fair isn’t always equal: assessing and grading in the differentiated classroom. Stenhouse Publishers, Portland, ME, USA.

Zhao, K., J. Zhou, and P. Dawson (2021). Using student-instructor co-constructed rubrics in signature assessment for business students: benefits and challenges. Assessment in Education: Principles, Policy & Practice 21: DOI https://doi.org/10.1080/0969594X.2021.1908225

This page was authored by Michele Larson and last updated September 15, 2022

RELATED LINKS

  • How to Design Effective Rubrics
  • How to Use Rubrics in Canvas
