Research and Teaching
Journal of College Science Teaching—July/August 2021 (Volume 50, Issue 6)
By Doug Czajka, Gil Reynders, Courtney Stanford, Renée Cole, Juliette Lantz, and Suzanne Ruder
Process skills, also known as transferable, professional, or soft skills, are important components of the learning environment in STEM classrooms. This is especially true when using active-learning techniques that often require higher-order thinking (cognitive skills) and peer interaction (interpersonal skills) during class activities. The cognitive skills include information processing, critical thinking, and problem solving, while the interpersonal skills include interpersonal communication, teamwork, and management. These skills also carry value beyond the classroom as they are necessary in preparing students for their future careers and roles as contributing citizens (NRC, 2012). Employers often cite these skills as desirable in new hires, even valuing them over technical knowledge related to the job (NACE, 2020; AAC&U, 2018). While process skills are embedded in course- and program-level learning outcomes within many STEM disciplines at many institutions, they are rarely the explicit focus of classroom learning goals. Students are rarely given direct instruction (NRC, 2012) or assessed on their process skills, and thus they are less likely to receive feedback on the development of these skills as compared to the feedback they receive on content. This lack of alignment between intended learning outcomes and assessment practices (Biggs, 2003) means that while instructors may tell students that process skills are valuable, students may perceive the lack of process skill assessment and feedback as an indicator that skill development is unnecessary.
Feedback is critical in helping students gauge their progress toward achieving intended learning outcomes, and it ranks among the most effective ways of improving student achievement (Hattie, 2008; Kluger & DeNisi, 1996; Schneider & Preckel, 2017). It is important to provide formative feedback during each unit of instruction (related lessons) along with summative feedback at the end of a unit (Nicol & Macfarlane-Dick, 2006). Hattie and Timperley (2007) provide a model of feedback suggesting that feedback provided to students should answer three questions: (1) What are the goals? (2) What progress am I making toward the goals? and (3) What should I do to make better progress? These questions can be answered by providing students with clear feedback about their performance in relation to the intended learning outcomes, along with developmental feedback that students can use to identify goals and strategies to improve (Ferguson, 2011; Lizzio & Wilson, 2008; Shute, 2008). Wollenschläger et al. (2016) demonstrated that when rubrics were used to assess students’ ability to design a scientific experiment, including improvement information in addition to a performance evaluation led to better experiment planning and enabled students to more accurately assess their own abilities. Surveys and interviews with students reinforce this idea, showing that they find targeted feedback with guidance for improvement to be most constructive (Fong et al., 2018; Weaver, 2006).
Instructors may think that students do not pay attention to or use feedback, but there is evidence that the opposite is true: Most students value and use feedback extensively (Mulliner & Tucker, 2017; Zimbardi et al., 2017). When Mulliner and Tucker (2017) surveyed undergraduate students, they found that 93% of respondents said that they always use the feedback provided to them. The results of Zimbardi et al. (2017) were similar: “Ninety-two percent of first-year and 85% of second-year students accessed their feedback, with 58% accessing their feedback for over an hour.” With the evidence supporting the efficacy and high student use of feedback, it is clearly a critical component of the learning environment.
While the studies outlined above examined the role of feedback on student performance with respect to content learning, it is likely that feedback can play a similar role in improving student process skills. However, many instructors do not explicitly focus on developing student process skills in the classroom, assess these skills, or provide students with appropriate feedback on their skill development; instead, they may assume that students are developing process skills without directly measuring them (NRC, 2012). Knowing that students value and use feedback, we assert that the key to improving student process skills lies in the nature of the feedback they receive.
The Enhancing Learning by Improving Process Skills in STEM (ELIPSS) project was started with the goal of developing resources that instructors can use to identify, develop, and assess process skills in undergraduate STEM courses (Cole et al., 2018). The project initially began with efforts to create process-skill rubrics for a single organic chemistry course (Ruder et al., 2018). However, because these skills transcend any particular STEM discipline or course, the project used a multidisciplinary collaboration team to generate a set of rubrics that can be applied across disciplines and learning environments. The choice of a rubric format was supported by evidence that student and instructor perceptions of rubrics in higher education tend to be positive and that rubric use can lead to improvements in academic performance and in course instruction or activities (Bauer & Cole, 2012; Reddy & Andrade, 2010). Analytic rubrics (Dawson, 2017) were created for the following process skills: information processing, critical thinking, problem solving, written and interpersonal communication, teamwork, and management. Figure 1 shows an example of the information processing analytic rubric developed by the ELIPSS project. These rubrics use a traditional grid layout in which each row represents a category (i.e., the evaluative criteria) within the targeted skill and each column is a performance level, with quality descriptors populating the grid for ratings of one, three, and five (see Figure 1). The rubrics were designed for assessing process skills by evaluating either students’ written work or their interactions during in-class group work.
The analytic process-skill rubrics were classroom tested by the authors and a primary collaboration team (PCT) of instructors who were trained and experienced in facilitating and developing process skills through the use of Process Oriented Guided Inquiry Learning (POGIL) (Moog & Spencer, 2008; Simonson, 2019) in their classrooms. The PCT instructors taught in a variety of disciplines, including chemistry, computer science, biology, engineering, and mathematics. Our previous work has described the development process for the analytic rubrics, including a review of existing tools for assessing process skills and how the rubrics were tested for validity, reliability, and their utility for providing feedback (Reynders et al., 2019). The instructors used the rubrics in a variety of settings, including small- and large-enrollment (>150 students) classes as well as laboratory courses, at a range of institutions (see Table 1 in Cole et al., 2018, and in Cole et al., 2019).
The ELIPSS project produced analytic rubrics for process skills that can be implemented in multiple STEM learning environments to assess students, but one issue remained: how to provide more detailed feedback that helps students improve their process skills. While the ELIPSS analytic rubrics helped instructors become more familiar with process skills and assess them in their classrooms, they did not necessarily provide a clear pathway for delivering actionable feedback to students. Use of the analytic rubrics demonstrated that they can measure students’ process skill growth throughout a course, but students still struggled to assess their own skills (Reynders et al., 2020), and improvements were not seen consistently across all settings. Traditional analytic rubrics give students a sense of “What are the goals?” by pointing out what proficiency looks like in the descriptor for the highest rating in a category. Students can begin to get a sense of “What progress am I making?” by looking at a rating for their written work or group interactions. However, if these ratings do not effectively direct students to understand what specific actions or qualities of work earned those ratings, students will get only a glimpse of “What should I do to make better progress?” and the feedback they receive may have diminished impact. While students may identify the criteria for improved performance by looking at the rubric, they may not have a good sense of what actions are needed to reach that level. Ultimately, analytic rubrics can tell students about their current performance, but the descriptors in these rubrics are primarily evaluative and may not provide easily interpretable guidance for how students can improve. Thus, an opportunity to foster a growth mindset in students may be missed.
Interviews with students revealed that they could use the ELIPSS analytic rubrics to understand their current performance, but they did not know how to change their behavior to improve their scores. When students received scores of less than five, they would say that they “know the area that needs to be improved upon, just not what specifically needs to be improved in the area” and that their instructor or teaching assistant should “try to point out what I should do to get a five.” Some students received written comments on their rubrics and said that these comments would be key to helping them improve. For example, one student said that “without the comments, I’d have a vague idea of what to do, but nothing specific enough that I think I could change it.” Another student said that “I feel like [the analytic rubric is] more formatted to be reflective rather than ‘these are the steps you should take’ so maybe if there was another column...like recommendations.” In addition to the student interviews, feedback from undergraduate teaching assistants (UTAs) also informed changes to the analytic rubrics. After these UTAs assessed group interactions during class, they reported that it was difficult to write down student scores while also trying to write comments on the rubric. Based on feedback from instructors, teaching assistants, and student users of the analytic-style rubrics, the ELIPSS project set out to design a new style of rubric that is better suited for delivering actionable feedback to students.
Institutional Review Board approval was obtained at Virginia Commonwealth University under project number HM20005212. The project was classified as exempt, and informed consent was obtained from all participants prior to any data collection.

The goal of the feedback-style rubric development was to create a rubric that was more intuitive to use when assessing student process skills and that enabled instructors to give a range of viable suggestions for student improvement. As with the analytic rubrics, these new feedback rubrics were intended to use language that is accessible and applicable to undergraduate students across multiple STEM disciplines. Additionally, we wanted the feedback rubrics to contain suggestions for improvement that resemble the advice an instructor might give a student in a face-to-face interaction. With these principles in mind, both the language and the layout of the analytic rubrics were adapted into a new format dubbed a feedback rubric (Figure 2). These feedback rubrics continue to be tested through the same iterative process used to develop the analytic rubrics (Cole et al., 2018), in which feedback is gathered from faculty members, teaching assistants, and students. This new rubric style contains three sections for each process skill category. First, the ratings section contains the same language as the analytic rubrics but in a condensed format to save space for the other parts of the feedback rubric. In this section, a clear definition of the category is provided along with a rating scale from 0 to 5 that includes adverb modifiers relevant to each category. Next to the ratings section is a list of observable characteristics for each category. These are behaviors or indicators that the rater can look for as evidence of the skill during student interactions or on student work products (e.g., exams, group quizzes, projects, laboratory reports, homework). Finally, for each category there is a list of suggestions for improvement that the evaluator can use to extend the feedback to include concrete actions students can take to improve their performance.
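To make the three-part structure described above concrete, the sketch below encodes a single process-skill category as a simple data structure of the kind that might underlie a digital version of such a rubric. It is only a minimal, hypothetical illustration: the class and field names (FeedbackRubricCategory, observable_characteristics, and so on) and all example entries are placeholders of our own, not language from the ELIPSS rubrics, which are available at the project website.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeedbackRubricCategory:
    """One process-skill category in a feedback-style rubric (hypothetical encoding).

    Mirrors the three sections described in the text: a 0-5 rating with a
    category definition, a checklist of observable characteristics, and a
    checklist of suggestions for improvement.
    """
    name: str
    definition: str
    rating: Optional[int] = None  # 0-5 scale with adverb modifiers in the paper form
    observable_characteristics: List[str] = field(default_factory=list)
    observed: List[str] = field(default_factory=list)       # items the rater checked
    suggestions_for_improvement: List[str] = field(default_factory=list)
    suggested: List[str] = field(default_factory=list)       # suggestions the rater marked

# Placeholder example content; the actual rubric language differs.
teamwork = FeedbackRubricCategory(
    name="Teamwork",
    definition="Working with others toward a common goal (placeholder wording).",
    observable_characteristics=[
        "All group members contribute to the discussion",
        "Group members build on one another's ideas",
    ],
    suggestions_for_improvement=[
        "Rotate roles so every member records or reports at least once",
        "Pause to check that quieter members agree before moving on",
    ],
)
teamwork.rating = 3
teamwork.observed = [teamwork.observable_characteristics[0]]
teamwork.suggested = [teamwork.suggestions_for_improvement[1]]
```

A structure like this also makes it straightforward to export completed rubrics for individual students, in the spirit of the digital (Google Sheets) versions described later in this article.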
The new feedback versions of the process-skill rubrics were piloted in multiple classrooms at three universities. Here we describe the implementation in a large-enrollment (>150 students), two-semester organic chemistry course at a large, public, research-intensive university. The course was taught using the POGIL methodology (Moog & Spencer, 2008; Simonson, 2019), in which students spent most of the class time working on guided inquiry group activities. The course was supported by 10–13 UTAs per semester who were trained to facilitate large-enrollment, active learning environments (Ruder & Stanford, 2018, 2020). The UTAs attended weekly training meetings to learn how to facilitate course content during class activities and to elicit and assess student process skills during group interactions using the ELIPSS analytic rubrics. Midway through the fall 2018 semester of Organic Chemistry I, the UTAs were trained in the use of the feedback rubrics and transitioned to using them exclusively.
Part of the UTA responsibilities included providing an end-of-semester reflection on their experience as an organic chemistry UTA. The UTA reflections demonstrated an overwhelming preference for the newer feedback rubrics, and several common themes emerged across most reflections. Many UTAs commented on the usefulness of the observable characteristics section, which, as one UTA stated, “for a lot of the more abstract concepts like information processing and critical thinking, it helped to contextualize them by assigning certain behaviors to them.” Other UTAs similarly mentioned that the observable characteristics helped them become aware of actions that they would not have considered representative of a certain process skill, which likely made their ratings more accurate. Additionally, the UTAs found the suggestions for improvement to be a valuable component of the new rubrics. As one UTA stated, “the suggestions for improvement definitely helped me better identify areas that each group was struggling with and formulate feedback that represented the goals of each process skill. With the original [analytic] rubrics, I felt that I was making more general statements that weren’t necessarily aligned with the specific skills being assessed.” From a more practical perspective, the UTAs reported that they could give comments to students more quickly than before because they could simply mark an item in the suggestions for improvement section instead of writing all comments by hand.
During the fall 2018 and spring 2019 semesters, students’ abilities to assess their own process skills were investigated, along with the role that external feedback, in the form of a UTA-completed feedback rubric, could play in improving student self-assessments. Throughout the fall 2018 Organic Chemistry I course, student groups used a form of the feedback rubrics that did not contain the suggestions for improvement section to assess their group’s information processing, teamwork, and critical thinking at various times. Students were asked to check all of the behaviors they engaged in during group activities and to give themselves a rating for each category. Each day that students assessed one of their skills, UTAs assessed the student groups on the same process skill. As Figure 3 shows, a majority of student groups overestimated their performance in demonstrating process skills compared to the rating given by the UTA, a finding in line with the idea that students tend to overestimate their performance on content assessments (Hacker et al., 2000; Hawker et al., 2016). This type of overestimation could lead students to think that their skills are already sufficient and thus could inhibit their motivation to improve (Kruger & Dunning, 1999).
In the spring 2019 Organic Chemistry II course, we tested whether students would have more accurate perceptions of their process skills if they received completed feedback rubrics from the UTAs. The UTAs in spring 2019 were the same cohort who had been trained in the use of the feedback rubrics in fall 2018. During spring 2019, UTAs exclusively used the feedback rubrics to assess student process skills. Additionally, paper rubrics were replaced by digital versions created in Google Sheets that UTAs could access during class via an electronic device such as a phone or tablet. The digital rubrics also allowed feedback to be delivered to each individual member of a group, as opposed to the entire group sharing a single paper rubric. Completed digital rubrics were converted to PDFs and uploaded to the online grading software Gradescope (Singh et al., 2017). Gradescope allowed the course instructor to see whether a student had opened a graded assignment and thus whether the student had reviewed the feedback.
Students were again asked to self-assess their group skills in spring 2019, but they did so individually. During a class period midway through the semester, the UTAs rated each group using the teamwork feedback rubric. After class, students were asked to individually rate their group’s teamwork skills using a digital version of the rubric through the course management software. As in the fall semester, students in the spring semester overestimated their teamwork skills compared to the UTA ratings (Figure 4a). A few days before students were assessed on their teamwork skills for a second time, the UTA-completed rubric was returned to students via Gradescope so they could see the ratings, suggestions for improvement, and any comments made by the UTAs. This timing was a function of the logistics of providing the feedback and was intended to encourage students to reflect on their prior performance shortly before being assessed again. As in the first instance, the UTAs assessed students on their teamwork, and after class, students assessed themselves. During this second self-assessment in the spring, students were more accurate in their own self-assessments (Figure 4b). This was determined statistically using Lin’s concordance correlation coefficient. Lin’s rc ranges from -1 to 1 and measures how closely the paired ratings fall along a 45° line through the origin, which in this case represents perfect agreement between the student and UTA ratings. The concordance for the postfeedback self-assessment (rc = 0.403, n = 97) was closer to 1 (perfect agreement) than that for the prefeedback self-assessment (rc = 0.175, n = 137), indicating better alignment between UTA ratings and student self-assessments after students received external feedback, including the suggestions for improvement.
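For readers who wish to run a similar agreement analysis on their own rubric data, the short sketch below computes Lin’s concordance correlation coefficient from paired ratings using its standard definition (2 × covariance divided by the sum of the two variances and the squared mean difference). It is a minimal illustration: the function name lins_ccc and the example ratings are hypothetical and are not the data reported in this study.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired ratings.

    Returns a value between -1 and 1; 1 indicates that every pair (x_i, y_i)
    lies exactly on the 45-degree line through the origin (perfect agreement).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    # Population (1/n) variances and covariance
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mean_x) * (y - mean_y))
    return 2 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

# Hypothetical paired ratings on the 0-5 rubric scale (not the study's data):
student_self_ratings = [5, 4, 5, 3, 4, 5, 4, 3]   # students' self-assessments
uta_ratings          = [4, 4, 3, 3, 3, 4, 4, 2]   # UTA ratings of the same groups
print(f"Lin's rc = {lins_ccc(student_self_ratings, uta_ratings):.3f}")
```

A coefficient closer to 1 indicates closer agreement between the two sets of ratings, which is the sense in which the postfeedback value (rc = 0.403) reflects better alignment than the prefeedback value (rc = 0.175) reported above.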
The feedback-style rubrics developed by the ELIPSS project are a useful new tool for guiding students toward a clearer understanding of what it means to effectively develop process skills such as information processing, critical thinking, communication, and teamwork. While analytic rubrics provide a measure of achievement or performance, they can be seen as largely evaluative; they primarily provide students with feedback on “What are the goals?” and begin to provide feedback on “What progress am I making?” The built-in suggestions for improvement in the feedback rubrics can support a growth mindset in students by helping to clarify “What progress am I making?” and by answering the “What should I do to make better progress?” question that is essential to effective feedback. Students value this type of developmental feedback, and having an observer use the feedback-style rubrics described here can lead to more accurate student self-assessment, an important component of student growth and improvement.
The new rubrics are readily adoptable and can be used effectively by instructors in a variety of classroom settings, not just those involving learning assistants or TAs. The observable characteristics give instructors helpful guidance for recognizing indicators of a specific process-skill category in student group interactions or written work. These characteristics may be especially important in the assessment of process skills because both students and instructors may be less familiar with identifying evidence of these skills. Providing instructors and teaching assistants with specific behaviors to look for in each category may also support more accurate assessments and ratings of the targeted skills. These feedback rubrics represent a valuable tool for instructors looking to help students develop process skills that will allow them to become better learners and to be successful beyond the classroom.
We would like to thank our Primary Collaboration Team and Cohort members for all their valuable input during the development of the feedback rubrics. We would also like to thank all the students and undergraduate teaching assistants who have allowed us to examine their work and provided reflections on using the rubrics and receiving feedback. This work was supported in part by the National Science Foundation under Division of Undergraduate Education grants #1524399, #1524936, and #1524965. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
More information about the ELIPSS project and access to both the analytic and feedback-style process-skill rubrics can be found at the project website at http://elipss.com.
Doug Czajka is an assistant professor in the Department of Earth Science at Utah Valley University in Orem, Utah. Gil Reynders is a professor of chemistry at Sauk Valley Community College in Dixon, Illinois. Courtney Stanford is an assistant professor of chemistry and chemistry education at Ball State University in Muncie, Indiana. Renée Cole is a professor in the Department of Chemistry at the University of Iowa in Iowa City, Iowa. Juliette Lantz is a professor in the Department of Chemistry at Drew University in Madison, New Jersey. Suzanne Ruder (sruder@vcu.edu) is a professor in the Department of Chemistry at Virginia Commonwealth University in Richmond, Virginia.