SWF Primer: Evaluation Factors (Part Two)

[Again, a preface to communicate my sincere regret that this post pertains almost exclusively to Full-Time faculty at Ontario Colleges. It is absolutely shameful that the work associated with the preparation and evaluation of classes taught by contract faculty is not measured, despite repeated efforts by partial-load and full-time faculty — as a union — to remedy this disgusting inequity. In an upcoming post I will try to relate some of the SWF issues discussed here to the work of contract faculty.]

So, I wanted to follow up on my last post, in which I outlined the different evaluation factors, as presented in the Collective Agreement.

Today, I wanted to show how they appear on a SWF (Standard Workload Form), and what a SWF communicates to full-time faculty about the time they are given to evaluate and provide feedback to students in all of their classes.

Let’s start by taking a sample SWF, which I have highlighted in different colours. I recognize that they may be formatted differently in different colleges, although they should all contain all of the information in the sample SWF provided on pp. 85-86 (e-pages 89-90) of the Collective Agreement.

The parts that are of significance for the purposes of a discussion of evaluation factors are in blue and pink. The blue column identifies the breakdown of evaluation factors for each of the classes assigned, and the pink translates that breakdown into a number of hours provided for evaluation and feedback.

So let’s start with the totals and work our way backwards: Our hypothetical professor from a hypothetical semester long, long ago is assigned to teach five sections of class (three different preps, for what it’s worth), and is responsible for evaluating and providing feedback on the work of a grand total of 173 students across all of those classes combined.

And to do all of the evaluation for all of the students in all of those classes, our hypothetical teacher is provided with a total of 12.47 hours for each of the weeks covered by this workload assignment. (This is shown on the bottom row of the left pink column.)

Looking upward from that row, we can see that the five sections are each allocated a specific amount of time for all of their evaluation/feedback needs — anywhere from 2.23 hours to 2.61 hours.

But let me focus on the fourth row of the classes — the section that is attributed 2.41 hours weekly for evaluation/feedback. That section has a class size of 38, yet the hypothetical professor is attributed LESS time to evaluate those 38 students than they receive to evaluate the 35 students in the course directly below it.

So how can a class have 8% more students, but provide a professor with 8% less time to grade and provide feedback on their assessments? The answer lies in the evaluation factors, which we see broken down by percentage in the blue columns.

(Class #1 has 70% of its assessments listed as Essay-type, 20% as Routine/Assisted and 10% as In-Process. Class #2 is 75% Essay-type, 0% Routine/Assisted and 25% In-Process, and Class #3 is broken down as 50% Essay-type and 25% each for Routine/Assisted and In-Process.)

(As a reminder, the terms “Essay”, “Routine/Assisted” and “In-Process” to describe types of evaluation can be misleading; explanations of each are provided in the previous post.)

An increase in the percentage of “Routine/Assisted” or (especially) “In-Process” evaluation types, and a corresponding reduction in the percentage of “Essay”-type evaluation, means that faculty will be attributed less time to evaluate more students.
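
To make the arithmetic concrete, here is a rough sketch in Python. The rates (hours attributed per student, per weekly teaching hour) are the Collective Agreement minimums discussed in the previous post; the 0.0092 figure for In-Process is my reading of the agreement’s table, and the 3 weekly teaching hours per section are an assumption for illustration.

```python
# Sketch: weekly evaluation hours attributed for a class, given the
# percentage mix of evaluation factors. Rates are in hours per student,
# per weekly teaching hour (Collective Agreement minimums; the 0.0092
# In-Process figure is my reading of the agreement's table).
RATES = {"essay": 0.030, "routine_assisted": 0.015, "in_process": 0.0092}

def weekly_eval_hours(mix: dict, students: int, teaching_hours: int) -> float:
    """mix maps each evaluation type to its share of the course's assessment."""
    blended = sum(RATES[kind] * share for kind, share in mix.items())
    return blended * students * teaching_hours

# 35 students, mostly Essay-type evaluation (assuming 3 teaching hours):
a = weekly_eval_hours({"essay": 0.70, "routine_assisted": 0.20, "in_process": 0.10}, 35, 3)
# 38 students, but with more Routine/Assisted and In-Process evaluation:
b = weekly_eval_hours({"essay": 0.50, "routine_assisted": 0.25, "in_process": 0.25}, 38, 3)
print(round(a, 2), round(b, 2))
```

Run as written, the larger class (38 students) comes out to roughly 2.4 hours per week and the smaller one (35 students) to roughly 2.6: the same inversion shown on the sample SWF.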

So, when I was a member of a Workload Monitoring Group, one of the most common questions that I used to get was, “How can my class sizes keep going up and up, but my total workload remains the same on the SWF?” The answer was most frequently that the classes in question were being credited with less “Essay”-type evaluation and more “Routine/Assisted” or “In-Process”.

Which could be absolutely fine, if the nature of the course’s evaluation were discussed between the professors who teach the class and the manager, as both faculty and management have agreed must happen (cf. Article 11.01 E3), and the two sides came to an agreement about what types of assessments the course material requires. That conversation should see the manager asking the professor what kinds of assessments and feedback are needed to determine (and promote) students’ achievement of the course outcomes, and should see both parties agreeing upon the appropriate breakdown of evaluation factors for the assignments administered to the students.

(There’s one more step, but I’ll come to that in an upcoming post.)

The problem is when the evaluation factors fail to accurately reflect the work that is completed by the students in a class, and in turn the work that must be performed by that class’ teacher. If online courses are given “in-process” evaluation factors when it is impossible for the teacher to evaluate the students and provide them with feedback during the class time, then that teacher is denied the time that they need to evaluate the students’ actual work, and those students are in turn cheated out of the feedback that they — and Ontario taxpayers — are paying for.

If the evaluation time provided on a SWF is insufficient to meet students’ needs, then faculty are left with two options: a) Perform uncredited, unacknowledged, unpaid labour to provide the students with the feedback that they deserve and need (Just like Contract faculty!) or b) Spend only the time that they are attributed for the task of evaluating students, regardless of whether that time is sufficient to meet student needs.

As ever, please submit corrections, questions, and rants to ontariocollegeprof@yahoo.com. Any letters posted in whole or in part will have identifying information removed.


SWF Primer: Evaluation Factors (Part One)

[I note that this post concerns SWFs, and is therefore of interest primarily to Full-Time faculty. I’ll try to follow it up with a post on a related issue that might be of some relevance to Partial-Load faculty.]

Okay, so… let’s talk SWFs for a bit.

It’s worth introducing the topic at this point, since SWFs for the Winter semester are due six weeks before the first scheduled day of teaching, not counting holidays.

So, if the teaching period of the Winter semester happens to start on Monday, January 4, then the SWF would be due on… Friday, November 13, if my calendar and math are correct.

The SWF’s delivery is, according to the Collective Agreement (Art. 11.02 A1[a]), supposed to be preceded by a discussion of the workload proposed by your manager.

In addition, Article 11.01 E3 reads:

And it’s precisely the methods of evaluation that I want to review, because the shift to online teaching may impact them in ways that faculty don’t realize.

So, let’s start by covering the three categories of evaluation / feedback recognized in the Collective Agreement. In my next post on this topic, I’ll talk about how to read a SWF to understand how evaluation factors translate to specific amounts of time that faculty are given to evaluate their students in each of their classes and provide feedback.

Let me preface with a couple of points:

  1. Full-time faculty are given a finite amount of time to complete the evaluation and feedback of the students in their classes; that time is clearly recorded on the SWF.
  2. The Collective Agreement provides minimum attributions of time for the grading of each student in each class. The attribution of time depends on the type of evaluation that is required for each class.

There are three different types of Evaluation. I’ll outline them here, and then in my next post, I’ll demonstrate how they are put into action on a SWF.

Essay Type: Article 11.01 E2 of the Collective Agreement specifies that the Essay evaluation factor is used for grading “essays; essay type assignments or tests; projects; or student performance based on behavioral (sic) assessments compiled by the teacher outside teaching hours”. Given that list, the term “Essay” type is a bit misleading.

Broadly speaking, the litmus test for determining whether a test or assignment is “essay type” depends upon the level of interpretation required. If you’re grading according to the presence/absence of keywords, or if the work could possibly be put into a format where it could be graded by a computer or a well-trained chimpanzee, then it probably doesn’t qualify as “Essay” type. If, on the other hand, the evaluations depend on students working with or applying abstract concepts, or writing extended passages, then it probably does.

The other point that I would make concerns the role of student presentations, which I interpret to qualify as “student performance” that is graded “based on behavioral assessments”, and that therefore count as “Essay” type evaluation if the teacher has to compile the assessment and feedback at home.

By the same token, I understand all portfolios of students’ work (including, say, student journals or any assessment representing a compilation of student work throughout the semester) to count as “Essay-type” assessment.

If a course’s evaluation/feedback is rated as 100% “Essay”-type, then a professor is attributed a minimum of 108 seconds per student, per week, per teaching hour of the course. In other words, a prof could be granted as little as 5.4 minutes each week to complete the assessments for each student of a course that has three teaching hours weekly.
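
As a quick sanity check on that arithmetic, a minimal sketch using the 108-second minimum cited above:

```python
# Minimum weekly evaluation time per student for a course rated 100%
# "Essay"-type: 108 seconds per student, per weekly teaching hour
# (the Collective Agreement minimum cited above).
ESSAY_SECONDS_PER_STUDENT_PER_HOUR = 108

def essay_minutes_per_student(weekly_teaching_hours: int) -> float:
    """Weekly minutes attributed per student for an all-essay course."""
    return ESSAY_SECONDS_PER_STUDENT_PER_HOUR * weekly_teaching_hours / 60

print(essay_minutes_per_student(3))  # 5.4 minutes per student, per week
```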

Routine or Assisted Type: The same article defines the second category of evaluation — “Routine or Assisted” — as “grading by the teacher outside teaching contact hours of short answer tests or other evaluative tools where mechanical marking assistance or marking assistants are provided”.

There has been some debate about which noun was modified by the “where” clause, but let’s let that slide for the moment. I’ve always interpreted this type to include the grading that could be done by Scantron or could be graded according to an answer key that could be provided to a marking assistant (e.g., “Connect the terms in Column A to the definitions in Column B”).

It’s significant that this kind of assessment isn’t evaluating students’ ability to apply theoretical concepts, demonstrate critical thinking, or relate principles to their own experiences.

The minimum attribution for Routine/Assisted-type grading in the Collective Agreement is half that attributed to Essay-type grading. This translates to 54 seconds per student, per teaching hour, per week, for a course that has only Routine/Assisted-type assessments.

In-Process: The final category of assessment for the purpose of measuring workload is “in-process”, which is defined as “evaluation performed within the teaching contact hour”.

When asked for examples of “in-process”-type assessments, I often offer an example of nursing students demonstrating their ability to use a blood pressure cuff. The assessment takes place during the class time; the student knows their grade within the class period; and there is nothing about the evaluation or feedback that the teacher (i.e., professor or instructor) needs to complete at home.

Since the evaluation takes place (and feedback is provided) during the class hour — which is already counted on the SWF — in-process evaluation/feedback is attributed very little time to complete: approximately half the time attributed for Routine/Assisted grading. So if a 3-hour course were to be graded entirely using in-process grading, the minimum attribution would be 1 minute and 40 seconds for evaluation and feedback to each student, weekly.
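
Putting the three minimums side by side, per student and per weekly teaching hour (the In-Process rate of 0.0092 hours, about 33 seconds, is my reading of the Collective Agreement’s table; it is consistent with the roughly-1:40 figure above):

```python
# Per-student weekly minimums, in seconds, per weekly teaching hour.
# The Essay and Routine/Assisted figures follow from the 108-second and
# 54-second values above; the In-Process value (0.0092 hours) is my
# reading of the Collective Agreement's table.
RATES_SECONDS = {
    "essay": 108,
    "routine_assisted": 54,
    "in_process": 0.0092 * 3600,  # about 33 seconds
}

def weekly_seconds(eval_type: str, teaching_hours: int) -> float:
    return RATES_SECONDS[eval_type] * teaching_hours

mins, secs = divmod(round(weekly_seconds("in_process", 3)), 60)
print(f"{mins}:{secs:02d}")  # the "1 minute and 40 seconds" above, give or take rounding
```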

Some points worth considering about the evaluation types:

  • In-Process assessments (those done “in real-time” during the class teaching hour) are virtually non-existent in online teaching. I suppose that if a professor were teaching a class on American Sign Language, then it might be possible to assess every student’s skill (and return their grade) within scheduled class hours, over a screen. Beyond that, I’m having a hard time imagining what in-process grading might look like in an online (and particularly an asynchronous) context.
  • Changing evaluation factors is the easiest way for managers to assign more students to faculty without increasing the mathematical measurement of their workload. Faculty are attributed as much time to grade essays for 50 students as they are to grade multiple choice tests for 100 students, or to grade in-class assessments for 163 students.
  • The minimum attribution for “Essay” type evaluation is simply inadequate for providing meaningful feedback to students. Teachers are attributed only twice as much time to grade a class with all essays as they are to grade a class that only has multiple-choice tests all semester. And they’re only given 3.3x more time to grade essays at home than faculty are given to grade assessments during the class time. And, in fact, I can read and grade an essay in a couple of minutes. What I can’t do in a couple of minutes is explain the basis for that grade to the student, and give the student feedback to help them improve on future work.
  • The time attributed to faculty for evaluation/feedback is based upon the teaching hours of a class. If a class were reduced from 3 weekly teaching hours down to 2, then the time attributed for evaluation would similarly be reduced by 1/3, although it’s not clear that the assessment obligations of the faculty would be reduced appropriately.
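
The trade-off in the second bullet can be checked directly. A sketch, assuming 3 weekly teaching hours and the minimum rates (again, the 0.0092 In-Process figure is my reading of the agreement’s table):

```python
# Why 50 essay-graded students can "equal" 100 multiple-choice students,
# or 163 in-process students, on a SWF: attributed evaluation time is
# rate x students x weekly teaching hours, and the rates shrink at each
# step. Rates are in hours per student, per weekly teaching hour.
RATES = {"essay": 0.030, "routine_assisted": 0.015, "in_process": 0.0092}

def attributed_hours(eval_type: str, students: int, teaching_hours: int = 3) -> float:
    return RATES[eval_type] * students * teaching_hours

for kind, students in [("essay", 50), ("routine_assisted", 100), ("in_process", 163)]:
    print(kind, students, "students:", round(attributed_hours(kind, students), 2), "hours/week")
```

All three combinations come out to roughly 4.5 hours of attributed evaluation time per week.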

That’s it for now. The next post will look at how these numbers get plugged into a SWF, and how FT faculty members can determine whether their SWFs accurately reflect the time demands presented by their classes’ actual evaluation and feedback needs.

As ever, I invite faculty — particularly contract faculty — in Ontario Colleges to share their questions, stories and opinions. You can do this by e-mailing ontariocollegeprof@yahoo.com. Confidentiality will be strictly guarded.