Soft marking and academic fraud

The thing to understand about modelling, and econometrics in particular, is that the story often gets ahead of the analytics. Some of you may recall that during the FuelWatch debacle the government was relying on a regression model that was ultimately shown to be incomplete, while last year the government had a very dodgy graph and regression in the Budget papers that got exposed here. So it is with the latest ‘evidence’ of soft marking at universities. The analysis is much more sophisticated than what we’ve seen out of Canberra, but again the story is running ahead of the evidence actually presented.

The paper by Gigi Foster has received massive attention in The Australian and also some publicity at Club Troppo (cross-posted at Core Economics) and from Andrew Norton.

GIGI Foster knows her disturbing research findings on international students won’t make her many friends. In a university sector grown dependent on international fee revenue, it might not do much to progress her academic career either.

But the audience she wants to reach is not academe but the policy-makers. It’s at this level where change could be driven to address the poor language and cultural skills she says are undermining their performance.

“It is risky for me, but it is my duty to look at this,” says Foster, a Harvard graduate who moved to Australia in 2003.

But she believes her research provides evidence that universities are too often turning a blind eye to the poor written and verbal English skills of many international students.

She says her statistical analysis reveals that international students are being allowed to underperform and this is being camouflaged to an extent by grade inflation.

At the same time, these poor English skills weigh on the results of domestic students in the same tutorials.

“I want Australian policy-makers to see what is actually happening,” Foster says.

But she believes concerns over fee revenue, sensitivities over the potential for appearing “xenophobic” and political correctness are preventing the sector from confronting the issues.

Those comments are not supported by the paper. The research question being asked is:

educational equity concerns would also lead us to ask how the infusion of international and NESB students into Australian higher education impacts upon the marks of other students.

I worry when I see questions like this. I can imagine Archie Bunker asking this sort of question. If we really believe that the presence of international students reduces the marks of ‘other students’, what sort of policy conclusions should or would be adopted? Anyway, I digress. The paper takes a ‘have data, will regress’ approach, as economists often do.

The sample consists of undergraduate students in the Business faculties of two Australian universities (UniSA and UTS) during the autumn and spring semesters of 2008 and 2009. There are 74,276 observations. That is not that many once you consider that there are four semesters and full-time students should be taking four subjects per semester (so 74,276 divided by 16 gives a rough estimate of the number of actual students: about 4,600 or so). What we don’t know is the break-up of those students. For example, how many are full-time or part-time? We also don’t know what subjects they are doing. As I suggest below, this could be an important factor. I also have some difficulty understanding the grade Foster uses in the analysis. She speaks of a ‘tutorial mark’: is that a grade earned in tutorials, or is it the in-term assessment?
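As a quick back-of-the-envelope check (under the untested assumption about how many subjects students actually take, since the paper does not give the full-time/part-time break-up), the implied head-count looks something like this:

```python
# Back-of-the-envelope estimate of the number of distinct students behind the
# 74,276 subject-level observations, under different (unverified) assumptions
# about enrolment load.
observations = 74_276
semesters = 4  # autumn and spring of 2008 and 2009

for subjects_per_semester in (4, 3, 2):  # full-time load down to part-time loads
    implied_students = observations / (semesters * subjects_per_semester)
    print(f"{subjects_per_semester} subjects/semester -> ~{implied_students:,.0f} students")
```

If a substantial share of students are part-time, the number of distinct students is correspondingly larger, which is exactly why the missing break-up matters.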

She then slices and dices the data into a number of categories: international v non-international and NESB v non-NESB. Unsurprisingly, she reports that non-international students get an average mark of 61.69% and international students get 57.4%, while non-NESB students get 62.23% and NESB students get 58.34%. Those average marks are a bit lower than I expected, but the relativities are about right. The ‘lowness’ of the marks could be driven by the fact that some students fail and get to repeat subjects. But Foster doesn’t tell us what she has done with those students or how she has handled drop-outs (probably nothing). As Andrew Norton explains, this is not evidence of soft marking.

But then Foster goes into a whole bunch of regression analysis. When the paper gets rewritten, hopefully she will explain her regressions and variable definitions much better. In the meantime, there are some strange results (or at least my understanding of the results could be enhanced with more explanation). For example, in Table 3 some regressions suggest that new students (first years?) get higher marks, ceteris paribus, than everyone else. It also appears that domestic NESB students get lower marks, ceteris paribus. That is very counter-intuitive: these would be the kids of migrants who have come to Australia and speak a language other than English at home but have come up through the Australian education system.
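To make the point about unclear specifications concrete, here is a purely illustrative sketch of the kind of mark regression being discussed. The variable names, the data file and the choice of controls are my guesses, not Foster’s actual specification:

```python
# Purely illustrative: a guess at the kind of mark regression under discussion.
# Column names (tutorial_mark, international, nesb, new_student, course) and
# the data file are hypothetical, not taken from the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_marks.csv")  # hypothetical student-by-subject data

model = smf.ols(
    "tutorial_mark ~ international + nesb + new_student + C(course)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["course"]})
print(model.summary())
```

Without the paper spelling out which choices of this kind were actually made (fixed effects, clustering, how ‘tutorial mark’ is defined), coefficients like those in Table 3 are hard to interpret.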

We then get to the crunch. Figure 3 shows the distribution of international students in tutorials.

If I understand the figure correctly, about 15% of tutorials have no international students in them, while some (a very small number) have close to 100% international students. This raises a question about tutorials: are they compulsory or voluntary, and are students assigned to a tutorial class or can they self-select into one? Table 5 is the controversial table; columns (3) and (4), to be precise.

In Table 5, columns (1) and (2), we see the performance of non-NESB students as a function of the share of domestic NESB and international NESB students enrolled in the course and in the tutorial. An increase in domestic NESB students in tutorials leads to non-NESB students getting better marks on average, while an increase in international NESB students leads to non-NESB students getting lower marks on average.

In Table 5, columns (5) and (6), we see the performance of domestic NESB students as a function of domestic NESB students in the course and in tutorials and international NESB students in the course and in tutorials. Here we see that the more domestic NESB students there are in tutorials, the better those students perform on average. This could be interpreted as evidence that going to tutorials leads to better marks.

The controversial bits are columns (3) and (4). The performance of international NESB students is better the greater the percentage of international NESB students enrolled in the course. Some might interpret this as evidence of soft marking in high-revenue subjects. But that is being polite: it would amount to evidence of corruption, with some students being singled out and given higher marks than they would otherwise receive. I do not believe for one moment that that interpretation is correct. Unfortunately, many others do.
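For readers trying to picture what columns (3) and (4) are estimating, a hedged sketch of a Table 5-style peer-share regression might look like the following; all variable names are hypothetical and the actual estimating equation may well differ:

```python
# Illustrative sketch (not Foster's code) of a Table 5-style regression:
# marks of international NESB students regressed on the shares of domestic
# NESB and international NESB students in the course and in the tutorial.
# All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_marks.csv")  # hypothetical data
intl_nesb = df[(df["international"] == 1) & (df["nesb"] == 1)]

model = smf.ols(
    "tutorial_mark ~ share_dom_nesb_course + share_intl_nesb_course"
    " + share_dom_nesb_tutorial + share_intl_nesb_tutorial",
    data=intl_nesb,
).fit()
print(model.params)
```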

We need to know a lot more about the data before coming to any conclusions. For a start, what about self-selection? Are we simply observing the fact that students self-select into particular subjects that have a reputation for being soft? Or, conversely, self-select into subjects that emphasise skills international students might do better at, like maths or lots of memorisation? To my mind that almost certainly explains columns (5) and (6). In the worst-case scenario, we may simply be observing a tutor effect. There are two possible explanations here: first, international NESB students could self-select to tutors who are known to be a soft touch; or second, they could self-select to tutors who better understand their language difficulties (or indeed speak their language) and who are better able to accurately gauge student understanding and performance.
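If a tutor identifier exists in the data (the paper does not say), one rough way to probe the tutor-effect story would be to add tutor fixed effects and see whether the international NESB share still matters. Again, this is only a sketch under assumed variable names:

```python
# If a tutor identifier exists (assumed here as tutor_id), tutor fixed effects
# would absorb any constant soft-touch or better-assessment effect of a given
# tutor; a peer-share coefficient that shrinks towards zero would point to a
# tutor effect rather than course-wide soft marking. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_marks.csv")  # hypothetical data
intl_nesb = df[(df["international"] == 1) & (df["nesb"] == 1)]

with_tutor_fe = smf.ols(
    "tutorial_mark ~ share_intl_nesb_course + share_intl_nesb_tutorial + C(tutor_id)",
    data=intl_nesb,
).fit()
print(with_tutor_fe.params.filter(like="share"))
```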

The problem with the Foster analysis is that she assumes a grading-to-the-curve effect and assumes her results show ‘a course-wide downward adjustment in the grading standards applied to these students’ when her analytics do not exclude a whole bunch of other, less sinister explanations.
Update: Gigi Foster responds in comments below.
