Computers & Texts No. 16/17
Winter 1998

Web + QMark + Humanities = ?

A Case Study

Chris Hopkins
English, School of Cultural Studies
Sheffield Hallam University
C.I.Hopkins@shu.ac.uk

This case study outlines the ways in which Computer-Based Learning (CBL) and Computer-Assisted Assessment (CAA) have been used to teach a cross-disciplinary humanities unit in the School of Cultural Studies at Sheffield Hallam University. The article also discusses the humanities students' responses to these innovations.

Aims on Paper

In 1994, the senior managers at Sheffield Hallam University formulated a plan to deliver a certain number of level 1 units in a new and more economical way in order to liberate staff efforts for new postgraduate development, while also giving students a wider curriculum in their first year, and thence the possibility of 'delayed and informed choice'. That is to say, students would be required to undertake some study in subject areas bordering on their own, both as a good in itself, and to give the possibility of transfer to a different degree route from their initial choice without any disruption.

These not wholly homogeneous aims were intended to be brought together by the practical curriculum solutions adopted, whereby groups of five or six (relatively) related degrees would share one common unit. This was meant to achieve economies of scale - and coping with scale certainly became one of the first issues to be addressed as the central plan filtered down to individual Course Teams for degree routes. In the School of Cultural Studies, a single core unit was to be planned for five different degrees: Communications, English, Film, History and Media. The planning group decided to design a unit which would:

Such a unit required a number of new approaches to its delivery. Since it was to be, unusually for a humanities course, taught by lecture only in Semester 1, and was intended to introduce a varied audience to a very broad range of knowledge and concepts, CAA seemed a plausible assessment tool. It could obviously economise on the labour of marking, but was also appropriate to test coverage of the field of ideas, and to help motivate attendance and learning. More conventional detailed learning was to be pursued in Semester 2, when students selected two out of the 10 lecture topics to study further.

This all sounded logical on paper, but hardly anyone in the School had any experience of this kind of assessment. When I became Unit Leader for this new unit, Modernity & Modernisation, I was unsure that this could be made to work, particularly in the rather limited development time available. Some of my colleagues were considerably less optimistic than this.

My colleague, Noel Williams, provided a software package which allowed us to run a multiple-choice test drawing on a large library of questions, from which 20 were randomly selected for each attempt. Students could repeat the test as often as they needed until they scored 70% in a single run. Each student was issued with a copy of the test on disk, which could be used on any PC running Windows; a successful attempt allowed them to print out a pass certificate.
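In outline, the logic was straightforward: draw a random sample from the question library, score the run, and allow unlimited repeat attempts. A minimal sketch of that logic in Python follows - the names, data layout and demo values are invented for illustration, not the actual package:

    import random

    QUESTIONS_PER_ATTEMPT = 20
    PASS_MARK = 0.70  # 70% correct in a single run

    def run_attempt(library, answer_fn):
        """One attempt: draw 20 questions at random and score the run."""
        drawn = random.sample(library, QUESTIONS_PER_ATTEMPT)
        correct = sum(answer_fn(q) == q["answer"] for q in drawn)
        return correct / QUESTIONS_PER_ATTEMPT

    def sit_test(library, answer_fn, max_attempts=100):
        """Repeat attempts until a single run reaches the pass mark."""
        for attempt in range(1, max_attempts + 1):
            score = run_attempt(library, answer_fn)
            if score >= PASS_MARK:
                return attempt, score  # eligible for a pass certificate
        return None, score

    # Demo: a toy library and a student who knows about 80% of the material.
    toy_library = [{"id": i, "answer": 0} for i in range(480)]
    knows_most = lambda q: 0 if random.random() < 0.8 else 1
    print(sit_test(toy_library, knows_most))

The real package ran from the student's disk copy on any Windows PC; the sketch merely mirrors the draw-score-repeat logic.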

Effective Questions

To provide a large enough library of questions, we thought each of the twelve lectures needed to generate 80 questions. In fact, this proved not only a large task, but one at which we collectively failed. Most of the lecturers concerned found they could produce around 40 useful questions (and this included some alternative phrasings of basically the same questions). The resulting halving of the question base - from the planned 960 or so to roughly 480 - was not a major problem, though it meant that questions recurred more frequently.

More problematic was knowing how to ask effective questions in the required multi-choice format - an assessment technique none of us had any experience of. The format of the test required a proposition to be matched to one of three alternatives. The quality of our questions undoubtedly varied, tending to improve as we gained experience. Most questions seemed to us to be genuinely useful, demanding requisite knowledge and/or the ability to show comprehension of definitions and concepts. Examples we thought were reasonably effective are included in Figure 1.

In practical terms, and despite the gloomy predictions of some colleagues that the test would function so badly that we would have to substitute a paper version, this first use of CAA in the School of Cultural Studies did work: four hundred and fifty students were assessed without any human marking, with about 95% achieving a pass mark.

[Figure: two sets of example multiple-choice questions]
Fig. 1. Question Types

Feedback

How effective, though, was the test for assessing the learning outcomes of the unit? The Unit team were sure that it assessed understanding of the lecture programme and felt that it was able to discriminate between secure and partial comprehension of the material. But any such professional conviction is only half the battle - students' and colleagues' perceptions are also vital factors. Some colleagues not involved in teaching the unit were inclined to be sceptical, arguing that anyone could guess the correct answers, whether they had attended the lectures or not. This impression - which, though not materially relevant, did contribute to staff perceptions of the test - was modified in some instances when staff attempted the test and often found it less readily passable than they had expected. Student feedback suggested that there were aspects of the test which worked well, but that there were also weaknesses. Questionnaires completed by students after the tests showed a fair degree of consensus about benefits and problems. The following comments are representative:

Some of the positive feedback was a source of comfort - particularly that which suggested that this form of assessment suited current patterns of student learning and life, and that the assessment supported and reinforced learning. Some feedback was less reassuring: though we were pleased that students did not find the test too stressful, it seemed clear that it was insufficiently challenging in its current form. Where we had estimated that the test would on average need to be attempted 10 times to pass, the feedback suggested that most students were passing in five attempts or fewer.

Even more worrying were the expressions of partial or complete lack of faith - the feeling that passing was merely luck, that the test assessed nothing, or that only an essay could really assess humanities learning. Though we could say that, in our professional judgement, the questions were generally well designed to test understanding rigorously, while admitting that the level of difficulty of the whole test was not yet right, it was more difficult merely to contradict long-held assumptions.

Some did not believe that any process other than guessing was needed to pass the tests (though this is improbable in terms of the ways people approach multi-choice questions, as well as in terms of probability). Clearly, a few students regarded it as essential for assessment to have a do-or-die format and expressed considerable irritation that retrieval of 'failure' was allowed through the repeated attempts whereby performance could be improved even at the point of assessment. Others - and here they were in accord with many lecturers - felt that a multi-choice test was a poor, or no, substitute for an essay or exam involving extended argument. They were, of course, quite right - it is not a fit substitute, but is doing something different. This had been explained in the briefing about the assessment package (Semester 1: a broad introduction tested by Computer Test; Semester 2: more detailed work on specific topics from Semester 1, assessed by two assignments), but nevertheless, CAA was still perceived as fundamentally wrong for humanities subjects.
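The arithmetic bears this out. With three alternatives per question, pure guessing succeeds on each question with probability 1/3, so the chance of reaching the 14/20 pass mark in a single run is a binomial tail probability - tiny, as a quick check (a sketch assuming fully independent guesses) shows:

    from math import comb

    def p_pass_by_guessing(n=20, threshold=14, p=1/3):
        """Probability of at least `threshold` correct answers out of `n`
        when every answer is an independent guess with success rate p."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(threshold, n + 1))

    print(p_pass_by_guessing())      # ~0.00088: about one run in 1,100
    print(1 / p_pass_by_guessing())  # expected attempts for a pure guesser

On this model a student relying on luck alone would need over a thousand attempts on average - far more than the five or so students were actually taking, which is itself evidence that something other than guessing was at work.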

Improvements

In the next year of delivery, we used basically the same test, with some improvements to the questions, and raised the pass mark from 14/20 to 15/20. This fairly minor change increased the level of difficulty, raising the average number of attempts needed to pass to seven. Feedback suggested that the test was still perceived as too easy by many students. Feedback on the unit as a whole also suggested quite strongly that students did not expend the same level of effort on this unit as on more conventional ones, with some responses explicitly requesting more compulsion to do work beyond that needed to reach the pass threshold.
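One rough way to read these averages: if each attempt passed independently with some probability p, the number of attempts to a first pass would follow a geometric distribution with mean 1/p. Students plainly improve between attempts, so this is only an approximation, but it suggests how sensitive the attempt count is to the threshold:

    # Hedged reading of the reported averages, assuming (unrealistically)
    # that successive attempts are independent trials with pass rate p.
    for threshold, avg_attempts in [("14/20", 5), ("15/20", 7)]:
        implied_pass_rate = 1 / avg_attempts
        print(f"{threshold}: implied per-attempt pass rate ~ {implied_pass_rate:.0%}")

On that reading, the one-mark rise lowered the implied per-attempt pass rate from roughly 20% to roughly 14%.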

To improve the test and to increase student engagement and learning, we decided to use the newly available QuestionMark software, with support from Sheffield Hallam's Learning and Teaching Institute (the LTI). We undertook two kinds of reform. First:

And secondly:

The Web site contained information about the unit, including an explanation of its purpose, rationale and teaching methods, and information about each lecture, including its aims, intended learning outcomes, a summary of its main points, and a list of core (i.e. compulsory) reading and additional references. There was also, for each lecture, a link to a set of 15 self-assessment questions. Students could attempt these after each lecture, once they had done the weekly core reading, to test their own understanding. These questions were not compulsory or assessed in any way - though they were a small sample of questions drawn from the final assessment library. Each self-test gave a percentage score plus feedback on each answer option selected, right and wrong. This was designed to motivate use of the Web site.
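The shape of such a self-test is easy to picture: each alternative carries its own feedback message, returned whether the choice was right or wrong, and the run ends with a percentage score. A minimal sketch follows - the field names, structure and wording are invented for illustration, not QuestionMark's own format:

    # One question record: every alternative has feedback attached.
    question = {
        "prompt": "An example prompt drawn from a lecture topic",
        "options": [
            ("first alternative",  "Why this choice is wrong."),
            ("second alternative", "Why this choice is right."),
            ("third alternative",  "Why this choice is wrong."),
        ],
        "answer": 1,  # index of the correct alternative
    }

    def score_self_test(questions, choices):
        """Return a percentage score and the feedback for each choice made."""
        feedback = [q["options"][c][1] for q, c in zip(questions, choices)]
        correct = sum(c == q["answer"] for q, c in zip(questions, choices))
        return 100 * correct / len(questions), feedback

    score, messages = score_self_test([question], [0])
    print(score)      # 0.0 - but the student still sees the feedback:
    print(messages)   # ['Why this choice is wrong.']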

[Modernity & Modernisation web page]

Fig. 2. An example page from the Modernity and Modernisation Web site (http://www.lti.shu.ac.uk/cs/mm/)

Student reactions to the Web site were quite sharply divided between technophobes and technophiles. Feedback from questionnaires and from monitoring of Web site use suggested that once students had used the site, they tended to return to it quite extensively. Students who had done the self-assessment questions mainly thought them very useful, and there was also positive feedback on the aims, outcomes and summaries of each lecture.

However, worryingly, a number of responses suggested that the Web was a site of fear and anxiety, while others had used it once but thereafter disregarded it, because it was 'inconvenient' to use and/or not a compulsory part of the unit. These responses may arise from the conditions under which most students have to use university computer facilities: they have to book a computer and plan to spend more time on campus. It does not seem to have been the case that there was any real shortage of computers; rather, some students did not see the Web site as integral to the unit, despite the incentive of being offered help with the final assessment, and therefore felt they could economise on it. It was noticeable that a number of mature students who had access to the Web at home rated the site as particularly useful.

We have already decided that the formative self-assessment questions should be fully integrated into the unit in future, by making it compulsory for every student to attempt each week's set of questions and score at least 50%.

Overall, I think there are seven main things which we learnt from teaching the unit. Some of these may be generalised to many uses of CAA and CBL; others are perhaps more specific to the humanities. In the first group we learnt:

In the second group of outcomes we learnt that:

Above all, it seems clear that for humanities students and staff to have faith in CAA and CBL, they must have an explicit sense of the assessment rationale underpinning them, including the sense that these methods are not a replacement for essays and exams, but something fundamentally different. Cultural assumptions, however, run deep, and not least in those disciplines which study culture.




Computers & Texts 16/17 (1998). Not to be republished in any form without the author's permission.

HTML Author: Michael Fraser
Document Created: 25 April 1998
Document Modified: 3 April 1999

The URL of this document is http://info.ox.ac.uk/ctitext/publish/comtxt/ct16-17/hopkins.html