Dissertation | Pilot < Analysis > Conclusions

Auditing catalogue quality by random sampling

6. Analysis of results

6.1 The catalogue-to-collection test

The overall error rate of 34.4% from the first (convenience) sample was higher than expected, given the presumed tendency of records in that sample to have better cataloguing, but the second (systematic) sample confirmed this figure. The incidence of major errors, somewhere between 7.6% and 11.4%, was less alarming. That as many as 5.9% of records had errors in more than one field (ignoring those with distinct errors in just one field) suggests that there is some tendency for errors to cluster, so when an error is corrected it is worth verifying the whole record.
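
The range of 7.6% to 11.4% quoted for major errors has the look of a 95% confidence interval for a binomial proportion. The sketch below uses a normal approximation with entirely hypothetical counts (86 major-error records out of 900, chosen only to land near the quoted range); the pilot's actual counts are not restated here.

```python
import math

def binomial_ci(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for an error rate."""
    p = errors / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical counts, not taken from the study:
low, high = binomial_ci(86, 900)
print(f"{low:.1%} to {high:.1%}")  # → 7.6% to 11.5%
```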

The field with the most errors was the imprint (21.5%), followed by series (16.3%), edition (10.0%) and title (8.7%). Errors were found in all three parts of the imprint, but since it is not an access point it does not seem worthwhile to record them separately. Many of the incorrect dates were corrupted into the form 's1980'. The audit procedure made allowance for legitimately substandard cataloguing, such as the short forms of authors in the minimal records; it could be argued that known sources of error should be discounted in the same way. In both cases the catalogue user is unlikely to be aware of the situation and will simply note a poor-quality record.

A possible solution is to present two sets of figures, one with such errors included and one with them excluded, although this increases the labour of collecting and compiling the figures. Ignoring the 43 corrupted dates reduces the overall error figure to 19.4%, and the imprint figure to 6.6%. To be consistent, the single instance of a title left incomplete by a deleted 248 field should also be ignored, although this makes only a minor difference to the results.
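
Producing the two sets of figures is straightforward once each error record is tagged with whether its only error comes from a known source. A minimal sketch, using hypothetical counts chosen to resemble the pilot's figures (the actual sample size is not restated here):

```python
# Each audited record: (has_error, error_from_known_source_only).
# Hypothetical data: 43 records erroneous only through corrupted dates,
# 56 with other errors, 188 clean, in a sample of 287.
records = [(True, True)] * 43 + [(True, False)] * 56 + [(False, False)] * 188

n = len(records)
all_errors = sum(1 for err, _ in records if err)
other_errors = sum(1 for err, known in records if err and not known)

print(f"errors, known sources included: {all_errors / n:.1%}")   # 34.5%
print(f"errors, known sources excluded: {other_errors / n:.1%}")  # 19.5%
```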

This illustrates one advantage of the pilot: full records had been printed and kept for further reference. If only cumulative figures had been recorded, it would have been appreciably harder to extract the corrupted dates afterwards. A decision on whether to count such errors would need to be made during the audit and communicated to all auditors.

The high level of incorrect series statements is worrying. It was noted in section 4.3.9 that although not all series are access points, it is safe to assume that most are traced. Bath's catalogue confirms this, with 72 series entered under 440 or 840 and just 2 with only a 490 entry. There were three records with both 490 and 840 entries, which is the practice when the series is transcribed (490) in a form not suitable as a heading (840). These 490 fields contained only the series numbering, presumably due to corruption, and since this is the field displayed by the OPAC, it counted as an error.

In the second sample there were no errors in the edition field, compared to 10% in the first sample. One explanation is that the first sample consisted only of books which had been borrowed; there is a plausible correlation between books which run to more than one edition and books which are more likely to be borrowed.

Ballard & Lifshin (1992) and Randall (1999) both found a majority of errors in title fields and notes fields. These studies involved browsing keyword indexes for errors, so their results may not be replicated across a catalogue, but the absolute number of errors in notes seems high enough that it would be worth expanding the checklist to include notes fields.

The classification of errors as 'incomplete or incorrect' or 'missing' makes the audit slower while not directly benefiting analysis. Incomplete information is usually better than missing information, but missing information is usually preferable to incorrect information. The only fields in which there were large discrepancies between the two error types were title (which cannot be missing), statement of responsibility (seldom missing) and imprint (in which a missing place had to be recorded as 'incomplete'). It is not obvious that useful conclusions can be drawn from this information.

6.2 The collection-to-catalogue test

It quickly became clear when performing the collection-to-catalogue test that very few items, if any, would not be represented in the catalogue. With such a low incidence of problems, a far smaller sample is sufficient: if 1 in 1,000 items is estimated to be uncatalogued, then a sample of 30 will give the true proportion to within 1.1% at a 95% confidence level. Indeed, it is perhaps unnecessary to check for uncatalogued items at all unless there is some other motivation for an investigation, such as problems at the point of issue or following a stock check.
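
The sample-size claim above can be checked directly. Under a normal approximation (strained at so small a proportion, but it reproduces the arithmetic quoted), the margin of error at 95% confidence is z·√(p(1−p)/n):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

# 1 in 1,000 items uncatalogued, sample of 30 items:
print(f"{margin_of_error(0.001, 30):.1%}")  # → 1.1%
```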

One justification for the collection-to-catalogue test is that it is a much more suitable place than the catalogue-to-collection test to verify the classmark given in the record. As noted in section 4.3.10, there is a tendency for items to be assumed missing when in fact the classmark shown in the record is incorrect. Checking that the shelfmark written on an item in hand matches the classmark in its catalogue record is more sensible than the reverse.

It was recognised that the test for the presence of duplicate records was extremely cursory, but it seemed a very simple addition to the test. The failure to find even near-duplicates suggests that the more sophisticated approaches taken in deduplication studies are essential, and that the duplicate test should be abandoned.

6.3 How representative is Bath?

There are few reasons to assume that the library at the University of Bath is not representative of medium-sized UK academic libraries; it holds material in a comprehensive range of subjects and has only small non-book collections. The varying standards of records in Bath's database may not be exactly replicated elsewhere, but most large libraries had to confront online conversion of card catalogues and this process inevitably affected catalogue quality, whether items were catalogued afresh or cards were keyed en masse. In some cases quality may actually have improved as a result of checks at the time of input and increased consistency.

Non-academic libraries may have substantially different collections: public libraries will invariably hold much more fiction, and workplace libraries will often have more grey literature. In principle, the technique can be applied with the same success regardless of the collection; in practice, it may give results which, while accurate, are not as useful as they could be: failing to note inadequate authority work, for example, or giving undue emphasis to accuracy in fields which are seldom searched. The feasibility of the technique when checking the union catalogue of a public library authority with many branch libraries, or of any multi-site library, also needs to be investigated.


Owen Massey McKnight <owen.mcknight@worc.ox.ac.uk>