The pilot indicated few reasons to reject the use of sampling in auditing the accuracy of a catalogue. It did demonstrate the problems encountered when a random sample cannot be generated automatically, but the systematic sample taken from the shelves was a successful substitute that most libraries should be able to implement.
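As an illustration of the shelf-based alternative, the following sketch in Python (a minimal example, assuming only a list of items in shelf order and an arbitrary sample size; the identifiers are invented) draws a systematic sample by taking every k-th item from a random starting point.

    import random

    def systematic_sample(shelf_list, sample_size):
        """Draw a systematic sample: every k-th item from a random start.

        shelf_list  -- items in shelf order (hypothetical identifiers)
        sample_size -- number of items wanted in the sample
        """
        interval = max(1, len(shelf_list) // sample_size)   # sampling interval k
        start = random.randrange(interval)                  # random starting point
        return shelf_list[start::interval][:sample_size]

    # For example, 400 items from a collection of 60,000 shelf positions.
    collection = ["item-%05d" % n for n in range(60000)]
    audit_sample = systematic_sample(collection, 400)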
The large samples needed for accurate estimates do require correspondingly large amounts of time, so shortening the test by reducing the amount of information to be collected is desirable. To this end, it is proposed that the division into incorrect or incomplete information and completely missing fields be scrapped, because most fields were rarely missing, except for corporate headings and series, which were far more often missing than merely incorrect.
The seriousness of an error is to be decided solely by the field in which it occurs, an approach with precedents in Dyson (1984) and Taylor & Simpson (1986). This simplification makes it practical to include a check on notes fields, which several studies found to be prone to errors, and which can be important where keyword searching extends to their contents.
The fields are now classed in three groups. Group 1 contains authority-controlled access points for comparison with an authority file, while Groups 2 and 3 cover the descriptive cataloguing of the item, with indexed fields in Group 2. There is limited authority control for titles in the form of uniform titles, and series headings should strictly be controlled, but for simplicity's sake these are not included in Group 1. This grouping allows a direct interpretation of the results in terms of errors in access points and errors elsewhere which are arguably less significant.
At a rate of just 18 items per person per hour, applying both tests in the audit may be seen as a luxury by technical services departments with few staff to spare or with a large processing backlog. Using sequential analysis (DiCarlo & Maxfield 1988) or accepting a less accurate estimate of the error rate can reduce the sample size and hence the length of the audit procedure. Abandoning the collection-to-catalogue test can, at best, halve the time needed for the audit, and this is recommended.
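To indicate how accepting a less precise estimate shortens the audit, this sketch (using the standard normal approximation for a binomial proportion; the anticipated error rate and margins of error are invented figures) computes the sample size needed for a given margin of error.

    import math

    def sample_size(expected_error_rate, margin_of_error, z=1.96):
        """Approximate sample size for estimating an error rate.

        Normal approximation: n = z^2 * p * (1 - p) / e^2, where p is the
        anticipated error rate and e the acceptable margin of error.
        """
        p = expected_error_rate
        return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

    print(sample_size(0.10, 0.02))   # about 865 records for +/- 2%
    print(sample_size(0.10, 0.04))   # about 217 records for +/- 4%

Under these purely illustrative figures, and at the rate of 18 items per person per hour quoted above, accepting the wider margin of error reduces the work from roughly 48 to roughly 12 person-hours.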
Records are checked for correct transcription rather than correct spelling, so the catalogue-to-collection test does not depend on the spelling ability of the staff performing it, at least in theory. It is arguable that comparing items and their records is trickier when they are not in English, or use unorthodox orthography (often the case in early printed items), but since computer spelling checkers have no advantage in these situations, there is no alternative but to take great care.
The technique depends on accurate circulation records. A bug in the library automation software or human error at the issue desk could lead to an item being recorded as missing when it is on loan or vice versa. Kiger & Wise (1996) recommended preceding the catalogue audit with an audit of the circulation system.
It is not expected that checks on material description, language, genre / category and location / branch ID, which have not been tested, will cause any particular difficulties. The appropriateness of a genre heading is a matter of opinion, but as with the assignment of subject headings, it is possible to check for keyboarding errors in the words, which will prevent access as surely as an inappropriate heading.
The matrix does not provide for the case when information is included in the wrong field, for example when a series migrates to the subtitle. This has been counted not as a single error but as an omission and an incorrect addition. Since this is a rare occurrence, it does not matter greatly as long as it is recorded consistently as either one or two errors. Similarly, it is conceivable that a field could contain information when it should be empty (perhaps a series-like statement that does not strictly merit inclusion) and neither 'incorrect or incomplete' nor 'not applicable' are entirely appropriate, but this problem can be overlooked because of its rarity.
The audit technique is weak on evaluating access points and authority work, as opposed to the separate process of checking descriptive cataloguing. The presence of authority control in the cataloguing software is a valuable form of quality assurance, especially if there is access to a national authority file.
The pilot was unable to confirm the feasibility of comparing headings to an authority file, but in any case authority files cannot hope to include every heading required and experience of authority work is necessary to confirm that headings are validly constructed. Moreover, the wider the variety of sources of records, the greater the likelihood of conflicting headings in the catalogue. Nonetheless, despite the diversity of authority work, it is possible to gain some idea from the literature of the most common errors, for which exploratory searches can be conducted.
Standards in name headings have developed from requiring the fullest form of a name, even if never used in that form by its bearer, to accepting shorter, more recognisable forms, primarily those used on title-pages. The addition of date of death to headings established while the author was alive is a contentious issue; death dates are rarely essential for unique identification, but many libraries choose to add them for information, introducing inconsistencies when records are derived from different sources.
The most common variations in personal names are fullness of forenames and the addition of initials (Weintraub 1991); other changes follow ennoblement or marriage or the adoption of a pseudonym (Harrison, Hendrix & Lipniacka 1996). These observations suggest that qualifiers, such as full names where initials are habitually used and dates of birth and death, and additions to names such as titles of nobility, should be subject to particular care in checking, along with their corresponding subfield indicators. Non-Western names are also susceptible to errors in the handling of titles and entry elements. As always, allowances should be made for variations for local headings (especially geographic names) and deliberate retention of outdated forms.
Comparing consecutive headings in a name index is a simple way to ensure that consistent headings have been assigned, although it is more successful for personal authors than for corporate bodies, which may have complicated hierarchical structures. A study of cross-references (Watson & Taylor 1987) indicated that most variations in corporate names were of fullness (especially the expansion of abbreviations), indication of subordination and inversion, which may not be obvious when browsing.
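As a sketch of how such a comparison might be automated (assuming the headings are available as a sorted list and that a high similarity ratio between neighbours is a reasonable signal of a possible variant; the names are invented), adjacent pairs that are nearly but not exactly identical could be flagged for a cataloguer's review.

    from difflib import SequenceMatcher

    def flag_adjacent_variants(headings, threshold=0.85):
        """Report consecutive headings in a sorted index that are suspiciously similar."""
        suspects = []
        for a, b in zip(headings, headings[1:]):
            if a != b and SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
                suspects.append((a, b))
        return suspects

    index = sorted([
        "Smith, John, 1950-",
        "Smith, John, 1950-2003",
        "Smith, John A.",
    ])
    for pair in flag_adjacent_variants(index):
        print(pair)   # candidate inconsistencies, e.g. the same author with and without a death date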
The importance to catalogue users of subject access, whether by subject headings or keyword searching, merits the investigation of typical errors. Cataloguers using LCSH often construct subject headings not listed explicitly by applying free-floating subdivisions and headings taken from name authority files. Pappas (1996) and Chan & Vizine-Goetz (1997) concurred that a majority of errors in subject headings were keyboarding errors and that ill-formed headings were far more frequent than the choice of non-preferred terms. Romero (1995) found that incorrect main headings occurred more often than correct headings with incorrect subdivisions, but Chan & Vizine-Goetz found the opposite to be the case with obsolete headings.
Comparison with an authority file seems to be the most successful approach, as not all keyboarding errors can be detected unless the correct forms are known. The prevalence of errors in construction would complicate a search for incorrect headings because many errors would be missed by a simple comparison of main entry terms with the official subject headings. Fortunately, extensive cross-references in authority files should make the task easier, but this is an area which might be better separated from testing bibliographic accuracy and certainly needs piloting.
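A minimal sketch of such a comparison, assuming the authority file can be reduced to a set of authorised headings plus a mapping from see-reference forms to their authorised equivalents (the headings below are simplified examples, not an extract from any real file), might look like this.

    authorised = {"Railroads", "Children's literature"}
    see_refs = {"Railways": "Railroads", "Juvenile literature": "Children's literature"}

    def check_heading(heading):
        """Classify a heading against the assumed authority data."""
        if heading in authorised:
            return "authorised"
        if heading in see_refs:
            return "non-preferred form of '%s'" % see_refs[heading]
        return "not found: possible keyboarding error or ill-formed heading"

    for h in ["Railroads", "Railways", "Railrods"]:
        print(h, "->", check_heading(h))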
The checklist was designed with books in mind, although the fields were chosen to correspond to ISBD areas, so it should be relatively straightforward to apply it to non-book materials and electronic resources. Antiquarian cataloguing may require more detailed breakdowns of errors and special attention to physical description, but the principle of the technique is sound.
The audit is unsuitable for serials, however. Many of the fields in the checklist are inapplicable to serials and there is nowhere to note discrepancies in holdings records. Sampling serial titles is theoretically valid, but it is difficult to know whether all holdings must be checked or whether one volume can be taken as representative. A separate tool should be developed to audit serials cataloguing.
A fundamental problem with the technique is that it looks at items in isolation and so cannot check consistency between records, for example whether series statements are consistently recorded. Variations in the quality of records will be confusing for readers, who cannot assume that any particular record is accurate or complete. There are many other aspects of quality which cannot easily be checked mechanically. A straightforward way to check whether series (or anything else) are being assigned consistently is to take a small random sample and assign a trained cataloguer to check each item, noting quality issues rather than simply counting errors on the checklist.
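One cross-record check that does lend itself to automation, as a complement to such a review, is sketched below (assuming each record exposes a series statement field; the records and the normalisation rule are invented): series statements are grouped under a crude normalised key and any series recorded in more than one form is reported.

    import re
    from collections import defaultdict

    def series_variants(records):
        """Report series recorded in more than one form across the sampled records."""
        forms = defaultdict(set)
        for record in records:
            series = record.get("series")
            if series:
                key = re.sub(r"[^a-z0-9 ]", "", series.lower()).strip()   # crude normalisation
                forms[key].add(series)
        return {key: sorted(variants) for key, variants in forms.items() if len(variants) > 1}

    records = [
        {"series": "Oxford medical publications"},
        {"series": "Oxford Medical Publications"},
        {"series": None},
    ]
    print(series_variants(records))   # series whose recorded forms disagree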
The currency of cataloguing is another quality issue not included in the audit, even though a sizeable backlog could be said to reduce recall. Currency could be measured by the percentage of the collection yet to be processed or the median length of time taken to catalogue an item.
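Both measures could be computed along these lines (a sketch only, assuming dates of receipt and of cataloguing are recorded for processed items and that the sizes of the backlog and collection are known; the figures are invented).

    from datetime import date
    from statistics import median

    def currency_measures(received_and_catalogued, backlog_count, collection_size):
        """Percentage of the collection unprocessed and median days from receipt to cataloguing."""
        percent_unprocessed = 100.0 * backlog_count / collection_size
        median_days = median((done - received).days for received, done in received_and_catalogued)
        return percent_unprocessed, median_days

    pairs = [(date(1998, 1, 5), date(1998, 1, 19)), (date(1998, 2, 2), date(1998, 3, 2))]
    print(currency_measures(pairs, backlog_count=750, collection_size=60000))   # (1.25, 21.0)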
Instead of a crude division between errors which prevent access and errors which do not, a refinement of the technique would be to weight errors in different fields according to their seriousness. Thus the proportion of errors in title and headings fields could be tripled, those in edition and imprint fields halved and the average of these figures taken to give a single score encapsulating a great deal of information. Unfortunately there is no obvious way to decide on appropriate weights and they would make the process dauntingly complicated.
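For illustration only, the weighting described above might be expressed as follows; the weights are the arbitrary figures in the text and the per-field error proportions are invented.

    def weighted_error_score(error_rates, weights):
        """Average of per-field error proportions after applying field weights."""
        weighted = [error_rates[field] * weights.get(field, 1.0) for field in error_rates]
        return sum(weighted) / len(weighted)

    rates = {"title": 0.02, "headings": 0.03, "edition": 0.01, "imprint": 0.02, "notes": 0.05}
    weights = {"title": 3.0, "headings": 3.0, "edition": 0.5, "imprint": 0.5}   # tripled / halved
    print(round(weighted_error_score(rates, weights), 3))   # a single score: 0.043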
The audit's diagnostic role has been downplayed in favour of simply recording the overall incidence of errors. It is not just the quality of the catalogue that is of interest but also the means of improving it, and this will often require knowledge of the areas of the collection subject to particular cataloguing problems. Existing suspicion about poor quality can be confirmed or dispelled by a separate sample of the relevant subpopulations, or by the methods described in Chapter 2, but a random sample of the whole collection is the only way to detect unsuspected or unpredictable errors.
There are conflicting pressures for detail and simplicity, for standardisation and customisation. Unfortunately, any deviation from a standard procedure makes it impossible to compare the results of audits performed at different times or in different libraries. It may be that libraries have sufficiently different collections that no two will need precisely the same tool; the question is whether the gain of being able to compare results justifies a method which suits no library exactly. However, there is no reason why a library cannot collect detailed data for its own purposes before condensing it for external or longitudinal comparisons.