
later stage do not result in unreasonable additional cost (such as that required to reprocess source data) and when later corrections do not result in the loss of information produced at intermediate steps. The primary considerations in adopting this technique should be costs and savings and the capability of the verifying operator or operation.

2.24 Stratification.--Frequently, in processing data, some subset of the items being processed demands differential treatment. Examples of such cases include (a) documents or punch cards on which it is necessary that there be no error in identification information, but on which a certain amount of error in the data fields can be tolerated, and (b) a small identifiable proportion of the documents, to be included in a summary tabulation, accounting for a major category of the tabulated data.

In the first example, it is usually possible to devise some system for giving greater attention to the identification information while giving routine treatment to the remainder of the card or document. In the second example, it is possible to assign the best qualified personnel to those segments of the work which require expert attention and to treat the remainder in a routine manner at a savings.

2.25 Removal of substandard personnel.--The best method of insuring good output is to insure good input. One method of doing this is to remove persons doing substandard work from the production operation. This system can be used both with personnel who are in the training process and with those who are experienced but whose quality deteriorates. By this simple device, it is possible to improve the quality of output without increasing costs. The major part of the cost function is the cost of training replacement personnel.


A product is intended to be used. With any product, there is always the problem of the effect of errors upon the user or consumer and, in turn, on the producer. Where the product is verified on a sample basis, as a minimum there is the ability to predict the number of defective items the consumer will receive. Where all items of production have been verified, the expectation is that the number of defective items remaining after verification will be very small. But if the product is not verified, there is no way to know anything about the quality until it is used. Neither the consumer nor the producer is able to plan intelligently for the occurrence of defective items.

It is usually relatively simple to identify a defective item at the time it is used. For example, there is the simple mating of nuts and bolts in an industrial operation. If the two fit, all is well; if they do not, either one or the other is defective. Measurements will tell which one. It is a simple matter to set aside the defective item, use another in its place, and return the defective one to the producer for replacement. A mechanical edit for individual punched cards gives very much the same picture for a clerical operation. But the question is: "How many times will this happen; what will it cost?" As the operation becomes more complicated, the cost multiplies rapidly.


... no matter how the process deteriorates, the product will be good enough for the intended use.

(3) There are other controls on the operation which make verification unnecessary.

(4) The producer chooses to forego his determination of the quality, letting someone else (the consumer) do it for him.

(5) The consumer is willing to accept the output regardless of its quality; that is, the quality of output does not affect him.

In a census, it is usually determined that some verification is necessary. It is too costly an undertaking to forego all check on the quality of the data, it is not possible to assure a low error rate in advance of the undertaking, and it is not practical to reprocess and correct the data after the consumer detects errors.

3.2 Complete verification

In complete verification, the verifier repeats an operation for all the items in a unit of work and corrects all the errors that he discovers. Complete verification attempts to assure "perfect" quality of output. In actual practice, it never does so. In fact, if the verifier and corrector both tend to make errors, complete verification of a clerical operation may result in more errors in the final product than will sample verification.
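As an illustration of this point, the residual error rate can be written down under a simple model in which the verifier misses some errors and the correction step sometimes spoils items that were correct. The sketch below, and all of the rates in it, are illustrative assumptions introduced here; they are not figures from this handbook.

```python
def residual_error_rate(p_error, p_catch, p_fix, p_spoil, fraction_verified):
    """Expected error rate after verification under a simple model.

    p_error  -- incoming error rate of the clerical operation
    p_catch  -- chance the verifier notices an existing error
    p_fix    -- chance the corrector actually fixes a noticed error
    p_spoil  -- chance a correct item is spoiled during verification or correction
    fraction_verified -- 1.0 for complete verification, less than 1.0 for a sample
    """
    # Error rate among the items that were verified: missed or unfixed original
    # errors, plus correct items spoiled during "correction".
    verified = p_error * (1 - p_catch * p_fix) + (1 - p_error) * p_spoil
    # Items outside the verified portion keep the incoming error rate.
    return fraction_verified * verified + (1 - fraction_verified) * p_error

# Invented rates for a careless verifier-corrector pair: on the average they remove
# fewer errors than they introduce, so checking every item leaves more errors in the
# final product than checking only a sample of it.
print(residual_error_rate(0.05, 0.80, 0.90, 0.04, 1.00))   # complete: about 5.2 per 100
print(residual_error_rate(0.05, 0.80, 0.90, 0.04, 0.25))   # 25% sample: about 5.05 per 100
```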

Complete verification, in terms of the number of items verified, is the most expensive type of verification. It can provide the most information about an operation and the quality of output, and it controls the errors at a low level; but the additional information and control are seldom worth the effort.

In spite of cost, there are conditions under which it may be desirable to perform complete verification rather than to do sample verification. The following are two such conditions:

(1) When the error rate before verification (the incoming error rate) greatly exceeds the required limit; for example, an editing-coding operation with an incoming error rate of nine percent from which an average error rate after verification of not more than one percent is required. In such a case, the total cost of verifying a sample, and then completely verifying the work because the sample shows that it exceeds the desired level, can very well exceed the cost of doing 100% verification in the first place (see the sketch following these two conditions).

(2) When the total production is so small that the savings in cost from sample verification would not be worthwhile.
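Condition (1) can be illustrated with a little binomial arithmetic. The particular plan below (a sample of 50 items, accepted only if it contains at most one defective) is invented for the illustration; the point is simply that, with a nine percent incoming error rate, almost no lot passes the sample check, so nearly every lot must be completely verified in addition to having been sampled.

```python
from math import comb

def prob_accept(n, c, p):
    """Chance that a random sample of n items contains at most c defectives
    when the incoming error rate is p (a simple binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Incoming editing-coding error rate of nine percent, as in the example above.
print(f"{prob_accept(50, 1, 0.09):.1%} of lots pass the sample check")   # about 5%
```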

3.3 Sample verification

Sample verification is a specific type of partial verification. The selection of the items to be verified, the rules for making decisions, and the action taken after decision are of such a nature that (a) the results can be predicted and (b) estimates of quality can be made with measurable reliability. Sample verification requires that the items to be verified be drawn in a random manner. The least that can be expected from a sample verification plan is that it will yield a measure of the quality of output.
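The two requirements, random selection and a measurable estimate of quality, might be sketched as follows. The sketch is an illustration introduced here, not a procedure from this handbook; the sample size and the use of a simple binomial standard error are assumptions.

```python
import math
import random

def estimate_error_rate(work_items, sample_size, has_error, seed=1):
    """Verify a simple random sample and return the estimated error rate together
    with a binomial standard error -- the 'measurable reliability' of the estimate."""
    rng = random.Random(seed)
    sample = rng.sample(work_items, sample_size)     # items must be drawn at random
    defects = sum(1 for item in sample if has_error(item))
    p_hat = defects / sample_size
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, se

# e.g. estimate_error_rate(list_of_punched_cards, 200, verifier_found_error)
```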

3.31 Time utility.--While actual money cost is the usual point of reference, another aspect of cost is time utility. Time utility is important in any operation where a printed report is the final product. The earlier the user receives the statistics, the more useful the data are to him. The value of a statistics-producing organization increases with the speed with which accurate reports are published. Because not all items are verified, sample verification makes possible earlier publication with a smaller expenditure of money.

3.32 Cost compared with complete verification.--The cost of complete verification is equal to the cost of locating and verifying each item produced or processed. Sample verification, when properly used, is generally less expensive than complete verification.

The cost of sample verification includes at least two components, and usually three. Of necessity, there is the cost of selecting the items for verification, and there is the cost of verifying them. Additionally, since most sample verification plans require complete verification of rejected work, this component must also be included. In its simplest form, the cost of operating a sample verification plan is the cost of selecting the sample items, plus the cost of verifying them, plus the cost of the complete verification that replaces rejected work. On a cost-per-unit-verified basis, sample verification is more expensive than complete verification; the saving comes from the much smaller number of items verified.
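Written out, the comparison looks roughly like the following sketch. The cost figures and the rejection probability are hypothetical values introduced here for illustration; the handbook states only the three components in words.

```python
def cost_complete(n_items, cost_verify):
    """Cost of locating and verifying every item produced."""
    return n_items * cost_verify

def cost_sample(n_items, n_sample, cost_select, cost_verify, p_reject):
    """Expected cost of a simple sample verification plan: select and verify the
    sample, plus complete verification of the work when it is rejected."""
    return n_sample * (cost_select + cost_verify) + p_reject * cost_complete(n_items, cost_verify)

# Hypothetical figures: 1,000 items, a sample of 100, selection costing a fifth as
# much as verification, and one unit of work in ten rejected.
print(cost_complete(1000, 1.0))                    # 1000.0
print(cost_sample(1000, 100, 0.2, 1.0, 0.10))      # 120 + 100 = 220.0
# Per item verified the sample plan is dearer (1.2 against 1.0), but far fewer
# items are verified, so the total cost is much lower.
```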

In a short-term operation in which the number of clerks or operators is relatively large (as in a census), a high proportion of the work may undergo complete verification during the period of training. The cost of verification during the training period is just as much a part of the quality control budget as the sample verification later. A sample verification plan, compared with complete verification, begins to save money at the point where the quality of output is controlled at a desirable level and most of the work is being verified on only a sample basis.

3.4 Spot checking

Spot checking, sometimes called subjective sampling, is much older than statistical quality control as a method of judging and controlling quality. The term covers a multitude of different devices, but generally it involves a subjective selection of items for verification. This may mean a haphazard selection of items, selection from parts of the production that seem suspect, or selection by some other intuitive device. Much depends on the intuition of the selector of the items to be verified. His intuition may be perfect (he somehow can spot all the errors); or it may be entirely erroneous (he can spot none of the errors). Most likely it falls somewhere in between, but nearer to the entirely erroneous end. If his intuition is perfect, there is the advantage of obtaining quality as good as could be obtained under complete verification but at a smaller cost. If entirely erroneous, the cost of spot checking is excessive; the quality is the same as would have resulted from no verification, but the cost is greater. If in between, there is a tendency to reduce the number of errors in the output and perhaps to take some action quality-wise, whether right or wrong.

Spot checking is economical from the point of view that fewer items are verified than under a system of complete verification. There is the possibility also that fewer items are verified than under a more efficient sample verification plan. But there are disadvantages which go with this economy. Verification of the selected items may cause alarm when there is no need for alarm or--equally as bad--it may lead one into a sense of security and calm when in fact there is real danger. Further, it provides no estimate of quality and no regular system for maintaining quality.

3.5 Token verification

Token verification is a special type of partial verification in which a small random sample is used. For the short run, the sample is purposely selected too small for estimating the quality of output or for taking action with a high degree of reliability. Token verification can be useful in certain field and office operations. It depends, in part, on the psychological effect that the presence of some verification has on the clerks or operators. For large operations, it is economical in terms of the results that can be obtained quality-wise. The shortcoming of token verification is that it postpones quality decisions and, in those cases where there is damaging deterioration, the deterioration may be detected too late.

4. MEASUREMENT OF QUALITY

Quality of a census operation is often measured in terms of an error rate (or rates). The rate may be expressed as the percentage of establishments missed in the enumeration, the percentage of incorrect codes that a coder assigned to product entries, or the percentage of incorrect codes punched into a batch of cards.

4.1 Definition of error rate

The error rate may be computed by dividing the number of errors found in verification by an appropriate base, such as establishments processed or cards punched. Thus, a 3% error rate in key punching would mean that 3 cards in 100 punched were found by the verifier to have one or more errors. Since a key punch operator punches many columns on each card, he would not, in fact, make errors on 3% of his key strokes; however, it is not practical to count all the key strokes, and experience has shown that relatively few errors are made by trained key punch operators. An error rate that is shown as a percentage of cards punched is more easily understood, and it effectively measures the performance of key punch operators.
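The computation is a simple count of cards, as in the following sketch; the data are hypothetical, and the only point is that a card with several erroneous columns still counts only once.

```python
def card_error_rate(cards_verified):
    """cards_verified: one entry per card, listing the columns found to be in error.
    A card counts as defective if it has one or more errors, however many columns are wrong."""
    defective = sum(1 for columns_in_error in cards_verified if columns_in_error)
    return 100.0 * defective / len(cards_verified)

# Hypothetical verification results for five cards: two defective cards -> 40 per 100.
print(card_error_rate([[], [12], [], [7, 31, 44], []]))
```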

Similarly, in editing and coding, a clerk may examine many entries and enter several codes for each case that he processes. His error rate, though, can be determined by the number of establishments he incorrectly processes per 100. It is unlikely that any one item on the questionnaire would have an error rate of anything close to 3%, since errors made by editor-coders would probably be scattered over several items.

4.2 Establishment of acceptable levels of performance

Good administration requires that each employee be told the acceptable levels of performance expected of him for the work to which he is assigned. The determination of these standards is a very important responsibility of management. In almost any operation where performance can be measured quite accurately, such as editing-coding, a new employee is likely to ask, "Do you want quantity or quality?" The answer to this question is, "We want as much production as possible provided the quality of your work remains within the acceptable error rate." The performance of an employee can be measured solely on his rate of production provided his error rate is acceptable. If his error rate is not acceptable, his work must be corrected; thus, his performance is not satisfactory. On the other hand, if a clerk has a low error rate and a low productivity rate, he may be working too cautiously; or he may even be doing the work over and over to avoid making errors. He may be advised that his productivity can be improved by placing less emphasis on quality, since he gains no credit for producing work with an error rate better than required.

Experience that is gained in pre-tests of clerical and punching activity should indicate that each employee in an operation must complete satisfactorily a specified amount of work per day by the end of a learning period. If employees are required to meet a high performance standard, experience has shown that most of them will produce at a higher rate than might have been thought possible. Thus, it is probably better to seek greater performance than is generally expected, rather than set a standard that employees may exceed without effort.

Ordinarily a good standard is high enough so that about 5% of the employees have great difficulty in meeting it. It is better to set the standard too high rather than too low. A standard may be lowered by management without difficulty, whereas a standard established at too low a level usually cannot be raised without widespread complaints.

4.3 Significant and non-significant errors

In every operation, errors are made that have varying effects on the quality of the final data. In certain instances, the error is not significant because--

(1) It happens so seldom that no damage is done to the final product.

(2) The erroneous code is within the same tabulation group when the data are published; for example, punching the number of persons engaged as 33 instead of 23, when the size class is 20 to 49.

(3) The error will be discovered and corrected in a later operation without undue effort; for example, failure to enter a total cost of materials when the computer edit program will provide the total by adding the detail items (see the sketch following this list).
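The later correction described in item (3), and the tabulation grouping in item (2), can be sketched as a miniature computer edit. The field names and all size-class boundaries other than the 20-to-49 class mentioned above are invented for the illustration; they are not taken from an actual census edit specification.

```python
# Size classes for "number of persons engaged" -- only the 20-49 class comes from the
# example in the text; the remaining boundaries are invented for illustration.
SIZE_CLASSES = [(0, 19), (20, 49), (50, 99), (100, None)]

def size_class(persons_engaged):
    """Return the tabulation group into which a count of persons engaged falls."""
    for low, high in SIZE_CLASSES:
        if persons_engaged >= low and (high is None or persons_engaged <= high):
            return (low, high)

def edit_record(record):
    """A miniature edit: supply a missing total cost of materials by adding the
    detail items, so the omission need not be treated as a significant error."""
    if record.get("total_materials") is None:
        record["total_materials"] = sum(record.get("material_detail", []))
    return record

# Punching 33 instead of 23 leaves the record in the same 20-49 group,
# so the published size-class tabulation is unaffected.
assert size_class(33) == size_class(23) == (20, 49)

print(edit_record({"material_detail": [120, 45, 30], "total_materials": None}))
```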

There are cases, however, where the error is significant because it will result in serious problems at a later stage of operations, or because it will seriously affect the accuracy of published data. Examples of significant errors would be (a) assigning a missing serial number in such a manner that duplicate serial numbers are created, (b) punching an erroneous ED number and thus creating a serious problem
