the following, which generally qualify instrumental methods as well. They should be simple and expedient, provide fast data turnaround, and be precise, accurate, reproducible, cost-effective, and compatible with other methods (4). Minimally, they must satisfy three basic technical requirements: 1) well-defined sensory characteristics that all technical staff understand, 2) standard measuring scales, and 3) standard and reproducible testing conditions (5).

The most suitable methods for general sensory quality applications provide decision-oriented information and are practical in terms of the kinds of quality differences they detect. However, some of these methods are so sensitive that they allow no room for deviation from the control, regardless of whether the differences would be perceived by consumers. Others are so general and inconclusive that they cannot be used for decision-making purposes. Ideally, the most useful sensory information in these circumstances indicates how different the samples are from a standard and in what regard they differ.

Preference tests or other consumer tests are the least appropriate methods, for several reasons. First, running a true consumer test with product users on every production batch is impractical from both a time and a budget standpoint; holding up production to wait for the results of a consumer test is not realistic. Second, if a small group of company employees is used instead, as is usually the case, they do not represent true consumers. Third, these tests provide no diagnostic information on the nature of the differences; indeed, two very different samples could be equally preferred, each for different reasons. Therefore, from a quality standpoint, preference tests have no application to the task of ensuring product consistency and reproducibility.

In certain instances, depending on the circumstances and the cost of scrapping a large batch, once a significant sensory difference has been established through a sound testing scheme, a preference test may indicate whether the difference is critical to consumer acceptability. However, such practice is not recommended because it does not address product consistency as a critical dimension of quality, and it may result in gradual quality drifts that eventually erode consumer confidence and loyalty.

To put consumer preference tests in their proper place in the QC environment: they are very helpful in the early stages of product development, where they aid in establishing standards and specifications. They should, in fact, be built into the early stages of program development because they help relate sensory responses to actual consumer levels of tolerance for variability. But they should not be used for routine monitoring of product sensory quality and reproducibility, for the reasons already discussed.

The problems inherent in using "typical" assessment to determine product quality have already been covered briefly. This nevertheless seems to be one of the most prevalent methods practiced in QC programs, probably because of its apparent simplicity. As simple as it may appear, it gives the evaluator very little structure and leaves a very large area for interpretive error. What is "atypical" to one assessor may be "typical" to another, and under "typical" circumstances the assessors have little or no formal training in what is "typical" and what is not.
Another type of difference test frequently used in the manufacturing environment is the triangle test, which requires the evaluator to identify the different, or odd, sample out of three choices. Figure 7 depicts the format for a triangle test.

SAMPLE # ___   SAMPLE # ___   SAMPLE # ___
WHICH SAMPLE IS THE ODD SAMPLE? ___

Figure 7. Triangle test format.

There are a number of reasons why this test design is not appropriately applied in a QC context, most of which revolve around its sensitivity. Triangle tests are either too sensitive, picking up very small differences without elaborating on the nature or dimension of those differences, or not sensitive enough, because of the order effects and carryover that sometimes occur with certain materials. The downside of using triangle difference tests in a QC testing environment is that many more batches than necessary may end up being rejected because of small but statistically significant differences. The key question should be the size of the difference from the control or standard; that kind of sensory information provides more meaningful data upon which management can base informed "go" or "no-go" decisions.
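To make the sensitivity concern concrete, here is a minimal sketch in Python of how a triangle test result is conventionally judged. Under the null hypothesis of no perceptible difference, each assessor picks the odd sample correctly with probability 1/3, so a one-sided exact binomial test is applied to the number of correct identifications. The panel size and result below are hypothetical; note that even a significant outcome says nothing about how large the difference is or in what attribute it lies.

from math import comb

def triangle_test_p_value(correct: int, panelists: int, p_chance: float = 1 / 3) -> float:
    """One-sided exact binomial p-value: probability of observing
    `correct` or more correct odd-sample identifications out of
    `panelists` trials if assessors are only guessing."""
    return sum(
        comb(panelists, k) * p_chance**k * (1 - p_chance) ** (panelists - k)
        for k in range(correct, panelists + 1)
    )

# Hypothetical panel: 13 of 24 assessors pick the odd sample correctly.
p = triangle_test_p_value(correct=13, panelists=24)
print(f"p-value: {p:.3f}")  # ~0.028: significant at the 0.05 level, yet the
                            # test reveals nothing about the size of the difference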
One of the best applications of the sensory method known as descriptive analysis is in quality control. However, implementing this method requires considerable budget and manpower commitments because of the resources involved in training a panel. In lieu of descriptive analysis, difference testing against a standard provides the most actionable information.

The degree-of-difference-from-control method offers several practical advantages in a plant QC environment. It provides actionable information on the size of the difference from the standard, it is relatively easy to teach, and its simple structure makes the task easier for the evaluator. The format for a typical degree-of-difference test is displayed in Figure 8.

PLEASE EVALUATE THE REFERENCE STANDARD FIRST.
TEST SAMPLE # ___
HOW DIFFERENT IS TEST SAMPLE FROM THE REFERENCE?
5 = SAMPLE IS NOT DIFFERENT/MATCHES STANDARD
4 = SAMPLE IS VERY CLOSE TO STANDARD
3 = SAMPLE IS MODERATELY DIFFERENT FROM STANDARD
2 = SAMPLE IS CONSIDERABLY DIFFERENT FROM STANDARD
1 = SAMPLE IS VERY DIFFERENT FROM STANDARD

Figure 8. Degree-of-difference format.

Establishing the specification range based on degree-of-difference test results is straightforward.
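As an illustration of how such a specification might operate in practice, the Python sketch below averages a panel's ratings on the Figure 8 scale and compares the mean against a release cutoff. The cutoff value, function name, and ratings are hypothetical; in practice, the limit would be anchored to the consumer tolerance levels established during product development, as discussed earlier.

from statistics import mean

# Hypothetical specification limit on the 5-point scale of Figure 8
# (5 = matches standard, 1 = very different from standard).
PASS_CUTOFF = 4.0

def batch_disposition(ratings: list[int]) -> str:
    """Turn panel ratings into a go/no-go recommendation."""
    panel_mean = mean(ratings)
    decision = "RELEASE" if panel_mean >= PASS_CUTOFF else "HOLD FOR REVIEW"
    return f"panel mean = {panel_mean:.2f} -> {decision}"

print(batch_disposition([5, 4, 4, 5, 3, 4]))  # panel mean = 4.17 -> RELEASE
print(batch_disposition([3, 4, 2, 3, 4, 3]))  # panel mean = 3.17 -> HOLD FOR REVIEW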