Theory and Design for Mechanical Measurements


This is the fifth edition of Theory and Design for Mechanical Measurements by Richard S. Figliola and Donald E. Beasley. The text provides measurement system modeling, test plan design, and uncertainty analysis.




He has served previously as department chair. He received the Class of Award for Excellence, Clemson University's highest faculty award.

Figliola has published extensively in the area of fluid and thermal transport, and many of his publications discuss novel measurement methods. He is senior author of an engineering measurements textbook that is now in its 5th edition. He holds 5 patents and has engineered several successful commercial products.

He has a background in aerospace and heat transfer, which still occupy part of his research as they relate to air vehicle systems integration and uncertainty analysis. His research in biofluid mechanics involves experimental patient-specific models to better understand the hemodynamics associated with right heart congenital defects and single ventricle physiology. He joined the faculty in Mechanical Engineering at Clemson University the same year.

Beasley's research interests are in the areas of fluid mechanics and heat transfer, including flow and heat transfer in rod bundles and multiphase flows, as well as the application of nonlinear dynamics and chaos in the thermal fluid sciences. He has authored many refereed conference and journal papers, and is co-author of a leading textbook in instrumentation and measurements: Theory and Design for Mechanical Measurements, 6th Edition.

Engine load on a test stand is well controlled. On track, the driver does not execute each lap exactly the same and hence varies the load, such as through differences in drive path ('line'); this principally affects aerodynamic loads and tire rolling resistance.

Incidentally, all of these are coupled effects, in that a change in one affects the values of the others. The ram-air effect of a moving car can be simulated but is difficult to reproduce exactly. Each engine is an individual: even slight differences affect handling and therefore how a driver drives the car, thus changing the engine load. Four lathes and 12 machinists are available to produce batches of machined shafts.

The machinists are randomly assigned to the lathes. Data from each lathe should be indicative of the precision associated with that lathe, and the total ensemble of data indicative of batch precision. However, this test matrix neglects the effects of shift and day of the week.

One method, which treats machinist and lathe as extraneous variables and reduces test size, selects 4 machinists at random. Suppose more than one shaft size is produced at the plant. A first-order curve fit to this data, for example using a least-squares regression analysis, will provide the fit y_L(x).

The linearity error is simply the difference between the measured value of y at any value of x and the value y_L predicted by the fit at that x. A manufacturer may wish to keep the linearity error below some target value and, hence, may limit the recommended operating range of the system for this purpose. In your experience, you may notice that some systems can be operated outside their specified range, but be aware that their elemental errors may then exceed the manufacturer's stated values.
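As a numerical sketch of this calculation, the following assumes a small set of hypothetical calibration data, computes the first-order least-squares fit y_L(x), and evaluates the linearity error at each point. The data values and the percent-of-full-scale reporting convention are illustrative assumptions, not values from the text.

```python
# Sketch: linearity error from a first-order least-squares fit.
# The calibration data below are hypothetical (x = input, y = indicated output).
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.02, 1.05, 1.98, 3.10, 3.95, 5.01]

def lsq_fit(x, y):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return slope, ybar - slope * xbar

slope, intercept = lsq_fit(x, y)
y_L = [slope * xi + intercept for xi in x]        # fitted values y_L(x)

# Linearity error: measured y minus the value predicted by the fit at each x
e_L = [yi - yLi for yi, yLi in zip(y, y_L)]
max_e_L = max(abs(e) for e in e_L)

# Often reported as a percentage of full-scale output (an assumed convention)
pct_fso = 100.0 * max_e_L / (max(y) - min(y))
print(f"max linearity error = {max_e_L:.4f} ({pct_fso:.2f}% FSO)")
```

A manufacturer comparing max_e_L against a target value would restrict the recommended range to the x interval over which the target is met.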

Hysteresis error: A sequential static calibration over a specified range will provide the input-output behavior between y and x during upscale-only and downscale-only operations. This will tend to maximize any hysteresis in the system. The hysteresis error is the difference between the upscale value and the downscale value of y at any given x.
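A minimal sketch of extracting the hysteresis error from upscale-only and downscale-only readings; all the readings below are hypothetical.

```python
# Sketch: hysteresis error from a sequential (upscale then downscale) calibration.
x      = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y_up   = [0.00, 0.96, 1.94, 2.93, 3.95, 5.00]   # upscale-only pass
y_down = [0.10, 1.08, 2.05, 3.04, 4.02, 5.00]   # downscale-only pass

# Hysteresis error at each x: downscale reading minus upscale reading
e_h = [d - u for u, d in zip(y_up, y_down)]
max_e_h = max(abs(e) for e in e_h)

# Quoted here as a percentage of full-scale output (an assumed convention)
fso = max(y_up + y_down) - min(y_up + y_down)
pct_fso = 100.0 * max_e_h / fso
print(f"max hysteresis error = {max_e_h:.3f} ({pct_fso:.1f}% FSO)")
```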

Test matrix to evaluate performance. SOLUTION: Tire performance can mean different things, but for passenger tires it usually refers to braking and lateral load adhesion during wet and dry operations. For a given series of performance tests, performance will depend on tire and car (a tire will perform differently on different makes of cars).

For the same make, subtle differences in production models can affect test results, so we treat the car as an individual and extraneous variable. We could select four cars at random (1, 2, 3, 4) to test four tire brands (A, B, C, D), with each car testing each brand (e.g., car 1: A, B, C, D). This provides a data pool for evaluating tire performance for a make of car.

Note that we ignore the variable of the test driver, but this method will incorporate driver variation by testing four cars. Other strategies could be created. Expected calibration curve. SOLUTION: Part of a test matrix is to specify the range of the independent variable and to anticipate the range resulting in the dependent variable. In this case, the pressure drop will be measured, so it is the dependent variable during a static calibration.

To anticipate the output range of the calibration: the term linearity should not be applied directly. The nonlinear calibration result is just a normal consequence of the physics. However, a signal conditioning stage could be inserted within the signal path to produce a linear output.

This is done using logarithmic amplifiers. To illustrate this, plot the calibration curve from Problem 1 on a log-log scale. The result will be a linear curve. Alternately, you could take the log of each column and plot them on a rectangular scale to get that same result.

A logarithmic amplifier (Chapter 6) performs this same function as the plot scale, or as applying the log operation directly to the signal. A linearity measure can then be extracted with some meaning.
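To illustrate the log linearization described above, the sketch below generates hypothetical power-law calibration data y = b*x**m, takes the log of each column, and fits a straight line; the slope recovers the exponent m. The values of b and m are assumed for illustration.

```python
import math

# Sketch: linearizing a power-law calibration y = b * x**m by taking logs.
b, m = 2.0, 2.0                       # assumed power-law parameters
x = [0.5, 1.0, 2.0, 4.0, 8.0]         # hypothetical input values
y = [b * xi ** m for xi in x]

# log10(y) = log10(b) + m * log10(x): a straight line on log-log coordinates
log_x = [math.log10(xi) for xi in x]
log_y = [math.log10(yi) for yi in y]

def lsq_fit(x, y):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return slope, ybar - slope * xbar

slope, intercept = lsq_fit(log_x, log_y)
print(f"slope = {slope:.3f} (exponent m), intercept = {intercept:.3f} (log10 b)")
```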

As flow rate is the variable varied and pressure drop is the variable measured in this calibration, pressure drop is the dependent variable. The flow rate and the fixed values of area and density are independent variables. One approach is to number the pistons and allocate them to the four subcontractors with subsequent analysis of the plating results.

For example, send 24 pistons each to the four subcontractors and analyze the resulting products separately. The variation for each subcontractor can be estimated and can be statistically tested for significant differences.

The ability to provide the exact voltage on replication is important in obtaining consistent results in many transducers. Even if you use a regulated laboratory variable power supply, this effect can be seen in your data variation on replication as a random variation. If you use an unregulated source, be prepared to trace these effects as they change from hour to hour or from day to day. Many LVDT units allow for use of dc power, which is then transformed to ac form before being applied to the coil.

It is easiest to see the effect of power setting on the results when using this type of transducer. Data scatter about a curve fit will provide a measure of repeatability for this instrument (methods are discussed in Chapter 4). Reproducibility involves re-testing the system at a different facility or equivalent (such as different instruments and test fixtures).

Think of this as a duplication. Even though a similar procedure and test matrix will be used to test for reproducibility, the duplication involves different individual instruments and test fixtures.

A reproducibility test is a special type of replication, with the different-facility constraint added. The combined results allow interference effects to be randomized. Bottom line: the results leading to a reproducibility specification are more representative of what can be expected by the end user (YOU!).

The operating loads form a 'load profile' to simulate the road course. Allowing a driver to operate a car over a predetermined course provides a realistic simulation of expected consumption. No matter how well controlled the dynamometer test, it is not possible to completely recreate the driving situation that a real driver provides.

However, each driver will drive the course a bit differently. Extraneous variables include driver behavior. In the hands of a good test engineer, valuable information can be ascertained and realistic mileage values obtained. Most important, testing different car models using a predetermined load profile forms an excellent basis for comparison between car makes; this is the basis of a 'standardized test.'

The variables in a test affect the accuracy of the simulation. Actual values obtained by a particular driver and car are not tested in a standardized test. Even if not exact representations, information obtained in one can be used to get realistic estimates of what to expect in the other.

For example, a car that gets 10 mpg on the chassis dynamometer should not be expected to get 20 mpg on the road course. This is not an uncommon situation when siblings own similar model cars. The drivers, the cars, and the routes driven are all extraneous variables in this direct comparison. Simply put, you and your brother may drive very differently. You both drive different cars. You likely drive over different routes, maybe very different types of driving routes.

You might live in very different geographic locations altitude, weather. The maintenance of the car would play a role, as well. An arbitrator might suggest that the two of you swap cars for a few weeks to compare.

If the consumption of each car remains the same under different drivers (and the associated different routes, location, etc.), then the car is the culprit. If not, then driver and other variables remain involved. The measure of 'diameter' represents an average or nominal value of the object.

Differences along and around the rod affect the value of 'diameter.' Measurements made at different positions along the rod show 'noise,' that is, data scatter, which is handled statistically (that is, we average the values to obtain a single diameter). Using just a single measurement introduces interference, since that one value may not be the same as the average value. Tabulated values of material properties represent average or nominal values.

These should not be confused as being exact values, regardless of the number of decimal places found in the tables (although the values can be assumed to be reasonably representative to within a decimal place). Properties of a material will vary with individual specimens; as such, differences between a nominal value and the actual specimen value will behave as an interference.
Independent variable: applied tensile load.
Controlled variable: bridge excitation voltage.
Dependent variable: bridge output voltage, which is related to gauge resistance changes due to the applied load.
Extraneous variables: specimen and ambient temperature, which will affect gauge resistance.
A replication will involve resetting the control variable and specimen and duplicating the test.

Be sure to operate within the elastic limit of the specimen. Direct comparison and data scatter about a curve fit will provide a measure of repeatability (specific methods to evaluate this are discussed in Chapter 4).


Note that the reproducibility test is also a replication but with the different facility constraint added. This forms a good opportunity for class discussion. Car speed could be determined as: The following is a list of the minimum variables that are important in this test: The assumption is that any speed change is small in regards to the measured value.

This assumption imposes a systematic error on the measured result. The car is assumed to be a point. This assumption may introduce a systematic error into the results. As for a concomitant approach: So its acceleration is easily anticipated and the ideal velocity at any point along the path can be calculated directly from simple physics.

The actual velocity will be the ideal velocity reduced by resistance effects, including frictional effects, such as between the car's wheels and the track and within the wheel axles, and aerodynamic effects. The actual velocity will be a bit smaller than the ideal velocity, a consequence of the systematic error in the assumptions.

But what it does give us is a value of comparison for our measurement. If the measured value is markedly different, then we will know we have some problems in the test. The aerodynamic drag increases with speed while the compressor power remains fairly constant with speed.

To test this question, you might develop a test plan as follows: Operate the car at several fixed, but well separated, speeds U 1 , U 2 , U 3 in each of two configurations, A and B. Configuration A uses the compressor and all windows are rolled up closed. Configuration B turns the compressor off but driver window is rolled down open.

Obviously, there can be alternate configurations by rolling down differing windows, but the idea is the same. U 1 , U 2 , U 3 Concomitant approach: An analytical approach to this problem would tradeoff the power required to operate the vehicle at different speeds under the two configurations based on some reasonable published or handbook values for example, most modern full-sized sedans have a drag coefficient of about 0.

But you might research these numbers. Most of these codes can be found in a library with a quality engineering section or at the appropriate website for the professional group cited. The results from these searches form a good opportunity for class discussion. Define signal and provide examples of static and dynamic input signals to measurement systems. A signal is information in motion from one place to another, such as between stages of a measurement system.

Signals have a variety of forms, including electrical and mechanical. Examples of static signals are: List the important characteristics of signals and define each.
1. Magnitude - generally refers to the maximum value of a signal
2. Range - difference between maximum and minimum values of a signal
3. Amplitude - indicative of signal fluctuations relative to the mean
4. Frequency - describes the time variation of a signal
5. Dynamic - signal is time varying
6. Static - signal does not change over the time period of interest
7. Deterministic - signal can be described by an equation (other than a Fourier series or integral approximation)
8. Non-deterministic - describes a signal which has no discernible pattern of repetition and cannot be described by a simple equation

A random signal, or stochastic noise, represents a truly non-deterministic signal. However, chaotic systems produce signals that appear random but are truly deterministic.

An example would be the velocity in a turbulent fluid flow, which may appear random but is actually governed by the Navier-Stokes equations. The average and rms values for the time period 0 to 20 seconds represent the long-term average behavior of the signal. The values which result in parts a and b are accurate over the specified time periods and, for a measured signal, may have specific significance. Discrete sampled data, corresponding to measurement every 0.

The mean and rms values of the measured data: the mean value for y1 is 0 and for y2 is also 0, but their rms values differ. The mean value contains no information concerning the time-varying nature of a signal; both these signals have an average value of 0. The differences in the signals are made apparent when the rms value is examined. The effect of a moving-average signal processing technique is to be determined for the signal in Figure 2. Discuss Figure 2. In essence, a moving average emphasizes longer-term variations while removing short-term fluctuations.
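The distinction between mean and rms values can be sketched numerically. The two signals below are hypothetical zero-mean sinusoids of different amplitudes: both have the same (zero) mean, but their rms values differ.

```python
import math

# Sketch: two hypothetical zero-mean sinusoids sampled every 0.1 s over 20 s.
dt, N = 0.1, 200
t  = [i * dt for i in range(N)]
y1 = [2.0 * math.sin(2.0 * math.pi * ti) for ti in t]   # amplitude 2, 1 Hz
y2 = [5.0 * math.sin(2.0 * math.pi * ti) for ti in t]   # amplitude 5, 1 Hz

def mean(y):
    return sum(y) / len(y)

def rms(y):
    return math.sqrt(sum(v * v for v in y) / len(y))

# For a sinusoid sampled over whole periods, rms = amplitude / sqrt(2),
# while the mean carries no amplitude information at all.
print(f"means: {mean(y1):.2e}, {mean(y2):.2e}")
print(f"rms:   {rms(y1):.4f}, {rms(y2):.4f}")
```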

It is clear that the peak-to-peak value in the original signal is significantly higher than in the signal that has been averaged. The natural frequency may be determined from Equations 2. T is a period of y(x). FIND: the Fourier series for y(x).

Since the function is neither even nor odd, the Fourier series will contain both sine and cosine terms.

Figliola Beasley Mechanical Measurements 4th Solutions

Since the contribution from to 0 is identically zero, it will be omitted. Fourier series for the function y(t): an odd periodic extension is assumed. The function is extended with a period of 4, as shown in the plot of the odd periodic extension. The first four partial sums of this series are plotted as well.
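The behavior of the partial sums can be sketched numerically. The example below assumes a hypothetical unit square wave with an odd periodic extension of period T = 4, whose sine-series coefficients are b_n = 4/(n*pi) for odd n; the successive partial sums approach the function value.

```python
import math

# Sketch: partial sums of a Fourier sine series for an odd periodic extension
# with period T = 4. The underlying function is a hypothetical unit square wave.
T = 4.0

def partial_sum(t, n_terms):
    """Sum of the first n_terms nonzero terms of the sine series."""
    s = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                                   # odd harmonics only
        s += (4.0 / (n * math.pi)) * math.sin(2.0 * math.pi * n * t / T)
    return s

# Successive partial sums oscillate about and approach the value 1.0 at t = 1
for n_terms in (1, 2, 3, 4):
    print(n_terms, round(partial_sum(1.0, n_terms), 4))
```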

Construct an amplitude spectrum plot for this series. The relative importance of the various terms in the Fourier series, as discerned from the amplitude of each term, would aid in specifying the required frequency response for a measurement system. Signal sources: sketch representative signal waveforms. Find the amplitude-frequency spectrum. This is important because, if we represent the signal by a discrete-time series that contains an exact integer number of periods of the fundamental frequencies, then the discrete Fourier series will be exact.
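The point about an exact integer number of periods can be demonstrated with a short discrete Fourier transform sketch; the two-tone signal (amplitudes 5 and 3 at 1 Hz and 3 Hz) is hypothetical.

```python
import cmath, math

# Sketch: when the record contains an exact integer number of periods of each
# component, the discrete Fourier series recovers the amplitudes exactly.
N = 64
dt = 1.0 / N                  # 1 s record, so the frequency resolution is 1 Hz
t = [n * dt for n in range(N)]
y = [5.0 * math.sin(2*math.pi*1*tn) + 3.0 * math.sin(2*math.pi*3*tn) for tn in t]

# One-sided amplitude spectrum from a direct DFT
amps = []
for k in range(N // 2):
    Yk = sum(y[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    amps.append(2.0 * abs(Yk) / N)        # scale |Y_k| to sinusoid amplitude

print(f"amplitude at 1 Hz: {amps[1]:.6f}, at 3 Hz: {amps[3]:.6f}")
```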

Using the companion software disk, issues associated with sampling continuous signals to create discrete-time series can be explored. The time series and the amplitude spectrum are plotted below. Signal average value, amplitude, and frequency: express the signal y(t) as a Fourier series, plot the signal in the time domain, and construct an amplitude spectrum plot.

Wall pressure is measured in the upward flow of water and air. The flow is in the slug flow regime, with slugs of liquid and large gas bubbles alternating in the flow. The figure below shows the amplitude spectrum for the measured data. There is clearly a dominant frequency at 0. By inspection of Figure 2, the signal can be reconstructed from the above information as the sum of two sine terms with amplitudes 5 and 3. The value of A_o is determined from Equation 2. A separate representation of the first three terms is plotted in Figure 2.

Discuss the effects of low-amplitude, high-frequency noise on signals. Assume that Figure 2 applies. Several aspects of the effects of noise are apparent. The waveform can be altered significantly by the presence of noise, particularly if rates of change of the signal are important for specific purposes, such as control.

Generally, high-frequency, low-amplitude noise will not influence a mean value, and most of the signal statistics are not affected when calculated for a sufficiently long signal. To do this, the system is modeled as a zero-order equation. Clearly, the model shows that if K were to be increased, the static output y would be increased.

Here K is a constant, meaning that the relationship between the applied input and the resulting output is constant. The calibration curve must be a linear one. Notice how K, through its value, takes care of the transfer in units between input F and output y. COMMENT: Because we have modeled this system as a zero-order responding system, we have eliminated any accommodation for a transient response in the system model solution. The forcing function, i.e., the input, is static; so in the transient sense, this solution for y is valid only under static conditions.

However, it is correct in its prediction of the steady output value following any change in input value. System model. FIND: Unless noted otherwise, all initial conditions are zero. For a first-order system, the percent response time is found from the time response of the system to a step change in input; alternatively, use Figure 3. For a second-order system, the system response depends on the damping ratio and natural frequency of the system and can be established from either Equation 3.
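As a sketch of the percent response time for a first-order system, the following solves 1 - exp(-t/tau) = p for several response fractions p, using a hypothetical time constant.

```python
import math

# Sketch: percent response time of a first-order system to a step input,
#   y(t) = y_inf * (1 - exp(-t/tau)),  with a hypothetical time constant.
tau = 2.0        # s (assumed)
y_inf = 1.0      # steady output after the step

def y(t):
    return y_inf * (1.0 - math.exp(-t / tau))

# Time to reach a fraction p of the full change: t_p = -tau * ln(1 - p).
# Note 63.2% response occurs at t = tau by definition of the time constant.
for p in (0.632, 0.90, 0.95, 0.99):
    t_p = -tau * math.log(1.0 - p)
    print(f"{100*p:.1f}% response at t = {t_p:.2f} s")
```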

Problem 3. Be certain to always provide units for all answers; magnitudes alone are not sufficient. Thermometer similar to Example 3.

We saw in Example 3 that the time constant depends on the test conditions; specifically, the heat transfer coefficient is dependent on those conditions. If a student were to remove a sensor from hot water and transfer it to cold water by hand, it would be in motion part of the time. Further, one student may hold the sensor in the cold bath more steadily than another. Movement will change the heat transfer coefficient on the sensor by a factor of 2 to 5 or more. Hence, the variation in time constant noted between students was simply a lack of control of the heat transfer coefficient (that is, a lack of control of the test condition).

Their results are not incorrect, just inconsistent! The results simply show the effects of a random error, in this case due to variations in the test condition. By proper test plan design, they can obtain a reasonable result that is bracketed by their test uncertainty. This uncertainty can be quantified by methods developed in Chapters 4 and 5. Dynamic calibration using a step input: the solution is given by Equation 3. At time 1. That is, the value KA will be attenuated by a factor of 0.

Attenuation results in a negative value for the dynamic error, while a positive value indicates a gain. In effect, the measurement system cannot respond quickly enough to follow the input signal. This creates a filtering effect whereby a significant portion of the signal amplitude is attenuated (the term "attenuation" refers to a reduction in value and is indicated by a negative dynamic error). This system is more effective as a filter than it is as a measuring system. Associated with this large dynamic error is a large phase shift and an associated time lag.

So, based on the constraint for dynamic error, we want M to remain close to unity. Frequency range to meet constraint: for a first-order system, M will always be equal to or less than unity.
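A numerical sketch of the first-order magnitude ratio M = 1/sqrt(1 + (w*tau)**2) and phase shift, with a hypothetical time constant; the 10% dynamic-error threshold (M >= 0.9) used below is an assumed example value.

```python
import math

# Sketch: first-order frequency response with a hypothetical time constant.
#   M(w)   = 1 / sqrt(1 + (w*tau)**2)      magnitude ratio
#   phi(w) = -atan(w*tau)                  phase shift
# Dynamic error = M(w) - 1; negative values indicate attenuation.
tau = 0.01   # s (assumed)

def M(w):
    return 1.0 / math.sqrt(1.0 + (w * tau) ** 2)

def phase_deg(w):
    return -math.degrees(math.atan(w * tau))

# Highest frequency meeting the example constraint M >= 0.9:
#   (w*tau)**2 <= (1/0.9)**2 - 1
w_max = math.sqrt((1.0 / 0.9) ** 2 - 1.0) / tau
print(f"M >= 0.9 up to w = {w_max:.1f} rad/s ({w_max/(2*math.pi):.1f} Hz); "
      f"phase there = {phase_deg(w_max):.1f} deg")
```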

So the dynamic error constraint reduces to a bound on frequency. Hence, a good system model can be written as Equation 3. We must set this threshold value. However, we probably do not wish to set it too low, or we run the risk of an unnecessary shutdown resulting from just random noise.

COMMENT: We should note that as the set threshold value is pushed to lower values of error fraction, the value of the time constant becomes smaller. This places a more restrictive design constraint on the sensor and the installation selected. Choose values such that M meets the constraint; plot M and the phase shift. Either result follows from the plots or from Equation 3. First-order system: the test plan should impose the step input, with the system output recorded at time intervals much less than the time constant.

The peak values (maximum amplitudes), or alternatively the minimum amplitudes, should be plotted versus time. The transient response is found from the homogeneous solution to the equation model.

Then solve for the unknown in Equation 3. Second-order system responding to a sine-wave input at f = 40 Hz: use Figure 3. Also from the figure, it is apparent that this value will meet the constraint at 40 Hz as well. Of course, the actual values selected will depend on availability. RCL circuit: values for R, C, and L remain constant.

However, its value can fall below 0. So the transducer does not meet the constraint over the entire frequency range of interest. The time lag indicates that the output signal appears a time t1 after the input signal is applied. So the input signal amplitude is attenuated; the instrument effectively filters out the information at this frequency. Inspection of Figure 3 gives M. First-order system: this measurement system is not a good choice for measuring the second frequency component. Second-order system (accelerometer): for this system, M(w) = 1/{[1 - (w/wn)^2]^2 + [2*zeta*w/wn]^2}^(1/2).

The resonance frequency is found to be f_R = f_n (1 - 2*zeta^2)^(1/2). Seismic accelerometer of Example 3: by inspection, this instrument will be best suited to measure signals having frequency content that is greater than its natural frequency. Two coupled second-order systems; input signal as given. The following is one approach to choosing a system. With these values selected, we examine each stage in the system.
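The second-order magnitude ratio and resonance frequency can be sketched with hypothetical values of natural frequency and damping ratio (accelerometer-like values assumed for illustration).

```python
import math

# Sketch: second-order frequency response.
#   M(f) = 1 / sqrt((1 - r**2)**2 + (2*zeta*r)**2),  r = f / f_n
#   f_R  = f_n * sqrt(1 - 2*zeta**2)   resonance frequency (zeta < 0.707)
f_n, zeta = 100.0, 0.2        # Hz, dimensionless (assumed values)

def M(f):
    r = f / f_n
    return 1.0 / math.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

f_R = f_n * math.sqrt(1.0 - 2.0 * zeta ** 2)
print(f"resonance at f_R = {f_R:.1f} Hz, where M = {M(f_R):.2f}")
```

With light damping (zeta = 0.2 here), M rises well above unity near f_R, which is why operation near resonance is avoided in measurement applications.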

But the problem demonstrates one approach to dealing with such an open-ended problem.

Note also the phase shift for the system selected. First stage: a second stage with a higher natural frequency would bring M2 closer to unity and decrease the phase shift. Translate this specification into words. SOLUTION: A typical audio amplifier increases the output amplitude relative to the input amplitude by some amount, and that amount is called its gain.

You may be more familiar with the term power instead of gain, such as in an amplifier's rated Watts of power. This power is simply another way to state the gain. Now, in our equations, the gain is just the static sensitivity, K, for the amplifier at some reference frequency.

So the amplifier gain is stated at some reference frequency. In literature pertaining to amplifiers, the static sensitivity is called the static gain, but it all means the same thing. This system specification states that for an input frequency within 0 to 20,000 Hz, the amplifier gain does not vary by more than 1 dB. Another way to write this is in terms of the product KM(f), which is frequency dependent; therefore the amplifier gain is frequency dependent.

So between 0 and 20,000 Hz, the signal amplitude is essentially constant to within 1 dB. A typical audio amplifier will have some spikes and troughs across its frequency response. But the specification is explicit that the amplitude never varies by more than the 1 dB.
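The 1 dB specification can be translated into amplitude ratios with a short sketch; the sample response values below are hypothetical.

```python
# Sketch: translating a +/-1 dB gain specification into amplitude ratios.
# Since dB = 20*log10(M), 1 dB corresponds to a ratio of 10**(1/20).
lo = 10 ** (-1 / 20)    # about 0.891: 1 dB of attenuation
hi = 10 ** (+1 / 20)    # about 1.122: 1 dB of gain
print(f"-1 dB -> M = {lo:.3f}; +1 dB -> M = {hi:.3f}")

# A hypothetical measured magnitude ratio M(f), checked against the spec
response = {20: 0.95, 1000: 1.00, 10000: 1.05, 20000: 0.90}   # Hz -> M
within_spec = all(lo <= m <= hi for m in response.values())
print("meets +/-1 dB spec:", within_spec)
```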

Music signals are a series of sinusoidal frequency terms. Even single notes, such as a middle C, consist of a fundamental frequency and harmonics. The harmonics give distinction to the source of the sound so that different instruments are recognizable. Under normal circumstances, we would want the reproduction electronics to neither add nor detract from the signal information i.

Determine if the measurement system specifications are adequate for the input signal. We offer one possible solution to illustrate the design selection approach. The second-order displacement transducer will be most heavily tested at the highest input frequency. Suppose we impose the restriction on the transducer that the magnitude ratio remain near unity. A quick inspection of Figure 3 is instructive.

That is not good. To do this, raise the natural frequency. Now we should meet our criterion without resonance problems. The spectrum measurement device will not be a factor, owing to its wide frequency response relative to these input frequencies. The slope is 0. The short essay should be written in the student's own words rather than as a restating of text material. Each student understands material in their own individual way.

So this is an opportunity for written technical communication between instructor and student. This is a step-function input as the circuit sees it. Around the loop, Kirchhoff's voltage law applies; so we seek the time required to charge the capacitor to this voltage level. The input magnitude is controlled by the user and may be varied with time. The instrument time constant is set by the user and may be varied with time. The user should select a time constant and an amplitude, start the program, and then vary the amplitude, creating a step change in input.
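A sketch of the capacitor-charging calculation, with hypothetical R, C, and voltage values: solving V_c(t) = V_s*(1 - exp(-t/RC)) for the time to reach a target voltage.

```python
import math

# Sketch: RC circuit charging from a step input (all values assumed).
#   V_c(t) = V_s * (1 - exp(-t / (R*C)))
# Time to reach a target voltage V_t:  t = -R*C * ln(1 - V_t/V_s)
R, C = 10e3, 100e-6        # 10 kOhm, 100 uF  ->  tau = 1 s
V_s, V_t = 5.0, 4.5        # source and target voltages

tau = R * C
t = -tau * math.log(1.0 - V_t / V_s)
print(f"tau = {tau:.2f} s; capacitor reaches {V_t} V after t = {t:.2f} s")
```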

The system response slows down relative to time as the time constant is increased, independent of the magnitude of the step change imposed. Interpolation gives P 2. Toss of four coins. FIND: Develop the histogram for the outcome of any toss.

State the probability of obtaining three heads on one toss. The possible outcomes, tallied as occurrences n_j of each number of heads, are:
4 heads: 1
3 heads: 4
2 heads: 6
1 head: 4
0 heads: 1
The histogram is shown below. Because of the small number of tosses, the development of the histogram is primitive. But the symmetry is obvious.

This type of distribution is best described as a binomial distribution (see Table 4). As the number of possible outcomes (number of coins tossed) becomes large (say, 30 or more), the two distributions become nearly identical over a wide interval about the mean, and the Gaussian distribution can be used for ease.
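The four-coin histogram and the probability of three heads follow directly from enumerating all 2**4 equally likely outcomes, which matches the binomial distribution.

```python
from itertools import product
from collections import Counter
from math import comb

# Histogram of heads in a toss of four coins, by enumerating all 2**4 outcomes
counts = Counter(sum(toss) for toss in product((0, 1), repeat=4))
print(dict(sorted(counts.items())))       # number of heads -> occurrences

# Probability of exactly three heads on one toss
p3 = counts[3] / 2 ** 4
print("P(3 heads) =", p3)

# The same result from the binomial distribution: C(4,3) * 0.5**4
assert p3 == comb(4, 3) * 0.5 ** 4
```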

Problem 4: This is because each shot is independent of the others, and each shot differs from the others by random variation. A 'better' player will have a mean distance in the outcome that is close to the target point and a low variance.

That is, the player will have a low systematic error (average distance from the target point) and low random error (variations from the average point). This game and its interpretation are similar to the dart game discussed in Chapter 1 in the discussion of random and systematic error.

In the US, a variation of this game is called 'matchbook football' where the objective is to slide the matchbook across a table so that it just overhangs the table edge.

Histogram for Table 4.

Frequency distribution for Table 4. Compare and discuss histograms for Table 4. Each histogram clearly shows a central tendency and in each case it is in bin 4. Three datasets from Table 4. Data from Table 4.

This is what is meant by a central tendency in a population: a tendency for a data point to have, or be close to, one value over all others. Data of Table 4. Then from Table 4, Column 2: indeed, inspection of the dataset shows that this is a true statement.

Then, we can expect that the true mean value should lie within the interval so defined. This gives the mean value for this data set and a statement of the range of mean values we would expect to find from any data set.

Compare the meaning of this statement to that found in Problem 4. They are very different! Different data sets of the same variable will give somewhat different mean values.

As N becomes large, the sample mean will approach the true mean and the confidence interval will go to zero. Remember this assumes that there is no systematic error acting on the measurement.
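A sketch of the confidence-interval calculation for a small sample; the dataset and the 95% Student-t value for 9 degrees of freedom are illustrative assumptions, and the half-width shows the 1/sqrt(N) shrinkage noted above.

```python
import math
from statistics import mean, stdev

# Sketch: confidence interval for the true mean from a small sample.
# Hypothetical dataset of N = 10 repeated measurements:
y = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9]
N = len(y)

ybar = mean(y)            # sample mean
s = stdev(y)              # sample standard deviation
t_95 = 2.262              # Student t, 95% confidence, nu = N - 1 = 9

# Interval half-width; note the 1/sqrt(N) dependence: it shrinks as N grows
half_width = t_95 * s / math.sqrt(N)
print(f"mean = {ybar:.3f} +/- {half_width:.3f} (95% confidence)")
```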

Hence, an additional measurement of F would be expected to fall within the stated range. By alternating between the two constant-temperature environments, differences in indicated values within each environment would be indicative of the precision error to be expected of the instrument at that temperature.

Of course, this assumes that the constant temperatures do indeed remain constant throughout the test and the instrument is used in an identical manner for each measurement. Systematic error is a fixed offset.

In the absence of random error, this would be how closely the instrument indicates the correct value. This offset would be present in every reading. So an important aspect of this check is to calibrate against a value that is at least as accurate as you need. This is not trivial. For example, you could use the ice point (0 °C) as a check for systematic error. The ice point is formed from a carefully prepared bath of solid ice and liquid water.

As another check, the melting point of a pure substance, such as silver, could be used. Or easier, the steam point. Accuracy requires a calibration to assess both random and systematic errors. If in the preceding test the temperatures of the two constant temperature environments were known, the above procedure could serve to establish the systematic error, as well as random error of the instrument.

To do this, the difference between the average of the readings obtained at some known temperature and the known temperature itself would provide an estimate of the systematic error. Between any two days of different barometric pressure, the measured boiling point would be different; this offset is due to the interference effect of the pressure.

Consider a test run over several days, coincident with the motion of a major weather front through the area. Clearly, this would impose a trend on the dataset. For example, the measured boiling point may seem to increase from day to day. By running on random days separated by a sufficient period of days, so as not to allow any one atmospheric front to impose a trend on the data, the effects of atmospheric pressure can be broken up into noise.

The measured boiling point might then be high on one test but low on the next, in effect making it look like random data scatter, i.e., noise.


Resolution affects a user's ability to resolve the output display of an instrument or measuring system. Consider a simple experiment to show the effects of resolution. Under some fixed condition, ask several competent, independent observers to record the indicated value from a measurement system. Collect the results; this becomes your dataset. Because the indicated value is the same for each observer, the scatter in your dataset will be close to the value of the resolution of the measurement system.

Randomization makes systematic errors behave as random errors, which are more easily interpreted.

Many variables can affect auto model efficiency. A working standard would be calibrated against the laboratory standard.