On the first exam, your score reflects two things:

- Your ability to apply the material learned in this course
- Errors associated with how the exam was graded. These
errors come in two forms; one is much worse than the other:
1. Random error (can be corrected for - see below)

2. Systematic error (extremely serious if you don't know it exists)

These errors work as follows:

What we measure is X, but what we are interested in is the distribution of the true variable, T. To measure T, however, we have to know what the random error, e_{r}, and the systematic error, e_{s}, are. Without knowledge of e_{r} and e_{s}, T can never be accurately measured. This is potentially a huge problem.

What is e_{r}?

Random errors increase the dispersion. These errors are associated with the apparatus or method used in obtaining the data. All data sampling is subject to random error, period. There is no way to avoid it. However, if you know the value of e_{r}, you can "subtract" it out of the data.

Because of the presence of random error, it is always important to compare distributions via the method we have previously discussed (this method is sometimes referred to as the Z-statistic):

For example: Suppose you're in a small class of 25 students. The average on the first midterm was 75 out of 100 with a dispersion of 15. The average on the second midterm was 68 out of 100 with a dispersion of 5. Is there a significant difference between these two distributions? We find only marginal significance in this case: the Z-statistic is 2.2.

What is different about the two midterms is that the second midterm likely has a lower random error in the way it was graded.

What if N were 100? Would the difference be significant? Yes: the Z-statistic is 4.4.
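As a concrete sketch, the comparison above can be reproduced in a few lines of Python. The exact formula was given earlier in the course; here I assume the common form Z = |mean1 - mean2| / SQRT((disp1^2 + disp2^2)/N), which reproduces both numbers quoted above.

```python
import math

def z_statistic(mean1, disp1, mean2, disp2, n):
    # Difference of the means in units of their combined error,
    # assuming both distributions have the same sample size n.
    return abs(mean1 - mean2) / math.sqrt((disp1**2 + disp2**2) / n)

print(round(z_statistic(75, 15, 68, 5, 25), 1))   # N = 25  -> 2.2
print(round(z_statistic(75, 15, 68, 5, 100), 1))  # N = 100 -> 4.4
```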

Now, suppose that, for the first midterm, I know that the random error was 12 points, and for the second midterm, the random error is 3 points. We will show below that compensating for this random error produces a significant difference between these two exam distributions.

Here is how you subtract out a known random error. In a bell-shaped distribution, the various components are added or subtracted in __quadrature__. What does that mean?

(measured dispersion)^{2} = (true dispersion)^{2} + e_{r}^{2}

so

(true dispersion)^{2} = (measured dispersion)^{2} - e_{r}^{2}

Example: Suppose a measured distribution has a dispersion of 8 units and we know that the measuring error e_{r} is 6 units. What is the true dispersion?

(true dispersion)^{2} = 8^{2} - 6^{2}

(true dispersion)^{2} = 64 - 36

(true dispersion)^{2} = 28

(true dispersion) = 5.3 (square root of 28)

Now we apply this to the case of our two exams:

- Exam 1:

  (true dispersion)^{2} = 15^{2} - 12^{2}

  (true dispersion)^{2} = 225 - 144 = 81

  (true dispersion) = 9 (square root of 81)

- Exam 2:

  (true dispersion)^{2} = 5^{2} - 3^{2}

  (true dispersion)^{2} = 25 - 9 = 16

  (true dispersion) = 4 (square root of 16)

Now what do we get using 75 +/- 9 and 68 +/- 4?

A significant difference exists: the Z-statistic is 3.6. In the raw data, this significance is obscured by the large amount of random error. This is a real-life problem, which is why it's important to try to assess how much random error exists in your sample or data.
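Putting the two steps together, here is a minimal sketch of correcting each exam's dispersion in quadrature and then re-comparing. The Z formula is an assumption (difference of means over the combined error of the means) that reproduces the quoted value of 3.6.

```python
import math

def true_dispersion(measured, e_r):
    # Subtract a known random error in quadrature.
    return math.sqrt(measured**2 - e_r**2)

def z_statistic(mean1, disp1, mean2, disp2, n):
    return abs(mean1 - mean2) / math.sqrt((disp1**2 + disp2**2) / n)

d1 = true_dispersion(15, 12)  # exam 1: sqrt(225 - 144) = 9.0
d2 = true_dispersion(5, 3)    # exam 2: sqrt(25 - 9) = 4.0
print(round(z_statistic(75, d1, 68, d2, 25), 1))  # -> 3.6
```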

What is e_{s}?

Systematic errors mean that different methods of measurement are being applied to the same variable. This means that the position of the mean is strongly affected. For example, suppose there are two patrolmen on the freeway, both with identical radar guns, except that one gun systematically reads 5 mph higher than the other due to a "calibration" error back at the station. Which policeman do you want to speed by?

- Exam 1:

Your exam was graded by 4 different people, although all have the
same exam key. The exams were *randomly* sorted as they
were handed in, and the 4 graders randomly picked a pile. Each pile
has 35 exams.

Since the intrinsic distribution of exam scores will be normal (bell-shaped), then by the sampling principles established earlier we know that 35 samples is enough to accurately reflect the distribution.

Therefore, each of the 4 piles should have the same mean and dispersion, to within the errors. That is, the difference between any 2 exam piles should be less than 2.5 in terms of the calculated Z-statistic (ideally, they should be less than 2.0).
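That consistency check can be sketched as follows, using made-up per-pile means and dispersions (the real numbers would come from each grader's actual pile of 35 exams):

```python
import math
from itertools import combinations

def z_statistic(m1, d1, m2, d2, n):
    return abs(m1 - m2) / math.sqrt((d1**2 + d2**2) / n)

# Hypothetical (mean, dispersion) for each grader's pile of 35 exams.
piles = {"grader 1": (74, 14), "grader 2": (76, 15),
         "grader 3": (73, 16), "grader 4": (77, 13)}
n = 35

# Compare every pair of piles; flag any pair exceeding the Z = 2.5 threshold.
for (g1, (m1, d1)), (g2, (m2, d2)) in combinations(piles.items(), 2):
    z = z_statistic(m1, d1, m2, d2, n)
    verdict = "possible systematic difference" if z > 2.5 else "consistent"
    print(f"{g1} vs {g2}: Z = {z:.2f} ({verdict})")
```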

Differences in grading style, however, will cause each grader to have a slightly different mean and dispersion.

e_{s} will then manifest itself by __significantly__ different mean scores for different graders. This can be corrected for easily.

e_{r} manifests itself by different
dispersions for different graders on a per question basis.
This is due to random errors
associated with grading the questions and assigning a point value
in a subjective manner.

For this exam, your dispersion score was calculated by the instructor
after e_{r} and e_{s} had been determined. In principle,
this needs to be done for every exam you take in large classes.
Most instructors don't do this, period (it's too much work).

You should always demand that your instructors prove to you that, in the case of multiple graders on an exam, no bias exists.

Typically for this exam:

- e_{r} was much smaller than the intrinsic dispersion (and so e_{r}^{2} is very much smaller than (intrinsic dispersion)^{2}), hence the measured dispersion is very close to the true dispersion.
- e_{s} was not significant (all graders graded in the same manner to within the errors).

The important role of data precision. (Next class we will have a PRS exercise based on estimation).

Understanding the role of measurement errors is crucial to proper data interpretation. For instance, the measured dispersion in some distribution represents the convolution of

- the intrinsic dispersion
- measurement error
- the precision of the measurements

In general, you only care about the intrinsic dispersion of a distribution. That is, you don't want the dispersion to be dominated by measurement error or poor precision, because then you can't draw any valid conclusions.

Example: Column 1 contains data that was measured with good precision; that is, the measuring error of the instrument was less than 0.1 units. Column 2 represents the same data measured with an instrument that had a measuring error of +/- 1 unit:

- 20.1 18
- 20.2 19
- 20.3 19
- 20.3 21
- 20.4 20
- 20.4 19
- 20.5 22
- 20.6 21
- 20.7 23
- 20.8 21
- 20.9 20

The first column yields a dispersion of 0.23

The second column yields a dispersion of 1.44

Clearly the first column is a better measure of the intrinsic distribution of the sample than the second column. Essentially the numbers in the second column are meaningless.
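As a check, here is a sketch that computes the dispersion of both columns, assuming "dispersion" means the standard deviation about the mean; with this convention the results come out near the quoted 0.23 and 1.44 (the exact values depend on the convention used, e.g. dividing by N versus N-1).

```python
import math

def dispersion(values):
    # Standard deviation about the mean (dividing by N).
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

good = [20.1, 20.2, 20.3, 20.3, 20.4, 20.4, 20.5, 20.6, 20.7, 20.8, 20.9]
poor = [18, 19, 19, 21, 20, 19, 22, 21, 23, 21, 20]

print(round(dispersion(good), 2))  # roughly 0.24
print(round(dispersion(poor), 2))  # roughly 1.42
```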

Note, your GPA is actually determined in a rather imprecise way. Your GPA is recorded to two decimal places (e.g. 3.14), yet each class grade is measured far more coarsely (to within a precision of about 0.3 grade points). In principle, grades should be calculated on a strictly numerical scale, with a precision of 0.1.

That way, if you're between an A and a B, you would get a 3.5 for your grade (as opposed to either an A- at 3.7 or a B+ at 3.3).
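A small illustration of the point, using a hypothetical transcript and the usual A = 4.0, A- = 3.7, B+ = 3.3 grade-point convention:

```python
# Each course grade is only ~0.3 grade points precise, yet the GPA is
# reported to two decimal places -- false precision.
letter_points = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0}
grades = ["A-", "B+", "A", "B"]  # hypothetical transcript
gpa = sum(letter_points[g] for g in grades) / len(grades)
print(f"GPA = {gpa:.2f}")  # prints GPA = 3.50
```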

Every measurement has an error associated with it, and hence a measurement is only as good as its error. Knowing the size of measuring or sampling errors is often difficult, but it is still important to try to determine these errors.

For some kinds of sampling, error estimation is straightforward. For instance, opinion-poll sampling has an error that depends only on the number of people in the sample, N. This error has to do with counting statistics and is expressed as

error = SQRT(N)/N = 1/SQRT(N)

For a sample of 16 people, the error would be SQRT(16)/16 = 4/16 = 25%. This is a large error, since the range of YES vs. NO runs from 0-100%. If 12 people answered yes and 4 people answered no, then your result would be:

- Percentage yes = 75 +/- 25%
- Percentage no = 25 +/- 25%

For a sample of 1000 people, the error would be SQRT(1000)/1000 = 32/1000 = about 3%. If 750 answered yes and 250 answered no, then your result would be:

- Percentage yes = 75 +/- 3%
- Percentage no = 25 +/- 3%
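The counting-statistics rule above can be sketched as:

```python
import math

def poll_result(yes, no):
    # Counting-statistics error: SQRT(N)/N = 1/SQRT(N), as a percentage.
    n = yes + no
    error_pct = 100 * math.sqrt(n) / n
    return 100 * yes / n, 100 * no / n, error_pct

print(poll_result(12, 4))     # (75.0, 25.0, 25.0)
print(poll_result(750, 250))  # error of about 3.2%
```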

Conclusion: Always ask what the measuring errors are!!!