Becoming a data guru


2. Don’t be ‘mean’ about your two friends.


When people talk about averages, they are often talking about the ‘mean’. Two of the mean’s friends should be included for a better story: the median and range. The median tells us the middle value. The range tells us the spread of values. They help us unpack the nuances in the data. So really they’re our friends too.


For example, at a Graduate Induction, two speakers each receive a mean score of four out of five from the 200 graduates surveyed. At first glance, it looks as though neither speaker did too badly. But what if we could only invite one back? When the means are identical, looking more closely at the medians gives us the answer.
Median for Speaker A: 3
Median for Speaker B: 4


The lower median for Speaker A suggests that a few high scores pushed the mean up above the middle of the pack. Looking at the ranges clears up the story.
Range for Speaker A: 1–5
Range for Speaker B: 3–5


The range confirms that audiences had a much more mixed experience with Speaker A than with Speaker B. If it were our call, we'd invite Speaker B back.
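If you want to check all three measures yourself, here's a minimal sketch in Python using the standard statistics module. The score lists are invented purely to illustrate the pattern (identical means, different medians and ranges); they are not the survey data above.

import statistics

# Hypothetical scores on a 1-5 scale, invented so both speakers share the
# same mean while their medians and ranges differ.
speaker_a = [1, 2, 3, 3, 3, 3, 5, 5, 5, 5, 5]
speaker_b = [3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5]

for name, scores in (("Speaker A", speaker_a), ("Speaker B", speaker_b)):
    print(name,
          "| mean:", round(statistics.mean(scores), 1),
          "| median:", statistics.median(scores),
          "| range:", f"{min(scores)}-{max(scores)}")

The means come out identical, so it is the medians and ranges that do the work of separating the two speakers.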


A Smarty would: Check the mean, median and range to understand how the data is distributed.


3. When it comes to samples, size matters. The sample size is the total number of people or data points represented in a study. It’s important to know how big your sample is, because larger samples tend to yield more robust results. Samples should be representative of the wider pool. So yes, size does matter.


Imagine you are looking at the results of a survey that says 50% of graduates are dissatisfied with their development. Looking more closely at the sample, however, you realise that the survey covered just eight of your 150 graduates. What's more, six of the eight came from the same stream and all of them were based in the London office. That 50% figure rests on a small, highly unrepresentative sample in which each participant accounted for 12.5 percentage points of the result. See what we're saying? Not so good, and not so robust.
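To get a feel for just how shaky a 50% figure from eight people is, you can sketch a rough margin of error. This is only an illustration, using the standard normal approximation for a proportion (itself unreliable at n = 8, so treat the numbers as indicative):

import math

def margin_of_error(p, n, z=1.96):
    # Approximate 95% margin of error for a sample proportion p from n people.
    return z * math.sqrt(p * (1 - p) / n)

for n in (8, 150):
    moe = margin_of_error(0.5, n)
    print(f"n = {n}: 50% give or take {moe * 100:.0f} percentage points")

# Roughly plus or minus 35 points at n = 8, versus about 8 points at n = 150.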


A Smarty would ask: How big is the sample? Was it representative?


4. Correlation does not mean causation. Focusing on the relationship between only two factors can be misleading, because although there may be a correlation between them, it does not mean that one has caused the other. Confused? Don’t be.


For example, an organisation tracks the volume of recruitment marketing collateral distributed on campus at different times of the year and discovers that application numbers rise and fall with the volume of collateral. Without considering the time of year, however, they might miss the fact that both collateral and applications peak in autumn, which is when prospective graduates are most likely to apply anyway. So it's not the collateral that boosts applications; it's the time of year.
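Here's a minimal simulation of that trap, with made-up monthly figures in which the season alone drives both collateral volume and applications (it uses statistics.correlation, available from Python 3.10):

import random
import statistics

random.seed(0)

# 1 = autumn-ish month. Both series are generated from the season plus
# independent noise, so by construction neither one causes the other.
autumn = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]
collateral = [800 * a + random.gauss(200, 30) for a in autumn]
applications = [500 * a + random.gauss(100, 20) for a in autumn]

# The headline correlation looks impressive, but it only reflects the shared
# seasonal driver baked into the simulation, not a causal link.
print("overall r:", round(statistics.correlation(collateral, applications), 2))

In a real dataset you'd dig into this by comparing months within the same season, or by including the time of year in a regression, rather than trusting the headline correlation.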


A Smarty would ask: What else could be affecting the two variables?


5. Statistical significance speaks volumes. Something is statistically significant when the relationship or effect between factors is so large or consistent that it would rarely occur by chance.


Fancy another example? OK. Let's think about a graduate recruiter. When analysing data on universities and graduate performance, they notice that two universities seem to produce the best-performing graduates: graduates as a whole scored an average of seven out of 10, but graduates from these two universities averaged 9.5. Before changing their recruitment strategy to place a greater emphasis on the two 'star' universities, however, they check the significance of the result. The effect turns out to be insignificant, meaning the variation could simply be random. How do you test for significance? Well, you could use a regression [a statistical process].
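As a sketch of what such a check could look like, here's a simple permutation test in Python, a basic stand-in for the regression mentioned above. The performance scores are invented for illustration, and the printed p-value depends entirely on those made-up numbers:

import random
import statistics

random.seed(1)

# Hypothetical performance scores out of 10 (invented for illustration).
star_unis = [9.5, 9.5, 9.0, 10.0]                      # the two 'star' universities
other_unis = [7.0, 6.5, 8.0, 9.0, 6.0, 7.5, 5.5, 8.5]  # everyone else

observed_gap = statistics.mean(star_unis) - statistics.mean(other_unis)

# Shuffle the university labels many times and count how often a gap at
# least as large as the observed one appears purely by chance.
pooled = star_unis + other_unis
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:len(star_unis)]) - statistics.mean(pooled[len(star_unis):])
    if gap >= observed_gap:
        extreme += 1

print(f"observed gap: {observed_gap:.2f}, p-value: {extreme / trials:.3f}")

A small p-value (below, say, 0.05) suggests the gap would rarely occur by chance; a larger one means the 'star university' effect could simply be random variation.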


A Smarty would ask: Have we checked our results for significance?


6. Be 100% absolutely sure. Changes can sometimes be exaggerated when described using percentages. We’ve got a juicy example of this below.


A company that has doubled its intern cohort each year for the last three years might have started with only one intern and grown to just eight, but it can still claim a 700% increase. On the flipside, a large organisation might recruit 500 interns a year; taking on three more would be an increase of just 0.6%. Makes you think, doesn't it?
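A tiny helper makes the point concrete; the figures below are just the ones from the example above:

def describe_change(before, after):
    # Report the absolute change alongside the percentage change, so a jump
    # from 1 to 8 interns isn't mistaken for a bigger story than 500 to 503.
    pct = (after - before) / before * 100
    return f"{before} -> {after} interns: {after - before:+d} ({pct:+.1f}%)"

print(describe_change(1, 8))      # 1 -> 8 interns: +7 (+700.0%)
print(describe_change(500, 503))  # 500 -> 503 interns: +3 (+0.6%)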


A Smarty would ask: Do we have the absolute figures alongside the percentages?



