**Tags**

Dysgenics, Flynn Effect, HBD Chick, height, IQ, Jan te Nijenhuis, Journal Intelligence, Michael A. Woodley, nutrition, Raegan Murphy, Reaction time, Richard Lynn, statistics, Victorians

There’s been much discussion about a paper by scholars Michael A. Woodley, Jan te Nijenhuis, and Raegan Murphy, published in the prestigious journal *Intelligence*, arguing that genetic IQ in Western populations has declined by about 1 standard deviation since the 19th century. Although conventional IQ tests indicate people are getting smarter, the paper argues that simple reaction time (measured in milliseconds) is better for comparing people across centuries because although it’s a very crude measure of intelligence, it’s much less sensitive to the non-genetic factors that have caused the Flynn Effect.

However, such a large decline in IQ in such a short span of time seems extremely unlikely.

A blogger named HBD Chick argued the paper went wrong by using poor sampling. The oldest study the paper cites was conducted by Francis Galton circa 1889. The paper was criticized for using this study because the sample is too elite to represent Victorians (only 4% of the sample were unskilled laborers, even though 75% of Victorians were). But if the vast majority of Victorians were unskilled laborers, then I suggest we use the mean reaction time of just that occupation to represent Victorians. Yes, excluding the top 25% of occupations might bias the estimate downward, but the fact that these were unskilled workers intellectually curious enough to volunteer for Galton’s study would bias it upward, so the two biases should roughly cancel out.

An analysis of Galton’s sample (see figures 10 & 11) shows that unskilled laborers aged 14-25 averaged reaction times of 195 milliseconds in males and 190 milliseconds in females (an average of 192.5).

Do we have a modern sample of similarly aged people who are equally representative of a Western population? Yes. A 1993 study by W.K. Anger and colleagues (see table 2) found that a sample of American postal, hospital, and insurance workers aged 16-25 (in three different cities) had reaction times of 260 milliseconds in males and 285 milliseconds in females (an average of 273). Just as unskilled labor was an average job in the 19th century, working for the post office, a hospital, or an insurance company seems pretty average in modern times. Thus, by subtracting 192.5 from 273, we can estimate that the average Western reaction time has slowed by 80.5 milliseconds since the 19th century. Since the standard deviation for reaction time is estimated to be 160.4 milliseconds (see section 3.2), reaction time has slowed by 0.50 standard deviations over more than a century (equivalent to a drop of about 8 IQ points). This is virtually identical to the effect size the paper found using far more data points, but the paper then statistically adjusted the effect size, making it implausibly large. So in my humble opinion, the problem with the paper was not the samples it cited, but the statistical corrections it made.
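The arithmetic above can be sketched in a few lines, using the figures as quoted in the text (273 ms is the text’s rounded modern mean):

```python
# Back-of-envelope estimate of the reaction time slowdown described above.
victorian_rt = (195 + 190) / 2   # ms, Galton's unskilled laborers aged 14-25
modern_rt = 273                  # ms, Anger et al. (1993) workers aged 16-25
rt_sd = 160.4                    # ms, the SD estimate cited from the paper

slowdown_ms = modern_rt - victorian_rt   # 80.5 ms
slowdown_sd = slowdown_ms / rt_sd        # ~0.50 SD
iq_points = slowdown_sd * 15             # ~7.5, i.e. roughly 8 IQ points

print(slowdown_ms, round(slowdown_sd, 2), round(iq_points, 1))
```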

The paper argued that since simple reaction time has a true correlation of NEGATIVE 0.54 with general intelligence, they needed to divide the effect size by 0.54 to estimate the true decline in general intelligence. The logic was that since reaction time is a very rough measure of intelligence, it underestimates the true decline in genetic intelligence. I disagree. Such inferences only make sense if you know a priori that there’s been direct selection for lower intelligence, thus dragging reaction time along for the ride, but that’s an assumption the paper was supposed to test, not rest upon. It could be the opposite: there’s been selection for slower reaction time, thus dragging intelligence along for the ride, in which case the effect size should be multiplied by 0.54, not divided. Most likely, there’s just been recent selection for more primitive traits in general, and both reaction time and genetic intelligence reflect this dysgenic effect to parallel degrees, so the change in one equals the change in the other.

To illustrate the point further, consider that height has increased by 1.5 SD since the 19th century. Height correlates only 0.2 with IQ. Would it make sense to argue that since height is such a weak proxy for intelligence, we need to divide the 1.5 SD increase by 0.2 to estimate how intelligence has changed since the 19th century? By such logic, intelligence would have increased by 7.5 standard deviations since the 19th century (equivalent to 113 IQ points)!
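To make the contrast concrete, here is a small sketch (using the values from the text) of what dividing, leaving alone, or multiplying by the g-loading does to each effect size:

```python
# Three ways to convert an observed effect size (in SD units) into an
# implied change in g, given the variable's g-loading.
def implied_iq_change(effect_sd, g_loading):
    """Return IQ-point equivalents for (divide, leave alone, multiply)."""
    return tuple(round(x * 15, 1) for x in
                 (effect_sd / g_loading, effect_sd, effect_sd * g_loading))

# Reaction time: 0.50 SD slowdown, g-loading 0.54 -> roughly 14, 8, or 4 points
print(implied_iq_change(0.50, 0.54))

# Height: 1.5 SD increase, g-loading 0.2 -- dividing implies an absurd
# gain of over 112 IQ points, as the paragraph above notes
print(implied_iq_change(1.5, 0.2))
```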

So clearly, the paper was unjustified in dividing the effect size by 0.54.

Without the adjustment, Victorians were genetically 8 points smarter than moderns, which sounds a lot more believable than 14 points.

But if Victorians had a genetic IQ of 108 (by modern standards), how could they score only 70 on the Raven IQ test? In a previous post I argued that the Raven is a culture fair IQ test and thus the low Victorian scores must be biological (Richard Lynn’s nutrition theory). Citing Richard Lynn, I also argued that malnutrition stunts non-verbal IQ (the Raven) more than verbal-numerical IQ (I estimate by a factor of 31). So if sub-optimum nutrition stunted their non-verbal IQ by 38 points, their verbal-numerical IQs would have been stunted by only 38/31 ≈ 1 point. Thus the Victorians had a verbal-numerical IQ of 107. However, because verbal tests are so culturally biased, their verbal scores would be artificially depressed by lack of schooling and exposure to mass media. But on a culture reduced measure of verbal-numerical ability like Backwards Digit Span, they might have scored the equivalent of 107.

So with a culture fair verbal-numerical IQ (Backwards Digit Span) of 107, and a culture fair non-verbal IQ (the Raven) of 70, their overall IQs would have been 86, which is 1.5 standard deviations below their genetic IQ of 108. That 1.5 SD gap between phenotype and genotype is perfectly explained by Richard Lynn’s nutrition theory, since average height in Western countries was also 1.5 standard deviations lower in the 19th century. And the great accomplishments of Victorians can be explained by their verbal-numerical IQ of 107.
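For readers wondering how 107 and 70 combine to 86 rather than to their simple average of 88.5: a full-scale score is a composite, and a composite of one high and one low score falls further from the mean than their average does. A minimal sketch, assuming the two subscales correlate about 0.5 (my assumption; the post does not state the figure used):

```python
import math

def composite_iq(score1, score2, r=0.5, mean=100, sd=15):
    """Composite of two equally weighted subscale scores with intercorrelation r."""
    z_sum = (score1 - mean) / sd + (score2 - mean) / sd
    sd_of_sum = math.sqrt(2 + 2 * r)  # SD of the sum of two correlated z-scores
    return mean + (z_sum / sd_of_sum) * sd

print(round(composite_iq(107, 70)))  # lands near the 86 quoted above
```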

Greying Wanderer

said: What I personally think may have happened is multiple things at once operating in different directions:

1) general nutrition:

improved but disproportionately for people on the left and middle of the Bell curve thus increasing the population average without changing the top much.

2) specific nutrition – iodine:

worsened among those populations who traditionally got the bulk of their iodine from milk and dairy i.e. north European diaspora

3) dysgenic breeding:

lowering the population average IQ but mainly through lowering the average on the left side and not affecting the right side much

combined together (and assuming the weighting 1 > 2 > 3) these factors might give two contrary effects:

a) left-side: +nutrition -iodine -dysgenic breeding -> net positive

b) right-side: -iodine -> net negative

-> average increasing but innovation declining

#

(alternatively perhaps)

a) left third: +nutrition -iodine -dysgenic breeding -> net negative

b) middle third: +nutrition -iodine -> net positive

c) right third: -iodine -> net negative

pumpkinperson

said: The notion that the Flynn Effect and/or nutrition disproportionately impacts the left side of the bell curve is so pervasive and important that rather than respond here, I am going to create a whole new post on that subject specifically.

brucecharlton

said: This was my simple analysis of the data from Silverman’s meta-analysis, where I derived the idea of a (more than) one standard deviation decline in intelligence since Victorian times:

http://iqpersonalitygenius.blogspot.co.uk/2012/08/taking-on-board-that-victorians-were.html

The relevant passage is:

“We do not have a standard deviation (measure of scatter) for the Victorian data – so we need to compare (looking at men) a (mean) average modern reaction time of 250 milliseconds (SD 47) with a (median) average Victorian RT of 183.

“This implies that average (and being conservative in my interpretation) Victorian reaction times were more than one standard deviation faster than modern RTs; or, that the average Victorian would be placed comfortably in the top 15 percent of the modern population – probably higher.

*

“If we assume that reaction time is a valid measure of general intelligence, in other words that RT has a linear correlation with g – then this would mean that the average Victorian Englishman had a modern IQ of greater than 115.”

So the slowing of simple reaction times is more than one standard deviation on the basis of the standard deviation quoted by Silverman. I assume that this corresponds to a decline in real general intelligence of a similar magnitude – but IQ is merely based on rank ordering while sRT is at least an interval scale, and arguably could be used as a ratio scale, such that a group with an average sRT of 300ms could legitimately be regarded as having half the intelligence of a group with average sRT of 150ms:

http://iqpersonalitygenius.blogspot.co.uk/2013/02/the-ordinal-scale-of-iq-could-be.html

and modern populations therefore being about two-thirds the intelligence of Victorians:

http://iqpersonalitygenius.blogspot.co.uk/2013/05/more-than-one-third-decline-in-general.html

pumpkinperson

said: The relevant passage is:

“We do not have a standard deviation (measure of scatter) for the Victorian data – so we need to compare (looking at men) a (mean) average modern reaction time of 250 milliseconds (SD 47) with a (median) average Victorian RT of 183.

“This implies that average (and being conservative in my interpretation) Victorian reaction times were more than one standard deviation faster than modern RTs; or, that the average Victorian would be placed comfortably in the top 15 percent of the modern population – probably higher.

Correct me if I’m wrong, but doesn’t the Woodley paper argue that there was severe range restriction in Silverman’s samples, and that a more representative sample would have a reaction time standard deviation of 160.4 milliseconds?

If so, then the reaction time difference between moderns and Victorians would be much less than 1 SD.


Michael A. Woodley

said: Quote: “So clearly, the paper was unjustified in dividing the effect size by 0.54.”

This is incorrect. Imperfect construct validity always attenuates effect sizes. Disattenuation for imperfect validity therefore requires that you divide by the g-loading.

Take the change in simple RT scaled in terms of standard deviation units (i.e. the change in simple RT divided by the standard deviation). This value (d) does not translate into an equivalent change in IQ because the synthetic g-loading of simple RT is 0.54 (meaning that it measures 54% of the g factor and thus does not exhibit perfect validity). Converting parameter d into IQ by simply multiplying this by 15 will inject massive amounts of error into your estimate of the IQ change. You will end up underestimating the change by statistical necessity – by 85% in the case of a variable with a g-loading of 0.54.

Parameter d can therefore only be rescaled in terms of an IQ equivalent by dividing by the g-loading of simple RT, thus disattenuating it for the imperfect validity of simple RT. Then you can multiply the resultant quotient by 15 in order to recover the true IQ change.

This is not something that I simply conjured up. It is called validity generalization and has been employed to great effect in meta-analytic research for over 30 years. I suggest that you read Hunter and Schmidt (2004).

Ref.

Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd Ed.). Thousand Oaks, CA: Sage.

pumpkinperson

said: Quote: “This is incorrect. Imperfect construct validity always attenuates effect sizes. Disattenuation for imperfect validity therefore requires that you divide by the g-loading.”

Imperfect construct validity means that there is error in the estimate, but you can’t just assume that the error underestimated your effect size. It could have overestimated the effect size, or the error could have cancelled itself out.

You’re absolutely correct that a 1 SD change in g will cause only a 0.54 SD change in reaction time. However, in my humble opinion, that doesn’t mean you can assume a 0.54 SD change in reaction time was caused by a 1 SD change in g, because regression works both ways. A 0.54 SD change in reaction time predicts only a 0.29 SD change in g, because (0.54 SD)(0.54) = 0.29 SD.
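The two-way regression point can be sketched numerically (r = 0.54 as in the discussion above):

```python
# With correlation r between g and simple RT (both in SD units), regressing
# in either direction predicts only r*d of a d-SD change in the other variable.
r = 0.54     # correlation between simple RT and g
d_rt = 0.54  # a hypothetical RT change in SD units

predicted_g = d_rt * r      # 0.29 SD: what the RT change predicts about g
disattenuated_g = d_rt / r  # 1.00 SD: what dividing by the g-loading assumes

print(round(predicted_g, 2), round(disattenuated_g, 2))
```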

To illustrate the ambiguity of the situation, turn to page 326 of the book “The g Factor” by Arthur Jensen. Jensen does the opposite of what you do. Instead of quantifying a decline in g caused by 20th century dysgenics, he tries to quantify the increase in g caused by 20th century nutrition. He notes that since brain size has a g loading of 0.4 (a very conservative estimate) and since brain size had increased by over 1 SD during the 20th century, g should have increased by over (1 SD)(0.4) = 0.4 SD.

But by your methodology, Jensen should have divided the effect size by 0.4 and concluded that g had increased by 2.5 SD.

It really comes down to an assumption. If one assumes that a 20th century change in reaction time/brain size is CAUSED by a 20th century change in g, then one would divide the effect size by the g loading of the changed variable. However if one assumes that the changed variable CAUSED a change in g, you would multiply. If like me, you assume a third variable caused both, you leave the effect size as it is.

Michael A. Woodley

said: Quote: “It really comes down to an assumption. If one assumes that a 20th century change in reaction time… is CAUSED by a 20th century change in g, then one would divide the effect size by the g loading of the changed variable.”

But this is precisely and explicitly what IS assumed. This is also what the data indicate. In Woodley, te Nijenhuis and Murphy (2014) we state the following:

“Perhaps more important is Rijsdijk et al.’s (1998) finding that the covariance between IQ and simple RT is due to 100% common genetic variance. This complements our own previously reported finding that a substantial variance in subtest heritabilities arrays positively along with simple RTs and subtest g loadings on a ‘genetic g’ common factor. This implies that dysgenic selection doesn’t actually need to act directly on simple RT, as by selecting against ‘genetic g’, dysgenic fertility could plausibly have engendered a decline in simple RT also.” (p. 137).

Thus we have explicitly posited the following causal cascade:

Dysgenic selection -> decrease in genetic g -> decrease in simple RT

The association between dysgenic fertility and g (as opposed to specialized abilities) has been demonstrated using the method of correlated vectors in Woodley and Meisenberg (2013), and more recently in Reeve, Lyerly and Peach (2013).

Thanks for the Jensen reference incidentally. I am currently meta-analyzing brain size increase data in terms of increasing IQ along the same lines as Jensen and completely forgot that he did a crude analysis of his own.

Refs.

Reeve, C. L., Lyerly, J. E., & Peach, H. (2013). Adolescent intelligence and socio-economic wealth independently predict adult marital and reproductive behavior. Intelligence, 41, 358–365.

Rijsdijk, F. V., Vernon, P. A., & Boomsma, D. I. (1998). The genetic basis of the relation between speed-of-information-processing and IQ. Behavioral Brain Research, 95, 77–84.

Woodley, M.A., & Meisenberg, G. (2013). A Jensen effect on dysgenic fertility: An analysis involving the National Longitudinal Survey of Youth. Personality & Individual Differences, 55, 279-282.

Woodley, M.A., te Nijenhuis, J., & Murphy, R. (2014). Is there a dysgenic secular trend towards slowing simple reaction time? Responding to a quartet of critical commentaries. Intelligence, 46, 131-147.

pumpkinperson

said: Quote: “Thus we have explicitly posited the following causal cascade: Dysgenic selection -> decrease in genetic g -> decrease in simple RT”

And that’s a valid assumption, and you might be 100% correct. I do wonder though if other causal models are plausible. For example, dysgenic selection + mutation load causing a decrease in genetic quality which caused PARALLEL decreases in both g and reaction speed.

A more far fetched causal model might be that relaxed selection for physical combat (elimination of duels and other forms of 19th century violence) caused dysgenic selection for reaction time, which caused a reduction in g.

Quote: “The association between dysgenic fertility and g (as opposed to specialized abilities) has been demonstrated using the method of correlated vectors in Woodley and Meisenberg (2013), and more recently in Reeve, Lyerly and Peach (2013).”

I haven’t read those specific papers, but I suspect that a major reason why the relationship was stronger with g than with “specialized abilities” is that on many IQ tests, g has far more biological reality than other sources of variance, which are often just acquired knowledge, skills, test taking attitudes and culturally specific forms of thinking. However, because reaction time is a much more directly physiological test than IQ tests are, even the non-g reaction time variance might be highly heritable, and thus sensitive to dysgenic fertility and mutation load.

Height is analogous to reaction time in that they’re both physiological variables with weak genetic correlations with IQ (a strong proxy for g). Genetic mutations/abnormalities seem to reduce both height and IQ. For example, young adults with Down’s Syndrome are 2.5 SD below average in height. Now one could adopt a causal model similar to yours and argue:

Chromosomal abnormality -> decrease in genetic g -> decrease in height

This causal model would justify dividing the effect size for height (2.5 SD) by height’s g loading (conservatively estimated at 0.2) and thus concluding that Down’s Syndrome reduces g by 12.5 SD (188 IQ points!). But in reality, Down’s Syndrome only reduces IQ by 50 points (3.33 SD). That’s an example where simply leaving the effect size alone gives a much better prediction of IQ.
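The Down’s Syndrome check above, in numbers (values as given in the text):

```python
# Dividing a weak-proxy effect size by its g-loading wildly overpredicts
# the actual IQ deficit; the unadjusted effect size comes much closer.
height_deficit_sd = 2.5          # Down's Syndrome height deficit, in SD units
height_g_loading = 0.2           # conservative g-loading of height
actual_iq_deficit = 50           # the real Down's Syndrome IQ deficit, in points

divided_sd = round(height_deficit_sd / height_g_loading, 1)  # 12.5 SD
unadjusted_sd = height_deficit_sd                            # 2.5 SD

# ~187.5 implied points vs. 37.5 unadjusted vs. the actual 50
print(divided_sd * 15, unadjusted_sd * 15, actual_iq_deficit)
```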

The reason is that even though IQ and height are genetically correlated, Down’s Syndrome affects height INDEPENDENTLY of its effect on IQ. Analogously, 20th century dysgenics and 20th century mutation load may affect reaction time INDEPENDENTLY of their effect on g.

Now I understand that in science one must make assumptions and perhaps the assumptions you make are the most reasonable given the preponderance of evidence that currently exists. In fact, while looking for an example that proves you wrong, I found some data that strongly supports your methods. On page 194 of Jensen’s “The g Factor”, you might have noticed that Jensen talks about the effect of inbreeding depression on both IQ and reaction speed. It seems inbreeding depression impairs reaction speed by 0.18 SD while impairing IQ by about 0.5 SD. That’s an example where dividing the reaction time effect size by its g loading gives much better results than simply leaving it alone, or multiplying by the g loading.

Thanks for the Jensen reference incidentally. I am currently meta-analyzing brain size increase data in terms of increasing IQ along the same lines as Jensen and completely forgot that he did a crude analysis of his own.Interesting. Last week I was thinking of posting about the 20th century rise in brain size. I’m a huge fan of Richard Lynn’s nutrition theory, so I’ve already talked a lot about the 1.5 SD increase in height, but the increase in brain size is far more directly relevant to the Flynn Effect than height gains are. I was going to do a quick post comparing gains in adult cranial capacity with gains in adult brain weight with gains in childhood head circumference to see how closely they all parallel each other and the gains in height and IQ. So feel free to send me your brain size paper before publishing, because I may be able to provide constructive feedback.
