There are a lot of problems with how we measure intelligence. Some people measure intelligence using childhood ratio scores: if a 10-year-old is as smart as a 12-year-old, he gets an IQ of 120, since he's functioning at 120% of his chronological age. Other people measure IQ with deviation scores: if a 10-year-old is smarter than roughly 91% of 10-year-olds, he's assigned an IQ of 120 (sigma 15). Some tests define the population standard deviation as 15, others 16, others 22 or 24.
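To make the two conventions concrete, here is a minimal sketch in Python (the function names are mine, and scipy is assumed available) computing a ratio IQ, a deviation IQ from a percentile rank, and a conversion between sigma conventions:

```python
from scipy.stats import norm

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Childhood ratio IQ: mental age as a percentage of chronological age."""
    return 100.0 * mental_age / chronological_age

def deviation_iq(percentile: float, sigma: float = 15.0) -> float:
    """Deviation IQ: place a percentile rank on a normal curve
    with mean 100 and the given standard deviation."""
    return 100.0 + sigma * norm.ppf(percentile)

def convert_sigma(iq: float, sigma_from: float, sigma_to: float) -> float:
    """Re-express an IQ scored under one sigma convention under another
    (same rarity, different number)."""
    return 100.0 + (iq - 100.0) * sigma_to / sigma_from

print(ratio_iq(12, 10))            # 120.0: mental age 12 at age 10
print(deviation_iq(0.9088))        # ~120: smarter than ~91% of age peers
print(convert_sigma(132, 24, 15))  # 120.0: a sigma-24 score on the sigma-15 scale
```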

About the only thing all IQ tests agree on is that the average score is 100. But the average 20-year-old is far smarter than the average 3-year-old, so assigning them both an IQ of 100 gets confusing; we must clarify that 100 is the average IQ for one's age. But IQs vary enormously from country to country, so an IQ of 100 is said to represent the average in America, or is it Britain? It depends whom you ask. And what happens as demographic shifts change the populations of both countries? What long-term consistency is there in defining IQ 100 as the American average? And what about the Flynn effect? Average performance has gone up by some 30 points since the Victorian era, yet the average IQ is still 100? And what about dysgenics? By other measures the average has gone down by 15 points since the Victorian era, yet the average is still 100? I get a headache just keeping it all straight.
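Part of what makes this dizzying is that deviation IQ is re-normed: a raw score is converted to IQ relative to whatever population was most recently sampled, so the average is 100 by construction no matter how much raw performance drifts. A minimal sketch, with hypothetical raw-score norms chosen purely for illustration:

```python
def raw_to_iq(raw: float, norm_mean: float, norm_sd: float, sigma: float = 15.0) -> float:
    """Convert a raw test score to a deviation IQ against a norming sample."""
    z = (raw - norm_mean) / norm_sd
    return 100.0 + sigma * z

# Hypothetical norms: the same raw score of 50 is IQ 110 against an older
# sample but only IQ 100 against a newer one, because re-norming always
# re-centers the average at 100.
print(raw_to_iq(50, norm_mean=45, norm_sd=7.5))  # 110.0 (older norms)
print(raw_to_iq(50, norm_mean=50, norm_sd=7.5))  # 100.0 (newer norms)
```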

And is an IQ of 100 really twice as smart as an IQ of 50? Is the 10-point difference between an IQ of 120 and 130 the same as the 10-point difference between an IQ of 80 and 70?

Clearly, a much simpler scale is needed:

Around the turn of the millennium, a member of the unbelievably brilliant Prometheus Society published one of the most interesting and important articles in the history of psychology, and amazingly, virtually no one has ever read it. The article asserted that problem-solving speed doubles every 10 IQ points; the Promethean would later revise the figure to every 5 points. Yes, reaction time (information-processing speed) has a beautifully Gaussian distribution, but because the human mind operates in parallel, a person with an IQ of 105 is not 5% smarter than a person with an IQ of 100, but rather twice as smart! And an IQ of 110 is four times as smart! Although the Promethean never put it in those terms, this inference was based on the fact that no matter how cognitively homogeneous a classroom, the difference in learning speed is always at least an order of magnitude. Think back to your high school math class: the brightest kid in the class grasped what the teacher was talking about in seconds, while the dullest might take all year.
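On this doubling model, the ratio of problem-solving speeds between two people depends only on their IQ gap: speed ratio = 2^((IQ1 − IQ2)/5). So even a classroom spanning only IQ 85 to 115 (roughly the middle two-thirds of the population) would contain a 2^((115 − 85)/5) = 2^6 = 64-fold range of learning speeds, comfortably more than an order of magnitude.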

So I propose a new intelligence scale to reflect these huge differences. We could call the units of the scale BP scores (Brain Power scores) to differentiate them from IQ scores. We must first anchor our scale to some clear, definable, unambiguous, stable level. I say, the intelligence of the average adult ape in its peak years. As best I can determine, the average ape has a deviation IQ of 40 (sigma 15). So let's arbitrarily assign the average adult ape (IQ 40) a BP score of 1, and then double the BP for every 5 IQ points above 40; in other words, BP = 2^((IQ − 40)/5).

Thus the conversion between IQ scores and BP scores is as follows (a short code sketch of the formula appears after the table):

IQ 40 = BP 1
IQ 45 = BP 2
IQ 50 = BP 4
IQ 55 = BP 8
IQ 60 = BP 16
IQ 65 = BP 32
IQ 70 = BP 64
IQ 75 = BP 128
IQ 80 = BP 256
IQ 85 = BP 512
IQ 90 = BP 1,024
IQ 95 = BP 2,048
IQ 100 = BP 4,096
IQ 105 = BP 8,192
IQ 110 = BP 16,384
IQ 115 = BP 32,768
IQ 120 = BP 65,536
IQ 125 = BP 131,072
IQ 130 = BP 262,144
IQ 135 = BP 524,288
IQ 140 = BP 1,048,576 (~1 million)
IQ 145 = BP 2,097,152 (~2 million)
IQ 150 = BP 4,194,304 (~4 million)
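For anyone who wants to extend the table, here is a minimal sketch of the conversion in Python (the function names are mine, not from the Prometheus article):

```python
from math import log2

def iq_to_bp(iq: float) -> float:
    """Brain Power score: BP 1 at IQ 40 (the average adult ape),
    doubling every 5 IQ points above that anchor."""
    return 2.0 ** ((iq - 40.0) / 5.0)

def bp_to_iq(bp: float) -> float:
    """Inverse conversion: recover the IQ corresponding to a BP score."""
    return 40.0 + 5.0 * log2(bp)

for iq in (40, 70, 100, 130, 150):
    print(f"IQ {iq} = BP {iq_to_bp(iq):,.0f}")
# IQ 40 = BP 1
# IQ 70 = BP 64
# IQ 100 = BP 4,096
# IQ 130 = BP 262,144
# IQ 150 = BP 4,194,304
```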

So the average American (IQ 100) would have a BP of about 4,000, meaning they're 4,000 times smarter than an ape. The average Ivy League graduate (IQ 130) would have a BP of about 262,000, meaning they're 262,000 times smarter than an ape. And the average academic Nobel Prize winner (IQ 150) would have a BP of about 4 million, meaning they're 4 million times smarter than an ape. And if intelligence gaps really are as huge as this scale implies, then no wonder we have so much economic inequality!