Fun information!

I thought this was an interesting article that I received in my e-mail today. Thought I'd share.



Inventing IQ



Mention efforts to measure intelligence, and most people immediately think of two letters--IQ. Yet the "intelligence quotient" wasn't the first attempt to measure intellectual wattage, and many experts think it's overrated.

First, Measure Something



Scientific efforts to measure intelligence date to the late 19th century, when Charles Darwin's cousin, Sir Francis Galton, set out to determine the extent to which "genius" was hereditary (and helped launch the eugenics movement). Galton created a battery of tests measuring, among other things, sensitivity to the smell of roses, the ability to hear high-pitched whistles, and skill in identifying slight differences in weight.

Such abilities, Galton suggested, were signs of a keen mind. He even opened a lab at a museum in London, where he charged interested parties a few bucks to have their wits measured. An American colleague, James Cattell, transplanted Galton's project to the United States in the 1890s, but in 1901 the entire endeavor took a serious blow when a follow-up study found no correlation between the results of Cattell's tests and actual academic achievement.

Next, Identify Aptitude



A few years later, a prominent French psychologist, Alfred Binet, launched a more successful effort to measure intelligence--or at least scholastic aptitude. At the time, French teachers faced a problem familiar to educators everywhere: they needed an efficient and objective means to determine which kids had actual learning problems and which were just behind or "behaviorally challenged."

In 1904, the French Ministry of Public Instruction asked Binet and others to develop such a means. A year later, Binet and his colleague, Theodore Simon, published the test that launched intelligence measurement as we know it. Unlike Galton and Cattell, Binet and Simon built their test around faculties like memory, reasoning, problem solving, and vocabulary--abilities that today's intelligence tests continue to measure. The proof was in the pudding: Binet and Simon's test successfully predicted (to a reasonable degree) how students did in school.

Now Call It a Quotient



Binet and Simon's test didn't produce an IQ score. Rather, each child who took it was assigned a "mental age" based on the average test score for children of a particular chronological age. For example, a 10-year-old who did as well on the test as the average 12-year-old was said to have a "mental age" of 12.

Then, in 1916, Lewis Terman, an American psychologist affiliated with Stanford University, published an adapted version of Binet and Simon's test known as the Stanford-Binet Intelligence Scale (a revised version is still widely used today). Terman added questions appropriate for adults and reported performance on his version of the test with a single score--the "intelligence quotient," or IQ.

Terman borrowed the idea for IQ from German psychologist William Stern, who computed the figure simply by dividing mental age by chronological age, then multiplying by 100. So a 10-year-old who did as well as the average 12-year-old would receive an IQ of 120 (12/10 x 100).
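
For anyone who likes seeing the arithmetic spelled out, here is a minimal sketch of that ratio formula (the function name and numbers are just my own illustration, not anything from the original tests):

    def ratio_iq(mental_age, chronological_age):
        """Ratio IQ as described above: mental age divided by
        chronological age, multiplied by 100."""
        return mental_age / chronological_age * 100

    # The 10-year-old from the example, testing at a 12-year-old level:
    print(ratio_iq(12, 10))  # 120.0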

Growing Up



For obvious reasons, "mental age" scoring never worked well for adults (an 80-year-old scoring as well as the average 40-year-old gets an IQ of 50), and the entire concept of "mental age" has since fallen out of vogue. Today, many intelligence tests still produce overall IQ scores, but most compute them according to test-takers' statistical variation from the average performance of others their age, starting from an average score of 100. Essentially, that means everyone is graded on a curve, where 100 is average for their "class."

The vast majority of people--95 percent--have IQs between 70 and 130. Two-thirds land in an even narrower IQ range, between 85 and 115. But what do these numbers really mean? The evidence suggests that they correlate fairly well with school grades, though they leave something like three-fourths of the variation in student performance unexplained. Their correlation with measures like job performance or salary is weaker.
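
To make that "graded on a curve" idea concrete, here is a rough sketch of deviation-style scoring, assuming the roughly 15-point standard deviation implied by those 70-130 and 85-115 ranges (the names and numbers are illustrative only):

    def deviation_iq(raw_score, peer_mean, peer_sd):
        """Deviation IQ: how far a raw test score sits from the average
        for the test-taker's age group, rescaled to mean 100, SD 15."""
        z = (raw_score - peer_mean) / peer_sd  # distance from peers, in standard deviations
        return 100 + 15 * z

    # Someone scoring one standard deviation above their age group's average:
    print(deviation_iq(raw_score=60, peer_mean=50, peer_sd=10))  # 115.0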

What's more, many experts now agree that standardized intelligence tests don't really sample all forms of intelligence. They don't measure creativity, common sense, social skills, or wisdom, to name a few. Some psychologists and educators want to discard them entirely. Others want to improve them, or use new tests. Whatever the solution, it's clear that a high IQ alone probably won't get you very far in life--unless you find a job taking intelligence tests.




Steve Sampson
January 24, 2005


Want to learn more?
Get in-depth intelligence on intelligence
http://www.michna.com/intelligence.htm

Comments

memeslayer
Jan. 25th, 2005 12:35 am (UTC)
I'm half convinced that every measure of intelligence ever created is either an attempt to justify eugenics or an attempt to make people feel better about doing poorly in mathematics. I'm not sure what the point of these measurements is, anyway. What do you *do* with the information? Two people may have low IQs for completely different reasons!

It's been my observation that intelligence is, at least in part, a self-sustaining phenomenon. People who begin to identify themselves as stupid will tend to keep doing so, especially if that identity is reinforced. This is one reason why I think "tracking" systems in schools are a bad idea. IMHO intellectual confidence is far more important than pure aptitude.
tenthz
Jan. 25th, 2005 01:44 am (UTC)
I know what you're getting at. Except we humans like to try to quantify EVERYTHING when we'd be better off attempting to qualify.

Le *sigh*
