I have noticed that good-quality high-range IQ tests approximately follow this logic:
| Solvability of item | Rarity | IQ (sd15) |
|---|---|---|
| 90% | 90% | 120 |
| 70-75% | 98% | 130 |
| 50% | 99.9% | 146.5 |
| 25% | 99.997% | 160 |
| 12.5% | 99.9999% | 171.5 |
| 6.25% | 99.999997% | 181 |
| 3.125% | 99.9999999% | 190 |
This table is a hint on how to make norms, but also a hint on how to be critical.
We see that a 50% relative change in solvability, i.e. halving it, corresponds to roughly a 10 IQ point change.
What is the logical interpretation in terms of my quality analysis?
Simply put, 50% quality means a 10 IQ point uncertainty: your score is reliable only within a +/- 10 point interval.
As 0.7 squared is approximately 0.5, 70% quality means about a 5 IQ point uncertainty.
As 0.5 squared is 0.25, 25% quality doubles the uncertainty to about 20 points, so 30% quality means roughly a 15-20 IQ point uncertainty.
We can conclude that quality higher than 70% is desirable, while quality lower than 50% is almost useless.
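The squaring argument above amounts to a logarithmic rule: every halving of quality adds about 10 IQ points of uncertainty. A small sketch of that reading (my formulation of the rule, not a formula given in the post):

```python
import math

def quality_to_uncertainty(quality: float, points_per_halving: float = 10.0) -> float:
    """Map a test quality in (0, 1] to an approximate IQ uncertainty in points,
    assuming each halving of quality adds `points_per_halving` points,
    i.e. uncertainty = 10 * log2(1 / quality)."""
    return points_per_halving * math.log2(1.0 / quality)

for q in (0.9, 0.7, 0.5, 0.3, 0.25):
    print(f"quality {q:4.2f} -> about +/- {quality_to_uncertainty(q):4.1f} IQ points")
```

This reproduces the figures above: roughly 5 points at 70% quality, 10 points at 50%, and 17-20 points around 25-30%.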
So, can we measure a high IQ?
Yes, we can, but only with good tests and with a certain amount of uncertainty.
Averaging scores from several tests is also recommended.
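As a rough illustration of why averaging helps, here is a sketch under the standard assumption that the errors of the individual tests are independent, so the uncertainty of the mean shrinks roughly with the square root of the number of tests (this assumption is mine, not stated in the post):

```python
import math

def averaged_score(scores: list[float], uncertainties: list[float]) -> tuple[float, float]:
    """Average several IQ scores and combine their uncertainties,
    assuming independent errors (plain mean, error of the mean)."""
    n = len(scores)
    mean = sum(scores) / n
    # Root-sum-square of the individual uncertainties, divided by n,
    # gives the uncertainty of the (equal-weight) mean.
    combined = math.sqrt(sum(u * u for u in uncertainties)) / n
    return mean, combined

# Hypothetical example: three tests of ~70% quality (+/- 5 points each).
mean, err = averaged_score([152, 158, 149], [5, 5, 5])
print(f"averaged score {mean:.1f} +/- {err:.1f}")  # about 153 +/- 2.9
```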
Join Real IQ Society!
In making my norms I combine the table above with the following table:
| Top p% of testees | IQ (sd15) |
|---|---|
| p = 90 | 130 |
| p = 45 | 146.5 |
| p = 10 | 160 |
| p = 1-2 | 171.5 |
I obtained these tables from the statistical data of Logima
Strictica 36 as explained in the first post of my
blog.
For other tests these two tables do not coincide perfectly, but I always try to stay somewhere between them, giving a slight advantage to the second table.
That is my way of avoiding norms that are either too generous or too strict.
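As an illustration of that compromise, here is a sketch that blends the IQ estimates from the two tables with slightly more weight on the second one; the 45/55 split is a placeholder of my own, since the post does not give exact weights:

```python
def blended_norm(iq_from_rarity_table: float, iq_from_testee_table: float,
                 weight_second: float = 0.55) -> float:
    """Weighted compromise between the two tables, slightly favouring
    the second ('top p% of testees') table. The 0.55 weight is a
    hypothetical choice for illustration only."""
    return (1.0 - weight_second) * iq_from_rarity_table + weight_second * iq_from_testee_table

# Hypothetical example: the two tables suggest 158 and 162 for the same raw score.
print(f"blended norm: {blended_norm(158.0, 162.0):.1f}")  # -> 160.2
```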