- photo by Mila Zinkova/Wikimedia Commons
- Ants in 44-million-year-old Baltic amber.
Can humans become more intelligent? Probably not, according to an article in July's Scientific American that considered several approaches to upping our IQ, each coming with its own set of problems. You could increase brain size, for instance, but the greater average distance between neurons would slow the brain down. Also, more neurons require more energy, and at only 2 percent of body mass, the brain already consumes 20 percent of body energy. (Not to mention the problem of how a woman could birth a larger head through her currently maxed-out pelvis!) Another approach is to pack more, smaller neurons into our existing brains - but that would result in too much random "noise" in brain circuitry. The article concludes that perhaps "life has arrived at an optimal neural blueprint."
What wasn't addressed, however, was the question implicitly asked over 50 years ago by astronomer Frank Drake: Have we already overstepped the viable limits of intelligence? Drake's interest is SETI, the search for extraterrestrial intelligence through the detection of anomalous radio waves. He came at the "limits of intelligence" question via his SWAG (Scientific Wild-Ass Guess) for "N," the number of intelligent civilizations in the Milky Way with whom communication may be possible. "Intelligent" in this case simply means that they emit radio waves into space.
After plugging into the "Drake Equation" a bunch of probabilities (What fraction of stars have planets? What are the odds of life developing on a planet? etc.) and the annual rate at which stars form in the galaxy, we arrive at an estimate for the number of communicating civilizations that can be expected to arise each year. N is this figure multiplied by "L," the average time span such civilizations can be expected to release detectable signals into space.
Why L? Because the equation assumes that technology contains the seeds of its own destruction: A civilization smart enough to build a radio transmitter is smart enough to blow itself up. If this is true, what's a reasonable value for L? How long a window might a technologically advanced civilization have between (1) developing radio transmitters and (2) wiping itself out in a nuclear holocaust / creating an unstoppable plague / overpopulating itself to extinction / (fill in your own doomsday scenario)? If the staying power of previous earthly civilizations is anything to go by, L is fleeting. Rome lasted 500 years; the Mayans twice that. The British Empire rose and fell in three centuries. The Thousand-Year Reich managed just 12 years.
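For the curious, the back-of-envelope arithmetic above can be sketched in a few lines of Python. The equation is N = R* × fp × ne × fl × fi × fc × L; every input value below is an illustrative guess of mine, not a figure from Drake or from the article:

```python
# A minimal sketch of the Drake Equation: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values here are illustrative assumptions, chosen only to show
# how strongly N depends on L, the lifetime of a signaling civilization.

def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# With lifetime = 1, the result is the rate at which communicating
# civilizations arise per year (all inputs are guesses):
rate = drake_n(
    r_star=1.0,  # stars formed per year in the Milky Way
    f_p=0.5,     # fraction of stars with planets
    n_e=2,       # habitable planets per planet-bearing star
    f_l=0.5,     # fraction of habitable planets where life arises
    f_i=0.1,     # fraction of those that evolve intelligence
    f_c=0.1,     # fraction of those that emit detectable signals
    lifetime=1,
)

# N for a fleeting, a moderate, and an optimistic value of L:
for L in (100, 10_000, 1_000_000):
    print(f"L = {L:>9,} years -> N = {rate * L:,.1f}")
```

Under these made-up inputs, a short-lived civilization (L on the order of centuries, like the empires above) leaves N below one, while a long-lived one fills the galaxy with neighbors - the whole game is in L.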
We live on a knife-edge world in which nine nations threaten the planet with 20,000 nuclear warheads, oceans are dying, and the next mutant virus could be unstoppable. Next week, next year, next century, next millennium, we're probably doomed. Chances are, these 100,000-year-old big brains of ours have a limited shelf life. Ant brains, on the other hand, with 400,000 times fewer neurons than ours, have been around a thousand times longer, and surely will be here long after we're gone. Our problem, as my mother used to say, is that we're too smart for our own good.
Barry Evans (firstname.lastname@example.org) just feels lucky that we've made it this far. His Field Notes compilation can be found at Northtown Books and Eureka Books.