Never Smart Enough

Everyone wishes for higher intelligence. Like beauty and fitness, it’s another quality everybody seems to want. But at some point in life, most people accept what they have and just plow ahead. This sense of defined limits comes from grades, standardized tests, performance evaluations, and chosen pathways, reinforced throughout life by competitive comparison. Because of this, attitudes toward intelligence become a perfect set-up for enhancement marketing. Rarely is the definition of intelligence questioned, even though the concept is extremely murky. Instead, what gets advanced is the hope of salvation: the supplement, addition, or replacement of native functioning, these days offered in a dizzying array of methods, tricks, and technologies. Memory-boosting supplements like Brainmentin and Optimind flood the consumer market, often pitched to aging baby-boomers.

Students drink Red Bull or acquire ADD drugs to study for tests. Exercise and nutritional products promise sharper thinking through purportedly “natural” means. Dig a little further, and one finds unexamined values in intelligence discourse, which privilege reasoning and memory over just about everything else. Important as such traits may be, alone they can’t account for the many and diverse ways people navigate their lives, adapt to changing circumstances, or act in creative ways.

So, what is intelligence? The Cambridge Dictionary says it’s the “ability to understand and learn well, and to form judgments and opinions based on reason.” Most other sources say roughly the same thing. Yet people who study intelligence argue that single definitions just won’t do. There simply are too many variables that go into “intelligent” thinking and behavior, among them cognition, capacity, context, experience, emotion, orientation, language, memory, motivation, and overall physical health. Definitions of intelligence have changed throughout history and vary from culture to culture. Western societies in particular tend to value analytical skill over other traits. Critiques of such narrow thinking have a long history in philosophy, with Socrates, Plato, and Aristotle each coming up with different views. Much in these early debates focused on the question of knowledge itself and how people express their thoughts. But as societies became more bureaucratic and mechanized, increasing value was placed on spreadsheets, metrics, and algorithms.

Obviously, reason remains a bedrock value in fields like science, engineering, medicine, and law. Certain kinds of questions can only be answered that way. But this becomes a problem when reason gets pushed into everything. Much in the humanities, arts, and areas of the social sciences simply can’t be reduced to rigid proofs or numbers. And quantification alone often falters when looking at multidimensional or intersecting matters. Some critics argue that economic worries lie behind the push to render everything like a spreadsheet, and that the logic of business is intruding where it shouldn’t. This becomes an issue of fairness when judgments get made about programs or people that might otherwise be valued differently (and with different findings) if assessed by other means. These matters now drive ongoing debates over standardized testing.

After all, stratification drives testing, whether on an IQ test or the SAT. The first IQ tests were developed in 1908 by Alfred Binet to help schools in France sort children. Shortly afterwards, Stanford University’s Lewis Terman modified the instrument into what would become the “Stanford-Binet IQ Test.” The U.S. Army used the measures during World War I to assign different roles to troops, and before long American public schools started using IQ to identify “gifted” youngsters. Eugenic beliefs in inherited traits soon began influencing U.S. testing. This led to racialized assumptions about intelligence differences, confirmed at the time by what appeared to be “scientific” data. Things soon got worse for others scoring badly on the tests, especially the economically disadvantaged and those with learning differences. At its darkest, proponents of eugenics used IQ results to identify “idiots” and the “feebleminded,” branding them threats to the American gene pool. This led to the forced sterilization of some 65,000 people with low IQ scores, a campaign validated by the U.S. Supreme Court’s 1927 Buck v. Bell decision. Those sterilized were disproportionately poor or people of color. Government sterilization on the basis of intelligence or criminality continued in some states until the 1970s.

Opposition arose from legal, civil rights, and scientific camps, beginning in the 1970s with suits filed by the Southern Poverty Law Center on behalf of prison inmates. Later challenges accused the tests of cultural bias, especially in portions of the exams based on Eurocentric knowledge. Further scrutiny confirmed that performance on such tests was less a matter of “intelligence” per se than of an individual’s upbringing, socioeconomic status, and access to quality education. By the 1980s, psychologists like Howard Gardner would argue that IQ’s narrow focus excluded the “multiple intelligences” at work in people’s minds. Gardner said that an exclusive emphasis on reasoning ignored aptitude in language, communication, practical problem-solving, creativity, and ethical analysis. Later this argument was modified by neuroscientists who deemphasized the idea of isolated brain functions, replacing it with the concept of a “general intelligence” capable of performing different tasks. In recent years, accumulating evidence of unfairness in standardized testing has led many colleges and universities to end use of exams like the SAT and ACT.

The “information age” of the twenty-first century brought even more attention to intelligence, as cognitive skills replaced physical work in many jobs. Consumer culture has become overrun with books, apps, and devices to improve one’s “brain power,” as well as so-called “smart” foods and drugs. Cognitive enhancement also underlies the nomenclature of smart phones, smart appliances, smart cars, home assistants, and wearables. Many of these items truly do increase human capacity, inasmuch as a device as commonplace as an iPhone now carries more than 250,000 times the data capacity of the Apollo 11 guidance system. In this arena “smartness” functions alternately as a metaphor for connectivity and for the ability of devices to operate semi-autonomously. Regardless of specific meanings, the appeal of all smart devices lies in their interface with a human mind.

Then there is artificial intelligence (AI), a technology rife with mixed feelings of hope and fear. Amid enthusiasm in the late twentieth century about rising computer capability, speculation swelled about the future of AI. Predictions began circulating, now a famous science fiction trope, about computers outpacing humans and taking on a life of their own. Such worries about runaway technology date to the machine age, with AI giving them renewed urgency. In the early 1990s science fiction writer Vernor Vinge predicted what he termed a technological “singularity,” occurring when super-intelligent computers gained sufficient capacity to improve themselves without human involvement. Frightening many was Vinge’s claim that the singularity would happen in an unexpected “explosion.” Soon this became a favorite theme in futuristic stories about AI, many of which saw bad outcomes in a world ruled by machines. Tempering such paranoia have been futurists like More, who has argued that AI will evolve more gradually and in ways that can be anticipated and controlled. In More’s view, the more pressing threats from AI lie in job losses from easily automated tasks, as seen today in call centers and warehouses (see Chapter 1).

None of this has slowed the search for superintelligence, whether through additions to the body or external supports. Today these come together in a growing array of exotic devices. These include a technology known as a “Multielectrode Array” (MEA), entailing implants with thread-like electrodes (thousands in some cases) that can precisely stimulate parts of the body. Elon Musk’s tech startup Neuralink, which unveiled its work publicly in 2019, is developing MEAs to stimulate portions of the brain. Initially Neuralink has been working to alleviate physical paralysis, with hopes, according to Musk, of later creating “superintelligence.” Neuralink claims its robotically performed insertion procedure is as “painless and safe” as laser eye surgery, although no patient has yet undergone the operation. Since 2016, a parallel project has been working on neural interfaces, funded with $100 million from Braintree founder Bryan Johnson. Dubbed Kernel, the ambitious enterprise plans to use implants to help people with conditions like Parkinson’s disease, and later to “reboot the brain” to create real-life cyborgs.

Keep in mind that implants already are widely used in medicine. In cataract surgery the natural lens of the eye is removed and replaced by an intraocular lens made of plastic or acrylic. First introduced in the 1950s, such procedures now number over 3 million annually in the U.S. alone. Heart pacemakers entered mainstream medicine in the 1960s, owing in part to the invention of the long-lasting lithium battery. Pacemakers use a tiny computer inserted under the skin to keep the heart beating consistently. Cochlear implants to improve hearing came into use in the 1970s. Such implants bypass parts of the peripheral auditory system to electrically stimulate the cochlear nerve, which sends sound messages to the brain. People with diabetes now can get an Eversense implant that monitors blood sugar, with results displayed on a mobile phone. This eliminates the need for painful finger-sticking to check blood glucose. And of course, various mechanical procedures long have been used on the brain itself. These date to the introduction in the 1930s of electroconvulsive therapy (ECT) for people unresponsive to other treatments. Today’s highly calibrated ECT treatments are done under general anesthesia and often produce minimal visible effects. Another, less invasive treatment called “transcranial direct-current stimulation” places low-current electrodes on the skull to reduce depression, with some research showing it can improve cognition. A similar technology called “transcranial magnetic stimulation” (placing magnets against the skull) already is in use to treat conditions like anxiety, bipolar disorder, and substance abuse.

Most of what I’ve just described is geared toward medical treatment rather than supplemental enhancement. Indeed, the vast majority of scientists working on the machine-brain interface say their goal is healing rather than anything else. But as is often the case, procedures introduced to serve pressing medical needs often tempt those who simply want more capacity. This means the future of intelligence enhancement is very much up for grabs. As always, questions of fairness haunt such projects, especially in research funded by the super-rich. If “prosthesis is the origin of human inequity,” as philosopher Bernard Stiegler has argued, there certainly is cause for concern. As the above discussion has highlighted, the competitive history of intelligence measurement, testing, and institutional inequity has often worked to reinforce existing hierarchies and social biases. The question is not so much whether AI will “take over” humanity as whether humanity will use such technology to its own detriment.
