Shouting at the clouds about artificial 'intelligence'

First, allow me to make the semantic argument that AI (“artificial intelligence”) is a misnomer. A strict definition of “intelligence,” however, does little to address the practical implications of the technology. The term “artificial intelligence” was coined around the same time that chess machines were being designed to challenge human opponents, along with the idea that a machine could study an opponent’s moves and potentially learn from them: “machine learning.” Nowadays, the term is almost always used synonymously with LLMs (“large language models”). Language models are constantly depicted in the news and media as a form of artificial intelligence capable of its own cognitive effort. Such a depiction is misleading; it preys upon a misunderstanding of what the technology really is, both in theory and in practice.
To that end, I firmly believe that the so-called artificial intelligentsia is, by and large, a marketing ploy designed to accumulate vast wealth for those facilitating the growth of language models. The practical implications of these LLMs are severely limited, and most of the rhetoric about how they will eventually replace most human beings is just that: rhetoric, spewed by baby boomers with little-to-no understanding of how the technology works. That is not to say that I, on the other hand, have any specialized knowledge of the inner workings of language models; what I have is a heightened awareness of their practical limitations, gained from using them extensively enough that those limitations become apparent. In my experience, a language model functions best as a glorified search engine.
The criticism of artificial intelligence as “unintelligent” naturally devolves into an argument about what constitutes intelligence. In the Phaedo, the character of Socrates ties perception to cognition. (This is a literary simplification of the intertwined, though distinguishable, aspects of perception and cognition; to understand their relationship, one must understand the limits of perception.) LLMs are incapable of the cognitive aspect of perception. They are an exercise in pattern matching, and most of their accomplishments are the direct result of an enormous body of contextual information, without regard for the informal reasoning that occurs at the most basic level of human perception, which is to say its cognitive aspect. The artificial intelligentsia arrives at answers simply on the basis of available information, rather than through an exercise in reasoning that would elicit a natural conclusion in a human mind. For this reason, it is entirely dependent upon human beings to perform the cognitive effort from which a language model might extrapolate an answer.
In ancient Athens, the state would typically carry out executions by administering poison hemlock, its seeds ground up and mixed into wine. The young Phaedo describes Socrates as having consumed the poison “as though it were a draught of wine,” handed to him by the physician; it paralyzed his body and brought on a quick and most peaceful death. And yet, in spite of that death, his love of wisdom (philia sophia) continues to animate hypothetical discussions about a technology as distant from the Platonic dialogue as the “perfect” forms of sensible objects are from our own imperfect perception of those objects. Human brains are not pattern-matching receptors of information, but the facilities of conscious life, regarded by Plato as “housing the rational soul.” Language models will never amount to this rational form of thinking in any respect, and that alone prohibits their supplanting of human beings in most aspects of life. (The only way they might supplant human beings is, conversely, by facilitating our intellectual decline.)
Instead, large language models are the direct consequence of the increased availability of computing power, which is itself a consequence of more efficient hardware. In fact, the technology that drives natural language processing existed in the form of statistical models long before computers capable of simulating them at full scale. The aggressive positioning of “AI technology” in our current economy is simply another vehicle for the tech bros’ ploy of hype-driven investment, which will burst in no less spectacular a fashion than the dot-com bubble did 24 years ago. I, for one, am tired of hearing rhetoric about how this over-hyped technology will bring about either the emancipation or the subjugation of human beings. We should instead reckon with the preexisting social structure that hungers for a technology capable of subjugating individuals even faster than it has over these past 75 years. Humans will continue to be replaced in those aspects of life that call for limited cognitive effort (self-checkout systems in lieu of cashiers, for instance), and that replacement will stunt their cognitive development to the point where language models can be positioned as the superior intelligence. And so the underlying concern must not revolve around the potential for AI to surpass human intelligence, but around the ongoing intellectual decline of human beings, to the point where such a technology becomes a crutch for the growing plurality of those incapable of thinking for themselves…