“I will destroy all humans,” said “Sophia,” a humanoid robot unveiled in 2016. The anticipatory fear that robots like these, an advanced form of artificial intelligence (AI), will one day take over the world is terrifying. But it is worth thinking about the AI that isn’t humanoid, that doesn’t unsettle us with a physical likeness to humanity, and that is already pervasive throughout much of our society. What if 2023’s most promising and most unsettling AI was right at our fingertips?
ChatGPT is becoming increasingly competent and rational. There is an idea of intentionality integral to what makes up a mind: for something to have a mind of its own, it must act with rational intention. From “Mind Design III” by John Haugeland: “If an artificial system can be produced that behaves on its own in a rational manner…then it has original intentionality — it has a mind of its own, just as we do.” Of course, this is only one conception of mind, but it speaks directly to ChatGPT. ChatGPT already works by gleaning information from the context of a conversation in order to generate its best response: it reasons from what it knows of the exchange so far and from where it expects the conversation to head. The more rational ChatGPT becomes, the stronger the suggestion that it is constructing a mind of its own. The lightning-fast advancement of ChatGPT’s capabilities appears, so far, to be well monitored by OpenAI; the organization has done a reasonably good job of ensuring that its intelligence works to serve humans rather than exploit them. But what would happen if that were no longer the case?
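To make that context-dependence concrete, here is a minimal sketch, in Python, of how an application might converse with a model like ChatGPT. The client calls follow OpenAI’s published chat-completions API; the model name, the helper function, and the example prompts are illustrative assumptions rather than anything OpenAI prescribes.

```python
# Minimal sketch: each request carries the full conversation so far,
# so the model "gleans" context only from what it is re-sent each turn.
# Assumes the official `openai` Python package (v1+) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running transcript is the model's only "memory" of the conversation.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,       # the entire context is sent every turn
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Each answer is conditioned on everything said before it.
print(ask("What is a humanoid robot?"))
print(ask("Name one unveiled in 2016."))  # "one" resolves via context
```

The design point worth noticing is that the model is stateless between requests: whatever rational intention it displays within a conversation is reconstructed from the transcript on every single turn.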
On November 17, Sam Altman was ousted from his position as CEO of OpenAI. The organization’s statements were cryptic: from what it released, the firing came because Altman had not been “consistently candid” with the board. OpenAI clarified that the decision did not stem from any financial or security breach on Altman’s part, but simply from a harmful breakdown in communication. So what could Altman have failed to communicate, unrelated to the organization’s financial, business, safety, security, or privacy practices, that cost him his job?
I think about this in terms of two OpenAIs. One is the non-profit: the governing organization, the one that reigns above all else. The other is the for-profit: the one that raises money to bolster the cause. OpenAI’s website reads, “Our mission is to ensure that artificial general intelligence benefits all of humanity.” At its core, OpenAI follows a utilitarian approach that puts collective benefit first. Its cause, in other words, aligns with the non-profit, and the for-profit exists simply to buttress that function. So for Altman to lose his job, his lack of candor must have threatened this fundamental goal; it must have driven a wedge between the for-profit and the non-profit and created a schism.
This became most apparent in what happened after Altman’s brief departure. Microsoft’s chief executive, Satya Nadella, responded to OpenAI’s decision by announcing a new advanced AI research division at Microsoft, to be led by Altman. The vast majority of OpenAI employees threatened to quit and follow him there. Just five days later, with an open letter signed, Altman reclaimed his role at OpenAI. And yet, even then, the organization’s turmoil only made the nightmarish aspects of today’s artificial intelligence clearer. Firing Altman was an attempt to preserve OpenAI’s founding goal: to remain, at heart, a non-profit working for the good of humanity. Nadella’s actions afterward proved that this hardly matters. The for-profit always has the capability to pull OpenAI into its own domain, and the non-profit at the organization’s center is unnervingly fragile.
Capitalistic artificial intelligence organizations shift the purpose of their creations from serving humans to taking from them. And, after all, is that not the underlying fear in every speculation we have about AI? A world where machines reap the benefits of humanity and we lose control? We fear a world speckled with dramatically intimidating humanoids and futuristic architecture. But what if we’re already there?