Elon Musk’s new AI startup is as ambitious as it is doomed


Almost nothing is known about Elon Musk's latest effort, an artificial intelligence startup called xAI. But "almost nothing" is still something. And we can learn a lot from the little we know.

As Cointelegraph recently reported, Musk announced xAI on July 12 in a statement composed of three sentences: “Today we announce the formation of xAI. The goal of xAI is to understand the true nature of the universe. You can meet the team and ask us questions during a Twitter Spaces chat on Friday, July 14.”

Based on this information, we can deduce that xAI exists, that it is doomed, and that more details about how it will fail will be revealed on Twitter. The reason it is doomed is simple: the laws of physics prevent it from succeeding.

According to a Reuters report, Musk's motivation for xAI is based on a desire to develop safe artificial intelligence (AI). At a recent Twitter Spaces event, he said:

"If you tried to understand the true nature of the universe, it's actually the best I can think of from an AI safety standpoint."

This is a laudable goal, but any attempt to understand the "true" nature of the universe is doomed to failure because there is no central source of truth against which we can verify our theories.

It's not that humans aren't smart enough to understand the nature of the universe; the problem is that the universe is very, very big and we are trapped inside it.

Heisenberg's uncertainty principle says unequivocally that certain pairs of physical properties cannot both be pinned down simultaneously through observation or measurement. This is why we can't just measure the distance between Earth and Uranus, wait a year, measure it again, and determine the exact rate of the expansion of the universe.
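For readers who want the limit spelled out, here is a minimal illustrative sketch (my own, not from the article) of the uncertainty relation Δx·Δp ≥ ħ/2: the more precisely a particle's position is pinned down, the larger the unavoidable spread in its momentum. The numbers below are purely for demonstration.

```python
# Illustrative sketch of the Heisenberg uncertainty relation: delta_x * delta_p >= hbar / 2.
# The input value is an assumption chosen for demonstration, not a measurement.

HBAR = 1.054_571_817e-34  # reduced Planck constant, in joule-seconds


def min_momentum_uncertainty(delta_x_m: float) -> float:
    """Smallest momentum spread (kg*m/s) allowed for a given position spread (m)."""
    return HBAR / (2.0 * delta_x_m)


# Confine a particle to roughly the width of an atom (~1e-10 m):
print(f"{min_momentum_uncertainty(1e-10):.2e} kg*m/s")  # ~5.3e-25 kg*m/s
```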

The scientific method requires observation, and as the anthropic principle teaches us, all observers are limited.

In the case of the observable universe, we are further constrained by the nature of physics. The universe is expanding at a rate that prevents us from measuring anything beyond a certain point, no matter what tools we use.

The expansion of the universe doesn't just make it bigger. It creates a distinct and definable "cosmological horizon" that the laws of physics prevent us from measuring beyond. If we sent a probe out at the maximum speed allowed by the laws of physics, the speed of light, then every part of the universe beyond the point that probe could reach in a given amount of time would be inaccessible forever.
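To give a rough sense of scale, here is a back-of-the-envelope sketch (my own illustration, assuming a Hubble constant of about 70 km/s per megaparsec) of the Hubble radius: the distance at which simple Hubble-law recession reaches the speed of light. The true cosmological event horizon requires a full expansion model, but the order of magnitude is similar.

```python
# Back-of-the-envelope sketch (not from the article): the Hubble radius c / H0,
# the distance at which naive Hubble-law recession (v = H0 * d) reaches the speed of light.
# H0 = 70 km/s/Mpc is an assumed, approximate value.

C = 299_792_458.0           # speed of light, m/s
H0 = 70.0                   # Hubble constant, km/s per megaparsec (assumed)
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
LIGHT_YEAR_M = 9.4607e15    # metres in one light-year

h0_per_s = H0 / KM_PER_MPC  # convert H0 to units of 1/s
hubble_radius_m = C / h0_per_s

print(f"Hubble radius ~ {hubble_radius_m / LIGHT_YEAR_M / 1e9:.1f} billion light-years")
# ~14 billion light-years with these assumptions.
```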

This means that even a hypothetical superintelligence capable of processing all the data that has ever been generated would still not be able to determine any basic truth about the universe.

A slight twist on Schrödinger's cat thought experiment, called Wigner's friend, demonstrates why this is the case. In the original, Erwin Schrödinger envisioned a cat trapped in a box with a vial of poison and a hammer that would smash the vial, killing the cat, if a quantum process (the decay of a radioactive atom) occurred.

One of the fundamental differences between quantum and classical processes is that quantum processes can be affected by observation. In quantum mechanics, this means the hypothetical cat is both alive and dead until someone observes it.

The physicist Eugene Wigner was reportedly "upset" by this and decided to put his own spin on the thought experiment to challenge Schrödinger's claims. His version added two scientists: one inside the lab opening the box to see whether the cat was dead or alive, and one outside the lab opening the lab door to see whether the scientist inside knew whether the cat was dead or alive.

What xAI seems to be proposing is a reversal of Wigner's thought experiment. Apparently, the company wants to take the cat out of the box and replace it with a generative pre-trained transformer (GPT) AI system, i.e., a chatbot like ChatGPT, Bard, or Claude 2.

Related: Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT

Instead of asking an observer to determine whether the AI is dead or alive, the plan is to ask the AI to discern basic truths about the lab outside the box, the world outside the lab, and the universe beyond the cosmological horizon, all without making any observations.

What xAI seems to propose would amount to the development of an oracle: a machine capable of knowing things for which it has no proof.

There is no scientific basis for the idea of an oracle; its origins are rooted in mythology and religion. Scientifically speaking, the best we can hope for is that xAI develops a machine capable of analyzing all the data that has ever been generated.

There is no conceivable reason to believe this would make the machine an oracle, but perhaps it would help scientists see something they had missed and lead to greater understanding. Maybe the secret to cold fusion lies in a Reddit dataset somewhere that no one has managed to use to train a GPT model yet.

But unless the AI system can defy the laws of physics, any answers it gives us regarding the "true" nature of the universe should be taken on faith until confirmed by observations made from beyond the box and the cosmological horizon.

For these reasons, and many others related to how GPT systems really interpret queries, there is no scientifically viable method by which xAI, or any other AI company, can develop a binary machine that runs classical algorithms capable of observing the truth about our quantum universe.

Tristan Greene is deputy news editor at Cointelegraph. In addition to writing and researching, he enjoys gaming with his wife and studying military history.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed herein are those of the author alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.