
AI’s Hidden Flaw: Why ChatGPT Gives Different Answers (And How Scientists Are Fixing It)


Artificial intelligence tools such as ChatGPT are expected to be precise. You put the same thing in, you expect the same thing out, right? In practice, it is not so. ChatGPT can give different answers to the same question even when you instruct it to be completely deterministic. For science, this is a huge problem. Science requires reproducibility, and experiments can be derailed by tiny variations in an AI's responses. Researchers have been digging into the causes, peering inside the machinery of AI to figure out what is happening.

For years, the suspected culprit was the GPU, the chip on which AI does its math. People reasoned that doing the arithmetic in a different order, especially with very small numbers, could produce tiny variations. This is a property of floating-point arithmetic. Think of adding a penny to a million dollars and then taking the million dollars back: that penny can disappear. But when researchers actually tested this on AI, those small math quirks turned out not to be the problem. Running the same big matrix computation on a GPU again and again yields the same answer every time. So the easy explanation was wrong.
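
You can see the effect in a few lines of Python (a standard illustration of floating-point behavior, not the researchers' own test):

```python
# Floating-point addition is not associative: regrouping changes the result.
print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
print(0.1 + (0.2 + 0.3))    # 0.6

# The "penny and a million dollars" effect: a tiny value is lost when it
# is absorbed into a huge one first.
print((1.0 + 1e16) - 1e16)  # 0.0  -- the 1.0 has vanished
print(1.0 + (1e16 - 1e16))  # 1.0
```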

The Problem of AI Batching: Why Responses Can Vary Under Load

The real problem is something called “batching.” When you submit a request to an AI server, you are not alone: many people are sending requests at the same moment. To be efficient, the server bundles a large number of requests together and runs them all at once. This is a clever way to save money, but here is the catch: it can change the order of the math inside the AI. That order depends on the size of the group and how the work is divided, and your prompt may share a batch with 10 requests one day and 50 the next.

That small difference can change the final words the AI selects. This does not mean the AI itself is random; it is that the system around it, such as how busy the server happens to be, is unpredictable. From the server's point of view, everything is still deterministic. But to you, it feels random. According to the researchers, the problem is not the math but the lack of a property called batch invariance: a request should get the same response whether it is processed alone or alongside others. Most AI software is simply not designed that way.
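
You can check the missing batch invariance yourself with a few lines of PyTorch (a sketch; whether and how much the outputs differ depends on your hardware and library version, and on CPU the difference may be zero):

```python
import torch

# Multiply one row by a matrix: once on its own, once inside a big batch.
A = torch.randn(2048, 4096)
B = torch.randn(4096, 4096)

alone   = A[:1] @ B        # the row processed by itself
batched = (A @ B)[:1]      # the same row processed as part of a batch

print(torch.equal(alone, batched))    # often False on a GPU
print((alone - batched).abs().max())  # the size of the discrepancy
```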

Fixing Batch Invariance: Redefining AI's Fundamental Operations

Researchers have found a way to make AI outputs consistent. They focused on transforming how the AI executes its fundamental operations, the core computations that AI models perform.

Recoding the AI Brain: Rethinking the Big Operations

The team focused on three key operations: RMS norm, matrix multiplication, and attention. These are essential to how AI models such as ChatGPT process information. The goal was to rewrite these operations so they behave identically at any batch size, which means the results stay consistent.

Normalization and Matrix Multiplication: Forcing Consistency

Making normalization consistent came first. In normalization, the AI adds a large set of numbers together. Engineers usually split this work across different parts of the GPU because it is fast, but doing so can slightly alter the sequence of calculations, and even the slightest change can alter the final product. The researchers decided to stop switching calculation strategies: they specified that the AI always sums the numbers in the same order. It is slower, but the results are consistent.
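
Here is a toy version of the idea (a NumPy sketch, not the researchers' actual GPU kernel): give every row one fixed left-to-right reduction, so its sum never depends on how many other rows share the batch.

```python
import numpy as np

def rms_norm_fixed_order(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """RMS norm where each row is reduced left-to-right in one fixed order.

    Because the per-row summation order never depends on the batch size,
    a given row produces bit-identical output alone or inside any batch.
    """
    out = np.empty_like(x)
    for i, row in enumerate(x):
        acc = 0.0
        for v in row:              # fixed left-to-right reduction
            acc += float(v) * float(v)
        rms = (acc / row.size + eps) ** 0.5
        out[i] = row / rms
    return out

x = np.random.randn(4, 8).astype(np.float32)
single  = rms_norm_fixed_order(x[:1])
batched = rms_norm_fixed_order(x)[:1]
assert np.array_equal(single, batched)  # bit-identical at any batch size
```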

The same went for matrix multiplication. Existing libraries switch strategies depending on the batch size in order to be fast. The team forced the system to use a single approach. They lost some speed, about 20%, but the outputs stopped changing. For reproducible predictions, that consistency is worth more than speed.
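
The flavor of the fix, in sketch form (a conceptual NumPy illustration with a hypothetical tile size; the real fix lives in custom GPU kernels): instead of letting the library re-pick a strategy per batch size, always reduce over the inner dimension in the same tile order.

```python
import numpy as np

TILE_K = 64  # one fixed tile size, never re-chosen per batch size (illustrative)

def matmul_fixed_strategy(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Matrix multiply that always reduces over K in the same tile order.

    Real libraries pick different kernel configurations as the batch
    (rows of A) grows; pinning one configuration trades some speed for
    outputs that do not depend on batch size.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    out = np.zeros((M, N), dtype=A.dtype)
    for k0 in range(0, K, TILE_K):  # fixed reduction order over K
        out += A[:, k0:k0 + TILE_K] @ B[k0:k0 + TILE_K, :]
    return out
```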

The Attention Mechanism: The Toughest Nut to Crack

The attention component was the trickiest. It helps the AI decide which words matter most. To save time, inference engines usually process old (cached) words and new words differently, but that makes the results depend on how many words are stored. The solution was to treat everything uniformly: the researchers fixed the kernel to work in fixed-size chunks even when the AI processes just one word at a time. That stopped it from changing its behavior depending on load.
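
In sketch form (a simplified NumPy illustration with a hypothetical chunk size; the real fix is inside the GPU attention kernel): reduce over the stored keys and values in fixed-size chunks, so the reduction order never depends on how the server schedules the work.

```python
import numpy as np

CHUNK = 256  # fixed split size for reducing over the KV cache (illustrative)

def attend_fixed_chunks(q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """One query attending over a KV cache, reduced in fixed-size chunks.

    The chunk boundaries depend only on CHUNK, never on server load or on
    how the work is split, so the same query over the same cache always
    yields bit-identical output (softmax stabilization omitted for brevity).
    """
    num = np.zeros_like(V[0], dtype=np.float64)
    den = 0.0
    for s in range(0, len(K), CHUNK):   # fixed chunk boundaries
        w = np.exp(K[s:s + CHUNK] @ q)  # attention weights for this chunk
        num += w @ V[s:s + CHUNK]
        den += w.sum()
    return num / den
```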

The Impact of Deterministic AI: From Reproducible Science to Discovery

Consistent AI is fundamentally beneficial. It helps make science more credible, and it makes AI training far more reliable.

Proving the Idea: The Same Output, Guaranteed

The results were clear. The researchers sent the same prompt 1,000 times at zero temperature. The old system produced 80 different responses; the updated system produced 1,000 identical responses, exactly as it should. Determinism did slow things down a bit: generating 1,000 responses normally took 26 seconds, while the new system needed 55 seconds. That is still usable, and for science, this precision is worth far more than the extra time.
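
A sketch of this kind of test (the client, model name, and prompt here are placeholders; the researchers ran the experiment against their own inference server, not this public API): send the same prompt repeatedly at temperature zero and count the distinct completions.

```python
# Hypothetical determinism check: model and prompt are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()
responses = Counter()
for _ in range(1000):
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "Tell me about Richard Feynman"}],
        temperature=0,
        max_tokens=100,
    )
    responses[r.choices[0].message.content] += 1

print(len(responses))  # 1 if fully deterministic; often dozens in practice
```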

Revolutionizing AI Training: Stability and Reliability

This is critical for AI training. Reinforcement learning requires the AI to behave the same way during training as it does when it is used. If the two do not match, training can fail: the AI's progress may break down without corrective fixes. With the new system, training stays uncomplicated, the way AI learning is supposed to operate. It is more stable and more reliable.
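
The mismatch can be made concrete with a toy sketch (illustrative numbers, not the researchers' training code): if the probabilities the sampler actually used differ even slightly from the ones the trainer recomputes, training is silently off-policy; bitwise-identical numerics drive that gap to exactly zero.

```python
import numpy as np

def estimated_kl(logp_sampler: np.ndarray, logp_trainer: np.ndarray) -> float:
    """Monte Carlo estimate of KL(sampler || trainer) over sampled tokens."""
    return float(np.mean(logp_sampler - logp_trainer))

logp_sampler = np.array([-1.20, -0.55, -2.10])  # log-probs at sampling time

# Nondeterministic kernels: the trainer recomputes slightly different numbers.
logp_trainer = logp_sampler + np.random.normal(0, 1e-3, 3)
print(estimated_kl(logp_sampler, logp_trainer))         # small but nonzero

# Batch-invariant kernels: the recomputation is bit-identical, so KL is 0.
print(estimated_kl(logp_sampler, logp_sampler.copy()))  # exactly 0.0
```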

Empowering Credible Science: The Research of the Future

Why does this matter to you? Because science must be repeatable. Scientists cannot trust their experiments if they cannot get the same result from the same AI experiment twice, and they cannot compare results properly. With fully deterministic AI, researchers get more credible studies. Debugging AI also becomes easier. It is the kind of silent update that makes everything run smoothly.

AI as a High-Tech Innovator: Radical Discovery Beyond Determinism

This is not only about making AI more predictable, though. New kinds of ideas are also beginning to emerge as AI brainstorms. It is helping scientists devise experiments no human being would have contemplated.

Reinventing Experiments: Pushing the Limits of Physics

Physicists are using AI to redesign extremely sensitive experiments. Consider LIGO, the giant gravitational-wave detector. It is incredibly precise, registering changes on a scale smaller than a proton, and designing it took decades. After it successfully detected gravitational waves, researchers wanted to improve it. They wanted to catch waves at higher frequencies, and they used AI to do it.

One AI examined a set of components, mirrors and lasers, to find alternative designs. The first designs were wild: asymmetrical and unattractive. One design, however, featured a long optical ring that exploited little-known quantum noise reduction concepts. Had LIGO had this from the start, it could have been 10-15% more sensitive, which is enormous when you are measuring at that minute scale. LIGO took 40 years of work by thousands of scientists, and an AI spotted an inconspicuous tweak they had missed.

The same team also used AI on an entanglement-swapping experiment, which is significant for quantum technology. The AI discovered a simpler, more effective design that used an interference effect most researchers had not associated with this problem. At first, scientists believed it was incorrect, but the math checked out. A team in China built the experiment, and it was successful. AI is now creating new scientific designs that work.

Revealing Hidden Patterns and Generating Hypotheses

AI is also uncovering concealed patterns inside data. AI models applied to Large Hadron Collider data learned to recognize symmetries without being told the physics. One AI found a universal component of Einstein's relativity using only raw data. In astrophysics, AI has developed formulas for how dark matter aggregates.

Some of these formulas fit the data better than human-made ones. There is a lot these models do not explain, yet they demonstrate how AI can discover structure in difficult information. As one physicist put it, it is like teaching a child to talk: it needs a lot of help now. But the closer AI gets to perfection, the more it can suggest ideas and explanations. Next, AI could be useful in constructing physics theories, not just the experimental apparatus.

Summing Up: The Changing Relationship Between Human Science and AI

The accuracy of AI is getting better. The problem was never floating-point math; it was the way requests were pooled together. Researchers remedied it by rewriting fundamental AI operations, which makes the outputs of AI uniform. That uniformity is essential to science: it means experiments are credible, and it stabilizes AI training.

Beyond being reliable, AI is now a creative partner. It is coming up with new experiments, finding latent patterns in data, and even assisting in the formulation of the laws of physics. The question is: are we witnessing AI become a true partner in science? Or have we reached a stage where discovery is no longer our own? Share your thoughts in the comments section.
