New AI Refuses to Think Like a Human, Forms Its Own Understanding
A new AI model from OpenAI, which uses a trial-and-error approach, has begun delivering surprising results on complex challenges. This method, previously seen in game-playing AIs like AlphaGo, is now being adapted to a wider range of tasks, including large language models.
DeepMind’s AlphaGo was the first AI to master the game of Go at a champion level without being hand-programmed with human strategies. Instead, it used reinforcement learning (RL), refining its play through trial and error, to develop its own understanding of the game.
This approach enabled AlphaGo to defeat the European Go champion 5-0 and later overcome the world’s top human player.
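To make the idea of trial-and-error learning concrete, here is a minimal sketch of tabular Q-learning on a toy grid. It is a textbook illustration of the reinforcement-learning loop, not AlphaGo’s actual algorithm (which combined deep neural networks with tree search); all names and numbers in it are illustrative assumptions.

```python
# A minimal sketch of trial-and-error (reinforcement) learning, for illustration only.
# Tabular Q-learning on a toy 1-D grid -- a standard textbook technique,
# not AlphaGo's or o1's actual training method.
import random

N_STATES = 6          # positions 0..5; reaching state 5 earns a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Update the estimate from the outcome of this attempt (trial and error)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy prefers moving right, toward the goal
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The key point of the loop is that no one tells the agent which move is correct; it discovers the winning behavior purely from the rewards its own attempts produce, which is the same basic principle the article describes at a much larger scale.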
OpenAI’s latest model, o1, is producing remarkable outcomes on similarly complex problems. Like AlphaGo, o1 forms its own understanding of problem spaces through trial and error, without relying on human input.
This makes it the first large language model (LLM) to develop its own efficient, AlphaGo-style approach to problem-solving.
This method allows the AI to tackle challenges by learning from feedback on its own attempts rather than being limited to imitating language-based examples. As a result, AI will be able to solve increasingly complex problems that were previously beyond its reach.
Although o1 shares many similarities with earlier models, its key difference lies in the addition of “thinking time” before responding to a query.
During this phase, o1 generates a “chain of thought,” working through and checking its reasoning step by step before arriving at a solution.
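For readers who want to see what this looks like in practice, the sketch below queries a reasoning model through the standard OpenAI Python SDK. The model name "o1-preview" and the prompt are assumptions used for illustration; the chain of thought is generated internally during the thinking phase, and only the final answer is returned.

```python
# A minimal sketch (not an official example): querying an o1-style reasoning model
# via the OpenAI Python SDK. The model name "o1-preview" is an assumption;
# the internal chain of thought is not exposed, only the final answer.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "A bat and a ball cost $1.10 in total. "
                                     "The bat costs $1.00 more than the ball. "
                                     "How much does the ball cost?"},
    ],
)

# The model spends its "thinking time" before this answer is produced.
print(response.choices[0].message.content)
```

From the outside, the main visible difference from earlier models is the longer pause before the response arrives, reflecting the hidden reasoning step.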
Experts suggest that this new learning approach will result in AI exhibiting behaviors that may seem unusual or unpredictable, driven by its own unique logic. In doing so, AI could discover new knowledge and methods beyond human comprehension. The future is already starting to unfold.