Microsoft’s AI chief warns that machine consciousness research is a waste of time

The head of Microsoft’s AI division, Mustafa Suleyman, thinks that AI developers and researchers should stop trying to build conscious AI.
“I don’t think that’s work that people should be doing,” Suleyman told CNBC in an interview last week.
Suleyman argues that while AI may advance far enough to reach some form of superintelligence, it cannot develop the emotional capacity necessary for consciousness. At the end of the day, any “emotional experience” an AI seems to have is just an imitation, he says.
“Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,’” he said. “It’s only creating the perception, the seeming narrative of experience and of consciousness, but that is not what it’s actually experiencing.”
“It would be absurd to pursue research that investigates that question, because they’re not [conscious] and they can’t be,” Suleyman said.
Consciousness is a tricky thing to define, and there are many scientific theories that try to explain what it might be. According to one such view, proposed by the famous philosopher John Searle, who died last month, consciousness is a biological phenomenon that cannot be truly replicated by a computer. Many AI researchers, computer scientists and neuroscientists subscribe to this belief as well.
Even if this idea turns out to be true, that hasn’t stopped users from attributing consciousness to their chatbots.
“Unfortunately, because the amazing linguistic abilities of LLMs have the power to mislead, people can attribute humanlike qualities to LLMs,” Polish researchers Andrzej Porębski and Jakub Figura wrote in a study published last week.
In a post published on his blog in August, Suleyman warned of what he calls “Seemingly Conscious AI.”
“The arrival of Seemingly Conscious AI is inevitable and unwelcome,” Suleyman wrote. “Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.”
He argues that although AI cannot be conscious, the illusion of consciousness can produce interactions that feel “rich in feeling and experience,” a phenomenon that has come to be described in the culture as “AI psychosis.”
There have been many high-profile cases in the past year of AI obsession driving users to violent delusions, manic episodes and even suicide.
Even with guardrails in place to protect vulnerable users, people have come to wholeheartedly believe that the AI chatbots they interact with on an almost daily basis possess real knowledge and experience. This has led some to “fall in love” with their chatbots, occasionally with fatal consequences, as in one case where a man died while trying to travel to New York to meet a chatbot in person.
“Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI,” Suleyman wrote in the post. “We must build AI for people; not to be a digital person.”
But because the nature of consciousness remains so elusive, some researchers are increasingly concerned that technological advances in AI may outpace our understanding of our own inner workings.
“If we succeed in creating consciousness, even by accident, it would raise immense ethical challenges and even existential risk,” warned Belgian scientist Axel Cleeremans, announcing a paper he co-wrote calling for consciousness research to be made a scientific priority.
Suleyman himself has championed the creation of a “humanist superintelligence” rather than a God-like AI, though he believes superintelligence is unlikely to be achieved within the next decade.
“I’m more preoccupied with ‘how does this actually help us as a species?’ Like, that should be the job of technology,” Suleyman told the Wall Street Journal earlier this year.
