If Elon Musk has five minutes to himself, he will try to create something to change the world. And he's at it again, announcing a new effort to combat "woke" artificial intelligence, which he believes is a problem with deadly consequences.

The Information reported that Musk has approached some of the world's leading artificial intelligence researchers "in recent weeks about forming a new research lab to develop an alternative to ChatGPT."

He's trying to put together an all-star team of AI experts. Reportedly on the list is Igor Babuschkin, a researcher who has worked at Alphabet's DeepMind AI unit and at OpenAI, the creator of ChatGPT.

Even though Musk helped create OpenAI, he is appalled by what it has become and what ChatGPT has morphed into.

“The danger of training AI to be woke – in other words, lie – is deadly. OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

In Musk's view, AI advancement is happening too fast, and the negative effects could be catastrophic. He recently spoke about ChatGPT and AI in general in Dubai.

“One of the biggest risks to the future of civilization is AI. But AI is both positive and negative – it has great promise, great capability, but also, with that comes great danger. I mean, you look at say, the discovery of nuclear physics. You had nuclear power generation but also nuclear bombs.”

Will this be a top priority for Musk, more important than space travel or electric cars? Considering the potential ramifications of artificial intelligence and what he told the audience in Dubai, it seems like it might be.

“I think we need to regulate AI safety, frankly. Think of any technology which is potentially a risk to people, like if it’s aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine. I think we should have a similar set of regulatory oversight for artificial intelligence because I think it is actually a bigger risk to society.”
