Elon Musk releases code for his Grok AI chatbot. Here's why it matters

Some of the world’s biggest companies and richest people are fighting over a question that will help shape the future of AI: Should businesses disclose exactly how their products work?

Elon Musk, CEO of Tesla and SpaceX, fueled the debate in recent days by choosing to release the computer code behind his AI chatbot, Grok.

This move is in contrast to the approach taken by OpenAI, the company behind the popular AI text bot ChatGPT. OpenAI, which is partly owned by tech giant Microsoft, has chosen not to release much detail about the latest algorithm behind its products.

Elon Musk did not respond to ABC News’ request for comment. Neither did OpenAI.

In a statement earlier this month, OpenAI denied claims that the company kept its AI models secret.

“We advance our mission by building beneficial tools that are widely available. We are making our technology generally usable in ways that empower people and improve their daily lives, including through open source contributions,” said the company. “We provide broad access to today’s most powerful AI, including a free version used by hundreds of millions of people every day.”

Here's what to know about Grok, why Elon Musk released the computer code and what it means for the future of AI:

What is Musk’s AI chatbot, Grok?

Last year, Musk launched an artificial intelligence company called xAI, promising to develop a generative AI program that would compete with established offerings like ChatGPT.


On several occasions, Musk has warned against the risks of political bias in AI chatbots, which can help shape public opinion and risk spreading misinformation.

However, content moderation itself is a polarizing topic and Musk has made comments that place his approach within that hot-button political context, several experts previously told ABC News.

In November, xAI debuted an early version of its first product, Grok, which responds to user prompts with humorous comments inspired by the classic sci-fi novel "The Hitchhiker's Guide to the Galaxy."

Grok is powered by Grok-1, a large language model that generates content based on statistical probabilities learned from scanning massive amounts of text.
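In concrete terms, "statistical probabilities" means next-token prediction: after seeing a prompt, the model assigns a probability to each possible next word and samples one. Here is a minimal illustrative sketch of that sampling step; the vocabulary and probability values are invented for illustration and are not from xAI's actual code.

```python
import random

# Toy distribution over possible next tokens after some prompt.
# The candidate words and their probabilities are made up for illustration.
next_token_probs = {
    "42": 0.40,
    "yes": 0.25,
    "no": 0.20,
    "unclear": 0.15,
}

def sample_next_token(probs, rng=random):
    """Pick one token, weighting each candidate by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Each call draws one token; "42" comes back most often at 40%.
print(sample_next_token(next_token_probs))
```

A real model like Grok-1 computes such a distribution over tens of thousands of tokens at every step, with the probabilities produced by a neural network trained on massive amounts of text, then repeats the draw to build up a full response.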

To access Grok, users must purchase a premium subscription to X, the social media platform owned by Musk.

“We believe it is important to design AI tools that are useful to people of all backgrounds and political views. We also want to empower our users with our AI tools, subject to the law,” xAI said in a blog post in November. “Our goal with Grok is to explore and demonstrate this approach publicly.”

Why did Musk make the code openly available?

The decision to release the code behind Grok comes down to two issues that are important to Musk: the threat posed by AI and an ongoing battle with rival company OpenAI.

For years, Musk has warned that AI poses a great risk of societal harm. In 2017, he tweeted: “If you’re not worried about AI safety, you should be.” More recently, in March 2023, he signed an open letter warning of the “profound risks to society and humanity” posed by AI.

In comments on Sunday, Musk appeared to frame the open-source decision as a way to ensure transparency, protect against bias and minimize the dangers posed by Grok.

“There’s still work to be done, but this platform is the most transparent and truth-seeking by far,” Musk said in a post on X.

PHOTO: OpenAI CEO Sam Altman attends a session of the World Economic Forum (WEF) meeting, January 18, 2024, in Davos, Switzerland. (Fabrice Coffrini/AFP via Getty Images)

The move is also directly related to a public conflict between Musk and OpenAI.

Musk, who co-founded OpenAI but left the organization in 2018, sued OpenAI and its CEO Sam Altman earlier this month, alleging that the company abandoned its mission to benefit humanity in a sprint toward profit.

Days after the lawsuit was filed, Musk said on X that he would drop the case if OpenAI changed its name to “ClosedAI.”

In a statement earlier this month, OpenAI said it plans to move to dismiss all of Musk’s legal claims.

“As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control. Elon left OpenAI, saying that a relevant competitor to Google/DeepMind needed to exist and that he would support us in finding our own path,” OpenAI said.

What is the fight between open-source and closed-source AI?

The debate over whether to release the computer code behind AI products comes down to two competing visions of how to limit harm, reduce bias and optimize performance.

On the one hand, proponents of open source say that publicly available code allows a broad community of AI engineers to identify and fix flaws in a system, or adapt it to a purpose that is different from its original intended function.

In theory, open source code allows programmers to improve the security of a particular product while ensuring accountability by making everything visible to the public.


“When someone is creating a piece of software, there can be bugs that can be exploited in ways that can cause security vulnerabilities,” Sauvik Das, a professor at Carnegie Mellon University who focuses on AI and cybersecurity, told ABC News. “It doesn’t matter if you’re the greatest programmer in the world.”

“If you open source, then you have a whole community of practitioners who poke holes and gradually build patches and protections over time,” Das said.

In contrast, closed source supporters argue that the best way to protect AI is to keep the computer code private so it stays out of the hands of bad actors, who could reuse it for malicious purposes.

PHOTO: SpaceX, Twitter and Tesla CEO Elon Musk arrives at the U.S. Senate’s bipartisan Artificial Intelligence (AI) Insights Forum at the U.S. Capitol, Sept. 13, 2023, in Washington. (Stefani Reynolds/AFP via Getty Images, FILE)

Closed-source AI also benefits companies that want to profit from advanced products that are not available to the general public.

“It’s harder to redeploy the closed source systems for terrible reasons simply because they already exist and there are only certain things you can do with them,” Kristian Hammond, a professor of computer science at Northwestern University who studies AI, told ABC News.

Last month, the White House announced it was seeking public input on the benefits and dangers of open source AI systems. The move came as part of a broad set of AI rules the Biden administration issued through an executive order in October.

Das, of Carnegie Mellon, said Musk’s open-source release may have been motivated by both public and personal interests, but the move sparked a much-needed conversation about this aspect of AI safety.

“Even if the reasons aren’t necessarily entirely clean, what society needs right now is the fact that this is raising public awareness of the idea of open versus closed, and the benefits versus risks of both,” said Das.
