From ChatGPT to AI Safety Summit: The year in AI

Artificial intelligence was one of the biggest issues in technology in 2023, driven by the rise of generative AI and apps like ChatGPT.

Since OpenAI made ChatGPT available to the public in late 2022, awareness of the technology and its potential has grown – from being discussed in parliaments around the world to being used to write TV news segments.

Public interest in generative AI models has pushed many of the world’s biggest tech companies to introduce their own chatbots, or to talk more publicly about how they plan to use AI in the future, and has intensified debate among regulators about how countries can and should tackle the opportunities and potential risks of AI.

In 12 months, conversations around AI have gone from concerns about schoolchildren using it to do their homework for them, to Prime Minister Rishi Sunak hosting the first summit of nations and tech companies on AI safety, to discuss how to prevent AI from harming humanity or even becoming an existential threat.

In short, 2023 was the year of AI.

Like the technology itself, product launches around AI have moved rapidly over the past 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products on the back of ChatGPT’s success.

Google unveiled Bard, a chatbot it said would be ahead of its competitors in the new AI chatbot space because it was powered by data from Google’s industry-leading search engine, and built on the Google Assistant virtual assistant found in smartphones and smart speakers.

On a similar note, Amazon used its big product launch of the year to talk about how it was using AI to make its Alexa virtual assistant smarter and more human in the way it responds – able to understand context and handle follow-up questions more easily.

And Microsoft began rolling out Copilot, its take on combining generative AI with a virtual assistant in Windows, allowing users to ask for help with any task they were doing, from writing a report to organizing the windows open on their screen.

Elsewhere, Elon Musk announced the creation of xAI, a startup focused on work in the artificial intelligence space.

The first product from that startup has already appeared in the form of Grok, a conversational AI available to paying subscribers to Musk-owned X, formerly known as Twitter.

Governments and regulators could not ignore such large-scale developments in the sector, and the debate over the regulation of the AI sector has also intensified during the year.

In March, the Government published its White Paper on AI, which suggested using existing regulators in different sectors to carry out AI governance, rather than giving responsibility to a single new regulator.

But any AI Bill has yet to be brought forward, a delay that has been criticized by some experts, who have warned that there is a risk of letting the technology go unchecked just as the use of AI tools is exploding.

The Government has said it does not want to rush into legislation while the world is still grappling with the potential of AI, and says its approach is more agile and allows for innovation.

In contrast, earlier this month the EU agreed its own set of rules on AI oversight, which will give regulators the power to scrutinize AI models and require the companies behind them to provide information on how the models are trained, although the rules are unlikely to become law before 2025.

But Mr Sunak’s desire for the UK to be a key player in AI regulation was underlined in November when he hosted world leaders and industry figures at Bletchley Park for the world’s first AI Safety Summit.

Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss the threats posed by so-called “frontier AI” – cutting-edge aspects of the technology that, in the wrong hands, could be used in terrible ways.

All international attendees at the summit, including the US and China, signed the Bletchley Declaration, which acknowledged the risks of AI and pledged to develop safe and responsible models.

And the Prime Minister announced the launch of the UK’s AI Safety Institute, along with a voluntary agreement with major companies including OpenAI and Google DeepMind, to allow the institute to test new AI models before they are released.

Although not a binding agreement, it has laid the groundwork for AI safety to be a more prominent part of the debate going forward.

Elsewhere, the AI industry saw a boardroom soap opera at the end of the year, when the board of ChatGPT maker OpenAI ousted chief executive Sam Altman in late November.

The move sparked a backlash among staff, nearly all of whom signed a letter threatening to leave the company and join Altman at Microsoft’s proposed new AI research team if he was not reinstated.

Within days, Altman was back at the helm of OpenAI and the board had been reconfigured, though the full reasoning behind the saga remains unclear.

Since then, the UK’s Competition and Markets Authority (CMA) has sought views from within the industry on Microsoft’s partnership with OpenAI, in which the tech giant has invested billions in the AI firm and holds an observer seat on its board.

The CMA said it had decided to look into the partnership in part because of the Altman saga.

It is another sign that scrutiny of the AI sector is likely to keep intensifying in the coming year.
