The UK should be more positive about AI so it does not miss out on a tech ‘goldrush’

The UK’s approach to artificial intelligence has become too narrowly focused on the safety of AI and the potential threats posed by the technology, rather than its benefits, meaning the country could “miss out on the AI goldrush”, a House of Lords committee has warned.

In a major report on artificial intelligence and large language models (LLMs) – the technology behind AI tools such as ChatGPT – the Lords Communications and Digital Committee said the technology would create era-defining changes comparable to the invention of the internet.

However, it warned that the UK needed to rebalance its approach to also consider the opportunities AI could offer, or it would lose international influence and become strategically dependent on overseas technology firms for a technology expected to play a central role in everyday life in the years to come.

The committee said some of the “apocalyptic” concerns about threats to human life from AI were exaggerated, and should not distract policymakers from responding to more immediate issues.

The UK hosted the first AI Safety Summit at Bletchley Park in November, where the Government brought together more than 25 nations, as well as representatives from the UN and the EU, to discuss the technology’s long-term threats, including its potential to pose an existential threat to humans, to help criminals carry out more sophisticated cyberattacks, or to be used by bad actors to develop biological or chemical weapons.

The Prime Minister, Rishi Sunak, and the Technology Secretary, Michelle Donelan, have said that for the UK to reap the benefits of AI, governments and technology firms must “address the risks”.

While calling for mandatory safety tests for high-risk AI models and a greater focus on safety by design, the report urged the Government to take action to prioritise open competition and transparency in the AI market, warning that failure to do so would see a small number of the largest technology firms consolidate control over the growing market and shut new players out of the sector.


The technology would provide era-defining changes comparable to the invention of the internet, the committee said (John Walton/PA)

The committee said it welcomed the Government’s work to position the UK as an AI leader – including by hosting the AI Safety Summit – but said a more positive vision for the sector was needed in order to deliver the social and economic benefits on offer.

The report called for more support for AI start-ups, a boost to computing infrastructure and more work to improve digital skills, as well as further exploration of the potential for a “sovereign” UK large language model.

Baroness Stowell, chair of the Lords Communications and Digital Committee, said: “The rapid development of AI Large Language Models is likely to have a profound impact on society, comparable to the introduction of the internet.

“So it is vital for the Government to get its approach right and not miss out on opportunities – particularly if this stems from caution over far-fetched and improbable risks. We need to address risks in order to be able to take advantage of the opportunities – but we must be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.

“One lesson from the way technology markets have developed since the dawn of the internet is the danger of a small group of companies dominating the market. The Government must ensure that exaggerated predictions of an AI-driven apocalypse, coming from some of the technology firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services.

“We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrambling to keep up with rapidly evolving technology.

“There are risks associated with the wider spread of LLMs. Of greater concern is their potential to make existing malicious activities faster and easier – from cyberattacks to the manipulation of images for the sexual exploitation of children. The Government should focus on how these can be tackled and not allow itself to be distracted by science-fiction end-of-the-world scenarios.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs. LLMs rely on ingesting huge datasets to work properly, but that does not mean they should be able to use any content they can find without permission or without paying rights holders for the privilege. This is an issue the Government can and should tackle quickly.

“These issues will be of huge importance in the coming years, and we expect the Government to act on the concerns we have raised and take the necessary steps to make the most of the opportunities in front of us.”

In response to the report, a spokesperson for the Department for Science, Innovation and Technology (DSIT) said: “We do not accept this – the UK is a clear leader in AI research and development, and as a Government we are already backing AI’s limitless potential to improve lives, pouring millions of pounds into implementing solutions that will transform healthcare, education and business growth, including through our newly announced AI Opportunities Forum.

“Safe AI is the future of AI. It is by addressing the risks of today and tomorrow that we can take advantage of the incredible opportunities that exist and attract even more of the jobs and investment that will come from this new wave of technology.

“That’s why we’ve spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation.”
