A joint parliamentary committee has warned that the biggest technology and social media companies are failing to protect users from content designed to undermine democracy because they are not working together on the issue.
The Joint Committee on the National Security Strategy (JCNSS) said it was concerned about the differing approaches technology firms take to monitoring and regulating potentially harmful content.
The committee said that the evidence it received from the main platforms, as part of its inquiry into the protection of democracy, showed that companies were developing individual policies based on their own principles, rather than coordinating standards and best practice.
Dame Margaret Beckett, chair of the JCNSS, said evidence from companies including X (formerly Twitter), TikTok, Snap, Meta, Microsoft and Google showed “an uncoordinated, foolish approach to the many threats and potential harms facing UK and global democracy”.
Social media and wider technology platforms are under extra scrutiny this year because of the number of people expected to take part in elections around the world.
Polls are due in more than 70 countries, including the UK, US and India. That, combined with the rapid evolution of artificial intelligence, is fueling an increase in AI-generated content, including misleading content known as deepfakes.
Dame Margaret said the committee was also concerned about businesses using free speech as a justification for allowing certain types of content to remain online.
“The committee is well aware that many social media platforms were born, at least nominally, as platforms to democratize communication: to allow and support free speech and to circumvent censorship,” said Dame Margaret.
“These are admirable goals, but they never gave these companies, or anyone running and profiting from them, the right or authority to arbitrate what constitutes free speech; that is the job of democratically accountable authorities.
“That simply obscures what many of these platforms really are – publishers that monetize the dissemination of information through addictive technologies.”
She added that committee members were also concerned about the approach of some of the biggest tech firms to tackling the growing problem of AI-driven misinformation, and she criticized their approach to giving evidence to the inquiry.
“This year we’ve seen groups develop technology to help people discern the truth of the disparate variety of information available online every minute.
“We would have expected that kind of initiative and responsibility from the companies that benefit from spreading that information,” she said.
“We expected social media and technology companies to engage proactively with our parliamentary inquiry, particularly one that relates directly to their work at a critical moment in world history.
“And if we have to chase a company that operates and makes a profit in the UK to respond to a parliamentary inquiry, we expect a good deal more than some of its publicly available material that does not specifically address the questions of our inquiry.
“Much of the written evidence submitted – with a few notable exceptions – reflects an uncoordinated, foolish approach to the many threats and potential harms facing UK and global democracy.
“The protection of free speech does not extend to false or harmful speech, and it does not give tech and media companies a free pass on accountability for the information disseminated on their platforms.”
While some platforms have announced tools to better monitor and flag AI-generated content on their sites, industry-wide standards on the issue are still not in place.
Earlier this year, fact-checking charity Full Fact warned that the UK was “vulnerable” to misinformation, partly due to gaps in existing legislation and the rise of technologies such as generative AI.
But Dame Margaret warned there was also “too little evidence” that tech firms were doing enough to manage the threats, and called for more Government intervention.
“Although we have not concluded our inquiry or reached our recommendations, there is little evidence from these global commercial operations of the foresight that might be expected: to anticipate and develop transparent, verifiable and accountable policies for managing the unique threats of a year like this,” she said.
“There is too little evidence of the learning and collaboration necessary to respond effectively to a sophisticated and evolving threat, of the kind described by the Committee in our report on ransomware earlier this year.
“The Government’s Defending Democracy Taskforce could be a useful coordinating body through which social media companies could proactively submit and share what they learn about foreign interference techniques.”
A Government spokesman said: “Protecting our democratic processes is an absolute priority and we will continue to call out malicious activity that threatens our institutions and values, including through our Defending Democracy Taskforce.
“Once implemented, the Online Safety Act will also require social media platforms to quickly remove illegal misinformation and disinformation – including when generated by AI – as soon as they become aware of it.”