The rise of artificial intelligence and its ever-increasing presence in our daily lives has sparked a plethora of debates.
Among these, the question of its governance has emerged as one of the most pressing issues of our time, with Brussels at the forefront of the race to regulate AI through its flagship AI Act.
However, regulation comes with barriers of its own, and solutions are needed that will shape the future of both the technology and European citizens.
At this year’s International Artificial Intelligence Summit, organized by Euronews in Brussels on 8 November, Euroviews talks to Jeremy Rollison, Head of EU Policy and European Government Affairs at Microsoft Europe, about many of the issues surrounding European and global regulatory cooperation, and what it will mean in practice.
Euroviews: At the Brussels Tech Forum in 2018, you named AI as one of two emerging technologies that would have the biggest impact on people and their relationship with technology. How much more is still to come?
Jeremy Rollison: At Microsoft, we’ve been working on AI for years. It is already part of many of our products, and our customers use it every day.
But this year we’ve seen AI accelerate more than ever and become mainstream with the advent of large language models and generative AI. The technology enhances human potential and is changing everything about how we live, work and learn.
We are seeing AI helping research into new medicines, finding solutions to accelerate decarbonisation of power grids, and enhancing cyber security.
We also see that technology is driving economic growth by supporting the development of new products and services.
So we really see AI playing a key role in helping to tackle some of society’s most pressing challenges.
Euroviews: You often have the opportunity to talk to decision makers about the various challenges they face in regulating AI. Could you share with us some of the key takeaways on what can and needs to be done to ensure future-proof regulation, particularly in the EU?
Jeremy Rollison: There is a balance to be achieved between supporting Europe’s ability to innovate and protecting the rights and values of Europeans.
Legislation is needed to get the balance right. As AI technology continues to develop rapidly, it is important that there is an ongoing dialogue between governments, businesses, civil society and academia.
AI governance frameworks need to keep pace with technological innovation, and we need diverse voices to achieve this.
This is how risk is minimized and opportunity maximized, allowing people to use technology safely and benefiting society as a whole.
Ultimately, it’s about promoting responsible innovation in technology that we believe will lead to massive economic growth in Europe and beyond.
Euroviews: There has been a lot of talk about future-proofing regulation. Can we really look that far ahead?
Jeremy Rollison: We should definitely be looking ahead. The EU has ambitious plans: take the Digital Decade objectives and the European Green Deal as examples.
AI will play a key role in driving this transformation, including by accelerating the uptake of existing sustainability solutions and the development of new ones that are faster, cheaper and better.
A great example is the Belgian startup BeeOdiversity, which developed an AI-based system that farmers use to measure environmental impact and better protect biodiversity.
What is important is to ensure that frameworks are in place to address the risks that may arise. The pace of innovation is so fast that AI systems need safety brakes built in by design.
We also need frameworks that allow us to respond quickly to emerging challenges, while ensuring that the AI ecosystem can thrive and companies in Europe can adopt the technology at scale.
Euroviews: In the past, you have supported Data for All, making data available to everyone rather than a select few, whether governments or companies, especially in the context of AI development. How important is this access to data for AI, and why?
Jeremy Rollison: Data plays a key role in the responsible advancement of AI technology.
The quality of an organization’s data, and how well that data is governed, will determine the value it derives from AI. Data powers algorithms, enabling them to learn and make predictions.
Responsible data policies and practices governing the inputs and outputs of both AI models and applications allow people to benefit from AI while protecting user privacy, safety and security.
The growth of AI has made access to big data and language models more important than ever. AI models can perform a wide range of tasks using natural language, from producing the first draft of a presentation to writing computer code.
In addition, AI’s ability to process large databases and provide insights can be critical to addressing key societal challenges, for example by accelerating progress on climate action.
AI technologies can be used to analyze energy consumption patterns and make the best use of renewable energy sources, or to help manage natural resources effectively and recommend conservation strategies.
Euroviews: What are the ways to ease the concerns of legislators and citizens when they hear the words “free access to data”? What can be done to ensure that their personal data is safe and remains private?
Jeremy Rollison: People will only use technology they trust. We believe that customers’ data is theirs and theirs alone.
Data processed by AI products is subject to the GDPR, as well as to our customer commitments on data privacy and security, which often go beyond the EU’s already strong data privacy laws.
Organizations, large and small, are adopting AI solutions because they can achieve more, at scale and more easily, with AI protections and accountability at the enterprise level.
Customers can trust that the AI applications they use on our platforms meet the legal and regulatory requirements for responsible AI and that we keep their data secure.
Our mission is to empower customers to achieve more and enable them to drive their own innovation. Their success is our success.