What it means to be human is not clear

Photo: John Walton/PA

Intelligent machines have been serving and enslaving humans in the realm of imagination for decades. The omniscient computer – sometimes benign, usually rude – was a staple of the science fiction genre long before any such entity was possible in the real world. That moment may now be approaching faster than societies can draft appropriate rules. In 2023, the potential of artificial intelligence (AI) came to the attention of a wide audience far beyond tech circles, thanks in large part to ChatGPT (launched in November 2022) and similar products.

As the field advances rapidly, that interest is sure to intensify in 2024, along with fears of the most apocalyptic scenarios possible if the technology is not adequately controlled. The closest historical parallel is mankind’s acquisition of nuclear power, yet AI is arguably the bigger challenge. The theoretical understanding needed to split the atom and assemble a reactor or bomb is difficult and expensive to acquire; malicious code applications, by contrast, can be transmitted and replicated online with viral efficiency.

The worst imagined outcome – human civilization accidentally programming itself into obsolescence and collapse – remains the stuff of science fiction, but even a low probability of catastrophe must be taken seriously. Meanwhile, harms on a smaller scale are not only possible but already present. The use of AI in automated systems for the administration of public and private services risks embedding and exacerbating racial and gender bias. An “intelligent” system trained on data skewed by centuries in which white men dominated culture and science will produce medical diagnoses or evaluate job applications according to criteria with built-in prejudice.

This is the less glamorous end of the spectrum of concern about AI, which perhaps explains why it receives less political attention than lurid fantasies of a robot uprising, but it is also the most urgent task for regulators. While there is a medium- and long-term risk in underestimating what AI can do, in the short term the opposite tendency – being needlessly overawed by the technology – impedes prompt action. The systems currently being deployed in all sorts of fields, producing useful scientific discoveries as well as sinister deepfake political propaganda, rest on principles that are extremely complex at the level of code, but not conceptually impenetrable.

Organic nature
Large language model technology works by absorbing and processing huge data sets (much of it scraped from the internet without permission from the original content producers) and generating solutions to problems at breakneck speed. The end result resembles human ingenuity but is, in fact, a highly plausible synthetic product. It has almost nothing in common with the subjective human experience of cognition and consciousness.

Some neuroscientists plausibly argue that the organic nature of the human mind – the way we have evolved to navigate the universe through biochemical mediation of sensory perception – is qualitatively so different from the modeling of an external world by machines that the two kinds of experience will never converge.

That doesn’t stop robots from competing with humans in performing increasingly sophisticated tasks, which is clearly happening. But it does mean that the essence of what it means to be human is not as soluble in the rising tide of AI as some gloomy prognostications suggest. This is not just an abstruse philosophical distinction. To manage the social and regulatory implications of increasingly intelligent machines, it is essential to maintain a clear sense of human agency: where the balance of power lies and how it might shift.

It is easy to get carried away by the capabilities of an AI program and forget that the machine is acting on instructions from a human mind. The muscle is the speed of data processing, but imagination is the animating force behind the wonders of computing power. ChatGPT’s answers to complex questions are remarkable because the question itself, with its limitless possible responses, boggles the human mind. The text itself is often weak, even rather banal, compared with what a qualified person could produce. The quality will improve, but we should not lose sight of the fact that the sophistication on display is our own human ingenuity reflected back at us.

Ethical impulses
That reflection is also our greatest vulnerability. We will anthropomorphize robots in our own minds, projecting onto them feelings and conscious thoughts that do not really exist. This is how they can then be used for deception and manipulation. The better machines become at replicating and surpassing human technical achievements, the more important it becomes to study the nature of creative intelligence and the way societies are defined and held together by shared experiences of the imagination.

The more robotic capabilities expand into our everyday lives, the more it will be necessary to understand and teach future generations about culture, art, philosophy, history – fields called the humanities for a reason. Although 2024 will not be the year robots take over the world, it will be a year of increasing awareness of the ways in which AI has embedded itself in society, and of the demands for political action.

The two most powerful engines currently accelerating the development of the technology are the commercial race for profit and the competition between states for strategic and military advantage. History teaches that such forces are not easily restrained by ethical considerations, even when there is a clear declaration of intent to proceed responsibly. In the case of AI, there is a particular risk that public understanding of the science cannot keep pace with the questions that policymakers grapple with. That can lead to apathy and irresponsibility, or to moral panic and bad law. This is why it is crucial to distinguish between the science fiction of all-powerful robots and the reality of astonishingly sophisticated tools that ultimately take instruction from humans.

Most non-experts struggle to get their heads around the inner workings of supercomputers, but that’s not the qualification needed to understand how to control the technology. We don’t have to wait to find out what robots can do when we already know what it is to be human, and that the power for good and evil is in the choices we make, not in the machines we build.
