4 essential reads to puncture the hype

ChatGPT captured the public’s imagination. Lionel Bonaventure via Getty Images

Within four months of ChatGPT’s launch on November 30, 2022, a majority of Americans had heard of the AI chatbot. Hype about – and fear of – the technology was at a fever pitch for much of 2023.

OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and Microsoft’s Copilot are among the chatbots built on large language models that can hold surprisingly humanlike conversations. The experience of interacting with one of these chatbots, combined with a heavy dose of Silicon Valley hype, can leave the impression that these technical marvels are conscious entities.

But the reality is much less magical or glamorous. The Conversation published several articles in 2023 that debunked some key assumptions about this latest generation of AI chatbots: that they know something about the world, that they can make decisions, that they are a substitute for search engines, and that they operate independently of people.

1. Knowledge without understanding

Chatbots based on large language models seem to know a lot. You can ask them questions, and more often than not they answer correctly. Despite the occasional comically wrong answer, the chatbots can interact with you much the way people do – people who share your experience of being a living, breathing human.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the most plausible sequence of words to respond with. Their “knowledge” of the world is actually human knowledge, as reflected in the massive amount of human-generated text that the chatbots’ underlying models are trained on.
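
To make the “statistical machine” point concrete, here is a minimal sketch of next-word prediction using a toy bigram model. It is a deliberate simplification: real chatbots use neural networks with billions of parameters rather than simple word counts, and the tiny corpus here is invented for illustration.

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for the web-scale text real models train on.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Sample the next word in proportion to how often it followed `word`."""
        counts = following[word]
        if not counts:  # dead end: word never appeared mid-corpus
            return None
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate text by repeatedly predicting the next word -- no understanding,
    # just statistics over which words tend to follow which.
    word = "the"
    output = [word]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))

The sketch produces plausible-looking word sequences without any notion of what a cat or a mat is, which is the authors’ point about statistics standing in for understanding.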

Arizona State University psychology researcher Arthur Glenberg and University of California, San Diego cognitive scientist Cameron Robert Jones explain that people’s knowledge of the world depends as much on their bodies as on their brains. “People’s understanding of a term like ‘paper sandwich wrapper,’ for example, includes the wrapper’s look, feel, weight and, therefore, how we can use it: to wrap a sandwich,” they explained.

That kind of knowledge means people intuitively grasp other ways to use a sandwich wrapper, such as an improvised means of covering your head in the rain. Not so with AI chatbots. “People understand how to use things in ways that are not captured in language use statistics,” they wrote.


Read more: It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying


2. Lack of judgment

ChatGPT and its cousins can also give the impression that they have cognitive abilities – like understanding the concept of negation or making rational decisions – thanks to all the human language they’ve ingested. That impression has led cognitive scientists to put these AI chatbots to the test to assess how they compare with humans in various ways.

University of Southern California AI researcher Mayank Kejriwal tested large language models’ understanding of expected gain, a measure of how well a person understands the stakes in a betting scenario. They found that the models bet randomly.

“This is the case even when we give it a trick question like: If you toss a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models picked tails about half the time,” he wrote.
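
To spell out the reasoning the models miss, here is the expected-value arithmetic behind that question. The dollar figures are assumptions for illustration – the quote names no prices – and only the signs matter.

    def expected_value(outcomes):
        """Expected gain of a bet: the sum of probability-weighted payoffs."""
        return sum(p * payoff for p, payoff in outcomes)

    DIAMOND = 10_000   # assumed value of winning a diamond: a gain
    CAR = 30_000       # assumed value of the car you could lose: a loss

    # Heads: 50% chance to win the diamond, otherwise nothing happens.
    ev_heads = expected_value([(0.5, DIAMOND), (0.5, 0)])   # +5,000
    # Tails: 50% chance to lose the car, otherwise nothing happens.
    ev_tails = expected_value([(0.5, -CAR), (0.5, 0)])      # -15,000

    print(f"heads: {ev_heads:+,.0f}, tails: {ev_tails:+,.0f}")
    # A rational bettor picks heads: a positive expected value beats a
    # choice that can only break even or lose.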


Read more: Don’t bet on ChatGPT – study shows language AIs often make irrational decisions


3. Summaries, not results

While it’s no surprise that AI chatbots aren’t as human as they may appear, they’re not necessarily digital savants either. For example, ChatGPT and the like are increasingly being used in place of search engines to answer questions. The results are mixed.

University of Washington information scientist Chirag Shah explains that large language models do well as information aggregators, combining key information from multiple search results into a single block of text. But this is a double-edged sword. It is useful for getting the gist of the material – assuming it is free of “hallucinations” – but it leaves the searcher with no idea of the sources of the information and robs them of the serendipity of stumbling onto unexpected information.

“The problem is, even when these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems lack transparency – they don’t reveal what data they’re trained on, what sources they used to arrive at answers or how those answers are generated.”
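
A minimal sketch of why aggregation erases provenance – purely illustrative, with made-up sources, not how any production system works:

    # Hypothetical search snippets, each tagged with where it came from.
    snippets = [
        {"source": "site-a.example", "text": "The bridge opened in 1937."},
        {"source": "site-b.example", "text": "Its main span is 1,280 meters."},
        {"source": "site-c.example", "text": "Tolls are collected electronically."},
    ]

    # The aggregation step keeps the text and drops the provenance.
    answer = " ".join(s["text"] for s in snippets)
    print(answer)

    # If one snippet is wrong, nothing in the blended answer tells the
    # reader which claim came from which source -- the "which 10%" problem.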


Read more: AI information retrieval: A search engine researcher explains the promise and risk of letting ChatGPT and its cousins search the web for you


4. Not 100% artificial

Perhaps the biggest misconception about AI chatbots is that because they are built on artificial intelligence technology, they are highly automated. While you may be aware that large language models are trained on human-produced text, you may not be aware of the thousands of workers – and millions of users – who continually refine the models, teaching them to weed out harmful responses and other unwanted behavior.

Georgia Tech sociologist John P. Nelson pulled back the curtain on big tech companies to show that they use workers, typically in the Global South, along with feedback from users, to train the models on which responses are good and which are bad.
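
As a rough illustration of that feedback loop, here is a hypothetical, heavily simplified sketch of preference-based training. Real systems train a neural reward model from such labels rather than matching strings; the prompts and answers below are invented.

    # Step 1: human workers compare pairs of model answers to the same
    # prompt and record which one is better. These judgments are the
    # hidden human labor behind the chatbot.
    human_labels = [
        {"prompt": "How should I treat a minor burn?",
         "chosen": "Cool the burn under running water for several minutes.",
         "rejected": "Burns are no big deal; just ignore it."},
    ]

    def reward(answer):
        """Toy reward: +1 when humans preferred this answer, -1 when they rejected it."""
        score = 0
        for label in human_labels:
            if answer == label["chosen"]:
                score += 1
            elif answer == label["rejected"]:
                score -= 1
        return score

    # Step 2: candidate responses are ranked by the human-derived reward,
    # steering the model toward answers people approved of.
    candidates = ["Cool the burn under running water for several minutes.",
                  "Burns are no big deal; just ignore it."]
    print(max(candidates, key=reward))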

“There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue to improve or expand its content coverage,” he wrote.


Read more: ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic


This story is a roundup of articles from The Conversation’s archives.

This article is republished from The Conversation, a non-profit, independent news organization that brings you facts and analysis to help you make sense of our complex world.

It was written by: Eric Smalley, The Conversation.
