What Happened When Computers Learned How To Read


Computers love to read. And it’s not just bedtime fiction. They read voraciously: all literature, all the time – novels, encyclopedias, academic articles, private messages, advertisements, love letters, news stories, hate speech, and crime reports – everything written and transmitted, no matter how insignificant.

This ingested printed matter contains a jumble of human wisdom and emotion – the information and the disinformation, the fact and the metaphor. While we were building railroads, fighting wars, and buying shoes online, the machine child went to school.

Literate computers now scurry everywhere in the background, powering search engines, recommendation systems, and customer service conversations. They flag offensive content on social networks and delete spam from our inboxes. At the hospital, they help convert patient-doctor conversations into insurance billing codes. Sometimes, they alert law enforcement to possible terrorist plots and predict (poorly) the threat of violence on social media. Legal professionals use them to hide or uncover evidence of corporate fraud. Students are writing their next school paper with the help of a smart word processor, which can not only complete sentences but generate entire essays on any topic.

In the industrial age, automation came for the shoemaker and the factory worker. Today, it has come for the writer, the professor, the doctor, and the attorney. Every human activity now passes through a computational pipeline – even the sanitation worker turns effluent into data. Like it or not, we are all subject to automation. To survive, we must learn to think alongside the software engineers – and alongside the machines themselves.


If any of the above surprises you, I think my work is mostly done. Out of curiosity, you will now begin to notice literary robots everywhere and join me in wondering about their origins. The unsurprised may believe (mistakenly!) that these silicon minds have only recently learned how to converse, somewhere in the fields of computer science or software engineering. I’m here to tell you that machines have been getting smarter this way for hundreds of years, long before computers, advancing on far more obscure grounds, such as rhetoric, linguistics, hermeneutics, literary theory, semiotics, and philology.

In order for us to hear them speak—to read and understand a large library of machine texts—I want to introduce some essential ideas that underpin the ordinary magic of literate computers. Hidden deep within the circuitry of everyday devices – yes, even “smart” light bulbs and refrigerators – we find tiny poems whose genre remains unrecognized. In this sense, these computers contain not only an instrumental capacity (to keep food cold or to give light) but also a capacity for creativity and collaboration.

It’s tempting to ask existential questions about the nature of artificially intelligent things: “How smart are they?” “Do they really ‘think’ or ‘understand’ our language?” “Will they ever become — are they already — sentient?”

Such questions cannot be answered (in the way they are asked), because the categories of consciousness come from human experience. To understand alien life forms, we must think in alien ways. And rather than arguing about definitions (“Are they smart or not?”), we can begin to describe the ways in which the meaning of intelligence continues to evolve.

Not long ago, one way to be smart was to memorize obscure facts—to become a walking encyclopedia. Today, that way of knowing seems like a waste of precious mental space. Huge online databases make effective search habits more important than memorization. Information changes. We cannot, therefore, assemble the puzzle of intelligence from sharp, binary pieces, laid out the same way always and everywhere: “Can machines think: yes or no?” Rather, we can begin to put the pieces together in context, at particular times and places, and in terms of an evolving shared resource: “How do they think?” and “How do we feel about them?” and “How does that change the meaning of thinking?”

In answering the “how” questions, we can find a strange kind of connected history, one that spans the arts and sciences. People have been thinking this way – with machines and through machines – for centuries, just as they have been thinking with and through each other. The mind, the hand, and the instrument move at the same time, in unison. But the way we train minds, hands, and tools treats them almost like completely separate appendages, housed in different buildings, in unrelated corners of a university campus. Such an educational model isolates ends from means, disempowering both. Instead, I’d like to imagine another, more integrated curriculum, offered to both poets and engineers—one eventually fed to a machine reader as part of another training corpus.

The next time you pick up a “smart” device, like a book or phone, pause your usual routine to consider your body’s posture. You may be watching a video or writing an email. The mind is at work, engaging mental faculties like perception and interpretation. But the hand, too, is at work, moving an animate body in concert with technology. Pay attention to the posture of intellection – the way your head tilts, the movement of individual fingers, pressing buttons or pointing in particular directions. Feel the glass of the screen, the roughness of the paper. Leaf and swipe. Such physical rites—a conflation of thought, body, and instrument—constitute intellect in the making. The whole of it thinks. And that’s precisely the point: Thinking happens in the mind, by hand, with a tool – and, by extension, with the help of other people. Thought moves through mental powers, along with physical, instrumental, and social ones.

What separates natural from artificial forces in that chain? Does natural intelligence end when I think about something quietly, by myself? What about using a notebook, or calling a friend for advice? What about going to the library, or consulting an encyclopedia? Or conversing with a machine? None of the boundaries seem convincing. Intelligence requires artifice. Webster’s dictionary defines intelligence as “the skillful use of reason.” “Artifice” itself derives from the Latin “ars,” meaning skilled work, and “facere,” to make or do. In other words, artificial intelligence simply means “reason + skill.” There are no hard boundaries here – only synergy between the human mind and its extensions.

What about smart things? First thing in the morning, I stretch and, at the same time, reach for my phone: to check my schedule, read the news, and bask in the faint glow of kudos, hearts, and likes from various social apps. How did I get into this position? I ask, alongside the beetle from Kafka’s The Metamorphosis. Who taught me to move like this?

It wasn’t planned, really. We are not beetles, after all, and our natural habitat was once the ancient forest floor. Our personal rituals change organically in response to a changing environment. We live inside crafted spaces, which contain designs for purposeful living. The room says, “Eat here, sleep there”; the bed, “Lie on me like this”; the screen, “Hold me like this.” Smart objects further change in response to our inputs. To do that, they need to be able to communicate: to hold a set of written instructions. Somewhere in the nexus between the tap of my finger and the responding pixel on the screen, an algorithm registers my choices, shaping my morning routine. I am the input and the output: the tools evolve as they transform me. And so, I go back to sleep.

Excerpt from Literary Theory for Robots: How Computers Learned to Write. Copyright 2024 by Dennis Yi Tenen. Used by permission of the publisher, WW Norton & Company, Inc. All rights reserved.
