ChatGPT Made Me Question What It Means to Be a Creative Human – BusInsiders

- Technology - February 7, 2023

I don’t say this lightly, but this tech is one of the most astonishing, and terrifying, technologies I’ve ever seen, and I’ve been writing about technology for almost two decades. Not only because of what it is capable of today, with its ability to output truly “creative” text (or at least text that appears creative), but because of what this technology will be capable of in the next year or two—and the number of jobs it could (or should I say “will”) replace when it gets there. For example, when I asked ChatGPT to list 50 jobs that could be replaced by ChatGPT, it spit out this list in less than a second: customer service representative, technical support specialist, sales representative, receptionist, data entry clerk, call center agent, transcriptionist, legal secretary, medical secretary, executive assistant, personal assistant, journalist, novelist, travel agent, insurance agent, retail salesperson, bookkeeper, court reporter, marketing manager, public relations manager, advertising manager, and on and on and on.

While there are already examples of crude AI writing simple articles for news outlets today—some basic stock reports, sports updates, and weather-related stories are written by robots—the advent of ChatGPT and the coming iterations of this tech illustrate that in the coming year or so, my editor (if he or she still has a job) might not ask me or another journalist to write a story analyzing what Elon Musk will do to Twitter, or a detailed look at how people voted in Georgia to gauge how they may vote in 2024; instead, they could simply type a prompt into an app like ChatGPT. The same is true of art, design, and illustration, as a spate of new AI products released in recent months is threatening all areas of the arts and creative careers. There are the text varieties, like GPT-3, which is the basis for ChatGPT and is capable of reading and writing like a human. And then there are the astonishing image-generation abilities of computers, like DALL·E 2 and Stable Diffusion, which can draw or paint anything in mere seconds, in any style you want, based on a single command.

Already, I’m hearing anecdotal reports from friends with kids in high school and college that professors and teachers who have seen ChatGPT and what it’s capable of are in a panic, with some proclaiming the impending death of the high school and college essay. ChatGPT is already being used to generate essays automatically from a prompt or topic, which threatens to make the traditional process of brainstorming, researching, and writing essays obsolete. Why spend hours doing all that when you can paste your homework assignment into ChatGPT as a prompt and receive a complete essay in a matter of seconds? You might think a professor or teacher could tell the difference between something written by an AI and something written by a human, but in my experience that is nearly impossible, and the AI itself can’t tell either. One of the things you can do with ChatGPT is give it a paragraph or sentence and have it continue writing the rest of the essay. I did this with a made-up science fiction story and asked people to tell me which parts were written by me and which were written by the AI. No one could tell the difference; it felt a little like the Pepsi Challenge. Then I fed the same text back to the AI and asked it to identify which parts were written by a computer and which by a human, and ChatGPT guessed incorrectly.

In 2017, a research paper titled “Attention Is All You Need” landed on the internet to little fanfare outside of the esoteric tech circles of people interested in the cutting edge of natural language processing and artificial intelligence. The paper talked about “dominant sequence transduction models,” an idea called the “Transformer,” and “recurrent neural networks,” and for 99.999999% of society, trying to read the theories in this 11-page report would be akin to trying to read a book written in a language you’ve never heard of before while wearing a blindfold. But the paper, written by a team of researchers at Google Brain, the company’s AI research group, proposed a new approach to natural language processing—the branch of artificial intelligence concerned with giving computers the ability to understand human language in much the same way human beings can—that has arguably changed the field forever.

The paper essentially reimagined how to model information processing. The researchers argued that traditional models—which worked like a librarian who carefully sorts each book into its proper place on the shelves, making sure that everything is organized and easy to find—were inefficient. Instead, they proposed an “attention-based model.” It works like this: When the model is looking for something, it scans all the books and focuses its attention on the ones that contain the information it needs, without worrying about keeping every book organized on the shelves.
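For the curious, the core idea the paper formalized is called scaled dot-product attention. The toy NumPy sketch below is my own illustration of the library analogy, not the paper’s actual implementation: each “query” scores every “key” (book) for relevance, and the output is a weighted blend of the “values,” with the most relevant books getting the most weight.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention, per 'Attention Is All You Need'.

    Q: (num_queries, d) -- what we're looking for
    K: (num_keys, d)    -- one entry per "book" on the shelf
    V: (num_keys, d_v)  -- the information each book actually contains
    """
    d = Q.shape[-1]
    # Score every key against every query at once, scaled by sqrt(d)
    scores = Q @ K.T / np.sqrt(d)
    # Softmax: turn scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted average of the values -- "focus" on the relevant books
    return weights @ V

# One query that most closely matches the first of three "books":
K = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
V = np.array([[10.0], [20.0], [30.0]])
Q = np.array([[1.0, 0.0]])
out = scaled_dot_product_attention(Q, K, V)
```

Because the query lines up best with the first key, that book’s value contributes the most to the blended output; no sorting or sequential scanning of the shelves is ever needed.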

These programs have since been fed millions of examples of human writing and art and music and creativity, and the machines have since learned how to replicate these styles. All of this has made me ask myself: What does it mean to be human in a future where robots can potentially be more creative than us? Can the next iteration of AI (or the one after that) have better ideas than humans? Or will these things just become tools that help us? 

Members of the pro-AI tech set concede that this technology has the potential to automate many tasks that today require human creativity, but they point out that machines are not truly capable of understanding or appreciating art in the same way that humans are. Machines do not have consciousness; a computer can’t feel what it’s like to fall in love, or to lose a loved one, or to be tormented to the point of cutting off your own ear. The argument goes: Machines can mimic our creations, but they cannot truly understand the emotions and experiences that inspire us to create. But, to me, if machines are able to imitate art with emotion and depth because they are learning from things humans have created over hundreds of years, then the machines are, in turn, an extension of those human emotions. A machine does not have to be conscious or capable of experiencing emotions to create art that is meaningful to us. The value and significance of the art lie not in the machine’s ability to feel, but in the ability of the viewer to appreciate it.
