Fantastic article in The New Yorker on “Why AI Isn’t Going to Make Art”. Thought-provoking, with plenty of good threads to reflect on.
The article dives into the need to make many little decisions in order to create something of value. With a prompt, the number of decisions made is minuscule.
Even with code generators like GitHub Copilot, this has been the issue. LLMs can generate basic code really quickly. They are certainly great tools for picking up programming or learning a new language. Perhaps they will make a mid-level engineer 2-3% more efficient. However, for a competent coder, they offer little help in solving actual problems. Code is just an expression of a solution, and solving problems in code requires a lot of decision making.
It also talks about how real connections matter:
Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered.
Perhaps my favourite part:
We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?
No, it isn’t.
https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art