![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ca73834-16a9-4c04-bb40-eb248ea7a890_1018x1004.png)
I was recently listening to a podcast episode about copyright issues and Artificial Intelligence. It was an interesting interview. From a moral standpoint, it certainly made me more sympathetic to artists and knowledge workers whose words have been fed into Large Language Models without their permission.
What I’m less sympathetic to, however, is the idea that the value of all knowledge work derives from its being produced by humans. Specifically, I disagree with the claim made in the episode by NYT journalist Sheera Frenkel:
> The whole point of news is that what we’re bringing you is fresh and based on new reporting. So the conclusions we’re drawing for readers are constantly changing. And AI can’t do that. It can only repeat and regurgitate what’s already been given to it, what’s already in the system.
I think this is an overly precious view of what the news offers. Yes, LLMs need some means of receiving new information about the world—presumably via human input. Yes, I want some human checks on the veracity of AI-generated prose. And yes, some journalism is opinion or analysis written from a very particular perspective, such that it would feel dishonest for an AI to have written it.
But when it comes to actually composing a basic article (choosing the specific sequence of words, putting the news in the context of past events, giving the piece a title), I’m sorry: I just don’t care if an AI did most of the work. If World War III is starting, it makes no difference to me whether it’s Michael Barbaro telling me or ChatGPT—I just want to know the story’s accurate.
In contrast, I really do care if a poem, song, novel, or movie has a human creator behind it. (Possible exception: pure entertainment like AI-generated memes or bubblegum pop?) One of the issues in question, for example, is a lawsuit brought by comedian Sarah Silverman on the grounds that ChatGPT easily apes her style and must therefore have been trained on her words. Again, I’m sympathetic to the fact that she never consented to providing her material as training data—that’s an issue worth interrogating, and probably regulating. But is this really a threat to her livelihood? Who is paying for or using AI-generated “Sarah Silverman” jokes and getting anything close to the enjoyment or success of the actual Sarah Silverman? As with all art, part of what makes comedy great is the very real human behind it, not its raw informational content.
Is this even a controversial take?