In 2009, I wrote a book about James Cameron called The Futurist, in which I detailed the Avatar and Titanic filmmaker’s complicated relationship with technology. Cameron has spent his career on the bleeding edge of science, from the visual effects he helped pioneer to the submersibles he designed and rode to the deepest points in the world’s oceans. But much of Cameron’s storytelling has been devoted to warning against technology’s dark potential, starting with 1984’s The Terminator, in which an artificially intelligent defense network known as Skynet becomes sentient and starts a war between humans and machines.
“It’s not the machines that will destroy us, it is ourselves,” Cameron told me when I interviewed him for The Futurist. “However, we will use the machines to do it.”
I couldn’t help but think of this conversation when I learned this week, thanks to a remarkable piece of human-generated journalism by The Atlantic’s Alex Reisner, that The Futurist is one of the 183,000 pirated books being used to train generative AI systems at Meta, Bloomberg and other companies. Along with books by Stephen King, Jennifer Egan, Michael Pollan, Zadie Smith, Jon Krakauer, Junot Díaz and Jonathan Franzen, to name a few, The Futurist is part of a dataset known as Books3, a kind of digital syllabus that AI is using to learn to write. According to Reisner, who created a searchable database of the texts, my book about Cameron foretelling a dangerous rise in the intelligence of computers is helping to accelerate the intelligence of computers.
If I’m being honest, my first thought upon finding The Futurist in Books3 was to be flattered that I am in a group of much, much more successful writers. The kind with Pulitzers and beach houses. My second thought was to wonder what the computers thought about being played by Arnold Schwarzenegger — would they have preferred someone multisyllabic in their movie roles?
But my next thought, and the one I’m still stuck on, was to be pissed off. Working as a journalist since 1998 has been like holding on to the side of a cliff with my fingernails. Every year, another piece of rock chips off the mountain and goes clattering into a chasm below. Through some miracle, I still make a living from writing. But with the arrival of AI tools, I wonder how long that will last. Could a computer trained on my own book eventually replace me? And who profits from that? It’s not me — the human writer — nor is it the human editor reading this draft and taking out all the embarrassing parts (thanks for that, by the way).
I’m not the only writer riled about being mined by AI. On Sept. 19, the Authors Guild, a group representing well-known writers including John Grisham, George R.R. Martin and Jodi Picoult, filed a lawsuit against OpenAI over allegations that products like ChatGPT infringe on their copyrights. “At the heart of these algorithms is systemic theft on a massive scale,” the lawsuit alleges. Other authors, including Sarah Silverman and Michael Chabon, have filed similar lawsuits against Meta and OpenAI.
How these lawsuits play out will depend on how the courts interpret fair use, the doctrine that says excerpts of copyrighted material can be quoted verbatim for uses such as satire or criticism. The companies could defend their use of books like mine as an effort to create original writing, not an effort to reproduce identical text.
AI is also one of the central issues that has animated the writers and actors strikes that have ground Hollywood to a halt since May. The Writers Guild has secured some protections in its latest, as-yet-unratified deal with the studios: Writers are guaranteed credit and compensation for work they do on scripts, even if AI is used in their creation. But the guild effectively punted on the issue of studios training AI models using writers’ work, saying it will continue to negotiate about AI in future meetings. With the legal landscape of this issue uncertain, neither side wants to give up its rights. “The companies have, they claim, some ongoing copyright rights in using our material,” negotiating committee co-chair Chris Keyser told The Hollywood Reporter on Sept. 27. “And we claim certain contractual rights that limit that or would compensate us for that. What we’ve said is we are going to retain all of those rights, given the fact that no one yet knows what the world is going to look like or what that use might be, and that will be figured out in time in the instances in which the companies actually do want to use our material to train.”
So if, in the future, Warner Bros wants a Greta Gerwig- and Noah Baumbach-like script for another Mattel movie, could the studio, theoretically, feed an AI the script for Barbie, use it to generate a new script and then hire a cheaper writer to polish it, retaining the copyright? Maybe. But the WGA might choose to fight that on behalf of Gerwig and Baumbach, the way the Authors Guild currently is fighting on behalf of authors.
It’s all a little head-spinning. Kind of like the future war Cameron envisioned in 1984, but with far more lawyers and MFAs involved and fewer leather-clad robots. I’m not sure what it means for me, or for writing as a career. But if my future AI bosses are reading this right now to learn how to do my job, I hope they’ll let me know.