
AI Case Study

The Atlantic editor's neural network failed to generate publishable content

Adrienne LaFrance, an editor at The Atlantic, wanted to train a deep-learning model to produce articles in her own style. She collaborated with writer and technologist Robin Sloan. They first trained the system on roughly 3 MB of data drawn from everything she had written over two years, totaling 532,519 words. The first test made clear that this was not enough data: the generated text was nonsensical. The editor supplied more stories, bringing the corpus to about 725,000 words. Although the generated text improved slightly, all subsequent trials still failed to produce anything that made sense.


Consumer Goods And Services

Media And Publishing

Project Overview

Adrienne LaFrance, an editor at The Atlantic, who collaborated with writer and technologist Robin Sloan, explains:

"So I sent Sloan the document, which contained painstakingly copy-and-pasted text from two years of published stories—almost all the stuff I’d written for The Atlantic since 2014, totaling 532,519 words. Sloan turned to an open-source Torch-RNN package—which you can find on GitHub, courtesy of the Stanford computer scientist Justin Johnson—and he got to work.
It became clear pretty quickly that half-a-million words, or about 3 MB of text, wasn’t enough for the neural network to learn language the way I’d hoped.

The computer had ingested the Adrienne Corpus and produced mostly gobbledygook. This wasn’t exactly a surprise. “The deal with these networks—the thing that makes them so powerful, but also weirdly fragile—is that they know NOTHING at the outset of their training,” Sloan said. “It’s not like you tell them, ‘Okay, this is going to be English text, look for nouns and verbs, etc.’—it’s just a stream of characters. And from that stream, it infers all this amazing structure. BUT the inference requires a lot of input.”
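Sloan's point, that the network sees only a stream of characters and must infer all structure from it, can be illustrated with a deliberately tiny sketch. The code below is not Torch-RNN (which learns with a recurrent network); it is a hypothetical character-bigram counter, but it produces the same kind of object a character-level RNN produces at each step: a probability distribution over the next character.

```python
from collections import Counter, defaultdict

def train_char_model(text):
    """Count, for each character, how often every possible next
    character follows it. A real RNN learns far richer structure,
    but the raw material is the same: a stream of characters."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def next_char_probs(counts, cur):
    """Probability distribution over the next character, given the
    current one -- the per-step distribution Sloan describes."""
    total = sum(counts[cur].values())
    return {c: n / total for c, n in counts[cur].items()}

model = train_char_model("the story of the story of the company")
print(next_char_probs(model, "t"))  # e.g. {'h': 0.6, 'o': 0.4}
```

Even this toy version shows why more data matters: with only a few hundred thousand words, most character contexts are seen too rarely for the estimated distributions to be reliable.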

Increasing my sample size to 100 megabytes would mean going from 500,000 words to something like 18 million—or the equivalent of War and Peace 30 times in a row. That wasn’t going to happen, but Sloan was still willing to experiment with a much (much) leaner sample. After hours of piling on the megabytes—adding text from stories I’d written for The New York Times, The Washington Post, Slate, Gawker, Honolulu Civil Beat, and elsewhere dating back to around 2012—I was only up to about 725,000 words. 

There was a slight difference, but the output still wasn’t exactly publishable, not even close. So Sloan encouraged the model to be less conservative, riskier in its predictions about which letter to generate next in sequence. “Those predictions are hedged and probabilistic, not definite: at every step, the net establishes a probability for every possible next character,” Sloan explained. 

When a model is less risky, it’s more likely to be repetitive. (That’s why, on the most conservative setting, the Adrienne Corpus produced sentences like this: “The most people who are all the story of the company and the story of the story of the story of the company and the first place.”) But too much risk produces its own breed of nonsense.
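The conservative-versus-risky knob described here is usually called the sampling temperature. A minimal sketch, with illustrative function names and an example distribution that are not from the article: dividing each candidate character's log-probability by a temperature below 1 sharpens the distribution toward the likeliest character (hence the repetitive "story of the story"), while a temperature above 1 flattens it, letting unlikely characters through.

```python
import math
import random

def rescale(probs, temperature):
    """Re-weight a next-character distribution by a temperature.
    temperature < 1 -> more conservative (sharper, repetitive);
    temperature > 1 -> riskier (flatter, more surprising)."""
    scaled = {c: math.exp(math.log(p) / temperature)
              for c, p in probs.items() if p > 0}
    total = sum(scaled.values())
    return {c: v / total for c, v in scaled.items()}

def sample(probs, temperature, rng):
    """Draw one next character from the rescaled distribution."""
    r = rng.random()
    acc = 0.0
    for c, p in rescale(probs, temperature).items():
        acc += p
        if r < acc:
            return c
    return c  # numerical edge case: fall back to the last character

# Example: after 't', suppose the model assigns p(h)=0.6, p(o)=0.4.
probs = {"h": 0.6, "o": 0.4}
print(rescale(probs, 0.5))  # sharper: 'h' gains probability
print(rescale(probs, 2.0))  # flatter: 'h' loses probability
print(sample(probs, 0.5, random.Random(0)))
```

This is why the most conservative setting loops: sharpening pushes nearly all probability onto the single most common continuation at every step.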

And yet there are still some delightful surprises in the text Robot Adrienne produced. I liked how the moon kept coming up, for instance, and I found myself lingering on made-up words like “somethative,” “macketing,” and “replored,” the last of which sounds vaguely French. (“As you can see, the model trained on ~3MB of text is still making a lot of spelling errors, and taking stabs at words like ‘technologication’ which I think is pretty cute,” Sloan wrote in an email after the first round of experimentation.)

In later rounds, using the larger dataset, Robot Me began to include quotes in its work, which is pretty good in terms of journalistic mimicry!"

Reported Results

The RNN failed to generate coherent, publishable content.



R&D

Product Development


"A little over a year ago, I started asking around—among computer scientists at universities and tech companies, mostly—to see if someone would help me design and carry out a weird little experiment I had in mind. I wanted to train a machine to write like me.

The idea was this: We’d give Robot Adrienne a crash course in journalism by having it learn from a trove of my past writings, then publish whatever Robot Me came up with. "



725,000 words of the editor's work
