AI Case Study
Researchers at IBM Research Australia, the University of Toronto, and the University of Melbourne build a model to generate sonnets using deep learning
Researchers at IBM Research Australia, the University of Toronto, and the University of Melbourne trained a deep learning model on a corpus of roughly 2,700 sonnets in the style of William Shakespeare, focusing on language, rhyme, and meter.
Industry
Public And Social Sector
Education And Academia
Project Overview
"That led researchers at IBM Research Australia, the University of Toronto, and University of Melbourne to construct a deep learning model, dubbed “Deep-Speare,” to see if they could re-create poetry that matches Shakespeare’s high level of ingenuity and beauty.
[They] focused on three aspects of The Bard’s work: language, rhyme, and meter. They used TensorFlow to construct a recurrent neural network architecture made up of three separate models: a language model built on a Long Short-Term Memory (LSTM) encoder-decoder; an encoder-decoder model for capturing iambic pentameter; and an unsupervised model for learning words that rhyme.
Then they took roughly 367,000 words of sonnet poetry and used them to train the model, which ran on an Nvidia GPU cluster. After tuning the weights until they were satisfied, they asked the system to generate four-line poems, which were then judged by human experts."
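To make the pipeline above concrete, what follows is a minimal TensorFlow sketch of the language-model component alone: a word-level LSTM trained to predict the next word of sonnet text, then sampled four lines at a time. This is not the researchers' Deep-Speare code; the corpus file name, hyperparameters, and sampling loop are illustrative assumptions, and the actual system also conditions generation on its meter and rhyme models.

```python
# Minimal sketch of the language-model component only; not Deep-Speare's code.
# The corpus file ("sonnets.txt"), hyperparameters, and sampling loop are
# illustrative assumptions. The real model trained on ~367,000 words.
import numpy as np
import tensorflow as tf

text = open("sonnets.txt").read().lower().split()  # whitespace-tokenized corpus
vocab = sorted(set(text))
word_to_id = {w: i for i, w in enumerate(vocab)}
ids = np.array([word_to_id[w] for w in text])

seq_len = 20
# Build (input, target) pairs: predict the next word at every position.
ds = tf.data.Dataset.from_tensor_slices(ids)
ds = ds.batch(seq_len + 1, drop_remainder=True)
ds = ds.map(lambda s: (s[:-1], s[1:])).shuffle(10_000).batch(64)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 128),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.Dense(len(vocab)),               # next-word logits
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(ds, epochs=10)

def sample_line(seed_ids, max_words=8, temperature=0.8):
    """Sample one line of verse from the trained language model."""
    out = list(seed_ids)
    for _ in range(max_words):
        logits = model(np.array([out]))[0, -1] / temperature
        out.append(int(tf.random.categorical(logits[None, :], 1)[0, 0]))
    return " ".join(vocab[i] for i in out[len(seed_ids):])

# Generate a quatrain: four lines, the unit the human judges evaluated.
seed = ids[:seq_len].tolist()
for _ in range(4):
    print(sample_line(seed))
```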
Function
R And D
Core Research And Development
Background
"Despite the rapid progress in AI, the deep learning approach so far has failed to match humans in one important aspect: creativity. And while AI programs have excelled to some degree in in creating music and paintings that can rival masters, they have fallen well-short in the linguistic category."
Reported Results
"The results were mixed. While Deep-Speare could generate poems that scored high in rhyme and meter (even higher than human poets, according to the experts), the poems overall lacked readability and emotion."
Benefits
Technology
"They used TensorFlow to construct a recurrent neural network architecture that included three separate model, including a language model built on a Long Short Term Memory (LTSM) encoder-decoder model; an encoder-decoder model for capturing iambic pentameter; and an unsupervised model for learning words that rhyme."
Data
Roughly 367,000 words of sonnet poetry