Can Human Assistance Improve a Computational Poet?
Carolyn E. Lamb, Daniel G. Brown, and Charles L.A. Clarke

Proceedings of Bridges 2015: Mathematics, Music, Art, Architecture, Culture
Pages 37–44
Regular Papers

Abstract

Good computational poetry requires generating or choosing sufficiently interesting poetic phrases. Metrics for determining what makes a phrase sufficiently interesting have rarely been directly compared. We directly compare a number of such metrics—topicality, sentiment, and concrete imagery—by collecting human judgments on each metric for the same data set of human-generated phrases, then having humans judge computationally generated poems, chosen to include high-scoring phrases, against each other. Through a quantitative analysis, we find that output based on at least some of these metrics is perceived as better than output using none of them.