“To direct the A.I. model in generating the images, we created a Voice-to-City technology based on generative artificial neural networks. Thanks to Voice-to-City, the urban landscape can be manipulated with the voice, using any kind of vocal input, from simple utterances to more complex phrases. To implement the experiment, we then decided to use words arranged in the most synthetic and perceptually dense literary form, rooted in the sphere of aurality (i.e. in both orality and literacy, auditory and visual sensations): poetry. Each city included in the project has been assigned one or more poetic texts – poems stricto sensu or poetic prose – focused both on its physical landscape (i.e. architectural and visual elements) and on the inner space (i.e. feelings, memories, and emotions) that it evokes. When fed to the A.I. model, the chosen texts – by authors such as Alda Merini, Giulia Niccolai, Stefano Benni, Giorgio Caproni, Cesare Pavese, Goffredo Parise, Valerio Magrelli, and Enrico Testa – provoke unpredictable reactions, generating an estrangement effect that immensely broadens the common imaginary of cities. Because the model is trained to recognize patterns – that is, repetitions rather than singular, unique items – the urban images conveyed by the poetic words are associated with a series of common, unremarkable places in the cities, while their more obvious landmarks are obliterated.
This happens even when those landmarks – such as Piazza del Duomo in Milan, or Piazza di Spagna in Rome – are explicitly mentioned in the poems: the result is at once surprising and emotionally charged, as the viewer witnesses a substitution of the familiar standards of urban identity and enjoyment.”
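The passage does not document how Voice-to-City is actually implemented. Under the assumption that it chains a speech-to-text stage with a seeded text-to-image generator, a minimal sketch might look like the following; every function here is a hypothetical placeholder, not the project's real code, and the varying seed merely stands in for the non-determinism the viewers describe:

```python
# Hypothetical sketch of a Voice-to-City-style pipeline. Both stages are
# placeholders: a real system would plug in an ASR model and a generative
# image model, neither of which is specified in the source text.

def transcribe(audio: bytes) -> str:
    """Placeholder speech-to-text stage: vocal input becomes a text prompt."""
    # Stand-in: pretend the audio decoded to a line of poetry.
    return "piazza del duomo at dusk, remembered in rain"

def generate_city_image(prompt: str, seed: int) -> str:
    """Placeholder text-to-image stage.

    Different seeds model the behaviour described above: the same poem
    can yield a different urban image on each playthrough.
    """
    return f"image<{prompt!r}, seed={seed}>"

def voice_to_city(audio: bytes, seed: int) -> str:
    """Chain the two stages: vocal input in, generated city image out."""
    return generate_city_image(transcribe(audio), seed)

if __name__ == "__main__":
    # Two playthroughs of the same utterance diverge, as the viewers report.
    frame_a = voice_to_city(b"<recorded utterance>", seed=1)
    frame_b = voice_to_city(b"<recorded utterance>", seed=2)
    print(frame_a != frame_b)
```

The design choice worth noting is that all variability is confined to the image stage: the transcription is deterministic, so the estrangement effect comes entirely from how the generative model re-renders the same words.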
You are right that the sequences are very unpredictable. I played it twice, and I cannot say I saw the same images each time (smile). Unpredictable to the eye!