Pix2Pix learns to generate an image B from an image A, which makes it possible to devise experiments that seek not a conversion to a particular style but a shift in time or space. Inspired by an experiment by the developer Damien Henry, we used consecutive stills from a video as training pairs and gave the network the task of creating the next still, that is, of predicting the immediate future. In another experiment we used stereoscopic pairs, from which the network was to learn to generate the second image of each pair, that is, to see the same scene from a slightly shifted perspective (the difference between the left eye and the right, some 6 centimetres).
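The pairing logic behind the temporal experiment can be sketched in a few lines. This is an illustrative sketch, not the authors' actual pipeline: the filenames are hypothetical, and a real setup would load and align the image data itself, but the pairing of each still (input A) with the following still (target B) is the essential move.

```python
def make_temporal_pairs(frames, step=1):
    """Pair each frame (input A) with the frame `step` ahead (target B),
    so the network is trained to predict the immediate future."""
    return [(frames[i], frames[i + step]) for i in range(len(frames) - step)]

# Hypothetical ordered stills extracted from a video.
frames = [f"still_{i:04d}.png" for i in range(5)]
pairs = make_temporal_pairs(frames)
# Each pair is (current still, next still), e.g.
# ('still_0000.png', 'still_0001.png')
```

The stereoscopic experiment follows the same pattern, except that A and B are the left and right images of each stereo pair rather than successive moments.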
These trainings move metaphorically through time and space and thus become omens: predictive visualisations of what is to come. Since predicting behaviour is one of the obsessions of Big Data and of today's surveillance, the abstract results of these experiments, when generated recursively, function as an ironic commentary on the promises of technology.
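The recursive generation mentioned above amounts to feeding the network's output back in as its next input, so that each step predicts the future of an already predicted image and the result drifts further into abstraction. A minimal sketch of that loop, with a hypothetical stub standing in for the trained Pix2Pix generator:

```python
def generate(image):
    """Stub for a trained Pix2Pix generator; a real model would return
    the predicted next image. Here it just records the application."""
    return f"G({image})"

def recurse(seed, steps):
    """Feed the generator's output back as its own input, `steps` times."""
    outputs = []
    current = seed
    for _ in range(steps):
        current = generate(current)
        outputs.append(current)
    return outputs

chain = recurse("still_0000.png", 3)
# After three steps the image is a prediction of a prediction of a
# prediction: "G(G(G(still_0000.png)))"
```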