Explorations in AI

Henry Fox Talbot Meets Aliens on the Moon

My first exploration into AI used Midjourney to generate the following images based on the prompt “Henry Fox Talbot meets aliens on the moon”. The result is a fantasy in which Victorian-era humans meet, and are ultimately attacked and destroyed by, aliens, all due to the unfortunate discovery by Sir John Herschel that an ancient alien civilization inhabited the dark side of the moon. If only Talbot had focused his energy on inventing the calotype process, as our actual history indicates he did. The project is a play on history and on how an AI that didn’t exist in the past interprets that era in the present.


Anna Atkins Makes Xerox Copies

My focus in using AI has been on how it interprets history when given inaccurate information. Continuing that line of thinking, I began inserting modern technologies into the 19th century to see how the AI would interpret them. This series was created in Midjourney using the prompt “Anna Atkins makes Xerox copies”. The AI knew Atkins made cyanotypes of plants and knew I wanted ‘copies’, and so I ended up with a more literal interpretation of this prompt.


Sir John Herschel Invented Computers

Also created in Midjourney, this prompt produced the most surprising and random results thus far: “Sir John Herschel invented computers” clearly confounded the young AI. The results range from the literal to the completely abstract and mathematical. Very few photographs of Sir John Herschel exist, and this became obvious in the way the AI struggled to depict him. Additionally, because the prompt included the term ‘computers’, the AI tended to lean toward mathematical artworks, using colors and shapes to make charts and graphs that represent nothing.


Midjourney Vs. DALL-E, A 1:1 Comparison

DALL-E admitted me off the waitlist, and I was able to begin making side-by-side comparisons between it and Midjourney. What I discovered was a stark contrast: Midjourney generates images I would describe as ‘expressive’ had a human made them, while DALL-E adheres strictly to literal interpretations that look like what I would expect a machine to create. The prompt used for the above series was the same for each AI, “Eadweard Muybridge robot locomotion study”, but the results are significantly different.


Henri Cartier-Bresson Photograph of Robot Jumping Over Puddle

In my previous experiments, I had become very interested in how close to reality DALL-E seemed to want to stay, so I decided to use some prompts referencing 20th-century artists. The first attempt was feeding it the prompt “Henri Cartier-Bresson photograph of robot jumping over puddle”. The results were shocking: it clearly knew I was referencing the famous photograph illustrating the ‘Decisive Moment’ and did its best to recreate it with a robot in place of a human. As you can see in the first example above, it was able to replicate the shapes and design of the original image, and the tonal range as well. Clearly the AI had more references to draw from as I moved closer to the present.


Robot Polaroid Selfies in the Style of Lucas Samaras

As I kept exploring, I continued to move forward in history, deciding to feed the AI a prompt referencing a postmodern artist, Lucas Samaras, and a more recent technology, the Polaroid. The results were delightful: the AI knew exactly which series of Lucas Samaras Polaroid self-portraits I was referencing and interpreted that style as a robot might, more geometric and mathematical. Instead of one robot in each frame, I ended up with three, most likely due to using the plural ‘selfies’ in the prompt instead of the singular. As I may have mentioned, DALL-E stays true to its prompts, sometimes painfully so.