The Wonderful World of AI Art
With the rise of AI artwork, people often wonder whether the careers of artists could be threatened. What’s interesting about AI technology is how much of the creative process can be boiled down to algorithms that produce detailed pieces, copying the styles of the world’s most famous works of art. But AI is so much more than mathematical formulas and an empty canvas: it’s about training, teaching, and guiding artificial intelligence to see what the human eye can see. So, we can’t help but wonder: can creativity be taught?
What is AI Art?
The answer to this question is daunting, to say the least. Imagine creating a computer algorithm that could teach a machine to copy the Mona Lisa, or create intricate works of art using software and hardware that’s intelligently programmed to do so.
The shortest and most intuitive definition would be that AI art is a collection of artwork pieces created entirely by a computer. However, one could question the validity of this statement by saying that human guidance is also responsible, as it takes a human being to create the algorithm or the machine. Yes, but for how long…?
How Does AI-Generated Art Work?
AI can operate in two distinct ways when it comes to art. It can identify the style and elements of a specific work of art, and then apply the knowledge to modify existing images to replicate the style. Otherwise, it can identify elements across similar images, and then use the gathered knowledge to create something new.
AI can be applied to anything from music and videos to photography and painting. The concept behind style transfer is fairly simple: you choose the artworks or images whose style you want to recreate, and then apply an algorithm that copies that style and its elements onto another image. AI can also create mashups of different images, blending several styles into the final result.
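The "identify a style, then match it" idea above can be sketched with the Gram-matrix trick commonly used in neural style transfer: style is summarized as correlations between feature channels, and an image counts as stylistically close when its Gram matrix matches. A minimal sketch with NumPy; the random arrays here are stand-ins for a real network's activations, not actual image features:

```python
import numpy as np

def gram_matrix(features):
    """Summarize 'style' as correlations between feature channels.
    features: (channels, height * width) array of activations."""
    return features @ features.T / features.shape[1]

def style_loss(feats_a, feats_b):
    """Mean squared difference between the two Gram matrices."""
    g1, g2 = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.mean((g1 - g2) ** 2))

rng = np.random.default_rng(0)
style = rng.normal(size=(4, 64))        # stand-in for style-image activations
same_style = style + rng.normal(scale=0.01, size=(4, 64))  # tiny perturbation
other = rng.normal(size=(4, 64))        # unrelated image

# The perturbed copy is far closer in style than the unrelated one.
print(style_loss(same_style, style) < style_loss(other, style))
```

A real style-transfer system would then adjust the target image's pixels to shrink this loss, which is exactly the "copy that style onto another image" step described above.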
In order to take art and technology to the next level, artists are now experimenting with technology as a partner in creating new art pieces. Through machine learning algorithms, artists can turn technology into an ideation partner. Mario Klingemann, for instance, has shifted from being an artist to being a neurographer: he builds software around these algorithms, pushing it to learn the elements used in different drawings, photos, and paintings.
In 2018, Christie’s New York sold an AI painting at auction for almost half a million dollars. Edmond de Belamy was the name of the piece, created by an art collective called Obvious. The auction lasted about seven minutes, and the piece was the first of its kind sold at auction. This could open up a world of possibilities as far as AI-generated paintings are concerned.
The painting itself looks fairly distorted (you can check it out in our list of AI artworks described below), and it was signed with the formula of the algorithm used to create it. For years to come, this will likely be considered a turning point in art history, leaving people to question where the threshold lies between man-made art and AI-generated pieces.
What Is Google’s Deep Dream?
Alexander Mordvintsev, an engineer at Google, created a computer program called Deep Dream. Using a neural network, this piece of technology can identify and recreate patterns in existing images, producing new, over-processed images.
The program can detect faces and patterns in existing images, looking for ways to understand and classify them. Once the software is trained, the network can run in reverse, adjusting the original image instead of its own parameters. To show how this works, Google lets internet users upload their own photographs and witness the fantastical result: an almost psychedelic image based on the original photo.
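That "running in reverse" step is essentially gradient ascent on the input image: instead of adjusting the network's weights, the pixels themselves are nudged to amplify whatever the network already responds to. A toy version with NumPy, where a single random linear layer stands in for a trained network (the layer, step size, and iteration count are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 32))      # stand-in for a trained layer's weights
x = rng.normal(size=32) * 0.01    # the "photo" being dreamed on

def activation_energy(img):
    """How strongly the layer responds to the image."""
    a = W @ img
    return 0.5 * float(a @ a)

before = activation_energy(x)
for _ in range(50):               # gradient ascent on the *pixels*
    grad = W.T @ (W @ x)          # d(energy)/d(image)
    x += 0.001 * grad             # amplify what the layer "sees"
after = activation_energy(x)

print(after > before)             # the image now excites the layer more
```

In the real Deep Dream, the layer is deep inside a trained image classifier, so amplifying its activations makes dogs, eyes, and swirls bloom out of ordinary photos.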
How Does Deep Dream Work?
In order for Deep Dream to work, Google created an artificial neural network, which is essentially a program that can learn on its own. These networks are loosely modeled on the human brain, with artificial neurons standing in for biological ones. The data used to create the AI images is filtered in a number of ways before reaching the final result.
We may have used the word “train” a lot. That’s because neural networks aren’t programmed to identify data from the get-go. You have to train them by feeding them enough data until they can reach their own conclusions about what’s what. Once they have enough information to use as a reference point, they can make sense of new input.
In one of its blog posts, Google explained that the training process is a series of repetitions and analyses, meaning that the program has to see a lot of images of the same object before it can fully understand it. For example, you have to show the program a million images of cars if you want it to learn how to identify a car on its own.
This neural system might not be that different from the human brain (which, in the case of art, might actually have an easier time learning how to make this or that). In other words, you won’t need to look at a million pictures of a car before learning which elements make a car look the way it does, or how to replicate the image. But the outlined process is pretty similar: once the network knows how to identify certain objects, it can recreate them.
But what makes Deep Dream truly fascinating is that no one yet fully understands what controls the output. Since there is no human intelligence guiding the software to a specific result, the program essentially takes a series of pre-programmed tasks and uses the information it has learned to generate a result from vague instructions. The resulting image is a representation of how Deep Dream has interpreted its job, and maybe this mix of precise computer input and unpredictability is what puts the “art” in “AI artwork”.
“Memories of Passersby” by Mario Klingemann
Mario Klingemann, a German computer scientist, created an AI piece made entirely by a computer. The work depicts a series of distorted faces that form portraits across two separate screens, the result of a computer algorithm. There is a lot of controversy over whether this piece can really be considered 100 percent AI-generated, as Klingemann had to build the machine himself.
“Edmond de Belamy” by Obvious
$432,000. This was the final selling price of the Edmond de Belamy artwork, a piece originally estimated at $10,000. Created by the Parisian art collective Obvious, the painting shows a man wearing a black coat. While opinions are divided over its seemingly unfinished look, behind it lies a technology that drew on over 15,000 portraits, using a GAN (Generative Adversarial Network) algorithm. In the lower right corner, the painting was signed with the very formula of the algorithm used to create it.
“The Fall of the House of Usher” by Anna Ridler
Back in 2017, artist Anna Ridler created a 12-minute video inspired by a short story by the immortal Edgar Allan Poe. The stills that form this video were created by training a neural net on the artist’s own ink drawings. As Ridler herself stated, this was not an attempt to train a machine to create art pieces, but rather to show how digital art can neuter the messy world we live in.
“Nude Portraits” by Robbie Barrat
Robbie Barrat is another promising name in the field of AI-generated artwork. His series of nude portraits was created using Generative Adversarial Network algorithms applied to a set of nude portraits taken from WikiArt. The neural network was trained to create these unrealistic portraits. The artist himself claims that the machine didn’t learn the right attributes of the scraped portraits and created the final images with only a minimal understanding of the original work. The result? Blob-like, severely misshapen nude portraits.
“Neural Glitch” by Mario Klingemann
Back in 2018, artist Mario Klingemann started exploring a new technique called “Neural Glitch”. By manipulating GANs, exchanging, altering, or deleting their trained weights, the artist manages to create pieces with semantic and textural glitches. What’s even more interesting about this technique is that no existing photographs are being filtered or processed, so the entire set of works is created from the ground up.
“I See You” by Mike Tyka
Part of his series entitled “Portraits of Imaginary People”, this AI artwork by Mike Tyka depicts a human face that doesn’t exist in reality. The face was created from thousands of different portraits found on Flickr, using the GANs we kept mentioning earlier. These systems are based on two neural networks: one aims to tell real and artificially generated photos apart, while the other tries to create output convincing enough to fool it.
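The two-network tug-of-war described above can be sketched in one dimension: a tiny "generator" with a single parameter tries to fool a logistic "discriminator", and the adversarial loop pulls the generated samples toward the real data. Everything here (the 1-D data, the one-parameter generator, the learning rate) is a toy assumption, not the actual setup behind these portraits:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def real_batch(n):
    # "Real" data: samples clustered around 4 (a stand-in for real photos).
    return rng.normal(4.0, 0.5, size=n)

mu = 0.0          # generator's only parameter: where its fakes are centered
w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w * x + c)
lr = 0.05

for _ in range(3000):
    real = real_batch(32)
    fake = mu + rng.normal(size=32)

    # Discriminator step: label real as 1, fake as 0 (cross-entropy grads).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: move mu so the fakes fool the discriminator.
    fake = mu + rng.normal(size=32)
    d_fake = sigmoid(w * fake + c)
    mu -= lr * np.mean((d_fake - 1) * w)   # non-saturating GAN gradient

print(round(mu, 1))   # mu has drifted toward the real data's mean of 4
```

Scale the same loop up to millions of parameters and images instead of a single number, and you get the face-inventing systems behind works like this one.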
“Deep Dinosaur” by Chris Rodley
Here is an interesting example of style transfer created by Chris Rodley. Using DeepArt.io, Rodley created a mashup of flowers and dinosaurs, recreating images of the latter with elements of the former. This piece of work went viral on Reddit as a psychedelic example of how to use deep styles to create AI art.
The topic of AI art is giving people a lot to talk about. As with every other form of art, AI will generate pieces that become the focus of subjective opinions, so it’s pretty much a love/hate relationship, with little in between. Art has long been used to evoke an emotional response from the audience, or to express the memories and feelings of artists who felt they had something to say.
As the oldest forms of artmaking combine with the complex algorithms of modern programming, we can’t wait to see what results the future brings, in terms of how numbers and formulas can influence the ever-creative world of art.