Renoir paints RSAP protestors

artificial intelligence

This is a painting "by Renoir" of the protest group RSAP, or "Renoir Sucks at Painting". A new deep learning algorithm for artistic style transfer was used to automatically piece patches of Luncheon of the Boating Party onto a news photograph of the picket line outside the Metropolitan Museum of Art. It took me over an hour on a GPU-enabled EC2 instance to generate this image in 4 tiled parts. I'm especially grateful to the developers of neural-style for Torch.
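For the technically curious, the pipeline looked roughly like this. A minimal sketch, assuming jcjohnson's neural-style for Torch and entirely hypothetical file names, of cropping the photo into four tiles, stylizing each one, and reassembling:

    # Sketch: tile the content photo 2x2, style-transfer each tile with
    # neural-style, then paste the results back together. Paths and tile
    # counts are assumptions, not the exact pipeline used for this piece.
    import subprocess
    from PIL import Image

    content = Image.open("picket_line.jpg")
    w, h = content.size
    tiles = []
    for row in range(2):
        for col in range(2):
            box = (col * w // 2, row * h // 2, (col + 1) * w // 2, (row + 1) * h // 2)
            tile_path = "tile_%d_%d.png" % (row, col)
            content.crop(box).save(tile_path)
            out_path = "out_%d_%d.png" % (row, col)
            subprocess.check_call([
                "th", "neural_style.lua",
                "-content_image", tile_path,
                "-style_image", "luncheon_of_the_boating_party.jpg",
                "-output_image", out_path,
                "-gpu", "0",  # run on the first GPU
            ])
            tiles.append((box, out_path))

    # reassemble the stylized tiles into one canvas
    result = Image.new("RGB", (w, h))
    for box, out_path in tiles:
        tile = Image.open(out_path).resize((box[2] - box[0], box[3] - box[1]))
        result.paste(tile, box[:2])
    result.save("renoir_rsap.png")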

This generative piece is my personal artistic statement about RSAP, Renoir, and ai+art. I also tried using a screenshot of Habbo Hotel's "Pool's Closed" as the style image, but the result was barely recognizable; this model seems overfit to paint strokes, and everything else just comes out paintstrokified.

Deepdreaming new trainees: Still fuzzy blobs

A follow-up to my previous post about training a new neural network to deepdream about new topics.

I had to let a larger training set of art history run, in hopes that the expanded dataset would yield a more representational result. There were 2.3 million images, including photometric distortions, and the task was again classifying which artist painted which painting. This collection was 26 times larger than the one in my previous post. I only trained for 7 epochs (36 hours) before my spouse said enough was enough with the cloud computing bill and forced me to stop the instances. The resulting trainee seems to produce more complex organic forms when deepdreamt, but it is still nowhere near as sophisticated as bvlc_googlenet (which was stopped after 60 epochs).

Here are the resulting paintings/dreams, generated with different settings but using the same chromatic gradient image as a guide. The first uses a lower-resolution version of the guide image, fewer octaves, and only 10 iterations.

deepdream 2 art history

The second used 10 octaves with 40 iterations. You can see it really loves the skin color.

deepdream 2 art history
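The settings above are just the two main knobs on the standard deepdream octave loop. Here's a simplified sketch of that loop, trimmed down from the public notebook code (jitter, clipping, and the guide-image objective are omitted, and 'inception_4c/output' is an assumed layer name):

    # Sketch of the deepdream octave/iteration loop. base_img is a
    # (3, h, w) float array already in the net's input space; net is a
    # caffe.Classifier wrapping the trained caffemodel.
    import numpy as np
    import scipy.ndimage as nd

    def make_step(net, end, step_size=1.5):
        src = net.blobs['data']
        net.forward(end=end)
        net.blobs[end].diff[:] = net.blobs[end].data  # amplify what the layer already sees
        net.backward(start=end)
        g = src.diff[0]
        src.data[0] += step_size / np.abs(g).mean() * g

    def deepdream(net, base_img, iter_n=10, octave_n=4, octave_scale=1.4,
                  end='inception_4c/output'):
        # pyramid of progressively smaller versions of the image
        octaves = [base_img]
        for _ in range(octave_n - 1):
            octaves.append(nd.zoom(octaves[-1],
                           (1, 1.0 / octave_scale, 1.0 / octave_scale), order=1))
        detail = np.zeros_like(octaves[-1])
        for octave, octave_base in enumerate(reversed(octaves)):
            h, w = octave_base.shape[-2:]
            if octave > 0:  # upscale detail recovered at the previous octave
                detail = nd.zoom(detail, (1, 1.0 * h / detail.shape[-2],
                                          1.0 * w / detail.shape[-1]), order=1)
            net.blobs['data'].reshape(1, 3, h, w)
            net.blobs['data'].data[0] = octave_base + detail
            for _ in range(iter_n):
                make_step(net, end)
            detail = net.blobs['data'].data[0] - octave_base
        return net.blobs['data'].data[0]

The guided variant swaps the gradient in make_step for one that pushes the layer's activations toward the guide image's features instead of amplifying its own.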

As I saw in some of the super duper high resolution (128 megapixel) puppyslug murals generated by David A Lobser, higher resolution isn't necessarily more interesting. I find the good stuff occurs at lower resolutions, perhaps in an image roughly the size of one of the training images. That's not a coincidence. Unfortunately, it's also not so useful for people who want to make large murals.

The sweet spot I seek is the one where the deconv deep visualization shows activation images that resemble a scrambled version of the training images, because my theory is that this leads to a more representational synthesis in the deepdream. I continue on my weekends.

Anyway, more on kitsch. I'm not necessarily saying that kitsch is bad; I have a lava lamp in my house. What would be bad for deep learning + art is having a very promising scientific phenomenon trivialized and frozen in public memory as the software that performs one single task. I think it has more potential than that, and I thank all of you who got in touch to say you agree.

deepdream training error graph

Deepdream: Avoiding Kitsch

Yes yes, #deepdream. But as Memo Akten and others point out, this is going to go kitsch as rapidly as Walter Keane and lolcats unless we can find a way to stop the massive firehose of repetitive #puppyslug that has been opened by a few websites letting us upload selfies. I don't think we should stop at puppyslug (and the intermediary layers involved), but training a separate neural network turns out to be technically difficult for most artists. I believe applying machine learning to content synthesis is a wide-open frontier in computational creativity, so let's please do what we can to save this emerging aesthetic from its puppyslug typecast. If we can get over the hurdle of training brains, and start to apply inceptionism to other media (vector-based 2D visuals, video clips, music, to name a few), then the technique might diversify into a more dignified craft that would be much harder to contain within a single novelty hashtag.

Why does it all look the same?

Let's talk about this one brain everyone loves. It's bvlc_googlenet, trained on ImageNet and provided in the Caffe Model Zoo. That's the one that gives us puppyslug, because it has seen so many dogs, birds, and pagodas. It's also the one that gives you the rest of the effects offered by dreamscopeapp, because they're just poking the brain in places other than the very end. Again, even the deluxe options package is going to get old fast. I refer to this caffemodel file as the puppyslug brain. Perhaps the reason for all the doggies has to do with the number of dog pictures in ImageNet. Below is a diagram of the images coming from different parts of this neural network. You can imagine its thought process as a collection of finely tuned Photoshop filters, strung together into a hierarchical pipeline. Naturally, the more complex stuff is at the end.

network visualization
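"Poking the brain in other places" is literally a one-argument change: each dream amplifies whichever layer you hand it. A hypothetical sketch, assuming a deepdream() helper like the standard notebook one, and using real blob names from bvlc_googlenet's deploy.prototxt:

    # Sketch: the same dream at three depths of bvlc_googlenet. img is a
    # (3, h, w) float array; deepdream() stands in for the notebook helper.
    import numpy as np
    import caffe

    net = caffe.Classifier('deploy.prototxt', 'bvlc_googlenet.caffemodel',
                           mean=np.float32([104.0, 116.0, 122.0]),
                           channel_swap=(2, 1, 0))

    textures = deepdream(net, img, end='conv2/3x3')            # early: strokes, texture
    patterns = deepdream(net, img, end='inception_3b/output')  # middle: eyes, swirls
    puppies  = deepdream(net, img, end='inception_4c/output')  # late: doggies, pagodas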

What's the Point?

My goal in this post is to show you some deepdream images that were made with neural networks trained on other datasets – data besides the entirety of ImageNet. I hope these outcomes will convince you that there's more to it, and that the conversation is far from over. Some of the pre-trained neural nets were used unaltered from the Caffe Model Zoo, and others were ones I trained just for this exploration.


It's important to keep in mind that feeding the neural net next to nothing results in just as extravagant an output as feeding it the Sistine Chapel. It is the job of the artist to select a meaningful guide image, whose relationship to the training set is of interesting cultural significance. Without that curated relationship, all you have is a good old computational acid trip.

The following image is a chromatic gradient guiding a deepdream by a GoogLeNet trained on classical Western fine art history up to Impressionism, using images crawled from Dr. Emil Krén's Web Gallery of Art. This version was trained with photometric distortion to prevent overfitting, which I think results in more representational imagery. The image is 2000x2000 pixels, so download it and take a closer look in your viewer of choice.

deepdream by arthistory1 neural net

This one used the same data, but the training set did not contain the photometric distortions. The output still contains representational imagery.

deepdream by arthistory1 neural net

The image below is from a neural network trained for gender classification, deepdreaming about Bruce Jenner on the cover of Playgirl magazine in 1982. Whether or not Bruce has been properly gender-classified may be inconsequential to the outcome of the deepdream image.

High Resolution Generative Image

Notice that when gender_net is simply run on a picture of clouds, you still see the lost souls poking out of Freddy Krueger's belly.

High Resolution Generative Image

Gender_net deepdreaming Untitled A by Cindy Sherman (the one with the train conductor's hat).

High Resolution Generative Image

This came from a more intermediary layer while deepdreaming a neural network custom-trained to classify faces from Labeled Faces in the Wild (LFW).

High Resolution Generative Image

This was dreamt by the same neural net, but using a different gradient to guide it. The resulting image looks like Pepperland.

High Resolution Generative Image

This is the same face classifier (innocently trying to tell Taylor Swift apart from Floyd Mayweather) guided by a linear gradient. The result is this wall of grotesque faces.

High Resolution Generative Image

Just for good measure, here's hardcore pornography, deepdreamt by that same facial recognition network, but with fewer fractal octaves specified by the artist.

High Resolution Generative Image

Technical Notes

Training neural networks turned out to be easier than I expected, thanks to public AMIs and NVIDIA DIGITS. Expect your AWS bill to skyrocket. Particularly if you already know about machine learning, it helps to actually read the GoogLeNet publication. In the section called Training Methodology, that article mentions the photometric distortions by Andrew Howard. This is important not to overlook. When generating the distortions, I used ImageMagick and python. You can also generate the photometric distortions on the fly with this Caffe fork.
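If you want to roll your own distortions, here's a minimal sketch of the kind of batch job that works, shelling out to ImageMagick from python. The brightness/saturation/hue jitter is a stand-in for the full recipe in Howard's paper, and the ranges are made up:

    # Sketch: write three photometrically distorted copies of each training
    # image using ImageMagick's -modulate (brightness,saturation,hue).
    import glob
    import random
    import subprocess

    for path in glob.glob("training/*.jpg"):
        for i in range(3):
            brightness = random.randint(85, 115)  # percent of original
            saturation = random.randint(80, 120)
            hue = random.randint(95, 105)
            out = path.replace(".jpg", "_distort%d.jpg" % i)
            subprocess.check_call([
                "convert", path,
                "-modulate", "%d,%d,%d" % (brightness, saturation, hue),
                out,
            ])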

If you want to bake later inception layers without getting a sizing error, go into deploy.prototxt and delete all layers whose names begin with loss. In NVIDIA DIGITS, the default learning rate policy is Step Down, but bvlc_googlenet used Polynomial Decay with a power of 0.5. I can't say that one is necessarily better than the other, since I don't even know whether properly training the neural net to classify successfully has anything to do with its effectiveness in synthesizing a deepdream image.
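If editing the prototxt by hand gets tedious, the same deletion can be scripted with Caffe's own protobuf definitions. A sketch, with hypothetical file names:

    # Sketch: strip every layer whose name begins with "loss" from a
    # prototxt, using Caffe's protobuf bindings.
    from caffe.proto import caffe_pb2
    from google.protobuf import text_format

    net = caffe_pb2.NetParameter()
    with open("deploy.prototxt") as f:
        text_format.Merge(f.read(), net)

    kept = [layer for layer in net.layer if not layer.name.startswith("loss")]
    del net.layer[:]
    net.layer.extend(kept)

    with open("deploy_noloss.prototxt", "w") as f:
        f.write(text_format.MessageToString(net))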

The highest resolution image I could render on the largest EC2 instance turned out to be 18x18 inches at 300 dots per inch. Any more than that and I would need more than 60 GB of RAM. If anyone has access to such a machine, I would gladly collaborate. I also seek to understand why my own training sets did not result in such clarity of re-synthesis in the dreams. It's possible I simply did not train for long enough, or maybe the fine tweaking of the parameters is a subtler matter. Please train me!

PLINK 16F863

This is a PIC16F863, my first plink plink fizz! Nitric acid and acetone. No luck on the logos or artwork, although I do see a number down at the bottom left. Enjoy!


we dont need no ojijoji

I just replaced my girlfriend with a short script. She has an obnoxious but adorable way of singing songs: she substitutes creative variations of her affectionate nickname for me into the lyrics, even when she knows the lyrics by heart. Singing songs this way is one of her methods for stealing my attention. In an effort to give her an exhausting taste of her own medicine, I wrote a natural language translator that takes any song lyrics and produces the ojijoji'd version. The python code uses NLTK (Natural Language Toolkit) to determine parts of speech and syllable counts. Since Becca's human process also depends on the musical timing of the lyrics, I had to use commas to mark the counts. The code currently only computes for 4/4 time signatures. I am replacing nouns and verbs and such, but I think the algorithm would benefit from being able to compute cultural significance and other abstract sentiment, as this is also a determining factor in her human processing. I'd say my script is about 90% accurate in reproducing Becca's incorrigible bad habit.
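The core of it fits in a few lines. A simplified sketch of the approach (the substitution table and nickname spellings here are made up, and the real script also handles the comma-based beat counts):

    # Sketch: tag each word, then swap nouns and verbs for a nickname
    # variant with a matching syllable count. Requires the nltk data
    # packages punkt, averaged_perceptron_tagger, and cmudict.
    import nltk
    from nltk.corpus import cmudict

    PRON = cmudict.dict()
    VARIANTS = {1: "jo", 2: "oji", 3: "ojijo", 4: "ojijoji"}  # hypothetical table

    def syllables(word):
        pron = PRON.get(word.lower())
        if not pron:
            return 1  # crude fallback for out-of-dictionary words
        return sum(1 for ph in pron[0] if ph[-1].isdigit())

    def ojify(line):
        out = []
        for word, tag in nltk.pos_tag(nltk.word_tokenize(line)):
            if tag.startswith(("NN", "VB")):  # nouns and verbs get replaced
                out.append(VARIANTS[max(1, min(syllables(word), 4))])
            else:
                out.append(word)
        return " ".join(out)

    print(ojify("we don't need no education"))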

we dont need no ojijoji
hey oji leave those kids a jo ! ojijo its just a joji jo oji jyo
oji joji ! give me a ji
will you still oji will you still joji when im oji jyo
strangers in the jo ojijo glances ojijo in the jo what were the chances wed be ojijo before the jo oji
youve lost that oji joji
i was oji as a joji in an ojijo that much is jyo
our jo in the oji of our ji
I want to oji like an ojijo
oji jean jo not my lover shes just a jo who joji i am the jo ,but the jo is not o jyo
are you ojijo ojijo fair oji jo rosemary joji
God jo you please mrs ojijo heaven holds oji for those who pray hey o jyo
you must be ojijo jo cause you shine on jo where ever oji
thats me oji joji thats me ojijo light oji jo ojijo
jo jo jo ojijoji pie drove my oji to the joji ojijo was dry the good old boys oji joji and rye oji this will be the jo joji jyo
here in my jo oji safest of all joji lock all my doors jo the only jo to live oji
R E S P ojijo find out what it oji jyo
You are ojijo of my jo thats why jo oji be joji
you can stand under my umberella oji joji oji
dude looks like an oji
ragdoll ojijoji oji
its close to mid jo and oji evils joji oji jyo
You would ojijo your eyes if ten oji joji flies jo up oji as i fell joji
ojijo that chunky oji ojijo jojijo joji oji
Been there jo that ojijo ojijo jo dont ojiji
Ill never let you oji off my jyo
This ojijo oji bulletji
This ojijo oji bullet jyo

If ji lost and ji look and ji will find me ojijoji
Submitted by James Powderly

wouldn't it be nice if we were oji, then we wouldn't oji joji jo
i'm an ojijoji with my pocket calculoji
Submitted by Marc Nimoy

fab unsubscribe

I unsubscribed from fab.com because it was too overwhelming. I need the innovations and fashion trends to be summarized for a producer/designer, not a consumer.

Dither for After Effects

Intended for the fine connoisseurs of 1-bit dithering, this new stylize plugin for Adobe After Effects adds that classic grit to your mograph masterpiece in more ways than a Photoshop action could ever do for that image sequence you just exported. The plugin was designed with Jake Sargeant for his adventures in lo-fi. In addition to an overwhelming number of error-diffusion and pattern-dither algorithms, it also features an interactive pattern designer box that lets you load and save an 8x8 pixel threshold mask. Help us test the beta. More features to come in future versions.
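For the curious, pattern dithering with a threshold mask is simple enough to sketch outside the plugin. Here's the classic 8x8 Bayer version in python (the actual plugin is native After Effects code, so this is purely illustrative):

    # Sketch: 1-bit ordered dithering with an 8x8 Bayer threshold mask,
    # the same kind of mask the pattern designer box loads and saves.
    import numpy as np
    from PIL import Image

    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]])
    # recursive expansion of the 4x4 matrix to 8x8
    BAYER8 = np.block([[4 * BAYER4 + 0, 4 * BAYER4 + 2],
                       [4 * BAYER4 + 3, 4 * BAYER4 + 1]])

    def dither(path, out_path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
        h, w = gray.shape
        # tile the threshold mask across the whole image
        mask = np.tile((BAYER8 + 0.5) / 64.0, (h // 8 + 1, w // 8 + 1))[:h, :w]
        Image.fromarray((gray > mask).astype(np.uint8) * 255, "L").save(out_path)

    dither("input.png", "dithered.png")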

Not My Uncle

Data visualization illustrating my genetic relationship to a Hollywood celebrity. Because of my last name, I get asked about this relationship quite frequently. People often assume, for some (probably very interesting and studiable sociological) reason, that Leonard is my uncle, when in reality he is my second cousin once removed. Second cousin, because my dad shares a great-grandpa with him; once removed, because I am one generation deeper. And so I have printed the visualization out and laminated it. I now carry it in my wallet.
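The arithmetic is mechanical enough to script. A sketch of the standard cousin formula (degree is the shorter path to the common ancestor minus one; removal is the difference in generations):

    # Sketch: cousin degree and removal from each person's generation
    # count up to the shared ancestor.
    def cousin(gen_a, gen_b):
        degree = min(gen_a, gen_b) - 1
        removal = abs(gen_a - gen_b)
        return degree, removal

    # Leonard is 3 generations below our shared great-grandpa (like my
    # dad); I am one generation deeper, so 4.
    print(cousin(4, 3))  # -> (2, 1): second cousin, once removed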
