Welcome to Fractal Forums

Real World Examples & Fractical Applications => Fractal News across the World => Topic started by: kram1032 on June 20, 2015, 11:53:04 AM




Title: Turning Neural Networks Upside Down
Post by: kram1032 on June 20, 2015, 11:53:04 AM
http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html

By first training a neural network to recognize certain images, then feeding it arbitrary images and letting it interpret and enhance what it sees, you end up with strikingly modified imagery.

(https://lh3.googleusercontent.com/H3i6hazGsHUaXOjozPZZBSQg2MR7dSVjySTS55--q4c=w1921-h515-no)

Doing this starting from noise and zooming in a little after each iteration, you end up with very fractal-y images, all inspired by the things the network knows about:

(http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAmQ/UbA6j41w28o/s1600/building-dreams.png)
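The iterate-and-zoom loop described above can be sketched in a few lines; `dream_step` is just a placeholder for one enhancement pass of the network, and the nearest-neighbour zoom is my own illustration, not Google's actual code:

```python
import numpy as np

def zoom(img, factor=1.05):
    """Crop the central 1/factor of the image, then resize back to the
    original size (nearest-neighbour), so each iteration drifts inward."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    yi = (np.arange(h) * ch / h).astype(int)
    xi = (np.arange(w) * cw / w).astype(int)
    return crop[yi][:, xi]

def dream_step(img):
    # placeholder for one "interpret and enhance" pass of the network
    return img

img = np.random.rand(64, 64, 3)   # start from pure noise
for _ in range(10):               # dream, zoom, repeat
    img = zoom(dream_step(img))
```

With a real network in `dream_step`, repeating this indefinitely gives the endless zooming streams shown in the blog post.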

Click the link on top for more information as well as a full gallery of images generated this way!

This one's a video showing off the interpret-and-zoom technique, using an image of clouds as the base. Perhaps watch it at lower speeds.
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ/photo/AF1QipOlM1yfMIV0guS4bV9OHIvPmdZcCngCUqpMiS9U?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 20, 2015, 12:15:19 PM
so computers now do art.
and i actually like it!  :o
crazy... welcome to the 21st century... :alien:


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 20, 2015, 12:54:32 PM
welcome to the 3rd millennium ;)


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on June 20, 2015, 01:09:33 PM
i love it, in a certain way it is a way to visualise how a neural network works, which is nicely described in the article. it reminds me of some of the images that billtavis has done for the compo. as they describe, they usually have no idea how the cells in the network are connected and why, so this can provide insights. very interesting and cool!


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 20, 2015, 10:27:21 PM
This is SOOO incredible! It's taking me some time to really grasp and appreciate the scale of what this means!
it's really watching machines think! trained by a recursive training similar to the one our brains go through as we age.
(and how could it be otherwise, the results show fractal patterns....)

"If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images:"

and out comes something like the attached image?! from random noise?!?!
this is what a machine interprets into noise?! random fluctuations?
 

if this is not a "thought".. then I don't know what a "thought" is.

so.... unbelievable...!!


ps: found a link to the final picture, which is the most awesome one for me, as it came from nothing. a blank canvas.
https://lh3.googleusercontent.com/-PcD4unsMEpc/VYKZDpoF1SI/AAAAAAAAjp8/lSq5R5o4ScI/w2786-h1296/Research_Blog__Inceptionism__Going_Deeper_into_Neural_Networks.jpg


Title: Re: Turning Neural Networks Upside Down
Post by: youhn on June 21, 2015, 10:06:17 PM
Just upvoting that image! Been fascinated by it as well, knowing how it came to be.

 :thumbsup1:


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on June 21, 2015, 11:39:14 PM
Extremely fascinating. I've been browsing the papers they link to in the blog post, but I don't get their approach.

But I have found that they use a GoogLeNet 'Inception' Convolutional Neural Network (with 22 layers!) trained on ImageNet (the latter examples on the Places data sets). It is possible to download it - fully trained - from here: https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet - and run it using the free Caffe framework (CPU & GPU).

But that only allows you to classify images (forward inference), not to go backwards. The blog post is not very clear on how this is achieved:

"In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected"

Does this mean that they are using back-propagation to adjust the original input vector? The papers referred to in the blog post [1]-[4] are much more complicated, but seem to generate much lousier images.

My favorite image is this one:
(https://lh3.googleusercontent.com/wxGI7CKdpwsokgS3tThWzYPkssFC5eoFUdvUy2JBbjQ=w1145-h862-no)

Btw: there is also a video, which I didn't notice at first:
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ/photo/AF1QipOlM1yfMIV0guS4bV9OHIvPmdZcCngCUqpMiS9U?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on June 22, 2015, 03:14:43 AM
 :o incredible images, fascinating details.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 22, 2015, 08:58:15 AM
I'm pretty sure what they are doing is taking a fixed chosen layer's output and superimposing it on the original image, then repeating the process. Each layer stores more abstract pieces of the image: layer 1 stores only line segments and dots, layer 2 begins storing curve segments, and higher layers refine curves and can store textures, individual body parts and eventually even entire objects.

They also mention that this only works together with a constraint that enforces correlation between neighboring pixels; otherwise you probably just get a noisy jumble. This is the part that's less clear to me: why it's necessary is not so surprising, but I'm not sure how they do it.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 22, 2015, 11:37:54 AM
woohooo, found more pictures:
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

hey, does anyone know how to download a photo in the highest resolution?
they have that pic I posted in large resolution at 1.5 MB, but I can only download it at 700 kB..



hmmm.. I wonder what would come out if you do this with pictures of the mandelbrot set or mandelbulb3d stuff....
I really hope they release this as a little tool or a google-beta-thing.. ;)

if anyone finds out anything more about it, or just more pictures, please share here!


(this seems so important, made it sticky)


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on June 22, 2015, 09:14:14 PM
Quote
I'm pretty sure what they are doing is taking a fixed chosen layer's output and superimposing it on the original image, then repeating the process. Each layer stores more abstract pieces of the image: layer 1 stores only line segments and dots, layer 2 begins storing curve segments, and higher layers refine curves and can store textures, individual body parts and eventually even entire objects.

It is a convolutional net (http://cs231n.github.io/convolutional-networks/), so the output of a layer is not an image (some of the first layers may have a spatial structure, but for instance the final layer will output a classification vector with 1000 entries). I imagine they must be sending information backwards through the network to arrive at something in image space. That seems to be the approach taken in the papers they cite (where they invert the networks).
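One plausible reading of "ask the network to enhance whatever it detected" is gradient ascent on the input image, maximising the norm of a chosen layer's output, with the gradient sent backwards to image space. A toy sketch with a hand-made difference "layer" standing in for the network (the layer, the sizes and the step size are all invented for illustration):

```python
import numpy as np

def layer_activation(img):
    # toy "layer": horizontal pixel differences, i.e. a vertical-edge detector
    return img[:, :-1] - img[:, 1:]

def objective(img):
    # DeepDream-style objective: squared L2 norm of the layer's output
    return (layer_activation(img) ** 2).sum()

rng = np.random.default_rng(0)
img = rng.random((16, 16)) * 0.01   # start from faint noise
start = objective(img)

step = 0.1
for _ in range(50):
    a = layer_activation(img)
    # analytic gradient of the objective w.r.t. the *input image* --
    # the "backwards through the network" direction
    grad = np.zeros_like(img)
    grad[:, :-1] += 2 * a
    grad[:, 1:] -= 2 * a
    img += step * grad   # ascend: enhance whatever the layer detects
```

After the loop the "edges" the toy layer responds to have been amplified; with a real convolutional net the gradient comes from back-propagation instead of this hand-derived formula.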

Quote
They also mention that this only works together with a constraint that enforces correlation between neighboring pixels; otherwise you probably just get a noisy jumble. This is the part that's less clear to me: why it's necessary is not so surprising, but I'm not sure how they do it.

That is discussed in the papers they reference. Ref [2] (http://arxiv.org/pdf/1412.0035v1.pdf) uses a 'total variation' regulariser as a natural-image-prior approximation to ensure correlation. Ref [3] uses another approach, whereby the natural image prior is trained on the images in the training set. But I don't think that is the approach Google used. Their images seem to be different, and much more interesting, than those in the references.
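The total-variation idea is easy to sketch: the penalty is small where neighbouring pixels agree and large for uncorrelated noise, which is exactly the correlation constraint discussed above. This is a simplified squared-difference version for illustration, not the paper's exact regulariser:

```python
import numpy as np

def total_variation(img):
    """Sum of squared differences between neighbouring pixels.
    Minimising this alongside the dream objective suppresses the
    'noisy jumble' solutions."""
    dy = np.diff(img, axis=0)   # vertical neighbour differences
    dx = np.diff(img, axis=1)   # horizontal neighbour differences
    return (dy ** 2).sum() + (dx ** 2).sum()

rng = np.random.default_rng(1)
noise = rng.random((32, 32))                       # uncorrelated pixels
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # smooth ramp
```

Here `total_variation(noise)` comes out far larger than `total_variation(smooth)`, so adding the penalty (negatively weighted) to the gradient-ascent objective biases the result toward natural-looking images.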



Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 24, 2015, 10:20:41 PM
As a follow-up, yet another neural network was applied to the output. This one is supposed to describe a scene in a sentence.
Here are the results:
http://www.cs.toronto.edu/~rkiros/inceptionism_captions.html
and here's how it works:
http://kelvinxu.github.io/projects/capgen.html
Clearly this tech has to go a long way still but it's pretty darn impressive already.
(Also it's weirdly in love with clocks)
(Also it's able to see the forest AND the tree)
(Also it does have a rudimentary sense for what fractals are.)
(Also, for those who are familiar, I'm weirdly reminded of legendary artifacts in Dwarf Fortress.)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 24, 2015, 11:10:33 PM
Quote
I'm weirdly reminded of legendary artifacts in Dwarf Fortress.
bwahaha, that just made my day! :)

edit: hm, never thought of dwarf fortress as using fractal/procedural calculations to generate everything. of course!! explains why it was able to "steal" half a year of my life.. ;)
wow, i didn't expect that they are still working on it! i left at version 0.28..  maybe i should... just once... uhoh.. better turn off the computer!  88)


Title: Re: Turning Neural Networks Upside Down
Post by: phtolo on June 24, 2015, 11:31:12 PM
There was a free online course describing back-propagation a few years ago. (https://www.coursera.org/course/neuralnets)
Not sure if the material is still available through their site.

Among other things, a wake-sleep algorithm was mentioned in one of the lectures.
First come some iterations of the wake phase, where you only go in one direction; after that, some iterations of the sleep phase, where you back-propagate with no input data.
Then the process repeats.

You can read input channels during the sleep phase and it will almost be like looking at what the model is dreaming.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 24, 2015, 11:39:21 PM
There are various Coursera courses on AI https://www.coursera.org/courses?query=AI&categories=cs-ai
There's also https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x
and https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
and probably many more

Chillheimer, there was an absolutely epic fractal bug in dwarf fortress: http://dwarffortresswiki.org/index.php/Planepacked


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on June 25, 2015, 01:01:41 AM
Here is a new implementation of Google's inceptionism visualization: https://317070.github.io/LSD/  - including an interactive demo: http://www.twitch.tv/317070

For a great introduction to the Convolutional Neural Networks behind this, try the @karpathy course: http://cs231n.github.io/convolutional-networks/


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 26, 2015, 10:48:34 AM
cool, just read this in the twitch-stream chat:
"benanne: code will be released in a few days! takes some tinkering to get it to run though"


here's a little more info about the real-time render-video (http://www.twitch.tv/317070) i copied from the chat:

 carmethene_tv: @317070 +1, thanks for doing this! any chance of making the software behind this available? it would be fantastic for parties
paradonym: carmethene_tv I'd guess the needed CPU/GPU power is WAY too high for standard desktop systems
benanne: @Carmethene_tv source will be made available in a few days, but it will mainly be useful for experimentation I think
benanne: and yeah, you need a decent GPU and lots of RAM
paradonym: @benanne - does this run on CAD or standard GPUs?
benanne: GTX 980 previously, now it's running on an older GTX 680
benanne: standard gamer GPUs
gwelengu: wow, 32 GB? that's crazy. I'll never be able to run this
ChaozCoder: @paradonym: or maybe it was some kind of bundled GPU i don't know exactly, but nothing like a supercomputer
benanne: you need 4GB of video RAM
ChaozCoder: @gwelengu: he said the machine has 32 GB you probably don't need that much
ChaozCoder: @benanne: ok
benanne: 32GB of CPU RAM but that may be fixed at some point. We suspect some sort of memory leak in the encoder
ChaozCoder: yeah i meant cpu ram not gpu ram
Dustinator2: my rig is a little lacking in the RAM department but my graphics card makes up for it until I can get more
ChaozCoder: memory leaks, that's why i love c# (for non performance critical stuff)
benanne: @Chaozcoder: not much we can do about it though, it's not in the code @317070 wrote but rather in the video encoder it's using. Maybe he'll try with a different one, I don't know
ChaozCoder: @benanne: oh
benanne: he may already have solved it, I don't know, it definitely seems to be more stable today
317070: @Chaozcoder @Benanne I think I solved the problem though. There was some fishy socket-handling by this bot which could have caused the issue.
ChaozCoder: so the bot was causing it, that is hilarious lol
foofoobarfoo: which software/language has been used in implementing this? python/theano..?
ChaozCoder: of all the things
benanne: okay. Still 27GB of memory in use right now
benanne: @Foofoobarfoo yeah, Python, Theano, Lasagne
foofoobarfoo: it's something like a diabolo network? (autoencoders)
benanne: My guess is the code could be made much more memory efficient but he doesn't need to bother because the machine has 32GB anyway
benanne: @Foofoobarfoo: no it's a feedforward neural net run backwards. Technical details here: http://317070.github.io/LSD/
paradonym: Next step: comparing different hardware's dream-images to the exact same thing
wiibrewer: You should train these on larger image sets so they become more defined
carmethene_tv: again, thanks for setting this up, it's a fantastic idea
benanne: @Wiibrewer: yeah, with more data it should work better. We did not train this net ourselves though, we just downloaded the parameters to save time
Dogeapi: was hoping for a link to some code at the github page, but good writeup!
carmethene_tv: so it only recognises what's in the list linked below?
MysteryXi: What is this running on? Isn't this super CPU intensive?
wiibrewer: @Benanne ahh, gotcha. Does it require monitoring to be trained?
wiibrewer: @Benanne meaning does someone have to be there to train it?
benanne: @Wiibrewer: it requires a lot of parameter tuning, but once you've found the parameters you just leave it running for a couple of weeks on a GPU
benanne: @Mysteryxi: the neural net is running on a GTX 680 GPU. The CPU only does the frame interpolation and video encoding (which is actually pretty intensive)
wiibrewer: @Benanne awesome, so in theory I could set up and automate this thing myself?
@benanne so this is mainly RAM work? - Could you maybe post the hardware occupation? How much load on GPU/CPU/RAM and so on?
 benanne: if you have the hardware, sure
carmethene_tv: @Benanne where should I keep an eye out for the software release?
benanne: $ uptime 18:03:19 up 19:39, 2 users, load average: 5.44, 5.88, 5.92
benanne: hexacore CPU is pretty much maxed out
wiibrewer: @Benanne is 20 gigs of ram, gtx770, quad core cpu strong enough?
benanne: for the GPU we don't have utilization info because it's a gamer GPU. But it's at 73C right now which means it's not maxed out probably
benanne: wiibrewer: if you reduce the resolution a bit it should be. Or it might not even be necessary, I don't know
 benanne: oh yeah, important detail: this is running on a Linux machine. Setting it up on Windows / Mac OS is not impossible but it'll be a lot more tedious
paradonym: this chat nearly convinces me purchasing 32 gb of RAM instead of a GTX 980 ti
MysteryXi: @benanne What distro?
benanne: @Paradonym: get both, 980 Ti is sweet too
paradonym: to run this as CPU benchmark
benanne: @Mysteryxi Ubuntu 14.04 LTS





edit2: they're updating it right now and are enabling 2-word combinations. I wonder what the tractor-sloth will look like.. ;)






Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 02, 2015, 12:48:21 AM
another blogpost http://googleresearch.blogspot.co.at/2015/07/deepdream-code-example-for-visualizing.html


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 11:30:39 AM
Wooooooohoooooooo!!!!!!!!!!
Now you can do this by yourself!
http://deepdreams.zainshah.net/
upload an image and let it dream!  :D

(edit: i guess we'll have to wait a little longer.. site is extremely slow, I've been waiting 10 minutes for the last pic)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 11:45:08 AM
soooo cool! :)


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 02, 2015, 12:49:36 PM
 O0 I am in love with this AI!


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 02, 2015, 12:56:47 PM
Second test, original image:

(http://nocache-nocookies.digitalgott.com/gallery/17/thumb_1002_01_07_15_3_59_10.jpeg) (http://www.fractalforums.com/index.php?action=gallery;sa=view;id=17975)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 12:58:06 PM
me too!
this is so cool! and how much cooler will this become, when improved?
Think of telling the computer what to look for, like in the live-stream linked above..
tell it to search for... fishes, or whatever.. in a picture and it will make it look "fishier" :)
and then this will be used on movies as well! and in 3d. and on the oculus!
oh my god!

the future is AWESOME!!!!


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 02, 2015, 01:00:53 PM
It is really weird  ;D

I wish it were possible to teach it, to choose which images it should know from the start...


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 02, 2015, 02:54:35 PM
Those are really pretty :)
More restricted/targeted versions like in the stream would be great though.
Here's an experiment to try:
Use a high res MSet image to be interpreted.
Apply another step of z^2+c.
Reinterpret
Remap...
Will the image converge to an attractor of both z^2+c and the AI?
(You know,  Orbitmap style)
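The experiment above can be sketched as a loop; `reinterpret` is a stand-in for the network's dream pass (hypothetical, since no public tool exposes this yet), while the fractal step is the real z^2+c iteration:

```python
import numpy as np

def reinterpret(img):
    # placeholder for feeding the rendered image through the dream
    # network and getting an enhanced image back
    return img

# grid of c values covering the Mandelbrot set
re = np.linspace(-2.0, 1.0, 64)
im = np.linspace(-1.5, 1.5, 64)
c = re[None, :] + 1j * im[:, None]

z = np.zeros_like(c)
for _ in range(5):
    z = z * z + c                 # apply another step of z^2 + c
    img = np.tanh(np.abs(z))      # crude, bounded rendering of the orbit
    img = reinterpret(img)        # reinterpret, remap...
```

With a real `reinterpret`, whether the image converges to a joint attractor of z^2+c and the network is exactly the open question posed above.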


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 02, 2015, 03:08:20 PM
people, where are you creating these images ?!?!? the links seem to be broken :/


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 03:14:16 PM
Quote
people, where are you creating these images ?!?!? the links seem to be broken :/
still here: http://deepdreams.zainshah.net/
you need lots of patience, it's the first online page to do this and it's probably totally overwhelmed. So it can take a few minutes until the site opens.
I actually have 6 tabs of it open, trying to dream non-stop..

edit: now it really seems to be completely down.
but I'm sure others will follow. this will get huge!!

here's the results:
http://chillheimer.deviantart.com/gallery/


Title: Re: Turning Neural Networks Upside Down
Post by: ellarien on July 02, 2015, 05:12:05 PM
What fun!

This one started as a JWildfire image. I get the impression there were a lot of dogs in the initial training set ...



Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 02, 2015, 05:37:27 PM
Quote
still here: http://deepdreams.zainshah.net/
you need lots of patience, it's the first online page to do this and it's probably totally overwhelmed. So it can take a few minutes until the site opens.
I actually have 6 tabs of it open, trying to dream non-stop..

edit: now it really seems to be completely down.
but I'm sure others will follow. this will get huge!!

here's the results:
http://chillheimer.deviantart.com/gallery/

this produces incredible results, in fact the system is trained so much that no one knows what will come out of it

your image with the fractal looks incredible, because the machine uses the predefined streaks to build stuff along them, very astonishing ...


Title: Re: Turning Neural Networks Upside Down
Post by: M Benesi on July 02, 2015, 06:20:15 PM
Quote
you need lots of patience, it's the first online page to do this and it's probably totally overwhelmed. So it can take a few minutes until the site opens.
I actually have 6 tabs of it open, trying to dream non-stop..

edit: now it really seems to be completely down.

really?


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 06:32:18 PM
Quote
really?
nope, now it's nine.. ;)


Title: Re: Turning Neural Networks Upside Down
Post by: eiffie on July 02, 2015, 08:50:16 PM
It is addictive. Thanks for the links, guys, but now I want a trainable desktop version!


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 02, 2015, 09:02:45 PM
If you want to run it locally the Google source code is here:
https://github.com/google/deepdream
or an alternative implementation here:
https://github.com/jcjohnson/cnn-vis

They both run on top of Caffe, where several pretrained image models are available:
https://github.com/BVLC/caffe/wiki/Model-Zoo

The bad news: it is very difficult to run Caffe on Windows. I tried for a couple of hours, and though it is possible to build the Caffe executable, I cannot build the needed Python wrappers. I have also tried running it in a VirtualBox Linux distribution, but there it is not possible to use the GPU (CUDA). I think I'll have to do a real Linux install.

Finally, try searching twitter for #deepdream - there are a lot of amazing images.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 02, 2015, 10:49:34 PM
the interactive video guys published their source code:
http://317070.github.io/Dream/


Title: Re: Turning Neural Networks Upside Down
Post by: thargor6 on July 02, 2015, 11:50:55 PM
mind-blowing stuff - thanks for sharing it!


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 03, 2015, 02:44:39 AM
 :o I do hope that someone (or two, or three) in the FF collective can create an image model built strictly from the images that might be extracted from the many available deep zooms into the mandelbrot set.

can this thing "watch" a video and use each frame to build an image model?


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 03, 2015, 07:43:00 AM
I don't know, but:
https://vimeo.com/132462576


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 03, 2015, 10:29:50 AM
cool video, yay! although i think the modifications look quite similar, the effect is stunning, and using it for video is just a natural step ;)


Title: Re: Turning Neural Networks Upside Down
Post by: youhn on July 03, 2015, 11:03:20 AM
Wooow ... very trippy stuff man. Way cool!

 :beer:


Title: Re: Turning Neural Networks Upside Down
Post by: weavers on July 03, 2015, 05:54:34 PM
Greetings and Salutations
Greetings and Salutations,
Greetings and Salutations


Master Jesse, Master Bib993, Master Buddhi, Master Kali, Master Kleinhuis, the sun has baked the land dry, and the rain fell from a blue sky, it's soothing droplets bring new life to every ones eyes, the shared sense, at the heart of you, and your humanity, and exploratory curiosity is a hymn, a beautiful rainbow, in the skies, that you have, given to all, so here ye, where ever you be, this message is for thee,    .    .   .

You were here as a young man, a man that would stand, but would stand high, give the world a 3D program product that would keep evolving forever in the sky,  The Master builder, both you be, always resilient, never lose your focus,  the book was being written in eternity, first in your mind, and then in your hand, took an idea created by The Mandelbulb man, turned
it into a Master Plan!

Nothing compares with the pride, and humility to have you all, as fractal leaders.


Your applications programs the 3D Mandelbulb , and 3DMandelbulber, Kali programs tutorials, with so many imaginative possibilities, was the bold, brash, powerful experimenting foundational pioneers, paving the way, but alas, Masters, behold the realms of the Universe, as it stirs the tiny particles with in it, that blow in the fractacality of its fabrics winds, doing it, searching for the Fractal is to be on an addictive drug, the brain turns into an architectural factory, addicted to exploring its self in senses and passage ways holding meanings for each participant , popular, addiction that it be, that makes one, try to cast your memory back,  to recognize patterns that live inside your head, but there is no place like the new home you found, it'll  lock in your head, what a world, what a world, do you hear, this is the most happening place in the Fractal world!
All the knowledge, scaffolding and a repository of upward know how is here!

The New Arts and the Fractal Forums, is here, symbolizing a global community, in a quest, as if medicating its self, to heal and grow into something better, and better, like as if its in a clinic!

The power, the power Master, behold the power, of evolutionary time creations provocation inside the minds mind, a mecca, this place be, only a certain type of people can be here, the refrain, a curiosity man, who can hold, and withstand the power of fractacality, in the palm of his hand!

Alas, Masters, A day will come, when you will not recognize your programs.
 Third, fourth, fifth, six, generations, and more! Tales from the towering Masters Like yourselves that have given to mankind the ability to be Artists, to be creators, to be Members of the Fractal Forums, Master mind fraternity, to be able to c the c!
Thank you Masters!

 They, will now come by the thousands to visit this place, that is part memorial, part resurrection to futures iconic art zones!

 They come,   .  .  It feels healing in a way, above the human scale, Off the rails, Off those bended knees learning to crawl, learning to standup, learning to run, oh what scientific fun. A new scene, brand new, open for business, following you, inspired core intellectual curiosity, and more, for years it has acquired an aurora of wizardry!
 
 In the Fractal Forums, one, finds a kind of clarity, that relates to history, knowing that what you see here was here long before man, its just being discovered now, as man is evolving too, who knew, who knew?

Communing with that spirit, picture it in your minds eye, majestic  was, majestic will be, ostentatious be, that takes you up forever!

They come, in search for new educations, from ladders of concrete, push a button isn't it neat, in this quiet raw technical place, the views get better, and better, all over the place, its like your lost in the clouds, completely in the open, break open the walls, to soar, zooming into the future without boundaries, no cages, no windows, no windows just space, absolute nothing from keeping them from evolving, there is a kind of hope here in the forums, a kind of warmth, a kind of dream city, that adores the pretty, this land is your land, this land is our land!
 In the Fractal Forums, where you started, profoundly intelligent and practical with the idea of making the Art settings, of life more pleasurable, designing to be reaching out, reaching the stars, exploring the outer realms of the mind! In your bid, successfully , you did!

Masters! Look as Your Art spreads to the four corners of the world, all of your followers are experimenting to bring it to the next step higher, and higher, we salute you all, you must be proud, the concept is undergoing an evolution, and again a big thanks for Master Kleinhuis, dedication to be host to the Home of the Masters of Masters!

We give also tribute, to this artist, Memo Aken, aka Chillheimer,

that made this first iconic film of the new fractal style, so rich, so mesmerizing, from its roots to the outer limits, entitled," Journey through the layers of the mind"

https://vimeo.com/132462576

Chillheimer, sage ,anxious to step off into frontiers, unknown to him, the deliberate plunge head first that excites him, he shares his exploits, bewildered and addicted, he lets go of the floor, must get more, connecting, connecting, making man one with it! A holiday ritual, that he'll make habitual, have a feeling we're not in the same world as you knew before, anymore, realize the power of it, plunging down into its window, but it's not a window, its something else, look into your film again, you have seen it many times, but look at it now, go ahead,  with new eyes, study the man in it, looking back a you! Study him closely,   .   .  . that you never knew, pretend you do!




Some body asked whats a mind, a fractal mind? A mind better than mind?
The fractal mind, is nothing more than a temporal niching soul, to a vehicle facilitator to manifest it self in a reality or a dimension! Simply stated : the mind needs a vehicle. The vehicle chosen is currently the brain. Thats not to say in time, it may choose another mechanized superior formatted thinking construct, such as a computerized manifestation yet invented to move into! And why not, its free, it can do what it wants to, if somebody were to convince it , it can! Will that somebody be a programmer magician man? We said the child first on hearing this, I never heard of this. So what, So what? The mind, comes in all kinds, all flavors, morphs on a dime, possesion of which, not understood,  if the mind is educated to expand it self without being limited, can it do anything? Can it? What do you think?
Can go ergo, where no mind has gone before, and what type of mind has  audacity, to trifle with mality, germinate what it can and then again dares to do, what it can do, dares to think, not only out side of the box, but outside the outside, where synthetic dieties dwell, its so able and nimble it creates and do, and does, magical things, the world as you are living it, cannot even dream of doing?
Just as one vehicle is, so is another, for the new mind homes, are coming earthy brother!






To be continued!


The Fractal forums, the possibility is infinite!







Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 03, 2015, 06:05:43 PM
3dickulus, if you look at the original blogpost I linked in this thread's first post, it mentions video as future work, if I recall correctly. If it wasn't in there, I'm absolutely certain it was somewhere. This network works on static frames, so all you could do would be to
- filter single frames
- do repeated filtering of frames that are altered in a different way in between (like, for instance, zooming in)

The latter isn't actually a transformed video; it's basically an on-the-fly generated deep zoom, similar to one into the M-set, and it's shown in the above-linked vimeo video.
And the former... well, you could just filter all the frames and play them back in order, but as far as I know there would not necessarily be temporal coherence. It might flicker quite a bit.
Though truly video-based techniques are in the works. It'll probably take quite some time (months or years, though I doubt decades) for them to actually arrive in the public but it'll happen.

And besides coherent filtering of entire videos, with such a temporally aware AI you could perhaps do some other fun stuff like, say, take some image and ask the network how it'll likely evolve in the next 100 frames, or super advanced "tweening" (where you interpolate what happens between two frames)

EDIT: here's what happens with multiple frames. note the jumps and lack of visual coherence:
https://pbs.twimg.com/tweet_video/CI9mmXvWUAAhz_c.mp4
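Per-frame filtering with a crude fix for the flicker described above can be sketched as follows: blend each dreamed frame with the previous output so hallucinated details persist between frames. The `dream` function is a stand-in (real temporal coherence would need something like optical flow or a video-aware network):

```python
import numpy as np

rng = np.random.default_rng(2)

def dream(frame):
    # stand-in for one dream pass over a single frame; the added
    # noise mimics the frame-to-frame hallucination jumps
    return frame + 0.1 * rng.random(frame.shape)

def filter_video(frames, blend=0.5):
    out, prev = [], None
    for f in frames:
        d = dream(f)
        if prev is not None:
            # exponential blend with the previous *output* frame,
            # trading some sharpness for temporal stability
            d = blend * d + (1 - blend) * prev
        out.append(d)
        prev = d
    return out

frames = [np.zeros((8, 8)) for _ in range(5)]   # dummy video
dreamed = filter_video(frames)
```

With `blend=1.0` this degenerates to the independent per-frame filtering that flickers in the linked clip; lower values smooth the jumps at the cost of ghosting.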


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 04, 2015, 01:56:37 AM
kram, I didn't mean applying it to each frame in a video (although that little clip looks very interesting). I meant: teach a network only mandelbrot images, instead of zoo animals, plants, scenery and real-world imagery. Then let it dream, to see what it recognizes from the fractal image model in a real-world image.


Title: Re: Turning Neural Networks Upside Down
Post by: youhn on July 04, 2015, 02:21:56 AM
Yeah, and the simulated brain gets trained with both real-world images projected onto fractals and the other way around. And of course iterate this a few times, to amplify the weird glitches that will occur.


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 04, 2015, 02:32:37 AM
Finally managed to run Google's deepdream network. It was ridiculously hard to build, though this might be due to my inexperience with Linux.

Unfortunately, I'm going away on holiday without getting a chance to try other datasets or play with the parameters.

Example (with standard settings):
(https://pbs.twimg.com/media/CJB56ZNUYAEhTk7.jpg:large)
 


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 04, 2015, 02:43:44 AM
 O0 has anyone tried feed back? like taking the above image and sending it through another dream cycle?


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 04, 2015, 02:58:00 AM
I think that is the way it works already - the above image was run for 30 or so iterations. I just stopped it at some point.


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 04, 2015, 03:02:43 AM
hmmm.. like accumulating subframes in Fragmentarium? but with a twist  :dink:


Title: Re: Turning Neural Networks Upside Down
Post by: mclarekin on July 04, 2015, 04:28:24 AM
This is also amazing.

@chillheimer.  That is freaky, love it!!


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on July 04, 2015, 09:41:37 AM
That looks amazing. I've tried this myself but the server doesn't react.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 04, 2015, 09:46:30 AM
btw, this method isn't just not frame-to-frame coherent, it's not even self-coherent on a single frame.
Like, if you run the network a second time, it'll find different variations. The clearest features will be very similar but never quite the same. The undetailed stuff (like whatever the network puts in place of a flat-colored region) will tend to be very different.
Therefore I propose, if it isn't too much computational effort (it likely is though), to generate a bunch of versions of the same image at each iteration, average them together, and use that average as input for the next iteration.
That way the noisier, less suggestive bits won't matter as much, while the more consistent bits will hopefully become all the clearer.
Of course, if you like the addition of all that noise, that's fine too. But I'd like to see a clearer version as well, if that's possible.
Using the above-linked website for this is tediously slow though. I cranked up the timeout in Firefox tenfold so I wouldn't drop the website's connection so frequently (in Chrome that setting apparently doesn't even exist), but it's a long wait per iteration. Entirely impractical.
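The averaging idea proposed here can be sketched like this. It's only a sketch under one big assumption: the hypothetical `dream` function below is a noisy identity standing in for a real DeepDream pass, whose stochastic parts (jitter, etc.) are modelled as additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def dream(image):
    # Hypothetical stand-in for one DeepDream pass; the real pass is
    # stochastic, modelled here as the identity plus Gaussian noise.
    return image + rng.normal(scale=0.05, size=image.shape)

def averaged_dream(image, samples=16, iterations=3):
    # At each iteration, dream several variants of the same input and
    # average them, so consistent features reinforce while the
    # uncorrelated noise tends to cancel out.
    for _ in range(iterations):
        image = np.mean([dream(image) for _ in range(samples)], axis=0)
    return image

out = averaged_dream(np.zeros((32, 32, 3)), samples=16, iterations=3)
```

Averaging `n` independent samples shrinks the noise standard deviation by a factor of sqrt(n), which is exactly why the consistent features should come out the clearer.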


Title: Re: Turning Neural Networks Upside Down
Post by: Kalles Fraktaler on July 04, 2015, 04:48:42 PM
Really cool. Thanks for the images. Keep'em coming :) :o


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 04, 2015, 11:44:24 PM
hmmm.. like accumulating subframes in Fragmentarium? but with a twist  :dink:

that's one heck of a twist ;)

Quote from: jcjohnson link=https://github.com/jcjohnson/cnn-vis
One trick for demystifying a CNN is to choose a neuron in a trained CNN, and attempt to generate an image that causes the neuron to activate strongly. We initialize the image with random noise, propagate the image forward through the network to compute the activation of the target neuron, then propagate the activation of the neuron backward through the network to compute an update direction for the image. We use this information to update the image, and repeat the process until convergence.
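The trick in that quote, reduced to a runnable toy: gradient ascent on the input to maximise one "neuron's" activation. As an assumption for the sketch, the trained CNN is replaced by a single fixed random linear unit, for which the gradient of the activation with respect to the input is simply the weight vector (a real network would need backpropagation here).

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(64,))      # weights of the target "neuron"

def activation(x):
    # Forward pass: one linear unit in place of a trained CNN.
    return float(w @ x)

def ascend(x, steps=100, lr=0.1):
    # Gradient ascent on the input image. For a linear unit the
    # gradient of the activation w.r.t. the input is just w.
    for _ in range(steps):
        x = x + lr * w
    return x

x0 = rng.normal(size=64)        # initialize "image" with random noise
x1 = ascend(x0)
```

After the ascent, `activation(x1)` exceeds `activation(x0)` by construction; with a real CNN the same loop converges on an image of whatever the chosen neuron detects.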


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 06, 2015, 01:20:51 PM
I tried the averaging technique on an image of a bubble:
Here are three iterations of averaging 5 images and feeding them back into that website. (Painfully slow; I wish there were an easy-to-use Windows implementation. Also, JPEG artifacts became visible, so it must compress at fairly low quality.)
(http://i.imgur.com/DydOHMO.jpg)
(http://i.imgur.com/gelYuR9.jpg)
(http://i.imgur.com/8RLGaXs.jpg)

And here are the five versions of the third iteration that then were averaged together to give the last of the above images.

(http://i.imgur.com/VG5cLmt.jpg)
(http://i.imgur.com/Rtx7Yh6.jpg)
(http://i.imgur.com/yfwFeAc.jpg)
(http://i.imgur.com/JYELxfw.jpg)
(http://i.imgur.com/Lkj5UXn.jpg)

Check out the bottom-right and top-right corners for differences.

The largest differences appear in the black region, which is inherently featureless and thus technically pure noise.
As executed, the technique is flawed: the website outputs already largely converged images, which can look very different from each other and already have a lot of noise in the black regions, and 5 samples are probably not enough.
You'd probably want more like 16+ samples, applied at each iteration step. The former is impractical with this website and the latter is impossible.
So if anybody who already has a Linux implementation up and running could try doing that, that'd be cool.


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 06, 2015, 04:03:13 PM
Just for the fun of it...

(http://orig05.deviantart.net/9306/f/2015/187/a/a/watch_artificial_intelligence_interpret_my_artwork_by_mandelwerk-d904ut0.gif)


http://fav.me/d904ut0


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 06, 2015, 04:08:37 PM
just for clarification, how does the zooming work ? do you feed the algorithm with just that part of the original image, or do you re-feed the part of the image it has generated?


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 06, 2015, 04:12:04 PM
just for clarification, how does the zooming work ? do you feed the algorithm with just that part of the original image, or do you re-feed the part of the image it has generated?

In my case I let it dream on the original image, fed it with a part of the image which came out of the "dream" and so on...

Why are there several threads about this topic by the way? Very fractal...


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 06, 2015, 04:46:54 PM
I counted three topics, but two of them are more specific. One is a gallery post and the other is a repost of a video already found in this thread
In that zoom image I loved the large amount of stuff before you zoomed into the, uh, "face". After that it became a little boring. This AI likes eyes a bit too much. Presumably because they are such a prominent feature irl too.


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 06, 2015, 04:51:42 PM
as said in the other threads, this is a new topic, definitely related to fractals - in fact it is an applied technique of chaos theory. many people confuse mandelbrot image renderings with the underlying concepts. the stuff we call fractals here in the forums, and most likely around the world, relates to 2d fractals and more recently to 3d fractals, but all such images are reduced to the simplest things that create mathematically chaotic behaviour, and we enjoy this and see it as beautiful. nevertheless the underlying concept is far wider than many people believe. we do ground-research here - as an example, it is as if we are letting apples fall in a vacuum, enjoying the results newton brought us, and playing around the whole day letting other stuff fall, like feathers or giant rocks. but that is just playing at the utmost base; the stuff that comes out of newton's principles, like building rockets or travelling through space, is what is actually made of these simple ground thoughts - similar to what we encounter now!


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 06, 2015, 08:03:02 PM
I counted three topics, but two of them are more specific. One is a gallery post and the other is a repost of a video already found in this thread
In that zoom image I loved the large amount of stuff before you zoomed into the, uh, "face". After that it became a little boring. This AI likes eyes a bit too much. Presumably because they are such a prominent feature irl too.

All right, two threads, but it could be enough with one... And yes, I am getting a bit bored with pagodas, dogs and eyes.  O0

And well spoken Christian, I agree 100%!


Title: Re: Turning Neural Networks Upside Down
Post by: youhn on July 06, 2015, 08:57:39 PM
I would like to turn it upside down in another way. Now it seems to fill up the details, zooming into complexity. But what about getting the big picture, the abstraction/connecting part of image recognition?


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 06, 2015, 09:07:39 PM
Youhn, that's a problem this AI isn't equipped to solve, I'm pretty sure. You'd need an AI that actually understands composition. Those are in the works too (and I even linked to an example of one such effort earlier in this thread).

KRAFTWERK, I guess if we really want it to know more images, we'd have to train our own. The full AI is available, right? You *could* technically have it learn whatever imagery you like.
But that's an insane amount of work. Google uses huge datasets and trains it on 1000 categories. More categories would require more outputs or perhaps a different approach altogether.


Title: Re: Turning Neural Networks Upside Down
Post by: youhn on July 06, 2015, 09:42:36 PM
Just train the AI with a game: guess the picture. Show it a detail, and it has to guess the context. I admit it can be pretty hard, even for humans. But seeing all kinds of things in a piece of bush is kind of a fail. Or the AI has indeed taken some psychoactive drugs;

(http://psychic-vr-lab.com/deepdream/img_original/3318.jpg)
(http://d11cbnttr0b724.cloudfront.net/img_dreamed/3318.jpg)
Source: http://psychic-vr-lab.com/deepdream/pic.php?d_serial=3318


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 07, 2015, 02:52:54 AM
seems more "A" than "I", as in...
"this is the closest thing in my database that resembles this bit of the picture, so just replace and blend that bit"
I know it's not that simple; that statement is an assumption based on the above result.

KRAFTWERK, I guess if we really want it to know more images, we'd have to train our own. The full AI is available, right? You *could* technically have it learn whatever imagery you like.

Q: how many images of the mandelbrot set are represented here on FF, include all stills and every frame from every zoom/pan/morph video?
me thinks this is a reasonably large number ;)


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 07, 2015, 03:26:27 AM
lol, if you want to know: more than 18.000 pictures in the gallery :D


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 07, 2015, 05:48:12 AM
...and how many minutes of video? I know, it's a bit of a rhetorical question but it's also an interesting piece of trivia. :dink:
...add this to the youtube vids and it should be quite substantial.

would this be enough to "train" a net?



Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on July 07, 2015, 08:40:54 AM
Just train the AI with a game: guess the picture. Show it a detail, and it has to guess the context. I admit it can be pretty hard, even for humans. But seeing all kinds of things in a piece of bush is kind of a fail. Or the AI has indeed taken some psychoactive drugs;



LOL it is not sane, that's for sure  O0

I guess it has to do with this:

"We aren't actually asking the system what it thinks the image is, we're extracting the image from somewhere inside the network. From any one of the layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the 'internal picture' hi-lights different features."

(taken from the description of the video here: vimeo.com/132462576 )
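In Google's published dream.ipynb notebook, that layer choice corresponds to the `end` argument of its `deepdream()` function: shallow layers tend to produce edge and texture patterns, deep layers produce eyes and animals. Here's a runnable sketch of sweeping that parameter; the layer names are real GoogLeNet blob names, but the `deepdream` function below is a stub standing in for the notebook's caffe-backed implementation, which isn't available here.

```python
import numpy as np

def deepdream(net, img, end='inception_4c/output'):
    # Stub for the notebook's deepdream(); the real version runs
    # octave-wise gradient ascent on img, stopping backpropagation
    # at the layer named by `end`.
    return img

img = np.zeros((64, 64, 3))
layers = ['conv2/3x3', 'inception_3b/output', 'inception_4c/output']
# Dream the same image once per layer, from shallow to deep.
results = {layer: deepdream(None, img, end=layer) for layer in layers}
```

With the real network, saving each result side by side makes the abstraction hierarchy described in the quote directly visible.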


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 07, 2015, 02:45:19 PM
The problem with training may not be the data set but the amount of time it takes on normal hardware. As far as I know, Google does this stuff on a Google-sized computing farm.
Though even with the data set, you have to be careful to do it right.

And for context you'll have to:
- either focus on a small number of possible contexts
- or really increase the network size and data set so it actually learns relationships between objects and perhaps even pop culture.

There's a reason why this kind of task is even hard for humans and why humans in different situations will give vastly different answers.
A "guess the context" game is more likely to tell you about the person you ask than about the context in the image. It's more of a psychological state of mind test than an intelligence one.

And the AI sees all those things in the hedge because that's literally all it knows. It doesn't know hedges; it knows dogs. (In fact, a rather large variety of dog breeds - the training data happened to emphasize that.)

What this AI also lacks is a hierarchy of categories. For instance, while it can tell a dalmatian from a pug, it'd have no clue at all that both of those things happen to be dogs.


Title: Re: Turning Neural Networks Upside Down
Post by: eiffie on July 07, 2015, 06:06:03 PM
If you are really asking the AI to create art on its own, then you would need LOTS of training on objects and their relationships. But as a tool - like fractals - that an artist uses, it seems a small set of training images (with the usual rotations and scalings) would suffice.


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 07, 2015, 06:53:14 PM
https://www.youtube.com/watch?v=oyxSerkkP4o


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 07, 2015, 08:21:18 PM
Nice. This guy did other videos as well: https://www.youtube.com/channel/UCOfLZwshIblObgtAYOCD7BA
There's one of 2001: A Space Odyssey
https://www.youtube.com/watch?v=tbTJH8aPl60
and another one which just shows off the various layers of a different network trained on landscape, using The Scream as a case study.
https://www.youtube.com/watch?v=6IgbMiEaFRY
And probably more to come.


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 08, 2015, 02:32:43 AM
Got CUDA working, which means generating new images is now much faster. Unfortunately, 2 GB of RAM on my GPU is not enough for bigger images.

If you want to ponder on the personality of these nets, here are some Rorschach-tests:


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 08, 2015, 02:33:26 AM
And:


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 08, 2015, 07:05:14 AM
@Syntopia brilliant! Rorschach-tests hmmm how about some of those weird color blindness test images ?


Title: Re: Turning Neural Networks Upside Down
Post by: M Benesi on July 08, 2015, 08:24:29 AM
Check out this page.  Couple of good links, along with a nice explanation of some realtime computer tripping.

  Actually, here is a direct link to the explanation video.  I like it.

https://www.youtube.com/watch?feature=player_embedded&v=yTvOoMCGlAc (https://www.youtube.com/watch?feature=player_embedded&v=yTvOoMCGlAc)

http://artofericwayne.com/2015/07/08/google-deep-dream-getting-too-good/ (http://artofericwayne.com/2015/07/08/google-deep-dream-getting-too-good/)


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 08, 2015, 11:37:35 AM
Color blindness tests proved surprisingly robust to inceptionism. Here is an example, but I had to crank up the step size:


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 08, 2015, 11:39:28 AM
But the most disturbing results are probably using food:



Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 08, 2015, 11:54:47 AM
As soon as anyone knows of a new tool or an (relatively) easy way to set this up on a windows machine, please post here!!
the new zainshah site doesn't allow multiple tabs and is far slower then the first... :(


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 08, 2015, 04:43:18 PM
I love the food pics, and somehow the colour blindness test is actually one of the most interesting images I've seen thus far. I'd love to see a higher-resolution version of it, to see what patterns emerge within the circles. - Although one ought to be careful with that: eventually the results will mostly look like those of all flat areas - essentially random, only changed by the border regions. It seems to me that the network has some kind of "preferred size" at which it is most likely to understand (or not) its input.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 08, 2015, 05:01:16 PM
Another video explaining why this work was even done, as well as showing off other visualization techniques.
https://www.youtube.com/watch?v=AgkfIQ4IGaM


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 08, 2015, 05:09:55 PM
very cool. what this whole thing shows is that neural networks actually work - so, contrary to the limits computer science poses on mechanical machines, it is not necessary to reach ultimate non-determinism; it is enough to provide all the kinds of information, and neural networks provide a method to actually handle any input. so this approach to making a machine actually talk, or think, really seems to be just a step ahead - at that step, translations and various other tasks can be done by them

the thing that scares me is that we in fact render us people obsolete, lol


Title: Re: Turning Neural Networks Upside Down
Post by: Fitz on July 08, 2015, 05:57:29 PM
the thing that scares me is that we in fact render us people obsolete, lol

I personally look forward to the dog/pagoda-based future the robots have in store for us.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 08, 2015, 06:16:23 PM
with all this training on dogs they'll think dogs are their original overlords and humans mere pets


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 08, 2015, 06:56:41 PM
naah, you all know the systems are far too small to overcome the actual problem. the EU is developing a neural network the size of a human brain... image recognition is a sub-part of a larger "brain"; many more components, like audio recognition, speech recognition, smell recognition etc., have to be trained as well to have a base


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 08, 2015, 07:09:55 PM
Smell recognition, huh? Well, that ought to take some time. Of all our senses, smell is the one we know the least about and also the one we can replicate least well with tech.
(And if it becomes easily reproducible, I certainly don't want the inevitable addition of smellovision :D)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 08, 2015, 08:19:30 PM
I did it! !!!!!  whuaaaaahhhhhh!  :rotating: :horsie: :joy: :joy: :joy: :joy: :joy: :joy:
I got it set up! my workstation is dreaming right now!!!!! :D
phew. took me the whole afternoon...
It's command-line stuff and I have no idea what I am doing or where all that 4 GB of stuff installed itself to, but it's working! not comfortable at all. but it does the job! :)
I don't know exactly which of the last steps led to it finally working, but if you try this guy's version, it obviously is manageable for a non-coder like me:
https://github.com/Dhar/image-dreamer
try at your own risk!! and feel free to ask me - maybe I'll recall what I did when I was stuck at the same point... probably not ;)

----
wow. this is slow! and it doesn't solve my problem of how to do resolutions larger than 1024x1024. Seems like memory kills it. damn. it doesn't make use of my 16 GB of RAM. why not?! aww.. all this for nothing?!
well, at least I can continue to dream low res without the website..


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 08, 2015, 11:06:07 PM
We give also tribute, to this artist, Memo Aken, aka Chillheimer,

that made this first iconic film of the new fractal style, so rich, so mesmerizing, from its roots to the outer limits, entitled," Journey through the layers of the mind"
I really wish I were that guy and could do what he did - but I'm not.. :( I didn't intend to give that impression..

O0 has anyone tried feed back? like taking the above image and sending it through another dream cycle?
of course, I did this with all of my pics, sometimes 5 or 6 times, changing between the 2 different algorithms..


Title: Re: Turning Neural Networks Upside Down
Post by: thargor6 on July 09, 2015, 01:46:11 AM
But the most disturbing results are probably using food:
Or porn:
http://motherboard.vice.com/en_uk/read/what-computers-dream-of-when-they-look-at-porn-nsfw (http://motherboard.vice.com/en_uk/read/what-computers-dream-of-when-they-look-at-porn-nsfw)


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 09, 2015, 02:26:12 AM
wow, that's disturbing, but...

https://www.youtube.com/watch?feature=player_embedded&v=X_tvm6Eoa3g

... so is this  :-\ chatbots? (I know a bit off topic but could be the audio interaction portion)


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 09, 2015, 04:23:15 AM
Color blindness tests proved surprisingly robust to inceptionism. Here is an example, but I had to crank up the step size:

Really interesting! When a human takes that test, recognizing the numbers or letters means you are not colour blind. I suppose another test with a grey-scale version of that image, which has no discernible pattern, would yield a similar result; if so, does this mean the NN is colour blind? Given that it's only trained on dogs and buildings, not text or numbers, I guess this result could be expected.


Title: Re: Turning Neural Networks Upside Down
Post by: flexiverse on July 09, 2015, 04:53:21 AM
I did it! !!!!!  whuaaaaahhhhhh!  :rotating: :horsie: :joy: :joy: :joy: :joy: :joy: :joy:
I got it set up! my workstation is dreaming right now!!!!! :D
phew. took me the whole afternoon...
It's command-line stuff and I have no idea what I am doing or where all that 4 GB of stuff installed itself to, but it's working! not comfortable at all. but it does the job! :)
I don't know exactly which of the last steps led to it finally working, but if you try this guy's version, it obviously is manageable for a non-coder like me:
https://github.com/Dhar/image-dreamer
try at your own risk!! and feel free to ask me - maybe I'll recall what I did when I was stuck at the same point... probably not ;)

----
wow. this is slow! and it doesn't solve my problem of how to do resolutions larger than 1024x1024. Seems like memory kills it. damn. it doesn't make use of my 16 GB of RAM. why not?! aww.. all this for nothing?!
well, at least I can continue to dream low res without the website..


Just use fractal compression to enlarge to any size without loss. With these images, fractal compression is perfect for creating higher-resolution versions.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 09, 2015, 10:59:27 AM
Just use fractal compression to enlarge to any size without loss.

 :o
How?!? What tool can do this? (win8)

(I found out why I can't do larger images. I changed the allocated memory in Oracle VirtualBox to 8 GB, but as soon as I start vagrant up it automatically sets it back down to 2048 MB. I already use the 64-bit cmd.exe, so that is not the problem. Any tips on how I can raise the VM's memory permanently?)

also: found that with a max of 2 GB you can go as high as around 1550 pixels.


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 09, 2015, 11:00:08 AM
wow, that's disturbing, but...

https://www.youtube.com/watch?feature=player_embedded&v=X_tvm6Eoa3g

... so is this  :-\ chatbots? (I know a bit off topic but could be the audio interaction portion)

lets be careful ... the AI is trained to be veeeeeeeeeeeeeeery polite :D


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 09, 2015, 02:10:35 PM
Some notes from my experiments so far:

Speed

(tested on 0.4MP image, 4 octaves, 10 iterations):

Use "caffe.set_mode_gpu()" to enable GPU (if you have a CUDA compatible setup)

CPU (Intel i7-4710HQ CPU @ 2.50GHz): 154.99 s
GPU (Nvidia 850M):   7.03 s (22x faster)
Using NVIDIA cuDNN may improve GPU speed by an additional factor of 2x (haven't tried)

Memory

On my 2GB GPU, I run out of memory when running on images larger than ~0.5 MP.

I have made the following function to automatically resize:
Code:
def loadWithMaxSize(imagename, maxSize = 400000):
    # Open the image and scale it down (preserving aspect ratio) so
    # its total pixel count does not exceed maxSize.
    ig = PIL.Image.open(imagename)
    dim = ig.size[0]*ig.size[1]
    print "Initial size: ", dim, " ", ig.size[0], "x", ig.size[1]
    # Clamp to 1.0 so images already under maxSize are never upscaled.
    factor = min(1.0, math.sqrt(float(maxSize)/dim))
    ig = ig.resize((int(factor*ig.size[0]),int(factor*ig.size[1])), PIL.Image.ANTIALIAS)
    dim = ig.size[0]*ig.size[1]
    print "Final size: ", dim, " ", ig.size[0], "x", ig.size[1]
    return ig

imagename = '/home/mikael/Downloads/cheese1.jpg'
img = np.float32(loadWithMaxSize(imagename))
showarray(img)

Predictions

It is also interesting to see what the net actually predicts to be in the image.

To do this you need the label definitions, which for instance can be downloaded from here:
https://github.com/HoldenCaulfieldRye/caffe/blob/master/data/ilsvrc12/synset_words.txt


Code:
with open("/home/mikael/Downloads/synset_words.txt") as f:
    labels = f.readlines()
        
input_image = caffe.io.load_image(imagename)
prediction = net.predict([input_image], oversample=False)
top5predictions = prediction[0].argsort()[-5:][::-1]

for p in top5predictions:
    print "Predicted class:", labels[p].strip('\n').split(' ', 1)[1], " (", "{:.3f}".format(100*prediction[0][p]), "%)"



which will output something like:

Code:
Predicted class: cheeseburger  ( 99.991 %)
Predicted class: hotdog, hot dog, red hot  ( 0.004 %)
Predicted class: bagel, beigel  ( 0.003 %)
Predicted class: guacamole  ( 0.001 %)
Predicted class: plate  ( 0.000 %)

In order for this to work, you need to add a 'raw_scale' parameter to the classifier initialization, e.g.:

Code:
net = caffe.Classifier('tmp.prototxt', param_fn,
                       mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
                       channel_swap = (2,1,0), # the reference model has channels in BGR order instead of RGB
                       raw_scale=255)





Title: Re: Turning Neural Networks Upside Down
Post by: youhn on July 09, 2015, 09:51:00 PM
lets be careful ... the AI is trained to be veeeeeeeeeeeeeeery polite :D

LOL!!! Thanks for sharing this video conversation. Great quotes to be plucked from the context. But all together ... "it makes me sad".

Wish they could add some tone/melody/emotion. It's kind of stupid to read *whispers* while the robotic voice continues to speak out, loud and constant as ever. More dynamics! More fire! Let's go to war!

And why does it seem that they are trying real hard to be polite ... ?!  :hmh:

Weird stuff. AI programmers have lots of learning and developing to do.  :order:


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 10, 2015, 08:07:48 PM
here's a nice guide that shows you how to set up deepdream on windows:
http://overstruck.com/how-to-use-googles-deepdream-in-windows/

wish I had found this before spending 2 days cursing and crying ;)


and I recommend this, to find out what the different Layers do with your images:
https://www.reddit.com/r/deepdream/comments/3ck6mi/here_is_a_python_script_to_test_every_type_of/?sort=old
here someone posted example images of all modes:
http://hideepdreams.tumblr.com/tagged/layer-test


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 10, 2015, 10:51:37 PM
Best online site yet:
https://dreamscopeapp.com
no queue, very fast, and lots of modes to choose from! :)

also, best collection of tips and tricks and many links here:
https://www.reddit.com/r/deepdream/comments/3cawxb/what_are_deepdream_images_how_do_i_make_my_own/


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 10, 2015, 10:56:51 PM
here's a nice guide that shows you how to set up deepdream on windows:
http://overstruck.com/how-to-use-googles-deepdream-in-windows/

wish I had found this before spending 2 days cursing and crying ;)


and I recommend this, to find out what the different Layers do with your images:
https://www.reddit.com/r/deepdream/comments/3ck6mi/here_is_a_python_script_to_test_every_type_of/?sort=old

Best online site yet:
https://dreamscopeapp.com
no queue, very fast, and lots of modes to choose from! :)

Thanks Muchly for those links!  Been meaning to check this out but never found any direct links to where to start, only background info and images.  I've been too stressed with personal life to patiently read the whole thread or full articles to find where to begin using this toy   :sad1:

Was planning on trying this weekend, but this should save a lot of time seeking the right starting point.  I've got a lot of images I wanted to plug into this, so I may as well go have a peek.  Will post any interesting results  O0


Title: Re: Turning Neural Networks Upside Down
Post by: Fitz on July 11, 2015, 05:11:53 AM
This is one of the most impressive developments in technology that I've seen in my lifetime. The thought network of a computer was able to turn this:

http://i.imgur.com/vkxSzlt.jpg

Into this:

http://i.imgur.com/5H16Ykt.jpg


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 11, 2015, 12:49:28 PM
google have updated this page:
https://github.com/google/deepdream/blob/master/dream.ipynb
at the bottom they show how you can analyze a picture and interpret it into another picture :)


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 11, 2015, 01:10:28 PM
Just curious why this obvious one hasn't been seen here yet, unless I missed it!

(http://i106.photobucket.com/albums/m278/sockratease/mdb_zpsiz3sy44p.jpg) (http://s106.photobucket.com/user/sockratease/media/mdb_zpsiz3sy44p.jpg.html)

Woof!

I finally used this   O0  

Despite being a google thing, it's actually pretty cool   :alien:


EDIT - YUP!  I knew this had to be in this thread!  I somehow read from page 1 to page 3, and missed the stuff on page 2!  Oopsy...   :embarrass:


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 12, 2015, 11:36:49 AM
more resources, huge overview of how all the different layers and training models look like:
http://www.csc.kth.se/~roelof/deepdream/


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 12, 2015, 11:52:09 AM
Looks like there is a new technique in town
https://plus.google.com/+ResearchatGoogle/posts/NoBnPqwq3wh
"Guided Dreaming" where you take an image which's motifs are searched for in another image.
I suspect it'll still only work within whatever the network has actually learned but nevertheless it could be an interesting way to combine two fractals: Have one be the input and one the search image.
Here's one of the examples. The AI is asked to specifically dream of antelopes and Savanna trees found in an image of clouds in the sky:
(http://i.imgur.com/JXPoq05.jpg)
This already comes closer to what might be considered art: First, a scene is interpreted, then that interpretation is used to modify another scene.


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 12, 2015, 12:33:51 PM
Looks like there is a new technique in town
https://plus.google.com/+ResearchatGoogle/posts/NoBnPqwq3wh
"Guided Dreaming", where you take an image whose motifs are then searched for in another image....

I left a similar suggestion in one of those sites' comment boxes recently!  I wanted a way to favor bugs over doggies, and thought that either an "input image" used as a guide or a way to assign "weighted values" to the general types of stuff it recognizes might do the trick.

Looks like I was thinking along the same lines as these folks.

What an amusing way to kill an afternoon!  I still need to get this installed and running locally so I can do higher resolution stuff, but have had great fun with the online toys   O0

Here's what it did to my entry in this year's Contest.  It started out as a Black and White JWildfire image.  Colors all added by the toy.

(http://i106.photobucket.com/albums/m278/sockratease/ce1ea7c6-7571-46dc-87e1-5518dd006674_zpsywzvrffn.jpg) (http://s106.photobucket.com/user/sockratease/media/ce1ea7c6-7571-46dc-87e1-5518dd006674_zpsywzvrffn.jpg.html)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 12, 2015, 12:52:30 PM
wow, this whole thing just keeps getting better and better. i got the other training net running
http://www.csc.kth.se/~roelof/deepdream/VIS_googlenet_places205_allLayers.html
and this in combination with mandelbulb3d architecture is just frigging mindblowing!!!  :o
I want to live there!!


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 12, 2015, 03:14:58 PM
... I wanted a way to favor bugs over doggies ...

Has anybody else noticed a general trend for the AI to put doggies on brightly colored areas and buggies on darker colored areas?

I begin to see how this thing thinks! Going to test this theory...


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 12, 2015, 03:24:08 PM
Has anybody else noticed a general trend for the AI to put doggies on brightly colored areas and buggies on darker colored areas?

I begin to see how this thing thinks! Going to test this theory...


How better to test this idea than with a black and white photo of Bettie Page?

(http://i106.photobucket.com/albums/m278/sockratease/betty.jpg) (http://s106.photobucket.com/user/sockratease/media/betty.jpg.html)

Pretty sure that is Terms Of Service compliant.  I can be blind to such things, so if not...  I am confident that it will be removed with no hard feelings   :police:

But it does support my theory:


(http://i106.photobucket.com/albums/m278/sockratease/bettydream_zpszj56hywk.jpg) (http://s106.photobucket.com/user/sockratease/media/bettydream_zpszj56hywk.jpg.html)

I never thought of Bettie Page as a dog, but ...  She's a very pretty dog!

WOOF!

But notice the dress has more of the snakes and centipedes while the brighter areas have more doggies and birds.  It's not an absolute rule, but it seems to favor one over the other in terms of distribution based on color as a guide.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 12, 2015, 03:47:20 PM
Well, presumably to some extent it uses color information.
For instance, in that twitch bot that just keeps on dreaming indefinitely, there are some clear trends: Bananas tend to make the image more yellowy, strawberries more reddish, volcanoes tend to cause a ton of darkness - almost blackness - along with a few bright red splotches (so there is a clear emergent behavior of rocky bits and lava), while bubbles, which often contain some bright reflection, will overall tend to lighten up the image...
So clearly, color information is present in the resulting vectors.
It's definitely not an absolute though: I once saw bubbles that remained rather dark for the entire minute of run-time and it was one of the most beautiful bubble runs there was.


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on July 12, 2015, 04:34:14 PM
I was just thinking in terms of AI Psychology.

Bright = Happy = Puppy Dogs   :spgloomy:

Dark = Scary = Buggy Bugs   :spsad:

Again, not a strict rule, but a general theme.  Meanwhile, interesting results from a Z Buffer image of a Cow made for use in MB3D!

(http://i106.photobucket.com/albums/m278/sockratease/46_zpszpdswcpy.jpg) (http://s106.photobucket.com/user/sockratease/media/46_zpszpdswcpy.jpg.html)


(http://i106.photobucket.com/albums/m278/sockratease/cowdreem_zpsrhiaxbui.jpg) (http://s106.photobucket.com/user/sockratease/media/cowdreem_zpsrhiaxbui.jpg.html)


But note the doggy in the nose?  Black area.  Expected bugs there...

Guess it just likes dogs!


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 12, 2015, 05:23:38 PM
Yeah, the guided stuff will not create new objects which are not part of the 1000-item training set. Instead of guiding using an image, it would be easier if you could specify the categories directly.

Here is one guided by insects:



Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 12, 2015, 05:24:12 PM
And another one guided by beer:


Title: Re: Turning Neural Networks Upside Down
Post by: tit_toinou on July 12, 2015, 05:25:08 PM
This definitely needs a new section; there is too much potential in this...

Just like someone said earlier in the topic, we should try to feed a neural network with fractal patterns (the easiest would be to take the images from the Mandelbrot Safari by Pauldebrot on fractalforums!) and apply the same technique to images from the real world.
That would be much more interesting; I'm tired of seeing these dog & bird faces emerging all the time :) I would prefer fractal patterns to emerge.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 12, 2015, 06:51:11 PM
This definitely needs a new section; there is too much potential in this...
I second that.
What do you think Christian?



Now that I can finally do HiRes dreaming up to 2560*1440, I'm confronted with the following problem:
It seems like the shapes that are dreamed (like dogfaces, buildings..) have a fixed pixel size.
This means that if I did a picture at 500px width and it had e.g. 2 dogfaces in it (say 200 pixels in diameter), when I redo the same image at a higher resolution I get a totally different output: 2*5=10 dogfaces, still 200 pixels in diameter, but looking much smaller in the total picture..
--maybe a picture is easier. The upper one was dreamed at 1024, the lower was dreamed at 2048 and then downscaled to 1024. See how the temples are half the size?
How can I prevent that? Any idea, Syntopia perhaps?


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 12, 2015, 07:16:54 PM
What would that new category be? AI-guided or -transformed art? That, then, would include:
https://www.youtube.com/watch?v=buXqNqBFd6E
or
http://picbreeder.org/
among other things.

It really likes dogs simply because its training set proportionally contained a lot more dogs than anything else, because so many of its categories are separate breeds of dogs. If it were trained more evenly amongst different categories - either by providing multiple categories for more specific variations of things other than dogs, or by reducing "dog" to a single category - you'd end up with much more varied pics, I expect.


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 12, 2015, 07:34:26 PM
Now that I can finally do HiRes dreaming up to 2560*1440, I'm confronted with the following problem:
It seems like the shapes that are dreamed (like dogfaces, buildings..) have a fixed pixel size.

The Google script applies the reverse transformation at different scales (which they call 'octaves'). By default it uses 4 different scales, each scaled by a factor of 1.4x.
Try increasing the number of octaves: you will still get the small structures, but some larger structure might survive from the smaller scales. The parameters are called octave_n=4, octave_scale=1.4 in the script.
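A minimal sketch of that multi-scale 'octave' loop, assuming a nearest-neighbour resize and a stand-in dream_step (the real script's gradient ascent through the network is omitted, so this only shows how detail is carried between scales):

```python
import numpy as np

def resize(img, h, w):
    # Nearest-neighbour resize; enough for a sketch (the real script
    # uses proper interpolation).
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def dream_step(img):
    # Placeholder for one gradient-ascent step of the actual deepdream
    # script, which backpropagates from a chosen network layer.
    return img

def deepdream_octaves(img, octave_n=4, octave_scale=1.4, iter_n=10):
    # Build the pyramid: octaves[0] is full size, each following entry
    # is smaller by a factor of octave_scale.
    octaves = [img]
    for _ in range(octave_n - 1):
        h = max(1, int(octaves[-1].shape[0] / octave_scale))
        w = max(1, int(octaves[-1].shape[1] / octave_scale))
        octaves.append(resize(octaves[-1], h, w))
    detail = np.zeros_like(octaves[-1])
    for octave in reversed(octaves):
        h, w = octave.shape[:2]
        # Carry the detail dreamed at the smaller scale up to this one.
        detail = resize(detail, h, w)
        img = octave + detail
        for _ in range(iter_n):
            img = dream_step(img)
        detail = img - octave  # keep only what dreaming added here
    return img
```

With a real dream_step, raising octave_n makes the large structures dreamed at the small scales survive into the final image, which is why it fixes the fixed-pixel-size problem.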

It really likes dogs simply because its training set proportionally contained a lot more dogs than anything else, because so many of its categories are separate breeds of dogs. If it were trained more evenly amongst different categories - either by providing multiple categories for more specific variations of things other than dogs, or by reducing "dog" to a single category - you'd end up with much more varied pics, I expect.

I count something like 120 dogs out of the 1000 categories. Since they are adjacent in the feature vector (entries 152 to 269), I imagine it should be possible to minimize the influence of these categories somehow. But the problem is that the Google script works by transferring features from some intermediate layer of the guide (e.g. 'inception_4c/output') - and there the dimensions do not correspond nicely to the 1000 categories. I'm pretty sure it can be done - if I recall correctly, one of the earlier online scripts allowed you to tweak categories?
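As a purely hypothetical illustration of suppressing that contiguous dog range at the final classifier layer (the guided script transfers intermediate-layer features, so this shows only the idea, not the actual mechanism):

```python
import numpy as np

# Hypothetical sketch: suppress the dog categories in a 1000-way
# class-probability vector. As noted above, the dog breeds sit in a
# contiguous run of the ImageNet classes (roughly entries 152 to 269).
DOG_RANGE = slice(152, 270)

def suppress_dogs(probs):
    probs = probs.copy()
    probs[DOG_RANGE] = 0.0       # zero out every dog breed
    return probs / probs.sum()   # renormalize to a distribution

# Example: a uniform distribution over the 1000 categories.
probs = np.full(1000, 1.0 / 1000)
out = suppress_dogs(probs)
```

Doing the equivalent on an intermediate layer like 'inception_4c/output' would first require working out which of its channels respond to dogs, which is the hard part.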


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 12, 2015, 09:08:18 PM
5cents ;)
I have 6G of system mem and a 2G nVidia card; rendering a 2880x3840 image requires 3.1G of system mem (including the running X11 desktop)
Using cnn-vis https://github.com/jcjohnson/cnn-vis with cuDNN

in cnn-vis the gfx card mem is used for calculating, not for holding image data; the image data goes to a png file.
The --batch-size option controls how much RAM and how many threads are used on the GPU.
I get this output no matter what the image dimensions are when --batch-size=128 ...
Code:
I0712 11:20:44.759096  8298 net.cpp:213] Network initialization done.
I0712 11:20:44.759105  8298 net.cpp:214] Memory required for data: 800160768
I can push it past 128, but 128 and below seems to perform better :)
using hybridCNN_iter_700000_upgraded.caffemodel


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 12, 2015, 09:24:49 PM
What would that new category be? AI-guided or -transformed art? That, then, would include:

so, i would say the new category should just be "AI", because in my eyes it's not the art that stands out here, it's the applied neural network stuff in action, which is going to be used for far more different stuff. so, perhaps a new main board "Neural Networks - AI" with subsections "Art" "and whatever comes to our minds to group it"


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on July 12, 2015, 09:31:39 PM
so, and art forms we have here would be "music creation/recognition" "image creation/recognition" "talk/turing tests"


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 12, 2015, 09:36:29 PM
If you actually want to make a new category out of this (I'm honestly not sure if the current amount of content warrants this, although surely the fairly near future will bring a lot more of this), this thread and the other two should probably be moved there, then.


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on July 13, 2015, 02:59:34 AM
a new main board "Neural Networks - AI" with subsections "Art", "and whatever comes to our minds to group it "
:beer:

@Chillheimer I ran into the same thing. These two pics are reduced from 3840x2880; one is at 10 iterations, the other at 100


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 13, 2015, 09:56:28 AM
Try increasing the number of octaves
thx syntopia! setting octaves up to 10 did a perfect job! :) Love the outcome:
http://www.fractalforums.com/images-showcase-(rate-my-fractal)/heavens-gate/msg85752/#msg85752



Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 16, 2015, 12:12:51 AM
best lookup reference yet, 81 layers at 99 iterations 8 octaves:
http://imgur.com/a/or4rZ?gallery


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 16, 2015, 12:29:31 AM
Very nice stuff.
I assume the numbers in those images mean what layer depth we are talking about?
So inception_3b/output would be in the third layer of 5?
If that's the case, that really shows just how biased the network is towards dogs. In all the layer 3 pictures, inception_3b/output is the only one that features something recognizable, and that's dogs.
All other more complex objects (like, say, humans) only start becoming recognizable in layer 4 upwards.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 17, 2015, 07:49:52 PM
what a nice guy, I asked if he could do this for the googlenet places205 set and he did :)
https://www.reddit.com/r/deepdream/comments/3dm3ba/exploring_places_81_layer_99_iterations_8_octaves/

I assume the numbers in those images mean what layer depth we are talking about?
So inception_3b/output would be in the third layer of 5?
If that's the case, that really shows just how biased the network is towards dogs. In all the layer 3 pictures, inception_3b/output is the only one that features something recognizable, and that's dogs.
All other more complex objects (like, say, humans) only start becoming recognizable in layer 4 upwards.
I don't think so.. I think that these are different training sets. some were sets with lots of dogs, others with reptiles, or with cars. and those with "pool" are combinations of them. at least that's what I believe. I have no clue what is really going on ;)


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on July 17, 2015, 08:51:41 PM
That places-training set is really neat


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 18, 2015, 07:23:04 PM
Very nice stuff.
I assume the numbers in those images mean what layer depth we are talking about?
So inception_3b/output would be in the third layer of 5?
If that's the case, that really shows just how biased the network is towards dogs. In all the layer 3 pictures, inception_3b/output is the only one that features something recognizable, and that's dogs.
All other more complex objects (like, say, humans) only start becoming recognizable in layer 4 upwards.

All of the 9 'inception modules' are at different layers (the modules are applied after each other). The reason for the naming seems to be that e.g. 'inception_3b' and 'inception_3a' operate on the same spatial size of data. See http://arxiv.org/pdf/1409.4842.pdf, in particular page 7 for the NN architecture, and table 1. Each inception module spans 3 layers.

what a nice guy, I asked if he could do this for the google205 places set and he did :)
https://www.reddit.com/r/deepdream/comments/3dm3ba/exploring_places_81_layer_99_iterations_8_octaves/
I don't think so.. I think that these are different training sets. some were sets with lots of dogs, others with reptiles, or with cars. and those with "pool" are combinations of them. at least that's what I believe. I have no clue what is really going on ;)

All of the layers are trained on the same data set.


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on July 26, 2015, 01:35:30 AM
I managed to run the deepdream network with CUDA acceleration on Windows, using this build for the Python wrappers
https://github.com/drakh/caffe-win-python-anaconda/

I used Anaconda 2.7, together with CUDA 7 on Windows 8.1. I needed to install protobuf using:
"conda install -c https://conda.binstar.org/dhirschfeld protobuf"


Title: Re: Turning Neural Networks Upside Down
Post by: schizo on August 23, 2015, 12:43:41 PM
Another free online service to create deep dream images is https://dreamdeeply.com/
The results are quite small but still interesting.
Here are three examples of deep-dreamed fractals created with this service:
(http://nocache-nocookies.digitalgott.com/gallery/18/6545_23_08_15_12_34_21.jpeg)

(http://nocache-nocookies.digitalgott.com/gallery/18/6545_21_08_15_9_22_33.jpeg)

(http://nocache-nocookies.digitalgott.com/gallery/18/6545_19_08_15_9_43_26.jpeg)

Have a good dream  :dink:


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on August 23, 2015, 01:22:54 PM
those are actually amongst the most interesting I've seen thus far. Neat!


Title: Re: Turning Neural Networks Upside Down
Post by: Caleidoscope on August 23, 2015, 02:41:40 PM
I just tried one, funny!  Thank you for the URL. 


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on August 30, 2015, 12:51:17 PM
Next Level!  :o

https://www.youtube.com/watch?v=-R9bJGNHltQ&feature=youtu.be

(http://i.imgur.com/sb8dHcY.png)


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on August 30, 2015, 01:03:32 PM
Really neat! I'm waiting for ones working on movies, though. (Those are already in the works, too.)
And I'm curious what DNNs that have attention will do. (Currently the movie networks mostly "just" have what's called long short-term memory.)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on September 01, 2015, 07:48:41 PM
and it is open source!
woohoo!
https://github.com/kaishengtai/neuralart

now I'll just have to wait for a guide for dummies.. ;)


Title: Re: Turning Neural Networks Upside Down
Post by: Kalles Fraktaler on September 01, 2015, 09:02:26 PM
and it is open source!
woohoo!
https://github.com/kaishengtai/neuralart

now I'll just have to wait for a guide for dummies.. ;)
I want a mandelbrot fractal as van Gogh!


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on September 04, 2015, 10:06:40 AM
wow:
http://imgur.com/a/ujf0c


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on September 04, 2015, 09:00:04 PM
http://gitxiv.com/posts/jG46ukGod8R7Rdtud/a-neural-algorithm-of-artistic-style an implementation


Title: Re: Turning Neural Networks Upside Down
Post by: KRAFTWERK on September 04, 2015, 10:14:54 PM
This is very interesting, but also sad and boring...
My first reaction to it was pure curiosity, but I see no point in making bad versions of old masters' works. Make something new instead!
Just my 2 cents... this fantastic feature should be used in a different way; these look-alike images just make me bored.


Title: Re: Turning Neural Networks Upside Down
Post by: cKleinhuis on September 04, 2015, 10:37:18 PM
the functionality is jaw dropping, the creativity to use it comes now


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on September 04, 2015, 10:47:56 PM
Some of those certainly are better than others. This is just brilliant:
(http://i.imgur.com/eBxFDoY.png)

Currently it's probably better as a source of inspiration than for directly usable final results, though.
But that's along the lines of what I was saying before: I'm sure computers will eventually be capable of creativity, and they have come a long way, but this isn't it yet.


Title: Re: Turning Neural Networks Upside Down
Post by: Syntopia on September 05, 2015, 11:08:45 AM
Kyle McDonald has made a lot of progress on this: https://medium.com/@kcimc/comparing-artificial-artists-7d889428fce4

If you look at his Twitter stream, there are some examples of animations: https://twitter.com/kcimc/media (I'm afraid his latest one is NSFW)

Artistic potential aside, I think these techniques are extremely impressive. And there seems to be much potential. Imagine mixing different artists, or applying the same techniques to audio or literature.


Title: Re: Turning Neural Networks Upside Down
Post by: Fractal universe on September 06, 2015, 02:57:46 PM
Now they make fractal zooms using only deepdream.

One zoom that I particularly like begins with noise, and the patterns appear progressively.
I'll let you watch it.

https://www.youtube.com/watch?v=dbQh1I_uvjo


Title: Re: Turning Neural Networks Upside Down
Post by: 3dickulus on September 06, 2015, 07:49:46 PM
ok, that one is interesting  O0


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on September 09, 2015, 01:44:25 PM
dammit, I'm too slow. I also have a deepzoom using deepdream in the works.. will take a few more days..

but I actually came here to post this list of deepstyle images:
http://kylemcdonald.net/stylestudies/

also, here is a tutorial how to make deepstyle images by yourself.
https://www.reddit.com/r/deepdream/comments/3jwl76/how_anyone_can_create_deep_style_images/


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on September 09, 2015, 02:21:37 PM
That latest batch of style studies shows that this technique still doesn't work wonders.
The "styles" are pretty good at picking up color palettes, but actual strokes are only really carried over if the style image is fairly similar to the input image - it looks like the input and style images need fairly matching amounts of noise. Then the resulting picture switches the noise "style".
Proposal: Add Gaussian noise to your input image (specifically, if you can be selective about this, to overly smooth areas) and see if that improves things.
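That proposal could look something like this - a minimal sketch assuming a float image in [0, 1]; the selective smooth-area variant would additionally need a local-variance mask, which is left out here:

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    # img is a float array in [0, 1]; sigma is the noise standard
    # deviation. The idea above: matching the noise level of the style
    # image might help the strokes carry over, not just the palette.
    rng = np.random.default_rng(rng)
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep values in a valid range
```

You would run this on the input image before feeding it to the style-transfer script and compare results for a few sigma values.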


Title: Re: Turning Neural Networks Upside Down
Post by: hgjf2 on September 12, 2015, 10:35:53 PM
Now they make fractal zooms using only deepdream.

One zoom that I particularly like begins with noise, and the patterns appear progressively.
I'll let you watch it.

https://www.youtube.com/watch?v=dbQh1I_uvjo

This type of fractal was used for the clip by Skrillex and Diplo with Justin Bieber, in the abstract mathematical dot paintings


Title: Re: Turning Neural Networks Upside Down
Post by: hgjf2 on September 12, 2015, 10:37:41 PM
COOL
 :peacock: :smileysmileys:


Title: Re: Turning Neural Networks Upside Down
Post by: schizo on October 03, 2015, 01:41:07 PM
Can't resist posting this deep dream. It is an older image created with Vchira, and it runs well through deep dream. I especially like the fox in the upper right corner.  ^-^

(http://nocache-nocookies.digitalgott.com/gallery/18/6545_03_10_15_1_35_18.jpeg)


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on March 09, 2016, 10:56:07 PM
There is now software out to train your own perception network. With it, I think, it should be possible to do deep-dream-like imagery with _any_ image set you like.
http://googleresearch.blogspot.co.at/2016/03/train-your-own-image-classifier-with.html
Of course, training one of these on your own will require tons of data and lots of computing power, but perhaps you could feed it something like a large library of fractals, making it guess what specific rendering technique or formula or whatnot was used.
If you then were to use it in DeepDream, it would turn everything into fractals instead of dogs or buildings (the two variants that have already been around).


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on April 25, 2016, 01:02:14 AM
News from this department: Large improvements were made as seen here:
https://www.youtube.com/watch?v=3lp9eN5JE2A

This is the currently published state of the art:
http://arxiv.org/pdf/1602.03616v1.pdf
But if you watch the video, at around 51 minutes into the presentation he shows off even better results than in this paper. The latest results are almost life-like. They look like they were taken by cameras that were too drunk to drive, but no longer like ones tripping on LSD.


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on May 25, 2016, 07:22:43 AM
I found another article. These images are amazing.
http://www.boredpanda.com/inceptionism-neural-network-deep-dream-art


Title: Re: Turning Neural Networks Upside Down
Post by: Tglad on May 25, 2016, 08:35:57 AM
Wow, that's the most insanely impressive program I have seen in years.

I noticed someone crossed a leopard:
(https://ost6imgs.s3.eu-central-1.amazonaws.com/uploads/content/image/148997/thumb_img_681921f41b.jpg)
with a Mandelbulb3D creation:
(https://ost6imgs.s3.eu-central-1.amazonaws.com/uploads/style/image/47121/thumb_img_69426b4af2.jpg)
to make:
(https://ost6imgs.s3.eu-central-1.amazonaws.com/uploads/pimage/imageurl/577821/thumb_img152620_dba39930473bbd45.jpg)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on May 25, 2016, 09:34:59 AM
I just wish this was finally put into a nice little program.
with resolutions higher than 640*480.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on May 25, 2016, 12:09:41 PM
There is https://deepart.io/ which takes this
https://deepart-io.s3.amazonaws.com/content/TWcZpFSXlwJszVdiKtaRyIjuC.jpg (large image)

and this (HUGE IMAGE of the famous "pillars of creation")
https://deepart-io.s3.amazonaws.com/style/MYLTKWmuHOSJZcdNfFqQljIRU.jpg

and turns it into this
(https://deepart-io.s3.amazonaws.com/cache/1d/d5/1dd57f58e4440ffa3d96fcf78c5a215e.jpg)
but for higher resolutions you gotta pay

And if the servers aren't stalled by huge queues, it's actually a matter of minutes to get the images.

I believe there are some open source implementations on GitHub though? Just gotta compile them... Just...

That leopard looks great! In experimenting with deepart.io I noticed that pictures of already somewhat similar structure, or ones where you use a "style" with a lot of specific texture, work best. The leopard was bound to be a great match, with already similar colors. The result is quite astonishing.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on May 25, 2016, 12:35:43 PM
ah, good to know that there is progress.
but paying 149€ for a 3000*3000 image is seriously overpriced.
20€ for full hd would be ok, but for 1300*1300... not my kind of format.

I guess prices will fall with increasing cpu speeds and more competition.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on May 25, 2016, 01:02:33 PM
It's certainly on the pricey end


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 01, 2016, 08:09:16 PM
Google Brain now has launched Magenta, dedicated to using deep neural networks to create art
http://magenta.tensorflow.org/
Everything will be open source and available here
https://github.com/tensorflow/magenta
and there is an official discussion site here
https://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss
though it's currently rather empty. Must have just been opened.
Generally, https://tensorflow.org/ could be interesting


Title: Re: Turning Neural Networks Upside Down
Post by: Tglad on June 02, 2016, 03:46:05 AM
Quote
paying 149€ for a 3000*3000 image is seriously overpriced
They charge what people are willing to pay, like any business.
To me it's an incredible algorithm, and it has no competition... but I'm happy to just look at the gallery.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 02, 2016, 09:22:52 AM
Actually I'm pretty sure almost every single step of what they are doing is available open source. You'd just have to put it all together and try it yourself. The largest problem you'll run into is computing power, but even then, if you use some cloud service, you'll probably pay a lot less. And honestly, I highly doubt their current price point is optimal. There would probably be a whole lot more interest if they just dropped the prices, and it would probably mean lots more customers and overall more profit.
That being said, with each request taking a couple of minutes to fulfill, they actually don't even really want that big an influx of people: it'd drive up the queue length a LOT. (During a recent competition to find a great themed combination, the waiting times went from minutes all the way up to a week.) So maybe that's part of the reason.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 02, 2016, 09:24:32 AM
In other news, Google Magenta has composed its first piece of music:
http://thenextweb.com/google/2016/06/01/lets-talk-song-google-ai-made/
(I wasn't able to quickly find it on one of Google's pages, oddly enough)
It's not amazing, in fact the beginning is kinda boring, but it does get going after a while.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 02, 2016, 10:14:01 AM
thx for keeping us up to date kram.. all this development is mindblowing.


Title: Re: Turning Neural Networks Upside Down
Post by: Max Sinister on June 02, 2016, 10:52:17 PM
Yes, suddenly things seem to go so fast. For years we only had tiny advances (feels like it anyway), but suddenly the development with Deep Learning...


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 03, 2016, 10:05:24 AM
The technology was being worked out for a while now but it has reached a milestone of usability recently. And it will keep expanding quickly before running into its current limitations. Then there will be some stagnation again until those limits are overcome, leading into yet another expansion. That's how technology works :)


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 03, 2016, 10:19:44 AM
And it will keep expanding quickly before running into its current limitations. Then there will be some stagnation again until those limits are overcome, leading into yet another expansion. That's how technology works :)
when the technique of neural learning is used to learn how to improve the technique of neural learning...  :o

it's all very fascinating - but there's also quite some danger there.
for the first time in human history, we really have no way of understanding 100% of the technology we created. you can't 'reverse engineer' why the go-computer learned completely new ways that humans never thought of.
technology starts to live and evolve. and that evolution is in part beyond our influence and knowledge.


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 03, 2016, 10:59:19 AM
I'd say that's largely a question of time: People already are working on the necessary tools to dissect exactly why NNs are so effective.
That's actually not really unusual, though it's less of a tech thing and more one of math. In math we often find loose connections we don't really get, then somebody takes a stab at it and BAM, a deeply rooted connection is found and spawns an entire new field of research, or three at the same time.


Title: Re: Turning Neural Networks Upside Down
Post by: Max Sinister on June 04, 2016, 12:56:55 AM
In the past, NNs had the problem of being black boxes. They work, but nobody knows why. (Pretty much like the natural NNs, our brains.)


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 04, 2016, 11:03:23 AM
They still have that but we are working on windows


Title: Re: Turning Neural Networks Upside Down
Post by: Vega on June 04, 2016, 11:46:02 AM
Self Portrait

(http://nocache-nocookies.digitalgott.com/gallery/19/4939_26_05_16_12_18_35.jpeg)


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on June 04, 2016, 12:55:49 PM
nice result!


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on June 09, 2016, 10:38:11 AM
and it continues: "A.I. Learns Nobel Prize Experiment in Just 1 Hour!"
https://www.youtube.com/watch?v=lJcGzmsLRUo&feature=youtu.be

then there's this cool tool that uses neural learning to calculate handwritten math formulas:
http://mathpix.com/

and deepstyle for videos:
https://www.youtube.com/watch?v=Khuj4ASldmU


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on June 30, 2016, 09:10:26 PM
https://youtu.be/py5byOOHZM8

https://youtu.be/BFdMrDOx_CM


Title: Re: Turning Neural Networks Upside Down
Post by: Chris Thomasson on June 30, 2016, 10:59:51 PM
Thank you for posting these links! :^)


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on July 22, 2016, 07:30:05 PM
https://www.youtube.com/watch?v=uSUOdu_5MPc


Title: Re: Turning Neural Networks Upside Down
Post by: valera_rozuvan on July 27, 2016, 04:52:00 AM
It's 05:45 in the morning, and I just got this amazing idea I simply must share with you guys!

Imagine that you use the progression of generated images of a fractal zoom as a training set for a neural net. Then, at some zoom level (say N), the neural net will be smart enough to come up with the next generated image at zoom level N+1. You will just tell it: I want to zoom in closer at this point. In essence, the neural net will become a fractal generator engine!

After training it, it will be interesting to see how it behaves zooming in at another point location.

Anyone want to work on this together?

PS: I have looked carefully through this thread, and I don't think such an idea has been mentioned before.
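
To make the idea concrete, here is a minimal numpy sketch of what such a training set could look like: render escape-time frames of a Mandelbrot zoom at successive magnifications and pair each frame N with frame N+1 as an (input, target) sample. Everything here (frame size, zoom factor, zoom centre, iteration count) is a made-up illustration, not anyone's actual pipeline:

```python
import numpy as np

def mandelbrot_frame(center, scale, size=64, max_iter=50):
    # Render one escape-time frame: normalised iteration counts in [0, 1).
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    c = center + scale * (x + 1j * y)
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0          # only iterate points that haven't escaped
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] = i
    return counts / max_iter

def zoom_training_pairs(center, n_frames=8, zoom_per_frame=1.5):
    # (frame N, frame N+1) pairs along one zoom path -- the proposed training set.
    frames = [mandelbrot_frame(center, 2.0 / zoom_per_frame ** k)
              for k in range(n_frames)]
    return list(zip(frames[:-1], frames[1:]))

pairs = zoom_training_pairs(center=-0.743643887 + 0.131825904j)
```

A net trained on many such pairs (from many zoom paths) would then be asked to predict the N+1 frame from the N frame, as described above.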


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 27, 2016, 10:21:00 AM
I'm pretty sure that the results of a purely 'dreamed' deep zoom won't match a real deep zoom. It will learn the basic shapes that appear anywhere, but I don't think it will do what we call shapestacking, as this is a process that needs deliberate, conscious input, and the result is only visible in one or two of the hundred pictures of a zoom sequence. Not sufficient to train on, imho.

But that doesn't matter - I would be interested in participating, because I'm curious!
If you have simple step-by-step info on how to set up a standard Windows machine to train, I'll happily let it run for a few weeks. We could generate countless zoom paths using Kalles Fraktaler's "find minibrot" function.


Title: Re: Turning Neural Networks Upside Down
Post by: valera_rozuvan on July 28, 2016, 01:12:09 AM
If you have a simple step by step info how to set a standard windows machine to train, I'll happily let it run for a few weeks.

The problem is that I wasn't able to find a ready-to-use code base for such an experiment. While there are many projects based on neural-network technology, none seem to do out of the box what we want. See for example Awesome Deep Vision, Image Generation (https://github.com/kjw0612/awesome-deep-vision#image-generation).

What I want to do:

(http://nocache-nocookies.digitalgott.com/gallery/19/14000_28_07_16_1_11_34.png)
source: http://www.fractalforums.com/index.php?action=gallery;sa=view;id=19459


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on July 28, 2016, 11:55:34 AM
very interesting, but I know too little about this.
if you find a way, please update me - I'll be happy to let my CPU chew on these for a few days/weeks..


Title: Re: Turning Neural Networks Upside Down
Post by: valera_rozuvan on July 28, 2016, 12:52:18 PM
Just going through some earlier posts. Some work on the front of fractals + neural nets has already been carried out by the forum members.

Art created with Google's Deep Dream (https://github.com/google/deepdream):

1.) The Procession (http://www.fractalforums.com/images-showcase-(rate-my-fractal)/the-procession/)
2.) Neural Dreaming (http://www.fractalforums.com/images-showcase-(rate-my-fractal)/neural-dreaming/)
3.) Neural Dreaming 2 (http://www.fractalforums.com/images-showcase-(rate-my-fractal)/neural-dreaming-2/)
4.) DeepDream Mandelbrot (http://www.fractalforums.com/images-showcase-(rate-my-fractal)/deepdream-mandelbrot/)

Video of a fractal zoom inside a Google's Deep Dream:

1.) Dense DEEPDREAM Mandelbrot zoom - Julia morphing (http://www.fractalforums.com/movies-showcase-(rate-my-movie)/dense-deepdream-mandelbrot-zoom-julia-morphing/)

Some general discussions on fractals and neural nets:

1.) AI fractal zooming? (http://www.fractalforums.com/programming/ai-fractal-zooming/)
2.) Understanding Long Short Time Memory Neural Networks (http://www.fractalforums.com/new-theories-and-research/understanding-long-short-time-memory-neural-networks/)
3.) Using complex neural networks to generate 2D fractals: Thesis Survey (http://www.fractalforums.com/new-theories-and-research/using-complex-neural-networks-to-generate-2d-fractals-thesis-survey/)

Interesting application of neural nets on fractals:

1.) experimenting with mandelbrot set and neural network (http://www.fractalforums.com/programming/experimenting-with-mandelbrot-set-and-neural-network/)
2.) (it works!) Neural Network, Self Organizing Map, and Mandelbrot Set (http://www.fractalforums.com/programming/neural-network-self-organizing-map-and-mandelbrot-set/)


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on August 26, 2016, 08:27:43 PM
https://youtu.be/BsSmBPmPeYQ


Title: Re: Turning Neural Networks Upside Down
Post by: Caleidoscope on August 27, 2016, 10:52:52 AM
I had a lot of fun with a similar online generator. I even made special pictures, and even fractals, to try to create an outcome that I had in mind - that's the most fun, otherwise it gets boring, I think. Like the example in the vid about the cat's eyes and such, that makes it more exciting. Here are some of the pictures I created with dreamdeeply, if anybody is interested ;)
https://www.youtube.com/watch?v=zerXkr4o9xg


Title: Re: Turning Neural Networks Upside Down
Post by: kram1032 on August 29, 2016, 10:13:48 PM
I wish there were a web-based portal where you could not only pick a given trained network to "deepdreamify" your image, but also contribute to a completely custom training set and have that trained up via some kind of *@HOME-like technique.
That way you could broaden the scope of these a lot.
What if I don't just want buildings, human faces or animals?
Most of the process is automatic: you just pick a certain NN architecture, a training set and a bunch of "hyperparameters" which essentially end up fine-tuning how the NN performs - and otherwise it just takes lots and lots of time.

I can only imagine an internet-inspired NN would be mildly disturbing, but still, it should work, right? Even if it was slow, given enough computing power a couple of decent models should easily emerge.
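
The "architecture + training set + hyperparameters + time" recipe can be illustrated with a toy numpy sketch. The numbers below (learning rate, epoch count, hidden-layer size) are hypothetical hyperparameters for a tiny XOR problem, not any real deepdream setup:

```python
import numpy as np

# Hypothetical hyperparameters -- the "knobs" that fine-tune how the NN performs.
hyper = {"learning_rate": 0.5, "epochs": 2000, "hidden_units": 8, "seed": 0}
rng = np.random.default_rng(hyper["seed"])

# Training set: XOR, a classic toy problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Architecture choice: one hidden layer of sigmoid units.
W1 = rng.normal(size=(2, hyper["hidden_units"]))
b1 = np.zeros(hyper["hidden_units"])
W2 = rng.normal(size=(hyper["hidden_units"], 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = np.mean((out - y) ** 2)

# The rest is "lots and lots of time": plain gradient descent.
for _ in range(hyper["epochs"]):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)   # backprop through the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # ...and through the hidden layer
    W2 -= hyper["learning_rate"] * h.T @ d_out
    b2 -= hyper["learning_rate"] * d_out.sum(axis=0)
    W1 -= hyper["learning_rate"] * X.T @ d_h
    b1 -= hyper["learning_rate"] * d_h.sum(axis=0)

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
```

Scaling the same loop up to image-sized networks is exactly what makes GPU time (or an *@HOME-style pool) the limiting resource.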


Title: Re: Turning Neural Networks Upside Down
Post by: Sockratease on November 05, 2016, 10:56:49 AM
Finally got back to playing around with this toy!

Here's a Chaoscope Strange Attractor I took over to that crazy Deep Dream thingy.

(http://nocache-nocookies.digitalgott.com/gallery/19/162_05_11_16_10_50_48.png)

I posted it in the gallery (http://www.fractalforums.com/index.php?action=gallery;sa=view;id=19749) with a Frank Zappa Tribute title and description   O0


Title: Re: Turning Neural Networks Upside Down
Post by: tit_toinou on November 05, 2016, 06:24:12 PM

and deepstyle for videos:
https://www.youtube.com/watch?v=Khuj4ASldmU

I've used this technique, if anyone is interested! (please watch in 1080p or 4K)

https://www.youtube.com/watch?v=2YRVt80g2Ek&list=PLbRiR5PpVynZX5qPEVZ4agNjxfCjP4nER&index=2


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on November 05, 2016, 10:15:49 PM
nice!
I am indeed interested in how you did this in 4K! I thought even the smallest resolutions like 640x480 require 16 GB+ RAM on your graphics card, and since this grows steeply with resolution, 4K seemed totally out of reach for the next years. And that was for a single frame, not 6 minutes!
Has anything changed?!?


Title: Re: Turning Neural Networks Upside Down
Post by: tit_toinou on November 06, 2016, 06:11:37 PM
Yes, things changed (it's more like 4K would take 20 GB of RAM, I suppose).
I have a GTX 980 (4 GB VRAM) and can go up to about 1.1 megapixels (width*height), so I just divided my 4K frame into 4x4 = 16 patches and blended the results ;)
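
The patch trick can be sketched in numpy: cut the frame into a 4x4 grid of overlapping tiles (so each tile fits in VRAM), style each tile independently, then feather the overlaps with a linear ramp when pasting them back. This is my own greyscale-image sketch of the general idea, not tit_toinou's actual code; the overlap width is a made-up number:

```python
import numpy as np

def split_into_tiles(img, grid=4, overlap=32):
    # Cut a 2D image into a grid x grid set of overlapping tiles.
    h, w = img.shape
    tiles = []
    for gy in range(grid):
        for gx in range(grid):
            y0 = max(gy * h // grid - overlap, 0)
            y1 = min((gy + 1) * h // grid + overlap, h)
            x0 = max(gx * w // grid - overlap, 0)
            x1 = min((gx + 1) * w // grid + overlap, w)
            tiles.append(((y0, y1, x0, x1), img[y0:y1, x0:x1].copy()))
    return tiles

def blend_tiles(tiles, shape):
    # Reassemble (possibly styled) tiles, feathering overlaps with linear ramps.
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape, dtype=np.float64)
    for (y0, y1, x0, x1), tile in tiles:
        th, tw = tile.shape
        # Weight falls off linearly towards each tile edge (never exactly zero).
        wy = np.minimum(np.arange(th) + 1, np.arange(th)[::-1] + 1).astype(float)
        wx = np.minimum(np.arange(tw) + 1, np.arange(tw)[::-1] + 1).astype(float)
        w2d = np.outer(wy / wy.max(), wx / wx.max())
        out[y0:y1, x0:x1] += tile * w2d
        weight[y0:y1, x0:x1] += w2d
    return out / weight

img = np.random.default_rng(1).random((256, 256))
tiles = split_into_tiles(img)
rebuilt = blend_tiles(tiles, img.shape)  # identity "styling" reconstructs img
```

In the real pipeline each tile would be run through the style-transfer network before `blend_tiles`; the ramp weighting hides the seams between neighbouring patches.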

For the 6 minutes question: there's a new technique, Fast Style Transfer, in which you train a neural network to produce a specific style (this can last multiple days) and then use the result to apply that style to an image really fast. And there's now an even newer technique where you can train multiple styles at the same time and afterwards interpolate between styles with given interpolation parameters.


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on November 08, 2016, 01:27:35 PM
thanks a lot for that update.
The speed of development of all this is awesome. I'm absolutely convinced that we are very close to some major surprises and fundamental changes to the way we're living. It's already starting.. :) http://futurism.com/robotintelligence/

exciting times we have the privilege to live in!


Title: Re: Turning Neural Networks Upside Down
Post by: TheRedshiftRider on July 03, 2017, 05:56:07 PM
https://youtu.be/S5AeqYfcb7w

Found this video. Somewhat related; it's more about how we look at how computers and algorithms look at faces, and how some people are weirded out by it.


Title: Re: Turning Neural Networks Upside Down
Post by: gagardzo on August 13, 2017, 12:31:26 PM
I've used this technique if anyone is interested ! (please watch in 1080p or 4K)

https://www.youtube.com/watch?v=2YRVt80g2Ek&list=PLbRiR5PpVynZX5qPEVZ4agNjxfCjP4nER&index=2

Awesome job, thanks


Title: Re: Turning Neural Networks Upside Down
Post by: Chillheimer on August 13, 2017, 12:51:27 PM
another masterpiece by julius horsthuis, using deepstyle transfer of amsterdam aerial images on mandelbox:
https://vimeo.com/229380883


Title: Re: Turning Neural Networks Upside Down
Post by: tit_toinou on August 13, 2017, 03:56:52 PM
WOOOOOOW
Can't see the link so here it is : https://vimeo.com/229380883 (https://vimeo.com/229380883)

I really don't understand how he uses the neural style algo to generate this. It's both temporally consistent AND 3D consistent (or it feels like it).