Chillheimer
« Reply #1 on: June 20, 2015, 12:15:19 PM »
So computers now do art, and I actually like it! Crazy... welcome to the 21st century...
--- Fractals - add some Chaos to your life and put the world in order. ---

kram1032
« Reply #2 on: June 20, 2015, 12:54:32 PM »
Welcome to the 3rd millennium.

cKleinhuis
« Reply #3 on: June 20, 2015, 01:09:33 PM »
I love it! In a certain way it is a way to visualise how a neural network works, which is nicely described in their write-up. It reminds me of some of the images that billtavis made for the compo. As they describe, they usually have no idea how the cells in the network are connected and why, so this can provide insights. Very interesting and cool!
---
divide and conquer - iterate and rule - chaos is No random!

Chillheimer
« Reply #4 on: June 20, 2015, 10:27:21 PM »
This is SOOO incredible! It takes me some time to really grasp and appreciate the scale of what this means. It's really watching machines think! Trained by a recursive training similar to the one our brains go through as we age (and how could it be different; the results show fractal patterns...).

"If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images:"

And out comes something like the attached image?! From random noise?!?! This is what a machine interprets into noise, into random fluctuations?! If this is not a "thought", then I don't know what a "thought" is. So... unbelievable!

PS: found a link to the final picture, which for me is the most awesome of all, as it came from nothing, a blank canvas: https://lh3.googleusercontent.com/-PcD4unsMEpc/VYKZDpoF1SI/AAAAAAAAjp8/lSq5R5o4ScI/w2786-h1296/Research_Blog__Inceptionism__Going_Deeper_into_Neural_Networks.jpg
« Last Edit: June 20, 2015, 11:31:37 PM by Chillheimer »
youhn
« Reply #5 on: June 21, 2015, 10:06:17 PM »
Just upvoting that image! I've been fascinated by it as well, knowing how it came to be.

3dickulus
« Reply #7 on: June 22, 2015, 03:14:43 AM »
Incredible images, fascinating details.

kram1032
« Reply #8 on: June 22, 2015, 08:58:15 AM »
I'm pretty sure what they are doing is that they take a fixed, chosen layer's output and superimpose it on the original image, then repeat the process. Each layer stores more abstract pieces of the image: layer 1 only stores line segments and dots, layer 2 begins storing curve segments, and higher layers refine curves and can store textures, individual body parts, and eventually even entire objects.
They also mention that this only works together with a constraint that enforces correlation between neighboring pixels; otherwise you probably just get a noisy jumble. This part is the one less clear to me. Why it's necessary is not so surprising, but I'm not sure how to do it.
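The amplify-and-zoom loop from the blog quote can be sketched in a few lines. This is NOT Google's code: I'm standing in for a trained convolutional layer with a fixed random linear map `W`, and all names and sizes here are my own assumptions, purely so the sketch runs stand-alone. The real system would compute the gradient of a chosen layer's activations through the whole network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer of a trained network: a fixed random
# linear map W. (Assumption: in the real system this would be a chosen
# convolutional layer; W is random here only so the sketch runs alone.)
W = rng.normal(size=(64, 64))

def amplify(x, steps=20, lr=0.01):
    """Gradient ascent on ||W x||^2: nudge the input toward whatever
    the 'layer' responds to most strongly."""
    x = x.copy()
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ x)                    # d/dx ||Wx||^2
        x += lr * grad / (np.abs(grad).max() + 1e-8)  # normalized step
    return x

def zoom(x, factor=0.9):
    """Crop the centre and stretch it back to size (nearest neighbour)."""
    n = x.shape[0]
    m = int(n * factor)
    lo = (n - m) // 2
    crop = x[lo:lo + m]
    idx = np.clip(np.arange(n) * m // n, 0, m - 1)
    return crop[idx]

# Start from pure noise, then amplify / zoom / repeat, giving the
# "endless stream of new impressions" the blog post describes.
img = rng.normal(size=64)
for _ in range(5):
    img = zoom(amplify(img))
```

The key point the sketch shows is that the feedback loop alone (response, amplification, re-feeding) already generates structure from noise; the trained network just makes that structure look like dogs and pagodas instead of random stripes.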

Chillheimer
« Reply #9 on: June 22, 2015, 11:37:54 AM »
Woohooo, found more pictures: https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

Hey, does anyone know how to download a photo in the highest resolution? They have the pic I posted at a large resolution with 1.5 MB, but I can only download it at 700 KB...

Hmm, I wonder what would come out if you did this with pictures of the Mandelbrot set or Mandelbulb3D stuff... I really hope they release this as a little tool or a Google beta thing. If anyone finds out anything more about it, or just more pictures, please share here! (This seems so important, made it sticky.)
« Last Edit: June 22, 2015, 01:02:27 PM by Chillheimer »

Syntopia
« Reply #10 on: June 22, 2015, 09:14:14 PM »
Quote from kram1032: "I'm pretty sure what they are doing is, that they take a fixed chosen layer's output and superimpose that on the original image, then repeat the process. [...]"

It is a convolutional net (http://cs231n.github.io/convolutional-networks/), so the output of a layer is not an image (some of the first layers may have a spatial structure, but the final layer, for instance, will output a classification vector with 1000 entries). I imagine they must be sending information backwards through the network to arrive at something in image space. That seems to be the approach taken in the papers they cite (where they invert the networks).

Quote from kram1032: "They also mention that this only works together with a constraint that enforces correlation between neighboring pixels. [...]"

That is discussed in the papers they reference. Ref [2] (http://arxiv.org/pdf/1412.0035v1.pdf) uses a 'total variation' regulariser as an approximation to a natural-image prior, to ensure correlation. Ref [3] uses another approach, in which the natural-image prior is trained on the images in the training set. But I don't think that is the approach Google used: their images seem different from, and much more interesting than, those in the references.
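The 'total variation' regulariser from ref [2] is simple enough to sketch. This is just the standard anisotropic form, not the exact code from the paper, and the variable names are mine:

```python
import numpy as np

def tv(img):
    """Anisotropic total variation: summed absolute differences between
    vertically and horizontally neighbouring pixels. Using it as a
    penalty favours locally smooth images, which is how it acts as a
    rough natural-image prior."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

rng = np.random.default_rng(1)
noise = rng.normal(size=(32, 32))
smooth = np.full((32, 32), noise.mean())

# A constant image has zero total variation, while pure noise has a
# large one, so minimising tv() alongside the network objective
# suppresses the "noisy jumble" kram1032 describes.
```

In the inversion setup one would minimise something like `loss = network_mismatch + lambda * tv(img)` over the pixels, so the optimiser can't satisfy the network with high-frequency garbage.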

kram1032
« Reply #11 on: June 24, 2015, 10:20:41 PM »
As a follow-up work, yet another neural network was applied to the output. This one is supposed to describe a scene in a sentence. Here are the results: http://www.cs.toronto.edu/~rkiros/inceptionism_captions.html

And here's how it works: http://kelvinxu.github.io/projects/capgen.html

Clearly this tech still has a long way to go, but it's pretty darn impressive already. (Also, it's weirdly in love with clocks.) (Also, it's able to see the forest AND the trees.) (Also, it does have a rudimentary sense of what fractals are.) (Also, for those who are familiar, I'm weirdly reminded of legendary artifacts in Dwarf Fortress.)
« Last Edit: June 24, 2015, 10:31:26 PM by kram1032 »

Chillheimer
« Reply #12 on: June 24, 2015, 11:10:33 PM »
Quote from kram1032: "I'm weirdly reminded of legendary artifacts in Dwarf Fortress."

Bwahaha, that just made my day!

Edit: hm, I never thought of Dwarf Fortress as using fractal/procedural calculations to generate everything. Of course!! That explains why it was able to "steal" half a year of my life... Wow, I didn't expect that they are still working on it! I left at version 0.28... maybe I should... just once... uh-oh, better turn off the computer!
« Last Edit: June 24, 2015, 11:22:06 PM by Chillheimer »

phtolo
« Reply #13 on: June 24, 2015, 11:31:12 PM »
There was a free online course describing back-propagation a few years ago (https://www.coursera.org/course/neuralnets). Not sure if the material is still available through their site. Among other things, a wake-sleep algorithm was mentioned in one of the lectures: first some iterations of the wake phase, where you only go in one direction; after that, some iterations of the sleep phase, where you back-propagate with no input data; then the process repeats. You can read the input channels during the sleep phase, and it will almost be like looking at what the model is dreaming.
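The alternation described above is Hinton's wake-sleep rule. Here is a minimal single-layer numpy sketch of the idea; all the sizes, names and the coin-flip top-level prior are my own simplifications (the lectures cover proper multi-layer belief nets), so treat it as an illustration, not the course's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 8, 4, 0.05
R = rng.normal(scale=0.1, size=(n_hid, n_vis))  # recognition weights (bottom-up)
G = rng.normal(scale=0.1, size=(n_vis, n_hid))  # generative weights (top-down)
data = (rng.random((50, n_vis)) < 0.5).astype(float)

for epoch in range(10):
    # Wake phase: recognise real data bottom-up, and train the
    # generative weights to reconstruct what recognition saw.
    for v in data:
        h = (sigmoid(R @ v) > rng.random(n_hid)).astype(float)
        G += lr * np.outer(v - sigmoid(G @ h), h)
    # Sleep phase: no input data. Fantasise a hidden state (top-level
    # prior simplified to a fair coin here), generate a "dream" in
    # visible space, and train the recognition weights on the dream.
    for _ in range(len(data)):
        h = (rng.random(n_hid) < 0.5).astype(float)
        v_dream = (sigmoid(G @ h) > rng.random(n_vis)).astype(float)
        R += lr * np.outer(h - sigmoid(R @ v_dream), v_dream)

# Reading v_dream during the sleep phase is the "watching the model
# dream" part: it is what the generative side produces with no input.
```

The connection to the thread topic is direct: the sleep-phase fantasies play the same role as the Inceptionism images, visible-space samples of what the model believes in.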

kram1032
« Reply #14 on: June 24, 2015, 11:39:21 PM »