Topic: Turning Neural Networks Upside Down
kram1032 « on: June 20, 2015, 11:53:04 AM »

http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html

First you train a neural network to recognize certain images; then you feed it arbitrary images and let it interpret them, enhancing whatever it sees. You end up with strikingly modified imagery.



If you instead start with noise and zoom in a little after each iteration, you end up with very fractal-y images, all inspired by the things the network knows about.



Click the link on top for more information as well as a full gallery of images generated this way!

This one's a video showing off the interpret-and-zoom technique, using an image of clouds as the base. Perhaps watch it at lower speed.
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ/photo/AF1QipOlM1yfMIV0guS4bV9OHIvPmdZcCngCUqpMiS9U?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
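In rough code, the interpret-and-zoom loop comes down to something like the following minimal Python sketch. Note that amplify_layer is a hypothetical placeholder for the network step that enhances what a chosen layer detects; the blog post publishes no code, so this only illustrates the loop structure.

Code:
# Minimal sketch of the interpret-and-zoom loop described above.
# Assumption: the image is an RGB float numpy array; `amplify_layer`
# is a hypothetical stand-in for the network's enhancement step.
import numpy as np
from scipy.ndimage import zoom

def amplify_layer(image):
    # Placeholder: the real system would run the image through the network
    # and nudge the pixels to strengthen a chosen layer's activations.
    return image

def dream_zoom(image, steps=100, zoom_factor=1.05):
    h, w, _ = image.shape
    for _ in range(steps):
        image = amplify_layer(image)                      # "interpret"
        image = zoom(image, (zoom_factor, zoom_factor, 1), order=1)
        zh, zw, _ = image.shape                           # crop the center
        top, left = (zh - h) // 2, (zw - w) // 2          # back to the old
        image = image[top:top + h, left:left + w]         # size: a small zoom
    return image

# Starting from pure noise, as the blog post describes:
result = dream_zoom(np.random.rand(256, 256, 3))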
Chillheimer « Reply #1 on: June 20, 2015, 12:15:19 PM »

So computers now do art.
And I actually like it!
Crazy... welcome to the 21st century...
kram1032 « Reply #2 on: June 20, 2015, 12:54:32 PM »

Welcome to the 3rd millennium ;)
cKleinhuis « Reply #3 on: June 20, 2015, 01:09:33 PM »

I love it. In a certain way it visualises how a neural network works, which is nicely described in the blog post. It reminds me of some of the images billtavis has done for the compo. As they describe, they usually have no idea how the cells in the network are connected and why, so this can provide insights. Very interesting and cool!
Chillheimer « Reply #4 on: June 20, 2015, 10:27:21 PM »

This is SOOO incredible! It's taking me some time to really grasp and appreciate the scale of what this means!
It's really watching machines think! Trained by recursive training similar to what our brains go through as we age.
(And how could it be otherwise? The results show fractal patterns....)

"If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images:"

And out comes something like the attached image?! From random noise?!?!
This is what a machine reads into noise?! Random fluctuations?

If this is not a "thought".. then I don't know what a "thought" is.

So.... unbelievable...!!


PS: Found a link to the final picture, which is the most awesome one for me, as it came from nothing: a blank canvas.
https://lh3.googleusercontent.com/-PcD4unsMEpc/VYKZDpoF1SI/AAAAAAAAjp8/lSq5R5o4ScI/w2786-h1296/Research_Blog__Inceptionism__Going_Deeper_into_Neural_Networks.jpg


[Attachment: awesome-neural-networks.JPG, 788x477]
youhn « Reply #5 on: June 21, 2015, 10:06:17 PM »

Just upvoting that image! I've been fascinated by it as well, knowing how it came to be.

Repeating Zooming Self-Similar Thumbs Up, by Craig
Syntopia « Reply #6 on: June 21, 2015, 11:39:14 PM »

Extremely fascinating. I've been browsing the papers they link to in the blog post, but I don't get their approach.

But I have found that they use a GoogLeNet 'Inception' convolutional neural network (with 22 layers!) trained on ImageNet (the later examples on the Places data sets). It is possible to download it - fully trained - from here: https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet - and run it using the free Caffe framework (CPU & GPU).

But that only allows you to classify images (forward inference), not to go backwards. The blog post is not very clear on how this is achieved:

"In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected"

Does this mean that they are using back-propagation to adjust the original input vector? The papers referred to in the blog post [1]-[4] are much more complicated, but seem to generate much worse images.
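If that guess is right, a minimal sketch with the pycaffe interface might look like the following. This is not confirmed as Google's method; 'data' and 'inception_4c/output' are blob names from the bvlc_googlenet definition linked above, and the image must already be preprocessed to the net's input shape.

Code:
# Sketch of the back-propagation guess: gradient ascent on the input
# pixels with the network weights held fixed. Assumes `net` is a
# caffe.Net loaded from the bvlc_googlenet files linked above.
import numpy as np

def ascend_step(net, image, layer='inception_4c/output', step_size=1.5):
    src = net.blobs['data']          # the input blob
    src.data[0] = image              # image preprocessed to the net's shape
    net.forward(end=layer)           # forward pass up to the chosen layer
    dst = net.blobs[layer]
    dst.diff[:] = dst.data           # objective: boost the layer's own
                                     # activations (maximise their L2 norm)
    net.backward(start=layer)        # back-propagate down to the pixels
    grad = src.diff[0]
    return image + step_size * grad / (np.abs(grad).mean() + 1e-8)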

My favorite image is this one:


Btw: there is also a video, which I didn't notice at first:
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ/photo/AF1QipOlM1yfMIV0guS4bV9OHIvPmdZcCngCUqpMiS9U?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
3dickulus « Reply #7 on: June 22, 2015, 03:14:43 AM »

Incredible images, fascinating details.
kram1032 « Reply #8 on: June 22, 2015, 08:58:15 AM »

I'm pretty sure what they are doing is taking a chosen fixed layer's output, superimposing it on the original image, and then repeating the process. Each layer stores more abstract pieces of the image: layer 1 stores only line segments and dots; layer 2 begins storing curve segments; higher layers refine curves and can store textures, individual body parts, and eventually even entire objects.

They also mention that this only works together with a constraint that enforces correlation between neighboring pixels; otherwise you probably just get a noisy jumble. This is the part that's less clear to me: why it's necessary is not so surprising, but I'm not sure how to do it.
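For what it's worth, one common trick for enforcing that kind of neighbor correlation (no idea whether it's what Google did) is to smooth the gradient, or the image itself, a little at each step:

Code:
# One guess at the neighbor-correlation constraint: blur the gradient a
# little before each update, so single pixels can't change independently
# of their neighbors. Not confirmed as Google's method.
from scipy.ndimage import gaussian_filter

def smoothed_update(image, grad, step_size=1.5, sigma=0.5):
    grad = gaussian_filter(grad, sigma=(sigma, sigma, 0))  # blur x/y only
    return image + step_size * grad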
Chillheimer « Reply #9 on: June 22, 2015, 11:37:54 AM »

Woohooo, found more pictures:
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

Hey, does anyone know how to download a photo at its highest resolution? They have the pic I posted at a larger resolution, 1.5 MB, but I can only download it at 700 KB..



Hmmm.. I wonder what would come out if you did this with pictures of the Mandelbrot set or Mandelbulb3D stuff....
I really hope they release this as a little tool or a Google beta thing.. ;)

If anyone finds out anything more about it, or just more pictures, please share here!


(This seems so important that I made the topic sticky.)
Syntopia « Reply #10 on: June 22, 2015, 09:14:14 PM »

Quote
I'm pretty sure what they are doing is taking a chosen fixed layer's output, superimposing it on the original image, and then repeating the process. Each layer stores more abstract pieces of the image: layer 1 stores only line segments and dots; layer 2 begins storing curve segments; higher layers refine curves and can store textures, individual body parts, and eventually even entire objects.

It is a convolutional net (http://cs231n.github.io/convolutional-networks/), so the output of a layer is not an image (some of the first layers may have a spatial structure, but for instance the final layer will output a classification vector with 1000 entries). I imagine they must be sending information backwards through the network to arrive at something in image space. That seems to be the approach taken in the papers they cite (where they invert the networks).

Quote
They also mention that this only works together with a constraint that enforces correlation between neighboring pixels; otherwise you probably just get a noisy jumble. This is the part that's less clear to me: why it's necessary is not so surprising, but I'm not sure how to do it.

That is discussed in the papers they reference. Ref [2] (http://arxiv.org/pdf/1412.0035v1.pdf) uses a 'total variation' regulariser as a natural-image-prior approximation to ensure correlation. Ref [3] uses another approach, whereby the natural image prior is trained on the images in the training set. But I don't think that is the approach Google used. Their images seem to be different, and much more interesting than those in the references.
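For the curious, the 'total variation' regulariser in ref [2] is easy to write down. A small sketch (with beta = 2 as a typical choice; minimising this term keeps neighboring pixels correlated):

Code:
# Sketch of the 'total variation' regulariser from ref [2]: it sums a
# power of the differences between neighboring pixels, so using it as a
# penalty enforces local correlation (a crude natural-image prior).
import numpy as np

def total_variation(image, beta=2.0):
    dx = image[:, 1:, :] - image[:, :-1, :]    # horizontal differences
    dy = image[1:, :, :] - image[:-1, :, :]    # vertical differences
    # trim so both difference maps cover the same pixels:
    return np.sum((dx[:-1] ** 2 + dy[:, :-1] ** 2) ** (beta / 2.0))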

kram1032 « Reply #11 on: June 24, 2015, 10:20:41 PM »

As follow-up work, yet another neural network was applied to the output. This one is supposed to describe a scene in a sentence.
Here are the results:
http://www.cs.toronto.edu/~rkiros/inceptionism_captions.html
and here's how it works:
http://kelvinxu.github.io/projects/capgen.html
Clearly this tech still has a long way to go, but it's pretty darn impressive already.
(Also it's weirdly in love with clocks)
(Also it's able to see the forest AND the tree)
(Also it does have a rudimentary sense for what fractals are.)
(Also, for those who are familiar, I'm weirdly reminded of legendary artifacts in Dwarf Fortress.)
Chillheimer « Reply #12 on: June 24, 2015, 11:10:33 PM »

Quote
(Also, for those who are familiar, I'm weirdly reminded of legendary artifacts in Dwarf Fortress.)
Bwahaha, that just made my day! :)

Edit: Hm, I never thought of Dwarf Fortress as using fractal/procedural calculations to generate everything. Of course!! That explains why it was able to "steal" half a year of my life.. ;)
Wow, I didn't expect that they are still working on it! I left at version 0.28.. maybe I should... just once... uh oh.. better turn off the computer!
phtolo « Reply #13 on: June 24, 2015, 11:31:12 PM »

There was a free online course describing back-propagation a few years ago (https://www.coursera.org/course/neuralnets). Not sure if the material is still available through their site.

Among other things, a wake-sleep algorithm was mentioned in one of the lectures: first some iterations of a wake phase where you only go in one direction, after that some iterations of a sleep phase where you run the network backwards with no input data, and then you repeat the process.

You can read the input channels during the sleep phase, and it is almost like looking at what the model is dreaming.
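A toy sketch of that wake-sleep scheme, using the textbook one-hidden-layer Helmholtz machine. All sizes, learning rates, and data here are made up for illustration, and this is the standard formulation rather than necessarily exactly what the course presented:

Code:
# Toy wake-sleep sketch: a one-hidden-layer Helmholtz machine with
# binary units. Wake trains the generative weights; sleep trains the
# recognition weights on "dreamed" data.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 16, 8, 0.05

R = rng.normal(0, 0.1, (n_hid, n_vis))  # recognition: visible -> hidden
G = rng.normal(0, 0.1, (n_vis, n_hid))  # generative:  hidden -> visible
b = np.zeros(n_hid)                     # generative prior over hidden units

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

def wake(v):
    # Recognise v, then train the generative side to reconstruct it.
    global G, b
    h = sample(sigmoid(R @ v))
    G += lr * np.outer(v - sigmoid(G @ h), h)  # delta rule, reconstruction
    b += lr * (h - sigmoid(b))                 # move the prior toward h

def sleep():
    # Dream a fantasy v from the prior, then train recognition on it.
    global R
    h = sample(sigmoid(b))
    v = sample(sigmoid(G @ h))                 # reading v here is "looking
    R += lr * np.outer(h - sigmoid(R @ v), v)  # at what the model dreams"
    return v

for v in sample(np.full((100, n_vis), 0.3)):   # toy binary training data
    wake(v)
    dream = sleep()                            # inspect `dream` to watch it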
kram1032 « Reply #14 on: June 24, 2015, 11:39:21 PM »

There are various Coursera courses on AI https://www.coursera.org/courses?query=AI&categories=cs-ai
There's also https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x
and https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
and probably many more.

Chillheimer, there was an absolutely epic fractal bug in Dwarf Fortress: http://dwarffortresswiki.org/index.php/Planepacked