Syntopia
« Reply #45 on: July 04, 2015, 02:58:00 AM »
I think that is the way it works already - the above image was run for 30 or so iterations. I just stopped it at some point.
3dickulus
« Reply #46 on: July 04, 2015, 03:02:43 AM »
hmmm.. like accumulating subframes in Fragmentarium? but with a twist
mclarekin
« Reply #47 on: July 04, 2015, 04:28:24 AM »
This is also amazing.
@chillheimer: that is freaky, love it!!
TheRedshiftRider
« Reply #48 on: July 04, 2015, 09:41:37 AM »
That looks amazing. I've tried this myself, but the server doesn't respond.
Motivation is like a salt, once it has been dissolved it can react with things it comes into contact with to form something interesting.
kram1032
« Reply #49 on: July 04, 2015, 09:46:30 AM »
By the way, this method isn't just not frame-to-frame coherent, it's also not frame-self coherent: if you run the network a second time, it'll find different variations. The clearest features will be very similar but never quite the same, while the undetailed stuff (like whatever the network puts in place of a pure-coloured region) will tend to be very different.

Therefore I propose, if it isn't too much computational effort (it likely is), to generate a bunch of versions of the same image at each iteration, average them together, and use that average as input for the next generation. That way the noisier, less suggestive bits won't matter as much, while the more consistent bits will hopefully become all the clearer. Of course, if you like the addition of all that noise, that's fine too, but I'd like to see a clearer version as well if that's possible.

Using the above-linked website for this is tediously slow, though. I cranked up the timeout in Firefox tenfold so I wouldn't drop the connection so frequently (in Chrome that setting apparently doesn't even exist), but it's a long wait per iteration. Entirely impractical.
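The proposed average-then-iterate loop can be sketched in a few lines. This is only a toy sketch: the `dream` function below is a hypothetical stand-in for one pass through the network (here it just adds random perturbations, since the real network isn't available in a snippet), and the sample counts are illustrative choices, not anything from the actual setup discussed here.

```python
import numpy as np

def dream(image, rng):
    """Hypothetical stand-in for one pass through the network.
    The real network varies from run to run, so this placeholder
    just adds small random perturbations to its input."""
    return np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)

def averaged_iteration(image, n_samples=16, rng=None):
    """Run the stochastic step n_samples times on the *same* input and
    average the results: run-to-run noise shrinks, while features that
    appear consistently across runs survive the average."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = np.stack([dream(image, rng) for _ in range(n_samples)])
    return samples.mean(axis=0)

# Outer loop: the average of one generation is the input to the next.
img = np.full((64, 64, 3), 0.5)   # start from a flat gray image
for _ in range(3):
    img = averaged_iteration(img)
```

The key design point is that all samples in one generation start from the identical input, so their differences are pure run-to-run noise and cancel in the mean.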
3dickulus
« Reply #51 on: July 04, 2015, 11:44:24 PM »
> hmmm.. like accumulating subframes in Fragmentarium? but with a twist

That's one heck of a twist. One trick for demystifying a CNN is to choose a neuron in a trained CNN and attempt to generate an image that causes the neuron to activate strongly. We initialize the image with random noise, propagate the image forward through the network to compute the activation of the target neuron, then propagate the activation of the neuron backward through the network to compute an update direction for the image. We use this information to update the image, and repeat the process until convergence.
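That noise-to-image loop can be sketched with a toy example. Everything here is illustrative: a real CNN neuron needs the whole network plus backpropagation in a framework, so this sketch uses a single linear "neuron" whose gradient with respect to the input is simply its weight vector, plus a norm constraint so gradient ascent stays bounded.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "trained" neuron: a single linear unit with fixed weights.
# (The forward/backward structure is the same idea as for a full CNN.)
w = rng.normal(size=64)

def activation(x):
    """Forward pass: how strongly input x excites the neuron."""
    return float(w @ x)

# Initialize the "image" with random noise, then do gradient ascent on it.
x = rng.normal(size=64)
x /= np.linalg.norm(x)
for _ in range(100):
    grad = w                      # d(activation)/dx for a linear unit is w
    x = x + 0.1 * grad            # step the input toward higher activation
    x /= np.linalg.norm(x)        # norm constraint keeps the input bounded

# x converges toward w / ||w||: the pattern this neuron "looks for".
```

The norm constraint stands in for the regularization a real implementation needs, since an unconstrained linear activation can be made arbitrarily large.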
kram1032
« Reply #52 on: July 06, 2015, 01:20:51 PM »
I tried the averaging technique on an image of a bubble. Here are three iterations of averaging 5 images and feeding them into that website. (Painfully slow; I wish there were an easy-to-use Windows implementation. Also, JPEG artifacts became visible, so it must compress at fairly low quality.)

And here are the five versions of the third iteration that were then averaged together to give the last of the above images. Check out the bottom-right and top-right corners for differences. The largest differences appear in the black region, which is inherently featureless and thus technically only comprised of noise.

As executed, the technique is flawed: the website outputs already largely converged images, which can look very different from each other and already have a lot of noise in the black regions, and 5 samples are probably not particularly great. You'd probably want more like 16+ samples, applied at each iteration step. The former is impractical with this website and the latter is impossible. So if one of you who already has a Linux implementation up and running could try doing that, that'd be cool.
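The "5 samples vs. 16+" point can be quantified: averaging N independent noisy renders shrinks the noise standard deviation by a factor of √N, so 16 samples are about 1.8x cleaner than 5. A toy demonstration, assuming an independent-noise model (the 0.1 noise level and image size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
true_image = np.full((32, 32), 0.5)    # the "consistent" content

def noisy_render():
    """Assumed model: each run adds independent zero-mean noise."""
    return true_image + rng.normal(0.0, 0.1, true_image.shape)

# Averaging N renders shrinks the residual noise by a factor of sqrt(N).
for n in (5, 16, 64):
    avg = np.mean([noisy_render() for _ in range(n)], axis=0)
    err = np.abs(avg - true_image).mean()
    print(f"{n:3d} samples -> mean abs error {err:.4f}")
```

This is only a statistical sketch; the real failure mode described above (each sample being an already-converged, mutually inconsistent image) violates the independence assumption, which is why averaging converged outputs works worse than averaging at every iteration step.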
cKleinhuis
« Reply #54 on: July 06, 2015, 04:08:37 PM »
Just for clarification, how does the zooming work? Do you feed the algorithm just that part of the original image, or do you re-feed the part of the image it has generated?
---
divide and conquer - iterate and rule - chaos is No random!
KRAFTWERK
« Reply #55 on: July 06, 2015, 04:12:04 PM »
> just for clarification, how does the zooming work ? do you feed the algorithm with just that part of the original image, or do you re-feed the part of the image it has generated?

In my case I let it dream on the original image, then fed it a part of the image that came out of the "dream", and so on... Why are there several threads about this topic, by the way? Very fractal...
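That feed-the-output-back-in zoom loop might look like the following sketch. The `dream` function is a hypothetical placeholder (the real network call isn't shown); `zoom_step` crops the central fraction of the previous output and scales it back up with nearest-neighbour sampling, so each frame is generated from the network's own previous output rather than from the original picture.

```python
import numpy as np

def dream(image):
    """Hypothetical placeholder for one pass through the network
    (identity here; the real call isn't shown)."""
    return image

def zoom_step(image, zoom=0.8):
    """Crop the central `zoom` fraction of the dreamed frame and scale it
    back up to full size (nearest-neighbour), so the network's *output*
    is what gets fed back in, not the original picture."""
    h, w = image.shape[:2]
    ch, cw = int(h * zoom), int(w * zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h      # nearest-neighbour row indices
    cols = np.arange(w) * cw // w      # nearest-neighbour column indices
    return crop[rows][:, cols]

frame = np.random.default_rng(0).random((100, 100, 3))
for _ in range(5):                     # each frame dreams on the last output
    frame = zoom_step(dream(frame))
```

Because the network's hallucinations are re-fed rather than the source pixels, details compound from frame to frame, which is what produces the endless-zoom effect.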
kram1032
« Reply #56 on: July 06, 2015, 04:46:54 PM »
I counted three topics, but two of them are more specific: one is a gallery post and the other is a repost of a video already found in this thread. In that zoom image I loved the large amount of stuff before you zoomed into the, uh, "face"; after that it became a little boring. This AI likes eyes a bit too much, presumably because they are such a prominent feature in real life too.
cKleinhuis
« Reply #57 on: July 06, 2015, 04:51:42 PM »
As said in the other threads, this is a new topic, definitely related to fractals; in fact it is an applied technique of chaos theory. Many people confuse Mandelbrot image renderings with the underlying concepts. The stuff we call fractals here in the forums, and most likely around the world, means 2D and more recently 3D fractals, but all of those images are reduced to the simplest things that create mathematically chaotic behaviour, and we enjoy that and see it as beautiful. Nevertheless the underlying concept is far wider than many people believe. We do ground research here: it is as if we were letting apples fall in a vacuum, enjoying the results Newton brought us, and playing the whole day letting other stuff fall, like feathers or giant rocks. But that is just playing at the very base. The things that grow out of Newton's principles, like building rockets or travelling through space, are what is actually built on those simple ground thoughts; similar to what we encounter now!
KRAFTWERK
« Reply #58 on: July 06, 2015, 08:03:02 PM »
> I counted three topics, but two of them are more specific. One is a gallery post and the other is a repost of a video already found in this thread. In that zoom image I loved the large amount of stuff before you zoomed into the, uh, "face". After that it became a little boring. This AI likes eyes a bit too much.

All right, two threads, but one would have been enough... And yes, I am getting a bit bored with pagodas, dogs and eyes. Well spoken, Christian, I agree 100%!
youhn
« Reply #59 on: July 06, 2015, 08:57:39 PM »
I would like to turn it upside down in another way. Right now it seems to fill in details, zooming into complexity. But what about going the other way and getting the big picture, the abstraction/connecting part of image recognition?