With just a little bit of mucking around, mandelbulber runs just fine on
Amazon EC2 instances. This means you can have as many machines as you want chewing away at your latest animation, while also chewing away at your wallet. While each machine has (at the moment) a maximum of 8 cores, you can split a keyframe animation up as many times as you like and collate the output images together. This is what I did for my "chlorine fossil" animation, where I ended up having three "high-CPU" 8 core instances rendering different sections.
I haven't tried the latest version (0.98) yet, but the following process worked for the 0.97 x64 version (and the current layout of EC2).
The first thing to do is fire up your EC2 instance. This is assuming you have an account, and have generated a key pair. At the moment when you join you're given a one-core instance to play around with for free, and it's best to use that for all testing. Firing up an actual high-CPU instance will cost you money! Something like 70 US cents an hour.
You'll also need to create a security group (or just change the default one) to allow ssh and ftp (ports 22 and 21) connections from your home IP address. This is all pretty easy to do, and the introductory amazon tutorials cover all the key pair and security group yadda yadda, so let's get to the fun stuff.
From your EC2 console, press the big "launch instance" button and choose the "community AMIs" tab. We're looking for a 64-bit ubuntu image. The one I've been using - and this might not be the best one - is AMI-06067854 ... lucid-10.4-amd64-server. Typing "lucid" into the search panel narrows the field a bit. Hit the "select" button for that one, then you'll be choosing an instance type. The "micro" one won't cost you any money, so continue with that one. There'll be a page where you can set up tags for multiple instances and the like; just continue straight through. Choose the key pair you want to use and a firewall where you have ssh and ftp access. So, on the review page, you should have an ubuntu x86_64 image with your key pair and everything looking cool - launch it!
Back on the EC2 console, your instance should now be starting up on the instances tab. Click on it, and copy its public DNS ... the one I just fired up looks like ec2-122-248-199-73.ap-southeast-1.compute.amazonaws.com.
Open up a terminal and cd into the directory where you have the key pair the instance is using. You'll want to ssh as the user "ubuntu" to the public DNS of your instance:
ssh -i yourkeypair.pem ubuntu@ec2-blah-blah-blah.compute.amazonaws.com

There'll be a complaint about authenticity - answer 'yes' to continue. Blam! There you are as the user 'ubuntu' on your cloud instance. Here's my mighty one core micro machine...
ubuntu@ip-xx-xxx-xxx-xxx:/home$ whoami
ubuntu
ubuntu@ip-xx-xxx-xxx-xxx:/home$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 10
cpu MHz : 2659.998
cache size : 6144 KB
...
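If all you want from that dump is the core count, there's one "processor" stanza per core, so a grep does the job:

```shell
# Count the cores the instance exposes - one "processor : N"
# entry per core in /proc/cpuinfo.
grep -c ^processor /proc/cpuinfo
```

On the micro instance this prints 1; on one of the 8 core "high-CPU" instances you should see 8.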
Neat eh? Now, we'll set up an ftp server on the instance so we can get files on and off. vsftpd works, and can be grabbed with:
sudo apt-get install vsftpd

This should all be cool, and now there's an ftp server running on the instance. But there are a couple of problems - we want to be able to put as well as get (there's nothing much to get now anyway), and at the moment filezilla or whatever you use won't be able to connect. First, edit the vsftpd config file:
sudo nano /etc/vsftpd.conf

Find the line that says "#write_enable=YES" and remove the "#" to uncomment it. Write the file out (we need to be root to be able to do this, hence the sudo) and exit. I suppose it might work to use vi or something in place of nano. I suppose. Now restart the vsftpd daemon:
sudo /etc/init.d/vsftpd restart

I found that I needed to give the user "ubuntu" a password to actually connect to the server from my local filezilla ftp program. There are probably more elegant ways around this, but it's easiest to just say fine and give ubuntu a password:
sudo passwd ubuntu

Fire up filezilla (or whichever ftp proggy is good) and connect to the public DNS address of your instance as the user "ubuntu" with the password you just gave it at the other end. You should be able to see /home/ubuntu and be able to put and get files.
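As an aside, that write_enable edit can be scripted with sed instead of done in nano. Here's a sketch run against a local sample copy so it's safe to try anywhere - on the instance you'd point it at the real /etc/vsftpd.conf (with sudo):

```shell
# Make a stand-in copy of the config; the real file on the
# instance is /etc/vsftpd.conf and needs sudo to change.
printf '%s\n' 'listen=YES' '#write_enable=YES' > vsftpd.conf.sample

# Uncomment the write_enable line in place.
sed -i 's/^#write_enable=YES/write_enable=YES/' vsftpd.conf.sample

cat vsftpd.conf.sample
```

Don't forget the daemon still needs a restart afterwards for the change to take.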
A pretty good file to upload to your instance is mandelbulber0.97x64.tar.gz. Go back to your ssh session at the other end and untar it:
tar -xvzf mandelbulber0.97x64.tar.gz

Now you'll have a nice fresh mandelbulber directory in your cloudy home. It won't run out of the box - there are a couple of bits to grab first. I found I had to grab libgtk2.0-dev, then update, then apt-get it again ... this is where it gets flaky:
sudo apt-get install libgtk2.0-dev
sudo apt-get update
sudo apt-get install libgtk2.0-dev

Once it starts making fonts and things, it should have taken. Since we're going to be running headless anyway, there really shouldn't be a need for gtk. But, when in Rome. Maybe bring back the headless compilation, Buddhi? Please? Anyway, this seems to bring in enough of libjpeg62-dev as well, so all we need is libsndfile:
sudo apt-get install libsndfile1-dev

Then we should be good to go: cd into the src/Release directory and make the bulber!
make clean
make all

What I like to do here is stay in src/Release, where the freshly compiled mandelbulber program is, and upload my own .mandelbulber folder with all the settings and keyframes I'm going to start rendering. Go back to filezilla and either tar up your .mandelbulber home directory or just upload it in place. The instance will need some sort of .mandelbulber directory up there, either from running the install script or from uploading your own. In any case, once that is done, you can test the cloudy goodness...
ubuntu@ip-xx-xxx-xxx-xxx:~/mandelbulber0.97x64/src/Release$ ./mandelbulber -nogui default.fract

Go back to filezilla and make the current remote directory /home/ubuntu/.mandelbulber (you won't find it in a listing, but you can manually specify it). You should find the image that was just rendered and can download it to your local machine.
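About the "tar up your .mandelbulber home directory" option mentioned above - on your local machine it looks something like this. The settings subdirectory and file names here are stand-ins so the sketch is self-contained; point tar at your real ~/.mandelbulber instead:

```shell
# Build a dummy .mandelbulber directory purely for illustration;
# at home you'd skip this and use your real ~/.mandelbulber.
mkdir -p .mandelbulber/settings
echo 'example' > .mandelbulber/settings/default.fract

# Bundle the whole directory into one archive for the ftp upload.
tar -czf dot-mandelbulber.tar.gz .mandelbulber

# After uploading, unpack it in /home/ubuntu at the other end:
tar -xzf dot-mandelbulber.tar.gz
```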
From here, -nogui -keyframe is your friend.
The 8 core "high-CPU" instances seem to run fairly well. There is a little bit of overhead of course, but the performance is in the ballpark of an 8 core machine in the flesh. The advantage is that you aren't limited to the flesh, and can split your animation over as many machines as you want to pay for. The "chlorine fossil" animation was made for about $30 in a third of the time it would have taken serially.
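Once each instance's frames are downloaded, collating the sections back into one sequence is just a renumbering loop. A sketch, assuming the chunks landed in chunk1/ through chunk3/ and the frames are named like image0001.jpg (both names are assumptions - adjust for whatever mandelbulber actually wrote):

```shell
# Stand-in frames so the sketch runs anywhere; in practice these
# directories hold the images downloaded from each instance.
mkdir -p chunk1 chunk2 chunk3 combined
touch chunk1/image0001.jpg chunk1/image0002.jpg
touch chunk2/image0001.jpg chunk2/image0002.jpg
touch chunk3/image0001.jpg

# Copy every chunk's frames into combined/, renumbering as we go
# so the final sequence is continuous across the chunks.
n=1
for dir in chunk1 chunk2 chunk3; do
  for f in "$dir"/image*.jpg; do
    cp "$f" "$(printf 'combined/frame%04d.jpg' "$n")"
    n=$((n + 1))
  done
done
ls combined
```

The only thing to be careful of is listing the chunk directories in render order, so the renumbered frames come out in the right sequence.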