Also, while you're taking extra samples, it doesn't really cost any more to incorporate depth of field, motion blur, area lights, etc. In other words, it's not like it takes twice as long to do DOF and motion blur, compared to just DOF alone. So that extra sampling effort to do things correctly is quite justifiable, since it covers all effects if you do it right.
Exactly. BUT, and there's always a but. Motion blur, DOF, etc. are free in
unbiased renderers (Maxwell, Octane, Arion, etc.), and unfortunately, those are still not viable options. They still take waaaaaay too long compared to biased renderers, and those that utilize GPU processing have some serious limitations which prohibit their use for any serious commercial production. For instance, depending on the project, it is not uncommon to have scenes in excess of millions or even billions of triangles. This is immediately impossible on GPUs. Likewise, using 4K textures, or many of them, is also a limitation due to GPU RAM. There's much more.
So your standard commercial production renderers like VRay, Mental Ray, etc.,
do cheat, and that makes turning on a feature like depth of field quite a bit more taxing. For this reason, I'd say about 80% of all projects still do depth of field in post. VRay and Mental Ray
do have GPU extensions (VRay RT GPU, iray), but like I said, they are limited and more useful for single-frame production than animation.
It's also interesting to note that film production renderers weren't even raytracers until recently. RenderMan and 3Delight, from what I know, didn't even do raytracing, in order to achieve the speeds required to render at film resolutions quickly and efficiently. I believe films like The Matrix were among the first to use commercial raytracers like Mental Ray to render with global illumination and such.
Though couldn't you "just" write an algorithm that sort of does both (or all three, if you add bloom) blurs at the same time in post-processing?
You probably could, but it would be extremely complicated. For instance, you need to supply a depth map: a frame that describes each pixel's distance from the camera. With this you can effectively apply and adjust the depth of field in post.
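To make the idea concrete, here's a minimal sketch in Python/NumPy of how a depth map could drive a post-process DOF blur. The function name, parameters, and the simple box blur are my own illustration, not any renderer's actual algorithm, and real implementations also have to deal with foreground/background edge bleeding, which this ignores:

```python
import numpy as np

def dof_blur(image, depth, focal_depth, strength=4.0):
    """Naive post-process DOF: blur each pixel by an amount proportional
    to its distance from the focal plane, using a simple box average.
    `image` is (H, W, 3); `depth` is (H, W) camera-space distance."""
    h, w = depth.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # circle-of-confusion radius grows away from the focal plane
            r = int(strength * abs(depth[y, x] - focal_depth))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Pixels sitting exactly at the focal depth get a radius of zero and pass through untouched; everything else gets progressively softer.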
You would have to do the same for the motion blur: supply a vector map that describes the direction and length of each pixel's motion. This exists, and you can do it with select apps. So imagine having to supply two extra frames of depth and motion information, both of which also have to be computationally derived, just to process one frame. And then imagine doing it for a whole sequence? Nahhhhhh....
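Under the same caveat (hypothetical names, NumPy-only sketch, no occlusion handling), using that vector map would look roughly like this: average several taps taken backward along each pixel's screen-space motion vector:

```python
import numpy as np

def motion_blur(image, vectors, samples=8):
    """Naive post-process motion blur: for each pixel, average `samples`
    taps taken backward along its motion vector. `vectors` is (H, W, 2)
    holding per-pixel (dy, dx) motion over the shutter interval."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    for i in range(samples):
        t = i / (samples - 1) if samples > 1 else 0.0
        for y in range(h):
            for x in range(w):
                # clamp each tap to the frame bounds
                sy = min(max(int(round(y - t * vectors[y, x, 0])), 0), h - 1)
                sx = min(max(int(round(x - t * vectors[y, x, 1])), 0), w - 1)
                out[y, x] += image[sy, sx]
    return out / samples
```

A zero vector map leaves the image unchanged, while a moving object smears along its vector; the point is that both this map and the depth map have to be rendered or derived per frame before you can even start the post work.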
-Rich