What am I trying to achieve anyway?

This is one of three follow-ups to my latest post, and it’s all about rendering. I will cover the other two subjects in upcoming posts!

One fundamental problem with any fractal flame algorithm is that rendering an animation simply takes too long. Consecutive frames are rendered from nearly identical parameters, with only slight changes between them. There is evidently a lot of overlap, so your CPU is forced to compute essentially the same pixels over and over again. What fr0st will try to accomplish is to carry some information over from frame to frame during an animation render and save on recalculation, which would significantly speed up rendering. A similar principle is already used in the current prototype of the fr0st renderer, where every pixel drawn slowly fades away, mixing into subsequent frames.
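As a rough sketch of that fading-buffer principle (the function name and decay factor below are invented for illustration, not actual fr0st code): each new frame’s histogram is added to a persistent buffer whose old contents decay exponentially, so nearly identical neighbouring frames reinforce each other instead of being recomputed from scratch.

```python
DECAY = 0.8  # fraction of the old buffer kept each frame (hypothetical value)

def blend_frame(buffer, new_histogram):
    """Fade the persistent buffer, then add the new frame's samples.

    Because consecutive frames are nearly identical, the faded
    remnants of earlier frames act as extra samples for the current
    one -- a "virtual" quality improvement at little extra cost.
    """
    return [old * DECAY + new for old, new in zip(buffer, new_histogram)]

# Persistent per-pixel density buffer, reused across the whole animation.
buf = [0.0] * 8

for frame in range(3):
    hist = [1.0] * 8  # stand-in for one frame's worth of chaos-game samples
    buf = blend_frame(buf, hist)
```

Each pixel ends up as a geometric mix of the last few frames, which is exactly why this only helps with animations: for a still image there is no previous frame to borrow samples from.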

A simple analogy is video compression. If you have ever tried stringing together a large number of individually compressed frames without using any codec, you’ll know it produces an enormous file. Most video codecs combat this by finding overlapping areas along the third dimension of the image: time. Any area of the animation that doesn’t change from frame to frame can be compressed considerably, since the same data from one frame in that part of the image can be applied to tens or hundreds of subsequent frames. This is of course a gross simplification of how video encoding works (an area in which I am hardly an expert). Even though these methods can cause a significant amount of quality loss, they’re much better than the alternative: being unable to distribute video over the internet at all.

For me, this is by far the most important of the three areas I am currently working on. Fr0st is not meant to be a copy of Apophysis or Flam3; if I weren’t trying to fundamentally change the way IFS rendering works, I wouldn’t bother writing an entirely new program. There are just too many good alternatives already out there.

In the 0.4 release the flam3 renderer will be included (through pyflam3), but the long-term goal is quite different and will possibly be implemented using the lower-level functions of the flam3 library.

For now I can’t really say much more, because I haven’t quite figured it out yet. Only time will tell.


4 Responses to What am I trying to achieve anyway?

  1. Ian Anderson says:

    Sounds very interesting! I can generally render a 3-minute animation in under 24 hours, but any significant time-saving will be a great bonus. There’s a project being run via deviantART (currently on hold) to put a GUI on flam3, along with some bells and whistles. Link: http://flam3animator.deviantart.com/

  2. John Miller says:

    I’ve been discussing HDR rendering with Erik Reckase lately and keep allowing my mind to wander onto this problem. With an HDR workflow you can move the filtering, DE, and palette application to a post-rendering phase. The actual render would not use logarithmic estimation, because you’re no longer constrained to a fairly small range of values – the post-rendering portion is what would take that information and create an image out of it. I keep toying with the idea of storing as much information as possible per pixel so that the affine shifts could be done to that data.

    I hope you can follow me and that this makes sense, because my understanding of IFS in general is not nearly as complete as yours, but I want to keep track of how many times each xform touches a pixel and what it does to said pixel as much as possible. The initial rendering would run the chaos game and you’d be left with per-pixel data on how many times it was plotted to, by which transforms, and how many times by each transform. Then when one of the transforms rotates, you may be able to do some math on the post-chaos game dataset.

    Now, is this quicker? It seems to me like it might be slower. There’s much more logic going on here than in the chaos game. It also wouldn’t work, probably at all, with variant changes. Though I imagine if you’re storing all that information, you could just erase the variant-shifted transform and run that one again to save time. Memory usage would go up as well, which can already be a problem when trying to render high-DPI images.

    I’m curious if that did make any sense 🙂
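    To make it a bit more concrete, here is a toy sketch of the bookkeeping I have in mind (all names invented, using a tiny 1-D “image” and ignoring color entirely): each plotted point is attributed to the xform that produced it, so one transform’s contribution can later be erased on its own.

```python
import random
from collections import defaultdict

SIZE = 16  # tiny 1-D "image" to keep the sketch readable

def run_chaos_game(xforms, iterations=5000, seed=1):
    """Plot points, recording per pixel how many hits came from each xform."""
    rng = random.Random(seed)
    hits = [defaultdict(int) for _ in range(SIZE)]  # pixel -> {xform index: count}
    x = rng.random()
    for _ in range(iterations):
        i = rng.randrange(len(xforms))
        a, b = xforms[i]
        x = (a * x + b) % 1.0          # toy 1-D affine map, wrapped into [0, 1)
        hits[int(x * SIZE)][i] += 1    # attribute the plot to the xform used
    return hits

def density(hits):
    """Collapse the per-xform counts into a plain histogram."""
    return [sum(h.values()) for h in hits]

xforms = [(0.5, 0.0), (0.5, 0.5)]
hits = run_chaos_game(xforms)

# When xform 1 changes, erase only its contribution instead of
# throwing the whole buffer away; its points alone get re-plotted.
for h in hits:
    h.pop(1, None)
```

    Erasing transform 1 this way leaves the other transforms’ samples untouched, which is the potential saving – though, as I said, the extra logic and memory may well eat it up.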

  3. Vitor says:

    Ian,

    That sounds like an interesting project. I’ll check it out as soon as I can! I always thought, however, that Python was a more logical choice for a flam3 GUI, given the fact that pyflam3 exists.

  4. Vitor says:

    John,

    Yes, it does make sense. I don’t really know where the experimentation I’m doing right now will take me, but it’s precisely that kind of stuff that can theoretically be improved.

    I’m not a big fan of saving too much data, precisely because memory usage can quickly get out of hand. My idea was more along the lines of preserving the data buffer between frames, creating a blended image each iteration which allows for a virtual quality improvement. This will only be useful for animations of course.

    I can’t really provide anything more tangible than this idea at the moment, because while I may know a lot about fractals theoretically, the algorithmic implementation of these ideas is always a lot harder.
