
07-19-12 - Experimental Futures in Oodle

I don't know if this will ever see the light of day, but it's fucking sexy as hell so here's a sneak peek.

"futures" implemented in C++98 with Oodle :


void example_future_comp_decomp()
{
    future<OodleBufferRC> rawBuf = oodle_readfile("r:\\oodle_example_future_input");
    
    // call :
    // oodle_compress_sync( rawBuf, OodleLZ_Compressor_LZH, OodleLZ_CompressSelect_Fast );
    // but not until rawBuf is done :
    future<OodleBufferRC> compBuf = start_future<OodleBufferRC>( oodle_compress_sync, rawBuf, OodleLZ_Compressor_LZH, OodleLZ_CompressSelect_Fast );
    
    future<const char *> write = start_future<const char*>(oodle_writefile,"r:\\oodle_example_future_comp",compBuf);
        
    future<OodleBufferRC> read_compBuf = start_future<OodleBufferRC>( oodle_readfile, write ); 
    
    future<OodleBufferRC> read_decompBuf = start_future<OodleBufferRC>( oodle_decompress_sync, read_compBuf );
    
    start_future<const char *>( oodle_writefile, "r:\\oodle_example_future_decomp",read_decompBuf);
}

This creates an async chain to read a file, compress it, write it, then read it back in, decompress it, and write out the decompressed bits.

Futures can take either immediates as arguments or other futures. If they take futures as arguments, they enqueue themselves to run when their arguments are ready (using the forward dependency system). Dependencies are all automatic based on function arguments; it occurs to me that this is rather like the way CPUs do scheduling for out-of-order processing.

(in contrast to idea #1 in Two Alternative Oodles, here we do not get the full async graph in advance; it's given to us as we get commands. That is, we're expected to start running things immediately when we get a command, and we don't get to know what comes next. But, just like in CPUs, command submission normally runs slightly ahead of execution (unless our pipeline is totally empty), so we have a little bit of a time gap to work with.)
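
A minimal sketch of what that enqueue-on-ready mechanism might look like (this is not the actual Oodle internals; all the names here are made up) : each future counts how many of its inputs are still pending, a finished future notifies its dependents, and a dependent whose count hits zero gets enqueued to run :


#include <vector>

struct future_base
{
    int                        m_unfinished_deps;   // inputs that aren't ready yet
    std::vector<future_base *> m_dependents;        // futures waiting on my result

    future_base() : m_unfinished_deps(0) { }
    virtual ~future_base() { }

    virtual void run() = 0;   // invokes the wrapped function

    void add_dep(future_base * dep)
    {
        // real code needs a lock or atomics here, and has to handle
        // the race where dep has already completed
        m_unfinished_deps++;
        dep->m_dependents.push_back(this);
    }

    void start()
    {
        // nothing pending -> runnable right now ;
        // otherwise the last dependency to finish will enqueue us
        if ( m_unfinished_deps == 0 )
            enqueue();
    }

    void enqueue()
    {
        // a real implementation pushes this onto the worker pool ;
        // in this single-threaded sketch we just run in place
        run();
        on_complete();
    }

    void on_complete()
    {
        // wake anyone who was waiting on me
        for(int i=0;i<(int)m_dependents.size();i++)
        {
            future_base * f = m_dependents[i];
            if ( --f->m_unfinished_deps == 0 )
                f->enqueue();
        }
    }
};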

Functions called by a future can return either plain values or futures. (eg, in the code above, "oodle_readfile" could just return an OodleBufferRC directly, or it could return a future to one). If a function run as a future returns a future, then the returned future replaces the original, and the outer future doesn't complete until the chain of futures finally returns a non-future value. That is, this is basically a way of doing coroutine yields: when you want to yield, you instead return a future to the remaining work. (this is like the lambda-style coroutine yield that we touched on earlier). (* - see example at end)

future of course has a wait() method that blocks and returns its value. As long as you are passing futures to other futures, you never have to wait.

You can implement your own wait_all thusly :


// nop ignores its arguments ; it exists just to give the future something
// to run once all of its args are ready
int nop(...)
{
    return 0;
}

// returns a future that completes only when a,b,c and d are all done :
template <typename t_arg1,typename t_arg2,typename t_arg3,typename t_arg4>
future<int> done_when_all( t_arg1 a, t_arg2 b, t_arg3 c, t_arg4 d )
{
    return start_future<int>( nop, a,b,c,d );
}

then call

done_when_all( various futures )->wait();

A few annoying niggles due to use of old C++ :

1. I don't have lambdas so you actually have to define a function body every time you want to run something as a future.

2. I can't deduce the return type of a function, so you have to explicitly specify it when you call start_future.

3. I don't have variadic templates, so I have to specifically make versions of start_future<> for 0 args, 1 arg, etc. (roughly the shape sketched below). bleh. (though variadic templates are so ugly that I might choose to do it this way anyway).
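
A sketch of what that overload set might look like (just the shape of it, not the real Oodle signatures) :


template <typename T> class future;

template <typename t_ret, typename t_fn>
future<t_ret> start_future( t_fn fn );

template <typename t_ret, typename t_fn, typename t_arg1>
future<t_ret> start_future( t_fn fn, t_arg1 a1 );

template <typename t_ret, typename t_fn, typename t_arg1, typename t_arg2>
future<t_ret> start_future( t_fn fn, t_arg1 a1, t_arg2 a2 );

// ... and so on, one overload per argument count ;
// each t_argN can be either a plain value or a future<> to depend on,
// and t_ret has to be given explicitly because it isn't deduced from t_fn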

Otherwise not bad. (well, that is, the client usage is not bad; like most C++ the implementation is scary as shit. Also, doing this type of stuff in C++ is very heavy on the mallocs, because you have to convert things into different types, and the way you do that is by new'ing something of the desired type. If you are a sane and reasonable person that should not bother you, but I know a lot of people are in fact still bothered by mallocs.)
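
To be concrete about where the news come from, the usual C++98 way to hold a value of arbitrary type looks something like this (a guess at the shape of it, not the actual Oodle implementation) :


// classic C++98 type erasure ; every value that crosses into the future
// system gets boxed like this, hence a new() per value

struct value_holder_base
{
    virtual ~value_holder_base() { }
};

template <typename T>
struct value_holder : public value_holder_base
{
    T m_value;
    explicit value_holder(const T & v) : m_value(v) { }
};

// e.g. somewhere inside start_future :
//   value_holder_base * boxed = new value_holder<OodleBufferRC>( rawBuf );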

In order to automatically depend on a previous future, you need to take its return value as one of your input arguments. There's also a method to manually add dependencies on things that aren't input args. Another option is to carry over a dependency through a binding function which depends on one type and returns another, but that kind of C++ is not to my liking. (**)

To really use this kind of system nicely, you should make functions whose return value is a compound type (eg. a struct) that contains all of their effects. So, for example, oodle_writefile returns the name of the file it wrote, because that file is the thing it modifies; if you had a function that modified a game object, say an Actor *, then its return value should include that Actor *, so that you can use it to set up dependency chains. (in real code, oodle_writefile should really return a struct containing the file name and also an error code; something like the sketch below.)
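
For instance (hypothetical; the _ex names here are made up, not the actual Oodle API) :


// a fuller result for oodle_writefile : the return value carries
// everything the call affected, so later futures can depend on it
// just by taking it as an argument

struct OodleWriteFileResult
{
    const char *    fileName;   // the file that was written - the "effect"
    int             error;      // 0 = success
};

// OodleWriteFileResult oodle_writefile_ex(const char * fileName, OodleBufferRC buf);
//
// anything that wants to touch that file afterwards just takes the result
// as an argument, and the dependency falls out automatically :
//
// future<OodleWriteFileResult> wrote = start_future<OodleWriteFileResult>( oodle_writefile_ex, name, compBuf );
// future<OodleBufferRC> readBack = start_future<OodleBufferRC>( oodle_readfile_ex, wrote );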

* : example of returning futures to continue the async job :


float test_func_5_1(int x)
{
    Sleep(1);   // pretend to do some work
    return x * (2.0f/3.0f);
}

future<float> test_func_5_2(int x)
{
    Sleep(1);

    if ( x == 1 )
    {
        // hey in this case I can return my value immediately
        return make_future(0.6f);
    }
    else
    {
        // I need to run another async job to compute my value
        x *= 3;
        return start_future<float>(test_func_5_1,x);
    }
}


then use as :


future<float> f = start_future<float>(test_func_5_2,7);

... do other work ...

float x = f.wait();

does what it does.

This is a necessary building block that lets you compose operations, but it's an ugly way to write coroutine-style code.

What it is good for is creating more complex functions from simpler functions, like :


future<const char *> oodle_compress_then_writefile(const char *filename, OodleBufferRC rawBuf, OodleLZ_Compressor compressor, OodleLZ_CompressSelect select )
{
    // we're already running inside a future, so the compress can just be synchronous :
    OodleBufferRC compBuf = oodle_compress_sync( rawBuf, compressor, select );

    // chain the write as another future ; because we return a future,
    // our caller's future doesn't complete until the write is done
    return start_future<const char*>( oodle_writefile, filename, compBuf );
}
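
which then chains like any other op would; e.g. (reusing the paths and settings from the first example) :


future<OodleBufferRC> rawBuf = oodle_readfile("r:\\oodle_example_future_input");

// oodle_compress_then_writefile itself returns a future, so this chains
// cleanly : the compress+write starts once rawBuf is ready, and "wrote"
// doesn't complete until the inner writefile future does
future<const char *> wrote = start_future<const char *>( oodle_compress_then_writefile,
        "r:\\oodle_example_future_comp", rawBuf,
        OodleLZ_Compressor_LZH, OodleLZ_CompressSelect_Fast );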

I believe this "future" is much better than the real C++0x std::future, which seems to be missing a lot of features.

** : example of using a binding function to carry over dependencies :


// say I want to run two asyncs :

future<int> f1 = start_future<int>( func1 );

future<float> f2 = start_future<float>( func2 , 7.5f );

// but I want to run func2 after func1
//  due to some dependency that isn't through the return value

// what I can use is a return-to-arg adapter like :

template<typename t1,typename t2>
t1 return1(t1 a,t2 b)
{
    b;  // b is deliberately unused ; it's only here to carry the dependency
    return a;
}

// makes a future for "a" that doesn't become ready until "b" is done :
template<typename t1,typename t2>
future<t1> return1_after2(t1 a,future<t2> b)
{
    return start_future<t1>( return1<t1,t2>, a, b );
}


// then run :


future<int> f1 = start_future<int>( func1 );

future<float> f2 = start_future<float>( func2 , return1_after2(7.5f,f1) );

but like I said previously I hate that kind of crap for the most part. Much better is to use the explicit dependency mechanism, like :

future<int> f1 = start_future<int>( func1 );

future<float> f2 = make_future<float>( func2 , 7.5f );  // created, but not started yet
f2->add_dep(f1);    // explicit dependency : f2 won't run until f1 is done
f2->start();

There is one case where the funny binding mechanism can be used elegantly; that's when you can associate the binding with the actual reason for the dependency. That is, if we require func2 to run after func1, there must be some shared variable that is causing that ordering requirement. Using a binder to associate func1 with that shared variable is a clean way of saying "you can read this var after func1 is done".
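
For example (hypothetical code; "Table", "g_sharedTable", and "func2_reads_table" are made-up names, using the return1_after2 adapter from above) :


Table * g_sharedTable;

future<int>   f1 = start_future<int>( func1 );          // func1 fills out g_sharedTable

// binding the table to f1 attaches the ordering rule to the actual shared
// variable : "you may read g_sharedTable once f1 is done"
future<float> f2 = start_future<float>( func2_reads_table,
                        return1_after2( g_sharedTable, f1 ) );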
