3/14/2016

XRGB Bitmap Test

This is obvious and I think it's been done before, but hey.

I was remembering how modern LZs like LZMA (BitKnit, etc.) that (can) use pos&3 as a literal context might prefer bitmaps in XRGB rather than 24-bit RGB.

In XRGB, each color channel gets its own entropy coding. The offset-bottom-bits trick also works if the offsets are whole pixel steps (off&3 will always be zero). In 24-bit RGB all that structure is mod-3, which we don't do.

(in general LZMA-class compressors fall apart a bit if the structure is not the typical 4/8/pow2)

In compressors it's generally terrible to stick in extra bytes and give the compressor more work to do. In this case we're injecting a 0 as every 4th byte, and the compressor has to figure out that those are all redundant just to get back to its original size.
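To be concrete, the padding transform is trivial; a minimal sketch (the function name is mine, not from any real tool):

```c
#include <stddef.h>
#include <stdint.h>

/* Pad 24-bit RGB pixels to 32-bit XRGB by injecting a zero byte per pixel.
   With 4-byte pixels, an LZ that contexts literals on (pos & 3) gets one
   entropy model per channel, and whole-pixel match offsets have (off & 3) == 0. */
static void rgb24_to_xrgb32(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        dst[4*i + 0] = src[3*i + 0];
        dst[4*i + 1] = src[3*i + 1];
        dst[4*i + 2] = src[3*i + 2];
        dst[4*i + 3] = 0;  /* the redundant X byte the compressor must model away */
    }
}
```

Decoding just strips the fourth byte back out; the 4/3 size inflation is what the compressor has to earn back.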

Anyway, this is an old idea, but I don't think I ever actually tried it. So :


PDI_1200.bmp

LZNA :

24-bit RGB : LZNA : 2,760,054 -> 1,376,781
32-bit XRGB: LZNA : 3,676,818 -> 1,311,502

24-bit  RGB with DPCM filter : LZNA : 2,760,054 -> 1,022,066
32-bit XRGB with DPCM filter : LZNA : 3,676,818 -> 1,015,379  (MML8 : 1,012,988)

webpll : 961,356
paq8o8 : 1,096,342

moses.bmp

24-bit RGB : LZNA : 6,580,854 -> 3,274,757
32-bit XRGB: LZNA : 8,769,618 -> 3,022,320

24-bit  RGB with DPCM filter : LZNA : 6,580,854 -> 2,433,246
32-bit XRGB with DPCM filter : LZNA : 8,769,618 -> 2,372,921

webpll : 2,204,444
gralic111d : 1,822,108

other compressors :

32-bit XRGB with DPCM filter : LZA  : 8,769,618 -> 2,365,661 (MML8 : 2,354,434)

24-bit  RGB no filter : BitKnit : 6,580,854 -> 3,462,455
32-bit XRGB no filter : BitKnit : 8,769,618 -> 3,070,141
32-bit XRGB with DPCM filter : BitKnit : 8,769,618 -> 2,601,463

32-bit XRGB: LZNA : 8,769,618 -> 3,022,320
32-bit XRGB: LZA  : 8,769,618 -> 3,009,417

24-bit  RGB: LZMA : 6,580,854 -> 3,488,546 (LZMA lc=0,lp=2,pb=2)
32-bit XRGB: LZMA : 8,769,618 -> 3,141,455 (LZMA lc=0,lp=2,pb=2)

repro:

bmp copy moses.bmp moses.tga 32
V:\devel\projects\oodle\radbitmap\radbitmaptest
radbitmaptest64 rrz -z0 r:\moses.tga moses.tga.rrz -f8 -l1

Key observations :

1. On "moses" unfiltered : padding to XRGB helps a solid amount (3,274,757 to 3,022,320 for LZNA), despite the source being 4/3 bigger. I think that proves the concept. (BitKnit & LZMA show an even bigger difference.)

2. On filtered data, padding to XRGB still helps, but much (much) less. Presumably this is because post-filter data is just a bunch of low values, so the 24-bit RGB data is not so multiple-of-three structured (it's a lot of 0's, +1's, and -1's, less coherent, less difference between the color channels, etc.)

3. On un-filtered data, "sub" literals might be helping BitKnit (it beats LZMA on 32-bit unfiltered, and keeps pace with LZNA). On filtered data, the sub literals don't help (and might even hurt) and BK falls behind. We like the way sub literals sometimes act as an automatic structure-stride and delta filter, but they can't compete with a real image-specific DPCM.
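For reference, the simplest possible DPCM is just a per-channel delta against the pixel to the left; a sketch of that (the actual filter used in the numbers above is not specified here, and a real image DPCM would also use the up and up-left neighbors):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal per-channel DPCM, in place: replace each byte with its residual
   against the same channel of the previous pixel. Iterate backwards so the
   predictions come from un-filtered values. */
static void dpcm_left_filter(uint8_t *pix, size_t npixels, size_t bpp)
{
    for (size_t i = npixels; i-- > 1; ) {
        for (size_t c = 0; c < bpp; c++)
            pix[i*bpp + c] = (uint8_t)(pix[i*bpp + c] - pix[(i-1)*bpp + c]);
    }
}
```

Post-filter the data is mostly small residuals (0, +1, -1), which is why the mod-3 channel structure matters so much less after filtering.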


Now, XRGB padding is an ugly way to do this. You'd much rather stick with 24-bit RGB and have an LZ that works inherently on 3-byte items.

The first step is :


LZ that works on "items"

(eg. item = a pixel)

LZ matches (offsets and lens) are in whole items

(the more analogous to bottom-bits style would be to allow whole-items and "remainders";
that's /item and %item, and let the entropy coder handle it if remainder==0 always;
but probably best to just force remainders=0)

When you don't match (literal item)
each byte in the item gets its own entropy stats
(eg. color channels of pixels)

which maybe is useful on things other than just images.
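A sketch of the item-granular idea (the names and the count-based "stats" are illustrative, not any real codec's API): match offsets are counted in whole items, and a literal's entropy model is selected by its byte position within the item:

```c
#include <stddef.h>
#include <stdint.h>

#define ITEM_SIZE 3   /* e.g. item = one 24-bit RGB pixel */
#define NUM_SYMS  256

/* One set of entropy stats (here just symbol counts) per byte-within-item,
   so each color channel of a literal pixel gets its own model. */
typedef struct { uint32_t count[ITEM_SIZE][NUM_SYMS]; } LiteralStats;

static void literal_update(LiteralStats *st, size_t pos, uint8_t sym)
{
    st->count[pos % ITEM_SIZE][sym]++;  /* context = byte index within the item */
}

/* Matches are expressed in whole items; the byte offset is off_items * ITEM_SIZE,
   which forces the "remainder" (off % ITEM_SIZE) to zero. */
static size_t match_byte_offset(size_t off_items)
{
    return off_items * ITEM_SIZE;
}
```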

The other step is something like :


Offset is an x,y delta instead of linear
(this replaces offset bottom bits)

could be generically useful in any kind of row/column structured data

Filtering for values with x-y neighbors

(do you do the LZ on un-filtered data, and only filter the literals?)
(or do you filter everything and do the LZ on filter residuals?)

and a lot of this is just webp-ll

3/11/2016

Seven Test

I made a new test set called "sevens", taking its lead from enwik7 : each file is 10 MB (10^7 bytes).

The goal here is not to show the total or who does best overall (that depends on how you weight each type of file and whether you think this selection is representative of the occurrence ratios in your data), but rather to show how each compressor does on different types of data, to highlight their different strengths.

Showing compression factor (eg. N:1 , higher is better) :

run details :


ZStd is 0.5.1 at level 21 (optimal)
LZMA is 7z -mx9 -m0=lzma:d24
Brotli is bro.exe by Sportman --quality 9 --window 24 (*)
Oodle is v2.13 at -z6 (Optimal2)

All competitors run via their provided exe

Some takeaways :

Binary structured data is really where the other compressors leave a lot of room to beat them. ("granny" and "records"). The difference in sizes on all the other files is pretty meh.

BitKnit does its special thang on granny - close to LZNA but 2X faster to decode (and ~6X faster than LZMA). Really super space-speed. BitKnit drops down to more like LZHLW levels on the non-record files (LZNA/LZMA have a small edge on them).

I was really surprised by ZStd vs Brotli. I actually went back and double-checked my CSV to make sure I hadn't switched the columns by accident. In particular - Brotli does poorly on enwik7 (huh!?) but pretty well on "granny", and surprisingly ZStd does quite poorly on "granny" & "records". Not what I expected at all. Brotli is surprisingly poor on text/web and surprisingly good on binary record data.

LZHLW is still an excellent choice after all these years.

(* = Brotli quality 10 takes an order of magnitude longer than any of the others. I got fed up with waiting for it. Oodle also has "super" modes at -z8 that aren't used here. (**))

(for concreteness : Brotli 11 does pretty well on granny7 ; (6.148:1 vs 4.634:1 at q9) but it runs at 68 kb/s (!!) (and still not LZMA-level compression))

(** = I used to show results in benchmarks that required really slow encoders (for example the old LZNIB optimal "super parse" was hella slow); that can result in very small sizes and great decode speed, but it's a form of cheating. Encoders slower than 1 mb/s just won't be used, they're too slow, so it's reporting a result that real users won't actually see, and that's BS. I'm trying to be more legit about this now for my own stuff. Slow encoders are still interesting for research purposes because they show what should be possible, so you can try to get that result back in a faster way. (this in fact happened with LZNIB and is a Big Deal))

Seven Test Space-Speeds

Showing decompress time space-speed tradeoff on the different files of "seven test" :

records7

granny7

game7

exe7

enwik7

dds7

audio7

Note on the test :

This is running the non-Oodle compressors via my build of their lib (*). Brotli is not included because it's too hard to build in pre-2010 MSVC. "oohc" here is "Optimal2" level (originally posted with Optimal1 level, changed to Optimal2 for consistency with previous post).

The sorting of the labels on the right is by compressed size.

Report on total of all files :

-------------------------------------------------------
by ratio:
oohcLZNA    :  2.37:1 ,    2.9 enc mb/s ,  125.5 dec mb/s
lzma        :  2.35:1 ,    2.7 enc mb/s ,   37.3 dec mb/s
oohcBitKnit :  2.27:1 ,    4.9 enc mb/s ,  258.0 dec mb/s
lzham       :  2.23:1 ,    1.9 enc mb/s ,  156.0 dec mb/s
oohcLZHLW   :  2.16:1 ,    3.4 enc mb/s ,  431.9 dec mb/s
zstdmax     :  1.99:1 ,    4.6 enc mb/s ,  457.5 dec mb/s
oohcLZNIB   :  1.84:1 ,    7.2 enc mb/s , 1271.4 dec mb/s

by encode speed:
oohcLZNIB   :  1.84:1 ,    7.2 enc mb/s , 1271.4 dec mb/s
oohcBitKnit :  2.27:1 ,    4.9 enc mb/s ,  258.0 dec mb/s
zstdmax     :  1.99:1 ,    4.6 enc mb/s ,  457.5 dec mb/s
oohcLZHLW   :  2.16:1 ,    3.4 enc mb/s ,  431.9 dec mb/s
oohcLZNA    :  2.37:1 ,    2.9 enc mb/s ,  125.5 dec mb/s
lzma        :  2.35:1 ,    2.7 enc mb/s ,   37.3 dec mb/s
lzham       :  2.23:1 ,    1.9 enc mb/s ,  156.0 dec mb/s

by decode speed:
oohcLZNIB   :  1.84:1 ,    7.2 enc mb/s , 1271.4 dec mb/s
zstdmax     :  1.99:1 ,    4.6 enc mb/s ,  457.5 dec mb/s
oohcLZHLW   :  2.16:1 ,    3.4 enc mb/s ,  431.9 dec mb/s
oohcBitKnit :  2.27:1 ,    4.9 enc mb/s ,  258.0 dec mb/s
lzham       :  2.23:1 ,    1.9 enc mb/s ,  156.0 dec mb/s
oohcLZNA    :  2.37:1 ,    2.9 enc mb/s ,  125.5 dec mb/s
lzma        :  2.35:1 ,    2.7 enc mb/s ,   37.3 dec mb/s
-------------------------------------------------------

How to for my reference :


type test_slowies_seven.bat
@REM test each one individually :
spawnm -n external_compressors_test.exe -e2 -d10 -noohc -nlzma -nlzham -nzstdmax r:\testsets\seven\* -cr:\seven_csvs\@f.csv
@REM test as a set :
external_compressors_test.exe -e2 -d10 -noohc -nlzma -nlzham -nzstdmax r:\testsets\seven

dele r:\compressorspeeds.*
@REM testproj compressorspeedchart
spawnm c:\src\testproj\x64\debug\TestProj.exe r:\seven_csvs\*.csv
ed r:\compressorspeeds.*

(* = I use code or libs to test speeds, never exes; I always measure speed memory->memory, single threaded, with cold caches)
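A minimal sketch of that kind of harness (the cold-cache step between runs is omitted, and copy_decode is a stand-in "codec", not a real one):

```c
#include <string.h>
#include <time.h>

/* Memory->memory, single-threaded timing : decode from one preallocated
   buffer into another, timing only the decode call itself. */
typedef size_t (*decode_fn)(const void *src, size_t srclen, void *dst, size_t dstcap);

/* stand-in codec for demonstration : "decode" is just a memcpy */
static size_t copy_decode(const void *src, size_t srclen, void *dst, size_t dstcap)
{
    (void)dstcap;
    memcpy(dst, src, srclen);
    return srclen;
}

static double time_decode_mbps(decode_fn decode, const void *comp, size_t clen,
                               void *raw, size_t rawcap)
{
    clock_t t0 = clock();
    size_t got = decode(comp, clen, raw, rawcap);
    clock_t t1 = clock();
    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    if (secs <= 0.0) secs = 1e-9;  /* tiny inputs can time below clock resolution */
    return (double)got / 1000000.0 / secs;
}
```

A real harness would run multiple reps, evict the buffers from cache between reps, and take the best time.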
