12/26/2010

12-26-10 - Fix for my stuttering HTPC

So my MediaPortal HTPC has been randomly stuttering for a few months now. Mostly it's fine, and then randomly video or audio playback will start stuttering, which is super fucking annoying (in my youth I would've smashed something, but now I just get revenge on my HTPC by buying a record player; take that fucking annoying modern electronics!).

So I started reading about stutters and of course there are a million pages on it where people randomly suggest things that don't actually help (change this or that windows setting, install new codecs, blah blah blah). I tried some of the random things that are supposed to maybe help, such as the Windows dual-core TSC hot fix, the AMD dual core optimizer, turning off cool & quiet, turning DXVA on and off, etc. None of that helped.

Of course the real answer is to diagnose the problem and fix the actual issue instead of just randomly changing things. The difficult thing about diagnosing a stutter is you can't just look at what is taking too much CPU in Resource Mon or whatever, because in the steady state it's not happening.

Every video game should have some kind of "long frame trap" mode. What this would do is trace every frame, and if the frame was faster than some threshold (33 millis for example) it would just discard the trace; if it was longer it would parse the trace, pause the game, and let you see a hiprof. The idea is that what you care about is the rare spike frame, not the average. (this is a little harder on modern games with many threads and an unclear concept of "one frame takes X millis", because many operations are deferred or spread across several frames)
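
A minimal sketch of what I mean in C++ (the StartFrameTrace / DiscardFrameTrace / ParseTraceAndShowProfile names are made up, standing in for whatever hooks your engine's tracer actually has) :


#include <chrono>
#include <cstdio>

// hypothetical stand-ins for an engine's tracing hooks :
static void StartFrameTrace()          { /* begin recording trace events */ }
static void DiscardFrameTrace()        { /* throw the recording away */ }
static void ParseTraceAndShowProfile() { std::printf("spike frame! pause and show the hiprof\n"); }
static void RunOneFrame()              { /* simulate + render one frame */ }

int main()
{
    const double thresholdMillis = 33.0; // anything slower than ~30 fps is a "long frame"

    for (int frame = 0; frame < 1000; ++frame)
    {
        StartFrameTrace();
        auto t0 = std::chrono::steady_clock::now();

        RunOneFrame();

        double millis = std::chrono::duration<double, std::milli>(
                            std::chrono::steady_clock::now() - t0).count();

        if (millis <= thresholdMillis)
            DiscardFrameTrace();        // average frame : nobody cares
        else
            ParseTraceAndShowProfile(); // the rare spike frame is the one that matters
    }
    return 0;
}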

Anyway, there's a new Windows tool I didn't know about which does something like this : xperf ; which is in the Windows Performance Analysis Tools, which is in the Windows SDK. They are supposed to be Vista+ but you can actually copy xperf.exe onto an XP system and use it in limited ways.

So xperf is cool, but I didn't actually end up needing it, because some digging revealed the spikes were due to DPCs (Deferred Procedure Calls). Driver DPCs seem like one of the design flaws in Windows: they mean a badly written driver can ruin your computer by causing this kind of stuttering.

The old DPC Latency Checker for XP+ is cool, but to figure out exactly which driver is causing the stutters, you have to go into Device Manager and turn devices off and on one by one. There's a new awesome one for Vista+ called LatencyMon that tells you exactly which driver is taking the time. Radical. My newer Dell Precision lappy has no DPC problems; the longest time taken is by the MS network layer (ndis.sys).

So the problem on the HTPC turned out to be its Asus/Ralink 802.11n Wifi card. The card is branded Asus, but apparently the key bits are actually Ralink. I tried various versions of the drivers for it (from Asus and from Ralink) and all had the problem more or less. I tried switching my network to 802.11g - still the problem. I tried the "WLan Optimizer", which turns off the periodic Wifi scan for networks - no help. So I yanked the card out of the machine and stomped on it. Stutters gone!

Now I have a 50 foot net cable running across the house and I love it.

Some related links :

Measuring DPC time with XPerf - Pointless Blathering - Site Home - MSDN Blogs
Watching H.264 videos using DirectX Video Acceleration (DXVA) My collection of short anime reviews
Stuttering Playback - MediaPortal Wiki
SAF v4.00 ''stable'' (StandAlone Filters) - DXVA ready (H.264 and VC-1). - MediaPortal Forum
Resplendence Software - LatencyMon DPC, ISR and pagefault execution monitor
Resplendence Software - Free Downloads - LatencyMon
ReClock 1.8.7.3 - VideoHelp.com Downloads
Ralink corp. Windows
My Solution for Dell XPS M1530 DPC Latency
What video stuff can Reclock do! [Archive] - SlySoft Forum
How to config ffdshow and reclock for HD audio, - Doom9's Forum
FAQ-WLAN Optimizer - Optimize wireless gaming, audio and video streaming...
Codec FAQ - Playback issues
DXVA - Free with the new DivX Plus H.264 Decoder DivX Labs
M-Audio Delta 1010 Troubleshooting - CPU Throttling
ATI HD Hardware Accelerated DXVA for H.264 AVC L5.0 L5.1 Zach Saw's Blog
ASUS Driver Download ASUS Driver Upgrade (Wireless LAN Security & Wardriving - 802.11)

12/23/2010

12-23-10 - Shitty modern cars and the Frankencar solution

Last post was about modern car electronics and how they foul up your fun even when you think they're all off. I really don't like this trend, but it goes with other trends that are much worse :

Fat, big, heavy cars :
How cars are getting fatter (infographic)

Low visibility and rising waist lines :
Datsun Z old vs new
Camaro old vs new

Every time I see an old-vs-new comparison like that, it's shocking how much better the old one looks, and how much more glass there is. Let there be light!

ADDENDUM : some more :
Honda S600 vs S2000

I also think the old cars are special in a way that new cars aren't. When you see a beautiful old car, like even something as commonplace as a Porsche 356 or a Jag E-type, you actually stop and admire it. They have hand-hammered steel curves; they're real works of sculpture, not works of CAD. Old cars are just infinitely cooler too; when a new supercar goes by, you can see the crowds think "dick" and nobody really cares, but when something old goes by, women drop their babies and everyone stares.

In other "new cars suck trend", I hate the way most car makers are trying to improve fuel efficiency. The Honda CR-Z is a great example of the modern fuckup. It's heavy and weak and gets only 30-35 mpg. You can get that mileage with an old CRX with a plain old gas engine, and the car would be an absolute joy to drive. A modern Honda engine in a lightweight car could easily be tuned to be both very fun and very efficient.

The only automaker I know of that's breaking the mold is Mazda, who have announced a 2200 pound Miata and small high-compression engines. Because they are too small a company to develop a hybrid, their goal is light weight and efficient normal engines. Way to go. Unfortunately -

Government subsidies for hybrids are fucking horrible. It's completely typical of the corrupt government way of doing things - you take a perfectly reasonable basic idea (that government should encourage development of new clean technology and help consumers buy it - okay, that's fine), but then you hand out tons of cash in semi-arbitrary ways which allow you to favor certain companies over others, which completely distorts free market innovation. The most offensive subsidies to me are for Fisker and Tesla, which are self-indulgent rip-off operations by the rich for the rich. There are subsidies on both ends, and all over in other places; the car makers get cheap loans (several billion for Nissan and GM), some get direct free cash from the DOE ($500m for Fisker), then car buyers get tax credits, and car buyers get privileges like HOV access and cheap registration, etc. So sensible Mazda gets no subsidy; somebody who buys a 100 mpg Vespa instead of a car gets no subsidy; but some rich jackass who buys a Tesla even though his daily driver is an SUV gets a subsidy. (and worst of all, people who don't drive at all get no subsidy).

(though I should note that the ridiculousness of this subsidy pales in comparison to the vast stupid ethanol subsidy, or the corrupt CAFE exemptions for large trucks, large wheelbases, and "flex fuel", or the tax subsidies for large trucks - which of course should carry tax *penalties* under a sane government).

Anyway, I'm getting off topic.

Another thing that's got me thinking about Frankencars is the price of modern car engines. I get on this train of thought where I think - maybe I should tweak my car a bit, nah that would be stupid, maybe I should trade up for a GT3, nah if I want a track car I'd rather have a Cayman, what about a Cayman-GT3?, what if I blow the engine in my car? it's $15k, what if I buy a GT3 engine, it's $30k. Holy fuck, that's too expensive for a track car, which will inevitably need a new engine at some point.

All of this has got me thinking that the idea of an old car with modern internals is more and more appealing. You can easily find old cars that are under 2500 pounds, which is a delightful nimble weight to be. Drop in a modern engine and suspension and you have a crazy power to weight ratio with none of the new car electronic horribleness.

The thing that's kind of amazing is that a custom build like that could be relatively cheap if you choose a donor car that's not very popular. Great engines can be had for peanuts. For example the wonderful Honda K20 engine can be had for around $4k with transmission and ECU and everything - just drop it in! I adore the way those Hondas rev, and in a 2000 pound car, 220-250 hp would be more than enough. And if you run it on the track and blow up your engine - no biggie, just buy another (but you wouldn't because Honda engines are made of indestribonium).

Another cool swap appears to be the Chevy LS1, which is a Corvette engine, but it's also found detuned in their SUVs and whatnot; apparently you can get them from junkyards for $500 and then pay a few grand to undo the detuning, and blammo - you have a Vette engine you can stick in a Datsun 240Z. (you can also get the LS1 in aluminum so it's not crazy heavy). Apparently the LS series is one of the great engines of modern times; very cheap, robust, reliable, lightweight, high power.

As for what donor body to put this engine in, I love the AE86 (hachiroku) and the old MR2 - super light, nimble, twitchy RWD cars - maybe because I watch too much Best Motoring; I really like older Japanese cars, back when they were really Japanesey; now they're just so bland and generic. The Porsche 944 is the easiest option for transplants because the chassis and suspension and brakes are actually superb - the problem with them is the engines, which you are going to replace anyway - and they are worth almost zero dollars; unfortunately they are a bit heavy. Actually an early Boxster would be a great transplant recipient as well; they are super super cheap now because that 2.5L M96 engine is made of mud and twigs, but the chassis/layout is superb - drop in an LS1 and you have a monster. Actually maybe the 914 is the best choice.

The Opel GT is super cheap and just gorgeous, but apparently the chassis is no good so you'd have to tube-frame it; still, an Opel GT with an LS1 would be shweet.

Some random links related to swaps :

Welcome to Nofearmotorsports
Renegade Hybrids
YouTube - Sound EG K20
YouTube - ls1 powered datsun 240z
YouTube - 944 Porsche LS1
The Porsche 944 V8 Conversion Manual
HMotorsonline : Specializing in JDM-USDM engines and parts...
Smog-Legal LS1 Power 1987 Porsche 944 Turbo - $14k all done
LS1 conversion - Rennlist Discussion Forums
Classic Mini Cooper - VTEC K SERIES HONDA ENGINE CONVERSION KIT FOR MINI - www.MiniMania.com
opel_gt_3.jpg (JPEG Image, 797x600 pixels) - purty and cheap
Opel GT 1968
The New Honda Integra Type R Video - just a bit about how ridiculously good Honda engines are
Toyota 2000GT - almost the ideal looking car IMO
Factory Five Type 65 - kit car eewww gross , but it's got the look
FactoryFive Type 65 Coupe
Boxster LS1
Porsche boxster ls1 swap - LS1TECH
Boxster LS1 swap Ideas needed plz - Page 2 - LS1TECH
Boxster engine swap Grassroots Motorsports forum Grassroots Motorsports Magazine

ADDENDUM : some more on 240Z swaps :
Z, Z car, 510, 240SX, Altima, Suspension, Brakes
Ultimate Z Car Link Page!
LS1 Speed Inc - Pro Touring Super Cars
Introductory Discussion of V8 conversions
Cars for Sale - HybridZ
YouTube - 240z rb26dett drift test
Early Z Cars For Sale - The Datsun Classifieds

Also, not directly related, but if you ever start thinking about buying a car and tweaking it and turning it into more of a track car - don't. You can get track cars for ridiculously cheap, and they cost a *lot* to build. For example, somebody might buy an old 911 for $30k, spend $30k in track mods, and the result is a car that resells for $25k - super tweaked out track cars with awesome suspension (Moton) and full cages and all that often sell for less than the equivalent street car. Now granted the engine and tranny may need rebuilds at some point, but even with that cost it's probably cheaper than doing it yourself.

Just as a tiny example :
Race Cars For Sale

The best values are in cars where the base car is not worth much, but they put a ton of expensive mods on it. eg. something like an old Boxster with Motons.

If you're interested in having a track car, it's a way better value to just buy something like a Miata or a 944 that's fully set up for the track already than it is to take something like your Porsche 997 or your BMW M3 and convert it into more of a track car. It's kind of an amazing fact that mods to cars do not help resale value at all - in fact they often hurt it. So if you actually want some mods, you get amazing +value by buying a car that already has them.

ADDENDUM : Another very nice base car is the Lotus Elan +2, which is very pretty and very light. One problem with these older beautiful light cars is that I don't fit very well inside them. The 914, the Opel GT, the Elan - all are problems for people over 6'.

12/22/2010

12-22-10 - Obscure issues with modern car electronics

It would be nice if there was a car review site that actually gave you the real dirty information about cars - things like will the brakes overheat in one lap of a track, is it dialed for understeer, can you turn traction control completely off, etc. Instead we get the same shite all the time: stories about how great it felt on their favorite back road, useless 0-60 times, and so on. The annoying thing is there's just nowhere to look to actually get good information on a car; which cars can really handle track abuse? which cars really give you full manual control with no limitations? which cars really have properly sorted chassis that don't have inherent understeer or snap oversteer or high speed instability?

Some things you may not know that you may want to watch out for :

Just about every car made now is on e-gas (electronic throttle). Mostly this is an improvement, because it ensures that you get a good idle throttle level, and because it means you actually get a fully open throttle for the range of your pedal movements, which often was not the case with old cable-actuated cars. However, there are problems with e-gas that the enthusiast should be aware of :

1 : One is brake throttle override. This means that pressing the brake sends a signal that cuts the throttle. The idea is that it's a fail-safe: if something goes wrong in the electronic throttle, you can still brake. Many e-gas cars already have this (all VW, Audi, and Porsches do, for example), and it will be even more common because of the Toyota bullshit. (in fact I think it is required for 2012+ cars). In normal driving of course this is no big deal, but it is a big problem if you are trying to left-foot brake, or keep on the throttle during braking to spool your turbos, or brake and throttle at the same time to control a spin, etc.

2 : Another problem is that the egas signal is often smoothed by the ECU. Basically they run a low-pass filter on the signal. This is usually done to improve emissions because sudden throttle changes lead to lots of inefficient ignition cycles which are highly polluting. In some cases manufacturers have put smoothing in the throttle to reduce driveline noise and lash. In a non-filtered car, if you are coasting along and then you suddenly slap on the throttle, you will get some clunks and grinding sounds as the driveline goes from unloaded to loaded and lots of little bits of slack gears and such knock together. In order to give cars a false feeling of "solidity" the ECU smooths out the throttle so that it very gently engages pressure on the driveline before ramping up.

Smoothing the throttle is not the worst thing in the world, because for track or daily driving you actually should be smooth on the throttle anyway. But if you're trying to kick the throttle to start a power-on oversteer drift, it's annoying.

3 : Electronic Stability Control is already in most cars, and will be mandatory in all cars in the future. Mostly it's a good thing, but when you want to play around with your car (eg. whipping the tail to do hairpin 180's), it becomes a big negative. Performance cars generally have an "off" for ESC, but it rarely actually turns it all the way off, it just puts it into "minimal" mode. For example, a lot of cars now (G37, 135, etc) use the ESC as a type of electronic LSD. That is, they have an open diff, and the ESC brakes the spinning wheel, which is the only way power is transferred to the wheel with traction. This stays on even when you turn ESC "off". Furthermore, most cars will still kick in ESC when you are braking and the car is sliding, for example Porsches (PSM) and Nissans (VDC) will both kick in the ESC even when it is "off" when you touch the brakes. Because of the "electronic LSD" and other issues, it's probably not even desirable to turn ESC completely off on most cars - they are designed to work only with ESC on.

4 : Most cars are set up to understeer. This is done for safety/liability, and it actually affects the famous oversteering brands the most, such as BMW and Porsche. Because they were so renowned for dangerous oversteer, they are now sold in just the opposite way - understeering plowing beasts. This is done through alignment settings (too much toe), suspension (too soft in the rear usually), and tire staggers. Non-cognoscenti see big tire staggers (wider rear tires than front) and think "muscle", but in reality the excess of rear grip and dearth of front grip also means understeer.

5 : Your "track ready car" is probably not actually safe to drive on the track. It's sort of pathetic that manufacturers get away with this. They sell cars with "competition package" and talk about how great they are on the track, but the vast majority of "track ready" cars are not track ready, and quite a few are downright unsafe. You need to do research on your exact car to determine what the problems are, but some common ones are : insufficient oil cooling (sends car into limp mode; affects many cars with HPFP's), insufficient brake cooling (very dangerous! affects some M3's and Nissans), cheapo brake pads, cheap lug nuts (for example Dodge SRT's are known to lose wheels), oil sloshing / oil pan starvation ; lots of cars have this problem if you put on R-comps or slicks (because of the increased cornering forces), but most are okay if you are on road tires ; there are exceptions though, for example I know the Pontiac/Holden G8 will slosh all its oil out and you'll be bathed in a cloud of blue smoke.

And of course, there's also the problem that manufacturers will deny warranty claims if you track the car, even with cars like the Dodge Viper ACR or a Porsche GT3 RS which are clearly intended as track weapons, and even when the problem is clearly manufacturing defects and not abuse. But this is totally off the electronics topic, so back to that -

There are some solutions :

1 : On most cars this can be defeated by snipping a wire that goes from the brakes to the ECU. You have to be careful about how you do this on your exact car, because you presumably still want ABS and brake lights and such. The smoothest way to do this is to find the right wire and splice in a switch, so you can turn it on and off. ( some info on Porsche EGas throttle cut )

2 : On most cars you can defeat this with an ECU flash (aka a "tune"). Most of the claims of "tunes" are nonsense (on non-turbo cars; on turbo cars they can of course up the boost and help you blow up your WRX engine) but getting rid of throttle low-pass filtering is something they can do.

3 : Similar to 1, on most cars you can defeat this by finding the right wire and splicing in a switch. On Porsches the most elegant way is to disable the yaw sensor. Put it on a switch and you now have a true "PSM off". Looks like there's a similar trick on Nissans. On Mercs there's a secret code.

4 : It's reasonably easy to undo most of this, but you do have to do some research. The obvious answer is to remove the tire stagger. The details for your car are important; you need to find an expert who knows your suspension and how it all works together. I know that for the Boxster/Cayman, and for M3's (E46 in particular), going to a "square" (non-staggered) setup works very well. I wrote before about alignment a bit; and you can get stiffer rear / softer front sways. But depending on the car, you may not be able to dial out the understeer in a nice way and actually get a good handling result - if you just do it by decreasing rear grip, that's not very cool.

12/21/2010

12-21-10 - Rambles

C++0x will make lots of new styles of template code practical. It occurred to me the other day that complex template patterns that don't work in current C++ (C++98 ?) just don't even enter my mind any more. When I first started writing C++ I had all these ideas, you know, you start playing with the medium, oh I can do this, I can express this, but then you start running into limitations of the language and the compilers and practical issues, so you just stop doing those things.

The dangerous thing is that you stop even thinking about those possibilities. Your mind becomes so locked into what is practical with your current tools that the ideal doesn't even enter your mind.

It struck me that I'm now in this position with C++, whereas when I was new to it, my mind was fresh and unbiased by this stale old knowledge. Someone who just got into C++ with C++0x would not have that irrelevant historical bias, and would jump right into doing things in new, better ways.

It also struck me that I ran into this a lot as I was coming up in software. I would talk to older practitioners that I respected about ways of doing things, and they would at times tell me that such and such was a bad idea, or that what I was suggesting wasn't possible or practical, when I was pretty sure my idea was good. In hindsight I was almost always right on the technical issue. Obviously the more experienced person has a lot to teach you about restraint and the pitfalls of coding and so on, but they can also hold you back with outdated ideas.

Winter here is so bloody fucking depressing. It starts affecting me in all sorts of weird ways that I don't realize are just winter depression. Like, I start sleeping in a lot and just generally wanting to lay about a lot; okay, that's easy to recognize as depression and I catch that one. But then I also start thinking about buying random things. I already bought a damn PS3, and just moments ago I bought new floor mats for my car for unknown reasons, and I was starting to think about getting a supercharger when I suddenly realized - my god this is just winter depressed shopping. Obviously a lot of consumerism comes from depression. And just like binging on desserts or booze or whatever, it feels good very briefly and then just feels worse than ever.

Another weird side effect of winter depression is that I start thinking about politics. When I'm running around in the summer time and I see people saying "we need government off our backs" I might briefly think "hmm, that's odd, I don't see government on our backs hurting us anywhere, in fact I see quite the opposite, a severe lack of government interference in our lives causing problems right up and down through every level of society". But in the summer time I say "oh well, whatever" and go off bike riding. In the winter I stew on it until I get angry.

I was doing some research for a political blog post that I deleted, and I realized something. When I do scientific research I try to be open to anything, unbiased by preconceptions about what the answer will be. Often I have a hypothesis about what the best approach will be, but if I find conflicting evidence I am ready to change my mind. But when I do political research, I already know the point I'm trying to prove and I'm just looking for data to back it up. I already have the idea that I want to write about in mind and I just want to find "experts" or data to make it seem more "legitimate", which of course is not really research at all (and is a pretty sleazy way to add impact to your argument, though it's beyond standard practice). Of course some people do scientific research that way - they already know the outcome they want to prove and they just keep trying until they find a study that proves it (yes I'm looking at you, medicine).

Doing things "the best" is egotistical self indulgence. Anybody can do it if they waste enough time on it. (Charles' quick recipe to doing anything "the best in the world" : find out how the current best in the world does it, first make an implementation that matches them; then read some other papers and steal some other ideas; put them into your current implementation and tweak until it provides benefit; tada!). One of the things I was really proud of at Oddworld was that we didn't try to do each thing "the best" ; obviously sometimes we self-indulged a bit once in a while, but in general I think we were pretty good at staying away from that childish competitiveness that plagues so many game developers who want to have "the best" graphics engine or whatever. The productive way is to do it well enough for your use case, and then move on to the next thing. That's the only way you can knock out tons of code quickly.

When I'm procrastinating, I start making up all these strange things that I decide are a good idea to do. I feel like I need to be busy doing something productive, I won't just let myself sit on the couch or be nice to my girlfriend, but I wind up doing things that are completely pointless; recently I started writing my own email client, and making air scoop screens for my car, and replacing all the air filters on all the appliances in the house. When I'm doing it I have no concept that I'm procrastinating, in my mind this is a "todo" item and I'm taking care of things, but then I have a moment of clarity and I'm like "whoah wtf am I doing this for?" and realize that I'm just avoiding the work I should be doing.

12/11/2010

12-11-10 - Perceptual Notes of the Day

You have to constantly be aware of how you're pooling data. There's no a-priori reason to prefer one way or the other; when a human looks at an image do they see the single worst spot, or do they see the average? If you show someone two images, one with a single very bad error, and another with many small errors, which one do they say is worse?

There are also various places to remap scales. Say you have measured a quantity Q in two images, one way to make an error metric out of that is :


err = Lp-norm {  remap1(  remap2( Q1 ) - remap2( Q2 ) )  }

Note that there are two different remapping spots - remap1 converts the delta into perceptually linear space, while remap2 converts Q into space where it can be linearly delta'd. And then of course you have to tweak the "p" in Lp-norm.
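
In code the skeleton is just this (a sketch; the remaps, the p, and the normalization by count are all things you'd tune, not anything canonical) :


#include <cmath>
#include <vector>

// generic pooled error : remap2 puts Q into a space where it can be linearly
// delta'd, remap1 puts the delta into perceptually linear space, then Lp pooling.
double PooledError(const std::vector<double> & Q1,
                   const std::vector<double> & Q2,
                   double (*remap1)(double),
                   double (*remap2)(double),
                   double p)
{
    double sum = 0;
    for (size_t i = 0; i < Q1.size(); ++i)
    {
        double d = remap1( remap2(Q1[i]) - remap2(Q2[i]) );
        sum += std::pow( std::fabs(d), p );
    }
    return std::pow( sum / Q1.size(), 1.0 / p );
}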

You're faced with more problems as soon as you have more quantities that you wish to combine together to make your metric. Combine them arithmetically or geometrically? Combine before or after spatial pooling? Weight each one in the combination? Combine before or after remapping? etc.

This is something general that I think about all the time, but it also leads into these notes :

For pixel values you have to think about how to map them (remap2). Do you want light linear? (gamma decorrected) or perhaps "perceptually uniform" ala CIE L ?

The funny thing is that perception is a mapping curve that opposes gamma correction. That is, the monitor does something like signal ^ 2.2 to make light. But then the eye does something like light ^ (1/3) (or perhaps more like log(light)) to make mental perception. So the net mapping of pixel -> perception is actually pretty close to unity.
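
(To put rough numbers on it : perception ~= (pixel^2.2)^(1/3) = pixel^0.73 ; e.g. a pixel value of 0.5 becomes light ~0.22, which becomes perception ~0.60 - not far from linear in pixel value.)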

On the subject of Lp norms, I wrote this in email (background : a while ago I found that L1 error was much better for my video coder than L2) :


I think L1 was a win mainly because it resulted in better R/D bit distribution.

In my perceptual metric right now I'm using L2 (square error) + masking , which appears to be better still.

I think that basically L1 was good because if you use L2 without masking, then you care about errors in noisy parts of the image too much.

That is, say your image is made of two blocks, one is very smooth, one has very high variation (edges and speckle).

If you use equal quantizers, the noisy block will have a much larger error.

With L2 norm, that means R/D will try very hard to give more bits to the noisy block, because changing 10 -> 9 helps more than changing 2 -> 1.

With L1 norm it balances out the bit distribution a bit more.

But L2-masked norm does the same thing and appears to be better than L1 norm.
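
(To put numbers on that : under L2, taking an error of 10 down to 9 is worth 10^2 - 9^2 = 19, while taking 2 down to 1 is only worth 2^2 - 1^2 = 3, so R/D keeps pouring bits into the noisy block; under L1 both moves are worth exactly 1.)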


Lastly a new metric :

PSNR-HVS-T is just "psnr-hvs" but with thresholds. (*3) In particular, if PSNR-HVS is this very simple pseudo-code :


on all 8x8 blocks
dct1,dct2 = dct of the blocks, scaled by 1/ jpeg quantizers

for i from 0 -> 63 :
 err += square( dct1[i] - dct2[i] )

then PSNR-HVS-T is :

on all 8x8 blocks
dct1,dct2 = dct of the blocks, scaled by 1/ jpeg quantizers

err += w0 * square( dct1[0] - dct2[0] )

for i from 1 -> 63 :
  err += square( threshold( dct1[i] - dct2[i] , T ) )

the big difference between this and "PSNR-HVS-M" is that T is just a constant, not the complex adaptive mask that they compute (*1).


threshold is :

threshold(x,T) = MAX( (x - T), 0 )
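
For concreteness, here's a minimal C++ sketch of the per-block error (the naive DCT and the standard JPEG luma quant table are just the obvious choices; w0 and T are left as tuning constants, and I'm reading the threshold as acting on the magnitude of the delta) :


#include <cmath>

// standard JPEG luminance quantization table, used as the CSF scaling
static const double kJpegQ[64] = {
    16,11,10,16,24,40,51,61,     12,12,14,19,26,58,60,55,
    14,13,16,24,40,57,69,56,     14,17,22,29,51,87,80,62,
    18,22,37,56,68,109,103,77,   24,35,55,64,81,104,113,92,
    49,64,78,87,103,121,120,101, 72,92,95,98,112,100,103,99 };

// naive (slow) 8x8 DCT-II ; in and out are 64 doubles, row-major
static void Dct8x8(const double * in, double * out)
{
    const double PI = 3.14159265358979323846;
    for (int v = 0; v < 8; ++v)
    for (int u = 0; u < 8; ++u)
    {
        double sum = 0;
        for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            sum += in[y*8+x] * std::cos((2*x+1)*u*PI/16.0)
                             * std::cos((2*y+1)*v*PI/16.0);
        double cu = (u == 0) ? 1.0/std::sqrt(2.0) : 1.0;
        double cv = (v == 0) ? 1.0/std::sqrt(2.0) : 1.0;
        out[v*8+u] = 0.25 * cu * cv * sum;
    }
}

// error contribution of one pair of 8x8 blocks, per the pseudo-code above
static double BlockErrHvsT(const double * block1, const double * block2,
                           double w0, double T)
{
    double dct1[64], dct2[64];
    Dct8x8(block1, dct1);
    Dct8x8(block2, dct2);
    for (int i = 0; i < 64; ++i) { dct1[i] /= kJpegQ[i]; dct2[i] /= kJpegQ[i]; }

    double dc  = dct1[0] - dct2[0];
    double err = w0 * dc * dc;            // DC term gets its own weight
    for (int i = 1; i < 64; ++i)          // AC terms are thresholded
    {
        double d = std::fabs(dct1[i] - dct2[i]);
        double t = (d > T) ? (d - T) : 0.0;   // threshold(|x|,T) = MAX(|x|-T,0)
        err += t * t;
    }
    return err;
}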

The surprising thing is that PSNR-HVS-T is almost as good as PSNR-HVS-M , which is in turn basically the best perceptual metric (*2) that I've found yet. Results :


PSNR-HVS-T Spearman : 0.935155

fit rmse :
PSNR-HVS   : 0.583396  [1,2]
VIF        : 0.576115  [1]
PSNR-HVS-M : 0.564279  [1]
IW-MS-SSIM : 0.563879  [2]
PSNR-HVS-T : 0.557841
PSNR-HVS-M : 0.552151  [2]

In particular, it beats the very complex and highly tweaked out IW-MS-SSIM. While there are other reasons to think that IW-MS-SSIM is a strong metric (in particular, the maximum difference competition work), the claims that they made in the papers about superiority of fitting to MOS data appear to be bogus. A much simpler metric can fit better.

footnotes :

*1 = note on the masking in PSNR-HVS-M : The masked threshold they compute is proportional to the L2 norm of all the DCT AC coefficients. That's very simple and unsurprising, it's a simple form of variance masking which is well known. The funny thing they do is multiply this threshold by a scale :


scale = ( V(0-4,0-4) + V(0-4,4-8) + V(4-8,0-4) + V(4-8,4-8) ) / ( 4*V(0-8,0-8) )

V(x0-x1,y0-y1) = variance of the pixels in that sub-rectangle of the 8x8 block

threshold *= sqrt( scale );

that is, divide the block into four quadrants; take the variance within each quadrant and divide by the variance of the whole block. What this "scale" does is reduce the threshold in areas where the AC activity is caused by large edges. Our basic threshold was set by the AC activity, but we don't want the important edge shapes to get crushed, so we're undoing that threshold when it corresponds to major edges.

Consider various extremes; if the block has no large-scale structure, only random variation all over, then the variance within each quad will equal the variance of the whole, which will make scale ~= 1. On the other hand, if there's a large-scale edge, then the variance on either side of the edge may be quite small, so at least one of the quads has a small variance, and the result is that the threshold is reduced.
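
In code, the quadrant-variance scale is just this (a sketch of only this piece, not the whole PSNR-HVS-M masking) :


#include <cmath>

// variance of a sub-rectangle [x0,x1) x [y0,y1) of an 8x8 block (row-major)
static double VarRect(const double * block, int x0, int y0, int x1, int y1)
{
    double sum = 0, sumSq = 0;
    int n = (x1 - x0) * (y1 - y0);
    for (int y = y0; y < y1; ++y)
    for (int x = x0; x < x1; ++x)
    {
        double v = block[y*8 + x];
        sum += v; sumSq += v*v;
    }
    double mean = sum / n;
    return sumSq / n - mean * mean;
}

// the "edge compensation" scale : quadrant variances over whole-block variance
static double MaskingScale(const double * block)
{
    double vWhole = VarRect(block, 0,0, 8,8);
    if (vWhole <= 0.0) return 1.0;   // flat block : no adjustment
    double vQuads = VarRect(block, 0,0, 4,4) + VarRect(block, 4,0, 8,4)
                  + VarRect(block, 0,4, 4,8) + VarRect(block, 4,4, 8,8);
    return vQuads / (4.0 * vWhole);
}

// usage : threshold *= std::sqrt( MaskingScale(block) );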

Two more subtle follow-ups to this :

1A : Now, the L2 sum of DCT AC coefficients should actually be proportional to the variance of the block ! That means instead of doing


 threshold^2 ~= (L2 DCT AC) * (Variance of Quads) / (Variance of Blocks)

we could just use (Variance of Quads) directly for the threshold !! That is true, however when I was saying "DCT" earlier I meant "DCT scaled by one over JPEG quantizer" - that is, the DCT has CSF coefficients built into it, which makes it better than just using pixel variances.

1B : Another thought you might have is that we are basically using the DCT L2 AC but then reducing it for big edges. But the DCT itself is actually a decent edge detector - the values are right there in DCT[0,1] and DCT[1,0] !! So the easiest thing would be to just not include those terms in the sum, and then not do the scale adjustment at all!

That is appealing, but I haven't found a formulation of that which performs as well as the original complex form.

(*2) = "best" defined as able to match MOS data, which is a pretty questionable measure of best, however it is the same one that other authors use.

(*3) = as usual, when I say "PSNR-HVS" I actually mean "MSE-HVS", and actually now I'm using RMSE because it fits a little better. Very clear, I know.

12/09/2010

12-09-10 - Rank Lookup Error

Trying some other methods of testing to make sure the function fit isn't screwing me up too much.

Spearman rank correlations (it's just the Pearson correlation on sort ranks) :


mse.txt                 0.66782
square_ms_ssim_y.txt    0.747733
scielabL1.txt           0.855355
scielabL2.txt           0.868207
vif.txt                 0.87631
ms_ssim_bad_down.txt    0.880403
ms_ssim.txt             0.89391
iw_ms_ssim.txt          0.901085
wsnr.txt                0.90905
mydctdelta.txt          0.932998
my_psnr_hvs_m.txt       0.94082
mydctdeltanew.txt       0.944086
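
(Reminder : Spearman really is just Pearson on the sort ranks; a minimal C++ sketch, ignoring proper averaging of tied ranks :)


#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// ranks[i] = position of v[i] in ascending sorted order (ties not averaged)
static std::vector<double> Ranks(const std::vector<double> & v)
{
    std::vector<int> idx(v.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](int a, int b){ return v[a] < v[b]; });
    std::vector<double> ranks(v.size());
    for (size_t r = 0; r < idx.size(); ++r) ranks[idx[r]] = (double)r;
    return ranks;
}

static double Pearson(const std::vector<double> & a, const std::vector<double> & b)
{
    double n  = (double)a.size();
    double ma = std::accumulate(a.begin(), a.end(), 0.0) / n;
    double mb = std::accumulate(b.begin(), b.end(), 0.0) / n;
    double num = 0, da = 0, db = 0;
    for (size_t i = 0; i < a.size(); ++i)
    {
        num += (a[i]-ma)*(b[i]-mb);
        da  += (a[i]-ma)*(a[i]-ma);
        db  += (b[i]-mb)*(b[i]-mb);
    }
    return num / std::sqrt(da*db);
}

static double Spearman(const std::vector<double> & a, const std::vector<double> & b)
{
    return Pearson(Ranks(a), Ranks(b));   // Pearson correlation of the ranks
}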

I just thought of a new way to check the fit for this kind of scenario which I think is pretty cool.

You have two vectors of values. One vector is the original which you wish to match; the other is output by your program trying to match the original vector. The problem is that your program outputs on some other scale - different units, possibly with some warping of the curve - and you don't know what that mapping is. You wish to find how close your program is to the target without worrying about matching that curve.

Well, one way is "rank lookup error" :


given vectors "orig" and "test"

find "test_rank" such that
  r = test_rank[i] means item i is the r'th in sorted order in the vector test

find "sorted_orig" = orig, sorted

Sum{i} :
  err += square( orig[i] - sorted_orig[ test_rank[ i ] ] )

that is, the fit value for mapping from test's scale to orig is to find the sort index within test, and lookup the value in the sorted list of originals.

Obviously this isn't quite ideal; it does handle ties and near-ties pretty well though (better than Spearman/Kendall, because you get less error contribution when you get the rank wrong for two items with very similar values). Most importantly it avoids all the fidgety function fitting stuff.
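
In code it's only a few lines (a sketch; ascending sort order and the RMSE normalization are my choices) :


#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// "rank lookup rmse" : for each item, look up the original value at the rank
// the test metric assigned it, and compare that to the true original value.
static double RankLookupRmse(const std::vector<double> & orig,
                             const std::vector<double> & test)
{
    size_t n = orig.size();

    // test_rank[i] = where item i lands in the sorted order of "test"
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](int a, int b){ return test[a] < test[b]; });
    std::vector<int> test_rank(n);
    for (size_t r = 0; r < n; ++r) test_rank[idx[r]] = (int)r;

    std::vector<double> sorted_orig = orig;
    std::sort(sorted_orig.begin(), sorted_orig.end());

    double err = 0;
    for (size_t i = 0; i < n; ++i)
    {
        double d = orig[i] - sorted_orig[ test_rank[i] ];
        err += d * d;
    }
    return std::sqrt(err / n);
}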

Here are the scores with "rank lookup rmse" :


mse.txt                 1.310499
square_ms_ssim_y.txt    1.090392
scielabL1.txt           0.853738
scielabL2.txt           0.835508
ms_ssim_bad_down.txt    0.716507
ms_ssim.txt             0.667152
wsnr.txt                0.643821
vif.txt                 0.591508
iw_ms_ssim.txt          0.575474
mydctdelta.txt          0.548946
my_psnr_hvs_m.txt       0.543057
mydctdeltanew.txt       0.494918

if nothing else it's a good sanity check for the fitting stuff.

Also with rank-lookup-error you don't have to worry about transforms like acos for ssim or undo-psnr or anything else that's monotonic.

For comparison, these were the fit scores :


mse.txt                 1.169794
square_ms_ssim_y.txt    1.005849
scielabL1.txt           0.819501
scielabL2.txt           0.808595
ms_ssim_bad_down.txt    0.690689
ms_ssim.txt             0.639193
wsnr.txt                0.632670
vif.txt                 0.576114
iw_ms_ssim.txt          0.563879
my_psnr_hvs_m.txt       0.552151
mydctdelta.txt          0.548938
mydctdeltanew.txt       0.489756

mydctdeltanew is a new one; it just uses mydctdelta for only the AC part of the DCT (excludes the DC which is gross luma differences), and then it adds back on the gross luma difference using a form similar to IW-MS-SSIM (that is, using large filters and then using both variance masking and saliency boosting).

12-09-10 - Perceptual vs TID

My TID2008 score is the RMSE of fitting to match MOS values (0-9) ; fit from perceptual metric to MOS score is of the form Ax + Bx^C , so 0 -> 0 , and for fitting purposes I invert and normalize the scale so that 0 = no error and 1 = maximum error. For fitting I try {value},{acos(value)}, and {log(value)} and use whichever is best.

I also exclude the "exotic" distortions from TID because they are weird and not something well handled by most of the metrics (they favor SSIM which seems to be one of the few that handles them okay). I believe that including them is a mistake because they are very much unlike any distortion you would ever get from a lossy compressor.

I also weight the MOS errors by 1/Variance of each MOS ; basically if the humans couldn't agree on a MOS for a certain image, there's no reason to expect the synthetic metric to agree. (this is also like using an entropy measure for modeling the human data; see here for example)


MSE        : 1.169795  [1,2]
MSE-SCIELAB: 0.808595  [2]
MS-SSIM    : 0.691881  [1,2]
SSIM       : 0.680292  [1]
MS-SSIM-fix: 0.639193  [2] (*)
WSNR       : 0.635510  [1]
PSNR-HVS   : 0.583396  [1,2] (**)
VIF        : 0.576115  [1]
PSNR-HVS-M : 0.564279  [1] (**)
IW-MS-SSIM : 0.563879  [2]
PSNR-HVS-M : 0.552151  [2] (**)
MyDctDelta : 0.548938  [2]

[1] = scores from TID reference "metrics_values"
[2] = scores from me
[1,2] = confirmed scores from me and TID are the same

the majority of these metrics are Y only, using rec601 Y (no gamma correction) (like metric mux).

WARNING : the fitting is a nasty evil business, so these values should be considered plus/minus a few percent confidence. I use online gradient descent (aka single layer neural net) to find the fit, which is notoriously tweaky and sensitive to annoying shit like the learning rate and the order of values and so on.

(*) = MS-SSIM is the reference implementation and I confirmed that I match it exactly and got the same score. As I noted previously , the reference implementation actually uses a point subsample to make the multiscale pyramid. That's obviously a bit goofy, so I tried box subsample instead, and the result is "MS-SSIM-fix" - much better.

(**) = PSNR-HVS have received a PSNR-to-MSE conversion (ye gods I hate PSNR) ; so my "PSNR-HVS-M" is actually "MSE-HVS-M" , I'm just sticking to their name for consistency. Beyond that, for some reason my implementation of PSNR-HVS-M ([2]) is better than theirs ([1]) and I haven't bothered to track down why exactly (0.564 vs 0.552).

WSNR does quite well and is very simple. It makes the error image [orig - distorted] and then does a full image FFT, then multiplies each tap by the CSF (contrast sensitivity function) for that frequency, and returns the L2 norm.
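
A sketch of that structure (naive O(N^4) DFT instead of an FFT just to keep it short; the CSF here is the usual Mannos-Sakrison form and the mapping of frequency bins to cycles/degree via degPerPixel is an assumption about viewing conditions) :


#include <cmath>
#include <complex>
#include <vector>

static double Csf(double f)  // f in cycles per degree ; constants are the Mannos-Sakrison ones
{
    return 2.6 * (0.0192 + 0.114 * f) * std::exp(-std::pow(0.114 * f, 1.1));
}

// WSNR core on an NxN pair of images : CSF-weighted L2 norm of the error spectrum
static double WsnrError(const std::vector<double> & orig,
                        const std::vector<double> & dist,
                        int N, double degPerPixel)
{
    const double PI = 3.14159265358979323846;
    double err = 0;
    for (int v = 0; v < N; ++v)
    for (int u = 0; u < N; ++u)
    {
        std::complex<double> tap(0, 0);
        for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
        {
            double e   = orig[y*N + x] - dist[y*N + x];   // the error image
            double ang = -2.0 * PI * (u * x + v * y) / N;
            tap += e * std::complex<double>(std::cos(ang), std::sin(ang));
        }
        // radial frequency of this tap in cycles per degree (signed/wrapped index)
        double fu = (u <= N/2 ? u : u - N) / (N * degPerPixel);
        double fv = (v <= N/2 ? v : v - N) / (N * degPerPixel);
        double w  = Csf(std::sqrt(fu*fu + fv*fv)) * std::abs(tap);
        err += w * w;
    }
    return std::sqrt(err) / (N * N);   // normalization is arbitrary here
}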

PSNR-HVS is similar to WSNR but uses 8x8 DCT's instead of full-image FFT. And of course the CSF for an 8x8 DCT is just the classic JPEG quantization matrix. That means this is equivalent to doing the DCT and scaling by one over the JPEG quantizers, and then taking the L2 delta (MSE). Note that you don't actually quantize to ints, you stay in float, and the DCT should be done at every pixel location, not just 8-aligned ones.

VIF is the best of the metrics that comes with TID ; it's extremely complex, I haven't bothered to implement it or study it that much.

PSNR-HVS-M is just like PSNR-HVS, but adds masked thresholds. That is, for each tap of the 8x8 DCT, a just noticeable threshold is computed, and errors are only computed above the threshold. The threshold is just proportional to the L2 AC sum of the DCT - this is just variance masking. They do some fiddly shit to compensate for gross variance vs. fine variance. Notably the masking is only used for the threshold, not for scaling above threshold (though the CSF still applies to scaling above threshold).

IW-MS-SSIM is "Information Weighted" MS-SSIM. It's a simple weight term for the spatial pooling which makes high variance areas get counted more (eg. edges matter). As I've noted before, SSIM actually has very heavy variance masking put in (it's like MSE divided by Variance), which causes it to very severely discount errors in areas of high variance. While that might actually be pretty accurate in terms of "visibility", as I noted previously - saliency fights masking effects - that is, the edge areas are more important to human perception of quality. So SSIM sort of over-crushes variance and IW-SSIM puts some weight back on them. The results are quite good, but not as good as the simple DCT-based metrics.

MyDctDelta is some of the ideas I wrote about here ; it uses per-pixel DCT in "JPEG space" (that is, scaled by 1/JPEG Q's), and does some simple contrast-band masking, as well as multi-scale sum deltas.

There's a lot of stuff in MyDctDelta that could be tweaked that I haven't , I have no doubt the score on TID could easily be made much better, but I'm a bit afeared of overtraining. The TID database is not big enough or varied enough for me to be sure that I'm stressing the metric enough.


Another aside about SSIM. The usual presentation says compute the two terms :


V = (2*sigma1*sigma2 + C2)/(sigma1_sq + sigma2_sq + C2);
C = (sigma12 + C3)/(sigma1*sigma2 + C3);

for variance and correlation dot products. We're going to wind up combining these multiplicatively. But then notice that (with the right choice of C3=C2/2)

V*C = (2*sigma12 + C2/2)/(sigma1_sq + sigma2_sq + C2);

so we can avoid computing some terms.

But this is a trick. They've actually changed the computation, because they've changed the pooling. The original SSIM has three terms : mean, variance, and correlation dot products. They are each computed on a local window, so you have the issue of how you combine them into a single score. Do you pool each one spatially and then cross-multiply? Or do you cross-multiply and then pool spatially ?


SSIM = Mean{M} * Mean{V} * Mean{C}

or

SSIM = Mean{M*V*C}

which give quite different values. The "efficient" V*C SSIM winds up using :

SSIM = Mean{M} * Mean{V*C}

which is slightly worse than doing separate means and multiplying at the end (which is what MS-SSIM does).
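
To make the three poolings concrete (assuming you've already computed the per-window M, V, C terms into vectors) :


#include <numeric>
#include <vector>

static double Mean(const std::vector<double> & v)
{
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

// pool each term over the image, then multiply (what MS-SSIM does) :
static double SsimPoolSeparate(const std::vector<double> & M,
                               const std::vector<double> & V,
                               const std::vector<double> & C)
{
    return Mean(M) * Mean(V) * Mean(C);
}

// multiply per window, then pool :
static double SsimPoolJoint(const std::vector<double> & M,
                            const std::vector<double> & V,
                            const std::vector<double> & C)
{
    std::vector<double> prod(M.size());
    for (size_t i = 0; i < M.size(); ++i) prod[i] = M[i] * V[i] * C[i];
    return Mean(prod);
}

// what the fused V*C form ends up computing :
static double SsimPoolEfficient(const std::vector<double> & M,
                                const std::vector<double> & V,
                                const std::vector<double> & C)
{
    std::vector<double> vc(M.size());
    for (size_t i = 0; i < M.size(); ++i) vc[i] = V[i] * C[i];
    return Mean(M) * Mean(vc);
}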

12/07/2010

12-07-10 - Patents

Ignacio wrote about software patents a while ago and it's got me thinking.

First of all I applaud the general idea that every individual is responsible for acting morally, regardless of the environment they live in. I see far too many people who use the broken system as an excuse to behave badly themselves. eg. "lots of people are polluting, so it doesn't matter how much I hurt the environment", "the tax system is broken, so it doesn't matter how much I cheat", "the patent system sucks, so I have to be a part of it", etc. I believe this attitude is a facile rationalization for selfish behavior.

In any case, let's get back to the specific topic of patents.

In my youth I used to be the lone pro-patent voice in a sea of anti-patent peers. Obviously I was against the reality of the patent system, which consists of absurd over-general patents on things that aren't actually innovations, and the fact that expert review of technical issues before a court is an absurd farce in which money usually wins. But I was pro the basic idea of patents, partly for purely selfish reasons, because I had these technical inventions I had made and was hoping I could somehow get rich from them, as I saw others do. As a young individual inventor, I believed that getting a patent was my only way to get a fair price for my work.

Now I have a different view for a few reasons :

1. Policy should be made based on the reality of an issue, not your theoretical ideal.

The reality is that patents (and particularly software patents) are ridiculously broken. The court system does not have the means to tell when patents are reasonable or not, and it is unrealistic to think that that can be fixed.

2. The purpose of laws should be to ensure the greatest good for society.

Even if you think software patents are good for the lonely independent developer, that in itself is not reason enough to have them. You have to consider the net benefit to society. I believe that the world would be better off without software patents, but this is a little tricky.

What are the pro-patent arguments ?

One is that they encourage research funding, that companies wouldn't spend money on major research if they didn't have the patent system to ensure a monopoly for their invention.

I find this argument generally absurd. Do you think that IBM or Microsoft really wouldn't fund research that they believe will improve their business if they couldn't patent the result? Companies will fund research any time it is likely profitable; a long-term monopoly from a patent doesn't really change the research equation from "not profitable" to "profitable", it changes it to "ridiculously profitable".

Furthermore, in reality, most of the major tech companies don't actually use their patents to keep monopolies on technology, rather they engage in massive cross-license agreements to get open access to technologies. Patents wind up being a huge friction and cost for these companies, and you have to maintain a big war chest of your own patents to ensure that you can participate in the cross-licensing. The end result of this is yet another oligarchy. The big tech companies form cross-license agreements, and smaller players are frozen out. This is a huge friction to free market innovation.

Now, I believe one legitimate point is that in a world without patents, companies might be more motivated to keep their innovations secret. One pro-patent argument is that it allows companies to patent their work and then publish without fear of losing it. Of course, that is a bit of a false promise, because the value to the public of getting a publication which describes a patented algorithm is dubious. (yes obviously there is some value because you get to read about it, but you don't actually get to use it!).

Finally it is absolutely offensive to me that researchers who receive public funding are patenting their works, or that university professors patent their works. If you receive any public funding, your work should be in the public domain (and it should not be published in the pay-for-access journals of the ACM or IEEE).

But this is an issue that goes beyond software. Are any patents good for society? Certainly medical patents have encouraged lots of expensive research in recent years, and this is often used as a pro-patent argument. But lots of those new expensive drugs have been shown to be no better than their cheap predecessors. Certainly patents provide a massive incentive for drug companies to push the new expensive monopolized product on you, which is a bad effect for society. Would you actually have significantly less useful research if there were no patents? Well, 30-40% of medical research is publicly funded, so that wouldn't go away, and without patents that publicly funded research would be much more efficient, because it could be open and not worry about infringement, and furthermore it would be more focused on research that provides tangible benefits as opposed to research that leads to profits. It's a complicated issue, but it's definitely not obvious that the existence of patents actually improves the net quality of medical treatment.

In summary, I believe that patents do accomplish some good, but you have to weigh that against the gains you would get if you had no patents. I believe the good from no patents is greater than the good from patents.


In any case, hoping for patents to go away is probably a pipe dream.

Smaller goals are these :

1. I find it absolutely sick that public universities are patenting things. That needs to stop. Professors/researchers need to take the lead by refusing to patent their inventions.

Any corporation that receives the bullshit "R&D" tax break should be required to make all their patents public domain. Anyone that gets a DoD or NSF research grant should be required to make the results of their work public domain. How can you justify taking public money for R&D and then locking out the public from using the results?

2. Some rich charity dude should create the "public patent foundation" whose goal is to supports the freedom of ideas, and has the big money to fight bullshit patents in court. The PPF could also actively work to publish prior art and even in some cases to apply for patents, which would then be officially released into the public domain.

A more extreme idea would be to make the PPF "viral" like the GPL - build up a big war chest of patents, and then release them all for free use - but only to other people who release their own patents under the same license. All the PPF has to do is get a few important patents and it can force the opening of them all.

(deja vu , I just realized I wrote this before )

12/06/2010

12-06-10 - More Perceptual Notes

To fit an objective synthetic measure to surveyed human MOS scores, the VQEG uses the form :

A1 + A2 * x + A3 * logistic( A4 * x + A5 )

I find much better fits from the simpler form :

B1 + B2 * x ^ B3

Also, as I've mentioned previously, I think the acos of SSIM is a much more linear measure of quality (SSIM is like a Pearson correlation, which is like a dot product, so acos makes it like an angle). This is borne out by fitting to TID2008 - the acos of SSIM fits better to the human scores (though fancy fit functions can hide this - this is most obvious in a linear fit).

But what's more, the VQEG form does not preserve the value of "no distortion", which I think is important. One way to do that would be to inject a bunch of "zero distortion" data points into your training set, but a better way is to use a fit form that ensures it explicitly. In particular, if you remap the measured MOS and your objective score such that 0 = perfect and larger = more distorted, then you can use a fit form like :


C1 * x + C2 * x ^ C3

(C3 >= 0) , so that 0 maps to 0 absolutely, and you still get very good fits (in fact, better than the "B" form with arbitrary intercept). (note that polynomial fit (ax+bx^2+cx^3) is very bad, this is way better).
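
A sketch of fitting that form with online gradient descent on the squared error (the learning rate and the clamp on C3 are arbitrary choices, and a real fit should also weight samples, e.g. by 1/variance of the MOS scores) :


#include <cmath>
#include <vector>

// fit form : C1*x + C2*x^C3  with C3 >= 0 , so x = 0 maps to 0 exactly
struct Fit { double C1 = 1, C2 = 0, C3 = 1; };

static double Eval(const Fit & f, double x)
{
    return f.C1 * x + f.C2 * std::pow(x, f.C3);
}

// one pass of online (per-sample) gradient descent on 0.5 * error^2
static void FitPass(Fit & f, const std::vector<double> & x,
                    const std::vector<double> & y, double lr)
{
    for (size_t i = 0; i < x.size(); ++i)
    {
        double e  = Eval(f, x[i]) - y[i];      // signed error on this sample
        double xp = std::pow(x[i], f.C3);
        f.C1 -= lr * e * x[i];
        f.C2 -= lr * e * xp;
        if (x[i] > 0)
            f.C3 -= lr * e * f.C2 * xp * std::log(x[i]);
        if (f.C3 < 0) f.C3 = 0;                // keep the exponent non-negative
    }
}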

A lot of people are using Kendall or Spearman rank correlation coefficients to measure perceptual metrics (per VQEG recommendation), but I think that's a mistake. The reason is that ranking is not what we really care about. What we want is a synthetic measure which correctly identifies "this is a big difference" vs. "this is a small difference". If it gets the rank of similar differences a bit wrong, that doesn't really matter, but it does show up as a big difference in Kendall/Spearman. In particular, the *value* difference of scores is what we care about. eg. if it should have been a 6 and you guess 6.1 that's no big deal, but if you guess it's a 9 that's very bad. eg. if your set of MOS scores to match is { 3, 5.9, 6, 6.1, 9 } , then getting the middle 3 in the wrong order is near irrelevant, but the Kendall & Spearman have no accounting for that, they are just about rank order.

The advantage of rank scores of course is that you don't have to do the functional fitting stuff above, which does add an extra bias, because some objective scores might work very well with that particular functional fit, while others might not.

12/05/2010

12-05-10 - Spending and Debt

There are a lot of weird things being said in the news and around the net about spending and debt. Let's have a look at some.

One particularly absurd one is that

"Money saved is helping the economy just as much or more than money spent"

The basic argument here is that money that you put in the bank doesn't sit in the bank, it gets lent out to businesses or home buyers or whatever. The point of their argument is that stimuli which cause people or banks to hold less cash in savings aren't actually stimuli, because they just take money from some useful purpose (being loaned out). This argument is massively flawed on two points. One is that the choice is not really an either/or ; it's not "lend money to businesses" OR "consumers buy things with it" - when you buy things as a consumer, the money moves from one bank account to another; it's always in a bank, and therefore can also be used for loans (BTW this does suggest that money held in cash is a big disadvantage to the economy - electronic currency can be used twice while cash money sits and does nothing; and all the people "investing" in gold these days must be very bad for the economy, because the gold is just sitting there).

Furthermore, the contention that banks loan out money in proportion to their reserves is not quite right. In fact, when there are good prospects for investment, they can leverage up many fold. The idea that businesses wouldn't be able to get loans because banks don't have enough cash is not quite right, because the Fed basically gives them free cash to invest whenever they want. In our economy, banks basically leverage up to invest in all the possible money-returning investments - their balance sheet is proportional to the demand for cash, not the supply. The book value of money in circulation can be many times the actual supply of money, and that ratio goes up and down based on the amount of good investments available.

"Deficit spending doesn't actually help the economy"

This one is being spread by the Ron Paul anti-deficit types. Their basic argument is that government deficit spending is not a stimulus, because it is just taking money from somewhere else, either by raising taxes or issuing treasuries. They contend that if the money was left where it was it would be doing just as much good, because they claim money in savings accounts is "working" (they love to use silly quotes like "money never sleeps"). From the above argument we can see the hole in this - what government spending actually does is create more demand for money, which may be a useful thing in times of low demand. There is the issue of how much utility you get from the government's choice of spending, but it's certainly greater than zero, so government spending is better than no spending. Also, the people who argue this point seem to be intentionally obtuse about the benefit of foreign capital. Foreign capital is clearly a stimulus in the short term; it's more subtle what the long term effect is, but they argue that it's not even beneficial in the short term. If your economy was actually healthy and provided good investment opportunities, foreign capital would be coming in that way.

"Government dollars from tax-and-spend do not help the economy"

Now that we've gone over the first two, we can address this slightly more subtle point. What actually happens when the government taxes some money and then spends it? The money is not disappearing from the private economy, it is being spent on something which has some utility, and then it goes back into private circulation. The only possible disadvantage is that during that cycle, the money can't be used for other things (because transactions are not instant but rather take time), and if it was left in private hands, during that time it could be spent on something with higher utility. I believe that the measure of "best" we should use is maximizing the total utility to the citizens. So the question comes down to - would it be spent if it was left in private hands? If it would have just been put in savings, then we should see that the government spending is better. Even if private people would have spent it, I believe the issue is whether their utility would be higher than the governments. Now, in theory if people are rational actors, then being able to make their own choice about how their money is spent should result in higher utility (this is the basic theory behind free market capitalism), but that isn't always the case. For one thing it can happen simply because individual spenders don't have the same choices as a collective force like government; eg. government can buy health care with negotiated prices and force prescription drug makers to void patents (not that the US does this), things that individuals could not.

I started thinking about all this because I was thinking about Christmas spending. It's often said that Christmas spending is a big stimulus to the economy. But is it really? If there were no Christmas, people would still buy lots of things they need, but they would make better choices about it. Christmas forces you to spend a lot more than you might otherwise, and also to buy lots of things that the recipient doesn't really want - these are near-zero utility purchases (not quite zero, because satisfying the social obligation of giving a gift is a form of utility).

It seems obvious to me that instead of spending a thousand dollars on Christmas presents that nobody really wants (low utility), the world would be better off if we took that Christmas money and gave it to the government to spend on roads, transit, schools, etc. - things that will actually help us all.

12-05-10 - Health Care and Deficits

I think Republicans have reached a new peak of hypocrisy and inconsistency.

"Deficits are bad" , "Deficits caused the crisis"

First of all this is just ridiculously not true. Deficits may be a problem 10+ years down the road when the interest becomes excessive, but deficit spending in no way hurts an economy (in fact it helps) and the deficits have absolutely nothing to do with the current weakness in our economy.

And furthermore, where were you during the GW Bush years when taxes were cut and spending was raised? Oh, that's right, you were writing policy papers that supported Cheney's famous "deficits don't matter" diatribe.

"We need to control health care spending"

Umm , where were you when Obama was trying to control health care spending last term? Oh, that's right, you were doing photo-ops with seniors in Florida guaranteeing them that their benefits would not be cut. WTF.

Oh, and thanks for the prescription drug benefit in which you very specifically ensured pharma companies would get to keep making obscene profits and the government would not be allowed to require cheaper drugs.

"We need to do something drastic to control the deficit" ; "medicare and social security will bankrupt America"

This is not just a Republican lie, but also a "centrist Democrat" lie. It's simply not true. First of all, the main thing we need to worry about is the short term; most of the big costs that have contributed to the recent deficit are temporary, but the big one we can control is to rescind the GWB tax cuts.

One issue is that a lot of the liars like to compare the pre-recession deficit to the post-recession deficit (which is of course absurd, because tax receipts change massively). For example the WSJ did this to claim that the "bush tax cut is not the problem" because before the recession the deficit wasn't so bad. The NYT also did this to point out that "rescinding the bush tax cut is not enough" to fix the deficit, by using post-recession deficit numbers. Umm, are you guys really that fucking stupid or are you intentionally lying? Of course the dominant factor is whether the economy is in boom or bust; it is foolish to try to completely balance your budget during a recession. Also clearly you can get away with cutting taxes during a boom, but that doesn't mean you should.

But they don't want to do that, so they cast the Medicare/Social Security problem as "intractable". That's nonsense. The way they usually distort it is to say that the "SS fund will be bankrupt at some point". That's absurd; the SS fund's balance has nothing to do with SS's viability - you simply have to look at what the total cost of it would be. One easy way to fix SS funding permanently is to eliminate the cap on SS tax. Medicare involves an even bigger lie: they claim that the aging population makes it an inherently rising cost that will crush the youth. This is also not true (and particularly not true as long as we allow immigration). In fact, the thing that will crush us is super-inflationary growth in health care costs.

See for example :

Why Health Reform Must Counter the Rising Costs of Health Insurance Premiums - The Commonwealth Fund
packet_HealthCareWydenBennettused062607.pdf - Powered by Google Docs
Insurance Premiums Still Rising Faster Than Inflation and Wages - NYTimes.com
Health Care Costs - Baby Boomers Not to Blame for Rising Health Care Costs
Fox's Cavuto so wedded to tax cut mythology that he wrongly corrects his guest Media Matters for America
Employer Healthcare Costs Rising Faster Than Inflation - Health News - redOrbit
Critics Still Wrong on What's Driving Deficits in Coming Years - Center on Budget and Policy Priorities
CBO Extending Tax Cuts Could Have Huge Debt Impact - TheFiscalTimes.com

"states' rights"

I'm pretty sure nobody actually believes this, it's just a way to hide what they really want. Basically any time the federal government does something they don't like they claim to like "states' rights", eg. national health care, environmental regulations, etc. But then when individual states do things they don't want, suddenly the federal government needs to step in, eg. euthanasia, gay marriage, California's higher environmental standards, etc. It's just ridiculously transparent bullshit.

"constitutional originalism"

See above (states' rights). For one thing the whole idea of this is pretty absurd (that we should be bound by the exact intended meaning of the founders), but moreover it's just bullshit that they abuse to push certain policies. Reading the original meaning is so open to interpretation that it's like reading tarot cards - you can find whatever you want in it.

12/03/2010

12-03-10 - WikiLeaks

I greatly admire what WikiLeaks is doing. Yes, some of what they've done is not perfect; maybe they endangered some undercover operatives (a favored activity of our own government, BTW).

It's foolish and reductive to dismiss a major act of good just because it's not perfect. It's ridiculous to hold them to a standard of moral perfection.

I believe that our government is corrupt and immoral, and furthermore that the system is broken such that attempts to change it by legal means are hopeless. Any brave, moral, righteous individual should work to improve things by any means necessary.

The Obama administration had its chance to back out many of the evils of the last administration, and they have chosen not to. Let's do a quick review of what's wrong :

Independent reporters are not allowed to cover our wars.

Our government creates propaganda/policy reports and injects them into the press as if they were independent, using hired "experts" and fake reporters.

Our government blacks out documents that they are required by law to make public, even though they have no real national security risk.

Our executive branch refuses to comply with congressional subpoenas, in violation of the law.

We hold non-combatants prisoner and refuse to give them civilian trials.

We continue to extradite and torture prisoners.

The NSA continues to snoop on our communications.

The FBI continues to investigate peaceful antiwar activist groups.

Our government doesn't allow investigation into the several cases where we know that we tortured innocent civilians, or into the fabrication of the cause for war in the original Iraq invasion.

.. etc, etc. Our government has demonstrated over and over a lack of respect for transparency, the need to disclose their function to the scrutiny of the public, and international law.

You can no longer trust our government to tell us whether information really is a national security risk. They've cried that wolf too many times. You simply cannot trust them on that any more; they use it to hide their own illegal activity or simply politically embarrassing activity, therefore any information is fair game.

Large companies (like Amazon, PayPal, Swiss Bank, Visa, etc) are cooperating with the government's attempts to shut down WL. WL is not charged with any crime, so these companies are under no obligation to cooperate with the government campaign for silence.

The mainstream media (like the NYT) is cooperating with the attempt to smear Mr. Assange as a weirdo, sexual criminal, subversive, whatever. He may be those things, but his personal character is not the issue. We should be talking about the content of the leaks and how our government continues to fail to stand up to the truth and account for its actions.

I strongly believe that the world needs a new internet. A free, open, shadow internet, that is entirely peer-to-peer and encrypted, so it cannot be blocked in Iran or China, so that governments cannot control what's posted on it.

I think that people using their computer skills to try to change the world for the better (whether you agree with them or not) is admirable, and to do so at serious risk to themselves is heroic.

I assume that WikiLeaks will be shut down at some point, but I hope that an even larger free-information movement rises in its wake.

12-03-10 - Politics Misc Rambling

One of the tackiest, most retarded things you can do in an argument is to use pejorative descriptors of the other person's argument, such as "naive" or "indefensible" or "uninformed" or whatever. If you disagree with something, you should argue based on the merits of the issue. Perhaps not surprisingly, this sort of lazy nastiness is rampant, even in such respected places as the WSJ, NYT, and me.

One of the great cons is that many reasonable people have fallen into this trap that anyone who doesn't go along with mainstream thinking is a crackpot. They are simply laughed at or made fun of instead of having their actual issue addressed. It's a way of shutting out any viewpoints outside of the mainstream norm, it happens all the time in the news media, and also in casual conversation.

I think the debate about "left" vs "right" is artificial and obscures the real issue. The real issue is corrupt and retarded government (and populace). Turning it into a theoretical ideological question about whether laissez faire or intervention is a better policy is a false elevation of the issue. It elevates it to a difficult-to-solve conceptual issue, when in reality the problem is just massive corruption.

Let me clarify that because I believe it's an important point. Yes there are a lot of difficult theoretical issues in governance, questions of how much social net a government should provide, when and how much we should have protectionism, etc.. These are difficult and interesting questions, but they are *irrelevant*. Our actual government is a sewer of mismanagement, intentional lies and manipulation. We could massively improve things without ever touching those difficult theoretical issues.

It's sort of like optimization issues in video game code. Often I would find myself in meetings where people were debating whether they should expose a SIMD vector type throughout the code, or whether to use a kdtree or a bsp tree, or whatever. You know what? It's irrelevant. You're calling SuperSlowFunction() a thousand times per frame that you don't need to be. Go fix your huge obvious easy fuckup. By the time you've fixed all the easy obvious fuckups, the game works fine and it's time to ship. The difficult theoretical issue that we disagreed on is like an improvement that's way off in tweaky epsilon space that we will never reach in the real world. It's like all the retarded fat people who want to talk about whether fructose or sucrose is more harmful to their glycogen/lipid pathway - umm, hello, put down the fucking beer and the bacon-stuffed twinkie, the scientific details are not why you are fat.

Probably the biggest example of this that's in my mind these days is the issue of capitalism and regulation and free markets in America. This is irrelevant; we don't have anything like free markets, and we never will. What we have are lots of government-enforced monopolies, illegal collusion between corporations, sweetheart deals between governments and certain corporations. See for example : cable, health care, defense, mining/oil, finance, etc.

Any reasonable liberal and any reasonable free-marketer should be in agreement that these corrupt alliances between corporations and governments are bad for society. The intellectual debate is irrelevant, and it's only used as an excuse to convince us that these issues are "hard" or that "people don't agree on the solution".

There's an awful lot of insanity being bandied about these days. It's important to stop every once in a while and question basic things.

Why are taxes supposed to be bad for an economy? They don't actually take money out of circulation. (if you're using the taxes to pay off debts, then they do, so in fact all the "free market conservatives" who think belt-tightening and paying off our deficit will somehow help the economy are seriously smoking crack; paying off debt is certainly bad in the short term, and should generally be done only when necessary, but that's a complex issue so let's ignore it). Anyway, ignore the paying off debt issue and assume that taxes will just be used for government spending. So the money is getting circulated and put back into the economy. We know that circulating money in general is good for economies. So taking money that a rich person would've just put into the bank and instead taxing and spending it should in fact help growth. The only argument I can see against this logic is that government spending may not be as efficient as private spending; that is, the utility gain per dollar spent may be higher when people are allowed to keep their money and spend it themselves instead of having it taxed and spent by the government. (please don't tell me anything about government "bureaucracy" or "waste" ; the money doesn't disappear somehow, and just because a tiny fraction of it winds up in federal employee salaries doesn't prevent it from getting back into circulation). (more on this in a future post)

Why didn't places like Ireland just let their banks fail? They had masses of toxic debt, much more than the US as a percentage of GDP. The only convincing reason is that the international community would punish them by not investing in them in the future. Which seems to argue that capital depends on the massive implicit government underwriting of its risk!

During the internet boom, investment banks were taking completely bullshit dot-coms public, while their own analysts were offering "buy" recommendations on those stocks, and their own portfolio managers were putting those stocks in funds. The bubble and bust was not caused by a natural "market cycle", it was caused by massive corruption, which was possible due to deregulation, which let various aspects of financial services work together in collusion.

The US government is not allowed to negotiate fair prices for prescription drugs, unlike most other countries in the world. We're also one of very few countries where pharmaceutical companies are allowed to charge any amount they want for patented drugs (many don't allow patents on necessary drugs, or require some reasonable low price). And the medical research supporting drugs is funded by the pharma companies making those drugs.

The idea that we can have a successful "service" or "knowledge" based economy. That's never been done in the history of the world. We don't have it now, and the contention that we somehow have an edge in this is an egotistical delusion; we might have one right now, but it is rapidly declining.

The idea that inventions can save the economy. This is a particular canard which the NYT loves, the idea that we need some "breakthrough technology" to save our economy. It's sort of like planning to win the lottery to save your personal finances. Sure it's nice when you invent the steam engine, but it's not something you can plan your governance around.

The idea that the "internet boom" was a great time for the economy. It was pretty good overall for tech companies (I've seen figures suggesting that from pre-boom to post-crash the average tech stock went up around 10%, which is not bad), but during that time median income didn't budge, indicating that the net result was an increase in GDP accompanied by an increase in income inequality - eg. the rich got richer.

You might be impressed with our FBI's ability to infiltrate and stop domestic terrorists before they can strike. But in almost all (maybe all?) of those cases, those cells were actually created by the FBI, by planting instigators and providing the weapons and ideas for the strike.

Our government doesn't allow journalists independent access to Iraq or Afghanistan. We receive almost zero unfettered coverage of what's really going on in our war zones. Our government is right now spying on peaceful domestic protest groups. Our government is opaque in ways it never has been before.

The greatest American period of prosperity was from 1940-1970, during which we had a huge debt, high taxes, lots of government spending, and high personal savings rates. None of the things that are supposed to correlate with prosperity actually do.

12/02/2010

12-02-10 - Perceptual Metric Rambles of the Day

There's an interesting way in which saliency (visual attention) fights masking (reduction of sensitivity due to local variance). People who weight image metrics by saliency give *more* importance to high contrast areas like edges. People who weight image metrics by masking give *less* importance to high contrast areas. Naively it seems like those would just cancel out.

It's subtle. For one thing, a smooth area is only not "salient" if it's smooth in both the original and the distorted image. If you do something like creating block artifacts, then those block edges are salient (high visual attention factor). Also, they work at sort of different scales. Saliency works on intermediate frequencies - the brain specifically cares about detail at a certain scale that picks out edges and shapes, but not speckle/texture and not gross offsets. Contrast or variance masking applies mainly to the additional frequencies in an area that has a strong medium-scale signal.

MS-SSIM actually has a lot of tweaked-out perceptual hackiness. There's a lot of vision modelling implicit in it. They use a Gaussian window with a tweaked-out size - this is a model of the spatial filter of the eye. The different scales have different importance contributions - this is a model of the CSF (contrast sensitivity function), i.e. variable sensitivity at different scales. The whole method of doing a Pearson-style correlation means that they are doing variance masking.

The issue of feeding back a perceptual delta into a non-perceptual compressor is an interesting one.

Say you have a compressor which does R-D optimization, but its D is just MSE. You can pretty easily extend that to do weighted R-D, with a weight either per-pixel or per-block. Initially all the weights are the same (1.0). How do you adjust the weights to optimize an external perceptual metric?

Assume your perceptual metric returns not just a number but an error map, either per pixel or per block.

First of all, what you should *not* do is to simply find the parts with large perceptual error and increase their weight. That would indeed cause the perceptual error at those spots to reduce - but we don't know if that is actually a good thing in an R-D sense. What that would do is even out the perceptual error, and evening it out is not necessarily the best R-D result. It is, however, an easy way to get the minimax perceptual error!


Algorithm Minimax Perceptual Error :

1. Initial W_i = 1.0
2. Run weighted R-D non-perceptual compressor (with D = weighted MSE).
3. Find perceptual error map P_i
4. W_i += lambda * P_i
    (or maybe W_i *= e^(lambda * P_i))
5. goto 2

what about minimizing the total perceptual error?

What you need to do is transform the R/D(MSE) that the compressor uses into R/D(percep). To do that, you need to make W_i ~= D(percep)/D(mse).

If the MSE per block or pixel is M_i, the first idea is just to set W_i = ( P_i / M_i ) . This works well if P and M have a near-linear relationship *locally* (note they don't need to be near-linear *globally* , and the local linear relationship can vary from one location to another).

However, the issue is that P_i(R) may not be near-linear in M_i(R) (as functions of rate) over their entire range. Obviously for small enough steps, they are always linear.

So, you could run your compressor *twice* to create two data points, so you have M1,M2 and P1,P2. Then what you want is :


P ~= P1 + ((P2 - P1)/(M2 - M1)) * (M - M1)

slope := (P2 - P1)/(M2 - M1)

P ~= slope * M + (P1 - slope * M1)

C := (P1 - slope * M1)

P ~= slope * M + C

I want J = R + lambda * P

use

J = R + lambda * (slope * M + C)
J = R + lambda * slope * M + lambda * C

so

W_i = slope_i

J = R + lambda * W * M

with the extra lambda * C term ; C is just a constant per block, so it can be ignored for R-D optimization on that block. If you care about the overall J of the image, then you need the sum of all C's for that, which is just :

C_tot = Sum_i (P1_i - slope_i * M1_i)

it's obviously just a bias to turn our fudged MSE-multiplied-by-slope into the proper scale of the perceptual error.

I think this is pretty strong, however it's obviously iterative, and you must keep the steps of the iteration small - this is only valid if you don't make very big changes in each step of the iteration.

There's also a practical difficulty that it can be hard to actually generate different M1 and M2 on some blocks. You want to generate both M1 and M2 very near the settings that you will be using for the final compress, because large changes could take you to vastly different perceptual areas. But, when you make small changes to R, some parts of the image may change, but other parts might not actually change at all. This gives you M1=M2 in some spots which is a divide by zero, which is annoying.
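
To make that concrete, here's a minimal sketch (in C++; the function name, the fallback choices, and the clamping constant are mine) of turning two trial compressions into per-block weights, with a guard for the annoying M1=M2 case :

// Sketch : compute per-block weights W_i = slope_i = (P2-P1)/(M2-M1)
// from two trial compressions at nearby rates.
// M = per-block MSE, P = per-block perceptual error; arrays are parallel.
#include <vector>
#include <cmath>

void ComputePerceptualWeights(
    const std::vector<double> & M1, const std::vector<double> & P1,
    const std::vector<double> & M2, const std::vector<double> & P2,
    std::vector<double> & W )
{
    const double tiny = 1e-12; // guard value, chosen arbitrarily
    W.resize(M1.size());
    for (size_t i = 0; i < M1.size(); i++)
    {
        double dM = M2[i] - M1[i];
        if ( fabs(dM) < tiny )
        {
            // block didn't change between the two trials (the divide by zero case) ;
            // fall back to the simple local ratio W = P/M, or keep the previous weight
            W[i] = ( M1[i] > tiny ) ? ( P1[i] / M1[i] ) : 1.0;
        }
        else
        {
            W[i] = ( P2[i] - P1[i] ) / dM;
        }
        if ( W[i] <= 0.0 ) W[i] = tiny; // weights must stay positive for the R-D step
    }
}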

In any case I think something like this is an interesting generic way to adapt any old compressor to be perceptual.

ADDENDUM :

Note that this kind of "perceptual importance map" generation cannot help your coder make internal decisions based on perceptual optimization - it can only affect bit allocation. That is, it can help a coder with variable bit allocation (either through truncation or variable quantization) to optimize the bit allocation.

In particular, a coder like H264 has lots of flexibility; you can choose a macroblock mode, an inter block can choose a movec, an intra block can choose a prediction mode, you can do variable quantizers, and you can truncate coefficients. This kind of iterative approach will not help the mode decisions to pick modes which have less perceptual error, but it will help to change the bit allocation to give more bits to blocks where it will help the perceptual error the most.

12-02-10 - Orphaned Tech Ideas

If you have a really good idea, you have to just go for it right away. Sometimes I have an idea, but I'm busy with something else, so I just write it down, and then try to come back to it later, but when you come back the excitement is gone. You have to pounce on that moment of inspiration.

That said, there are many things I'd like to do right now but probably have to just set aside and never come back to :

DXTC for lightmaps (and other non-photographic data). In particular there should be a way to preserve smooth data with much higher precision.

DXTC optimized for perceptual metrics. Should be easy, I'll probably do this.

cblib::hash_table entry currently always has {hash, key, data} , should change that to a functor with calls { hash(), key(), data() } so that user can choose to store them or not. (in many cases you don't need separate key and data because the key is the data).

csv util suite. csv -> google chart, csv lsqr fit, csv eval, etc. basically matlab in command line utils that work on csv's. Would let me do processing from batch files and make html logs of compression runs and such.

Texture synthesis from examples. Fast enough for realtime so it can be used with SVT. Should be progressive. I believe this is the right way to do large world texturing. Texture synthesis from procedural/perlin noise stuff is too hard for artists to use, and doing fancy synthesis offline and then storing it ala Id just strikes me as profoundly silly.

Really good JPEG decoder. Maximum-likelihood decompression. Nosratinia and POCS. You can basically remove all blocking/ringing artifacts. This seems to me like a very useful and important piece of software and I'm surprised it doesn't exist.

Luma aided chroma upsample. Could be part of "Really good JPEG decoder" ; also useful for improving video decompression quality.

Finish lossy image comparison test and release the exe which autogens nice HTML reports.

12/01/2010

12-01-10 - Pratt

Pratt Arts Center in Seattle provides classes and equipment in various "fabrication" or "industrial" arts. They do ceramic, letterpress, glass blowing, cold glass working, forging, casting, stone carving, etc.

We did a weekend glass blowing class a while ago. It was a very powerful experience. You walk in and there's a whooshing sound like a jet engine from the air rushing through the furnaces. The heat and glow is immense. I'm normally just dying of boredom when teachers go over safety rules and shit like that, but the obvious seriousness of the hot glass made me rapt.

The really cool moment is when you've got a blob of molten glass on your rod and you're trying to blow it and work it and stretch it - I found myself going into this trance. It's a beautiful kind of trance that I used to get from writing code, where you are concentrating so fully on one little thing that the rest of the world completely disappears, you get tunnel vision, and you don't know how much time is passing. We just made some goofy little hollow spheres, but after five minutes I was exhausted from the concentration.

It's a pretty unique place to be able to go and play with that stuff. There's at least an annual open house where anyone can stop by and see demos of all the stuff going on, it's worth doing.

Anyway, while I was there I was thinking about the weirdness of economies.

Many of the arts/crafts done at Pratt are fabrication skills that 50 years ago were done in factories in America to make commercial goods. Now that is all gone. You can still get most of those things, like hand-forged wrought iron or hand-blown glass, but it's now done by "artisans" instead of laborers, and it's some frufru thing that's only for the rich. Essentially it's now become much *more* expensive than it used to be, because we no longer have cheap skilled labor to do these things.

There's another weird thing. I can go to Cost Plus and buy a hand blown wine glass for $5-$10. Wine glasses are considered extremely difficult by glass "artists" ; it would cost $100-200 or something to get one made here; of course they don't even make them, because it's too difficult and you can't charge enough to make up for it (except for the morons who custom order wine glasses flecked with color or with unicorn bases or something). I imagine that the third-world laborers who are making the cheap blown shit that we buy are actually much more skilled at the craft of blowing glass - I mean, they must be able to just crank out perfect orbs over and over. For some reason in America we can't support local skilled craftsmen just making good stuff, but we can support people doing the crafts for "art", which basically involves intentionally doing it badly. Like you're trying to blow a nice big clear sphere, and you fuck up and get some random bits of color in it, and then you let it sag and deflate so it gets wrinkly and droops to one side, blammo now we can sell it. I dunno, I guess it makes sense, but the whole phenomenon of resurrecting dead trade skills as art is weird to me.

11/30/2010

11-30-10 - Tchebychev

A little utility, and example of how I use templates : (this is not meant to be efficient, it's for offline apps, not realtime use)

template < int n >
struct Tchebychev
{
    double operator () ( double x )
    {
        return 2.0 * x * Tchebychev < n-1>()(x) - Tchebychev < n-2>()(x);
    }
};

template < >
struct Tchebychev < 0 >
{
    double operator () ( double x )
    {
        return 1.0;
    }
};

template < >
struct Tchebychev < 1 >
{
    double operator () ( double x )
    {
        return x;
    }
};

double TchebychevN(int n,double x)
{
    switch(n) {
    case 0 : return Tchebychev<0>()(x); 
    case 1 : return Tchebychev<1>()(x); 
    case 2 : return Tchebychev<2>()(x); 
    case 3 : return Tchebychev<3>()(x); 
    case 4 : return Tchebychev<4>()(x); 
    case 5 : return Tchebychev<5>()(x); 
    case 6 : return Tchebychev<6>()(x); 
    case 7 : return Tchebychev<7>()(x); 
    case 8 : return Tchebychev<8>()(x); 
    case 9 : return Tchebychev<9>()(x); 
    NO_DEFAULT_CASE
    }
    return 0;
}

template < typename functor >
void FindTchebychevCoefficients(functor f,double lo = 0.0, double hi = 1.0, int N = (1<<20))
{
    double PI_over_N = PI/N;

    #define NUM_T   6

    double t[NUM_T] = { 0 };
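
    // sample f at the N Chebyshev nodes x_k = cos(pi*(k+0.5)/N), remapped from [-1,1] to [lo,hi],
    // and accumulate t[j] = the projection of f onto T_j  (note cos(j*pk) = T_j(x_k))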
    
    for(int k=0;k < N;k++)
    {
        double pk = PI_over_N * (k + 0.5);
        double x_k = cos(pk);
        double farg = lo + (hi - lo) * 0.5 * (x_k+1.0);
        double fval = f( farg );
        for(int j=0;j < NUM_T;j++)
        {
            t[j] += fval * cos( pk * j );
        }
    }
    for(int j=0;j < NUM_T;j++)
    {
        t[j] *= 1.0 / N;
        if ( j != 0 )
            t[j] *= 2.0;

        //lprintfvar(t[j]);
        lprintf("t %d: %16.9f , %16.8g\n",j,t[j],t[j]);
    }
    
    double errSqr[NUM_T] = { 0 };
    double errMax[NUM_T] = { 0 };
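    // p accumulates the series truncated after term j, so errSqr[j] and errMax[j]
    // measure the fit using the first j+1 coefficients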
    for(int i=0;i < N;i++)
    {
        double xunit = (i+0.5)/N;
        double farg = lo + (hi - lo) * xunit;
        double xt = xunit*2.0 - 1.0;
        double y = f(farg);
        double p = 0.0;
        for(int j=0;j < NUM_T;j++)
        {
            p += t[j] * TchebychevN(j,xt);
            errSqr[j] += fsquare(y-p);
            errMax[j] = MAX( errMax[j] , fabs(y-p) );
        }
    }
    for(int j=0;j < NUM_T;j++)
    {
        lprintf("%d : errSqr = %g errMax = %g\n",j,errSqr[j]/N,errMax[j]);
    }
}

You can also very easily find inner products by using the Integrate<> template I posted earlier plus this simple product adaptor :


template < typename f1, typename f2 >
struct FunctorProduct
{
    f1 m_f1; f2 m_f2;

    FunctorProduct( f1 _f1 = f1() , f2 _f2 = f2() ) : m_f1(_f1), m_f2(_f2) { }

    double operator () ( double x )
    {
        // store the functors so that stateful ones (eg. the remappers below) work too
        return m_f1(x) * m_f2(x);
    }
};

(eg. you can trivially find Hermite or Legendre series this way by doing inner products).

Another handy helper is a range remapper :


template < typename functor >
struct RemapFmTo
{
    double m_fmLo; double m_fmHi;
    double m_toLo; double m_toHi;
            
    RemapFmTo( 
            double fmLo, double fmHi,
            double toLo, double toHi )
    : m_fmLo(fmLo), m_fmHi(fmHi), m_toLo(toLo), m_toHi(toHi)
    {
    }
    
    double operator () ( double x )
    {
        double t = (x - m_fmLo) / (m_fmHi - m_fmLo);
        double y = t * (m_toHi - m_toLo) + m_toLo;
        return functor()(y);
    }
};

template < typename functor >
struct RemapUnitTo
{
    double m_toLo; double m_toHi;
            
    RemapUnitTo( 
            double toLo, double toHi )
    :  m_toLo(toLo), m_toHi(toHi)
    {
    }
    
    double operator () ( double x )
    {
        double y = x * (m_toHi - m_toLo) + m_toLo;
        return functor()(y);
    }
};

Now we can trivially do what we did before to find the optimal approximation in a known range :


    struct SqrtOnePlusX
    {
        double operator () ( double x )
        {
            return sqrt( 1 + x );
        }
    };


    RemapUnitTo< SqrtOnePlusX > tf(-0.075f,0.075f);
    FindTchebychevCoefficients( tf );

and the output is :

t 0:      0.999647973 ,       0.99964797
t 1:      0.037519818 ,      0.037519818
t 2:     -0.000352182 ,   -0.00035218223
t 3:      0.000006612 ,   6.6121459e-006
t 4:     -0.000000155 ,  -1.5518238e-007
t 5:      0.000000004 ,   4.0790168e-009
0 : errSqr = 0.000469204 errMax = 0.0378787
1 : errSqr = 5.78832e-008 errMax = 0.000358952
2 : errSqr = 2.12381e-011 errMax = 6.77147e-006
3 : errSqr = 1.18518e-014 errMax = 1.59378e-007
4 : errSqr = 8.23726e-018 errMax = 4.19794e-009
5 : errSqr = 6.54835e-021 errMax = 1.19024e-010

I guess Remez minimax polynomials are better, but this is so easy and it gives you a good starting point; then you can just numerically optimize from there anyway.

ADDENDUM :

obviously the TchebychevN dispatcher sucks and is silly, but you don't actually use it in practice; you know which polynomials you want and you use something like :


double approx = 
    c0 * Tchebychev<0>()(x) + 
    c1 * Tchebychev<1>()(x) + 
    c2 * Tchebychev<2>()(x) + 
    c3 * Tchebychev<3>()(x);

which the compiler handles quite well.
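
For example, using the coefficients found above for sqrt(1+x) on [-0.075,0.075] (a sketch; remember the fit was done on x remapped to [-1,1], so you remap before evaluating, and you truncate wherever the error table above says you can) :

// approximate sqrt(1+x) for x in [-0.075,0.075] with the first three terms of the fit above
double ApproxSqrtOnePlusX(double x)
{
    double t  = (x - (-0.075)) / (0.075 - (-0.075)); // remap to [0,1]
    double xt = t*2.0 - 1.0;                         // remap to [-1,1]
    return 0.999647973 * Tchebychev<0>()(xt)
         + 0.037519818 * Tchebychev<1>()(xt)
         - 0.000352182 * Tchebychev<2>()(xt);
}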

I also did Legendre polynomials :


template < int n >
struct Legendre
{
    double operator () ( double x )
    {
        double L_n1 = Legendre < n-1 >()(x);
        double L_n2 = Legendre < n-2 >()(x);
        const double B = (n-1)/(double)n; // n-1/n
        return (1+B)*x*L_n1 - B*L_n2;
    }
};

template< > struct Legendre<0>
{
    double operator () ( double x ) { return 1.0; }
};

template< > struct Legendre<1>
{
    double operator () ( double x ) { return x; }
};


// but ryg's loop variants are mostly just better :

__forceinline double legendre_ryg(int n, double x)
{
  if (n == 0) return 1.0;
  if (n == 1) return x;

  double t = x, t1 = 1, t2;
  for (int i=1; i < n; i++) {
    t2 = t1; 
    t1 = t;
    const double B = (i)/(double)(i+1);
    t = (1+B)*x*t1 - B*t2;
  }
  return t;
}

__forceinline double chebyshev_ryg(int n, double x)
{
  if (n == 0) return 1.0;
  if (n == 1) return x;

  double t = x, t1 = 1, t2;
  for (int i=1; i < n; i++) {
    t2 = t1;
    t1 = t;
    t = 2.0 * x * t1 - t2;
  }
  return t;
}

// you can find Legendre coefficients like so :

    RemapUnitTo< Legendre<0> > ShiftedLegendre0(-1,1);

    double c0 = Integrate(0.0,1.0,steps,MakeFunctorProduct(functor(),ShiftedLegendre0));
    // (don't forget normalization)

// using new helper :

template < typename f1, typename f2 >
FunctorProduct< f1,f2 > MakeFunctorProduct( f1 _f1 , f2 _f2 ) { return FunctorProduct< f1,f2 >(_f1,_f2); }

11-30-10 - Reference

You can do weighted least squares using a normal least squares solver. To solve :

A x = b 

in the least squares sense (that is, minimize |Ax - b|^2) , with weights W , you are just minimizing the error :

E = Sum_i { W_i * ( Sum_j {A_ij * x_j} - b_i )^2 }

E = Sum_i { ( Sum_j { sqrt(W_i) * A_ij * x_j} - sqrt(W_i) * b_i )^2 }

so set

A'_ij = A_ij * sqrt(W_i)
b'_i = b_i * sqrt(W_i)

then

E = Sum_i { ( Sum_j { A'_ij * x_j } - b'_i )^2 }

so we can just do a normal lsqr solve for x using A' and b'.
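
For example, a minimal sketch of the row scaling (LeastSquaresSolve is a placeholder for whatever dense lsqr solver you already have; A is row-major) :

// weighted least squares via row scaling, then an ordinary lsqr solve
#include <vector>
#include <cmath>

std::vector<double> LeastSquaresSolve(const std::vector<double> & A,
                                      const std::vector<double> & b,
                                      int nrows, int ncols); // assumed to exist elsewhere

std::vector<double> WeightedLeastSquares(
    std::vector<double> A, std::vector<double> b,   // taken by value, scaled in place
    const std::vector<double> & W, int nrows, int ncols)
{
    for (int i = 0; i < nrows; i++)
    {
        double s = sqrt(W[i]);
        for (int j = 0; j < ncols; j++)
            A[i*ncols + j] *= s;    // A'_ij = A_ij * sqrt(W_i)
        b[i] *= s;                  // b'_i  = b_i  * sqrt(W_i)
    }
    return LeastSquaresSolve(A, b, nrows, ncols);
}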


If you have a Gaussian random event where the probability of x is P(x), the cost in nats (natural bits) to code an event is -ln(P(x)) :


P(x) = 1/(sigma * sqrt(2pi)) * e^( - 0.5 * ((x-c)/sigma)^2 )

H = -ln(P(x)) = ln(sigma * sqrt(2pi)) + 0.5 * ((x-c)/sigma)^2

H = const + ln(sigma) + 0.5 * ((x-c)/sigma)^2

And bits to code events is a good measure of how well events fit a model, which is useful for training a model to observed events. In particular if we ignore the constants and the ln term,

total H = Sum_ij { (1/sigma_i^2) * (x_j - c_i)^2 }

the bit cost measure is just L2 norm weighted by one over sigma^2 - which is intuitive, matching the prediction of very narrow Gaussians is much more important than matching the prediction of very wide ones.
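
A tiny sketch of that cost in code (in nats; the function name is mine) :

// nats to code x under a Gaussian model with mean c and std dev sigma
#include <cmath>

double GaussianCodeLenNats(double x, double c, double sigma)
{
    const double LN_SQRT_2PI = 0.9189385332046727; // ln(sqrt(2*pi)), the constant term
    double z = (x - c) / sigma;
    return LN_SQRT_2PI + log(sigma) + 0.5 * z * z;
}

// summing this over observed events (and dropping the constant and ln terms) gives
// exactly the sigma^2-weighted L2 norm above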


Now that I have autoprintf, I find this very handy :


    #define lprintfvar(var) lprintf(STRINGIZE(var) " : ",var,"\n");
