I understand how parity works on RAID 2-5. But can anyone explain how the double parity in RAID 6 works? I've done some reading on R-S code, and while I can understand most of the math, I can't seem to put the pieces together. 
To the best of my knowledge, it is just taking RAID 5 one step further in terms of data redundancy.
While in RAID 5 you have one additional parity block being generated, in RAID 6 you have two. The parity blocks are distributed across all disks instead of being stored on a separate parity disk.
For each stripe of data you have two parity blocks, which gives you two "layers" of redundancy. In RAID 5, if you lose a disk, you still have an effectively "complete" copy of all your data (thanks to single-parity striping). In RAID 6, if you lose two disks, you still have an effectively "complete" copy of all your data (double-parity striping). In either case, once you lose one more disk than that, your data is no longer complete, and you will be unable to fully recover it.
Once you replace the dead disks, the missing blocks on those disks can be recalculated from the blocks on the remaining disks (belonging to the same data stripe). So you can treat RAID 6 as a more robust buffer against disk failure. It is usually used only in arrays of 6 or more disks, since in such arrays the chance of a second disk failing while you are still rebuilding from the first failure is higher than in a 5-or-fewer-disk array; that, of course, is bad news for a RAID 5 setup.
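To make the single-parity case concrete first, here's a minimal sketch (my own toy example, not any real array's code) of how XOR parity lets you rebuild one lost block; real arrays do this over whole disk blocks, but the math is identical per byte:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks plus one parity block (RAID 5-style).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = xor_blocks([d0, d1, d2])

# Lose any one block (say d1): XOR the survivors with parity to rebuild it.
rebuilt_d1 = xor_blocks([d0, d2, p])
assert rebuilt_d1 == d1
```

RAID 6 keeps this P block and adds a second, independently computed Q block, which is where the Reed-Solomon math comes in.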
I don't know the fine details of the algorithm used; you'll have to google that.
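That said, here is a sketch of one common P+Q construction (the one described in the Linux kernel's RAID 6 documentation, as I understand it -- your controller may do something different). P is plain XOR; Q weights disk k by g^k in GF(2^8), which is what lets you solve for two unknowns at once:

```python
# Arithmetic in GF(2^8) with polynomial 0x11d and generator g = 2.
# Each byte position is independent, so one byte per disk suffices here.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow2(k):
    """g^k for generator g = 2."""
    r = 1
    for _ in range(k):
        r = gf_mul(r, 2)
    return r

def gf_inv(a):
    """Inverse via a^254 = a^-1 (the multiplicative group has order 255)."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def pq_parity(data):
    """P is plain XOR; Q weights disk k by g^k."""
    p = q = 0
    for k, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow2(k), d)
    return p, q

def recover_two(data, i, j, p, q):
    """Recover the data bytes at lost positions i < j from the survivors.
    (Takes the full list for brevity; only the surviving entries are read.)"""
    p_rest = q_rest = 0
    for k, d in enumerate(data):
        if k not in (i, j):
            p_rest ^= d
            q_rest ^= gf_mul(gf_pow2(k), d)
    a = p ^ p_rest                      # = d_i ^ d_j
    b = q ^ q_rest                      # = g^i*d_i ^ g^j*d_j
    dj = gf_mul(b ^ gf_mul(gf_pow2(i), a),
                gf_inv(gf_pow2(i) ^ gf_pow2(j)))
    di = a ^ dj
    return di, dj

data = [0x41, 0x42, 0x43, 0x44]         # one byte per "disk", for clarity
p, q = pq_parity(data)
di, dj = recover_two(data, 1, 3, p, q)  # disks 1 and 3 die
assert (di, dj) == (data[1], data[3])
```

The key point is that losing two data disks gives you two equations (one from P, one from Q) in two unknowns, and because the g^k coefficients are all distinct, the system is always solvable.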

----------
By the way, one thing that doesn't seem to have been mentioned yet is what is commonly called "silent data corruption" (a related but distinct problem is the RAID-5 write hole). A quick google search sufficiently illustrates the issue. Not to bring undue attention to it or to overstate its danger, since it occurs on single-disk setups as well, but it should be pointed out that RAID 5 is a data redundancy solution, not a protection against data corruption.
Some might feel that "silent data corruption" is too strong/scary a term for it, since it's not as if the RAID actively corrupts your data while pretending everything is fine; it just doesn't tell you when data has been corrupted by bad RAM or other causes (there's no way for it to know unless it is specifically designed to check). This is probably of little concern to most readers, so just take what you want out of these two paragraphs.
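A toy illustration of why (again my own example, not any real controller's behaviour): single parity can flag during a scrub that a stripe has become inconsistent, but it cannot say which of the blocks -- data or parity -- actually went bad.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [bytearray(b"AAAA"), bytearray(b"BBBB"), bytearray(b"CCCC")]
parity = bytearray(xor_blocks(stripe))

stripe[1][0] ^= 0x04       # a single bit silently rots on disk 1

# A scrub notices the mismatch: data XOR parity is no longer all zeros...
mismatch = any(xor_blocks(stripe + [parity]))
assert mismatch

# ...but with one parity block there's no way to tell which of the four
# blocks is wrong, so the usual "fix" is to rewrite the parity, which
# quietly preserves the corrupted data.
```

This is exactly the gap that checksumming filesystems (ZFS and friends) are meant to close.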
----------
I have a home server build coming up in another month or so; I already have a RAID-Z setup planned for it.

Just a pity that RAID-Z and RAID-Z2 pools cannot be dynamically expanded by adding new devices just yet...
For Tatsujin: if you have the cash to fork out for them, pre-assembled NASes are the fastest way to get a robust RAID setup running. Manufacturers like Synology and QNAP (to name a couple) make such devices for the consumer market, although they are definitely on the pricey side.
Right now I'm on a Synology CS-407 (the discontinued higher-end version of the CS407e), and it's a dream to use. For the casual user who doesn't like spending time setting things up, everything is accessible through the main web-management interface, which is really slick and convenient. Getting a 3-disk RAID 5 up and running took 10 minutes of putting disks in, a few clicks in a browser, and about half a day for the array to initialise (it's running a 500 MHz ARM processor with 128 MB RAM). Expanding that to a 4-disk array took another half a day.
Those who like using shells can even enable terminal services for SSH fun (doing so voids your warranty, but Synology is cool about that; they even have a forum section for users to discuss software/hardware hacks). It runs a slim BusyBox distro, and new ARM-compiled packages are available via ipkg.
Of course, the better value-for-money proposition is, without doubt, always to re-purpose an old PC.
