
RAID Boxes


K7IA:

--- Quote from: per on August 04, 2009, 09:15:57 PM ---
Yes, but the original subject was about a RAID box. :)

And, no I'm not from SUN, if I were I would not be using cheap generic PC hardware. ;)

--- End quote ---

But it's not about building your own RAID box. People are commenting on possible configurations for a RAID setup, if those options are available. Unless, of course, Tatsujin is a hardware guru who intends to build a box from scratch :)

Cheap generic PC hardware? The setup looks perfectly clean and nice to me; just put a Cray logo on the box :)


kureshii:

--- Quote from: Talapus on August 04, 2009, 05:02:53 PM ---I understand how parity works on RAID 2-5. But can anyone explain how the double parity in RAID 6 works? I've done some reading on R-S code, and while I can understand most of the math, I can't seem to put the pieces together.  ???

--- End quote ---
To the best of my knowledge, it is just taking RAID 5 one step further in terms of data redundancy.

While in RAID 5 you have one additional parity block being generated, in RAID 6 you have two. The parity blocks are distributed across all disks instead of being stored on a separate parity disk.

For each stripe of data you have 2 parity blocks, which gives you two "layers" of redundancy. In RAID 5, if you lose a disk, you still have an effectively "complete" copy of all your data (thanks to single-parity striping). In RAID 6, if you lose 2 disks, you still have an effectively "complete" copy of all your data (double-parity striping). In either case, once you lose one more disk than that, your data is no longer complete and cannot be fully recovered.

Once you replace the dead disks, the missing blocks on those disks can be recalculated from the blocks on the other disks (belonging to the same data stripe). So you can treat RAID 6 as a more robust buffer against disk failure. It is usually used only in arrays of 6 or more disks, since in such arrays the chance of a second disk failing while you are still rebuilding from the first failure is higher than in a 5-or-fewer-disk array. That, of course, is bad news for a RAID 5 setup.
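The single-parity reconstruction described above can be sketched in a few lines. This is a hypothetical byte-level example of the idea, not any particular implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes of several equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A stripe of three data blocks plus one parity block (RAID-5 style).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: the missing block is simply the XOR
# of all the surviving data blocks and the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

The same XOR trick works for any single missing block in the stripe, including the parity block itself; that symmetry is why RAID 5 can distribute parity across all disks.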

I don't know the fine details of the algorithm used; you'll have to google that :)

----------By the way, one thing that doesn't seem to have been mentioned yet is what is commonly called "silent data corruption" (a related issue is the RAID-5 write hole). A quick Google search illustrates the problem well enough. Not to bring undue attention to it or to overstate its danger, since it occurs on single-disk setups as well, but it should be pointed out that RAID 5 is a data redundancy solution, not a protection against data corruption.

Some might feel that "silent data corruption" is too strong/scary a term for it, since it's not like the RAID actively corrupts your data while pretending everything is fine; it just doesn't tell you when data is corrupted due to bad RAM or other causes (since there's no way for it to know unless it is designed to check). This is probably of little concern to most readers, so just take what you want out of these two paragraphs.
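For illustration, the missing check is end-to-end checksumming on read, which filesystems like ZFS do per block and plain RAID does not. A hypothetical sketch of the idea:

```python
import hashlib

def store(block: bytes):
    # Keep a checksum alongside the data (ZFS does this per block).
    return block, hashlib.sha256(block).digest()

def read(block: bytes, checksum: bytes) -> bytes:
    # On read, recompute and compare; plain RAID skips this step,
    # which is why corruption can go unnoticed ("silent").
    if hashlib.sha256(block).digest() != checksum:
        raise IOError("silent corruption detected")
    return block

block, cksum = store(b"important data")
read(block, cksum)                 # intact data passes the check
try:
    read(b"importent data", cksum) # one flipped byte
except IOError as e:
    print(e)                       # corruption is caught on read
```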

----------I have a home server build coming up in another month or so, already have a RAID-Z setup planned for it :) Just a pity that RAID-Z and RAID-Z2 pools cannot be dynamically expanded by adding new devices just yet...

For Tatsujin: If you have the cash to fork out for them, pre-assembled NASes are the fastest way to get started on a robust RAID setup. Manufacturers like Synology and QNAP (to name a couple) have such devices for the consumer market, although they are definitely on the pricey side.

Right now I'm on a Synology CS-407 (a discontinued higher-end version of the CS407e), and it's a dream to use. For the casual user who doesn't like spending time on setup, everything is accessible through the main web-management interface, which is really slick and convenient. Getting a 3-disk RAID 5 up and running took 10 minutes of putting disks in, a few clicks in a browser, and about half a day for it to initialise (it uses a 500MHz ARM processor with 128MB RAM). Expanding that to a 4-disk array took another half a day.

Those who like using shells can even activate terminal services for SSH fun (doing so voids your warranty, but Synology is cool about that; they even have a section on the forums for users to discuss software/hardware hacks). It runs a slim BusyBox distro, and new ARM-compiled packages are available via ipkg.

Of course, without doubt, the better value-for-money proposition is always to re-purpose an old PC :)

bcr123:
Buy.com has the 4-drive (4x500GB) Buffalo LinkStation on special right now, for example:

buffalo-linkstation-quad-2tb

halfelite:

--- Quote from: per on August 04, 2009, 08:42:07 PM ---Hardware RAID cards are mostly a total waste if you use ZFS; ZFS does not really use them at all (the only gain is the write cache, but adding an SLC SSD for the intent log is better, really).

My home-raid can do 650MB/second streaming read, and more than 100MB/second doing 100% random access.
While seeding the 100 or so torrents I'm currently seeding, it's using less than 1% I/O capacity.
Considering the fact that a Gbit network can only handle a little more than 100MB/second, it's sort of good enough. :-)

It contains 15 drives in a 3x5 stripe/RAID-5 configuration (plus an SSD for the OS and cache), using two rather cheap 8-port PCI Express SATA controllers.

And I really think that ZFS is extremely easy to set up, compared to any of the alternatives, but I have been a unix system administrator since '92.

On a separate note, when you have more than 3 or so drives, you really need to have some kind of redundancy.

If a single drive has a mean time before failure of 5-10 years (which is more or less what I have observed), you are likely to see roughly one failure per year with 4 drives.

--- End quote ---
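The quoted failure-rate arithmetic works out roughly like this, assuming independent drives and a simple 1/MTBF annual failure probability:

```python
# Rough sketch: with a simple failure model, a drive with a
# 5-year MTBF fails in any given year with probability ~1/MTBF.
mtbf_years = 5
drives = 4

p_year = 1 / mtbf_years                      # per-drive annual failure probability
expected_failures = drives * p_year          # expected failures per year: 0.8
p_at_least_one = 1 - (1 - p_year) ** drives  # chance of >=1 failure: ~0.59

print(expected_failures, round(p_at_least_one, 2))
```

So with 4 drives at the pessimistic end of that MTBF range, you expect close to one failure a year, which is why redundancy stops being optional as the array grows.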

I think people with no experience will have a tough time with ZFS. My only problem with ZFS is that it's on OpenSolaris, lol. The hacked-in versions on other distros don't please me. Over the next 3 years, though, filesystems will take off and improve to the point of almost replacing hardware RAID cards. btrfs is one I have been watching.

Talapus:

--- Quote from: Arveene on August 04, 2009, 07:09:10 PM ---...on the math and links this as its source. It might be worth a read.
--- End quote ---

That's exactly what I needed. I understood how XOR parity worked, but I was confused as to how you could generate an independent parity bit that could combine with the XOR parity to reproduce a second lost data bit. That article lays it out much more clearly than what I was reading before. It's still not intuitive to me, but the math works out.
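For anyone else puzzling over this, the two independent parities can be sketched over GF(2^8), the field commonly used for RAID-6's Q parity. This is a toy one-byte-per-disk example, not a real implementation:

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) mod x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def gf_inv(a):
    # Brute-force multiplicative inverse; fine for a demo.
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

g = 2                      # field generator
data = [0x11, 0x22, 0x33]  # one byte per data disk

# P is plain XOR; Q weights disk i by g^i. The distinct weights are
# what make the two parity equations independent.
P = data[0] ^ data[1] ^ data[2]
Q = gf_mul(1, data[0]) ^ gf_mul(g, data[1]) ^ gf_mul(gf_mul(g, g), data[2])

# Lose disks 0 and 1; solve the 2x2 linear system for the unknowns.
Pxor = P ^ data[2]              # = D0 ^ D1
Qxor = Q ^ gf_mul(4, data[2])   # = g^0*D0 ^ g^1*D1
g0, g1 = 1, g                   # g^0 and g^1
D0 = gf_mul(gf_inv(g0 ^ g1), Qxor ^ gf_mul(g1, Pxor))
D1 = Pxor ^ D0
assert (D0, D1) == (0x11, 0x22)
```

Intuitively: P alone gives you one equation in two unknowns; the g^i weights in Q give you a second, independent equation, so two missing blocks can still be solved for.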

Thanks  :D
