OS/software RAID 5 under Windows XP Pro x64???

Hey all.

I broke down and bought myself a nice external drive bay and 10 drives to put in it.

I'm running XP Pro x64 and want to do a software RAID solution (since the provided controller is cheap and can only handle up to 5 drives in one RAID set). I'd heard that Windows XP had this functionality, so I thought it would be no problem. I even found a nice step-by-step guide for it.

(Using WindowsXP to Make RAID 5 Happen | Tom's Hardware)

Of course, I go to actually do it... and that's when I remember that Windows XP x64 is actually based on the Server 2003 platform, not the XP platform, meaning those steps don't work. D'oh. (For starters, the hex strings the guide says to search for don't all exist in the x64 system files...)

Does anyone know how to do this, either through XP x64 with free software (and ideally nothing that involves a Linux install)... or have links to a guide (etc) that can walk me through it?
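
Whatever the eventual answer turns out to be, one prerequisite I know holds on both XP and Server 2003: the drives have to be converted to dynamic disks before Windows will do any software RAID. Roughly, that step in diskpart (the disk numbers are just placeholders for my ten drives):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> convert dynamic
    (repeat select/convert for each remaining drive)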
 
Have you tried searching Google for results with Server 2003 instead of XP?
 
Yes. But Windows Server 2003 supports it natively (it's an option under Disk Management), whereas Windows XP x64 doesn't show that option in Disk Management.

XP Pro x64 (for the most part) looks just like XP Pro, but is built on the Server 2003 platform... so I'm not sure whether this is just a cosmetic change (they hid the option) or whether there's something you need to do or download to enable it.
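
If it really is just hidden in the GUI, the Server 2003 command-line path might still be intact underneath. A sketch of what I'd try in diskpart once the disks are dynamic, assuming XP x64 kept the Server 2003 syntax (completely untested, and the disk numbers are just examples):

    diskpart
    DISKPART> create volume raid disk=1,2,3,4,5,6,7,8,9,10
    DISKPART> list volume

If diskpart rejects "raid" as a volume type, the support was presumably stripped out below the GUI too, and the file-patching route would be the only way.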
 
I wouldn't do it unless you have Seagate SCSI drives. Then you could get an Adaptec controller card to do it in hardware.

If you use other HDDs, they will fail on you eventually.
 
That's kind of the point of RAID 5 - to provide redundancy in the case of a drive failure, so that a single drive failure doesn't take down your whole array.

And, FYI, even Seagate SCSI drives fail. I have to replace failed SCSI drives at work every couple of weeks or so, in our older environments.
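
The redundancy itself is just XOR parity, which is why any single drive is expendable. A toy Python sketch (made-up byte values, three data drives instead of ten) of how a dead drive's contents get rebuilt from the survivors:

    import functools
    import operator

    # one stripe's worth of data on three drives (toy example values)
    data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]

    def xor_bytes(chunks):
        # XOR corresponding bytes across all chunks
        return bytes(functools.reduce(operator.xor, t) for t in zip(*chunks))

    parity = xor_bytes(data)  # what the array stores in its parity blocks

    # pretend drive 1 dies: XOR the survivors with the parity to rebuild it
    rebuilt = xor_bytes([data[0], data[2], parity])
    assert rebuilt == data[1]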
 
I know, but at least they were built for it. A regular HDD isn't.
 
I disagree. The new servers my company buys from Dell now all come with SATA II, not SCSI. And about **** time. Ask anyone who's had to work with SCSI technology: it sucks. You can fry a four-figure SCSI controller card by powering things down in the wrong order, it's not hot-swappable, it's slower than SATA, and so on.

The only difference between the drives I'm trying to RAID and the drives my company bought to house our office's VMware is that mine are 2,800 RPM slower (7,200 vs. 10,000). Which is fine, because I don't have 60 developers trying to access my personal hard drives at the same time.

"Regular" hard drives not being "built for" RAID is just a bunch of crap disk companies use to sell their overpriced "server" drives, which, trust me, fail just as often as workstation drives, all else being equal. And actually, Ive been around enough servers and disk arrays to notice that the difference between the "Server" drives a company makes and the "workstation/regular" drives is usually just that the server version is at 10k or 15k RPM, and the workstation version is at 7200 or 10k RPM.

But I digress... regardless of whether you think it's a good idea or not, do you know how I can set up a 10-disk RAID 5 set under an XP x64 environment?
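
For the record, I also know the capacity cost: one disk's worth of space goes to parity. In Python terms, with a made-up drive size:

    n_drives, size_gb = 10, 500           # 500 GB per drive is just an example
    usable_gb = (n_drives - 1) * size_gb  # 4500 GB usable, one drive's worth to parity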
 
Wow...

So a SATA 7,200 RPM drive that costs next to nothing is more rugged than a 17 GB Seagate 15,000 RPM drive that cost $250 and up and was specifically designed to be rugged enough to handle RAID setups? The six that I had in RAID 5 were hot-swappable (though I never tried it).

I'm speechless.

You need to check around here. Most people here who run RAID with IDE and SATA drives kill their HDDs. I never killed those six; my friend is using them now, as I sold him the server.
 
Not what I said, at all, but thanks for playing. I do this professionally, and I can tell you for a fact that RAID 5 on SATA won't add any undue wear and tear to the drives for a single user's file server. If anything, it will decrease the wear, because the files I'll be serving up to my entertainment system will be spread across multiple drives instead of just one. And, like I said, we use SATA drives at work in production to serve VMware images to dozens of developers at the same time, and it doesn't trash the drives. If other people here are having problems, it's because they're either doing it wrong or buying cheap, off-brand, low-RPM drives and hitting them with too many simultaneous I/O threads.
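
The back-of-the-envelope load math backs that up. A quick Python sketch, with a made-up file size:

    n_drives = 10
    file_mb = 700                      # one large media file (example value)
    # data is striped across all drives and parity blocks are skipped on reads,
    # so each drive serves only about 1/N of a big sequential read
    per_drive_mb = file_mb / n_drives  # ~70 MB per drive vs. 700 MB from one disk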

I personally set up RAID 5 with five 80 GB IDE Maxtors back when they first came out, and four of the five still work; the other died after its eighth year of continuous operation. I know what I'm getting into.

So, thanks, but like I said, I'm not interested in whether anyone thinks it's a good idea or not, just whether or not anyone knows how to do it under XP x64 or hack XP x64 to do it natively.

And "I'm speechless"? If only it were true...
 