Originally posted by talldude123
64-bit processors are the new thing. They are slowly getting rid of 32-bit architecture. Only Intel Celerons are 32-bit now.
64-bit operating systems are a little buggy, because software doesn't run well on it. I believe only a select few programs work on 64-bit Windows XP. Also, it costs much more to purchase it.
Here's a whole article that can answer your questions about 64-bit:
Well, aside from listing some inaccurate differences between 32-bit and 64-bit, you didn't actually describe what "64-bit" means, but that's okay.
First off, Windows XP x64 isn't a beta; they actually released it. It's buggy, yes, but that doesn't make it a beta. And 64-bit operating systems in general are not "a little buggy": 64-bit Linux, 64-bit Mac OS X, and 64-bit Vista all work great. An OS isn't buggy "because software doesn't run well on it" ... the SOFTWARE is what's buggy if it doesn't run well. Like I said, 64-bit Linux, Mac OS X, and Vista work perfectly fine.
WinXP x64 has a compatibility layer called WOW64 (Windows-on-Windows 64-bit) that lets 32-bit applications run on 64-bit Windows. With Linux there are two ways to run 32-bit software on 64-bit Linux: chroots and compatibility libraries. Chroots are a lot more stable, and 99.9% of software works great. Compatibility libraries are a bit less reliable and not everything works, but what does work runs a bit more "natively" than in a chroot and takes up less space. A chroot is basically a directory with a whole OS inside it that you can "enter" and run programs from ... it's actually quite difficult to explain, but this wiki should help:
On to hardware: AMD and Intel pretty much only make 64-bit CPUs now. The newer Semprons (Sempron 64) are 64-bit. You're right that plain Celerons are 32-bit, but Celeron Ds are 64-bit.
Basically the difference between a 32-bit CPU and a 64-bit CPU is this ... A bit is short for binary digit. It is how a computer stores and references data, memory addresses, and so on. A bit can have a value of 1 or 0, that's it. So binary code is a stream of 1s and 0s, such as this random sequence: 100100100111. These bits are also what your processor does calculations with. Using 32 bits, your processor can represent numbers from 0 to 4,294,967,295, while a 64-bit machine can represent numbers from 0 to 18,446,744,073,709,551,615. This means your computer can do math on much larger numbers in a single operation, and can address more than 4 GB of memory. You really only see a benefit in heavy number-crunching or memory-hungry workloads, though; you won't see any difference at all in standard everyday computing.