Help needed: CPU speed vs SATA and USB 3.0 speed

foxclab01 · Beta member · Messages: 4
Hello to everybody in the forum.

Here is a question I have and cannot answer. It is a theoretical rather than a practical question, but it still bothers me:

Why is it possible for (any) SATA and for USB 3.0 to function at all? By my reasoning, these protocols should simply not be able to work!

Let me explain myself:

Let's take a really fast CPU, one clocked at 3.6 GHz. The CPU needs from 1 up to 4 clock cycles to execute one machine-language instruction, right? So, in the worst-case scenario, this CPU can execute 3.6 / 4 = 0.9 billion instructions per second or, if you prefer, 900 million instructions per second.

So far, so good. But now let's look at the USB 3.0 protocol. This new protocol can transfer 5 Gbit per second. If one CPU instruction were needed for every bit transferred, the CPU would have to handle 5 billion instructions per second just to make USB 3.0 work. That is simply not possible. In fact, nothing that needs more than 900 million instructions per second should work.
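To put my worry in numbers, here is the comparison as a small Python sketch (the 4-cycles-per-instruction figure is just the worst case I assumed above, a big simplification since real CPUs pipeline):

```python
# Back-of-the-envelope check: can a CPU touch every bit of a
# 5 Gbit/s USB 3.0 stream one instruction at a time?
# (Assumes the worst case of 4 clock cycles per instruction.)

cpu_clock_hz = 3.6e9          # 3.6 GHz
cycles_per_instruction = 4    # worst case assumed above
usb3_bits_per_second = 5e9    # USB 3.0 raw signaling rate

instructions_per_second = cpu_clock_hz / cycles_per_instruction
print(f"CPU worst case: {instructions_per_second / 1e9:.1f} billion instructions/s")

bits_per_instruction = usb3_bits_per_second / instructions_per_second
print(f"Bits arriving per available instruction: {bits_per_instruction:.1f}")
# => 0.9 billion instructions/s, ~5.6 bits arriving per instruction:
#    one-instruction-per-bit handling indeed could not keep up.
```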

I understand that the PCI Express protocol is another story, because there the bus is handled by hundreds of small processors inside the graphics card, and these processors work in parallel, handling the PCI Express lanes all together.

But what about SATA and USB 3.0? Why on earth are they able to work? I must have a mistake in my calculations, but I do not know where... :)

Can anybody help?
 
A few things (and please, if anyone has more information on the subject, correct my mistakes):

First off, USB and SATA transfers are not controlled by the CPU instruction-by-instruction. Although the IRQs (interrupt requests) are handled by the CPU, the data transfers themselves are handled by the host controller in the Southbridge chipset on your motherboard, which moves data to and from memory in whole blocks via DMA (direct memory access), not bit by bit.
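As a rough illustration of why that matters (the 64 KiB block size below is just an assumed example, not what any particular controller uses):

```python
# Rough illustration: how often does the CPU get involved if the
# host controller DMAs whole blocks and only interrupts at the end?
# The 64 KiB block size is an assumed example value.

usb3_bits_per_second = 5e9
block_size_bits = 64 * 1024 * 8   # one 64 KiB DMA block

interrupts_per_second = usb3_bits_per_second / block_size_bits
print(f"Interrupts/s at full 5 Gbit/s: {interrupts_per_second:,.0f}")
# => ~9,537 interrupts/s -- trivial for a CPU executing hundreds of
#    millions of instructions per second, versus the impossible
#    5 billion per-bit touches the original math assumed.
```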

Data-transfer-rate math is also a little more complex than simple instructions per second or clock cycles.
Wikipedia said:
The bandwidth or maximum theoretical throughput of the front-side bus is determined by the product of the width of its data path, its clock frequency (cycles per second) and the number of data transfers it performs per clock cycle. For example, a 64-bit (8-byte) wide FSB operating at a frequency of 100 MHz that performs 4 transfers per cycle has a bandwidth of 3200 megabytes per second (MB/s):
8 B × 100 MHz × 4 transfers/cycle
= 8 B × 100,000,000 cycles/s × 4 transfers/cycle
= 3,200,000,000 B/s
= 3200 MB/s
The number of transfers per clock cycle depends on the technology used. For example, GTL+ performs 1 transfer/cycle, EV6 2 transfers/cycle, and AGTL+ 4 transfers/cycle. Intel calls the technique of four transfers per cycle Quad Pumping.
Many manufacturers publish the speed of the FSB in MHz, but often do not use the actual physical clock frequency but the theoretical effective data rate (which is commonly called megatransfers per second or MT/s). This is because the actual speed is determined by how many transfers can be performed by each clock cycle as well as by the clock frequency. For example, if a motherboard (or processor) has a FSB clocked at 200 MHz and performs 4 transfers per clock cycle, the FSB is rated at 800 MT/s.
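The same formula as a tiny Python helper (the function name and the USB comparison at the end are mine, just to make the quoted example concrete; treating a serial link with the parallel-bus formula is a simplification):

```python
def bus_bandwidth_mb_s(width_bytes, clock_mhz, transfers_per_cycle):
    """Peak bandwidth in MB/s: data-path width x clock x transfers per cycle."""
    return width_bytes * clock_mhz * transfers_per_cycle

# The quoted FSB example: 64-bit (8-byte) bus, 100 MHz, quad-pumped.
print(bus_bandwidth_mb_s(8, 100, 4))     # -> 3200 (MB/s)

# USB 3.0 for comparison: serial, 1 bit wide, 5 GHz effective signaling.
print(bus_bandwidth_mb_s(1/8, 5000, 1))  # -> 625.0 (MB/s raw, before encoding)
```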

This is a rather short answer, but I hope it helps.
 
Wow, he posted this question to like four forums...

I see that every once in a while. Sometimes they are trolls, other times they are just desperate for information as fast as possible.

Holy cow, I see at least eight different forums...

Sadly, I'm already a member of half of those.
 
Thank you for your very helpful answer. I think I get the picture now.

Searching the web, I found that the real-life transfer rate for USB 3.0 is about 100 megabytes/s, or 800 megabits/s if you prefer (far lower than the theoretical maximum of 4.8 gigabits, of course). But these real-life tests involved data transfers to and from a hard disk, and so were limited by the disk's RPM and so on.

So, does anybody know, or has anyone tested, how fast the real transfer rate of USB 3.0 is when hard disks are not involved? (For example, accessing the USB port for serial data transfer through a software program.) I do not expect 4.8 Gbit/s of course, but not as slow as 800 Mbit/s either. Is my assumption correct, or, in the case of the software program, because now the CPU DOES get involved, is there the limitation of the CPU clock?
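For a rough upper bound with no disk in the loop, here is a small estimate in Python (the only overhead modeled is the 8b/10b line coding USB 3.0 uses on the wire; real controllers, protocol framing, and drivers lower the figure further):

```python
# Rough ceiling for USB 3.0 throughput with no hard disk involved.
# USB 3.0 SuperSpeed uses 8b/10b line coding: 10 bits on the wire
# carry 8 bits of data. Protocol and driver overhead is ignored here.

raw_bits_per_second = 5e9
data_bits_per_second = raw_bits_per_second * 8 / 10   # after 8b/10b coding

print(f"Data rate after line coding: {data_bits_per_second / 1e9:.1f} Gbit/s")
print(f"                           = {data_bits_per_second / 8 / 1e6:.0f} MB/s")
# => 4.0 Gbit/s, i.e. about 500 MB/s at best -- well above the
#    ~100 MB/s disk-bound results, so the disk, not the CPU clock,
#    was the bottleneck in those tests.
```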

Thank you all again.

P.S. I am not a troll, OK? :)

A LITTLE desperate, yes, I admit it. Are you happy now? (hahahaha, just kidding)...
 