How a computer distinguishes between separate binary bits

Chriskat777

Since the computer reads a series of electrical pulses, how does it know where one bit starts and ends?

For example, if the computer looks at 00000010, how does it tell that the first six 0's are separate bits and not just one long 0?

Does the computer have a set time that it waits before going on to read the next bit?
 
I believe it's set from first startup: just one long, long sequence of 8-bit groups. No more, no less.

(Someone please correct me if I'm wrong, I'm very much a novice at this)

i.e., 8+8+8+8+8+7+8+8+8 would always end up shifting that 7 to the end and messing up the remaining bytes:
00010100
00101101
01010101
11101010
1010010
01101011
01001001
10100100
01000100

would become
00010100
00101101
01010101
11101010
10100100
11010110
10010011
01001000
1000100

and thus change the values of all the remaining bytes.

The code probably wouldn't execute at all, since the bit values would all be jumbled up.

Err, well, not shifting the seven exactly, but reading the first bit of the byte after the 7-bit string into it, which leaves that next byte with only 7 bits, so it borrows 1 from the one after it, and so on and so forth.
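
To make that concrete, here's a quick sketch of my own (in Python, not anything the computer actually runs) that joins the bit strings above into one stream and slices them back into 8-bit groups, reproducing the garbled bytes shown above:

Code:
# Join the posted bit strings into one stream and re-slice into 8-bit
# groups, to show how a single missing bit shifts every later byte.
groups = [
    "00010100", "00101101", "01010101", "11101010",
    "1010010",               # only 7 bits: one bit is missing here
    "01101011", "01001001", "10100100", "01000100",
]

stream = "".join(groups)     # 71 bits instead of 72
reframed = [stream[i:i + 8] for i in range(0, len(stream), 8)]
print("\n".join(reframed))
# The first four bytes come out unchanged, but from the fifth group on
# every byte has pulled a bit forward from its neighbour, and the last
# group is left with only 7 bits: exactly the output shown above.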
 
This was probably the wrong place to put it, but I'm talking about the actual hardware and the electricity going through it.
 
Oh... Well, basically the same principle. Anything less than 8 electrical highs and lows wouldn't be interpreted correctly; there are no values for them. The computer relies on a constant stream of 8 electrical highs and lows at a time to be correctly interpreted into bytes.

Anything less and it would most likely crash.
 
Electrically, the bits in the computer represent switches that are either turned on or turned off. There is a very accurate system clock that keeps everything in sync, so the circuit knows exactly when a bit starts and ends. Each bit is x nanoseconds or picoseconds wide.

[Attached image: pulses.jpg, a sketch of clock ticks and data pulses]


This isn't an exact picture, but it should give you the idea. In the patterns you see here, where the line is at the top it would be considered a 1, and where it's at the bottom it would be a 0. All of the clock ticks are the same width. A series of 1's keeps the state in the high position for that many clock ticks, while a series of 0's keeps the state in the low position for that many clock ticks.
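
As a toy illustration (my own simplification, not the actual circuitry), you can think of the receiver as sampling the data line once per clock tick, so six ticks of "low" are read as six separate 0 bits rather than one long 0:

Code:
# One sample of the data line per tick of the shared clock.
# A high voltage level is read as 1, a low level as 0.
levels = ["low", "low", "low", "low", "low", "low", "high", "low"]

bits = [1 if level == "high" else 0 for level in levels]
print(bits)   # [0, 0, 0, 0, 0, 0, 1, 0]  ->  the byte 00000010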
 
Not quite. Modulation/demodulation is how music and other data can be carried by a transmitted radio signal. There's AM (amplitude modulation) and FM (frequency modulation), which most people are familiar with.

For computers, a modem (modulator/demodulator) can be used to transmit computer data over a phone line.
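
As a very rough sketch of the idea (my own, heavily simplified, and not how a real modem works), amplitude modulation can be pictured as giving each bit a different carrier amplitude for a fixed bit duration; real modems use much more elaborate schemes:

Code:
import math

carrier_hz = 1000.0      # assumed carrier frequency
sample_rate = 8000       # assumed samples per second
samples_per_bit = 40     # assumed bit duration, in samples

def am_modulate(bits):
    """Map each bit to a carrier amplitude: 1.0 for a 1, 0.3 for a 0."""
    out = []
    for i, bit in enumerate(bits):
        amplitude = 1.0 if bit else 0.3
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            out.append(amplitude * math.sin(2 * math.pi * carrier_hz * t))
    return out

signal = am_modulate([0, 0, 0, 0, 0, 0, 1, 0])
print(len(signal), "samples for one byte")   # 320 samples for one byte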
 
Electrically, the bits in the computer represent switches that are either turned on or turned off. There is a very accurate system clock that keeps everything in sync, so the circuit knows exactly when a bit starts and ends. Each bit is x nanoseconds or picoseconds wide.

That's what I was looking for, thanks.
 
Since the computer reads a series of electrical pulses, how does it know where one bit starts and ends?

For example, if the computer looks at 00000010, how does it tell that the first six 0's are separate bits and not just one long 0?

Does the computer have a set time that it waits before going on to read the next bit?

At the network layer, it reads what is called a preamble, which tells the network device when a series of electrical signals will start and end. Everything in between is typically in multiples of 8 bits.

Here's the definition of preamble: https://www.google.com/url?sa=t&rct...-R9AR8lQeOIilacaa3Z7L2A&bvm=bv.56643336,d.b2I
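
As an illustration of the idea (my own sketch, loosely based on the Ethernet-style preamble rather than taken from the link), the receiver can scan the incoming bits for a known start pattern and only then begin grouping what follows into bytes:

Code:
# Scan an incoming bit string for a start pattern (a shortened
# Ethernet-style preamble plus the 10101011 start-frame delimiter),
# then read everything after it in 8-bit groups.
START_PATTERN = "10101010" * 2 + "10101011"

def frame_bytes(bitstream):
    start = bitstream.find(START_PATTERN)
    if start < 0:
        return []                               # no frame boundary seen
    payload = bitstream[start + len(START_PATTERN):]
    return [payload[i:i + 8] for i in range(0, len(payload) - 7, 8)]

noise = "0110"                                  # stray bits before the frame
frame = START_PATTERN + "01000001" + "01000010"
print(frame_bytes(noise + frame))               # ['01000001', '01000010']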
 