csamuels said:
it handles the transfer of information from the physical layer of the mobo to the application layer of the OS.
aka it takes the 1s & 0s from the mobo and translates them into a software language
I wouldn't say that's an accurate description of what the article was describing.
Signals, semaphores, and flags are used for inter-process communication within an operating system.
They are all related mechanisms for coordinating concurrent activity, so I'll just describe what a semaphore is in general.
If you have two threads (or processes) that need to access the same shared resource, there must be a way to arbitrate which thread may access the resource at which time.
The simplest form of semaphore is the binary semaphore which is always either in a locked or unlocked state.
Consider two threads, Thread A and Thread B, which both want to access some common memory locations X and Y.
Thread A wants to execute:
Y = X + 2
X++
Thread B wants to execute:
X++
Y = X + 2
Now, without semaphores, the order in which these statements are executed is critical to the result.
Assume X initially equals 1.
If Thread A executes entirely before Thread B, then when Thread A completes, the value of X will be 2, and Y will be 3. When Thread B completes subsequently, the value of X will be 3 and the value of Y will be 5.
Now, if Thread A is interleaved with Thread B such that the statements are executed A1, B1, A2, B2, the values at the end will be X = 3, Y = 5.
Now interleave them the other way. B1, A1, B2, A2. The values will be X = 3, Y = 4.
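The three orderings above are easy to check mechanically. Here's a small Python sketch (Python chosen just for brevity; A1/A2 and B1/B2 are simply labels for the two statements of each thread) that replays each interleaving against shared X and Y:

```python
def run(order):
    """Replay one interleaving of Thread A's and Thread B's statements."""
    state = {"X": 1, "Y": 0}  # X starts at 1, as in the example
    steps = {
        "A1": lambda s: s.update(Y=s["X"] + 2),  # Thread A: Y = X + 2
        "A2": lambda s: s.update(X=s["X"] + 1),  # Thread A: X++
        "B1": lambda s: s.update(X=s["X"] + 1),  # Thread B: X++
        "B2": lambda s: s.update(Y=s["X"] + 2),  # Thread B: Y = X + 2
    }
    for step in order:
        steps[step](state)
    return state["X"], state["Y"]

print(run(["A1", "A2", "B1", "B2"]))  # A entirely before B: (3, 5)
print(run(["A1", "B1", "A2", "B2"]))  # interleaved A1, B1, A2, B2: (3, 5)
print(run(["B1", "A1", "B2", "A2"]))  # interleaved B1, A1, B2, A2: (3, 4)
```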
Clearly, the results of this execution are unpredictable, and thus unacceptable. If calculations depend on the result of these two small code sections, bugs could be introduced based on subtle timing variations.
Semaphores rely on the concept of the 'atomic operation'. Normally an operation like X++ is actually executed as three steps: load X into a temporary, increment the temporary, store the temporary back to X. Those steps can therefore be interleaved with another thread's in any number of unpredictable ways. To make such operations 'thread-safe', binary semaphores allow the programmer to ensure that this interleaving cannot happen.
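To make the danger concrete, here is the classic "lost update" written out in Python, with the load/increment/store steps of X++ spelled out by hand for each thread. Because both threads load X before either stores it back, one increment simply vanishes:

```python
X = 1

# Thread A and Thread B both try to execute X++, but the three
# underlying steps happen to interleave like this:
a_temp = X       # Thread A loads X (sees 1)
b_temp = X       # Thread B loads X (also sees 1)
X = a_temp + 1   # Thread A stores 2
X = b_temp + 1   # Thread B stores 2 -- Thread A's increment is lost

print(X)  # 2, not the 3 you would expect after two increments
```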
This leads to the concept of a critical section, which is the section of a thread's code that accesses the shared resource.
The pattern therefore looks like:
Code:
// Some non-shared code
Request semaphore   // if it is locked, wait here until it is released
// Critical section
Release semaphore
// Some non-shared code
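As a concrete sketch of that pattern, Python's `threading.Semaphore(1)` behaves as a binary semaphore (again, Python just for brevity; the same idea applies in any language). Each thread requests the semaphore before its critical section and releases it afterward, so the two code sections from the example can no longer interleave:

```python
import threading

X = 1
Y = 0
sem = threading.Semaphore(1)  # binary semaphore, initially unlocked

def thread_a():
    global X, Y
    sem.acquire()   # request semaphore; blocks until it is released
    Y = X + 2       # critical section
    X += 1
    sem.release()   # release semaphore

def thread_b():
    global X, Y
    sem.acquire()
    X += 1          # critical section
    Y = X + 2
    sem.release()

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start()
a.join(); b.join()

# X always ends up 3; Y is 5 if Thread A ran first, 4 if Thread B did.
# The mixed interleavings are gone -- only the two whole-section orders remain.
print(X, Y)
```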
Does any of this make sense?