My understanding of this subject is -

Two or more processes accessing a semaphore concurrently can cause:
Deadlock
The problem with semaphores is that you can forget to call the release method, and that can cause deadlock.
Starvation
Both processes can change the semaphore's counter through P and V operations and, by doing so, put the process in an inconsistent state.

Is there anything anyone can add to this, or have I just confused myself too much?

WHEW
You have only scratched the surface of 'Semaphore Science'. There are many 'flavors' of semaphores. The most basic is the Event Semaphore, which you can use for most multi-threaded (let's treat MT and MP as equivalent for now) operations where a resource is itself restricted to serial use. One common example is a master index table in a database: while the table can be accessed for reference many times concurrently, it can only be accessed serially for edits.

In this case the editing thread or process will have to lock the semaphore and wait for it to clear (the count goes to 0) before proceeding with its edit operation. It is a worthy exercise to consider the effects of this simple example before moving on to more complex operations and uses of semaphores.
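To make that concrete, here is a minimal sketch, assuming C++20's std::binary_semaphore; the names edit_gate and master_index are illustrative only, and readers are left out so that only the serial edit path is shown:

```cpp
// Minimal sketch: one binary semaphore serializes edits to a shared "master index".
// Readers are omitted for brevity; a real system would also coordinate them
// (e.g. with a readers-writer lock).
#include <semaphore>
#include <thread>
#include <vector>
#include <cstdio>

std::binary_semaphore edit_gate{1};   // 1 = table free, 0 = someone is editing
std::vector<int> master_index;        // the shared resource

void edit_table(int value) {
    edit_gate.acquire();              // block until the current editor is done
    master_index.push_back(value);    // serial section: only one editor at a time
    std::printf("edited: %d\n", value);
    edit_gate.release();              // let the next editor in
}

int main() {
    std::thread a(edit_table, 1), b(edit_table, 2);
    a.join(); b.join();
}
```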

1.) Consider the use of a named (globally scoped) vs. an anonymous (locally scoped) semaphore. Both can be referenced by a handle.

2.) Use an array of semaphores to protect records, ranges, and auxiliary tables in a very busy MT or MP environment.

3.) Use a class to automate the care of a semaphore.
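Point 3 is worth a sketch of its own: a tiny RAII wrapper (the name SemGuard is made up here) that acquires in its constructor and releases in its destructor, assuming C++20's std::binary_semaphore.

```cpp
// Minimal sketch of point 3: a guard object that releases the semaphore
// automatically, so a forgotten release cannot leave other threads stuck.
#include <semaphore>

class SemGuard {
public:
    explicit SemGuard(std::binary_semaphore& s) : sem_(s) { sem_.acquire(); }
    ~SemGuard() { sem_.release(); }          // always runs, even on early return or exception
    SemGuard(const SemGuard&) = delete;      // non-copyable, like std::lock_guard
    SemGuard& operator=(const SemGuard&) = delete;
private:
    std::binary_semaphore& sem_;
};

// Usage:
// std::binary_semaphore table_sem{1};
// void edit() {
//     SemGuard guard(table_sem);   // acquired here
//     /* ... edit the protected table ... */
// }                                 // released here, however the scope is left
```

This is the same idea as std::lock_guard for mutexes: tying the release to scope exit means the "forgot to call release" deadlock raised in the original post simply cannot happen.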

Again, all this is just the beginning of uses and considerations.

Good luck,
- Ed.


Let's try this again-- apparently providing links to useful information isn't looked kindly upon.

A semaphore is used to monitor a set of resources. Let's pretend we have 10 computers and 15 users. Clearly only 10 computers can be used at once, so 5 users will be left waiting. Programs and computer systems don't inherently know how to handle a situation like this-- hence the problem of process synchronization. This gave birth to an abstract idea-- the semaphore.

We can use a semaphore in this problem. Our semaphore would represent the computers. Initially the semaphore's value would be 10 (we have 10 computers). Each time a user comes in and wants a computer, they must semaphore_wait on the semaphore. The semaphore is decremented by 1, and if the result isn't negative the resource (a computer) is given to the user. So the first 10 users are able to semaphore_wait and proceed, while the 11th through 15th users drive the semaphore negative and are left waiting for a free resource. Each time a user is finished with a resource they must semaphore_post, which increments the semaphore. Once a resource is freed it can be handed to one of the waiting users, or, if no one is waiting, the incremented semaphore lets the next arriving user grab it.
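Here is a minimal sketch of that example, assuming C++20's std::counting_semaphore, where acquire() stands in for semaphore_wait and release() for semaphore_post:

```cpp
// Minimal sketch of the 10-computers / 15-users example.
#include <semaphore>
#include <thread>
#include <vector>
#include <chrono>
#include <cstdio>

std::counting_semaphore<10> computers{10};   // 10 computers available at the start

void user(int id) {
    computers.acquire();                     // semaphore_wait: block if all 10 are taken
    std::printf("user %d got a computer\n", id);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // "use" the computer
    std::printf("user %d is done\n", id);
    computers.release();                     // semaphore_post: hand it to a waiting user
}

int main() {
    std::vector<std::thread> users;
    for (int i = 1; i <= 15; ++i)            // 15 users contend for 10 computers
        users.emplace_back(user, i);
    for (auto& u : users) u.join();
}
```

Note that library semaphores like this one simply block at zero rather than actually going negative; the "negative value" in the textbook description is just a way of bookkeeping how many users are waiting.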

As you have hinted, a problem can occur if a programmer forgets to release a resource. This may or may not cause a deadlock, depending on how many resources are needed and how many are available.

Apart from semaphores-- you may want to look at other kinds of synchronization tools, such as binary locks/mutex locks.
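For comparison, here is a minimal sketch of a mutex lock (std::mutex with std::lock_guard) protecting a shared counter; the names are illustrative only:

```cpp
// Minimal sketch: a mutex serializes increments of a shared counter.
#include <mutex>
#include <thread>

std::mutex m;
long counter = 0;

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);   // acquired here, released at the end of the loop body
        ++counter;                              // only one thread increments at a time
    }
}

int main() {
    std::thread a(work), b(work);
    a.join(); b.join();
    return counter == 200000 ? 0 : 1;           // without the lock this result would be unreliable
}
```

Unlike a counting semaphore, a mutex has an owner: the thread that locked it is the one that must unlock it.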

One often ignored synchronization method is message send/receive/reply. The receiver is the gatekeeper. A process that wants access to the resource sends a message asking permission; if no one else has asked, the receiver replies, unblocking the sender, which now knows it has exclusive access to the resource. When done, it sends an "I'm done" message, and the receiver knows the next sender can get the resource, doing a basic ACK reply. If more than one sender asks for the resource, they get queued and blocked until the resource is available. Senders can set timers so that if they are blocked for too long, they are broken out of the wait-for-reply state and can go on and do other work. This is how message-passing micro-kernel operating systems such as QNX, Plan-9, Thoth, and the Amiga OS work.
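Here is a minimal in-process sketch of that send/receive/reply idea, using a gatekeeper thread, a queue, and std::promise/std::future as the "reply"; the class and method names (Gatekeeper, request, done) are made up for illustration, and real message-passing kernels do this between processes rather than threads:

```cpp
// Minimal sketch: a gatekeeper thread grants exclusive access to queued senders, one at a time.
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <cstdio>

class Gatekeeper {
public:
    // "send": ask for the resource and block until the gatekeeper replies.
    void request() {
        std::promise<void> reply;
        auto granted = reply.get_future();
        {
            std::lock_guard<std::mutex> lock(m_);
            waiters_.push(std::move(reply));
            cv_.notify_one();
        }
        granted.wait();                      // blocked until the reply arrives
    }

    // "I'm done": tell the gatekeeper the resource is free again.
    void done() {
        std::lock_guard<std::mutex> lock(m_);
        busy_ = false;
        cv_.notify_one();
    }

    // The receiver: grants the resource to one queued sender at a time.
    void run() {
        std::unique_lock<std::mutex> lock(m_);
        while (running_) {
            cv_.wait(lock, [this] { return (!busy_ && !waiters_.empty()) || !running_; });
            if (!running_) break;
            busy_ = true;
            waiters_.front().set_value();    // the reply: unblocks exactly one sender
            waiters_.pop();
        }
    }

    void stop() {
        std::lock_guard<std::mutex> lock(m_);
        running_ = false;
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::promise<void>> waiters_;
    bool busy_ = false;
    bool running_ = true;
};

int main() {
    Gatekeeper gate;
    std::thread receiver(&Gatekeeper::run, &gate);

    auto client = [&](int id) {
        gate.request();                      // ask permission, block until replied
        std::printf("client %d has the resource\n", id);
        gate.done();                         // release it for the next queued sender
    };
    std::thread a(client, 1), b(client, 2);
    a.join(); b.join();
    gate.stop();
    receiver.join();
}
```

The timeout variant mentioned above would use granted.wait_for(...) instead of wait(), plus a way to withdraw the queued request, which this sketch omits.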

Ignored for a very good reason: the obvious increased overhead and the attendant affinity for errors and failure-mode permutations. Often ignored, which in this case is a good thing! Fluffy, but inelegant, with a nightmarish pathology.
- Ed.

That's why we use it to run Billion$ semiconductor fabs, US Navy repair depots, stealth fighter avionics, highly sensitive medical devices, automobile manufacturing plants, space shuttle systems...

Sounds about right - that's what the cost of maintenance would take. Enjoy
