Currently I am studying multithreading, and I feel pretty confused about the concepts
of blocking, non-blocking, synchronous, and asynchronous.

There are so many explanations, and each of them says something different:
someone says synchronous could be non-blocking too, someone says synchronous must be blocking. Someone says asynchronous must be non-blocking, someone says it could be blocking too.

Someone says synchronous is the same as blocking; someone says it is different.

I am extremely confused about these concepts now and don't know how to tell the differences between them.

Could anybody help me clear up my mind? Thanks a lot.

Hmm... Not everyone gets the idea of concurrent programming right away, so please don't feel bad about it. I will try to explain as plainly as I can...

Anyway, think of synchronous as simultaneous. Two or more threads run at the same time and share the same resources. While sharing those resources, one thread may be holding a resource that the other threads need. As a result, blocking occurs until the thread that holds the resource releases it.
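For example, here is a minimal Python sketch of that situation (the names are mine, just for illustration): one thread blocks on a lock held by another.

    import threading
    import time

    lock = threading.Lock()    # the shared resource

    def holder():
        with lock:             # this thread grabs the resource first...
            time.sleep(2)      # ...and holds it for a while

    def waiter():
        with lock:             # blocks here until holder() releases the lock
            print("waiter finally got the lock")

    t1 = threading.Thread(target=holder)
    t2 = threading.Thread(target=waiter)
    t1.start()
    time.sleep(0.1)            # make sure holder acquires the lock first
    t2.start()
    t1.join(); t2.join()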

Now for asynchronous, think of it as one function calling another. A thread is running and all of a sudden it needs results from another thread, or it cannot proceed. As a result, it stops. You could say that this is another type of blocking as well.
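A minimal sketch of that (again, the names are just for illustration): the main thread stops at get() until another thread produces the result it needs.

    import threading
    import queue

    results = queue.Queue()

    def worker():
        results.put(42)        # the result another thread is waiting for

    threading.Thread(target=worker).start()
    value = results.get()      # the caller stops here until the result arrives
    print(value)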

Non-blocking is similar to lock-free. In other words, even though the threads share resources, none of them will ever get stuck forever. That is the ideal situation when you work with threads.
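For instance, Python's Lock.acquire() has a non-blocking mode: instead of waiting, the call returns immediately and tells you whether it succeeded (a sketch; presumably you'd do useful work in each branch).

    import threading

    lock = threading.Lock()

    if lock.acquire(blocking=False):   # returns immediately, success or not
        try:
            print("got the resource, using it")
        finally:
            lock.release()
    else:
        print("resource busy, doing something else instead of waiting")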

Now for asynchronous, think of it as one function calling another.

What if asynchronous threads share resources?
Thanks

It may also block (lock). It is just that the threads may be doing completely different tasks, or using the shared resources in a different way, though these threads may still depend on each other to finish the job. The blocking problems that come with asynchronous code are more difficult to deal with.


Okay, here's what they mean.

"Blocking" and "non-blocking" describe individual operations, or individual function calls (or system calls). A blocking operation is basically one that can take a long time. In particular, if you made a function that read data, and waited for data to arrive before returning, it would be a blocking operation, because it could take arbitrarily long before the data arrived, and the CPU would be idle (or another thread would be scheduled to run). A non-blocking operation, on the other hand, would be one designed to return immediately, with some return value indicating that it does not have data at this time.

If you want to manage multiple forms of I/O at the same time, blocking system calls are fine as long as you use multiple threads. A blocking system call only blocks the thread it's on, and that thread has nothing better to do, so it's reasonable for that thread to sit idle while other threads work. There are plenty of classic examples of this, such as a file server that uses a thread for each connection, alternating between loading data from the file and sending it over a socket. (Or maybe two threads per connection: one that loads the file and another that sends it over the socket. This would get more throughput if the OS doesn't do any file caching or network buffering; can you explain why?)
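For example, a bare-bones thread-per-connection sketch in Python (the file name and port are made up for illustration):

    import socket
    import threading

    def handle(conn):
        # One thread per connection: every call below may block, but it
        # only blocks this thread, so other connections keep moving.
        with conn, open("data.bin", "rb") as f:     # "data.bin" is a placeholder
            while True:
                chunk = f.read(4096)                # load from the file (may block)
                if not chunk:
                    break
                conn.sendall(chunk)                 # send over the socket (may block)

    server = socket.socket()
    server.bind(("0.0.0.0", 8000))
    server.listen()
    while True:
        conn, _ = server.accept()                   # blocks until a client connects
        threading.Thread(target=handle, args=(conn,)).start()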

The problem with having one thread per connection is that it can be an awfully expensive thing to do. Switching between threads can take a lot of CPU time, and the thread contexts themselves can take a lot of RAM if you're managing millions of connections, or if your system has a 32-bit address space. So the trick is to use non-blocking I/O, and since it would be ridiculous to retry the same I/O operation over and over in a loop, we ask the kernel to notify us when the status of a file descriptor (or socket) has changed. Then we do non-blocking reads on the corresponding file descriptor until we can't get any more data off it. When there is nothing else to do, we make one blocking call that waits on all our file descriptors at once. This is called asynchronous I/O because, when using it, you say you'd like to do some I/O and have a function you provide called later when it's ready, because right now you have other things to do. If you want to get comfortable with the asynchronous model, spend some time using Node.js.
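Here is a minimal event-loop sketch using Python's selectors module (which wraps select/epoll), just to show the shape of the pattern; the callback wiring is my own illustration:

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server):
        conn, _ = server.accept()
        conn.setblocking(False)        # every client socket is non-blocking
        sel.register(conn, selectors.EVENT_READ, read)

    def read(conn):
        data = conn.recv(4096)         # won't block: the kernel said it's readable
        if data:
            conn.send(data)            # echo back (simplified: a real server
                                       # would also wait for writability)
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket()
    server.bind(("0.0.0.0", 8000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        # The only blocking call: wait on all registered descriptors at once.
        for key, _ in sel.select():
            key.data(key.fileobj)      # dispatch to the registered callback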

Synchronous I/O is what I described earlier: the function call blocks, and instead of taking a callback to be invoked later, it simply waits and eventually returns.

So I hope that clearly explains what they are.

This would get more throughput if the OS doesn't do any file caching or network buffering; can you explain why?

Case 1:
resources: thread A, thread B, data A, data B
thread A in charge of data A (loading and sending)
thread B in charge of data B (loading and sending)

Case 2:
resources: thread A, thread B, data A
thread A in charge of loading data A
thread B in charge of sending data A

without caching:
Case 1:
Although the two threads handle different data, each thread has to check by
itself whether its data is ready to be sent, so you have to add some if...else
loop or something similar, and that would waste a lot of time.

Case 2:
Thread A handles loading and thread B handles sending, so thread A can notify
thread B when data A needs to be sent. With the help of that notification,
thread A can run as fast as it can.

with caching:
Case 1 && Case 2:
Because the loading time is near zero, two threads handling two files would be
faster than two threads handling one file.

I don't know whether my answers are correct; please correct me if I am wrong, thanks.
Is Case 2 asynchronous?

OK, here is my answer in general (not including caching in the determination)...

Case 1 is synchronous and non-blocking. Each thread handles its own data set, both the loading and the sending, and the two threads are mutually exclusive in accessing a data set.

Case 2 is asynchronous, and blocking happens all the time. For the same data, the two threads execute at different times (A completes before B starts). Because thread B is always waiting for (depends on) thread A, thread B may stop completely if thread A either dies or keeps working non-stop without ever notifying thread B.
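Here is a minimal producer/consumer sketch of that Case 2 hand-off (the names are mine, just for illustration): thread B blocks on get() until thread A notifies it by putting a chunk into the queue. Note that if the loader died before putting the final sentinel, the sender would block forever, which is exactly the risk described above.

    import threading
    import queue

    chunks = queue.Queue()

    def loader():                                 # "thread A": loads data A
        for chunk in (b"part1", b"part2", None):  # None = no more data
            chunks.put(chunk)                     # putting an item is the notification

    def sender():                                 # "thread B": sends data A
        while True:
            chunk = chunks.get()                  # blocks until thread A delivers
            if chunk is None:
                break                             # loader says we're done
            print("sending", chunk)

    a = threading.Thread(target=loader)
    b = threading.Thread(target=sender)
    a.start(); b.start()
    a.join(); b.join()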

Wow, thanks. Now I get it: although synchronous can act like blocking,
it is not the same as blocking. That means synchronous could be either blocking or non-blocking.

Besides, how about my answers? Does anything need to be fixed?
Thank you very much.

Hmm... I don't really know what "caching" you are talking about. Do you mean caching the data used during thread execution? Or caching the thread's memory usage? Or something else?
