This issue has probably been discussed before, but I could still use a little insight. I am a Python beginner looking to develop a small client/server suite. The clients will reside on different PCs across a LAN, and their job is to receive a file (say, a .txt file) from the server and open it with local copies of another application (say, Gnote ... I use Linux). The clients interact with their local Gnote and report state/progress to the server, which uses those reports to decide how to allocate remote tasks to the clients.

Let's say each client has to extract a piece of text from the initial .txt file and store the extract as a small .mxt file (just an arbitrary extension) on a shared resource (maybe a Samba or NFS share). Each extract is different and roughly sequential per client, so there are no repeats. At the end of the extraction tasks, the server will instruct one of the clients to collate the .mxt files, recreate the initial .txt file, and deliver it to a specified location (a different folder, for comparison and assessment).

That kinda summarizes the scenario.

I've read that there are different ways to approach this, threading and multiprocessing being the most popular, and I am wondering which will best suit this scenario. The initial .txt file in our example can be anywhere between 300MB and 10GB+ in size. Of course, that's a test size ... just so we know there will be a lot of file transfers across the LAN.
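For context, here is a rough sketch of how I picture the transfer side, streaming the file in chunks so a 10GB file never has to fit in memory (send_file, host, and port are just placeholder names I made up, not anything I've settled on):

```python
import socket

CHUNK_SIZE = 64 * 1024  # 64 KB per read, so a huge file never sits in memory


def send_file(path, host, port):
    # Stream the file in fixed-size chunks instead of reading it whole.
    with socket.create_connection((host, port)) as conn, open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:  # EOF reached
                break
            conn.sendall(chunk)
```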

So, which way should I go?

I don't know much about multiprocessing, but I know that threading + sockets will do this job without any issues at all.

;)
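To give a rough idea, here is a minimal sketch of the threading + socket approach: one daemon thread per connected client, so a slow client never blocks the others. handle_client, the port number, and the ACK reply are placeholders; the actual state-report protocol is up to you:

```python
import socket
import threading


def handle_client(conn, addr):
    # Service one client connection; runs in its own thread.
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break  # client disconnected
            # Parse the client's state/progress report here,
            # then decide which task to send back.
            conn.sendall(b"ACK")


def serve(host="0.0.0.0", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            # One daemon thread per client: a blocking recv() on one
            # connection does not stall the others.
            threading.Thread(target=handle_client,
                             args=(conn, addr), daemon=True).start()
```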

The advantages of multiprocessing are:
it is not limited by the GIL (each process has its own interpreter and its own GIL), and
it can use multiple cores if you are on a multi-core machine.
This means it is usually faster for CPU-bound work, but it ultimately comes down to what you understand and can use. Note that both threading and multiprocessing only handle concurrency on a single machine; for the LAN side of your design you still need sockets (or a higher-level networking library) either way. A rough sketch of the multiprocessing side follows.
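For comparison, here is a minimal sketch using a multiprocessing Pool to spread CPU-bound work across cores. The extract function is a hypothetical stand-in for whatever produces your .mxt extracts:

```python
import multiprocessing


def extract(chunk):
    # CPU-bound work on one slice of the text; runs in a separate
    # process, so it is not serialized by the GIL.
    return chunk.upper()  # stand-in for the real extraction logic


if __name__ == "__main__":
    chunks = ["first slice", "second slice", "third slice"]
    with multiprocessing.Pool() as pool:      # one worker per core by default
        results = pool.map(extract, chunks)   # distributes slices across processes
    print(results)
```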
