Does anyone know of an easy way to get the computer to do separate tasks for each core?

Assuming each task has no memory collisions, etc., how can I get each core to work on its own separate block of code?

This certainly exists; the main one that I know of is OpenMP, and pretty much all modern compilers support it. It is really easy to use: you just put some #pragma directives at the appropriate places, and you configure the number of threads to use (in relation to your number of cores) either through a compiler option (e.g., -fopenmp for GCC/Clang) or in the code itself. This is pretty much the minimal example (a parallel for-loop):

int main(int argc, char *argv[]) {
    const int N = 100000;
    int i, a[N];

    // The iterations are divided among the threads, so each
    // thread fills a different chunk of the array.
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 2 * i;

    return 0;
}

I believe there are also other similar tools out there, but OpenMP is by far the most popular. And, of course, the Cadillac of development tools for this purpose is Intel's Parallel Studio.

I'm still a bit confused on how to use it.

If I have:

void myFunc1(){
    a += 1;
    // etc...
}

void myFunc2(){
    b += 1;
    // etc...
}

then I would do:

void myFunc1(){
    #pragma omp parallel for
    a += 1;
    // etc...
}

void myFunc2(){
    #pragma omp parallel for
    b += 1;
    // etc...
}

??

Also, how is OpenMP better/different from standard threads (std::thread)?

You misunderstand: the directive #pragma omp parallel for literally means that the for-loop following it will be split up into a number of segments that run in parallel. For example, if you do this:

#include <iostream>

int main() {

    // The 100 iterations are divided among the threads.
    #pragma omp parallel for
    for(int i = 0; i < 100; i++)
        std::cout << i << std::endl;

    return 0;
}

Instead of printing 0 1 2 3 4 ..., it might print something like 0 25 50 75 1 26 51 76 2 ... (it's probably going to be more random than that). This is because the for-loop will be split into, let's say, 4 threads that execute a segment each, e.g., one thread does [0, 24], another does [25, 49], and so on, all in parallel. And this is just one of many different directives you can use.

Also, how is OpenMP better/different from standard threads (std::thread)?

Standard threads are good for multi-threading, but that's not the same thing as parallel processing. If you were to take a for-loop and split it up into many segments that run in parallel using standard threads, you would have quite a bit of work on your hands (and the code would look nothing like a simple for-loop anymore). With OpenMP, it's just a one-line directive, and the compiler does the rest.

Multi-threading is for running different concurrent tasks on different threads, while parallel processing is generally for distributing one big and repetitive task among a number of threads that run in parallel. So, OpenMP is about marking certain sections of code or some for-loop to be executed by many threads in parallel; it is not for creating one thread to do this and another to do that. That's what a multi-threading library is for (like std::thread or Intel's TBB). They solve completely different problems.
