Hi, can anyone suggest an approach for finding the LCM of n numbers?

My approach: find the max of the n numbers and start dividing it by all the numbers. If all of them divide it, print it; otherwise take the next multiple of the max number and repeat the division process. But I think this is too slow and I want something better. Does anyone have a better idea?

You can use recursion to find the GCD. Then divide the product of the given numbers by the GCD to get the LCM.

Recursive GCD:

int gcd(int x, int y)
{
    if (y == 0)
        return x;
    else
        return gcd(y, x % y);
}

And gcd(a, b) * lcm(a, b) = a * b.

You can use recursion to find the GCD.

Why? What's wrong with a simple loop? Less overhead, easier to read... And why not just calculate the LCM? Why go through all the GCD calculations?

Less overhead

Not really. I don't know what compiler you're using, but any decent one would seize the opportunity to perform tail call optimization in np complete's code, and generate output identical to the one corresponding to an iterative implementation.

easier to read

I think this is the first time I've seen someone claim that imperative style code is more readable than functional style code :D So, you really think that a loop is more readable than this one-liner?

int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

Why go through all the GCD calculations?

Well, at least np complete provided something that could help the OP reduce the complexity of his/her algorithm (which, BTW, is not quite correct - it's not guaranteed to find the least common multiple). Folding a list of numbers using a binary LCM function looks much better than what the OP is trying to do.

What did you contribute to the thread with your post, apart from biased, misleading statements?

why not just calculate the LCM?

So, tell us, how exactly would you do that in a simple and efficient way without using GCD? What algorithm do you have in mind? Prime factorization? Something else? (What?)


Anything here (including GCD, I suppose, if you want) but not with recursion. Unless you can prove to me recursion uses fewer resources than non-recursion.

@npcomplete: this is a common mistake. That identity is valid for two numbers; if you want to extend it, here goes:

lcm(a, b, c, d, ...) = gcd(a, b, c, d, ...) * (a / gcd(a, b, c, d, ...)) * (b / gcd(a, b, c, d, ...)) * (c / gcd(a, b, c, d, ...)) * ...

Try it with 5, 10, 15 and your algo will not work; then do it with mine, and it will work for every set of numbers.

@waltP: which algorithm are you talking about, if you don't want to make use of GCD here? Please shed a little light on it.

@m4ster rOshi: can you please explain the solution you are providing here a little bit? Just a gist is enough for me to understand it. Thanks in advance to all of you. ;)

#include <stdio.h>

int gcd(int a, int b)
{
    if (b > a)
        return gcd(b, a);

    if (b == 0)
        return a;

    return gcd(b, a % b);
}

int lcm(int a, int b)
{
    int k = gcd(a, b);

    a /= k;    /* divide first to reduce the risk of overflow */

    return a * b;
}

int main()
{
    int a[] = {5, 10, 15, 20, 40};
    int n = sizeof a / sizeof a[0];
    int k = a[0];

    n--;

    while (n >= 1)
    {
        k = lcm(k, a[n--]);
    }

    printf("LCM of N numbers is: %d\n", k);

    return 0;
}

I have calculated the time complexity as O(n log n). Am I right? Please verify.
And can anyone improve this any more? Thanks.

commented: Yeap, you got the idea. +6

@ nitin1:

I have calculated the time complexity as O(n log n). Am I right? Please verify.

If n denotes the number of your numbers, I'd say it's O(n). The method described here (which looks like what you were trying to do) also has an asymptotic time complexity of O(n). I used the term complexity in a broader sense in my previous post. Let's take a deeper look at what happens in both approaches. Note that even if operating on the list argument is an option, the second method still has to make a copy, because the algorithm needs both lists.

@ WaltP:

Unless you can prove to me recursion uses fewer resources than non-recursion.

The term you're looking for is iteration. And I don't have to prove anything. What you said is a fact.

Consider this:

#include <stdio.h>

inline int gcdRec(int a, int b) { return b == 0 ? a : gcdRec(b, a % b); }

inline int gcdLoop(int a, int b)
{
    while (b != 0)
    {
        int temp = a % b;
        a = b;
        b = temp;
    }

    return a;
}

int main()
{
    printf("%d\n", gcdRec (12, 18));
    printf("%d\n", gcdLoop(12, 18));

    return 0;
}

This is what my compiler generated for the gcdRec and gcdLoop functions, respectively:

[...]

    movl    $12, %eax
    movl    $18, %ecx
    jmp L17
    .p2align 2,,3
L19:
    movl    %ecx, %eax
    movl    %edx, %ecx
L17:
    cltd
    idivl   %ecx
    testl   %edx, %edx
    jne L19

[...]

    movl    $18, %ecx
    movl    $12, %eax
    jmp L18
    .p2align 2,,3
L20:
    movl    %ecx, %eax
    movl    %edx, %ecx
L18:
    cltd
    idivl   %ecx
    testl   %edx, %edx
    jne L20

[...]

As you can see, the only things that change are the label and variable names. Otherwise, it's the exact same piece of code.

Don't be hasty and say "I said fewer resources, not the same."; I'm not finished yet.

When talking about resources, most inexperienced programmers mean execution time and memory consumption. However, there's another kind of resource that is of equal (and sometimes greater) importance [1] -> development time.

Writing functional style code tends to improve development time for several reasons. First, you need to type less. Typing less means fewer bugs. Fewer bugs and a small code base mean less debugging time. A small code base also means an easily maintainable code base. Whoever takes care of your code after you won't need much time to pick it up and extend it.

That last one is especially true for functional style code, because it also allows a more direct mapping of your thoughts onto the code that implements them. You can see this in the example I posted above, even though it's a small one. The recursive approach doesn't look very different from the mathematical formula you can find on Wikipedia. To implement the iterative approach, one has to introduce a temporary variable: you have to stop thinking as a problem solver and start thinking as a machine. Another, better example is a comparison between a recursive and an iterative implementation of depth first search. If you choose the iterative approach, you'll have to provide your own stack. And if you are in C and can't use "vectors, maps, numeric_limits, sorts, and all that crap", you'll have to implement that stack yourself. Suddenly, the recursive approach doesn't look that bad, eh?

[1] If this wasn't true, Martin Odersky's brainchild wouldn't have gained so much popularity so fast. I quote ...

... from here:

Scala [...] smoothly integrates features of object-oriented and functional languages, enabling Java and other programmers to be more productive. Code sizes are typically reduced by a factor of two to three when compared to an equivalent Java application.

... and here:

Scala is being used by many more organisations and steadily moving into mainstream business critical applications. Scala's use has grown by a factor of 10 over the last year and it has matured into a solid production language.

I am traversing the full length of the array, and for each element I am calling gcd() as well. So how much time does gcd() take in this procedure of finding the LCM? O(n) is for traversing the array, okay? Then how much does gcd() take in each iteration? Please clarify, thanks.

How much does gcd() take in each iteration? O(n) is the time taken to traverse the array. Then where is the gcd time included in this complexity? Is it a constant according to you?

Then where is the gcd time included in this complexity? Is it a constant according to you?

It's not a constant. And that's not according to me either, Euclid's GCD algorithm has been rigorously analyzed (you can find a fantastic treatise in Knuth vol. 2). The upper bound is O(n) where n represents the number of digits in the smaller of the two values being calculated.

So the time complexity of your loop over an array would be closer to O(nm) where n is the size of the array and m is the average length of the values in the array. You could dig down deeper to find a more appropriate representation for m, but the average should be sufficient to determine growth. A more exact measure would only be necessary for a proof, and not in practice, in my opinion.

Sir, then m is also a constant. Like, if INT_MAX is the limit, then m is 10 here. So can't we take it as a constant?

Sir, then m is also a constant.

No. Just because the range is tightly bound in one particular case (ie. your implementation) that doesn't mean you can suddenly say that the algorithm in general works with constant time. That's like saying quick sort is O(1) because you're only going to use it on an array of size 10.

So when I say that the range is fixed, like INT_MAX, then can't I say that it is O(n)?

So when I say that the range is fixed, like INT_MAX, then can't I say that it is O(n)?

How many languages would you like me to say "no" in before I make my point?

Oops! First time you got angry! :( Sorry! No need for any other language. C is best and English is best! :( Okay! O(mn), finally ;)

Oops! First time you got angry! :(

Often a curt reply can be extremely effective.

If you manage to actually make me angry, much less angry enough to post without deep consideration behind my words, you'll be in a very small and elite group that I won't need more than two hands to count and will remember forever. In other words, ain't gonna happen. ;)

I know professionals never get angry in their job and work area. You have actually taught me professionalism, and from 20 on I will keep building this professionalism in myself. No anger, no love, no excessive kissing, no hate ;) Thanks a lot ;)

I deal with the LCMs of (1 through n) exclusively and quite extensively. I can only help with this if your numbers are consecutive. Let me know if this discussion is still alive...

A quick trick by hand is to use the prime bases of numbers to arrive at the LCM of 1-n. The LCM of 1-17 is 1x2x3x2x5x1x7x2x3x1x11x1x13x1x1x2x17 .

Coding would depend on whether you are dealing with large n or not. If n is small, I would write code for the product, over every prime p in the range 1 to n, of p^floor(log_e(n) / log_e(p)).

I have code for larger n which is original and works so well for my research that I'm reluctant to give it away until I capitalize on it. I deal with LCMs of (1 to n) that have about 1 billion to 10 billion decimal digits. For such large n, the answer is too big to display, so it is shorthanded through log_10, which also tells you how many digits the LCM has; it is indexed as an exact power of 10 so the LCM could be reconstructed by raising 10 to the whatever-point-whatever. I also have code for the number of factors the LCM has, which is quite impressive. I code in Mathematica, so I can only speak to you in math-ese. Let me know if any of this applies and I will help further.
