In most modern languages, there are two fundamental approaches to repeating an action: either iteration, in which an action is placed inside a special-purpose structure which causes the action to be repeated, or recursion, in which a general-purpose function calls itself (or another function which in turn calls the earlier function). Which is to be used depends on the nature of the problem at hand, and to a great degree, the language being used. Some languages, especially Lisp and the various functional languages, strongly favor recursion; but most other languages, including C++, focus almost entirely on iteration.

As a result, novice C++ programmers are often given little if any experience with recursion, which tends to lead to it being overlooked even for problems where it is well suited. Also, recursive design takes a very different view of what it means to repeat something, and the approach can seem alien to programmers more familiar with iteration. Finally, because it is based on function calls, recursion generally takes more memory and time than the equivalent iteration, and while there are ways for a compiler to optimize away this overhead for some types of recursion (specifically tail calls, where the recursive call is the last step in the function), few C++ compilers do any tail call optimization. As a result, recursion is poorly understood by many C++ programmers.

This is unfortunate, as in some ways recursion is the simpler of the two approaches to looping. It does not require any additional language constructs other than conditionals and function calls, and once you are familiar with the general approach, it can seem as natural as iteration.

The key idea behind a recursive program is having a function which calls itself. A simple example of this would be:

#include <iostream>

// Prints asterisks forever: each call prints one '*' and then calls itself.
void infinite_stars()
{
    std::cout << '*';
    infinite_stars();
}

This is a function which, when called, prints out asterisks until the stack is exhausted, or until the user interrupts the program. It is equivalent to the iterative function:

void infinite_stars()
{
    while (true)
    {
        std::cout << '*';
    }
}

... except that the call stack gets used up by the function calls. For those unfamiliar with it, the call stack is a stack, supported by the computer hardware, which provides the space in memory where local function variables are stored; every time you call a function, the stack is advanced by enough space to hold everything used by the function. This space is called the function's environment, or activation record. When the function exits, the top of the stack is moved back to the level of the previous function's activation record. Because there is an activation record for each new call of the function, a function which calls itself can have multiple activation records, which means that the calls do not interfere with each other.
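To see this in action, here is a minimal sketch (my own illustration, not part of the original examples) that prints the address of a local variable at each level of a recursion; every call reports a different address because every call has its own activation record:

#include <iostream>

// A small demonstration: each recursive call gets its own activation record,
// so the local variable 'local' lives at a different address in each call.
void show_frames(int depth)
{
    int local = depth;   // stored in the current call's activation record
    std::cout << "depth " << depth << " stores its local at "
              << static_cast<void*>(&local) << '\n';
    if (depth > 0)
    {
        show_frames(depth - 1);   // a new call, and with it a new activation record
    }
}

Calling show_frames(3) typically prints four distinct addresses; the exact values and their spacing are implementation details of the compiler and platform.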

Because infinite_stars() uses a tail call - a function call which is the last action in the function - the compiler could make it so that the activation record gets re-used, such that the stack would never get used up. I'll explain more about this later, but for now, be aware that this is something C++ compilers can do, but generally only when it is enabled by a compiler option or switch.

Now, an infinite loop like this isn't usually very helpful, so in order to make a practical recursive function, you would want to have an exit condition, just as you do with an iteration. Let's modify our function so that it repeats a fixed number of times, given by the parameter n:

void stars(int n)
{
    if (n <= 0)
    {
        return;
    }
    else
    {
        std::cout << '*';
        stars(n - 1);
    }
}

This is equivalent to:

void stars(int n)
{
    for (int i = 0; i < n; i++)
    {
        std::cout << '*';
    }
}

Now you should notice three things about this function: first, that it checks for the exit condition before it performs the recursive call; second, that each of the successive calls has a different value of n; and third, that each successive call's n is one less than the call before it. This means that, as the calls progress, it counts down to zero, which is the exit condition. If we look at the calls as they appear on the call stack, they would look something like this:

-> stars(5)
-> stars(5) -> stars(4)
-> stars(5) -> stars(4) -> stars(3)
-> stars(5) -> stars(4) -> stars(3) -> stars(2)
-> stars(5) -> stars(4) -> stars(3) -> stars(2) -> stars(1)
-> stars(5) -> stars(4) -> stars(3) -> stars(2) -> stars(1) -> stars(0)
-> stars(5) -> stars(4) -> stars(3) -> stars(2) <- stars(1)
-> stars(5) -> stars(4) -> stars(3) <- stars(2)
-> stars(5) -> stars(4) <- stars(3)
-> stars(5) <- stars(4)
<- stars(5)

Now, you may ask yourself: why bother? The iterative version seems simpler and more familiar, so why mess around with this when it is less efficient? Well, the answer is: because there are some problems that are a lot easier to solve with recursion than with iteration. For example, imagine having to count all the different combinations of coins which could make up a given value in dollars and cents. Solving it recursively is fairly simple:

// simple recursive method
// 'coins' is an array of coin denominations, terminated by a 0 entry
int count(int sum, int *coins)
{
    if (!*coins || sum < 0)
    {
        return 0;        // ran out of coins, or overshot the target
    }
    else if (!sum)
    {
        return 1;        // exact change: this combination counts
    }
    else
    {
        // use the current coin again, or move on to the next denomination
        return count(sum - *coins, coins) + count(sum, coins + 1);
    }
}

but to solve it with an iterative approach, you would need an explicit stack data structure to keep track of where you are, which is harder to understand. While it can be proven that recursion and iteration are equivalent - that is to say, anything that can be done with iteration can be done with recursion, and vice versa - there are some problems which are easier to solve recursively.
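To make that concrete, here is a rough sketch of what the iterative version might look like (hypothetical code for comparison, not part of the original article): the pending (sum, coins) subproblems that the recursive version keeps on the call stack now have to be kept on an explicit std::stack.

#include <stack>
#include <utility>

// Iterative counterpart of count(): each pending subproblem is a (sum, coins)
// pair stored on an explicit stack instead of the call stack.
int count_iterative(int sum, const int *coins)
{
    int ways = 0;
    std::stack< std::pair<int, const int*> > pending;
    pending.push(std::make_pair(sum, coins));
    while (!pending.empty())
    {
        int s = pending.top().first;
        const int *c = pending.top().second;
        pending.pop();
        if (!*c || s < 0)
        {
            continue;        // dead end: no coins left, or we overshot the target
        }
        if (!s)
        {
            ++ways;          // exact change: one more valid combination
            continue;
        }
        pending.push(std::make_pair(s - *c, c));   // use the current coin again
        pending.push(std::make_pair(s, c + 1));    // or skip to the next denomination
    }
    return ways;
}

Called with a zero-terminated array of denominations such as int us_coins[] = { 25, 10, 5, 1, 0 };, both versions report 13 ways to make 25 cents, but the iterative one is clearly harder to follow.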

Now, I said earlier that in some cases, recursive calls can be optimized so that the activation record is re-used. The way this works is that the compiler replaces the recursive tail call with a goto or jump, and overwrites the old values of the arguments with the new ones. In C++, this would mean turning the recursive version of stars() into something like this:

void stars(int n)
{
loop:
    if (n <= 0)
    {
        return;
    }
    else
    {
        std::cout << '*';
        n--;
        goto loop;
    }
}

Of course, the programmer wouldn't see this; it would be in the code that the compiler generates, not something that you would actually write out.

Whenever people start talking about recursion I feel compelled to dispel a few myths and make a few "real world" points. Just a few things I'd like to clarify or point out:

Which [iteration/recursion] is to be used depends on the nature of the problem at hand,

The nature of the problem at hand does not influence the choice of iteration vs. recursion. It is true that some problems are more easily expressed as recursions; in fact, probably most non-trivial algorithms are more naturally expressed recursively. But that has no bearing on the implementation choice; implementation and analysis are two very different things. I sometimes write out the recursive form in the comments to express what the algorithm does, and then implement it in the iterative form.

What determines the choice of iteration versus recursion is whether certain guarantees hold: the recursion depth is limited, there are no degenerate / pathological cases that could cause extreme recursion depth, the function-call overhead is insignificant compared to the rest of the code, or the code is not destined for production or released software (e.g., some small side-project or toy example). It's actually quite rare that all these assumptions apply, but if they do, then recursion can be better when it lets you avoid relying on freestore memory allocations (dynamically allocating the stack or queue required for the algorithm's state).

and to a great degree, the language being used. Some languages, especially Lisp and the various functional languages, strongly favor recursion;

That is true. Some languages are structured to strongly favor or only allow the use of recursion. The biggest examples are "pure functional" languages. The main feature here is that all data is immutable, i.e., the language does not allow variables to have different values at different times; each new value means a new variable to store it. These languages have some analytical and practical advantages. But, if all data is immutable, you cannot implement iterations (except infinite loops), because an iteration basically relies on changing some data at every pass (e.g., ++i) and checking a particular breaking condition (e.g., i < N), and thus the only choice for doing any sort of looping / repetition is to use recursion.
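As a small C++-flavoured sketch of that point (my own illustration, not from the post above): the iterative sum below has to mutate i and total on every pass, while the recursive sum never modifies a variable after it is initialized, which is the only style available when all data is immutable.

// Iterative sum: depends on mutating 'i' and 'total' at every step.
int sum_iterative(const int *data, int n)
{
    int total = 0;
    for (int i = 0; i < n; ++i)
    {
        total += data[i];
    }
    return total;
}

// Recursive sum: no variable is ever changed after initialization; each "step"
// is a fresh call with fresh values, which is how pure functional languages
// express repetition.
int sum_recursive(const int *data, int n)
{
    if (n == 0)
    {
        return 0;
    }
    return data[0] + sum_recursive(data + 1, n - 1);
}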

By the way, in C++, template meta-programming is a Turing-complete pure-functional meta-programming language that is a part of C++, and for that, you need to use recursion for everything, including conditional statements. So, in C++, using recursion is an integral part of using the language.
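As a hedged illustration of that (a minimal sketch, not something that appeared earlier in this thread), here is the classic compile-time factorial written with template recursion; a template specialization plays the role of the base case, i.e., the "conditional" that stops the recursion:

#include <iostream>

// Recursive case: Factorial<N> is defined in terms of Factorial<N - 1>.
template <unsigned N>
struct Factorial
{
    static const unsigned long long value = N * Factorial<N - 1>::value;
};

// Base case: a full specialization stops the recursion, acting as the
// "conditional" of this compile-time pure-functional program.
template <>
struct Factorial<0>
{
    static const unsigned long long value = 1;
};

int main()
{
    std::cout << Factorial<10>::value << '\n';   // prints 3628800, computed at compile time
    return 0;
}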

but most other languages, including C++, focus almost entirely on iteration.

That's because iteration is the native way that computer architectures work. Recursive solutions (and pure functional languages) are "unnatural" to computer architectures, in the same way as linked-lists are unnatural compared to arrays (and therefore perform very poorly). This is the reason why most programming languages "favor" iterations, i.e., it matches the computer architecture they rely on and the instructions they are translated to.

novice C++ programmers are often given little if any experience with recursion

I tend to think otherwise, "students" of programming and computer science often know far too much about recursion, and tend to over-use it a lot. As I said, using recursion in implementations (as opposed to analysis) is very rarely appropriate, and you see far too many "novice" programmers that use it a lot, often thinking they are "clever" for doing so. Every computer science curriculum should include a senior-year course entitled "why everything you learned in other classes is something you should not do in practice".

few C++ compilers do any tail call optimization.

I doubt that very much. C++ compilers are, along with C compilers, the most aggressively optimizing compilers of all, no contest. Tail calls are one of the simplest of all optimizations that compilers can do. C++ compilers (which have the same back-end as the C compilers, and any other native compilers, in most cases) do this optimization very well (and I have seen many assembly listings of tail-call optimizations, even some that were quite impressive). The only thing that can be an issue in C++ is that it has classes with non-trivial destructors that are called when objects go out of scope. Sometimes it can be easy not to see that there are non-trivial objects being destroyed between the recursive call and the return from the function (the closing curly brace). This just means that programmers have to be a bit more careful when implementing functions that they wish to be candidates for a tail-call optimization. But any function that is a candidate for a tail-call optimization will certainly be optimized into an iteration. Any compiler, today, that wouldn't do that is not a compiler worth using. Your statement may have been true about 20 or more years ago, but not any more; modern compilers have become very good at optimizing, and this is one of the easiest optimizations they can do.
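To make that pitfall concrete, here is a hedged sketch (the names and code are hypothetical, not taken from anything above): a local std::string has a non-trivial destructor, and that destructor must run after the recursive call returns, so the call is no longer in tail position and cannot simply be turned into a jump.

#include <iostream>
#include <string>

// Looks like a tail call, but is not one: the destructor of 'label' must run
// after count_down() returns, so the activation record cannot be re-used.
void count_down(int n)
{
    std::string label = "step " + std::to_string(n);
    if (n <= 0)
    {
        return;
    }
    std::cout << label << '\n';
    count_down(n - 1);
    // ~std::string() for 'label' runs here, after the "tail" call returns.
}

Limiting the std::string to a scope that closes before the recursive call restores the tail position.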

And, in any case, any recursion that is a tail recursion is a recursion that is extremely simple to turn into an iteration (as you have shown with your examples of tail recursions), and there is, in general, absolutely no point in using a recursion in those cases (not even for code clarity). The only time when a tail-recursion is warranted is when an iteration is not an option, i.e., in pure functional languages, in which case, tail-call optimization does not apply anyway, because data is immutable (however, when data is immutable, optimizations can reach very far and be very elaborate, but so far, few of these languages have really had good optimizing compilers, making them very slow and resource-hungry in practice).

The reality is, "tail-call optimization" is a piece of trivia that professors like to talk about, nothing more. Understanding it exposes some of the inner workings of the architecture and process execution that can be good to teach students, but it has almost no practical application in itself. All compilers implement this optimization just in case a programmer decides to do a trivial iteration using a tail-recursion, but I would hope it's quite rarely needed.

As a result, recursion is poorly understood by many C++ programmers. This is unfortunate, as in some ways recursion is the simpler of the two approaches to looping.
...
because there are some problems that are a lot easier to solve with recursion than with iteration.

Again, this is the core issue: solving a problem is one thing, implementing a solution is another. Never mistake one for the other. When you read algorithm descriptions and analyses, or read about some data structures, these are about how to solve certain problems, and very often, they end up taking recursive forms, as the most intuitive way to express the solution or the easiest way to "solve the problem". However, after that, you still have the problem of figuring out how best to implement that solution, which is a whole new problem with a whole new set of concerns, often far more practical concerns. Recursions are prevalent in the "how to solve the problem" stage, but almost never make it past the "how to implement" stage.

I think this is what you seem (like many) to have missed. It's not like people start programming in an "iteration-favoring" language and somehow just completely miss everything about recursion. Most C++ programmers, with at least some experience, know all about recursion, but you will only rarely see that translated into actual code. In fact, one of the common everyday tasks of many programmers is actually to translate recursive solutions into iterative implementations.

That said, it is indeed a lot easier for many problems to just use a recursive solution directly (as an implementation). However, real production code is a very different beast. You can't say that you are going to build the foundation of a house out of cardboard because you find it easier to work with than concrete. But, for temporary things and quickly thrown together implementations, sure, do whatever is easiest, but don't make it too strong a habit, because, eventually, you will have to "grow up".

(END OF RANT)

novice C++ programmers are often given little if any experience with recursion

I tend to think otherwise, "students" of programming and computer science often know far too much about recursion, and tend to over-use it a lot.

Really? Perhaps things are different at your university, but most of the recent IT graduates I've spoken with in the US don't even know what recursion is, and have never used it at all. Certainly many of my colleagues from other countries - India and Pakistan especially - are completely unfamiliar with it, even after years in the industry.

Then again, most IT programs I've encountered are more like trade school courses than real university programs, with a nearly exclusive emphasis on writing code in a specific language (usually either C++ - often pre-standard - or Java from circa 2000) and a smattering of algorithmics and maybe some 'software engineering' (invariably based on 'waterfall' design concepts from before 1990). Many have never written a program longer than 250 lines, it seems, and few have had any collaboration experience; on-the-job training is often more thorough than their formal education. I can only think things must be different at McGill, if what you say is true. It's good to know some places actually teach, rather than 'train'.

Perhaps I am over-emphasizing this, but I don't think by much. If anything, it's worse than that. The number of times I've seen professors give 'fill in the blanks' projects, where they give most of the code to the students and ask them to add in a few lines at a time, is truly appalling, as is the number of students who cannot manage even such trivial exercises. There is a real, critical lack of serious IT education in the US, and much of what you do see is cargo cult-ish in the extreme. I am sadly reminded of Feynman's experiences with physics 'education' in Brazil in the 1950s: the subject is 'taught' all over the place, but nowhere is it actually being explained.

The only time I have found recursion useful is when traversing the file system, such as in this code snippet. Otherwise I try to avoid it, mainly because many years ago recursion was just too expensive in terms of memory and time. I know those aren't as critical any more, but old habits don't go away easily. I see no reason to spend time teaching something that is rarely, if ever, used. There is currently a huge debate here in the US about teaching cursive in grade school because people don't use it any more. Thanks to the computer age, cursive will one day be a lost art.

I can only think things must be different at McGill

I guess they must be. But it's not like I'm only talking about McGill either. I've been around a bit (e.g., in Europe), and also, from interactions here on Daniweb, it seems that recursion is a recurring topic (pun intended!) with the mass of computer science students that meet on this forum (but maybe that's not a representative sample of the overall population).

Anyways, at my university, it seems that the focus is a lot more on computer science and software engineering, rather than any kind of practical teaching. Just to illustrate, if you mention things like sorting algorithms, recursion, linked-lists, and other basic computer science topics, the average computer science student here will start talking all about them as if they were a world-expert on these subjects (which I have seen happen many times). But then, if you give them a simple programming exercise that involves linking with an external library (an exercise that I solved within an hour), the average computer science student will spend an entire week or two working full-time on it, simply because they have zero real-world programming skills. That's been my impression, but it is certainly not a complete generalization nor a thorough investigation; it's just from my limited experience. Mind you, I have no formal computer science education at all, it's all practical experience for me (and I only picked up some CS knowledge here and there).

The number of times I've seen professors give 'fill in the blanks' projects, where they give most of the code to the students and ask them to add in a few lines at a time, is truly appalling, as is the number of students who cannot manage even such trivial exercises.

Holy ....! I have never heard of "fill in the blanks" exercises in any kind of programming course. This is truly appalling! In the few programming classes that I have taken in the past, I always found the exercises to be kind of easy, but I assumed it was just too easy for me, given my years of practical coding experience. But it was never anything as trivial as a "fill in the blanks" thing. Not even in exams (which are always trivial in terms of coding, for obvious reasons).

(START OF RANT)

I've never studied in the US, but the more I encounter or hear about the level of education in US universities, the more I am convinced that the level is far below what it is in most other places. Don't get me wrong, some places in Europe are pretty pathetic too. Some of the star universities in the US are certainly very good, and if they aren't, they throw enough money at it that the problems "disappear" (e.g., a poorly-trained student with an unlimited budget can still achieve very impressive projects and research; that's how many large US universities manage to "stay on top"). But sometimes, I've heard about 3-year engineering programs where half of the curriculum is made up of GE courses, while, here, all engineering programs are at least 4 years and don't include any GE courses (they are done before university, in a 2-year "junior college" after high school). Literally, in my junior year, I was studying the design of supersonic scramjet engines, while an American counterpart I met, who was in the same year and program as me, was doing basic chemistry classes (e.g., mixing acid and base solutions.. rolleyes..). My impression is that the level has dropped to all-time lows now in the US. And it's not like US students are not capable; the US students who decide to come to Canada to do their university studies feel the heat in the first year or so, but quickly catch up and do as well as those who haven't been crippled by years of sub-par education.

(END OF RANT)

I'll throw in my two cents as well. I am currently going to college here in the States, and from what I have seen in my classes most programmers are doomed. Our C++ class was basically C with classes at the end. I had a couple of Java classes as well and it was not much better. Most of the time, if your code produced the desired result it was acceptable. I only had one teacher who had any standard for what your code should look like. In my C++ class some people would turn in code that was 200 lines long that could have been written in 50. No care is given to teaching students how to optimize, or that sections of code which are repeated should be put into functions. The most depressing thing about the programming classes I took is that never once did we talk about the real world. Theory is all fine and good, but if you have no idea how to apply what you know to real-world problems, what is the point? Isn't going to college supposed to prepare you for a career in your field of study? Most of the people I went to school with would need to forget almost everything they were taught, because if they tried to code like they did in school on a job they would get fired. I just think it is a disservice for schools to start you off on the wrong foot. I have noticed that a lot of the habits you get into when you start to code take a long time, if ever, to break, since that is what your whole foundation is built on.

On the other hand, I like the idea of recursion. Gone are the old days when the biggest RAM was 64 Mbit. Now we have better memory and processor speed. I don't see any difference between doing two for loops and/or a while loop and the recursion. I think the main question is when to use recursion. There are some tasks where recursion is the simplest and most effective way to go, some nearly impossible without recursion.

So for my part, I think recursion should be used with caution just like any other loop, but once you fully understand the power of recursion, it's fun to do a whole bunch of operations with just a few simple, well-thought-out lines of code. So given my vote, I will vote for recursion without looking back. ))

Ahh, young padawan, you have much to learn.

I think the main question is when to use recursion.

There are some cases where it is OK to use recursion, like Ancient Dragon's example of walking through the file system, because the file system has very limited depth and the individual steps are quite expensive (the HDD is the performance bottleneck here). These are the basic requirements for recursion to be an acceptable way to implement a solution.
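For illustration only, here is a hedged sketch of that kind of traversal using the (more recent) C++17 std::filesystem library, which is not what the linked snippet used; the recursion depth is bounded by how deeply directories are nested, which is naturally very limited:

#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// Recursively list every non-directory entry below 'dir'. The recursion depth
// equals the directory nesting depth.
void list_files(const fs::path& dir)
{
    for (const auto& entry : fs::directory_iterator(dir))
    {
        if (entry.is_directory())
        {
            list_files(entry.path());   // recurse into the subdirectory
        }
        else
        {
            std::cout << entry.path() << '\n';
        }
    }
}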

The other case where it is good to use recursion is when (1) the depth of recursion is guaranteed to be very limited and (2) you want to avoid dynamic allocation (on the heap) for storing the states, in favor of using dynamic growth of the call stack as a faster means of storing them. Basically, if you use an iterative solution, you will need a dynamically allocated queue or stack in order to store the progress (or the state at each step) of the algorithm. If you use the recursive version, that progress (or state) is stored in the activation records of the recursive function calls. Allocating on the stack is faster than on the heap, but the difference is rarely significant enough to warrant the use of a recursive solution. One example where this does apply is the intro-sort algorithm (the sorting algorithm used by the standard std::sort function, and a variant of quick-sort). In that case, it is better to use recursion because the intro-sort algorithm is designed to guarantee limited recursion depth, and when you are sorting small arrays of small data elements, the overhead of a heap allocation is significant. Another example is in embedded systems where there is no heap, only the call stack (i.e., when there is no operating system, because the chip is too small to accommodate one).
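As a rough sketch of that depth-limit idea (hypothetical code, not the actual std::sort implementation), an intro-sort-style function passes an explicit depth budget down the recursion and falls back to a non-recursive heap-sort when the budget is exhausted:

#include <algorithm>
#include <cmath>
#include <vector>

// Intro-sort-style skeleton: the recursion depth is capped by 'depth_budget',
// so the call stack cannot grow pathologically.
void sort_limited(std::vector<int>::iterator first,
                  std::vector<int>::iterator last,
                  int depth_budget)
{
    if (last - first <= 1)
    {
        return;
    }
    if (depth_budget == 0)
    {
        std::make_heap(first, last);   // fallback: heap-sort, no recursion needed
        std::sort_heap(first, last);
        return;
    }
    const int pivot = *(first + (last - first) / 2);
    // three-way partition: [ less than pivot | equal to pivot | greater than pivot ]
    auto mid1 = std::partition(first, last,
                               [pivot](int x) { return x < pivot; });
    auto mid2 = std::partition(mid1, last,
                               [pivot](int x) { return !(pivot < x); });
    sort_limited(first, mid1, depth_budget - 1);   // elements less than the pivot
    sort_limited(mid2, last, depth_budget - 1);    // elements greater than the pivot
}

void sort_limited(std::vector<int>& v)
{
    // cap the recursion depth at roughly 2 * log2(N), as intro-sort does
    const int budget = 2 * static_cast<int>(std::log2(v.size() + 1)) + 1;
    sort_limited(v.begin(), v.end(), budget);
}

When the budget runs out, the heap-sort fallback finishes the job without any further growth of the call stack, which is exactly the kind of guarantee that makes the recursion acceptable here.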

some nearly impossible without recursion.

There is nothing that is nearly impossible to do without recursion. In fact, it is often very simple to do. Except for the trivial case of tail-recursion (which turns into a trivial loop), recursive implementations usually boil down to some form of tree traversal (the tree may be real or implicit). A breadth-first traversal turns into a simple queue and a while loop. A depth-first traversal turns into a simple stack and a while loop. A best-first traversal turns into a priority-queue and a while loop. An in-order traversal turns into a simple stack, and a while loop with a few special changes (compared to depth-first). And so on for any other recursion. It's true that the recursion is simpler to implement, but the iterative version is hardly a challenge to write.
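As one hedged illustration of that pattern (the node type here is hypothetical, not code from this thread), a breadth-first traversal of a binary tree really does boil down to a queue and a while loop:

#include <iostream>
#include <queue>
#include <string>

// Hypothetical binary tree node, used only for this illustration.
struct Node
{
    std::string name;
    Node* left;
    Node* right;
};

// Breadth-first traversal: the recursion is replaced by a queue of pending
// nodes and a plain while loop.
void visit_breadth_first(Node* root)
{
    std::queue<Node*> pending;
    pending.push(root);
    while (!pending.empty())
    {
        Node* current = pending.front();
        pending.pop();
        if (!current)
        {
            continue;
        }
        std::cout << current->name << '\n';
        pending.push(current->left);
        pending.push(current->right);
    }
}

Swap the queue for a stack and you get a depth-first traversal; swap it for a priority queue and you get a best-first traversal.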

once you fully understand the power of recursion

The "power" of recursion is merely an illusion. Recursion has some analytical power, i.e., it is often more convenient when analysing an algorithm. But it has no real "power" when it comes to implemention.

it's fun to do a whole bunch of operations with just a few simple, well-thought-out lines of code.

I agree, it's fun, but not "professional". Making a sand castle on the beach is fun, but expecting that it will survive beyond the next rising tide is naive.

Gone are the old days when the biggest RAM was 64 Mbit. Now we have better memory and processor speed.

Well, first, those days are not gone; there are still many small embedded systems, micro-controllers, micro-PCs, etc., that have very limited resources and require very light-weight and fast implementations. Second, the rate of increase of the amount of stuff that we ask computers to do these days far surpasses the rate of improvement of the performance/capabilities of the hardware. And finally, even if the hardware is fast enough to run the code fast enough, sub-optimal performance still translates to higher power consumption than is strictly necessary. The day you pay the electric bills for a large server farm, you will be very happy if your programmers did a good job with the implementation of the server applications, because saved clock-cycles will translate to more profits / productivity. That "performance doesn't matter because hardware is so good now" argument from the 90s has largely been dispelled in recent years (the last 5-10 years) for the reasons I just mentioned.

That "performance doesn't matter because hardware is so good now" argument from the 90s has largely been dispelled in the latest years (5-10 years) for those reasons I just mentioned.

For server applications (e.g., DaniWeb) that may be true, but I doubt it for desktop apps where there is only one person using the whole computer.

True, there is a lot more room to breathe when writing end-user (GUI) applications. But if things are really moving to the "cloud", then that might change, as those end-user apps are now supposed to run on the server (and then the economic argument applies). In any case, I still prefer writing well-performing code regardless, even if it's just for the elegance of it. I guess that is another perspective on this "recursion vs. iteration" debate. Just as richieking pointed out the elegance of recursive implementations, I would point out the elegance of efficient implementations. In other words, what is more elegant: a 4-line implementation of bubble-sort, or a highly-optimized, couple-hundred-line implementation of intro-sort? If we're talking elegance, I prefer the latter (which, ironically, is a rare case of a good use of recursion).

In any case, I still prefer writing well-performing code regardless, even if it's just for the elegance of it.

Yes, I agree. Just because programs can be bigger and run faster today than in years past doesn't mean programmers can write sloppy, bloated, and redundant code.


Weren't recursive algorithms inefficient?
I mean, if you look at recursive Fibonacci's runtime you'll see it's not efficient at all, O(N^2)!

But if things are really moving to the "cloud"

From what I can see, "cloud computing" is nothing more than distributed computing over the internet. Several programs, such as Folding @ Home, have been doing that for 10+ years now.

Weren't recursive algorithms inefficient?
I mean, if you look at recursive Fibonacci's runtime you'll see it's not efficient at all, O(N^2)!

It depends greatly on the particular algorithm. While recursion is generally less efficient due to overhead issues (which can be obviated in some instances by tail-call optimization, and by memoization for functional programs), most recursive algorithms are not inherently less efficient than their iterative equivalents. The Fibonacci algorithm is a specific case where, yes, the naive tree-recursive form is grossly inefficient compared to the naive iterative algorithm, but this isn't going to hold for all recursive algorithms.
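For concreteness, here is a hedged sketch (illustrative code, not from this thread) of the naive tree-recursive Fibonacci next to the linear iterative version; the recursive form recomputes the same subproblems over and over, which is where the inefficiency comes from:

// Naive tree recursion: each call spawns two more, so the same subproblems
// are recomputed again and again (exponential running time).
unsigned long long fib_recursive(unsigned n)
{
    if (n < 2)
    {
        return n;
    }
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

// Iterative version: one pass, constant space, linear running time.
unsigned long long fib_iterative(unsigned n)
{
    unsigned long long a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i)
    {
        unsigned long long next = a + b;
        a = b;
        b = next;
    }
    return a;
}

A memoized recursive version brings the cost back down to linear while keeping the recursive shape.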

OTOH, as mike_2000_17 points out, iteration is in general a better 'fit' to the conventional random-access sequential machine model. Even in languages like Scheme, where recursion is the natural means of repetition, it is common to use iterative algorithms (even if they are implemented by means of recursion) for anything requiring efficiency.

Mike_2000_17, what are you saying??
That recursion should not be practised, or are you saying recursion is worse than other loops? Are you serious? Do you want to see a benchmark? I can't get you.
Please explain yourself, because I have seen how you have dismantled my comments without any benchmark. We can't give a wrong idea like that without any tangible proof. Like I said, there is not much difference between the recursion method and other loops, and in some cases it's the best way to go. So what is it that you disagree with?
The way you have commented seems a bit strange for a pro. You may not like recursion, but its power and elegance cannot be disputed.
I would prefer a benchmark from you before smashing recursion like that, because I have well-accepted benchmarks to prove your assertions wrong.

If the function contains just one recursive function call, then it's true that expressing the function iteratively is often fairly simple. But tree-walking algorithms that contain two or more recursive function calls are much more natural when expressed recursively. This recursive example walks a binary tree. Is there a simple way to rewrite this using iteration?

#include <cstdio>

typedef struct node_s
{
    char name[100];
    struct node_s * left;
    struct node_s * right;
} node_t;

void process_node(node_t *node)
{
    if (!node) return;
    printf("%s\n", node->name);
    process_node(node->left);
    process_node(node->right);
}



Yes, there is a fairly straightforward way to write this as an iteration. This is a depth-first traversal, and it turns into a loop with a stack representing the nodes still to be traversed:

#include <cstdio>
#include <stack>

void process_node(node_t *node)
{
    std::stack<node_t*> tasks;
    tasks.push(node);
    while (!tasks.empty())
    {
        node_t *cur_node = tasks.top();
        tasks.pop();
        if (!cur_node)
            continue;
        printf("%s\n", cur_node->name);
        // push right before left, so that left is traversed first (it ends up on top)
        tasks.push(cur_node->right);
        tasks.push(cur_node->left);
    }
}

Of course, this does not look as nice. But the main point here is that the maximum depth of the recursion is equal to the maximum depth of the tree that is traversed (the depth of the deepest leaf node). In the recursive form, the "stack" (i.e., "tasks" in the iterative form) is formed on the call stack (in successive activation records of the recursive function calls). Forming the stack on the call stack is more efficient than forming it in heap-allocated memory (as the iterative form does), but the call-stack memory is limited, and if you run out of it, the program will suffer an immediate halt (crash). And predicting the amount of space you have available on the stack is not really possible in a reliable way, and running out is also very hard to handle, even if you could predict it. This is not a problem with heap-allocated memory, which is only limited by the amount of memory you have on the system as a whole (RAM). So, if you cannot guarantee that the depth of recursion is very limited (such as O(log(N))), you cannot use the recursive form, period. That is why it is OK to analyse and understand an algorithm in recursive form, but more often than not, you must turn it into an iterative form afterwards, when you implement it.

So my question to mike_2000_17 is: what makes you think that your code is better than Harod_2's recursion??

Like I pointed out to you, recursion is more elegant, simple but effective to read. The concept and use of recursion can be a personal choice, but I think you went a bit far with your previous comment about recursion.

I think Mike explained it pretty well in his last post -- recursion can often run the program out of stack space, and when that happens your entire program will just crash. Now, you tell us which is more elegant -- pretty code that crashes often, or not-so-pretty code that runs without problems?

AD... That is not true.

"Now, you tell us which is more eligent -- pretty code that crashes often, or not-so-pretty code that runs without problems?"

All basic loops and recursions can crash. The main problem is the algorithm in place and the amount of memory it uses, so the concern of crashing an application due to memory is very trivial.

What I am demanding is the overall effectiveness and readability of the two codes. Like I said, and I will repeat it again: there are situations where recursion is the right way to go.

Your point here, Mr. AD, is not good coming from a pro. Pretty and elegant code is well contained and maintained, and easier to debug than worm code. Writing worm code is more dangerous.

Yes, loops can crash too, but everything else being equal recursion is more likely to crash than loops.

Like I said, and I will repeat it again: there are situations where recursion is the right way to go.

I absolutely agree with that statement, and I posted a link to one example of it.

Pretty and elegant code is well contained and maintained, and easier to debug than worm code.

Yes, but not if pretty and elegant code crashes due to finite memory constraints. There is no one right way -- it all depends on the situation, on the hardware, and on the operating system. What may work perfectly fine on one operating system and computer may not work at all on another.

BTW: Don't try to win with Mike -- he never loses :)

Ok... I'm reluctant to jump into this boat again, and just repeat what I've already said, but here we go.

That recursion should not be practised, or are you saying recursion is worse than other loops? Are you serious? Do you want to see a benchmark?

When I read that post / those questions I was a bit startled, because I thought "oh no, did I make unsupported claims about iterative implementations being faster than their recursive counterparts", because that would have been a mistake, as it is wrong in general. Except for tail-call recursions, where iteration is definitely faster, recursion will be faster in general. It varies from being quite a bit faster (in simple cases) to making almost no difference at all, but it's very rare, with a good optimizing compiler, that a recursive implementation will be much slower.

However, I was quickly reassured when I looked back on my previous posts and found that I never made such a claim. You must have misinterpreted what I said. I'm arguing that iterative implementations are preferable to recursive implementations most of the time, but performance isn't a factor here.

All basic loops and recursions can crash.

All code can have bugs, is that what you are saying? But we are not talking about crashes due to bugs. Assuming a correct, bug-free implementation, you can prove that an iterative implementation will always execute correctly and completely. But you can never prove that for a recursion, because it requires an undetermined amount of stack space beyond the first frame, which means that its execution context (the program within which it is called) determines the correctness and completeness of the implementation. That entanglement means that you can never guarantee the correctness of a recursion. That's why its application is limited (either you must have relaxed requirements (i.e., not production code), or you must control the entire execution environment (e.g., embedded no-OS applications)).

The main problem is the algorithm in place and the amount of memory it uses, so the concern of crashing an application due to memory is very trivial.

A single point of failure is never a trivial concern. You say that the "main problem is the algorithm", i.e., the time and space complexity of the algorithm, and, of course, its correctness for the given problem. That is the first step, i.e., the "how to solve the problem" step that I talked about earlier, in contrast to the "how best to implement the solution" step. Once you have chosen the best solution (algorithm) for solving a given problem, you move on to deciding how to implement it. And, at that point, having a single point of application-wide failure like running out of stack memory is indeed a critical concern. Remember, at that stage of the game, the concerns you must consider are no longer related to the space-/time-complexity or correctness of the algorithm, but move on to the performance and reliability of the implementation, and the latter takes precedence over the former, in general.

What I am demanding is the overall effectiveness and readability of the two codes.

The effectiveness and readability of the 2 codes is pretty much the same. Effectiveness is the same because they both implement the exact same algorithm, so, there can't be a difference. If you are talking about effectiveness as in "lines of code needed" for the implementation, then you and I are talking very different languages here, because the number of lines of code is not a measure of effectiveness in my book.

As for readability, I can concede that some people are not used to seeing this pattern of code (i.e., an iterative depth-first traversal). I can equally easily concede that some people find the for(int i=0;i<N;++i) loop hard to understand compared to the more explicit int i=0; while(i<N) { /*..*/ ++i; } version of it. I guess the point of this (gratuitous) example is that readability will always involve the reader and what he is or isn't used to seeing. And an iterative depth-first traversal is, in my perspective, a very common and recognizable pattern (just like other iterative tree traversals), just like the for-loop is. I wouldn't consider this a readability liability in code.

Pretty and elegant code is well contained and maintained, and easier to debug than worm code. Writing worm code is more dangerous.

I don't consider those two versions as being any different on beauty and elegance (like I said, they are subjective). But I completely agree that maintainability is very important and that the clarity/readability of the code is important for that purpose. But again, the two codes are not really different in that regard.

And I guess by "worm code", you mean "spaghetti code". If you want to see spaghetti code, I can show you some real spaghetti code. I can guarantee you that the iterative code I posted does not come close to qualifying as such. But yes, writing spaghetti code is dangerous.

As for ease of debugging, I don't know why you would imply that a recursive implementation is easier to debug. Or maybe we have a different concept of what debugging is. But I must concede that I am not too skilled at debugging, at least, it's a skill that I lost over time from a lack of need for it. But I find it hard to imagine how a recursive implementation would be easier to debug, given that it pollutes the stack with so many activation records. But I guess a good debugger is able to unroll those back-traces reasonably well... I just wouldn't know. Most "debugging" I do involves finding implementation errors when unit-tests fail, and with a short-enough testing cycle, that's rarely something that takes much time or is hard to find, and again, it's just as easy to find it (from a readability POV) in an iterative version as in a recursive version.

recursion will be faster in general. It varies from being quite a bit faster (in simple cases) to making almost no difference at all, but it's very rare, with a good optimizing compiler, that a recursive implementation will be much slower.

If you prototype the two functions, yes, the recursive algorithm will probably be faster than the iterative algorithm for solving the same problem. But the clock cycles needed to perform recursion are most definitely more than the clock cycles needed to perform a loop.

But the clock cycles needed to perform recursion are most definitely more than the clock cycles needed to perform a loop.

Yes, the recursive code will include function call prologs and epilogs, unless they are partially or completely optimized away. But unless it's extremely trivial code (like a tail-call), you are going to need some dynamically-allocated memory to replace the build-up of stack-memory in the recursive form. And that is really the main performance difference, that allocating stack-memory is a cheap constant-time operation (just a few integral arithmetic operations, i.e., a few clock cycles), while allocating dynamic memory is a more expensive amortized constant operation (invoking the heap). So, the performance difference really boils down to whether the dynamic allocation is significant or not compared to the rest of the code.

You can also take the dynamic allocation out of the equation by using something like C99 VLAs (variable-length arrays), which are dynamically-sized, single-allocation, stack-based arrays. In that case, the iterative solution would probably be faster on a non-optimizing compiler. But on a good optimizing compiler (with some special optimizing flags set, e.g., -fomit-frame-pointer), the recursive code is probably going to be turned into nearly identical machine code to the iterative version. It would be interesting to find out; maybe I'll set up some test code for that.

So, I did those tests and because the results were so interesting, I decided to post it as a tutorial of its own. Check it out.

The right answer depends strongly on external unspecified requirements and environmental conditions. If you know that the depth of the tree is limited, AND you want a solution with small, deterministic execution time, then recursion is the right answer. If you want a robust implementation that can handle any tree depth, then by all means use dynamic memory allocation and iteration.

Mike says that call stack memory is limited. That's actually a property of the runtime environment, and not necessarily true for all implementations of the C standard. Some implementations can grow the stack to use all available memory, same as the heap.

Some implementations can grow the stack to use all available memory, same as the heap

But it is still limited -- there is no such thing as an infinite amount of memory or stack space. So Mike's statement is still correct. But -- any program that attempts to use that much memory and/or stack space needs to be redesigned.
