Dear All,
I am interested in understanding the time complexity of inorder traversal of a binary tree. I would also be happy if you could help me understand the space complexity for the same. I am including a simple code snippet for the inorder traversal.

void inorder(struct btree *sr)
{
    if (sr != NULL)
    {
        inorder(sr->leftchild);
        printf(" %d", sr->node_no);
        inorder(sr->rightchild);
    }
}

To traverse a binary tree you have to touch every node in the tree once. From that you can infer that the time complexity is [TEX]O(n)[/TEX]. You can take it further by postulating that, because the recursive algorithm actually visits every internal node 2 times** to complete the traversal, the time complexity is [TEX]O(2x + y)[/TEX], where x is the number of internal nodes, y is the number of external nodes, and [TEX]x + y[/TEX] gives you n.

The space complexity is the same [TEX]O(n)[/TEX] for the same reason. If you take it further, the most space that's ever going to be maintained at any time is the space for the longest branch, because that's the maximum number of stack frames that will be pushed at any given time by the recursion. If the tree isn't balanced then the longest branch could be n nodes, which makes the space complexity [TEX]O(n)[/TEX]. If the tree is balanced, the space complexity can't be more than the height of the tree. The height of a balanced tree is always going to be [TEX]\log n[/TEX], so the space complexity, if you assume sufficient balance, is [TEX]O(\log n)[/TEX].


** because it recurses in and then back out
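To make the stack-space argument concrete, here's a sketch of the same traversal done with an explicit stack instead of recursion (the function and variable names are mine, not from the original snippet; it records the visited keys into an array and reports the peak stack depth, which is the height of the tree in nodes):

```c
#include <stddef.h>

/* Same node layout as the recursive snippet above. */
struct btree {
    int node_no;
    struct btree *leftchild;
    struct btree *rightchild;
};

/* Iterative inorder traversal.  The explicit stack only ever holds
 * the nodes on the current root-to-leaf path, so its peak depth is
 * the height of the tree: O(n) in the worst (unbalanced) case,
 * O(log n) if the tree is balanced.  Returns the number of nodes
 * visited; *peak_depth receives the maximum stack depth reached. */
int inorder_iterative(struct btree *sr, int *out, int *peak_depth)
{
    struct btree *stack[64];   /* assumes the height is under 64 */
    int top = 0, n = 0;
    struct btree *cur = sr;

    *peak_depth = 0;
    while (cur != NULL || top > 0) {
        while (cur != NULL) {          /* walk down the left spine */
            stack[top++] = cur;
            if (top > *peak_depth)
                *peak_depth = top;
            cur = cur->leftchild;
        }
        cur = stack[--top];            /* visit the node */
        out[n++] = cur->node_no;
        cur = cur->rightchild;         /* then do the right subtree */
    }
    return n;
}
```

The memory that the recursive version keeps implicitly in stack frames is exactly what `stack[]` holds here, which is why both are bounded by the height of the tree.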

Dear Mr Hammirick,
Thanks for the reply. I have partially understood what you have explained to me. Since the time complexity is of the order O(2x + y): x is the nodes of the tree that get visited twice, but what is y? You have explained y as external nodes. I have a little confusion about it. I would be happy if you could kindly explain a little more elaborately how you stepped to this conclusion.

Thanks and Regards,
Brijendra Mishra

Dear All,
Since understanding of induction is very important in the analysis of algorithms, I have studied it, but I have never practically put it to use while designing my programs. Can you kindly explain to me how one should exploit this concept while designing an algorithm using loops and recursion? A simple example will help me to understand the concept a little more clearly.

Thanks and regards,
Brijendra Mishra

Of course, when every internal node has two children, y = x + 1.

I'd like to nitpick the statement that the height is [TEX]\log n[/TEX]. On a balanced tree, it's correct to say that the height is [TEX]O(\log n)[/TEX], but not exactly (or even remotely close to, in any base) [TEX]\log n[/TEX].

x is the nodes of the tree that get visited twice, but what is y?

x represents internal nodes, that is, nodes with one or more children. y represents the rest of the nodes, the ones with no children, and they're called external nodes.

          6
       /   \
     5       9
    / \     /
   1   4   8
  /   /     \
 0   3       7

[1,4,5,6,8,9] are the internal nodes because they have at least one child. [0,3,7] are the external nodes because they have no children. Actually I was wrong about the 2x, because if an internal node has 2 children it's visited 3 times. I guess the exact time complexity would be [TEX]O(3x + 2y + z)[/TEX], where x represents nodes with 2 children, y represents nodes with 1 child, z represents nodes with no children, and x + y + z = n. Big O throws away the constant factors that don't grow, so you get [TEX]O(x + y + z)[/TEX], which when added together gives [TEX]O(n)[/TEX]. I think that's a tight bound too, so you can really say [TEX]\Theta(3x + 2y + z)[/TEX] and be right. :) I'm not sure if you can get away with [TEX]\Theta(n)[/TEX] though.
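You can check that count mechanically. Here's a little sketch of mine (not from the posts above, but using the same node layout) that counts the "touches": control reaches a node once on the way in, plus once more after returning from each non-NULL child, so a node contributes 1 + (number of children) touches.

```c
#include <stddef.h>

struct btree {
    int node_no;
    struct btree *leftchild;
    struct btree *rightchild;
};

/* Count how many times the recursive traversal "touches" each node:
 * arriving at it, plus coming back up from each existing child.
 * A node with 2 children is touched 3 times, with 1 child 2 times,
 * with no children once -- i.e. the 3x + 2y + z total. */
int touches(struct btree *sr)
{
    if (sr == NULL)
        return 0;
    int t = 1;                          /* arriving at the node    */
    if (sr->leftchild)  t += 1;         /* returning from the left */
    if (sr->rightchild) t += 1;         /* returning from the right */
    return t + touches(sr->leftchild) + touches(sr->rightchild);
}
```

For the 9-node tree above, x = 2 (nodes 6 and 5), y = 4 (nodes 1, 4, 9, 8) and z = 3 (nodes 0, 3, 7), so 3x + 2y + z = 17, which is 2n - 1; every edge is crossed once down and once up, plus one touch for the root.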

I'd like to nitpick the statement that the height is [TEX]\log n[/TEX].

You're right. I should have said [TEX]2\log(n+1)[/TEX].

Dear All,
Since understanding of induction is very important in the analysis of algorithms, I have studied it, but I have never practically put it to use while designing my programs. Can you kindly explain to me how one should exploit this concept while designing an algorithm using loops and recursion? A simple example will help me to understand the concept a little more clearly.

Algorithm design is an exercise in procrastination. Ask yourself, "what's the least I can do that will make this problem easier?" And then do that. For example, take the algorithm that finds the sum of an array of length N.

First, you might think, "okay, to find the sum, well, all we have to do is find the value at index 0, and then add to it the sum of the values from indices 1 to N-1."

So now you're interested in finding the sum from 1 to N-1. And to do that, you just take ray[1] (let's assume that 'ray' is the name of our array) and add it to the sum of the array elements from 2 to N-1. And in general, it looks like we'll want an algorithm that takes the sum of an array from indices i to j, for arbitrary values of i and j, where i <= j. And if i = j, we have a simple problem: finding the sum of an empty slice of array (and that sum is zero).

So we could have

int sum_slice(int* a, int i, int j);

int sum(int* ray, int n) {
  return sum_slice(ray, 0, n);
}

int sum_slice(int* a, int i, int j) {
  if (i == j) {
    return 0;
  }
  else {
    return a[i] + sum_slice(a, i + 1, j);
  }
}

Now that solves the problem, and we can prove that sum_slice works using induction. We know it works for the case (i,j) where i = j, since [TEX]\sum_{k=j}^{j-1} a_k = 0[/TEX], and we know that if it works for the case (i+1,j), then it works for the case (i,j): if sum_slice(i+1,j) returns [TEX]\sum_{k=i+1}^{j-1} a_k[/TEX], then sum_slice(i,j) returns [TEX]a_i + \sum_{k=i+1}^{j-1} a_k[/TEX], which equals [TEX]\sum_{k=i}^{j-1} a_k[/TEX].

That's the kind of thinking that you need for recursive algorithms (which includes those implemented by loops -- they're recursive but they just modify the current stack frame instead of making a new one, and you have to be all careful in how you write your code). Of course, there's a much better way of finding the sum of an array, where instead of adding the first element to the remaining sum, you feed off elements into an accumulator, one at a time, which means you're using constant stack space.

Then instead of exploiting the property that a + sum_slice(i+1,j) = sum_slice(i,j), you're exploiting the property that s + a + sum_slice(i+1,j) = s + sum_slice(i,j).
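That accumulator version might look like this (a sketch of mine; the names sum_acc and sum_loop are not from the posts above):

```c
/* Accumulator version of the sum: instead of deferring the addition
 * until the recursive call returns, carry the partial sum s forward.
 * The recursive call is in tail position, so a compiler that does
 * tail-call elimination can run it in constant stack space; the
 * loop version below does so by construction. */
int sum_acc(const int* a, int i, int j, int s)
{
    if (i == j)
        return s;
    return sum_acc(a, i + 1, j, s + a[i]);   /* tail call */
}

/* The same idea written as a loop: s + (sum of a[i..n)) is the
 * invariant that the induction proof exploits. */
int sum_loop(const int* a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```

The inductive property is the one stated above: at every step, s plus the sum of the untouched slice is unchanged, and when the slice is empty, s is the answer.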

Another example is looking for an element in a sorted array. You can just say, "it's either the first element of the array, or it's in the rest of the array." That's a true statement. So you have


int find(int f, int* a, int i, int j) {
  if (i == j) {
    return -1;
  }
  else {
    if (a[i] == f) {
      return i;
    }
    else {
      return find(f, a, i+1, j);
    }
  }
}

And there's a much better way to implement this: binary search. That uses the fact that, if m is the middle index of the range, then when a[m] is less than f we only need to search in the range [m+1, j), and when it's greater, we need only search in the range [i, m).
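A sketch of that binary search, in the same slice style as find above (the name bfind is mine):

```c
/* Binary search over a sorted slice [i, j): compare f with the
 * middle element and discard the half that cannot contain it.
 * Each call halves the range, so it makes O(log n) comparisons.
 * Returns the index of f, or -1 if it isn't present. */
int bfind(int f, const int* a, int i, int j)
{
    if (i == j)
        return -1;                      /* empty slice: not found */
    int m = i + (j - i) / 2;            /* midpoint without overflow */
    if (a[m] == f)
        return m;
    else if (a[m] < f)
        return bfind(f, a, m + 1, j);   /* f can only be to the right */
    else
        return bfind(f, a, i, m);       /* f can only be to the left */
}
```

The inductive argument is the same shape as before: the base case is the empty slice, and the step is "if bfind is correct on the half-sized slice, it's correct on the whole slice," because the discarded half provably cannot contain f.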

So the practice of devising an algorithm for a problem is that of finding a mathematical property that lets you solve an equivalent problem, one that is easier and can be reached with less work. You use this mathematical property in a recursive (or iterative) manner, and to prove your algorithm works, you end up exploiting that property in your inductive proof.

To get the running time of the algorithm, though, you apply the same sort of inductive reasoning to the running times themselves -- setting up and solving a recurrence. That's for analysis, not for designing the algorithm.

You're right. I should have said [TEX]2\log(n+1)[/TEX].

What? How can the height of a tree be a non-integral value?

What? How can the height of a tree be a non-integral value?

I can remove the unused precision if it bothers you. ;)

But why is your logarithm in the base [tex]\sqrt{2}[/tex]?

Probably because I don't know how to put it into perfect mathematical notation... Honestly, I have no idea what you're talking about.

Thank you, Mr. Rashakil Fol.
I have tried to understand your explanation of the mathematical induction proof of the algorithm. I think I will have to read my books on induction and then try to understand the explanation once again. I am sure that after a little more analysis of the problems I will be able to put more queries in this regard. Thank you for your detailed explanation.
Brijendra Mishra

Dear all,
I am currently studying the concept of algorithmic analysis a little more deeply.

I would like to know which algorithms (with examples) fall in the category of O(log n).

Thanks and Regards,
Brijendra Mishra

Do you understand big O notation?

Do you understand big O notation?

Yes, I was trying to look for examples which give us O(log n) behaviour. Can you suggest some?
Thanks and regards

Generally speaking, algorithms which cut the input size in half on each constant-time iteration.
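Binary search above is the classic example. Another one, as a sketch (my own example, not from the thread), is exponentiation by squaring: the exponent is halved on every pass of the loop, so computing b^e takes O(log e) multiplications instead of e.

```c
/* Exponentiation by squaring: repeatedly halve the exponent,
 * squaring the base, and multiply the current base into the
 * result whenever the exponent's low bit is set.  The loop runs
 * once per bit of e, i.e. O(log e) times. */
long ipow(long b, unsigned e)
{
    long r = 1;
    while (e > 0) {
        if (e & 1)       /* low bit set: this power contributes */
            r *= b;
        b *= b;          /* square the base */
        e >>= 1;         /* halve the exponent */
    }
    return r;
}
```

Other familiar members of the O(log n) family work the same way: each step throws away a constant fraction of the remaining input (searching a balanced binary search tree, finding an element in a sorted array, and so on).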
