Actually, I don't have a complete grasp of big-O notation, which is usually used to describe time complexity. My question: in big-O notation, why do we ignore constants? For example, if f = 100n, we still say our algorithm has order-n time complexity, O(n), rather than order 100n.

http://en.wikipedia.org/wiki/Big_O_notation

> why we ignore constant for example f=100n
Because it's meaningless when you're considering the "asymptotic" behaviour, i.e. what happens as n grows without bound. Informally, 100 × infinity is no different from 50 × infinity, or from infinity itself. (The sketch after the list below makes this concrete.)

Such constant factors and lower-order terms only come into play for small n, and they vary widely depending on such things as
- the processor
- the compiler
- the language
- the skill of the programmer
- the actual size of n
- the actual data being manipulated
- etc etc.
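
To make the constant-factor point concrete, here is a minimal sketch (the step counts are hypothetical values chosen for illustration): two loops over the same input, one doing 100 units of work per element (the question's f = 100n) and one doing a single unit. Doubling n doubles both totals, so the growth rate, which is what big-O measures, is identical.

```python
def count_ops_1x(n):
    """O(n): one unit of work per element -> f(n) = n."""
    ops = 0
    for _ in range(n):
        ops += 1          # 1 step per element
    return ops

def count_ops_100x(n):
    """Still O(n): 100 units of work per element -> f(n) = 100n."""
    ops = 0
    for _ in range(n):
        ops += 100        # 100 steps per element
    return ops

for n in (1_000, 2_000, 4_000):
    print(n, count_ops_1x(n), count_ops_100x(n))
# n doubles -> both totals double (1000/2000/4000 vs 100000/200000/400000).
# The ratio between them stays a constant 100; the shape of the growth is the same.
```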

O-notation

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = {f(n) : there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}.

We use O-notation to give an upper bound on a function, to within a constant factor. That is the formal definition, and of course what Mr. Salem said above applies as well.
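
To tie this back to the original question, here is a worked instance of the definition with f(n) = 100n (the witnesses below are just one valid choice; any c ≥ 100 would do):

Take g(n) = n, c = 100, and n₀ = 1. Then for all n ≥ 1,
0 ≤ 100n ≤ 100·n = c·g(n),
so f(n) = 100n ∈ O(n). The constant 100 is absorbed into c, which is exactly why it never shows up in the notation.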

Hope it helped, bye.
