I know that NP problems can't be solved in polynomial time. But what do NP-complete and NP-hard problems mean?

I know that NP problems can't be solved in polynomial time.

Since NP is a superset of P, that can't possibly be true for all problems in NP. It is likely that NP-hard problems can't be solved in polynomial time, but that rests on the open question P ?= NP, which has not yet been settled either way.

But what do NP-complete and NP-hard problems mean?

An NP-hard problem is a problem that is "at least as hard" as every problem in NP. That is, the most efficient solution for an NP-hard problem can't be more efficient than that for any other NP problem. Note that this includes problems that are not themselves in NP.

An NP-complete problem is an NP-hard problem that is in NP.

So NP-hard problems are those that scientists believe can't be solved in polynomial time (no algorithm to solve them in polynomial time). NP-complete problems are problems that haven't been solved by any algorithm in polynomial time but might be solvable in polynomial time (an open research field).
Right?

P problems are problems that can be solved by deterministic Turing machine in polynomial time.

NP problems are problems that can be solved by a non-deterministic Turing machine in polynomial time.

Since a non-deterministic Turing machine can always be at least as efficient as a deterministic one, it follows that all P problems are also NP problems. So NP is a superset of P. It is not known whether NP is a strict superset of P, but that is generally believed to be the case.
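There is also an equivalent way to state the NP definition that avoids non-deterministic machines entirely: a problem is in NP if a proposed solution (a "certificate") can be checked by a deterministic machine in polynomial time. A minimal sketch using subset-sum (the function name and certificate representation are my own, just for illustration):

```cpp
#include <set>
#include <vector>

// Subset-sum is in NP: given a candidate subset (the "certificate",
// here a list of indices into nums), a deterministic machine can check
// it in polynomial time, even though *finding* such a subset may take
// exponential time.
bool verify_subset_sum(const std::vector<int>& nums,
                       const std::vector<int>& certificate,
                       int target) {
    std::set<int> seen;
    long long sum = 0;
    for (int idx : certificate) {
        if (idx < 0 || idx >= static_cast<int>(nums.size())) return false;
        if (!seen.insert(idx).second) return false; // indices must be distinct
        sum += nums[idx];
    }
    return sum == target;
}
```

Finding a valid certificate may be hard; checking one is cheap, and that asymmetry is exactly what NP captures.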

So NP-hard problems are those that scientists believe can't be solved in polynomial time (no algorithm to solve them in polynomial time).

NP-hard problems are believed not to be solvable in polynomial time (by deterministic Turing machines), but that is not their definition. That is, if computer scientists' beliefs change, that will not change which problems are NP-hard.

NP-complete problems are problems that haven't been solved by any algorithm in polynomial time but might be solvable in polynomial time (an open research field).

It is not known whether NP-complete problems can be solved in polynomial time. If they can this would mean that P=NP. This is generally believed not to be the case.

To summarize: there are some NP-hard problems that are in NP (which we call NP-complete) and some which are not. Those that are may or may not be solvable in polynomial time (but probably not). Those that are not can definitely not be solved in polynomial time.

What is meant by deterministic and non-deterministic algorithms? I read about them but am not getting them. I only have an abstract definition of them.

For NP-hard problems, is it proved that there are no algorithms that can solve them in polynomial time?
For NP-complete problems, there have not yet been algorithms that can solve these problems in polynomial time, but you cannot completely rule out the possibility.

For NP-hard problems, is it proved that there are no algorithms that can solve them in polynomial time?

Not yet, no.

For NP-complete problems, there have not yet been algorithms that can solve these problems in polynomial time, but you cannot completely rule out the possibility.

If the NP-complete problem is solved, all subsets will be solved as well, including NP-hard.

For NP-hard problems, is it proved that there are no algorithms that can solve them in polynomial time?

For some it is. That is, there are problems that we know can't be solved in polynomial time, and those problems are NP-hard but not NP-complete. There are also NP-hard problems where we don't know whether they're NP-complete. And then there are the NP-complete problems, where we know that they can be solved in polynomial time if and only if NP=P.

For NP-complete problems, there have not yet been algorithms that can solve these problems in polynomial time, but you cannot completely rule out the possibility.

Right.

If the NP-complete problem is solved, all subsets will be solved as well, including NP-hard.

NP-hard isn't a subset of NP-complete - it's the other way around.

What do you mean by deterministic and non-deterministic algorithms?

The difference between a deterministic Turing machine and a non-deterministic one is explained in the Wikipedia article about non-deterministic Turing machines.

In short, a non-deterministic Turing machine can be in multiple states at once, whereas a deterministic one can only be in one state at a time.

Greetings,

First of all, it is better to speak of deterministic/non-deterministic methods rather than algorithms, as the latter term applies only when there is a finite sequence of unique instructions. From that we have the following:

Deterministic methods: each step corresponds to a unique instruction or decision, so we speak of an algorithm, whereas

Non-deterministic methods: each step presents many choices, and one must choose the suitable one among them (e.g., genetic algorithms); in such a case we speak of a pseudo-algorithm.

Kind regards.

Non-deterministic methods: each step presents many choices, and one must choose the suitable one among them (e.g., genetic algorithms); in such a case we speak of a pseudo-algorithm.

We generally call that a probabilistic method, or probabilistic algorithms. This is quite different from the concepts of deterministic / non-deterministic Turing machines. A non-deterministic Turing machine does not "choose the suitable one" from the many choices, but rather, it simultaneously follows them all (that's the only practical way to look at it). They are called non-deterministic because the result (i.e., the successful path) is not predictable from looking at its previous "states" because it followed all possible next states.

When you consider probabilistic methods, they are essentially deterministic algorithms where the state is a belief-state (or probability distribution) and the transitions are predictable ("deterministic") as long as we are satisfied with a belief-state, as opposed to an exact knowledge of the state. For example, we can say that a probabilistic algorithm will solve a problem in a certain amount of time because at that point we can predict the state to be in the "solved" regime within a certain degree of confidence (e.g., a 99% chance of having succeeded at that time point). Genetic algorithms fall under that category, obviously.

But that said, I find much more promising avenues in the field of probabilistic methods than in the "wild-goose" pursuit of non-deterministic machines.

NP problems are problems that can be solved by a non-deterministic Turing machine in polynomial time.

That sounds more promising than it is in reality. Simulating a non-deterministic Turing machine with a deterministic one implies an exponential slow-down. So, all that is really happening here is that the complexity of the problem is moved into the complexity of the machine solving it, i.e., exchanging an intractable problem for an impossible machine. And that is to me, the most compelling argument that P != NP.
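A minimal sketch of that exponential slow-down, again using subset-sum as the NP problem (the names are my own, illustrative only): the non-deterministic machine would "guess" the right subset and verify it in linear time, while a deterministic simulation has to walk through all 2^n branches.

```cpp
#include <cstddef>
#include <vector>

// Deterministic simulation of the non-deterministic "guess": enumerate
// every subset of nums (each bit of mask selects one element). For n
// elements this is 2^n branches, i.e., the exponential slow-down.
bool subset_sum_exhaustive(const std::vector<int>& nums, int target) {
    const std::size_t n = nums.size();
    for (unsigned long long mask = 0; mask < (1ULL << n); ++mask) {
        long long sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            if (mask & (1ULL << i)) sum += nums[i];
        if (sum == target) return true;
    }
    return false;
}
```

Each branch on its own is cheap (a linear-time check); the cost is entirely in the number of branches that the deterministic machine must try one by one.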

Are there any real non-deterministic Turing machines out there? I know that quantum computers are not it.

Greetings,

Yes, mike 2000 17, my answers were concerned with "deterministic/non-deterministic" algorithms rather than Turing machines. I tried to explain in greater detail the difference between a method and an algorithm, as the notions are not the same; the question was:

"" what do you mean by deterministic and non-determinstic algorithms? ""

Further, while genetic algorithms admittedly belong to the class of probabilistic methods (because of the crossover and mutation probabilities, and in some ways the selection probability as well), they are also non-deterministic methods, since there are multiple choices at each step.

Regards.

Hi again,

they are essentially deterministic algorithms where the state is a ...

Now then, we cannot call an algorithm deterministic or non-deterministic, as an algorithm is always deterministic, since it presents a finite number of unique instructions (i.e., at each step there is one and only one instruction or decision to carry out, which is why it is called deterministic). It is therefore better to use the term method instead, because methods are divided into two classes:

Deterministic methods, which are concerned with algorithms.

Non-deterministic methods, which are concerned with artificial intelligence methods (e.g., GA, GP, ANNs, etc.).

Otherwise, it would be appreciated if you could clarify what you mean by "deterministic"; maybe I didn't understand you.

Regards.

Otherwise, it would be appreciated if you could clarify what you mean by "deterministic"; maybe I didn't understand you.

A good place to start is probably the first sentence on the wiki page:

"A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states."

In other words, a deterministic method just means that given a particular input (or "problem to solve") it will take a specific, finite, and entirely predictable set of steps, and reach the same end result or output.

Under that definition, a quick-sort algorithm that chooses pivots as either the first, last, middle, or median-of-three is a deterministic algorithm. A quick-sort algorithm that chooses pivots using a perfect random-number generator is a non-deterministic algorithm because the same input (the same unsorted array) would not be sorted with the exact same set of steps (i.e., the sequence of pivot points chosen). In the first case, for a particular input array, you can predict the exact number of steps required, and there are specific pathological cases where that number of steps is O(N^2) (i.e., pivots will always be really bad). In the second case, we can evaluate the probability that every chosen pivot is the worst possible choice (i.e., the "most unlucky" run), which is usually very low, meaning that the algorithm nearly always performs pretty well.
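A sketch of the two pivot strategies described above in one quick-sort skeleton (a simplified Lomuto-style partition of my own, not any particular library's implementation):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// One quick-sort skeleton, two pivot policies:
//  - randomized == false: always pick the middle element, so the same
//    input array always produces the same pivot sequence (deterministic).
//  - randomized == true: pick a pivot via the generator, so the pivot
//    sequence for the same input depends on the generator's state.
void quick_sort(std::vector<int>& a, int lo, int hi,
                bool randomized, std::mt19937& rng) {
    if (lo >= hi) return;
    int p = randomized
        ? std::uniform_int_distribution<int>(lo, hi)(rng)
        : lo + (hi - lo) / 2;
    std::swap(a[p], a[hi]);            // move pivot to the end
    int store = lo;                    // Lomuto partition
    for (int i = lo; i < hi; ++i)
        if (a[i] < a[hi]) std::swap(a[i], a[store++]);
    std::swap(a[store], a[hi]);        // pivot into its final position
    quick_sort(a, lo, store - 1, randomized, rng);
    quick_sort(a, store + 1, hi, randomized, rng);
}
```

Both variants sort correctly; the difference is only in whether the sequence of steps is fixed by the input (deterministic) or varies with the random draws (randomized), which is exactly the pathological-input argument above.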

Such randomized algorithms that introduce random inputs or random steps are more often called probabilistic algorithms because their analysis has to involve a "probability" such as a probability of success after a certain time or probability of the accuracy of an approximate solution after enough time spent computing / refining it.

People use different terminology and classifications. Under the most conservative definition of a deterministic algorithm (as used on the wiki page), any algorithm that introduces a random or otherwise unpredictable input (user input, system-time value, etc..) should be considered non-deterministic, but it's less ambiguous to refer to those as "randomized" algorithms. Moreover, with the recent years of getting used to working with probabilistic algorithms and probabilistic analysis of randomized algorithms, it's becoming more and more common to view such algorithms as a special "expanded" form of deterministic algorithms, sometimes even called "probabilistic deterministic algorithms".

The point is, the definitions are flexible, like I said earlier, especially if you are willing to accept that a "belief state" (probability distribution over states) can be considered as a state in itself, in which case, a randomized / probabilistic algorithm is clearly deterministic.

This whole thing relates to Turing machine classifications in that it is generally considered that a deterministic Turing machine cannot produce or have access to a non-deterministic input, such as a perfect random-number generator. When you allow the Turing machine to be able to generate perfect random numbers (with some distribution, like a uniform distribution), we begin to talk about a probabilistic Turing machine, which is a separate class from non-deterministic Turing machines. A non-deterministic Turing machine does not make random choices, it makes no choice at all, as it simultaneously explores all choices, which is a critical distinction.

the latter [algorithm] is always deterministic since it presents a finite number of unique instructions

The two are not mutually exclusive. Here is a "finite number of unique instructions":

#include <cstdlib>  // for rand()

int foo() {
  int i = 0;
  // each iteration continues with probability ~900/1000
  while ( ( rand() % 1000 ) < 900 )
    ++i;
  return i;
}

Can you predict the output? Or the exact number of times the loop executes? In reality, you can only say that it will probably output a number around 9. In fact, you can make a prediction of the output in the form of a probability distribution (I believe it's a geometric distribution in this particular case, with mean 0.9 / 0.1 = 9).

In theory, the output of the "rand()" function is deterministic, i.e., it is a pseudo-random number generator, because computers are deterministic machines and cannot produce numbers that are truly random. However, for most practical purposes, we can assume that the output of "rand()" is close enough to being unpredictable / non-deterministic. And when it comes to analysis, we do make that assumption. This is often the dilemma with analysing determinism: it is only a matter of how much you are willing to consider when making the prediction of the outcome.
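That probabilistic prediction can be checked empirically: averaging many runs of the loop above should land near 0.9 / 0.1 = 9 (my own calculation, assuming rand() % 1000 behaves close to uniform; the helper names are mine).

```cpp
#include <cstdlib>

// Same loop as foo() above: count iterations until the ~10%-chance exit.
int foo() {
    int i = 0;
    while ((std::rand() % 1000) < 900)
        ++i;
    return i;
}

// Empirical check of the probabilistic analysis: the sample mean over
// many runs should be close to the predicted mean of 9.
double average_foo(int runs) {
    long long total = 0;
    for (int r = 0; r < runs; ++r)
        total += foo();
    return static_cast<double>(total) / runs;
}
```

No single run is predictable, but the aggregate behaviour is, which is the whole point of analysing such code with a probability distribution instead of an exact step count.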

Deterministic methods, which are concerned with algorithms.

Some people do consider that the term "algorithm" implies a "deterministic algorithm". But that's antiquated terminology, and it's largely obsolete at this point.

Non-deterministic methods, which are concerned with artificial intelligence methods (e.g., GA, GP, ANNs, etc.).

Non-deterministic methods / algorithms have nothing in particular to do with artificial intelligence methods. In fact, if you implement a non-deterministic algorithm on a deterministic machine (every computer or computer-like machine that exists in real life), then the algorithm has to either be turned into an intractable, exhaustive search algorithm (i.e., "brute force") or be turned into an approximate and/or probabilistic algorithm.

GA / GP algorithms are not much more than somewhat sophisticated randomized search algorithms, while ANNs are not much more than a basic mathematical form that is suitable for certain learning strategies, most of which are deterministic and some of which are randomized (e.g., simulated annealing). But, overall, artificial intelligence algorithms fall mostly under the deterministic class (i.e., non-randomized), and there are also plenty of probabilistic methods. AI algorithms are not special in this way at all; it's an urban myth that they stand in some sort of class of their own. I know, I used to believe that, until I studied them in depth and implemented many of these methods.

Greetings,

As I have "guessed"; the whole point of difference is worth the term Deterministic.

So let's be clearer: after reading your comment, I realise that by "deterministic algorithms" you mean any method whose number of steps we can predict in advance, whereas by "deterministic" I simply meant any method (whether or not we can predict its number of steps) where there is only one decision to take at step i+1.

Further, it is quite important to distinguish between "Turing machines" and "algorithms": the first is one computation model among others (e.g., lambda calculus), which simply checks whether a given problem possesses an algorithm or not, whereas the second (an algorithm) is a general solution to a given problem (not only to some particular instances of the problem) that presents a given number of steps or instructions, where each instruction is unique (at step i it is impossible to find more than one choice to consider), so that it is called deterministic, since everyone will pass through exactly the same instructions.

Regards.
