How can I find out whether the system has rounded during a calculation or not?
I mean, for example, 4/2 = 2 is an exact calculation with doubles, because 4 and 2 are represented exactly. But 4/3 is not an exact calculation. I would like to know whether my result is exact or not. Is that possible somehow?

I can tell you one thing: there are whole books just on floating-point formats and the way they are handled by processors and different compilers. But if by rounding you mean why you don't get 1.3333 as the result of 4/3, then take a look at the code below:

cout << 4 / 3 << endl; // Integer division: you get 1, surprisingly!

cout << 4 / 3.0 << endl; // Now C++ performs the division in double, so the output is 1.33333

Dear group256, you have totally misunderstood me.
First, I am talking about double numbers, not integers:
(double)4 / (double)2, or 4.0 / 2.0.
And I don't want to get 1.3333;
I don't care about that,
"1.3333" is just a string.
What I am talking about is that 4./2. is an exact binary number,
while 4./3. is not an exact binary number.
Take any function or operator, for example division (/):
we can ask whether the real result of x/y is representable in binary or not.
But we cannot study the result (double)(x/y),
because it is always a binary number, by the definition of floating point.

How do you think 4/2 is represented in memory? If you use doubles it will be 2.0000... Similarly, 4/3 is 1.3333... in memory. It's NOT just a string -- that is the binary representation.

If you want to know how doubles are laid out in memory, then you need to study the IEEE standards, which are what most PCs with Intel (or compatible) processors use.

When working with floats and doubles there is no such thing as an exact value. Attempting to test for equality will fail in most cases.


I don't care about memory; the operator / is what matters to me!
If we calculate 4./2., the operator / gets two double inputs, 4. and 2.,
and the result of 4./2. is a binary rational number,
which means it is represented exactly, so the result is exactly 2.
In the case of 4./3., the operator / gets the double inputs 4. and 3.,
but mathematically the result is not a binary rational number,
so the double result cannot be exact:
the algorithm behind operator / makes an approximation
and gives back the closest double, I think.
But from the outside we have no information about whether
the algorithm has made an approximation or not (because it is not required to tell us).
I would like to get this information somehow! Is that possible?

If your computer supports IEEE floating-point, as most processors do, the five fundamental operations + - * / and sqrt will yield the correctly rounded result.

You do not understand me!
My question is not about "correct or not correct"!
I would like to get the information:
was rounding applied or not?
That is one bit of information (a bool).

If you have an IEEE-compliant implementation, then rounding is according to the current rounding mode, which by default is to round to the nearest value and to round to even in case of ties.

If you set the rounding mode differently (and standard C++ does not offer a standardized way of doing so, so you would have to consult your implementation's documentation to learn how), then rounding occurs as specified by the current rounding mode.

If you do not have an IEEE-conforming implementation, then you get whatever your implementation does.

If what you are trying to do is determine whether a particular computation has been rounded, there is no standard way of doing that in C++. An individual implementation may offer a way.

If this is true, it is very bad news!
Because if you want to be fast,
you have to use the built-in floating-point numbers and functions,
but then you cannot handle the errors!
This would mean that the computer is not good
at doing serious numeric calculations...

I remembered answering a similar post before here, but then I realized you (merse) were also the OP of that thread. I guess you are obsessed with this topic... If what you are looking for is top speed and perfect accuracy, you have moved beyond obsession into fanatical idealism. In the real world (or should I say, in the double world, lol!), there is always going to be a trade-off between accuracy and speed / memory usage. You can't have both extremes; you have to choose your spot. Standard IEEE floating-point arithmetic and representation of real numbers is a good equilibrium for most purposes. Game programmers and others want fast calculations and don't care much about accuracy, while physicists want very precise results but don't mind waiting quite a while for them (or they have fancy super-computers), while engineers, like me, want reliable results but not necessarily extremely accurate ones (there are always large safety margins applied anyway) and are very impatient (nobody likes numerical simulations that take more than a few days).

Remember also that the main factor that affects the accuracy of the output of an operation is the accuracy of your input values. And by accuracy, I mean, of course, not the round-off error of the double variable, but the error in the value you provide. Say I want to simulate a mass-spring dynamics system; then the results depend very much on how accurate the values of mass, stiffness, and rest length that I give to the program are. Unless I can measure any of these quantities to a relative precision of 10^-16 (which I don't have the means to do, even by a stretch of the imagination), I don't care much about the round-off error in the floating-point arithmetic of the computer. But I might care about the amplification factor of my numerical algorithm and how my uncertainty interval grows or shrinks. For that, there is quite a body of scientific literature on numerical stability, conditioning of the problem space, numerical damping, simulated annealing, modal decomposition or principal component analysis, etc., and at worst, you can track the uncertainty interval by doubling the calculation.

If your application has input-accuracy and output-accuracy requirements beyond the precision of the computer's floating-point representations, meaning you are very well equipped in terms of measurement systems and such, then I guess you also have the means to go for other hardware alternatives that might offer better or extended representations of real numbers, such as analogue computers, super-computers, or FPGAs.

I just want you to understand that it is far easier to deal with an expected numerical error and/or control its progression than to try to obtain exact results in a world where that simply does not exist (there are no such things as exact measurements or exact universal constants when dealing with a continuum (real valued space)).

Now, your question was about testing whether a division is computed exactly or rounded off before you get the result. Well, it is certainly possible, but not if you want it to be fast (extreme accuracy == extreme resource usage). You would have to study the IEEE standard for the representation of floating-point variables, make sure your CPU conforms to it, and then you can do a reinterpret_cast to an unsigned integer type of the same size. With that integer type, you can do some bitwise operations and such to figure out, from the bit patterns, whether the division of the two numbers will yield a binary rational or not. You can see how crazy this would be and how slow it would get (even in assembly). Finally, ask yourself: what would you do if you found out that a particular operation was indeed rounded off? Throw an exception, or what? And believe me, there are infinitely more instances where a division leads to rounding than instances yielding a binary rational result. Using a rational-number representation would be a much better and easier solution for exact accuracy, as long as transcendental functions don't get involved.

No, I don't want top speed, I just want the built-in speed, and not to make my own double type, because that would be very slow. But I need error handling. I can overestimate the error with the well-known epsilon(), but sometimes the error is exactly zero. If we don't know when, then the overestimate is, relatively, infinite, which is very bad.

In my problem, the main factor that affects the accuracy is not the physical error; it really is the rounding error.

But in general it is very hard to say that the rounding error is negligible
if you cannot handle it and you don't know its value at the end of your calculation. I think nobody can prove mathematically that the rounding error is unimportant in a difficult numeric algorithm. Usually there is an accuracy parameter in your calculation, and decreasing this parameter means a better estimate, but you cannot decrease it without bound. Usually you cannot know this bound (without precise error handling); you just feel it, and sometimes you are wrong.

It's not about sense or how you feel about the calculation. It's about how precise the calculation needs to be for a specific purpose. If you work at NASA, you round 0.02 meter to 0 when your spaceship is landing on the surface of the Moon; that seems logical, because the ship does not need to approach the surface with two-decimal-place precision. But if you are working on a robot that does eye surgery, you need to be precise to 10^-5 or even smaller.

Please remember one thing: nothing in this world is precise (I'm not trying to discuss physics here). What is the precise value of pi? We use it to get the area of a circle and the volume of a sphere, and it has worked flawlessly and correctly up to now, although it is far from being exact. It's a matter of how accurate for what task, instead of absolute accuracy!

In my problem, the main factor that affects the accuracy is not the physical error; it really is the rounding error.

Please show us an example.

In particular, the IEEE floating-point standard was designed with a lot of input from some very good numerical analysts. What you're saying is that you think the problem you're working on is one for which their experience is irrelevant.

Well, this should be an easy claim to prove: Describe the problem and let us make up our own minds about it.

By the way:

1) The IEEE floating-point standard does say that there is a "sticky bit" that the programmer can turn off or test, and that the hardware turns on whenever a floating-point operation does not give an exact result.

2) The current C++ standard does not provide a uniform way of accessing this bit.

3) The current C standard does provide a way of accessing this bit, which will be picked up by the next version of the C++ standard.

4) Some C++ implementations surely make it possible to do this now.

So if you really care about being able to determine whether a floating-point operation is inexact, I suggest you look at your compiler documentation to see whether it is one of those that already supports this forthcoming feature.

In my problem, the main factor that affects the accuracy is not the physical error; it really is the rounding error.

But in general it is very hard to say that the rounding error is negligible
if you cannot handle it and you don't know its value at the end of your calculation. I think nobody can prove mathematically that the rounding error is unimportant in a difficult numeric algorithm. Usually there is an accuracy parameter in your calculation, and decreasing this parameter means a better estimate, but you cannot decrease it without bound. Usually you cannot know this bound (without precise error handling); you just feel it, and sometimes you are wrong.

As arkoenig said, I would also be interested in some details on your problem that is _really_ only affected by ROE.

You said: "nobody can prove mathematically that the rounding error is not important"... If you have never encountered those types of proofs, I wonder what you are doing tackling a numerical-method problem in the first place. You can certainly infer the amplification of the round-off error in an algorithm; you can do a best- and worst-case analysis: the best case assumes no ROE, i.e., that all calculations are exact throughout the entire algorithm, and the worst case assumes all calculations suffer maximum round-off error. In my experience of the few cases and proofs I have seen, these two numbers wind up incredibly close to each other for any algorithm that wasn't designed by a monkey. I suggest you start with the basics. The condition number is a provable way to predict the lower bound for error amplification in matrix numerical methods; check out the classic work by Golub and Van Loan (1983). If you cannot predict the error amplification or track it via an interval, you can also control it. If you cannot control it, you can stabilize it (see predictor-corrector methods, such as the Hamming method) and later filter out the numerical flutter.

The list goes on and on... don't think you are the first person to stumble on this problem. The entire field of numerical analysis has been studying this for at least 60 years! And, as arkoenig said, they are the ones who came up with IEEE standards.. so I imagine that they figured that the double or extended precisions were just fine for all purposes that would run on a PC and not a dedicated special-purpose non-standard machine. I think they were right. But tell us what your problem actually is, maybe we can have less general answers.

BTW: You said: "Usually there is an acurracy parameter in your calculation, and the decreasing of this parameter means better estimation, but you cannot decrease it without bound." What you are referring to is the _tolerance_, not an accuracy parameter, and it really has little to do with the ROE. If you decrease the tolerance very far, you might hit a "bound" that has something to do with the ROE, but mainly it will have to do with the fact that you are over-computing the problem, and that bound comes much sooner than the ROE bound. Over-computing in numerical methods can be thought of like this: if you run an optimization method iteratively, you set a tolerance at which the method stops iterating, i.e., the current estimate is "good enough". The more you decrease the tolerance, the more iterations you need to get below it. And often, the estimate will drift away from the "real" optimal solution while trying to reach the tolerance, mostly because of ill-conditioning around the optimum. You might reach the tolerance you asked for, but your solution will be far off. So yes, you can't keep lowering the tolerance ad infinitum; you will hit a bound, but ROE is not the main concern (if it is, you are very lucky!).

How is it possible to access that sticky bit? Can you give me some references?
