I'm working on a program that requires some very low-level optimization.
My question is: when precision isn't an important factor, how can I automatically use the most efficient floating-point type for whatever architecture the program is being run on?
As far as I'm aware, floats are more efficient on 32-bit systems and doubles are more efficient on 64-bit systems. Is there some sort of keyword or macro that always selects the native floating-point type for the machine the program runs on? Something like the sketch below is what I have in mind.
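To make the question concrete, here's roughly what I'm imagining. `fast_float` is a name I made up, and checking pointer width is just my guess at how to detect the architecture; I don't know if this is the right test, or whether something like this already exists in the standard library:

```c
#include <stdint.h>

/* Hypothetical sketch: a typedef that resolves to float or double at
 * compile time based on the target. The pointer-width check below is
 * only my assumption about how to tell 32-bit and 64-bit apart. */
#if UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFull
typedef double fast_float;   /* assuming a 64-bit target prefers double */
#else
typedef float fast_float;    /* assuming a 32-bit target prefers float */
#endif
```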
Thanks, appreciate any help.