I wrote a simple program that sums the squares of the integers from m to n. If sum is declared as a double, it gives something like 4.16 x 10^10 for the range 1 to 5000. However, if sum is declared as an int, the same range yields -1270505460.
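Here is a minimal C sketch that reproduces what I'm seeing (not my exact source, and it assumes a typical platform where int is 32 bits):

```c
#include <stdio.h>

int main(void) {
    double dsum = 0.0;  /* double accumulator: large enough, no wrap-around */
    int    isum = 0;    /* 32-bit accumulator: exceeds INT_MAX and wraps */

    for (int i = 1; i <= 5000; i++) {
        dsum += (double)i * i;
        isum += i * i;  /* signed overflow once the running sum passes 2^31 - 1
                           (technically undefined behavior in C, but it wraps
                           on typical two's-complement hardware) */
    }

    printf("double sum: %.0f\n", dsum);  /* 41679167500, i.e. ~4.17e10 */
    printf("int sum:    %d\n", isum);    /* -1270505460 with a 32-bit int */
    return 0;
}
```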
Obviously this has something to do with the 4 bytes allocated for an int, but vague intuition != explanation. Can someone give me a good explanation of this phenomenon?