I'm writing a program for a contest (it's a demo, not the actual thing - so I'm not cheating by asking for help - and my question isn't directly related to the algorithm they want anyway). To submit a program, you send them the .cpp file, and they run it on their server (automatically) for a number of test trials (around 10) and show you the results. This problem has 11 trials. For the first five trials, the program runs fine. However, the sixth run fails and throws std::bad_alloc (I'm guessing it's caused by vector::push_back - the only other things I'm using from std are file streams and strings). Then trial 7 works fine. Trials 8 and 9 fail for the same reason. Trial 10's runtime was too long (it was stopped at around 1.5 seconds; it shouldn't go over 1 second), and trial 11 had the bad_alloc again.
I wanted to see exactly where the bad_alloc was occurring, so I copied the test input data for trial 6 into the test input file on my machine and ran the program. I didn't get a bad_alloc, but the program didn't finish in the time I let it run (maybe 10 seconds - I'm going to try leaving it running for longer while outputting debug info).
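In case it helps, this is roughly the kind of instrumentation I'm planning to add - the loop body here is a placeholder, not my real algorithm, and the push_back is just standing in for wherever my program actually allocates:

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data;
    long long iterations = 0;

    // Placeholder loop: my real main loop runs the contest algorithm,
    // with push_back happening somewhere inside it.
    while (true) {
        data.push_back(42);
        ++iterations;

        // Periodically report progress to stderr so I can tell whether
        // the loop is just slow or the vector is growing without bound.
        if (iterations % 10000000 == 0) {
            std::cerr << "iterations: " << iterations
                      << ", size: " << data.size()
                      << ", capacity: " << data.capacity() << '\n';
        }
    }
}
```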
Anyway - this is my assumption of what's going on; tell me if I'm right:
On the virtual machine on the competition's servers, the program is only given the amount of memory they think it needs to execute. My program is memory-inefficient, so on their machine it runs out of memory. On my machine, however, memory isn't limited, so the program just keeps allocating; instead of getting bad_alloc, it runs for a very long time. Is this a correct assumption?
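To check my own reasoning, here's a minimal sketch (not my contest code) of what I think is happening - an unbounded push_back loop with the bad_alloc caught around it:

```cpp
#include <iostream>
#include <new>
#include <vector>

int main() {
    std::vector<long long> v;
    try {
        // Grow without bound, standing in for a memory-inefficient loop.
        while (true) {
            v.push_back(static_cast<long long>(v.size()));
        }
    } catch (const std::bad_alloc&) {
        // Under a memory limit, allocation eventually fails and we land here.
        std::cerr << "bad_alloc after " << v.size() << " elements\n";
        return 1;
    }
}
```

If I cap the available memory before running it (e.g. `ulimit -v 262144` in bash for roughly 256 MB of virtual memory - I'm assuming the judge does something similar), the catch block fires almost immediately; without the limit, the same loop just keeps growing and swapping for a long time, which would match what I'm seeing locally.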