Hi All,

Does anyone know which file I/O library is faster: C's FILE (stdio) or C++'s fstream?

I would suppose that C++'s fstream would be the faster of the two, but I just wanted to get some expert advice/opinion on this.

I googled around without finding anything concrete.

If there is an existing thread on this, I would appreciate it if you could post a link here so I can take a look.

Thanks for your help

On current implementations, you'll probably find that the stdio library is faster than the iostream library. The usual recommendation is to try both and use the one that's fastest on your setup, if all you care about is execution performance.

commented: To the point! +9

>On current implementations, you'll probably find that the stdio library is faster
>than the iostream library. The usual recommendation is to try both and use the
>one that's fastest on your setup, if all you care about is execution performance.

Thanks Narue,

I will try to put together a test using timers for both once I have the time. I'll post the results with some code for any interested readers when I do.
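Here is a minimal sketch of what such a timing test might look like. It drains the same file once with each library and prints the elapsed times; the file name "big.dat" and the 64 KB chunk size are placeholders.

    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<char> buf(1 << 16); // 64 KB chunks

        // Pass 1: C stdio
        auto t0 = std::chrono::steady_clock::now();
        if (FILE *fp = std::fopen("big.dat", "rb")) {
            while (std::fread(buf.data(), 1, buf.size(), fp) > 0)
                ; // just drain the file
            std::fclose(fp);
        }
        auto t1 = std::chrono::steady_clock::now();

        // Pass 2: C++ iostreams
        std::ifstream in("big.dat", std::ios::binary);
        do {
            in.read(buf.data(), buf.size());
        } while (in.gcount() > 0);
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::cout << "stdio:    " << ms(t1 - t0).count() << " ms\n"
                  << "iostream: " << ms(t2 - t1).count() << " ms\n";
    }

Bear in mind that the second pass reads from an OS file cache warmed by the first, so swap the order (or run each in its own process) before trusting the numbers.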

Except that which one is "fastest on the current setup" is a moving goalpost with every patch / update / release / new compiler.

Plus, if you're writing code which is in any way open source, then a whole raft of different compiler and option combinations will appear, each with its own "x is faster than y by z%" metrics.

If you don't already have any need for C I/O, then you might avoid some code bloat by not dragging in a library you're barely using, for what may be a marginal benefit.

File I/O is terribly slow anyway. You might want "cheetah", but all you've got is a choice between "snail" and "tortoise".

Here's an example. If your C++ I/O takes 1 hour, and your C I/O takes 50 minutes, there isn't any point in making the change. Your users are NOT going to come back from lunch 10 minutes earlier due to your efficiency.

Likewise, if it's around 10 minutes, then a few minutes either way won't get them back from the coffee break any time sooner.

If you're going to do it, one of the methods needs to break through a time barrier: the point where your users figure out they could go and do something more productive while your program is running.

Going from an hour to a minute, say, needs a change of algorithm, not a change of I/O library.

commented: Well put. +11
commented: Very nice explanation :) +9
commented: Lol, I want a cheetah too ;) +3
commented: Excellently put bro !!! +2

Oh please. Micro-optimization is not cool. Adhere to the philosophy that "developer time is far, far more important than machine time."
A small optimization won't make a significant change in execution time, but it will drastically increase your headaches.
I find Salem's post to be exactly right in this regard. Very well said.
My advice is: use the one that will save you a headache while debugging a month (or perhaps a fortnight) later.


>if it's around 10 minutes, then a few minutes either way won't get them back
>from the coffee break any time sooner.
I can have two cups in those 10 minutes. :)

commented: Yes, a wasted week of every user's time changes the calculation a hell of a lot! +35

I think it depends on what you are doing with the file.
I've found that FILE with strtok is by far the fastest way of reading in data.
I've been using flat files (the same number of columns on every line),
on the scale of several gigabytes.
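Something like the following, I assume: fgets to pull in each line and strtok to split it into columns. The file name and the tab delimiter here are placeholders.

    #include <cstdio>
    #include <cstring>

    int main()
    {
        FILE *fp = std::fopen("data.txt", "r"); // placeholder file name
        if (!fp) {
            std::perror("fopen");
            return 1;
        }

        char line[4096];
        while (std::fgets(line, sizeof line, fp)) {
            // strtok tokenizes the buffer in place (no copies, no allocations),
            // which is a large part of why this approach is fast
            for (char *tok = std::strtok(line, "\t\n");
                 tok != nullptr;
                 tok = std::strtok(nullptr, "\t\n")) {
                // process one column here, e.g. convert it with strtod
            }
        }
        std::fclose(fp);
        return 0;
    }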

And avoiding the C++ streams was faster.
But I've heard rumors that you can give the fstream a bigger buffer,
which should make it faster.
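For the record, the call for that is pubsetbuf on the stream's underlying buffer (the iostream counterpart of C's setvbuf; there is no "fsetbuf"). It has to be made before the file is opened, and whether the implementation honors the requested buffer is implementation-defined, so measure before relying on it. A minimal sketch, with an arbitrary 1 MB buffer:

    #include <fstream>

    int main()
    {
        static char big_buf[1 << 20]; // 1 MB; the size is an arbitrary choice

        std::ifstream in;
        // Must come before open(); common implementations ignore it afterwards
        in.rdbuf()->pubsetbuf(big_buf, sizeof big_buf);
        in.open("data.txt"); // placeholder file name

        // ... read from 'in' as usual ...
    }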
