When you read a file in C, you move to the next line, and so on. It seems you have to access the file top to bottom, sequentially, without skipping lines.

I need a way to read every 5th line, or something similar. I am only concerned with the data that is present on every 5th line. So if I start with line 1, the next line I want is line 6, and so on.

Would I be able to achieve this kind of file reading?

look for "random access files"


Read 4 lines, process the next. Read 4 more, process the next.


The modulus operator is the key.
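Something along these lines is all it takes -- a minimal sketch of the counter idea described in the two replies above, assuming the file is named data.txt (a placeholder) and no line is longer than the buffer:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("data.txt", "r");  /* file name is just a placeholder */
    char line[1024];                    /* assumes no line exceeds 1023 characters */
    int count = 0;

    if (fp == NULL)
    {
        perror("fopen");
        return 1;
    }

    while (fgets(line, sizeof line, fp) != NULL)
    {
        if (count % 5 == 0)             /* picks lines 1, 6, 11, ... of the file */
        {
            /* process the interesting line here */
            printf("%s", line);
        }
        count++;
    }

    fclose(fp);
    return 0;
}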

The normal approach to this problem is to read 4 lines, then read the 5th line, then read 4 more lines, then read the 5th, and so on. You could also use fseek, but the problem with that is you need the right offset value, and since you don't know the length of each line, finding that offset is going to be a pain.

I would suggest you follow the first method I described. To be honest, even configuration files are read the same way.

ssharish
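Just to illustrate why fseek() only pays off with fixed-length lines: if (and only if) every record were known to be exactly the same size, you could jump straight to every 5th one. The record length and file name below are made up purely for the sake of the example.

#include <stdio.h>

#define RECLEN 81   /* 80 characters plus the '\n'; only valid if every line really is that long */

int main(void)
{
    FILE *fp = fopen("data.txt", "r");
    char line[RECLEN + 1];
    long recno;

    if (fp == NULL)
    {
        perror("fopen");
        return 1;
    }

    /* jump straight to records 0, 5, 10, ... without reading the lines in between */
    for (recno = 0;
         fseek(fp, recno * RECLEN, SEEK_SET) == 0 && fgets(line, sizeof line, fp) != NULL;
         recno += 5)
    {
        printf("%s", line);
    }

    fclose(fp);
    return 0;
}

With lines of varying length there is no way to compute recno * RECLEN, which is exactly the problem described above.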

look for "random access files"

This is wrong, except under the unusual circumstance that the lines are of a fixed, specified length and the amount you want to skip is large. Random access lets you move the read pointer to a specified location, which does not help you count lines, because newline characters could occur at any location, disrupting the count. If the line length has a small variance ([min, max]), you might be able to skip ahead a little. From the beginning of a given line, the kth line will begin at some offset in the range [k*(min+1), k*(max+1)] relative to your current position. (We're adding one in these expressions because a line of length x takes x+1 characters if you count the newline.)

So, if you're presently at the beginning of a line (which is line 0 relative to your current position) and you want to jump to the beginning of line k, you need to jump forward by k*(min+1)-1 and then, from there, scan for the newline character that ends the (k-1)th line. But this only works if there's no chance that the newline you find is the previous one, which means you need (k-1)*(max+1) < k*(min+1). That is, k*(max - min) < max + 1, or k < (max + 1) / (max - min).
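To put rough numbers on that (purely for illustration): if every line is between 78 and 80 characters long, then k < 81 / 2 = 40.5, so you could safely jump ahead by up to 40 lines before scanning for a newline; but if lines range anywhere from 10 to 80 characters, k < 81 / 70 ≈ 1.16, and the trick buys you nothing.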

Your disk reads are already buffered by the file-reading library, so skipping lines with seeks is probably not going to be a big win unless you are skipping large numbers of lines (or the lines are very long). Making the reads unbuffered won't improve the situation (that just means extra system calls and maybe extra disk seek time). You might save a little processor time, but even then it's not worth the extra code complexity in this case. (For example, if you ever wanted to change the line format, you'd have to change both the code that reads the lines and the code that skips forward -- would you remember to do that? Why would you want to do so much extra work?)

So the moral of the story for the original poster is that you should ignore yagiD's response and ignore this one too :-)

@sarehu
Thank you for your detailed explanation. After reading job's problem again, I agree that random access files may not be the right solution to job's specific problem. But if he has a lot of data with lines of similar length, they could come in handy.
