I am working on a project in Python 3.1 where I need to find a string in a file.

I have already considered reading the file to a string and then using rfind() on it, but that does not seem to be efficient, seeing as how I expect my file to be large.
I am considering using a for loop to read each line, but I'm not sure how efficient that is either.

Any better alternatives out there?

P.S. I need to find the last instance of the sub-string, so a bigger file would pretty much kill my previous method.

You could seek to n KB from the end of the file and read that n KB; if the string is found, stop, otherwise seek back another n KB (minus the length of the search string, so a match spanning the boundary isn't missed), read again, and repeat until you either find the string or reach the beginning of the file without finding it.
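
A rough sketch of that idea (the function name rfind_in_file and the 64 KB chunk size are my own choices, and it assumes the string fits inside a single chunk):

def rfind_in_file(path, needle, chunk_size=64 * 1024):
    """Return the offset of the last occurrence of needle in the file, or -1."""
    needle = needle.encode() if isinstance(needle, str) else needle
    overlap = len(needle) - 1            # re-read a few bytes so boundary matches are seen
    with open(path, 'rb') as f:
        f.seek(0, 2)                     # jump to end of file
        file_size = f.tell()
        pos = file_size
        while pos > 0:
            start = max(0, pos - chunk_size)
            f.seek(start)
            chunk = f.read(min(chunk_size + overlap, file_size - start))
            idx = chunk.rfind(needle)    # rightmost match in this chunk
            if idx != -1:
                return start + idx       # absolute offset of last occurrence
            pos = start
    return -1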

If you are confident that the string is not wrapped around (split across two lines), then reading the file one record at a time and searching it is the accepted way to go. You have to look at every character in the file no matter how you do it.
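
A minimal sketch of that record-at-a-time approach, keeping track of the most recent line that contains the string (path and target are just placeholder names):

def last_line_with(path, target):
    last_line_no = None
    last_line = None
    with open(path) as f:
        for line_no, line in enumerate(f, 1):
            if target in line:
                last_line_no, last_line = line_no, line
    return last_line_no, last_line       # (None, None) if the string never appears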

Try a regex: compile the pattern with re.compile() and search with finditer(). It looks like this.

import re

text = """\
Hi,i have car.
I drive to work every day in my car.
This text will find all car,car.car?!car."""


r = re.compile(r'\b(car)\b')
for match in r.finditer(text):
    print(match.group())
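
If it's really the last occurrence you need, you can let finditer() walk the whole text and keep only the final match. A rough sketch (the file name is a placeholder, and this does read the whole file into memory, so the seek-from-the-end approach above scales better for big files):

import re

pattern = re.compile(r'\b(car)\b')
last = None
with open('somefile.txt') as f:          # placeholder file name
    for last in pattern.finditer(f.read()):
        pass                             # loop just keeps the final match object
if last is not None:
    print(last.group(), 'found at offset', last.start())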