Hello, I'm trying to write a script for someone at my university. My only real experience with Perl has been scraping web pages, which coincidentally is what this is, though a bit different: normally the page is hosted online and I use LWP::* or WWW::* to fetch it, but here I have about 40,000 pages stored locally. Assuming recursive directories (i.e. subdirectories) are not an issue, how would I go about reading one file at a time, doing the scraping I need, printing out the extracted info, and then moving on to the next file? Basically I want to invoke it from the command line like this:
myScript.pl *.htm*
where it will read in every .htm or .html file to be parsed. I feel a bit daft for not knowing how to do this, but oh well, it's all part of learning I guess.
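For reference, here's a rough sketch of the kind of loop I'm imagining, in case it helps show where I'm stuck. The shell would expand *.htm* into the filenames and Perl would see them in @ARGV; the <title> regex is just a stand-in for whatever extraction I actually need to do, not the real scraping logic:

#!/usr/bin/perl
use strict;
use warnings;

# Each filename from the command line ends up in @ARGV.
foreach my $file (@ARGV) {
    open my $fh, '<', $file or do {
        warn "Can't open $file: $!";
        next;
    };
    local $/;              # slurp mode: read the whole file in one go
    my $html = <$fh>;
    close $fh;

    # Placeholder for the real scraping; here I just grab the <title>.
    if ($html =~ m{<title>(.*?)</title>}is) {
        print "$file: $1\n";
    }
}

Is that roughly the right approach for 40,000 files, or is there a more idiomatic way to walk through them one at a time?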