I am trying to use Perl to remove entries that contain the word "connected". My file looks something like the example below. I would like to search for "connected" and, if it is found, remove all six lines associated with that entry; if not found, move on to the next entry. I was trying to split the file on blank lines but am not having much success. Can anyone help? Thanks
........

PIN connected<112>
USE TEST ;
PORT
LAYER TRUE ;
END
END connected<112>

PIN address<110>
USE TEST ;
PORT
LAYER TRUE ;
END
END address<110>

PIN connected<11>
USE TEST ;
PORT
LAYER TRUE ;
END
END connected<11>

Here's some code that I threw together. I wasn't sure exactly what you wanted to do with the information, so I had it print the non-connected entries to the screen while ignoring the connected ones. That should make the code easy to adapt to whatever your application needs. This code assumes that the data is in a file called searchAndReplace.txt.

#!/usr/bin/perl

use strict;
use warnings;

open(my $fh, '<', 'searchAndReplace.txt') or die "Error reading input file 'searchAndReplace.txt': $!\n";

while (my $line = <$fh>)
{
        # Only entry headers are interesting; skip separators and anything else.
        next if ($line !~ /^PIN/);

        if ($line =~ /^PIN connected/)
        {
                # "connected" entry: discard its lines up to the next blank line.
                while ($line = <$fh>)
                {
                        last if ($line =~ /^\s*$/);
                }
        }
        else
        {
                # Any other entry: print it through the next blank line.
                print $line;

                while ($line = <$fh>)
                {
                        print $line;

                        last if ($line =~ /^\s*$/);
                }
        }
}

close($fh);
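
Since you mentioned splitting on blank lines, here is an alternative sketch along those lines using Perl's paragraph mode, which reads one blank-line-separated block at a time and drops any block that mentions "connected". It assumes the same searchAndReplace.txt file and the blank-line layout shown in your sample.

#!/usr/bin/perl

use strict;
use warnings;

$/ = "";    # paragraph mode: read one blank-line-separated block per iteration

open(my $fh, '<', 'searchAndReplace.txt') or die "Error reading input file 'searchAndReplace.txt': $!\n";

while (my $block = <$fh>)
{
        # Skip the whole block if any of its lines mention "connected".
        next if ($block =~ /connected/);

        # Trim any trailing newlines down to one, then print a blank line
        # after the block so the entries stay separated.
        $block =~ s/\n+\z/\n/;
        print $block, "\n";
}

close($fh);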

If this is any version of unix:

grep -v "connected" filename > newfile

deletes all of the lines containing the word "connected".

The problem is that he wants to remove the whole group of lines associated with an entry whenever one of its lines contains "connected", not just the individual lines that contain the word.
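
If a one-liner is still wanted for the block version, Perl's paragraph mode can drop an entire blank-line-separated entry whenever any of its lines contains "connected". This is just a sketch and assumes the entries really are separated by blank lines, as in the sample:

perl -00 -ne 'print unless /connected/' filename > newfile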

chris - you're correct. Perl is overkill, which I guess was my point to start with.
Three lines of awk:

awk ' BEGIN {connected=-1}
     $2 ~ /^connected/  {connected*=-1 ; next}
     connected<0 {print $0 } ' filename

on this file:

PIN connected<112>
USE TEST ;
PORT
LAYER TRUE ;
END
END connected<112>

PIN address<110>
USE TEST ;
PORT
LAYER TRUE ;
END
END address<110>

PIN connected<11>
USE TEST ;
PORT
LAYER TRUE ;
END
END connected<11>

produces this output:

kcsdev:/home/jmcnama> t.awk

PIN address<110>
USE TEST ;
PORT
LAYER TRUE ;
END
END address<110>

Thanks, everyone. Both the Perl and awk scripts are working. This really helps.

Mike

Regarding the use of grep with output redirected to a file, as in the sample above, are there any limitations to its use, especially when the file is big? I have experienced record truncation when the output goes to a file. Has anyone experienced that before? How can this problem be resolved?

Best to ask in a shell scripting or Unix/Linux forum.

Jim,

I had absolutely no idea about the power of awk. I have a similar problem - I am supposed to modify about 40000 XML files in the following manner.

<?xml version="1.0" encoding="UTF-8"?> 
    <item name="Rollover">
        <value>Testing for rollovers</value>
    </item>
    <item name="SemiRollover">
        <value> Testing semi rollovers </value>
    </item>

    <item name="MoreRollver">
        <value>yes</value>
    </item>
</record>

We need a new item called TestRollover which should have exactly the same value as SemiRollover, so the final file would look like this:

<?xml version="1.0" encoding="UTF-8"?>
    <item name="Rollover">
        <value>Testing for rollovers</value>
    </item>
    <item name="SemiRollover">
        <value> Testing semi rollovers </value>
    </item>
    <item name="TestRollover">
        <value> Testing semi rollovers </value>
    </item>

    <item name="MoreRollver">
        <value>yes</value>
    </item>
</record>

It would appear to be very simple for you Perl/awk gurus, but I am finding it tough going. Can anyone help? I have about 40000 files that need converting.
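
Not a guru answer, but here is a rough Perl sketch of one way to do it: slurp each file, capture the SemiRollover item, and insert a copy renamed TestRollover right after it. It assumes every file follows the layout in your sample and has a single SemiRollover item, and it rewrites the files in place, so test it on copies first. A real XML parser (for example XML::Twig or XML::LibXML) would be safer if the formatting varies between files.

#!/usr/bin/perl

use strict;
use warnings;

# Rough sketch: for every XML file given on the command line, copy the
# SemiRollover item and add a TestRollover item with the same value
# right after it. Assumes the item blocks look exactly like the sample.
# Usage: perl add_testrollover.pl *.xml   (feed the 40000 files in batches
# with find/xargs if the argument list gets too long)

for my $file (@ARGV) {
    open(my $in, '<', $file) or die "Cannot read $file: $!\n";
    my $xml = do { local $/; <$in> };    # slurp the whole file
    close($in);

    # Capture the SemiRollover item (and its value), then append the new item.
    $xml =~ s{(<item name="SemiRollover">\s*<value>(.*?)</value>\s*</item>)}
             {$1\n    <item name="TestRollover">\n        <value>$2</value>\n    </item>}s;

    open(my $out, '>', $file) or die "Cannot write $file: $!\n";
    print $out $xml;
    close($out);
}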
