I have a file, say, abc.log

which contains data like:

123_1
123_2
123_3
123_4
123_5
123_6
123_7
123_8
123_9
123_3
123_5
123_3
123_7
123_6
123_1
123_3

As we can see, there are 16 rows. Now I need to grep for a pattern like '123_*', but I want only the unique rows, not the duplicate ones.

For example, grep '123_*' abc.log gives all 16 rows; instead I want only the 9 unique rows, with all the duplicates removed.

How can I achieve this? Please help.

Is there nobody in this community who can answer my query? Please help me.

sk@sk:/tmp$ cat l
123_1
123_2
123_3
123_4
123_5
123_6
123_7
123_8
123_9
123_3
123_5
123_3
123_7
123_6
123_1
123_3
sk@sk:/tmp$ sort -u l
123_1
123_2
123_3
123_4
123_5
123_6
123_7
123_8
123_9
sk@sk:/tmp$ sort -u l | wc -l
9
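
If you still want grep in the picture, you can pipe its output through sort -u instead; something along these lines should give the same nine lines:

grep '123_*' l | sort -u
grep '123_*' l | sort -u | wc -l

The -u flag tells sort to print each distinct line only once, and wc -l counts the lines that remain.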

Many thanks, sknake!! But I didn't get what you did on the command line.

I know the sort function, but I didn't understand the following:

/tmp$ cat l

and

/tmp$ sort -u l

Please explain it in relation to my filename above, abc.log.
Thanks.

It's the solution to your problem.

sort <filename> | grep '123_*' | uniq
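
Note that uniq only collapses duplicate lines when they sit next to each other, which is why the sort comes first. With the file from the question it would look something like:

sort abc.log | grep '123_*' | uniq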


"l" is the name of my file in this case. Replace all occurences of "l" with your filename

/tmp$ cat abc.log
/tmp$ sort -u abc.log
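
and, as in the earlier example, if you just want the count of unique rows:

/tmp$ sort -u abc.log | wc -l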

Hello ermithun,

Have you tried my code?

Thanks,
DP
