Hi
Can anyone help me out with this? I have rows like
50121 abc.com 28/1/2014-12:00:00
52111 xyz.com 27/1/2014-12:00:00
deusr abc.com 26/1/2014-12:00:00
50121 abc.com 26/1/2014-12:00:00
52111 abc.com 25/1/2014-12:00:00

I removed the duplicates based on the first column and got this output:
50121 abc.com 28/1/2014-12:00:00
52111 xyz.com 27/1/2014-12:00:00
deusr abc.com 26/1/2014-12:00:00

The issue is that I actually want to remove duplicates based on a comparison of two columns for each row, i.e., the 1st and the 2nd. I am trying to do this with the 'awk' command, but I am not getting it. Can you help me out with this, please?

Hello,

I know you mentioned using awk but I thought I would throw this in anyway:

sort -k 1,2 myfile.txt | uniq -w 13

This sorts the file by the first and second columns (separated by whitespace), then prints the unique entries based on the first 13 characters. Note that -w 13 works here only because the first two columns together happen to span exactly 13 characters; it relies on fixed-width columns, so it will break if the values vary in length.
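Since you asked about awk specifically, here is a common awk idiom for this (a sketch, assuming the same myfile.txt as above): it keeps the first occurrence of each (column 1, column 2) pair and drops later repeats, without needing the file to be sorted or the columns to be fixed-width.

```shell
# Print a line only the first time its (col1, col2) pair is seen.
# seen[$1,$2]++ is 0 (false) on the first occurrence, so !... is true;
# on later occurrences it is non-zero and the line is suppressed.
awk '!seen[$1,$2]++' myfile.txt
```

On your sample data this keeps the 28/1 and 26/1 lines for abc.com (different first columns) and both 52111 lines (different second columns), dropping only the second `50121 abc.com` row. Unlike the uniq -w approach, it also preserves the original line order.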
