Say we have a file or data set with many duplicate rows or entries, and we want to find how many times each one is repeated, and perhaps which one is repeated most often. Here is an elegant pipeline that does all of that in a single line.
sort input.file | uniq -c | sort -n -r
First, sort orders the records in the file so that duplicate lines become adjacent (this matters, because uniq only collapses consecutive identical lines). Then uniq -c prefixes each distinct record with the number of times it occurs. Finally, sort -n -r sorts that output numerically in reverse, listing the records from the most often repeated to the least.
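As a quick sketch, here is the pipeline run against a small hypothetical input.file (the file name and its contents are just illustrative):

```shell
# Create a hypothetical input file containing duplicate lines
printf 'apple\nbanana\napple\ncherry\napple\nbanana\n' > input.file

# Sort so duplicates are adjacent, count each run, then order by count descending
sort input.file | uniq -c | sort -n -r
# apple (3 occurrences) is listed first, then banana (2), then cherry (1)
```

The last stage can also be written as sort -nr, and piping the result into head -n 10 is a handy way to see only the top ten entries in a large file.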