How to group lines in a file by a condition, or count unique entries
For example, suppose you have a file containing a log of events.
Before reading, I recommend subscribing to my Telegram channel @asanov_tech, where I post the latest news, examples, and tricks from the world of development.
Input data:
Dec 9 00:33:42 some log
Dec 9 00:56:49 some log
Dec 9 01:13:12 some log
Dec 9 01:22:02 some log
Dec 9 01:35:52 some log
Dec 9 03:15:52 some log
Dec 9 12:17:52 some log
And you want to group the lines in this file by hour. The pipeline below groups the data and prints per-hour counts (here `events.log` is a placeholder for your log file):
grep -oP "Dec\s+9\s(\d{2})" events.log | sort | uniq -c
Output data:
2 Dec 9 00
3 Dec 9 01
1 Dec 9 03
1 Dec 9 12
The first column is the number of lines that share that date-and-hour prefix.
The -o flag makes grep print only the matched substring (Dec\s+9\s(\d{2})) instead of the whole line.
The -P flag enables Perl-compatible regular expressions (PCRE).
sort puts identical lines next to each other, which matters because uniq only collapses adjacent duplicates.
Finally, uniq -c prefixes each unique line with the number of times it occurred.
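The same grouping can also be done in a single pass with awk, which avoids the sort-then-uniq step by counting keys in an associative array. A minimal sketch, assuming the same "Dec 9 HH:MM:SS ..." line format; the sample file path /tmp/events.log is made up for the demo:

```shell
# Create a small sample log (hypothetical path, for illustration only).
printf '%s\n' \
  'Dec 9 00:33:42 some log' \
  'Dec 9 00:56:49 some log' \
  'Dec 9 01:13:12 some log' > /tmp/events.log

# Key = month, day, and the hour part of the timestamp; count lines per key.
# awk's "for (k in count)" yields keys in arbitrary order, so we sort at the end.
awk '{ split($3, t, ":"); key = $1 " " $2 " " t[1]; count[key]++ }
     END { for (k in count) print count[k], k }' /tmp/events.log | sort -k2
# → 2 Dec 9 00
#   1 Dec 9 01
```

For large files this saves sorting every line: only the distinct keys are kept in memory.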
Thank you for reading. Best regards, Ildar.