So, today I had to look for $error_du_jour in a particular log file, in order to do some stuff to the items throwing the errors.
To find the items, I did this:
# grep 'ERROR_DU_JOUR' log > raw_results.txt
# head raw_results.txt
[see raw results here, and find that the item in question is on field 20]
# awk '{print $20}' raw_results.txt > items.txt
# head items.txt
[see raw items here, with a trailing comma]
# sed 's/\,$//g' items.txt | sort -u > clean.txt
# cat clean.txt
[see only list of affected items here]
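(A quick, made-up illustration of what the sed and sort -u cleanup does; the item names here are invented, not taken from the real log:

# printf 'item42,\nitem42,\nitem7,\n' | sed 's/,$//' | sort -u
[prints each distinct item once, with the trailing comma gone]
)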
When I was done and told my boss, he asked if I'd done it this way:
# grep 'ERROR_DU_JOUR' log | awk '{print $20}' | cut -d, -f1 | sort -u

So, then I went and scanned the cut man page. Pretty sweet little trick.
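The trick, for anyone else who hasn't used it: cut -d, -f1 keeps everything up to the first comma on each line, so it handles the trailing comma in one go instead of needing a separate sed pass. A made-up example (the item name is invented):

# echo 'item42,' | cut -d, -f1
[prints item42, no comma]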
no subject
Date: 2007-01-11 01:15 pm (UTC)

For years, not being able to afford "proper database software", I maintained my personal address list with shell scripts (it was a flat text file, but with normalized data fields and pipe-character delimiters). Later I switched the shell scripts to awk scripts to manage the same data, and it is still in that format today.
Because it worked so well, I did NOT migrate into database software when I could afford it, and I am very glad. I have changed platforms (Amiga to SVR3 to AIX to HP-UX to IRIX to OSX) several times, and migrating the ASCII database and the scripts has always been trivial. If I had put this into dBase or SuperBasePro, I would have had to pay for expensive migrations into newer database packages.
There is value in storing data in widely used standardized formats, and sometimes the simple old tools work best of all.
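(A tiny sketch of the sort of lookup that stays easy with a pipe-delimited flat file; the file name and field layout, name|phone|email, are invented here rather than the commenter's actual schema:

# awk -F'|' '$1 == "Ada Lovelace" {print $3}' contacts.txt
[prints the third field, the email, of every record whose first field matches]
)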