Posts Tagged unix

Selecting a range of lines within a file

Let’s say you want to extract part of a file, for example lines 12 to 20.
I’ve come up with two solutions:

  • head -n20 | tail -n9
    You take the first n lines, where n is the last line you want, then you go backward by the total number of lines you want to keep, that is: 20-12+1=9
  • A nicer solution which is straightforward (use the right tools, guys!):
    sed -n '12,20p'
    You need the -n option so that the input is not printed to the output by default, then give sed an expression (within quotes): the first line, a comma, the last line, and the “p” instruction, which means print.
    This solution doesn’t require you to calculate the number of lines you will get, so I find it nicer! Full invocations against a file are sketched below.
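
For completeness, here is what both solutions look like against a hypothetical file.txt (the filename is just an illustration):

head -n 20 file.txt | tail -n 9
sed -n '12,20p' file.txt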

ssmtp and gmail or google apps

Unix systems often need a local mailer, but configuring and maintaining a full mailer on each system is a waste of time.
You might have a Gmail or Google Apps account. If that’s the case, you can easily configure a mailer on your systems that relays through your Gmail or Google Apps account. To do so, I’ve used ssmtp and put this in /etc/ssmtp/ssmtp.conf:

root=postmaster
mailhub=smtp.gmail.com:587
AuthUser=your-mail@yourdomain.com
AuthPass=aStr4angeP45s
UseSTARTTLS=YES
hostname=the-hostname

That’s it, simple, effective, working …
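
To check that it works, you can pipe a raw message into ssmtp, which acts as a sendmail replacement reading the message on stdin (a quick sketch; the recipient address is of course an assumption):

printf 'To: someone@example.com\nSubject: ssmtp test\n\nHello from ssmtp\n' | ssmtp someone@example.com
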
To improve things, we could perhaps use the IP address of the SMTP server, so that mail keeps working even when our DNS server is down. But this has a drawback: if the server behind that IP address changes or is temporarily unavailable, you don’t have mail anymore.
ssmtp doesn’t seem to support several mailhubs!

watch your process!

I just discovered the watch command, and it can be useful!
If you don’t know watch, it does what you would otherwise do like this:
while true ; do "your command" ; sleep 1 ; clear ; done
that is, it executes the same command in a loop, with a sleep between runs so that it doesn’t hammer your CPU (watch itself defaults to a 2-second interval, tunable with -n).
It also has nice parameters, for example --differences, which highlights what changed between the current and the previous run.
“your command” could be a du or a df; --differences could be useful when used with an ls to monitor a directory …
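
For example, to keep an eye on recent changes in a directory (a small sketch; the path is just an illustration):

watch --differences -n 2 ls -lt /var/log
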
Read the manpage and have fun! πŸ™‚

web based bind zone generator

There are some web-based BIND zone generators, but searching for “zone generator” in Google, I found a lot that aren’t working, and refining my search didn’t help. I finally found one that does the job. It’s not optimal, but it works, and it’s there.
Please, if you know of a better one, just let me know!

Biggest file in a tree

There are plenty of solutions to find the biggest files in a tree on unix.
I usually used
find . -type f -exec du -sk {} \; | sort -nrk1
until I found it too slow on a really big partition with a lot of files. It’s slow because the -exec option of find forks for each file, and du re-fetches the inode for every file (IIRC the inodes should be in the buffer cache; the really expensive part is the forking).
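
As an aside, find can also batch many files into each du invocation by terminating -exec with “+” instead of “\;”, which forks far less often (a middle-ground sketch):

find . -type f -exec du -sk {} + | sort -nrk1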

Now I usually use this command, which is really more efficient (the cost depends essentially on the number of files):
find . -type f -ls | awk '{print $7, $11}' | sort -nrk1

Conclusion: Fork is expensive πŸ™‚

UPDATE:
As my friend nikoteen said in the comments of this post, there is a better solution:
find -ls | sort -nk7
The thing is, unix people are (ohh, sorry, I am) used to using some commands with their usual arguments. For example, I often use: ls -lart, tar zcvf, netstat -atnup. And sort -nrk1 is one of those commands I often use. That’s why I was writing stupid commands with awk | sort rather than just writing a simple sort. So guys, use this command:
find -ls | sort -nk7

My 4 varnish tips

Varnish is a reverse proxy; if you don’t know Varnish, this article is not interesting to you πŸ˜‰.

These are my 4 little tips that greatly improve the efficiency of the caching policy:

Remove tracking arguments, so that different URLs generating the same content share a single cache entry (I use “gclid” as the tracking argument, since this is what Google uses). Use this as the hashing routine:

sub vcl_hash {
  set req.hash += regsub(req.url, "\?gclid.*", "");
  hash;
}

Then we can normalize compression (different browsers use different strings for the “Accept-Encoding” header). Add the following in sub vcl_recv:

if (req.http.Accept-Encoding) {
  if (req.http.Accept-Encoding ~ "gzip") {
    set req.http.Accept-Encoding = "gzip";
  } elsif (req.http.Accept-Encoding ~ "deflate") {
    set req.http.Accept-Encoding = "deflate";
  } else {
    remove req.http.Accept-Encoding;
  }
}

Once a cookie is set, all subsequent requests for any object carry that cookie, so we should remove the cookie for all static content.
In sub vcl_recv add this:

if (req.url ~ "\.(js|css|jpg|png|gif|mp3|swf|flv|xml|html|ico)$") {
  remove req.http.cookie;
}

Be careful with files with these extensions that generate dynamic content (png, jpg, or gif files for captchas, html rewritten to php or aspx …).

To track the client IP address in the logs of your web server (the real one, the backend), add this in sub vcl_recv:

remove req.http.X-Forwarded-For;
set req.http.X-Forwarded-For = client.ip;

Then you can log the “X-Forwarded-For” header (how to do this depends on your web server; I do it on Apache and lighttpd).
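
For Apache, for example, a log format along these lines does the trick (a sketch; the format name “varnishcombined” and the log path are assumptions):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined
CustomLog /var/log/apache2/access.log varnishcombined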

mysql2csv

Sometimes people ask me for some data from a database, so I write the SQL query, but then I don’t know how to hand the output over. I think most people have Excel; it doesn’t matter whether that’s evil or not, they like to use it and they find it useful. That’s why I was searching for a way to output CSV from MySQL, or from any SQL client that prints results on the terminal, and I found this solution that seems to work pretty well:

mysql the_database -B -e "select some,field from mytable where my_condition = something ;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > thefile.csv

The file thefile.csv is formatted as CSV, with correct newlines, that Excel can read (note that fields containing tabs or double quotes will break this naive quoting), and that’s it πŸ™‚
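
If the MySQL server may write files itself, SELECT … INTO OUTFILE is an alternative worth knowing (a sketch; the file lands on the database server host, and the path and FILE privilege are assumptions):

mysql the_database -e "SELECT some, field INTO OUTFILE '/tmp/thefile.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' FROM mytable WHERE my_condition = something;"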
