Posts Tagged script

netcat as a logging tcp proxy

I felt I needed to write an article about netcat, so here it is!
Netcat is an incredibly useful tool that lets you play with TCP connections easily from the shell.
Basically, as its name implies, it's just cat over the network, but what its name doesn't tell you is that it can also act as a socket listener.
So let's play with pipes, here is one of my favourite uses of netcat:

mkfifo proxypipe
cat proxypipe | nc -l -p 80 | tee -a inflow | nc localhost 81 | tee -a outflow 1>proxypipe

This command redirects traffic from localhost:80 to localhost:81; in the inflow file you will find the incoming HTTP requests, and in the outflow file you will find the HTTP responses from the server.
Similarly, you can do this:

cat proxypipe | nc -l 80 | tee -a inflow | sed 's/^Host.*/Host: www.google.fr/' |  nc www.google.fr 80 | tee -a outflow >proxypipe

This allows you to browse google by pointing your browser at http://localhost .
Anyway, this is my favourite, but netcat has thousands of other uses, have a look at it!
It can be useful for file transfers (gzip|nc), performance measurement (dd|gzip), protocol debugging (replaying requests), security testing (nc does port scans) …
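For example, a quick file transfer between two hosts could look like this (a sketch: the port, file names and host name are placeholders; traditional netcat needs -l -p while the BSD variant only takes -l, and some variants want -q0 or -N so the sender exits at end of file):

# on the receiving host: listen, decompress, write to disk
nc -l -p 9000 | gunzip > received-file
# on the sending host: compress and push over the wire
gzip -c some-file | nc receiving-host 9000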

Comments (3)

Installing redmine 0.8 on intrepid (ubuntu 8.10)

I've successfully installed redmine pretty easily, but I needed to find out which packages to install with apt, which ones with gem, which versions …
Here is my magic recipe to install it all:

apt-get update 
apt-get install subversion mysql-server rubygems rake pwgen
# next line generates a password for the database
export PASSWORD=`pwgen -nc 8 1`
gem install -v=2.1.2 rails
echo "CREATE DATABASE redmine  DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ; GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost' IDENTIFIED BY '$PASSWORD' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql 
cd /opt/
svn export http://redmine.rubyforge.org/svn/branches/0.8-stable redmine-0.8
cd redmine-0.8/
cat <<EOF >> config/database.yml
production:
  adapter: mysql
  socket: /var/run/mysqld/mysqld.sock 
  database: redmine
  host: localhost
  username: redmine
  password: $PASSWORD
  encoding: utf8

EOF
rake db:migrate RAILS_ENV="production"
rake redmine:load_default_data RAILS_ENV="production"
apt-get remove pwgen subversion
RAILS_ENV="production" ./script/server  

And that's it! Redmine is running on port 3000.
I did this on an EC2 instance and it works like a charm (ami-7cfd1a15).
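A quick way to check that the fresh install responds (a simple check I'd add; curl -I sends a HEAD request):

curl -I http://localhost:3000/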
Maybe the next article will discuss running redmine under mongrel or apache, and creating an init script so redmine starts at boot!

Comments (2)

Extract a table from a mysqldump

Today I encountered a problem: I needed to restore a single table from a database mysqldump.
Usually you do cat the-mysqldump.sql | mysql the_database, so you're only able to restore the full database. I didn't find any mysqldump option to extract a single table from a full database dump, so I've come up with this (minimal) shell script:

#!/bin/sh

extract_table(){
  TABLE=$1
  DUMPFILE=$2
  # mysqldump wraps each table's data between two of these ALTER TABLE lines
  grepstr="/*!40000 ALTER TABLE \`$TABLE\`"
  # -F matches the string literally, so the /* isn't interpreted as a regex
  lines=`grep -nF "$grepstr" $DUMPFILE |cut -d":" -f1`
  lines=`echo $lines|sed 's/\ /,/' `
  echo "LOCK TABLES \`$TABLE\` WRITE;"
  sed -n "$lines p" $DUMPFILE
  echo "UNLOCK TABLES;"
}

extract_table $1 $2
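The script relies on the structure of a mysqldump: each table's data is bracketed by two of those /*!40000 ALTER TABLE lines, so printing the line range between them yields exactly the INSERT statements. A simplified excerpt (table name and values are just for illustration):

/*!40000 ALTER TABLE `mytable` DISABLE KEYS */;
INSERT INTO `mytable` VALUES (1,'foo'),(2,'bar');
/*!40000 ALTER TABLE `mytable` ENABLE KEYS */;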

Use it like this:
./this-script.sh table-to-extract dumpfile-to-extract-from | mysql the_database (add the |mysql part only after you have checked the content).

Be careful, this script is minimalistic:

  • It doesn't check that the file exists and really is a mysqldump file
  • It doesn’t check if the table to extract exists
  • It doesn’t work if disable-keys is set to false in mysqldump
  • It doesn’t have a usage() function

If some people request it, I'll add all these features, but as usual, I wanted a solution I could have used an hour ago, and since I'm already spending time writing this script, let's do it as fast as I can! 🙂

Comments (1)

Selecting a range of lines within a file

Let’s say you want to extract a part of a file, for example from line 12 to 20.
I’ve come up with two solutions:

  • head -n20 |tail -n9
    You take the first n lines, where n is the last line you want, then you keep from the end as many lines as you want in total, that is: 20-12+1=9
  • A nicer solution which is straightforward (use the right tools guys!):
    sed -n '12,20p'
    You need the -n option so that the input is not printed to the output by default, then give sed an expression (within quotes): the first line, a comma, the last line, and the "p" instruction, which means print.
    This solution doesn't require you to calculate the number of lines you will get, so I find it nicer! (a third option follows below)
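A third option I'd add, for awk fans (equivalent to the sed version; NR is awk's current line number, and the-file is a placeholder):

awk 'NR>=12 && NR<=20' the-file

On big files, sed -n '12,20p;20q' has the small advantage of quitting right after line 20 instead of reading the rest of the input.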

Comments (1)

subnet ping scan in shell

Today I logged into a machine I didn't want to install anything on, but I wanted to find another machine on its network.
I came up with this little shell script that scans the subnet:

CURR=1
SUBNET="192.168.0"

while [ $CURR -lt 255 ] ; do
  # note: -t1 is a 1 second timeout on OSX/BSD ping; on Linux -t sets the
  # TTL instead, and the timeout flag is -W1
  ping -c1 -t1 $SUBNET.$CURR >/dev/null 2>&1
  if [ "$?" -eq "0" ]; then
    echo "$SUBNET.$CURR"
  fi
  let CURR=$CURR+1
done

This script is suboptimal but it does the job: it pings with a timeout of 1 second, so if no machine is up the scan takes around 255 seconds, it doesn't list machines that don't reply to ping, and so on … but as I said, it does the job.
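If that's too slow, a variant I'd suggest (same caveat about the ping flags as above) backgrounds the pings, so the whole subnet is probed in roughly one timeout instead of 255:

CURR=1
SUBNET="192.168.0"

while [ $CURR -lt 255 ] ; do
  # each probe runs in its own background subshell
  ( ping -c1 -t1 $SUBNET.$CURR >/dev/null 2>&1 && echo "$SUBNET.$CURR" ) &
  let CURR=$CURR+1
done
wait # let all the background pings finish before exiting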

I tested the original script on Linux and OSX.

Comments (13)

Biggest file in a tree

There are plenty of solutions for finding the biggest files in a tree on unix.
I usually used
find . -type f -exec du -sk {} \; |sort -nrk1
until I found it too slow on a really big partition with a lot of files. It's slow because the -exec option of find forks for each file, and du re-fetches the inode for every file (IIRC the inode should be in the buffer cache, so the really expensive part is the forking).
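As an aside, find's -exec accepts a + terminator instead of \;, which batches many file names into each du invocation and so avoids the fork per file (a sketch, same output format as above):

find . -type f -exec du -sk {} + | sort -nrk1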

Now I usually use this command, which is really more efficient (depending essentially on the number of files):
find . -type f -ls |awk '{print $7" "$11}' | sort -nrk1

Conclusion: Fork is expensive 🙂

UPDATE:
As my friend nikoteen said in the comments of this post, there is a better solution:
find -ls | sort -nk7
The thing is, unix people are (ohh, sorry, I am) used to using some commands with their usual arguments: for example, I often use ls -lart, tar zcvf, netstat -atnup. And sort -nrk1 is one of those commands I often use. That's why I'm writing stupid commands with awk | sort rather than just writing a simple sort. So guys, use this command:
find -ls | sort -nk7

Comments (1)

Database (mysql) backup script

No need to say much, everything is in the title of this post 🙂.
Here is my mysql database backup script:

DIR="/var/backups/db/"
MAIL="your@mail.address"
LOGFILE=$DIR/backupdb.log

function backup_db {
HOST="$1"
USER="$2"
PASS="$3"
DB="$4"
MINSIZE="$5"

BACKUPFILESUFFIX="`date +%m%d`.bz2"
DBLIST=`echo "show databases" | mysql -u $USER -p$PASS -h $HOST`
NUMDB=`echo $DBLIST |wc -w`

if [ ! -d $DIR ]; then
  mkdir $DIR
fi
if [ ! -e $DIR/count.$HOST ]; then
  echo $NUMDB > $DIR/count.$HOST
fi

COUNT=`cat $DIR/count.$HOST`

if [ "$COUNT" -lt "$NUMDB" ]; then
  echo -e "Databases list: $DBLIST" | mail -s "New database, maybe new backups needed!" $MAIL
  echo $NUMDB > $DIR/count.$HOST
fi

/usr/bin/mysqldump -u $USER -p$PASS -h $HOST --routines $DB|bzip2 > $DIR/$DB.$BACKUPFILESUFFIX

# $? is the exit status of bzip2, the last command of the pipeline above;
# grab it before the test so the mail reports the right code
RC=$?
if [ $RC != 0 ] ; then
  echo -e "Return code is: $RC and log file contains:\n `cat $LOGFILE`" | mail -s "Backup MySQL $HOST: $DB Error" $MAIL
fi

SIZE=`du  -sk $DIR/$DB.$BACKUPFILESUFFIX| cut -f1`
if [ "$SIZE" -lt $MINSIZE ]; then
  echo -e "File is smaller than $MINSIZE k, printing an ls output:\n `ls -l $DIR`" | mail -s "Backup MySQL $HOST potential error" $MAIL
fi
}

# Cleaning up old files, so the disk doesn't fill up
DIR="/var/backups/db/"
find $DIR -ctime +7 -name "*bz2" -exec rm {} \; -print

backup_db "192.168.0.1" "user" "password" "database" "9999"

To back up your databases, add one line per database at the end of the script, based on this format:
backup_db "host" "user" "password" "database name" "min_size"

You also need to create a backup user on your database; I use this SQL:

CREATE USER 'backup'@'backup-host' IDENTIFIED BY 'a-password';

GRANT SHOW DATABASES ON *.* TO 'backup'@'backup-host' IDENTIFIED BY 'a-password';

GRANT SELECT,
LOCK TABLES,
SHOW VIEW ON `a_database_to_backup`.* TO 'backup'@'backup-host';

And also add a crontab entry:

cat <<EOF >/etc/cron.d/dbbackup
MAILTO=root
0 5 * * * root /var/backups/db/backupdb.sh >/var/backups/db/backupdb.log 2>&1
EOF
chmod +x /var/backups/db/backupdb.sh

This script has some features that I find useful:

  • It mails you when a new database is created (devs sometimes create a database but don't inform me, and they need it backed up, just in case)
  • It's easy to add new databases to back up
  • It checks for a minimum size: you know some databases won't be smaller than a fixed size, so if it happens there is probably a problem with the backup script or within the database
  • It also backs up stored procedures (we use the --routines option of mysqldump)
  • It has a 7-day rotation mechanism, so the disk doesn't fill up
  • The dumps are compressed

Some improvements that could be made to this script:

  • Better error handling, I'm not really sure how it behaves on failures, I wrote this script pretty quickly for my daily needs
  • Use mk-parallel-dump from Maatkit
  • Use a .my.cnf file so the password isn't displayed in the script, would that be better? (see the sketch after this list)
  • please comment to give me some ideas 😉
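For the .my.cnf idea, a minimal sketch (the credentials are the backup user created above; the file belongs to the unix user running the cron job and should be chmod 600):

# ~/.my.cnf of the user running the backups
[client]
user     = backup
password = a-password

With this in place, mysql and mysqldump pick up the credentials automatically, so the -u and -p options (and the password) can disappear from the script and from ps output.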

Leave a Comment
