Posts Tagged shell

A little one-liner: rename to lowercase recursively

Here is my little one-liner, because I used it today and I find it fun:

for f in $(find .) ; do mv "$f" "$(echo "$f" | tr '[A-Z]' '[a-z]')" ; done
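
The one-liner above is fine for a flat directory, but it breaks on filenames containing spaces and on nested uppercase directories (a child's path goes stale once its parent is renamed). A more careful sketch as a shell function (the name is illustrative; it still assumes no newlines in filenames and no lowercase/uppercase collisions):

```shell
lowercase_tree() {
  # -depth lists children before their parents, so a directory is renamed
  # only after everything inside it; -mindepth 1 skips the starting point.
  find "${1:-.}" -mindepth 1 -depth | while IFS= read -r f; do
    dir=$(dirname "$f")
    base=$(basename "$f")
    # lowercase only the last path component, leaving parents untouched
    lower=$(printf '%s\n' "$base" | tr '[:upper:]' '[:lower:]')
    if [ "$base" != "$lower" ]; then
      mv "$f" "$dir/$lower"
    fi
  done
}
```

For example, `lowercase_tree .` renames everything under the current directory.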

netcat as a logging tcp proxy

I felt I needed to write an article about netcat, so here it is!
Netcat is an incredibly useful tool that lets you play with TCP connections easily from the shell.
Basically, as its name implies, it’s just cat over the network, but what its name doesn’t tell you is that it can also act as a socket listener.
So let’s play with pipes; here is one of my favourite uses of netcat:

mkfifo proxypipe
cat proxypipe | nc -l -p 80 | tee -a inflow | nc localhost 81 | tee -a outflow 1>proxypipe

This command will redirect traffic from localhost:80 to localhost:81; in the inflow file you will find the incoming HTTP requests, and in the outflow file you will find the HTTP responses from the server.
Similarly, you can do this:

cat proxypipe | nc -l -p 80 | tee -a inflow | sed 's/^Host.*/Host: www.google.com/' | nc www.google.com 80 | tee -a outflow >proxypipe

This will allow your browser to reach google when you point it at http://localhost .
Anyway, this is my favourite, but netcat has thousands of other uses, so have a look at it!
It can be useful for file transfers (gzip|nc), performance measurement (dd|gzip), protocol debugging (replaying requests), security testing (nc does port scans)…

installing applications remotely on mac

To use a dmg disk image you would use:
hdiutil mount thefile.dmg
You then have a /Volumes/The\ Application.
In that directory you would usually have either an Application.app bundle or an Application.pkg package.

  • If it’s an .app bundle, just copy it to your /Applications/ directory:
    sudo cp -R /Volumes/The\ Application/Application.app /Applications/
  • If it’s a .pkg file, run:
    sudo installer -package Application.pkg -target /Volumes/Mac\ OS

That’s it !

Selecting a range of lines within a file

Let’s say you want to extract a part of a file, for example from line 12 to 20.
I’ve come up with two solutions:

  • head -n20 | tail -n9
    You take the first n lines, where n is the number of the last line you want, then keep from the end as many lines as you want in total, that is: 20-12+1=9.
  • A nicer solution which is straightforward (use the right tools guys !):
    sed -n '12,20p'
    You need the -n option so that the whole input is not printed to the output, then give sed an expression (within quotes): the first line, a comma, the last line, and the “p” instruction, which means print.
    This solution doesn’t need you to calculate the number of lines you will get, I find it nicer !
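
The same range can also be grabbed with awk, whose NR variable is the current line number. A quick way to check both filters, using seq to generate numbered input:

```shell
# seq prints 1..30, one number per line; both filters keep lines 12 to 20.
seq 1 30 | sed -n '12,20p'
seq 1 30 | awk 'NR >= 12 && NR <= 20'
```

Both print the nine lines 12 through 20.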

watch your process !

I just discovered the watch command, and it can be useful!
If you don’t know watch, it does what you would otherwise write like this:
while true ; do "your command" ; sleep 1 ; clear ; done
that is, it executes the same command in a loop, with a sleep so that it doesn’t hammer your CPU.
It also has nice parameters, for example --differences, which highlights what changed between the current and the previous run.
“your command” could be a du or a df; --differences could be useful with an ls to monitor a directory…
Read the manpage and have fun ! 🙂

subnet ping scan in shell

Today I logged in to a machine on which I didn’t want to install anything, but I wanted to find another machine on its network.
I came up with this little shell script that scans the subnet:


SUBNET=192.168.0   # network prefix to scan; adjust it to your own subnet
CURR=1
while [ $CURR -lt 255 ] ; do
  # -t1 is a 1 second timeout on OSX; on Linux the timeout flag is -W1
  ping -c1 -t1 $SUBNET.$CURR >/dev/null 2>&1
  if [ "$?" -eq "0" ]; then
    echo "$SUBNET.$CURR"
  fi
  let CURR=$CURR+1
done

This script is suboptimal, but it does the job: it uses ping with a timeout of 1 second, so if no machine is up the script takes around 255 seconds to scan the subnet; it doesn’t list the machines that don’t reply to ping; and so on… but as I said, it does the job.

I tested this script in Linux and OSX.
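
A possible speed-up, sketched as a function (the function name and the example prefix are illustrative, not from the original script): launch every ping in the background and wait for them all, so the sweep takes roughly one second instead of up to ~255:

```shell
scan_subnet() {
  # $1 is the network prefix, e.g. 192.168.0
  SUBNET=$1
  CURR=1
  while [ "$CURR" -lt 255 ]; do
    # -W1 is the 1 second timeout flag of Linux ping; on OSX use -t1.
    # Each probe runs in a background subshell; echo only on success.
    ( ping -c1 -W1 "$SUBNET.$CURR" >/dev/null 2>&1 \
        && echo "$SUBNET.$CURR" ) &
    CURR=$((CURR + 1))
  done
  wait   # block until every background ping has finished
}
```

`scan_subnet 192.168.0` then prints one line per address that answered.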

Biggest file in a tree

There are plenty of solutions to find the biggest files in a tree on unix.
I usually used
find . -type f -exec du -sk {} \; |sort -nrk1
until I found it too slow on a really big partition with a lot of files. It’s slow because the -exec option of find forks for each file, and du re-fetches the inode for every file (IIRC the inode should be in the buffer cache; the really expensive part is the forking).

Now I usually use this command, which is really more efficient (its cost depends essentially on the number of files):
find . -type f -ls | awk '{print $7, $11}' | sort -nrk1

Conclusion: Fork is expensive 🙂

As my friend nikoteen said in the comment of this post, there is a better solution:
find . -ls | sort -nk7
The thing is, unix people are (ohh, sorry, I am) used to using some commands with their usual arguments; for example, I often use: ls -lart, tar zcvf, netstat -atnup. And sort -nrk1 is one of those commands I use often. That’s why I was writing stupid commands with awk | sort rather than just writing a simple sort. So guys, use this command:
find . -ls | sort -nk7
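
If you prefer keeping the familiar sort -nrk1, a du-based sketch of the same idea (assuming GNU-style du): du walks the tree in a single process, so there is still no fork per file:

```shell
# -a lists every file, not just directories; -k gives sizes in kilobytes.
# Sort numerically on the first (size) column, biggest first.
du -ak . | sort -nrk1 | head -n 10
```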
