Posts Tagged tip

Installing redmine 0.8 on intrepid (ubuntu 8.10)

I’ve successfully installed Redmine fairly easily, but I first had to work out which packages to install with apt, which ones with gem, and in which versions…
Here is my magic recipe to install it all:

apt-get update 
apt-get install subversion mysql-server rubygems rake pwgen
# next line generates a password for the database
export PASSWORD=`pwgen -nc 8 1`
gem install -v=2.1.2 rails
echo "CREATE DATABASE redmine  DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ; GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost' IDENTIFIED BY '$PASSWORD' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql 
cd /opt/
svn export http://redmine.rubyforge.org/svn/branches/0.8-stable redmine-0.8
cd redmine-0.8/
cat <<EOF >> config/database.yml
production:
  adapter: mysql
  socket: /var/run/mysqld/mysqld.sock 
  database: redmine
  host: localhost
  username: redmine
  password: $PASSWORD
  encoding: utf8

EOF
rake db:migrate RAILS_ENV="production"
rake redmine:load_default_data RAILS_ENV="production"
apt-get remove pwgen subversion
RAILS_ENV="production" ./script/server  

And that’s it! Redmine is running on port 3000.
I did this on an EC2 instance and it works like a charm (ami-7cfd1a15).
Maybe the next article will discuss running Redmine under Mongrel or Apache, and writing an init script so Redmine starts at boot!

Comments (2)

Biggest file in a tree

There are plenty of solutions for finding the biggest files in a tree on Unix.
I used to use
find . -type f -exec du -sk {} \; | sort -nrk1
until I found it too slow on a really big partition with a lot of files. It’s slow because the -exec option of find forks for each file, and du re-fetches the inode for every file (IIRC the inodes should be in the buffer cache, so the really expensive part is the forking).
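
As an aside, the “+” form of -exec (POSIX, supported by GNU find) batches many files into each du invocation, which already removes most of the forks:

find . -type f -exec du -sk {} + | sort -nrk1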

Now I usually use this command, which is much more efficient (the cost depends essentially on the number of files):
find . -type f -ls | awk '{print $7, $11}' | sort -nrk1
(in find -ls output, field 7 is the size in bytes and field 11 is the file name)

Conclusion: Fork is expensive 🙂
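
If you want to measure it on your own tree, timing the two variants side by side makes the fork cost obvious (numbers depend entirely on your filesystem and cache):

time find . -type f -exec du -sk {} \; > /dev/null
time find . -type f -ls > /dev/null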

UPDATE:
As my friend nikoteen said in the comments of this post, there is a better solution (I’ve added -nr so sort orders numerically, biggest first):
find -ls | sort -nrk7
The thing is, Unix people are (oh, sorry, I am) used to using certain commands with their usual arguments: I often type ls -lart, tar zcvf, netstat -atnup. And sort -nrk1 is one of those reflexes. That’s why I was writing silly awk | sort pipelines rather than just pointing sort at the right field. So guys, use this command:
find -ls | sort -nrk7
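
And if you only care about the few biggest files, tack a head on the end:

find -ls | sort -nrk7 | head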

Comments (1)

copying databases

Similar to my last tip (copying a directory with ssh and tar), you can also copy databases. It’s pretty simple; here is my magic command:
mysqldump -ppassword db | ssh user@remote "cat - | mysql -u dbuser -ppassword db"

Here, you can also gzip or bzip2 the stream, and it should compress very well: mysqldump output is plain-text SQL, so gzip and bzip2 will easily find good patterns for compression.
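
For example, the gzip variant would look like this:

mysqldump -ppassword db | gzip | ssh user@remote "gunzip | mysql -u dbuser -ppassword db"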

Also, as usual, if the credentials are in your my.cnf files, you don’t need the -ppassword parameters.
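
For the record, a minimal ~/.my.cnf for that looks like this (placeholder values, and keep it chmod 600):

[client]
user=dbuser
password=yourpassword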

Comments (2)

scp with little files, use ssh and tar

scp is slow at copying a lot of little files. That’s probably because it creates an ssh connection and then runs in a loop: open a file locally, read it, send it, create the file remotely, and start over. I think the cost of opening and creating all those files is high, and tar is a lot more efficient at it (if you know exactly why, please post a comment). That’s why I often combine ssh with tar to copy a directory containing a lot of little files (say, a website with a ton of 10kB PHP files).

To achieve this, where I typically use:
scp -r a-directory/ user@host:
I would use:
tar cfz - a-directory | ssh user@host "tar zxvf -"

First, I create a tar archive that outputs to stdout (tar cf -, where the “-” means write to stdout), then I pipe the output into an ssh connection that executes a command reading stdin and passing it to tar (tar xvf -, where “-” is stdin).

Of course, you can remove the “z” from both tar invocations, so the content won’t be compressed with gzip, or replace it with “j” to use bzip2 (with GNU tar).

that would be:

tar cf - a-directory | ssh user@host "tar xvf -"
or:
tar cfj - a-directory | ssh user@host "tar jxvf -"

Also, tar knows how to write its archive on a remote host:
tar cvfz user@host:file.tar a-directory
(with GNU tar you may need --rsh-command=/usr/bin/ssh, since it uses rsh by default for remote archives). But this method has the drawback of creating a tar file remotely instead of copying the directory: you still need to unpack the tar on the other side.
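
So to actually finish the copy, you would have to follow up with something like:

ssh user@host "tar xzf file.tar && rm file.tar"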

Now some numbers. This is not a benchmark, just an indication: I had the idea of writing this article while copying a directory I needed to copy anyway. Numbers may vary with the size of the files, the number of files, local CPU, remote CPU, network bandwidth and probably other parameters:


# du -sh a-directory
109M a-directory
Read the rest of this entry »

Comments (3)