Posts Tagged linux

Installing redmine 0.8 on intrepid (ubuntu 8.10)

I’ve successfully installed Redmine fairly easily, but I had to figure out which packages to install with apt, which ones with gem, and which versions …
Here is my magic recipe to install it all:

apt-get update 
apt-get install subversion mysql-server rubygems rake pwgen
# next line generates a password for the database
export PASSWORD=`pwgen -nc 8 1`
gem install -v=2.1.2 rails
echo "CREATE DATABASE redmine  DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ; GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost' IDENTIFIED BY '$PASSWORD' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql 
cd /opt/
svn export http://redmine.rubyforge.org/svn/branches/0.8-stable redmine-0.8
cd redmine-0.8/
cat <<EOF >> config/database.yml
production:
  adapter: mysql
  socket: /var/run/mysqld/mysqld.sock 
  database: redmine
  host: localhost
  username: redmine
  password: $PASSWORD
  encoding: utf8

EOF
rake db:migrate RAILS_ENV="production"
rake redmine:load_default_data RAILS_ENV="production"
apt-get remove pwgen subversion
RAILS_ENV="production" ./script/server  

And that’s it! Redmine is running on port 3000.
I did this on an EC2 instance and it works like a charm (ami-7cfd1a15).
Maybe the next article will discuss running Redmine in Mongrel or Apache, and creating an init script so Redmine starts on boot!



Selecting a range of lines within a file

Let’s say you want to extract part of a file, for example lines 12 through 20.
I’ve come up with two solutions:

  • head -n20 | tail -n9
    You take the first n lines, where n is the last line you want, then you go backward by the total number of lines in the range, that is: 20−12+1=9 (the range is inclusive on both ends).
  • A nicer, more direct solution (use the right tool for the job!):
    sed -n '12,20p'
    You need the -n option, so that the input is not printed to the output by default, then give sed an expression (within quotes): the first line, a comma, the last line, and the “p” command, which means print.
    This solution doesn’t require you to calculate the number of lines you will get, which I find nicer!
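To convince yourself the two commands agree, here is a quick check (the file names under /tmp are just examples):

```shell
# Generate a 30-line sample file, then extract lines 12-20 both ways
seq 1 30 > /tmp/sample.txt
head -n20 /tmp/sample.txt | tail -n9 > /tmp/head_tail.txt
sed -n '12,20p' /tmp/sample.txt > /tmp/sed.txt
# diff prints nothing and succeeds when the two extracts are identical
diff /tmp/head_tail.txt /tmp/sed.txt && echo "same output"
```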


ssmtp and gmail or google apps

Unix systems often need a local mailer, but configuring and maintaining a full mailer on each system is a waste of time.
You might have a Gmail or Google Apps account. If that’s the case, you can easily configure a mailer on your systems that relays through it. To do so, I’ve used ssmtp and put this in /etc/ssmtp/ssmtp.conf:

root=postmaster
mailhub=smtp.gmail.com:587
AuthUser=your-mail@yourdomain.com
AuthPass=aStr4angeP45s
UseSTARTTLS=YES
hostname=the-hostname

That’s it: simple, effective, working …
To improve things, we could use the IP address of the SMTP server, so that mail still works if our DNS server is down; but this has a drawback: if the server behind that IP address changes or is temporarily down, you don’t have mail anymore.
ssmtp doesn’t seem to support several mailhubs!
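One related file worth knowing about: ssmtp can rewrite the From address of local accounts via /etc/ssmtp/revaliases, so mail from root shows your real address. A minimal sketch, assuming the same example account as above:

```
# /etc/ssmtp/revaliases
# format: local-account:outgoing-address:mailhub[:port]
root:your-mail@yourdomain.com:smtp.gmail.com:587
```

Gmail typically rewrites the sender to the authenticated account anyway, but this keeps locally generated headers consistent.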


watch your process !

I just discovered the watch command; it can be quite useful!
If you don’t know watch, it does what you would otherwise do like this:
while true ; do "your command" ; sleep 1 ; clear ; done
that is, it executes the same command in a loop, with a sleep so that it doesn’t hammer your CPU.
It also has nice options, for example --differences, which highlights only what changed between the current and the previous run.
“your command” could be a du or a df; --differences can be handy combined with an ls to monitor a directory …
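As a concrete (and bounded) stand-in for that loop, here is a sketch that shows df -h three times and then stops, where watch itself would keep refreshing until interrupted:

```shell
# Bounded equivalent of `watch -n1 df -h`: run df three times, one second apart
for i in 1 2 3; do
  df -h
  sleep 1
done
```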
Read the manpage and have fun ! 🙂


subnet ping scan in shell

Today I logged into a machine that I didn’t want to install anything on, but I needed to find another machine on its network.
I came up with this little shell script that scans the subnet:

CURR=1
SUBNET="192.168.0"

while [ $CURR -lt 255 ] ; do
  # redirect stdout first, then stderr; -t1 is a 1-second timeout on OSX (use -W1 on Linux)
  ping -c1 -t1 $SUBNET.$CURR >/dev/null 2>&1
  if [ "$?" -eq "0" ]; then
    echo "$SUBNET.$CURR"
  fi
  let CURR=$CURR+1
done

This script is suboptimal, but it does the job: it pings with a timeout of 1 second, so if no machine is up the scan takes around 255 seconds; it doesn’t list machines that don’t reply to ping; and so on … but as I said, it does the job.

I tested this script in Linux and OSX.


Crossover Chromium: google’s chrome browser for osx and linux

Codeweavers, the company famous for releasing Crossover (a kind of Windows emulator for Linux and OSX, based on the famous Wine project), has released Crossover Chromium, a package containing just enough of Crossover plus a build of Chromium to run it. There is an OSX and a Linux build. Chromium is the Google project on which the Chrome browser is based. Codeweavers released the packages 11 days after Google Chrome came out. This is a proof of concept: Wine and Crossover have become mature and Codeweavers are reactive, but be careful, using Crossover Chromium as a day-to-day browser isn’t advised for now (I think I’ll try anyway!).

Get the OSX package here, and the Ubuntu 32-bit package here, and have fun!

Update: on OSX, my Chromium keeps crashing; I think this might be because of my firewall. On Linux the windowing is quite slow, but it works for me.


My 4 varnish tips

Varnish is a reverse proxy; if you don’t know Varnish, this article won’t be interesting to you 😉 .

Here are my 4 little tips that greatly improve the efficiency of the caching policy:

Remove tracking arguments: this yields a single cache entry for different URLs that generate the same content (I use “gclid” as the tracking argument, since that is what Google uses). Use this as the hashing routine:

sub vcl_hash {
  set req.hash += regsub(req.url, "\?gclid.*", "");
  hash;
}

Then we can normalize compression (different browsers use different strings for the “Accept-Encoding” header). Add the following in sub vcl_recv:

if (req.http.Accept-Encoding) {
  if (req.http.Accept-Encoding ~ "gzip") {
    set req.http.Accept-Encoding = "gzip";
  } elsif (req.http.Accept-Encoding ~ "deflate") {
    set req.http.Accept-Encoding = "deflate";
  } else {
    remove req.http.Accept-Encoding;
  }
}

Once a cookie has been generated, all subsequent requests for any object carry that cookie, so we should remove the cookie for all static content.
In sub vcl_recv add this:

if (req.url ~ "\.(js|css|jpg|png|gif|mp3|swf|flv|xml|html|ico)$") {
  remove req.http.cookie;
}

Be careful with files with these extensions that generate dynamic content (png, jpg or gif files for captchas, html rewritten to php or aspx …).

To track the client IP address in the log of your web server (the real one, the backend), add this in sub vcl_recv:

remove req.http.X-Forwarded-For;
set req.http.X-Forwarded-For = client.ip;

Then you can log the “X-Forwarded-For” header in your log (how to do this depends on your web server; I do it with Apache and lighttpd).

