Two useful MySQL queries

To analyse a database's content, I have used these two queries several times:

  • Number of columns for all tables in a database:
    select TABLE_NAME, count(TABLE_NAME) from information_schema.columns where TABLE_SCHEMA = 'dbname' group by TABLE_NAME order by count(TABLE_NAME);
  • Number of rows for all tables in a database:
    select table_name, table_rows from information_schema.tables where table_schema = 'dbname' order by table_rows;
  • These are simple but effective ways to find out where most of your data lives when your database starts to get slow.
    They can also help you find a partitioning or sharding strategy.
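The two queries above can also be combined into one. This is only a sketch (it assumes MySQL 5.0 or later, where information_schema is available; replace dbname with your schema name):

```sql
SELECT c.TABLE_NAME,
       COUNT(*)     AS column_count,
       t.TABLE_ROWS AS row_estimate
FROM   information_schema.COLUMNS c
JOIN   information_schema.TABLES  t
       ON  t.TABLE_SCHEMA = c.TABLE_SCHEMA
       AND t.TABLE_NAME   = c.TABLE_NAME
WHERE  c.TABLE_SCHEMA = 'dbname'
GROUP  BY c.TABLE_NAME, t.TABLE_ROWS
ORDER  BY t.TABLE_ROWS;
```

Keep in mind that TABLE_ROWS is only an estimate for InnoDB tables; run SELECT COUNT(*) against a table when you need an exact number.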

subversion hooks and “light” continuous integration

I recently made something really simple which developers found really useful:
When a developer commits some code to a Subversion repository, they want to test whether it works on an integration platform.
In my case this was a PHP project, so no compilation; all I needed to do was copy the files to the right place!
Actually, the right place for me was on another machine.

So, how do you copy files to a remote machine automatically when developers commit some new code?
Easy: use hooks.

In your Subversion repository you will find a directory called “hooks”; all the files in this directory are named something.tmpl, and they are examples/templates.

What I did was put this content in post-commit:

REPOS="$1"
REV="$2"
RESULT=`/usr/bin/svnlook dirs-changed "$REPOS" -r "$REV" | grep -e "the/dir/in/the/repo/project"`
if [ ! -z "$RESULT" ]; then
    /usr/bin/svn export file:///var/lib/svn/the/dir/in/the/repo/project /tmp/project
    /usr/bin/scp -i /home/apache/id_dsa -r /tmp/project/* user@host:/var/www/project
    /bin/rm -rf /tmp/project
fi

Then I needed to do a few more things:

  1. chmod +x post-commit
  2. I also needed to generate SSH keys, put the private key in /home/apache/id_dsa, make it owned by the user running Subversion (for me apache, because I’m using Apache’s mod_dav_svn), and copy the public key to the remote user’s authorized_keys file …
  3. Accept the remote host’s key as the user running the post-commit script (su www-data, then ssh host and accept the key)
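To see what the gating part of the hook does without a Subversion server, here is a tiny stand-alone sketch: the svnlook output is faked with printf, and the repository path is the same placeholder as above.

```shell
# Simulate the output of `svnlook dirs-changed` for two commits and run
# the same grep test the hook uses to decide whether to deploy.
check_commit() {
    # $1 stands in for the svnlook dirs-changed output of one commit
    RESULT=$(printf '%s\n' "$1" | grep -e "the/dir/in/the/repo/project")
    if [ ! -z "$RESULT" ]; then
        echo "deploy"
    else
        echo "skip"
    fi
}

check_commit "the/dir/in/the/repo/project/src"   # a commit touching the project
check_commit "some/other/dir"                    # an unrelated commit
```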

If you make a mistake in your post-commit script (like I did the first time I tried), you get the error in the output when you commit the code!

Easy, fun and nice!

limits, open files, pam, daemons and scalability

Be careful, some daemons have quite an extensive activity!
It depends on the daemon (its purpose, its workload), but some open a lot of files, some use a lot of network connections, some both (and some take a lot of CPU, others a lot of RAM) …
Anyway, on Unix any process is limited by the system through the ulimit mechanism, and one of these “annoying” limits is the maximum number of file descriptors, that is, the maximum number of open files a single process can use, including network connections (on Unix, a network connection has a file descriptor associated with it).

Depending on your daemon and your case, you sometimes need to get rid of these limits (be sure you really need to get rid of them). To do so, on Ubuntu, I have:

  • Added the following to /etc/security/limits.conf:
    nobody soft nofile 4096
    nobody hard nofile 63536
  • Added the following to /etc/pam.d/su:
session required pam_limits.so

Here, I have given the user nobody the right to open 63536 files PER PROCESS (yes, per process, not for the whole user).
I added this to /etc/pam.d/su because when you run a daemon from an init script, it uses the su mechanism to run as the user it’s configured for.
I added the user nobody because, unfortunately, my running daemon is memcached, which runs as nobody (it should probably run as a memcache user).

Be careful, sometimes you think you need this but you don’t: I wanted to increase the number of files the apache user could open, but it wasn’t needed. My Apache was running with forks (because PHP needs forking), and every new connection spawns a new process which gets its own 1024 file descriptors. Apache as a whole was opening a lot more than 1024 files (I had a limit of 1100 forks, which I reached several times), but no single process was opening more than 1024 files.
You can check how many files a process has opened with lsof (which stands for LiSt Open Files):
lsof -p 3423 -n | wc -l, where 3423 is the PID of the process you want to inspect (“-n” means don’t do DNS resolution: lsof lists sockets and tries to put a name to every address, and a server process generally accepts a lot of connections from a lot of different hosts. Without “-n”, lsof is really slow).
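On Linux you can also look at the limits and the current descriptor usage without lsof, straight from the shell and /proc (a quick sketch):

```shell
# Per-process file-descriptor limits of the current shell (soft and hard)
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"

# Count the descriptors a process currently has open: "self" means this
# shell; replace it with a PID, same idea as `lsof -p PID -n | wc -l`
ls /proc/self/fd | wc -l
```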

That’s it, I hope you will make good use of these tips! 🙂

mysql running on multiple cpus

A little post about a tip that is rarely documented:
MySQL doesn’t use multiple CPUs with its default configuration. If you have several CPUs on your MySQL system, set:
thread_concurrency = n
in your my.cnf, where n is number_of_cores * 2.
It doesn’t change performance that much, because MySQL is usually not short on CPU but rather short on RAM or disk I/O, but it’s still a gain: the server can handle more concurrent requests.
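As a sketch, here is what the my.cnf fragment would look like on a hypothetical 4-core box (thread_concurrency is the classic name for this setting; it has been removed in recent MySQL versions, so check your server’s documentation before using it):

```
[mysqld]
# 4 cores * 2 = 8
thread_concurrency = 8
```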
(For general MySQL tuning I always recommend to use

Executing the same command on different servers: pssh

With pssh (parallel-ssh) you can execute the same command on different hosts.

Pssh is a simple Python script that uses pretty much no Python modules, so it’s simple to install (it’s also packaged, at least in Ubuntu).

To use pssh, you need to create a hosts file containing a list of hosts (one per line), each followed by the username to use on that host. Then just execute parallel-ssh -h hosts-file "command" and it will execute “command” on all the hosts in the given hosts-file. I copied my SSH public key everywhere, so I don’t need to type my password on any server; if you haven’t copied your key, pssh will prompt for a password.
Pssh has a --print option that prints the output of the command execution, host by host, in the shell you launched pssh from; if you don’t use that option, it creates one file per host with the result.
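What pssh does can be sketched in a few lines of plain shell. In this toy version ssh is stubbed out with a local sh -c so the sketch runs anywhere; swap run_remote for a real ssh call to get the real behaviour:

```shell
# Toy pssh: run the same command on every host listed in a file, in
# parallel, with one output file per host (like pssh without --print).
run_remote() { sh -c "$2"; }    # real version: ssh "$1" "$2"

hosts_file=$(mktemp)
printf 'host1\nhost2\n' > "$hosts_file"

while read -r host; do
    run_remote "$host" "echo hello from $host" > "/tmp/out.$host" &
done < "$hosts_file"
wait                            # wait for all the parallel jobs to finish

cat /tmp/out.host1 /tmp/out.host2
rm -f "$hosts_file"
```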

Pssh is really nice, but it would be even better if I could use the aliases from my .ssh/config as hostnames in my hosts-file. Maybe one day I’ll write a patch so pssh recognizes the hosts and users from your .ssh/config in your hosts-file. Nice tool anyway!

installing applications remotely on mac

To use a dmg disk image you would use:
hdiutil mount thefile.dmg
You then have a /Volumes/The\ Application directory.
In that directory you will usually find either an Application.app bundle or an Application.pkg package.

  • If it’s an .app bundle, just copy it to your /Applications/ directory:
    sudo cp -R /Volumes/The\ Application/Application.app /Applications/
  • If it’s a .pkg file, run:
    sudo installer -package Application.pkg -target /Volumes/Mac\ OS

That’s it !

Extract a table from a mysqldump

Today I encountered a problem: I needed to restore a single table from a database mysqldump.
Usually you do cat the-mysqldump.sql | mysql the_database, so you can only restore the full database. I didn’t find any mysqldump option to extract a single table from a full database dump, so I came up with this (minimal) shell script:


  extract_table () {
    TABLE=$1
    DUMPFILE=$2
    grepstr="/*!40000 ALTER TABLE \`$TABLE\`"
    lines=`grep -n "$grepstr" "$DUMPFILE" | cut -d":" -f1`
    lines=`echo $lines | sed 's/ /,/'`
    sed -n "$lines p" "$DUMPFILE"
  }

  extract_table $1 $2

Use it like this:
./ table-to-extract dumpfile-to-extract |mysql the_database (add the |mysql part only after you have checked the content).

Be careful, this script is minimalistic:

  • It doesn’t check whether the file exists and really is a mysqldump file
  • It doesn’t check if the table to extract exists
  • It doesn’t work if disable-keys is set to false in mysqldump
  • It doesn’t have a usage() function
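To see the grep/sed line-range trick at work, here is a self-contained check on a tiny fake dump (the table and file names are made up; the markers mimic what mysqldump emits when disable-keys is on):

```shell
# Build a two-table fake dump with the /*!40000 ALTER TABLE ...*/ markers
cat > /tmp/mini-dump.sql <<'EOF'
/*!40000 ALTER TABLE `users` DISABLE KEYS */;
INSERT INTO `users` VALUES (1);
/*!40000 ALTER TABLE `users` ENABLE KEYS */;
/*!40000 ALTER TABLE `posts` DISABLE KEYS */;
INSERT INTO `posts` VALUES (2);
/*!40000 ALTER TABLE `posts` ENABLE KEYS */;
EOF

# Same logic as the script: find the two marker lines for `posts` ...
lines=$(grep -nF '/*!40000 ALTER TABLE `posts`' /tmp/mini-dump.sql | cut -d: -f1)
lines=$(echo $lines | sed 's/ /,/')   # "4 6" becomes the sed range "4,6"
# ... then print everything between them, INSERTs included
extracted=$(sed -n "${lines}p" /tmp/mini-dump.sql)
echo "$extracted"
```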

If some people request it, I’ll add all these features, but as usual I wanted a solution I could already use an hour ago, and since I’m spending time writing this script, let’s do it as fast as I can! 🙂
