Unit-Tests with Fortran using pFUnit (supports MPI)

Minimum requirements

The current master can only be built with unreleased gcc versions (4.8.3 or 4.9). The recommended solution is to use pFUnit 2.1.x, which is what I will do in this tutorial.

I used gcc 4.8.1.

Getting the framework

git clone git://pfunit.git.sourceforge.net/gitroot/pfunit/pfunit pFUnit
cd pFUnit
git checkout origin/pfunit_2.1.0

Building and testing pFUnit
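
Export the compiler variables first, otherwise the build fails with cryptic errors (see "Common errors" below):

export F90=gfortran
export MPIF90=mpif90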

make tests MPI=YES
make install INSTALL_DIR=/opt/pfunit

Testing if the setup and installation succeeded

From the top-level directory of the git checkout do:

cd Examples/MPI_Halo
export PFUNIT=/opt/pfunit
export MPIF90=mpif90
make
make -C /somepath/pFUnit/Examples/MPI_Halo/src SUT
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make[1]: Nothing to be done for `SUT'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make -C /somepath/pFUnit/Examples/MPI_Halo/tests tests
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
make[1]: Nothing to be done for `tests'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
mpif90 -o tests.x -I/home/jonas/data/programs/pfunit/mod -I/home/jonas/data/programs/pfunit/include -Itests /home/jonas/data/programs/pfunit/include/driver.F90 /somepath/pFUnit/Examples/MPI_Halo/tests/*.o /somepath/pFUnit/Examples/MPI_Halo/src/*.o -L/home/jonas/data/programs/pfunit/lib -lpfunit -DUSE_MPI 
mpirun -np 4 ./tests.x
.......F...F
Time:         0.002 seconds
  
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=0)
  
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=2)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <3> (PE=0)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <2> (PE=1)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <1> (PE=2)
  
 FAILURES!!!
Tests run: 10, Failures: 2, Errors: 0

The output should look like the one above. The failing tests fail intentionally. If there are compile errors, go fix them.

More Examples

More examples can be found in the Examples directory. They are all nice, small, and self-explanatory.

Common errors

If you forget to export the compiler variables:

export F90=gfortran
export MPIF90=mpif90

You will receive errors like these:

...
make[1]: c: Command not found
...
make[1]: o: Command not found
...

Securely wiping your hard disk

This is a little FAQ about securely wiping your hard disk.

Why is deleting the files not enough (e.g. rm -rf *)?

Because this removes only the metadata needed to find the data; the data itself is still there and could be recovered by scanning the disk. Imagine a book where you rip out the table of contents: you can no longer find a chapter by looking up its page number, but you can still flick through the whole book and stop when you find what you are looking for.

Is filling the disk with zeros enough, or do I have to use random numbers? And how often do I have to rewrite my hard disk?

Magnetic Discs

The amount of bullshit, half-truths and personal opinions out there is amazing, and when you look for scientific research the results are thin. I found a paper whose authors did some pretty intense tests, and the results are surprising (surprising in contrast to all the opinions out there):

Overwriting Hard Drive Data: The Great Wiping Controversy | Craig Wright, Dave Kleiman, and Shyaam Sundhar R.S.

The short answer is: a single pass of zeros completely and securely erases your hard drive, in such a way that recovery is not possible even with special tools, e.g. an electron microscope.

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

Zero-filling does not work for SSDs. You have to use the Secure Erase feature that every SSD has. Have a look here:

http://wiki.ubuntuusers.de/SSD/Secure-Erase

What tools should I use?

Magnetic Discs

The maintenance tools of all hard disk vendors have an option to zero-fill the disk. Under Linux you can use the tool dd to zero-fill a disk:

 dd if=/dev/zero of=/dev/sdX bs=4096

To query dd's status you can send it the SIGUSR1 signal; e.g. this sends the signal to all running dd processes:

#> kill -SIGUSR1 $(pidof dd)
320+0 records in
320+0 records out
335544320 bytes (336 MB) copied, 18.5097 s, 18.1 MB/s
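
With newer versions of coreutils (8.24 and later, so newer than what was current when this was written) dd can also report progress by itself:

dd if=/dev/zero of=/dev/sdX bs=4M status=progress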

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

As noted above, zero-filling does not work for SSDs; you have to use the Secure Erase feature that every SSD has. Have a look here:

http://wiki.ubuntuusers.de/SSD/Secure-Erase
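
The linked wiki describes the procedure in detail. As a rough sketch (my summary, not taken from the wiki), the usual ATA Secure Erase sequence with hdparm looks like the following; /dev/sdX and the password "p" are placeholders, and the drive must not be in the "frozen" state:

hdparm -I /dev/sdX | grep -i frozen
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX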

I only want to overwrite one partition, but my system freezes and I can’t work anymore during the wipe.

This limits the write speed a bit, but you can keep working during the wipe (which of course only makes sense if you are not wiping the whole disk):

echo 15000000 > /proc/sys/vm/dirty_bytes
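
The same setting can also be applied through sysctl (as root; it resets at reboot unless you persist it in /etc/sysctl.conf):

sysctl -w vm.dirty_bytes=15000000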

For all the background on the dirty-pages flush, have a look here:

http://serverfault.com/questions/126413/limit-linux-background-flush-dirty-pages


Securing ejabberd on Debian Wheezy (7.0): Bind epmd to localhost (127.0.0.1)

Ejabberd is a nice and (in theory) easy-to-set-up Jabber server. However, during setup I came across some WTFs I want to share.

What is epmd?

epmd is a small name server used by Erlang programs when establishing distributed Erlang communications. ejabberd needs epmd to use ejabberdctl and also when clustering ejabberd nodes. If ejabberd is stopped, and there aren't any other Erlang programs running on the system, you can safely stop epmd.

  • epmd is started along with ejabberd, but as other Erlang programs might use it, it keeps running even if ejabberd is stopped
  • epmd’s default setup is to listen on ALL INTERFACES

To me this seems an undesirable default behaviour of the Debian package, which can easily be fixed:

Bind epmd to 127.0.0.1

Add the following line to the end of /etc/default/ejabberd to make epmd listen on localhost only. The "export" is important; without it this won't work.

export ERL_EPMD_ADDRESS=127.0.0.1
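
To verify the change after restarting ejabberd, check which address epmd is bound to (epmd listens on TCP port 4369):

netstat -tlnp | grep 4369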

ejabberd looks up the hostname and tries to connect to the resulting IP. If you have a DNS hostname, it normally does not resolve to 127.0.0.1. So you have to add both the short name and the FQDN of your server to your local /etc/hosts file.

Find the shortname and fqdn:

# shortname
$> hostname -s
foo
# fqdn
$> hostname
foo.bar.local

Now add to /etc/hosts:

127.0.0.1  foo foo.bar.local
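
You can check that both names now resolve correctly with getent, which resolves names the same way most programs do:

getent hosts foo foo.bar.local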

Stop epmd with ejabberd

Add the following lines to the stop() function in /etc/init.d/ejabberd (the numbers just show where they sit in my copy of the init script):

 70 stop()
 71 {
....
 84         echo -e "\nStopping epmd: "
 85         epmd -kill
...

Boosting Audio Output under Ubuntu Linux

I often had the problem that I wanted to watch a movie or listen to some audio file with background noise, wanted to turn up the volume, and it was already at 100%. I thought it should be possible to push the signal beyond 100% and decide for myself whether it clips or distorts. And for people using PulseAudio there is a very easy solution.

Just install the tool paman

sudo apt-get install paman

Now you can boost the audio up to 500% volume. For me, 150% was usually enough ;).
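
If you prefer the command line, the same boost works with pactl; note that the @DEFAULT_SINK@ shorthand does not exist on older PulseAudio releases (use the sink index from "pactl list short sinks" there):

pactl set-sink-volume @DEFAULT_SINK@ 150%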



Encrypted off-site backup with ecryptfs

I was looking for a method to back up my data encrypted. Of course plenty of possibilities exist, but most of them either encrypt a container or a complete partition, or seemed complicated to set up. I did not want container or partition encryption because I fear that if the media is corrupted, or something goes wrong during a network transfer, all my data might become inaccessible. With file-based encryption I take almost the same risk as without encryption: even if I lose some files to corruption, I can still decipher the rest of the data.

Finally I chose eCryptfs because it is file-based encryption that also encrypts the filenames, and it is very easy to set up and use. On its homepage it advertises itself as "a sort of 'gnupg as a filesystem'", and that is basically what I was looking for. It stores all meta information in the file itself, so you can recover a file when you have the file and the encryption parameters (which are few and easy to back up).

So let's get started. I ciphered a test file on Ubuntu 12.04.1 and deciphered it successfully under Debian 7.0.

First you have to install the tools, which is very easy using apt (the same on both Ubuntu and Debian):

sudo apt-get install ecryptfs-utils

Then create a new directory (which will be encrypted), mount it, and enter the parameters eCryptfs asks for:

mkdir -p /home/ecrypttest/encrypted
mount -t ecryptfs /home/ecrypttest/encrypted/ /home/ecrypttest/encrypted/
Passphrase: 
Select cipher: 
 1) aes: blocksize = 16; min keysize = 16; max keysize = 32 (loaded)
 2) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 (not loaded)
 3) cast6: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
 4) cast5: blocksize = 8; min keysize = 5; max keysize = 16 (not loaded)
Selection [aes]: 
Select key bytes: 
 1) 16
 2) 32
 3) 24
Selection [16]: 
Enable plaintext passthrough (y/n) [n]: n
Enable filename encryption (y/n) [n]: y
Filename Encryption Key (FNEK) Signature [9702fa8eae80f468]: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs

The filename encryption key (FNEK) signature will be generated for you and will be different from mine. Just copy & paste the parameters into a text file; we will need them later for deciphering.
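
A convenient way to record exactly these parameters (my own habit, not something the eCryptfs tools require) is to save the active mount options while the directory is mounted:

grep ecryptfs /proc/mounts > ~/ecryptfs-mount-options.txt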

Now enter the directory and create a test file:

cd /home/ecrypttest/encrypted/
echo "hello ecryptfs" > ecrypttest.txt
cat ecrypttest.txt
hello ecryptfs

If everything is fine, unmount the encrypted filesystem:

cd ..
umount /home/ecrypttest/encrypted

Now copy the file to your remote computer to try to recover it there. Of course you can recover your file anywhere you want, including on the same computer where you encrypted it. This is just to prove that it works on another box without copying anything other than the file and the mount parameters.

scp /home/ecrypttest/encrypted/ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59--- root@yourremotehost.com:/tmp/ecrypt

Log into your remote computer and verify that the file is there. Then mount the folder in decrypted mode. You need the parameters from above, from when you created the first mount; if you used the defaults for the rest, it is basically only the FNEK signature.

ls -lah /tmp/ecrypt/*
-rw-r--r-- 1 root       root        12K Aug  4 23:04 ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59---

cd /tmp
mount -t ecryptfs /tmp/ecrypt/ /tmp/ecrypt/ -o ecryptfs_unlink_sigs,ecryptfs_fnek_sig=9702fa8eae80f468,ecryptfs_key_bytes=16,ecryptfs_cipher=aes,ecryptfs_sig=9702fa8eae80f468,ecryptfs_passthrough=n
Passphrase: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs
cd /tmp/ecrypt
cat ecrypttest.txt
hello ecryptfs

Voilà, everything worked fine. Now unmount the encrypted directory, and you can safely copy your encrypted data wherever you want.


Importing tpc-h testdata into mongodb

As written in a former post, TPC-H offers an easy possibility to generate various amounts of test data. Download dbgen from this website and compile it: http://www.tpc.org/tpch/

Now run:

./dbgen -v -s 0.1

This should leave you with some *.tbl files (PIPE-separated CSV files). Now you can use my scripts to convert them into JSON and import them into MongoDB.
I already packed some generated files into the archive and added the headers, so you don't have to generate the tbl files yourself. You only have to adjust the load_into_mongodb.sh script so that it loads into the correct database (if "test" is not OK for you).

If you use your own generated tbl files, you have to run create_mongodb_headers.sh first.

mongodb_tpch.tar.bz2

tar -xjvvf mongodb_tpch.tar.bz2
cd mongodb_tpch
./convert_to_json.sh
./load_into_mongodb.sh

The default script imports the data into the db "test", with collections named after the TPC-H tables.
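
To check that the import worked you can count the documents in one of the collections, e.g. for customer:

mongo test --eval 'db.customer.count()'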


Importing large csv-files into mongodb

I wanted to import some dummy data into MongoDB to test the aggregation functions. I thought a nice source would be the TPC-H test data, which can generate arbitrary volumes of data from 1 GB to 100 GB. You can download the data generation kit from the website: http://www.tpc.org/tpch/

In the generated CSV files the header is missing, but you can find the column names in the PDF. For the customer table it is:

custkey|name|address|nationkey|phone|acctbal|mktsegment|comment

The MongoDB import possibilities are very limited. Basically you can only import COMMA-separated (or TAB-separated) values, and if the lines contain commas inside the data, that fails too. So I wrote a little Python script which converts CSV data to the mongoimport JSON format; the first line of the CSV file has to contain the column names. mongoimport expects a special JSON format: one document per line, with no commas or square brackets between the documents. (You can also import JSON arrays, but their size is very limited.) In the following lines I prepare the TPC-H file with headers, convert it to JSON, and then import it into MongoDB:

echo "custkey|name|address|nationkey|phone|acctbal|mktsegment|comment" > header_customer.tbl
cat header_customer.tbl customer.tbl > customer_with_header.tbl
./csv2mongodbjson.py -c customer_with_header.tbl -j customer.json -d '|'
mongoimport --db test --collection customer --file customer.json

For a CSV file with 150,000 lines the conversion takes about 3 seconds.

Converting CSV-Files to Mongo-DB JSON format

csv2mongodbjson.py

#!/usr/bin/python
import csv
from optparse import OptionParser

# converts one array of csv columns to a mongoimport json line;
# backslashes and double quotes are escaped so the output stays valid json
def convert_csv_to_json(csv_line, csv_headings):
    json_elements = []
    for index, heading in enumerate(csv_headings):
        value = unicode(csv_line[index], 'UTF-8')
        value = value.replace('\\', '\\\\').replace('"', '\\"')
        json_elements.append(heading + ': "' + value + '"')
    return "{ " + ', '.join(json_elements) + " }"

# parsing the command line options
parser = OptionParser(description="parses a csv-file and converts it to mongodb json format. The csv file has to have the column names in the first line.")
parser.add_option("-c", "--csvfile", dest="csvfile", action="store", help="input csvfile")
parser.add_option("-j", "--jsonfile", dest="jsonfile", action="store", help="json output file")
parser.add_option("-d", "--delimiter", dest="delimiter", action="store", help="csv delimiter")

(options, args) = parser.parse_args()

# the first csv line holds the column headings
csvreader = csv.reader(open(options.csvfile, 'rb'), delimiter=options.delimiter)
column_headings = csvreader.next()
jsonfile = open(options.jsonfile, 'wb')

# convert the remaining lines one by one, one json document per output line
while True:
    try:
        csv_current_line = csvreader.next()
        json_current_line = convert_csv_to_json(csv_current_line, column_headings)
        print >>jsonfile, json_current_line.encode('UTF-8')
    except csv.Error as e:
        print "Error parsing csv: %s" % e
    except StopIteration:
        print "=== Finished ==="
        break

jsonfile.close()

Fix sluggish mouse in Ubuntu 12.04 LTS

For some time now I have had the problem with my Dell Latitude E6510 laptop that when I plug in a USB mouse, the mouse is really slow and sluggish. Usually a reboot fixes this, but that is very inconvenient. Today I tried some googling again and at least found a workaround: restarting the USB services without rebooting, which usually fixes the mouse.

Find the device IDs of your USB hubs with lspci:

lspci | grep -i usb
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

I wrote this little script, but of course you can also execute the commands directly on the command line. In that case make sure you have another keyboard besides the one connected via USB, because after the unbind it will not work anymore until the rebind. (If you execute the commands from a script, or in one line separated with ;, it should be no problem, as the rebind is triggered directly after the unbind without further keyboard involvement.)

Adjust the device numbers according to your lspci listing:

#!/bin/bash
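# unbind and immediately rebind both EHCI USB controllers to reset them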
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/bind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/bind

Ubuntu Upgrade to 12.04 LTS -> Libreoffice not working anymore

After an update session of several hours, from Ubuntu 11.04 via 11.10 to 12.04 LTS, I wanted to start using LibreOffice, but it terminated right after start:

$> loimpress
terminate called after throwing an instance of 'com::sun::star::uno::RuntimeException'

As root it started without problems. After some googling and a look at the gdb trace I found the solution to my problem: something had gone wrong with the migration of the config files from the previous version, so I just deleted them. It is not very elegant, but it worked for me, and as I had not made any special settings in LibreOffice it was not painful.

Caution! You will lose all LibreOffice settings with this method.

For me the important part was deleting the .ure directory; after that it worked.

$> cd ~
$> sudo rm -rf .libreoffice
$> sudo rm -rf .openoffice.org
$> sudo rm -rf .config/libreoffice
$> sudo rm -rf .ure
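
If you want to be able to roll back (my suggestion, not part of the original workaround), rename the directories instead of deleting them, e.g.:

$> mv ~/.libreoffice ~/.libreoffice.bak
$> mv ~/.ure ~/.ure.bak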

Ubuntu 12.04 Gnome Classic Panel Right-Click does not work

As I was looking for this for a really long time, I am reposting it:

http://askubuntu.com/questions/66414/how-to-add-panel-applets-to-classic-gnome-panel

With the new GNOME, when using a classic session you have to press META + ALT + RightClick to access the panel menu. In my case META is "Alt Gr". So try this:

  • ALT + RightClick (if it doesn't work, try the next one)
  • Alt Gr + Alt + RightClick