Guide to limits.conf / ulimit / open file descriptors under Linux

Why does Linux have an open file limit?

The open file limit exists to prevent a user or process from using up all resources on a machine. Every file descriptor uses a certain amount of RAM, so a malicious or malfunctioning program could bring down the whole server.

What is an open file?

The lsof manpage makes it clear:

An open file may be a regular file, a directory, a block special file,
a character special file, an executing text reference, a library, a
stream or a network file (Internet socket, NFS file or UNIX domain
socket.)

It is important to know that network sockets are also open files, because in a high-performance web environment lots of them are opened and closed frequently.

How is the limit enforced?

$> man getrlimit
...
The kernel enforces the open-file limit using the functions setrlimit and getrlimit. 

Newer kernels support the prlimit call to get and set various limits on running processes
...

$> man prlimit
...
The prlimit() system call is available  since  Linux  2.6.36.   Library
support is available since glibc 2.13.
...
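On systems with a recent util-linux you can use the prlimit command, which wraps this system call. A quick sketch (12345 is a placeholder PID):

$> prlimit --pid 12345 --nofile                # show the soft and hard RLIMIT_NOFILE
$> prlimit --pid 12345 --nofile=4096:8192      # set soft limit 4096, hard limit 8192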

What is ulimit?

When people refer to ulimit they usually mean the bash builtin ‘ulimit’, which sets various limits for the current shell and the processes started from it (not to be confused with the deprecated C library routine ulimit(int cmd, long newlimit)). Among other things it can be used to set the open file limit of the current shell.

The difference between soft and hard limits

The initial soft and hard limits for open files are set in /etc/security/limits.conf and applied at login by the PAM module pam_limits.so. The user can then modify the soft and hard limits using ulimit or the C functions. A regular user can never raise the hard limit; only root can raise it. The soft limit can be varied freely by the user as long as it stays below the hard limit. The value that triggers the “24: too many open files” error is the soft limit; it is only soft in the sense that it can be changed freely. A user can also lower the hard limit, but beware: it cannot be raised again in that shell.

ulimit Mini-Howto

ulimit -n            queries the current soft limit
ulimit -n [NUMBER]   sets the hard and the soft limit to the same value
ulimit -Sn           queries the current soft limit
ulimit -Sn [NUMBER]  sets the current soft limit
ulimit -Hn           queries the current hard limit (that's the maximum value you can set the soft limit to, unless you are root)
ulimit -Hn [NUMBER]  sets the current hard limit
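A common use is to raise the soft limit of the current shell up to its hard limit:

$> ulimit -Sn "$(ulimit -Hn)"
$> ulimit -Sn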

Are there other limits?

There is also a system-wide open file limit. This is the maximum number of files the kernel will keep open for all processes together.

$> man proc
...
       /proc/sys/fs/file-max
              This file defines a system-wide limit  on  the  number  of  open
              files  for  all processes.  (See also setrlimit(2), which can be
              used by a process to set the per-process  limit,  RLIMIT_NOFILE,
              on  the  number of files it may open.)  If you get lots of error
              messages about running out of file handles, try increasing  this
              value:

              echo 100000 > /proc/sys/fs/file-max

              The  kernel constant NR_OPEN imposes an upper limit on the value
              that may be placed in file-max.

              If you  increase  /proc/sys/fs/file-max,  be  sure  to  increase
              /proc/sys/fs/inode-max   to   3-4   times   the   new  value  of
              /proc/sys/fs/file-max, or you will run out of inodes.
...

Note: /proc/sys/fs/inode-max (only present until Linux 2.2)
This file contains the maximum number of in-memory inodes. This
value should be 3-4 times larger than the value in file-max,
since stdin, stdout and network sockets also need an inode to
handle them. When you regularly run out of inodes, you need to
increase this value.

Starting with Linux 2.4, there is no longer a static limit on
the number of inodes, and this file is removed.

To query the maximum possible limit, have a look at /proc/sys/fs/nr_open (this is only informational; normally a much lower limit is sufficient):

$> cat /proc/sys/fs/nr_open
1048576

Change the system-wide open files limit

Append or change the following line in /etc/sysctl.conf

fs.file-max = 100000

(replace 100000 with the desired number)

Then apply the changes to the running system with:

$> sysctl -p
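Verify that the new value is active (assuming the value 100000 from above):

$> sysctl fs.file-max
fs.file-max = 100000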

What does /proc/sys/fs/file-nr show?

$> man proc
       /proc/sys/fs/file-nr
              This (read-only)  file  gives  the  number  of  files  presently
              opened.  It contains three numbers: the number of allocated file
              handles; the number of free file handles; and the maximum number
              of file handles.  The kernel allocates file handles dynamically,
              but it doesn't free them again.   If  the  number  of  allocated
              files  is  close  to the maximum, you should consider increasing
              the maximum.  When the number of free  file  handles  is  large,
              you've  encountered a peak in your usage of file handles and you
              probably don't need to increase the maximum.

So basically the first value in /proc/sys/fs/file-nr is not the actual number of open files, but the number of file handles that have been allocated; on older kernels this is the past peak, because handles were never freed. The second value is the number of handles free for reuse, so allocated - free = actually open. This applies not only to regular files, but also to sockets.

From a newer manpage:

Before Linux 2.6,
the kernel allocated file handles dynamically, but it didn’t
free them again. Instead the free file handles were kept in a
list for reallocation; the “free file handles” value indicates
the size of that list. A large number of free file handles
indicates that there was a past peak in the usage of open file
handles. Since Linux 2.6, the kernel does deallocate freed file
handles, and the “free file handles” value is always zero.

$> cat /proc/sys/fs/file-nr 
512	0	36258
allocated  free    maximum
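A one-liner that prints the number of file handles actually in use (allocated minus free), which works on both older and current kernels:

$> awk '{print $1 - $2}' /proc/sys/fs/file-nr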

How is it possible to query the number of currently open file descriptors?

System wide

$> cat /proc/sys/fs/file-nr

Process

lsof also lists lots of entries which do not count towards the open file limit (e.g. anonymous shared memory areas, shown as /dev/zero entries). Querying
the /proc filesystem seems to be the most reliable approach:

$> cd /proc/12345
$> find . 2>&1 | grep '/fd/' | grep -v 'No such file' | sed 's#task/.*/fd#fd#' | sort | uniq | wc -l

If you want to try lsof, use this (the -n prevents hostname lookups and makes lsof much faster when there are lots of open connections):

lsof -n -p 12345 | wc -l

You can also pass a list of PIDs, e.g. for all php5-fpm processes, to lsof:

lsof -n -p "$(pidof php5-fpm | tr ' ' ',')" | wc -l
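A lightweight alternative without lsof is to count the fd entries in /proc for each process (a sketch, again using php5-fpm as the example):

for pid in $(pidof php5-fpm); do
    echo "$pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l)"
done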

Changing the ulimit for users

Edit /etc/security/limits.conf and add (or adjust) the following lines:

www-data soft nofile 8192
www-data hard nofile 8192

Set the soft limit equal to the hard limit, so you don't have to raise it manually as the user.

It is also possible to set a wildcard:

* soft nofile 8192
* hard nofile 8192

For root the wildcard will not work and extra lines have to be added:

root soft nofile 8192
root hard nofile 8192

I set my precious limits, I log out and log in again, but they are not applied

As said before, the limits in /etc/security/limits.conf are applied by the PAM module pam_limits.so.
The directory /etc/pam.d contains various files which manage the PAM settings for different commands.
If you don't log into your account directly, but change into it using su or execute a command using sudo, then the config specific to that program is loaded. Open the config and make sure the line that loads pam_limits.so is
not commented out, e.g. in /etc/pam.d/su:

session    required   pam_limits.so

Save and now the limits should be applied.
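To check that the limits really arrive in the target account, spawn a shell as that user and print its limits (a sketch; www-data is the example user from above and needs an explicit shell, since it normally has none):

$> su - www-data -s /bin/bash -c 'ulimit -Sn; ulimit -Hn'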

Program specific special cases

nginx

nginx has some special handling:

This is what applied to my Ubuntu Precise 12.04 test system; the init script seems to be buggy there.

  1. You can set the ulimit which nginx should use in /etc/default/nginx
  2. /etc/init.d/nginx restart does NOT apply the ulimit settings. The setting is only applied in the start-section of the init script. So you have to do /etc/init.d/nginx stop; /etc/init.d/nginx start to apply the new limit

There is a better, distribution-independent way to set the worker open files limit: the nginx config file itself.

Syntax:	worker_rlimit_nofile number;
Default:	—
Context:	main

Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes. Used to increase the limit without restarting the main process.
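Whichever way you set the limit, you can check what a running worker process actually got via its /proc entry (a sketch; the pgrep pattern assumes the workers show up as "nginx: worker process"):

for pid in $(pgrep -f 'nginx: worker'); do
    grep 'Max open files' /proc/$pid/limits
done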

Unit-Tests with Fortran using pFUnit (supports MPI)

Minimum requirements

The current master can only be built with unreleased gcc versions (4.8.3 or 4.9). The recommended solution is to use pFUnit 2.1.x, which I will do in this tutorial.

I used gcc 4.8.1.

Getting the framework

git clone git://pfunit.git.sourceforge.net/gitroot/pfunit/pfunit pFUnit
cd pFUnit
git checkout origin/pfunit_2.1.0

Building and testing pFUnit
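Export the compiler variables first (see also the Common errors section below); gfortran and mpif90 are what I used:

export F90=gfortran
export MPIF90=mpif90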

make tests MPI=YES
make install INSTALL_DIR=/opt/pfunit

Testing if the setup and installation succeeded

In the git main directory do:

cd Examples/MPI_Halo
export PFUNIT=/opt/pfunit
export MPIF90=mpif90
make
make -C /somepath/pFUnit/Examples/MPI_Halo/src SUT
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make[1]: Nothing to be done for `SUT'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make -C /somepath/pFUnit/Examples/MPI_Halo/tests tests
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
make[1]: Nothing to be done for `tests'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
mpif90 -o tests.x -I/home/jonas/data/programs/pfunit/mod -I/home/jonas/data/programs/pfunit/include -Itests /home/jonas/data/programs/pfunit/include/driver.F90 /somepath/pFUnit/Examples/MPI_Halo/tests/*.o /somepath/pFUnit/Examples/MPI_Halo/src/*.o -L/home/jonas/data/programs/pfunit/lib -lpfunit -DUSE_MPI 
mpirun -np 4 ./tests.x
.......F...F
Time:         0.002 seconds
  
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=0)
  
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=2)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <3> (PE=0)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <2> (PE=1)
  
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <1> (PE=2)
  
 FAILURES!!!
Tests run: 10, Failures: 2, Errors: 0

The output should look like the one above. The test failures are intentional. If there are compile errors, go fix them.

More Examples

More examples can be found in the Examples directory. They are all nice, small and self-explanatory.

Common errors

If you forget to export the compiler variables:

export F90=gfortran
export MPIF90=mpif90

You will receive these errors:

...
make[1]: c: Command not found
...
make[1]: o: Command not found
...

Secure wiping your harddisk

This is a little FAQ about securely wiping your harddisk.

Why is deleting the files not enough (e.g. rm -rf *)?

Because this only removes the meta-data needed to find the data; the data itself is still there and could be recovered by scanning the disk. Imagine it like a book where you rip out the table of contents. You can't find a chapter by looking up the page number, but you can flick through the whole book and stop when you find what you are looking for.

Is filling the disk with zeros enough, or do I have to use random numbers? How often do I have to overwrite my hard disk?

Magnetic Discs

The amount of bullshit, half-truths and personal opinions out there is amazing, and when you try to get to scientific research, results are thin. I found a paper whose authors did some pretty thorough tests, and the results are surprising (surprising in contrast to all the opinions out there).

Overwriting Hard Drive Data: The Great Wiping Controversy | Craig Wright, Dave Kleiman, and Shyaam Sundhar R.S.

The short answer is: a single pass of zeros completely and securely erases your hard drive, in the sense that recovery is not possible even with special tools such as an electron microscope.

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

Zero-filling does not work for SSDs, because wear leveling means you cannot be sure that every physical cell actually gets overwritten. You have to use the Secure Erase feature every SSD has. Have a look here:

http://wiki.ubuntuusers.de/SSD/Secure-Erase

What tools should I use?

Magnetic Discs

The maintenance tools of all hard disk vendors have an option to zero-fill the disk. Under Linux you can use the tool dd to zero-fill a disk:

 dd if=/dev/zero of=/dev/sdX bs=4096

To query the dd status you can send the SIGUSR1 signal to the process; e.g. this sends the signal to all running dd processes:

#> kill -SIGUSR1 $(pidof dd)
320+0 records in
320+0 records out
335544320 bytes (336 MB) copied, 18.5097 s, 18.1 MB/s

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

Zero-filling does not work for SSDs. You have to use the Secure Erase feature every SSD has. Have a look here:

http://wiki.ubuntuusers.de/SSD/Secure-Erase
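From the shell the ATA Secure Erase can be triggered with hdparm. This is only a rough sketch of the usual sequence (the drive must not be "frozen", the password "p" is a throwaway, and a mistake here destroys data, so read the linked guide first):

$> hdparm -I /dev/sdX            # check that security is supported and the drive is "not frozen"
$> hdparm --user-master u --security-set-pass p /dev/sdX
$> hdparm --user-master u --security-erase p /dev/sdX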

I only want to overwrite one partition, but my system freezes and I can’t work anymore during the wipe.

Lowering the amount of dirty page cache the kernel is allowed to buffer limits the write speed a bit, but you can keep working during the wipe (which of course only makes sense if you are not wiping the whole disk):

echo 15000000 > /proc/sys/vm/dirty_bytes

For the background on dirty page flushing have a look here:

http://serverfault.com/questions/126413/limit-linux-background-flush-dirty-pages
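The same can be set via sysctl; setting vm.dirty_bytes back to 0 restores the default ratio-based behaviour:

$> sysctl -w vm.dirty_bytes=15000000
$> sysctl -w vm.dirty_bytes=0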


Securing ejabberd on Debian Wheezy (7.0) : Bind epmd to localhost (127.0.0.1)

Ejabberd is a nice and (in theory) easy-to-set-up Jabber server. However, during setup I came across some WTFs I want to share.

What is epmd?
epmd is a small name server used by Erlang programs when establishing distributed Erlang communications. ejabberd needs epmd to use ejabberdctl and also when clustering ejabberd nodes. If ejabberd is stopped, and there aren’t any other Erlang programs running in the system, you can safely stop epmd.

  • epmd is started along with ejabberd, but as other erlang programs might use it, it keeps running even if ejabberd is stopped
  • epmd’s default setup is to listen on ALL INTERFACES

To me this seems to be an undesirable default behaviour of the Debian package, which can easily be fixed:

Bind epmd to 127.0.0.1

Add the following line to the end of /etc/default/ejabberd to make epmd listen on localhost only. The “export” is important; without it, it won't work.

export ERL_EPMD_ADDRESS=127.0.0.1
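Once ejabberd (and thus epmd) has been restarted, you can check that epmd (default port 4369) is bound to 127.0.0.1 only; note that an already running epmd keeps its old binding until it is stopped (see below):

$> netstat -tlnp | grep 4369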

ejabberd looks up the hostname and tries to connect to this IP. If you have a DNS hostname it normally does not resolve to 127.0.0.1, so you have to add the short name and the FQDN of your server to your local /etc/hosts file.

Find the shortname and fqdn:

# shortname
$> hostname -s
foo
$> hostname
foo.bar.local

Now add to /etc/hosts:

127.0.0.1  foo foo.bar.local

Stop epmd with ejabberd

Add the following lines to the stop() function in /etc/init.d/ejabberd:

stop()
{
...
        echo -e "\nStopping epmd: "
        epmd -kill
...

Boosting Audio Output under Ubuntu Linux

I often had the problem that I wanted to watch a movie or listen to some audio file while there was background noise; I wanted to turn up the volume, but it was already at 100%. I thought it should be possible to amplify the signal beyond 100% and decide for myself whether it is clipping or distorted. There is a very easy solution for people using PulseAudio.

Just install the tool paman

sudo apt-get install paman

Now you can boost the audio up to 500% volume. For me 150% was usually enough ;).
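If you prefer the command line, roughly the same can be done with pactl (a sketch; 0 is the sink index, which you can look up with "pactl list short sinks", and percentage values above 100% require a reasonably recent PulseAudio):

$> pactl list short sinks
$> pactl set-sink-volume 0 150%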



Encrypted off-site backup with ecryptfs

I was looking for a method to back up my data encrypted. Of course there are plenty of possibilities, but most of them either encrypt a container or a complete partition, or seemed complicated to set up. I did not want container or partition encryption because I fear that if the media is corrupted or something goes wrong during a network transfer, all my data might become inaccessible. With file-based encryption I have almost the same risk as without encryption: even if I lose some files to corruption I can still decipher the rest of the data.

Finally I chose eCryptfs because it is file-based encryption which also encrypts the file names, and it is very easy to set up and use. The homepage advertises it as “You may think of eCryptfs as a sort of ‘gnupg as a filesystem’”, and that's basically what I was looking for. It stores all meta information in the file itself, so you can recover a file when you have the file and the encryption parameters (which are few and easy to back up).

So let's get started. I encrypted a test file on Ubuntu 12.04.1 and decrypted it successfully under Debian 7.0.

First you have to install the tools, which is very easy using apt (the same on both Ubuntu and Debian):

sudo apt-get install ecryptfs-utils

Then create a new directory (which will be encrypted) and mount it, entering some parameters needed by eCryptfs:

mount -t ecryptfs /home/ecrypttest/encrypted/ /home/ecrypttest/encrypted/
Passphrase: 
Select cipher: 
 1) aes: blocksize = 16; min keysize = 16; max keysize = 32 (loaded)
 2) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 (not loaded)
 3) cast6: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
 4) cast5: blocksize = 8; min keysize = 5; max keysize = 16 (not loaded)
Selection [aes]: 
Select key bytes: 
 1) 16
 2) 32
 3) 24
Selection [16]: 
Enable plaintext passthrough (y/n) [n]: n
Enable filename encryption (y/n) [n]: y
Filename Encryption Key (FNEK) Signature [9702fa8eae80f468]: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs

The filename encryption key (FNEK) signature will be generated for you and will differ from mine. Just copy & paste the parameters into a text file; we will need them later for decrypting.

Now enter the directory and create a test file:

cd /home/ecrypttest/encrypted/
echo "hello ecryptfs" > ecrypttest.txt
cat ecrypttest.txt
hello ecryptfs

If everything is fine, unmount the encrypted filesystem:

cd ..
umount /home/ecrypttest/encrypted

Now copy the file to your remote computer to try to recover it. Of course you can recover the file anywhere you want, including on the same computer where you encrypted it; this is just to prove that it works on another box without copying anything other than the file and the mount parameters.

scp /home/ecrypttest/encrypted/ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59--- root@yourremotehost.com:/tmp/ecrypt

Log into your remote computer and verify the file is there. Then mount the folder in decrypted mode. You need the parameters from the first mount above; if you used the defaults for the rest, it is basically only the FNEK signature.

ls -lah /tmp/ecrypt/*
-rw-r--r-- 1 root       root        12K Aug  4 23:04 ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59---

cd /tmp
mount -t ecryptfs /tmp/ecrypt/ /tmp/ecrypt/ -o ecryptfs_unlink_sigs,ecryptfs_fnek_sig=9702fa8eae80f468,ecryptfs_key_bytes=16,ecryptfs_cipher=aes,ecryptfs_sig=9702fa8eae80f468,ecryptfs_passthrough=n
Passphrase: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs
cd /tmp/ecrypt
cat ecrypttest.txt
hello ecryptfs

Voilà, everything worked fine. Now unmount the encrypted directory, and you can safely copy your encrypted data wherever you want.


Importing tpc-h testdata into mongodb

As written in a former post, TPC-H offers an easy way to generate various amounts of test data. Download dbgen from this website and compile it: http://www.tpc.org/tpch/

Now run:

./dbgen -v -s 0.1

This should leave you with some *.tbl files (pipe-separated CSV files). Now you can use my scripts to convert them into JSON and import them into MongoDB.
I already packed some generated files into the archive and added the headers, so you don't have to generate the tbl files yourself. You only have to adjust the load_into_mongodb.sh script so that it loads into the correct database (if “test” is not OK for you).

If you use your own generated tbl files, you have to run create_mongodb_headers.sh first.

mongodb_tpch.tar.bz2

tar -xjvvf mongodb_tpch.tar.bz2
cd mongodb_tpch
./convert_to_json.sh
./load_into_mongodb.sh

The default script imports the data into the db “test” and into collections named after the TPC-H tables.


Importing large csv-files into mongodb

I wanted to import some dummy data into MongoDB to test the aggregation functions. I thought a nice source would be the TPC-H test data, which can generate arbitrary volumes of data from 1 GB to 100 GB. You can download the data generation kit from the website: http://www.tpc.org/tpch/

In the generated CSV files the header is missing, but you can find the column names in the TPC-H PDF. For the customer table it is:

custkey|name|address|nationkey|phone|acctbal|mktsegment|comment

The mongoimport possibilities are very limited. Basically you can only import comma-separated (or tab-separated) values, and if the lines contain commas inside the data it fails as well. So I wrote a little Python script which converts CSV data into the MongoDB import JSON format; the first line of the CSV file has to contain the column names. In the following lines I prepare the TPC-H file with a header, convert it to JSON and then import it into my MongoDB. mongoimport expects a special JSON format (one document per line, without separating commas and square brackets); you can also import JSON arrays, but their size is very limited.

echo "custkey|name|address|nationkey|phone|acctbal|mktsegment|comment" > header_customer.tbl
cat header_customer.tbl customer.tbl > customer_with_header.tbl
./csv2mongodbjson.py -c customer_with_header.tbl -j customer.json -d '|'
mongoimport --db test --collection customer --file customer.json

For a CSV file with 150000 lines the conversion takes about 3 seconds.

Converting CSV-Files to Mongo-DB JSON format

csv2mongodbjson.py

#!/usr/bin/python
import csv
from optparse import OptionParser

# converts an array of csv columns to a mongodb json line
def convert_csv_to_json(csv_line, csv_headings):
    json_elements = []
    for index, heading in enumerate(csv_headings):
        json_elements.append(heading + ": \"" + unicode(csv_line[index], 'UTF-8') + "\"")

    line = "{ " + ', '.join(json_elements) + " }"
    return line

# parsing the commandline options
parser = OptionParser(description="parses a csv-file and converts it to mongodb json format. The csv file has to have the column names in the first line.")
parser.add_option("-c", "--csvfile", dest="csvfile", action="store", help="input csvfile")
parser.add_option("-j", "--jsonfile", dest="jsonfile", action="store", help="json output file")
parser.add_option("-d", "--delimiter", dest="delimiter", action="store", help="csv delimiter")

(options, args) = parser.parse_args()

# parsing and converting the csvfile line by line
csvreader = csv.reader(open(options.csvfile, 'rb'), delimiter=options.delimiter)
column_headings = csvreader.next()
jsonfile = open(options.jsonfile, 'wb')

while True:
    try:
        csv_current_line = csvreader.next()
        json_current_line = convert_csv_to_json(csv_current_line, column_headings)
        print >>jsonfile, json_current_line

    except csv.Error as e:
        print "Error parsing csv: %s" % e
    except StopIteration as e:
        print "=== Finished ==="
        break

jsonfile.close()

Fix sluggish mouse in Ubuntu 12.04 LTS

For some time now I have had the problem with my Dell Latitude E6510 laptop that when I plug in a USB mouse, the mouse is really slow and sluggish. Usually a reboot fixes this, but that is very inconvenient. Today I did some googling again and found at least a workaround to restart the USB controllers without rebooting, which usually fixes the mouse.

Find the device IDs of your USB controllers with lspci:

lspci | grep -i usb
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

I wrote this little script, but of course you can also execute the commands directly on the command line. In that case just be sure you have another keyboard besides the one connected via USB, as after the unbind it will not work anymore until the rebind. (If you execute the commands as a script or in one line separated with ;, it should be no problem, as the rebind is triggered directly after the unbind without further keyboard involvement.)

Switch the device numbers according to your lspci listing:

#!/bin/bash
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/bind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/bind

Ubuntu Upgrade to 12.04 LTS -> Libreoffice not working anymore

After an update session of several hours from Ubuntu 11.04 via 11.10 to 12.04 LTS I wanted to start using LibreOffice, but it terminated right after starting:

$> loimpress
terminate called after throwing an instance of 'com::sun::star::uno::RuntimeException'

As root it started without problems. After some googling and looking at the gdb trace I found the solution to my problem: something went wrong with the migration of the previous version's config files, so I just deleted them. It is not very elegant, but it worked for me, and since I had not made any special settings in LibreOffice it was not painful.

Caution! You will lose all LibreOffice settings with this method.

For me the important part was deleting the .ure directory; after this it worked.

$> cd ~
$> sudo rm -rf .libreoffice
$> sudo rm -rf .openoffice.org
$> sudo rm -rf .config/libreoffice
$> sudo rm -rf .ure