WAMS Container Locations on an Interactive Map

The association WAMS is a social enterprise whose goal is to create jobs for people who are disadvantaged on the conventional labour market because of their particular life situations. The association also places special emphasis on recycling and the reuse of resources. For this reason, WAMS maintains and operates collection points for used clothes and reuses the donated garments. A list of the locations of these yellow containers can be found in the WAMS flyer, which is available on the association's homepage. For all those who are not yet familiar with Innsbruck's street names, or whose geographical memory is patchy, I have marked the locations on a map. The source code is available on Github and described here.

The Innsbruck Parking Zone System

Innsbruck's parking space is divided into 20 parking zones. In these zones, parking is subject to fees at different times. Residents with their main residence in Innsbruck can apply for a residential parking permit, provided the requirements are met. The static parking zone map of the City of Innsbruck can be found here, and I created an interactive parking zone map here. A click on the image takes you there. You can also find this web application in the official application catalogue of the Open Government Data initiative.

Open Data

The data for the map overlay is based on the dataset Parkzonen in Innsbruck published by the City of Innsbruck. I obtained the data from the Open Government Data Catalog here and converted it into the GeoJSON format. The data is dated 03.02.2016; the unique dataset identifier is 8ffd16df-e7b8-423b-8348-199e6a5bf0ca.

Open Source Software

This page was built with open source software. The map data comes from Open Street Map, the map tiles from Mapbox, and I implemented the zone overlays with Leaflet JS. The layout and design are done with Bootstrap and the interactive elements are implemented with jQuery.

Sollsteinhaus

The Sollsteinhaus is located above Hochzirl and is very easy to reach by public transport. The S5 takes you from the main station or the Westbahnhof to the Hochzirl station in a good quarter of an hour. Coming from Innsbruck, you can walk straight through the gate at the end of the northern (mountain-side) platform, and you are already on the well-signposted trail 213 to the Sollsteinhaus. At the beginning the path leads comfortably through the forest, and after a few minutes you reach a forest road, which turns out to be rather steep and exhausting. You follow this forest road for about 1.5 hours along the very good signposting until you reach the material ropeway of the Sollsteinhaus. After another half hour you reach the private Solnalm, which should not mistakenly be taken for the destination. The trail continues along narrow paths, through a stream bed and up a few serpentines to the Sollsteinhaus at 1805 m above sea level. Unfortunately, the Sollsteinhaus has been under renovation since 25.09.2016 (as of 27.09.2016) and is therefore closed. The hike is also well described at the Almenrausch portal. The total walking time up to the Sollsteinhaus was about 2.5 hours.

Create a Category Page for one Specific Category and Exclude this Category from the Main Page in WordPress

This blog has served as my digital notebook for more than eight years and I use it to collect all sorts of things that I think are worth storing and sharing. Mainly, I blog about tiny technical bits, but recently I also started to write about my life here in Innsbruck, where I try to discover what this small city and its surroundings have to offer. The technical articles are written in English, as naturally the majority of visitors understands this language. The local posts are in German for the same reason. My intention was to separate these two topics in the blog and not let the posts create clutter between the languages.

Child Themes

When tinkering with the code of your WordPress blog, it is strongly recommended to deploy and use a child theme. This allows you to reverse changes easily and, more importantly, to update the theme without having to re-implement your adaptations after each update. Creating a child theme is very easy and described here. In addition, I would recommend using some sort of code versioning tool, such as Git.
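
If you have not created the child theme yet, a minimal sketch of its style.css header could look like the following. The parent theme name dazzling is an assumption based on the template used later in this post; it has to match the folder name of your actual parent theme.

/*
 Theme Name:  Dazzling Child
 Template:    dazzling
 Description: Child theme used for the customisations in this post
 Version:     1.0
*/

/* The parent stylesheet itself is enqueued via functions.php, as shown below. */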

Excluding a Category from the Main Page

WordPress offers user-defined categories out of the box, along with category pages for each category. This model does not fit my blog well, where I have static pages and a time series of blog posts on the main page. In order to prevent the posts about Innsbruck from showing up on the main page, the category ‘Innsbruck’ needs to be excluded. We can create or modify the file functions.php in the child theme folder and add the following code.

<?php
add_action( 'wp_enqueue_scripts', 'theme_enqueue_styles' );
function theme_enqueue_styles() {
        wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );

}

// Exclude Innsbruck Category from Main Page
function exclude_category($query) {
    if ( $query->is_home() ) {
        // Get the category ID of the category Innsbruck
        $innsbruckCategory = get_cat_ID( 'Innsbruck' );
        // Prefix the ID with a minus to exclude the category
        $query->set('cat','-' . $innsbruckCategory);
    }
    return $query;
}
add_filter('pre_get_posts', 'exclude_category');

?>

This adds a filter which gets executed before the posts are collected. We omit all posts of the category Innsbruck by adding a minus as a prefix to the category ID. Of course you could also look up the category ID in the administration dashboard by hovering with your mouse over the category name, and save one database query.

A Custom Page Specific for one Category

In the second step, create a new page in the dashboard. This page will contain all posts of the Innsbruck category that we will publish. Create the file page.php in your child theme folder and use the following code:

<?php
/**
 * The template for displaying all pages.
 *
 * This is the template that displays all pages by default.
 * Please note that this is the WordPress construct of pages
 * and that other 'pages' on your WordPress site will use a
 * different template.
 *
 * @package dazzling
 */

    get_header();
?>
    <div id="primary" class="content-area col-sm-12 col-md-8">
        <main id="main" class="site-main" role="main">

<?php
     // Specify the arguments for the post query
     $args = array(
        'cat' => '91', // Innsbruck category id
        'post_type' => 'post',
        'posts_per_page' => 5,
        'paged' => ( get_query_var('paged') ? get_query_var('paged') : 1),
    );

    if( is_page( 'innsbruck' )) {
        query_posts($args);
    }
?>

<?php while ( have_posts() ) : the_post(); ?>
    <?php get_template_part( 'content', 'post' ); ?>
    <?php
        // If comments are open or we have at least one comment, load up the comment template
        if ( comments_open() || '0' != get_comments_number() ) :
            comments_template();
        endif;
    ?>
<?php endwhile; // end of the loop. ?>


<div class="navigation">
    <div class="alignleft">&lt;?php next_posts_link('&laquo; Ältere Beiträge') ?&gt;&lt;/div&gt;
    <div class="alignright">&lt;?php previous_posts_link('Neuere Beiträge &raquo;') ?&gt;&lt;/div&gt;
</div>


    </main><!-- #main -->
</div><!-- #primary -->
<?php get_sidebar(); ?>
<?php get_footer(); ?>

In this code snippet, we define a set of arguments which are used for filtering the posts of the desired category. In this example, I used the ID of the Innsbruck category (91) directly. We define that we want to display posts only, 5 per page. An important aspect is the pagination. When we only display posts of one category, we need to make sure that WordPress counts the pages correctly. Otherwise the page would always display the same posts, regardless of how often the user clicks on the next page button. The reason is that this button uses the global paged variable, which is set correctly in the example above.

The if conditional makes sure that only the posts from the Innsbruck category are displayed on this page. The while loop then iterates over all posts and displays them. At the bottom we can see the navigation buttons for older and newer posts of the Innsbruck category.

Persistent Data in a MySQL Docker Container

Running MySQL in Docker

In a recent article on Docker in this blog, we presented some basics for dealing with data in containers. This article will present another popular application for Docker: MySQL containers. Running MySQL instances in Docker allows isolating database infrastructure with ease.

Connecting to the Standard MySQL Container

The description of the MySQL Docker image provides a lot of useful information on how to launch and connect to a MySQL container. The first step is to create a standard MySQL container from the latest available image.

sudo docker run \
   --name=mysql-instance \
   -e MYSQL_ROOT_PASSWORD=secret \
   -p 3307:3306 \
   -d \
   mysql:latest

This creates a MySQL container where the root password is set to secret. As the host is already running its own MySQL instance (which has nothing to do with this Docker example), the standard port 3306 is already taken. Thus we use port 3307 on the host system and forward it to the standard port 3306 of the container.
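
You can verify the mapping by asking Docker which host port is forwarded to the container; the comment shows the output you would typically expect.

# Show the port mappings of the container
sudo docker port mysql-instance
# Expected output: 3306/tcp -> 0.0.0.0:3307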

Connect from the Host

We can then connect from the command line like this:

mysql -uroot -psecret -h 127.0.0.1 -P3307

We could also provide the hostname localhost for connecting to the container, but as the MySQL client by default assumes that a localhost connection goes through a socket, this would not work. Thus when using the hostname localhost, we need to specify the protocol TCP, so that the client connects via the network interface.

mysql -uroot -psecret -h localhost --protocol TCP -P3307

Connect from other Containers

Connecting from a different container to the MySQL container is pretty straightforward. Docker allows linking two containers and then using the exposed ports between them. The following command creates a new Ubuntu container and links it to the MySQL container.

sudo docker run -it --name ubuntu-container --link mysql-instance:mysql-link ubuntu:16.10 bash

After this command, you are in the terminal of the Ubuntu container. We then need to install the MySQL client for testing:

# Fetch the package list
root@7a44b3e7b088:/# apt-get update
# Install the client
root@7a44b3e7b088:/# apt-get install mysql-client
# Show environment variables
root@7a44b3e7b088:/# env

The last command gives you a list of environment variables, among which is the IP address and port of the MySQL container.

MYSQL_LINK_NAME=/ubuntu-container/mysql-link
HOSTNAME=7a44b3e7b088
TERM=xterm
MYSQL_LINK_ENV_MYSQL_VERSION=5.7.14-1debian8
MYSQL_LINK_PORT=tcp://172.17.0.2:3306
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYSQL_LINK_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_LINK_PORT_3306_TCP=tcp://172.17.0.2:3306
PWD=/
MYSQL_LINK_PORT_3306_TCP_PORT=3306
SHLVL=1
HOME=/root
MYSQL_LINK_ENV_MYSQL_MAJOR=5.7
MYSQL_LINK_PORT_3306_TCP_PROTO=tcp
MYSQL_LINK_ENV_GOSU_VERSION=1.7
MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD=secret
_=/usr/bin/env

You can then connect either manually of by providing the variables

mysql -uroot -psecret -h 172.17.0.2
mysql -uroot -p$MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD -h $MYSQL_LINK_PORT_3306_TCP_ADDR -P $MYSQL_LINK_PORT_3306_TCP_PORT

If you only require a MySQL client inside a container, simply use the MySQL image from docker. Batteries included!
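
A sketch of how this could look, reusing the link alias and the environment variables from the listing above; the --rm flag removes the throwaway client container again after you exit.

sudo docker run -it --rm --link mysql-instance:mysql-link mysql:latest \
   sh -c 'exec mysql -h "$MYSQL_LINK_PORT_3306_TCP_ADDR" -P "$MYSQL_LINK_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD"'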

Persistent Docker Containers

Docker Fundamentals

Docker has become a very popular tool for orchestrating services. Docker is much more lightweight than virtual machines; for instance, containers do not require a boot process. Docker follows the philosophy that one container serves only one process. So in contrast to virtual machines, which often bundle several services together, Docker is built for running single services per container. If you come from the world of virtualised machines, Docker can be a bit confusing in the beginning, because it uses its own terminology. A good place to start is, as always, the documentation, and there are plenty of great tutorials out there.

Images and Containers

Docker images serve as templates for containers. As images and containers both have hexadecimal IDs, they are very easy to confuse. The following example shows step by step how to create a new container based on the Debian image and how to open shell access.

# Create a new docker container based on the debian image
sudo docker create -t --name debian-test debian:stable 
# Start the container
sudo docker start  debian-test
# Check if the container is running
sudo docker ps -a
# Execute bash to get an interactive shell
sudo docker exec -i -t debian-test bash

A shorter variant of creating and launching a new container is listed below. The run command creates a new container and starts it automatically. Be aware that this creates a new container every time, so assigning a container name helps with not confusing the image with the container. The command run is in particular tricky, as you would expect it to run (i.e. launch) a container only. In fact, it creates a new one and starts it.

sudo docker run -it --name debian-test debian:stable bash

Important Commands

The following listing shows the most important commands:

# Show container status
sudo docker ps -a
# List available images
sudo docker images 
# Start or stop a container
sudo docker start CONTAINERNAME
sudo docker stop CONTAINERNAME
# Delete a container
sudo docker rm CONTAINERNAME

You can of course create your own images, which will not be discussed in this blog post. It is just important to know that you can’t move containers from your host to some other machine directly. You would need to commit the changes made to the image and create a new container based on that image. Please be aware that this does not include the actual data stored in that container! You need to manually export any data and files from the original container and import them into the new container again. This is another trap worth noting. You can, however, also mount data in the image, if the data is available on the host at the time of image creation. Details on data in containers can be found here.
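
A sketch of this commit-and-copy workflow with the debian-test container used in this post; the image name, tarball and target paths are made-up examples.

# Commit the current state of a container as a new image
sudo docker commit debian-test my-debian-snapshot
# Export the image into a tarball that can be copied to another host
sudo docker save -o my-debian-snapshot.tar my-debian-snapshot
# On the other machine: load the image and create a fresh container from it
sudo docker load -i my-debian-snapshot.tar
# Data and files still have to be copied separately, for instance with docker cp
sudo docker cp debian-test:/test-data ./test-data-backup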

Persisting Data Across Containers

The way Docker persists data takes some getting used to in the beginning, especially as it is easy to confuse images with containers. Remember that Docker images serve only as templates. So when you issue the command sudo docker run … this actually creates a container from an image first and then starts it. So whenever you issue this command again, you will end up with a new container which does not share any data with the previously created container.

Docker 1.9 introduced named data volumes, which allow creating dedicated volumes that can be shared between several containers. Such volumes can be used for persisting data. The following listing shows how to create a data volume and mount it in a container.

# Create a data volume
sudo docker volume create --name data-volume-test
# List all volumes
sudo docker volume ls
# Delete the container
sudo docker rm debian-test
# Create a new container, now with the data volume 
sudo docker create -v data-volume-test:/test-data -t --name debian-test debian:stable
# Start the container
sudo docker start debian-test
# Get the shell
sudo docker exec -i -t debian-test bash

After we logged into the shell, we can see the data volume we mounted on the directory test-data:

root@d4ac8c89437f:/# ls -la
total 76
drwxr-xr-x  28 root root 4096 Aug  3 13:11 .
drwxr-xr-x  28 root root 4096 Aug  3 13:11 ..
-rwxr-xr-x   1 root root    0 Aug  3 13:10 .dockerenv
drwxr-xr-x   2 root root 4096 Jul 27 20:03 bin
drwxr-xr-x   2 root root 4096 May 30 04:18 boot
drwxr-xr-x   5 root root  380 Aug  3 13:11 dev
drwxr-xr-x  41 root root 4096 Aug  3 13:10 etc
drwxr-xr-x   2 root root 4096 May 30 04:18 home
drwxr-xr-x   9 root root 4096 Nov 27  2014 lib
drwxr-xr-x   2 root root 4096 Jul 27 20:02 lib64
drwxr-xr-x   2 root root 4096 Jul 27 20:02 media
drwxr-xr-x   2 root root 4096 Jul 27 20:02 mnt
drwxr-xr-x   2 root root 4096 Jul 27 20:02 opt
dr-xr-xr-x 267 root root    0 Aug  3 13:11 proc
drwx------   2 root root 4096 Jul 27 20:02 root
drwxr-xr-x   3 root root 4096 Jul 27 20:02 run
drwxr-xr-x   2 root root 4096 Jul 27 20:03 sbin
drwxr-xr-x   2 root root 4096 Jul 27 20:02 srv
dr-xr-xr-x  13 root root    0 Aug  3 13:11 sys
drwxr-xr-x   2 root root 4096 Aug  3 08:26 test-data
drwxrwxrwt   2 root root 4096 Jul 27 20:03 tmp
drwxr-xr-x  10 root root 4096 Jul 27 20:02 usr
drwxr-xr-x  11 root root 4096 Jul 27 20:02 var


We can navigate into that folder and create a 100 M data file with random data.

root@d4ac8c89437f:~# cd /test-data/
root@d4ac8c89437f:/test-data# dd if=/dev/urandom of=100M.dat bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 6.69175 s, 15.7 MB/s
root@d4ac8c89437f:/test-data# du -h .
101M	.



When we exit the container, we can see the file in the host file system here:

stefan@stefan-desktop:~$ sudo ls -l /var/lib/docker/volumes/data-volume-test/_data
insgesamt 102400
-rw-r--r-- 1 root root 104857600 Aug  3 15:17 100M.dat

We can use this volume transparently in the container, but it does not depend on the container itself. So whenever we have to delete the container or want to use the data with a different container, this solution works perfectly. The following command shows how we mount the same volume in an Ubuntu container and execute the ls command to show the contents of the directory.

stefan@stefan-desktop:~$ sudo docker run -it -v data-volume-test:/test-data-from-debian --name ubuntu-test ubuntu:16.10 ls -l /test-data-from-debian
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat

You can display a lot of useful information about a container with the inspect command. It also shows the data volume and where it is mounted.

sudo docker inspect ubuntu-test

...
        "Mounts": [
            {
                "Name": "data-volume-test",
                "Source": "/var/lib/docker/volumes/data-volume-test/_data",
                "Destination": "/test-data-from-debian",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
...


We delete the ubuntu container and create a new one. We then start the container, open a bash session and write some test data into the directory.

stefan@stefan-desktop:~$ sudo docker create -v data-volume-test:/test-data-ubuntu -t --name ubuntu-test ubuntu:16.10
f3893d368e11a32fee9b20079c64494603fc532128179f0c08d10321c8c7a166
stefan@stefan-desktop:~$ sudo docker start ubuntu-test
ubuntu-test
stefan@stefan-desktop:~$ sudo docker exec -it ubuntu-test bash
root@f3893d368e11:/# cd /test-data-ubuntu/
root@f3893d368e11:/test-data-ubuntu# ls
100M.dat
root@f3893d368e11:/test-data-ubuntu# touch ubuntu-writes-a-file.txt



When we check the Debian container, we can immediately see the written file, as the volume is transparently mounted.

stefan@stefan-desktop:~$ sudo docker exec -i -t debian-test ls -l /test-data
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat
-rw-r--r-- 1 root root         0 Aug  3 13:42 ubuntu-writes-a-file.txt

Please be aware that the Docker volume is just a regular folder on the file system. Writing to the same file from both containers can lead to data corruption. Also remember that you can read and write the volume files directly from the host system.

Backups and Migration

Backing up data is also an important aspect when you use named data volumes as shown above. Currently, there is no way of moving Docker containers or volumes natively to a different host. The intention of Docker is to make the creation and destruction of containers very cheap and easy. So you should not get too attached to your containers, because you can re-create them very fast. This of course is not true for the data stored in volumes. So you need to take care of your data yourself, for instance by creating automated backups like this sudo tar cvfz Backup-data-volume-test.tar.gz /var/lib/docker/volumes/data-volume-test and restoring the data into a new volume when needed. How to backup volumes using a container is described here.
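
The container-based backup pattern referenced there could look roughly like this; the archive name and host directory are examples only.

# Mount the data volume and the current host directory in a throwaway container
# and archive the volume contents onto the host
sudo docker run --rm \
   -v data-volume-test:/test-data \
   -v $(pwd):/backup \
   debian:stable \
   tar czvf /backup/backup-data-volume-test.tar.gz -C / test-data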

Plotting Colourful Graphs with R, RStudio and Ggplot2

The Aesthetics of Data Science

Data visualization is a powerful tool for communicating results and has recently received more and more attention due to the data science hype. Integrating a meaningful graph into a paper or your thesis could improve readability and understandability more than any formulas or extended textual descriptions can. There exists a variety of different approaches for visualising data. Recently a lot of new Javascript based frameworks have gained quite some momentum, which can be used in Web applications and apps. A more classical work horse for data science is the R project and its plotting engine ggplot2. The reason why I decided to stick with R is its popularity and flexibility, which is still impressive. Also, with RStudio there exists a convenient IDE which provides useful features for data scientists.

Plotting Graphs

In this blog post, I demonstrate how to plot time series data and use colours to highlight a specific aspect of the data. Like almost all techniques, R and ggplot2 require practice and training, which I realised again today when I spent quite a bit of time struggling to get a simple plot right.

Currently I am evaluating two systems I developed and I needed to visualize their storage and execution time demands in comparison. My goal was to create a plot for each non-functional property, the execution time and the storage demand, while each plot should depict both systems’ performance. Each system runs a set of operations, think of create, read, update and delete operations (CRUD). Now for visualizing which of these operations has the largest effect on the system, I needed to colourise each operation within one graph. This is the easy part. What was more tricky was to provide a defined set of colours for each graph, which can be mapped to each instance of the variable. Things which have the same meaning in both graphs should be visualized in the same way, which requires a little hack.

Prerequisites

Install the following packages via apt

sudo apt-get install r-base r-recommended r-cran-ggplot2

and install RStudio by downloading the deb file from the project homepage.
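
If you prefer installing ggplot2 from CRAN rather than from the distribution packages, the usual way from within an R session should work as well.

# Install ggplot2 from CRAN and load it
install.packages("ggplot2")
library(ggplot2)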

Evaluation Data

As an example, we plan to evaluate the storage demand of two different systems and compare the results. Consider the following sample data.

# Set seed to get the same random numbers for this example
set.seed(42);
# Generate 200 random data records
N <- 200
# Generate a random, increasing sequence of integers that we assume is the storage demand in some unit
storage1 =sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
storage2 = sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
# Define the available operations and draw a random sample
operationTypes = c('CREATE','READ','UPDATE','DELETE')
operations = sample(operationTypes,N,replace=TRUE)
# Create the dataframe
df <- data.frame(id=1:N, storage1=storage1, storage2=storage2, operations=operations)
df
     id storage1 storage2 operations
1     1       24      238     CREATE
2     2      139     1755     UPDATE
3     3      158     1869     UPDATE
4     4      228     2146       READ
5     5      395     2967     DELETE
6     6      734     3252     CREATE
7     7      789     4049     DELETE
8     8     2909     4109       READ
9     9     3744     4835     CREATE
10   10     3894     4990       READ

....

We created a random data set simulating the characteristics of system measurement data. As you can see, we have a list of operations of the four types CREATE, READ, UPDATE and DELETE and a measurement value for the storage demand in both systems.

The Simple Plot

Plotting two graphs of the columns storage1 and storage2 is straightforward.

# Simple plot
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color="Storage 1")) +
  geom_point(aes(x=id,y=storage2,color="Storage 2")) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  scale_color_manual(values=c("Storage 1"="forestgreen", "Storage 2"="aquamarine"), 
                     name="Measurements", labels=c("System 1", "System 2"))

print(p1)

We assign a color to each point plot. Note that the color name “Storage 1”, for instance, of course does not denote an actual color, but assigns a level to all points of that graph. This level can be thought of as a category, which ensures that all points belonging to the same category have the same color. As you can see in the definition of the color scale, we assign the actual color to this level there. This is the result:

Plotting Levels

A common task is to visualise categories or levels of measurement data. In this example, there are four different levels we could observe: CREATE, READ, UPDATE and DELETE.

# Plot with levels
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operations)) +
  geom_point(aes(x=id,y=storage2,color=operations)) +
  ggtitle("Overview of Measurements") +
  labs(color="Measurements") +
  scale_color_manual(values=c("CREATE"="darkgreen", 
                              "READ"="darkolivegreen", 
                              "UPDATE"="forestgreen", 
                              "DELETE"="yellowgreen"))
print(p1)

Instead of assigning two colours, one for each graph, we can also assign colours to the operations. As you can see in the definition of the graphs and the colour scale, we map the colours to the variable operations instead. As a result we get differently coloured points per operation, but we get these of course for both graphs in an identical fashion as the categories are the same for both measurements. The result looks like this:

Now this is obviously not what we want to achieve as we cannot differentiate between the two graphs any more.

Plotting the same Levels for both Graphs in Different Colours

This last part is a bit tricky, as ggplot2 does not allow assigning different colour schemes within one plot. There are some hacks for this, but in my opinion they do not improve the readability of the code. In order to apply different colour schemes to the two graphs while still using the categories, I appended two extra columns to the data set. If we append some differentiation between the two graphs and basically double the categories from four to eight, where each graph now uses its own four categories, we can also assign distinct colours to them.

df$operationsStorage1 <- paste(df$operations,"-Storage1", sep = '')
df$operationsStorage2 <- paste(df$operations,"-Storage2", sep = '')

p3 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operationsStorage1)) +
  geom_point(aes(x=id,y=storage2,color=operationsStorage2)) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  labs(color="Operations") +
  scale_color_manual(values=c("CREATE-Storage1"="darkgreen", 
                              "READ-Storage1"="darkolivegreen", 
                              "UPDATE-Storage1"="forestgreen", 
                              "DELETE-Storage1"="yellowgreen",
                              "CREATE-Storage2"="aquamarine", 
                              "READ-Storage2"="dodgerblue",
                              "UPDATE-Storage2"="royalblue",
                              "DELETE-Storage2"="turquoise"))
print(p3)

We then assign the new column for each system individually as the colour value. This ensures that each graph only considers the categories that we assigned in this step. Thus we can assign a different colour scheme to each graph and print the corresponding colours in the legend next to the chart. This is the result:

Now we can see which operation was used at every measurement and still be able to distinguish between the two systems.
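
If you want to include the charts in a paper or thesis, you can export any of the plot objects with ggsave; the file name and dimensions below are just examples.

# Save the last plot to a PNG file (width and height in inches)
ggsave("storage-comparison.png", plot = p3, width = 8, height = 5, dpi = 300)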

From the Hungerburg to the Umbrüggler Alm, the Höttinger Alm and the Seegrube

Starting from the hiking hub Hungerburg, you reach the signposted trails to the Umbrüggler Alm and the Arzler Alm just behind the valley station.

If you keep to the left, you reach the Umbrüggler Alm after only 30 minutes (description here). Following the signposts, you continue via forest roads, a short stretch of woods and the ski slope and reach the Höttinger Alm (1487 m) after another hour. The final ascent, at a calm pace past the cattle up to the hut, turns out to be almost a little mean: the last metres of altitude drag on, because the hut is already clearly in sight.

After some refreshment you continue along the forest road towards the north-east. You follow the signs to the Bodensteinalm and traverse the slope almost without any climbing until you reach a fork. Keeping uphill, you reach the Bodensteinalm (1661 m) along a forest road after about 200 metres of altitude gain. From there you can either follow the forest road up to the Seegrube, or struggle up the trail below the Seegrubenbahn to its top station. From the Höttinger Alm to the top station of the Seegrubenbahn (1905 m) you need about one and a half hours. You are rewarded with a wonderful view over Innsbruck and the Inn valley, as well as of the countless tourists in flip-flops and silk blouses. Join them and glide comfortably back down into the valley. Another description of the tour can be found here.

Timelapse Photography with the Camera Module V2 and a Raspberry Pi Model B

Recently, I bought a camera module for the Raspberry Pi and experimented a little bit with the possibilities a scriptable camera provides. The new Camera Module V2 offers 8.08 MP from a Sony sensor and can be controlled with a well documented Python library. It allows taking HD videos and shooting still images. Assembly is easy, but the camera is attached with a rather short ribbon cable, which makes handling a bit cumbersome. For the moment, a modified extra hand from my soldering kit acts as a makeshift stand.

Initial Setup

The initial setup is easy and just requires a few steps, which is not surprising because most of the documentation is targeted to kids in order to encourage their inner nerd. Still works for me as well 🙂

Attach the cable to the Raspberry Pi as described here. You can also buy an adapter for the Pi Zero. Once the camera is wired to the board, activate the module with the tool raspi-config.

Then you are ready to install the Python library with sudo apt-get install python3-picamera, add your user to the video group with usermod -a -G video USERNAME and then reboot the Raspberry Pi. After you have logged in again, you can start taking still images with the simple command raspistill -o output.jpg. You can find some more documentation and usage examples here.
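
The same kind of capture can be scripted with the picamera library; a minimal sketch (the file name is arbitrary) could look like this.

import time
import picamera

# Open the camera, give it a moment to adjust to the light and take a single still image
with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    camera.start_preview()
    time.sleep(2)
    camera.capture('output.jpg')
    camera.stop_preview()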

Timelapse Photography

What I really enjoy is making timelapse videos with the Raspberry Pi, which gives a nice effect for everyday phenomena and allows observing processes which are usually too slow to follow. The following GIF shows a melting ice cube. I took one picture every five seconds.

A Small Python Script

The following script creates a series of pictures with a defined interval and stores all images in a folder, with a filename indicating the time of shooting. It is rather self-explanatory. The camera needs a little bit of time to adjust, so we set the adjustTime variable to 5 seconds. Then we take a picture every 300 seconds; each image has a resolution of 1024×768 pixels.

import os
import time
import picamera
from datetime import datetime

# Grab the current datetime which will be used to generate dynamic folder names
d = datetime.now()
initYear = "%04d" % (d.year)
initMonth = "%02d" % (d.month)
initDate = "%02d" % (d.day)
initHour = "%02d" % (d.hour)
initMins = "%02d" % (d.minute)
initSecs = "%02d" % (d.second)

folderToSave = "timelapse_" + str(initYear) + str(initMonth) + str(initDate) +"_"+ str(initHour) + str(initMins)
os.mkdir(folderToSave)

# Set the initial serial for saved images to 1
fileSerial = 1

# Adjustment time and pause between shots in seconds
adjustTime=5
pauseBetweenShots=300

# Create and configure the camera
with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    #camera.exposure_compensation = 5

    # Start the preview and give the camera a couple of seconds to adjust
    camera.start_preview()
    try:
        time.sleep(adjustTime)

        start = time.time()
        while True:
            d = datetime.now()
            # Set FileSerialNumber to 000X using four digits
            fileSerialNumber = "%04d" % (fileSerial)

            # Capture the CURRENT time (not start time as set above) to insert into each capture image filename
            hour = "%02d" % (d.hour)
            mins = "%02d" % (d.minute)
            secs = "%02d" % (d.second)
            camera.capture(str(folderToSave) + "/" + str(fileSerialNumber) + "_" + str(hour) + str(mins) + str(secs) + ".jpg")

            # Increment the fileSerial
            fileSerial += 1
            time.sleep(pauseBetweenShots)

    except KeyboardInterrupt:
        print ('interrupted!')
        # Stop the preview and close the camera
        camera.stop_preview()

finish = time.time()
# fileSerial was incremented after each shot, so subtract one for the actual count
print("Captured %d images in %d seconds" % (fileSerial - 1, finish - start))

This script then can run unattended and it creates a batch of images on the Raspberry Pi.

Image Metadata

The file name preserves the time of the shot, so that we can see later when a picture was taken. But the camera software also stores EXIF metadata, which can be used for processing. You can view the data with exiftool.
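
Reading the metadata is as simple as passing a file name to the tool; the -b option mentioned in the output below can be used to extract the embedded thumbnail.

# Print all EXIF metadata of an image
exiftool 1052.jpg
# Extract the embedded thumbnail into a separate file
exiftool -b -ThumbnailImage 1052.jpg > thumbnail.jpg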

ExifTool Version Number         : 9.46
File Name                       : 1052.jpg
Directory                       : .
File Size                       : 483 kB
File Modification Date/Time     : 2016:07:08 08:49:52+02:00
File Access Date/Time           : 2016:07:08 09:19:14+02:00
File Inode Change Date/Time     : 2016:07:08 09:17:52+02:00
File Permissions                : rw-r--r--
File Type                       : JPEG
MIME Type                       : image/jpeg
Exif Byte Order                 : Big-endian (Motorola, MM)
Make                            : RaspberryPi
Camera Model Name               : RP_b'imx219'
X Resolution                    : 72
Y Resolution                    : 72
Resolution Unit                 : inches
Modify Date                     : 2016:07:05 08:37:33
Y Cb Cr Positioning             : Centered
Exposure Time                   : 1/772
F Number                        : 2.0
Exposure Program                : Aperture-priority AE
ISO                             : 50
Exif Version                    : 0220
Date/Time Original              : 2016:07:05 08:37:33
Create Date                     : 2016:07:05 08:37:33
Components Configuration        : Y, Cb, Cr, -
Shutter Speed Value             : 1/772
Aperture Value                  : 2.0
Brightness Value                : 2.99
Max Aperture Value              : 2.0
Metering Mode                   : Center-weighted average
Flash                           : No Flash
Focal Length                    : 3.0 mm
Maker Note Unknown Text         : (Binary data 332 bytes, use -b option to extract)
Flashpix Version                : 0100
Color Space                     : sRGB
Exif Image Width                : 1024
Exif Image Height               : 768
Interoperability Index          : R98 - DCF basic file (sRGB)
Exposure Mode                   : Auto
White Balance                   : Auto
Compression                     : JPEG (old-style)
Thumbnail Offset                : 1054
Thumbnail Length                : 24576
Image Width                     : 1024
Image Height                    : 768
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Aperture                        : 2.0
Image Size                      : 1024x768
Shutter Speed                   : 1/772
Thumbnail Image                 : (Binary data 24576 bytes, use -b option to extract)
Focal Length                    : 3.0 mm
Light Value                     : 12.6

Processing Images

The Raspberry Pi would need a lot of time to create an animated Gif or a video from these images. This is why I decided to add new images automatically to a Git repository on Github and fetch the results on my Desktop PC. I created a new Git repository and adapted the script shown above to store the images within the folder of the repository. I then use the following script to add and push the images to Github using a cronjob.

cd /home/stefan/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo $now
git pull
git add *.jpg
git commit -am "New pictures added $now"
git push

You can add this to your user’s cron table with crontab -e and the following line, which adds the images every 5 minutes:

*/5	*	*	*	*	/home/stefan/Github/Timelapses/addToGit.sh
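
For the cron job to work, the script has to be executable; the path is the one assumed in the crontab line above.

chmod +x /home/stefan/Github/Timelapses/addToGit.sh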

On a more powerful machine, you can clone the repository and pull the new images like this:

cd /home/stefan-srv/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo "$now"
git pull --rebase

The file names are convenient for reading off the date when a picture was taken, but most of the Linux tools require the files to be named as a sequence. The following code snippet renames the files into a sequence with four digits and pads them with zeros where necessary.

a=1
for i in *.jpg; do
  new=$(printf "%04d.jpg" "$a") #04 pad to length of 4
  mv -- "$i" "$new"
  let a=a+1
done

Animated Gifs

ImageMagick offers a set of great tools for image processing. With its convert tool, you can create animated GIFs from a series of images like this:

convert -delay 10 -loop 0 *.jpg Output.gif

This adds a delay after each image and loops the GIF infinitely. ImageMagick requires a lot of RAM for larger GIF images and does not handle memory allocation well, but the results are still nice. Note that the files get very large, so a smaller resolution might be more practical.
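
To keep the GIF size manageable, you can scale the stills down before assembling them, for example with ImageMagick's mogrify. Note that this overwrites the files in place, so run it on a copy.

# Resize all images in place so that they fit into 800x600 pixels, keeping the aspect ratio
mogrify -resize 800x600 *.jpg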

Still Images to Videos

The still images can also be converted into videos. Use the following command to create a video with 10 frames per second:

avconv -framerate 10 -f image2 -i %04d.jpg -c:v h264 -crf 1 out.mov

Example: Nordkette at Innsbruck, Tirol

This timelapse video of the Inn Valley Range in the north of the city of Innsbruck has been created by taking a picture with a Raspberry Pi Camera Module V2 every 5 minutes. This video consists of 1066 still images.

IntelliJ IDEA and the ClassNotFoundException

When compiling nested Maven projects in IntelliJ IDEA, the compiler sometimes complains about a missing class file.

This occurs on several occasions, depending on which part of a project is compiled and which dependencies have been considered. If the project is large, this can easily happen when a specific class is compiled without the complete context being available. Besides tips such as invalidating caches and the ones I found here and here, editing the build configuration of the project helps. Add a “Make Project” task and the correct class files should be compiled and available.

Hikari Connection Pooling with a MySQL Backend, Hibernate and Maven

Connection Pooling?

JDBC connection pooling is a great concept which improves the performance of database-driven applications by reusing connections. The benefit of connection pools is that the cost of creating and closing connections is avoided by reusing connections from a pool of available connections. Database systems such as MySQL also ration database resources by limiting the number of simultaneous connections. This is another reason why connection pools are beneficial compared to opening and closing individual connections.
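
Before wiring the pool into Hibernate below, it may help to see what using HikariCP directly looks like. This is a minimal sketch; the JDBC URL and credentials are the same example values used later in this post.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class PlainHikariExample {
    public static void main(String[] args) throws SQLException {
        // Configure the pool with the JDBC URL and credentials
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/TestDB?useSSL=false");
        config.setUsername("testuser");
        config.setPassword("sEcRet");
        config.setMaximumPoolSize(10);

        // The data source hands out pooled connections
        try (HikariDataSource dataSource = new HikariDataSource(config);
             Connection connection = dataSource.getConnection()) {
            System.out.println("Got a pooled connection: " + connection.isValid(2));
        }
    }
}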

Dipping into Pools

There is a selection of different JDBC-compatible connection pools which can be used more or less interchangeably; among the most widely used are HikariCP, Apache Commons DBCP, c3p0 and the Tomcat JDBC pool.

Most of these pools work in a very similar way. In the following tutorial, we are going to take HikariCP out for a spin. It is simple to use and claims to be very fast. In the following we are going to set up a small project using these technologies:

  • Java 8
  • Tomcat 8
  • MySQL 5.7
  • Maven 3
  • Hibernate 5

and of course an IDE of your choice (I have become quite fond of IntelliJ IDEA Community Edition).

Project Overview

In this small demo project, we are going to write a minimalistic Web application, which simply computes a new random number for each request and stores the result in a database table. We use Java and store the data by using the Hibernate ORM framework. We also assume that you have a running Apache Tomcat servlet container and a running MySQL instance available.

In the first step, I created a basic Web project by selecting the Maven Webapp archetype, which then creates a basic structure we can work with.

Adding the Required Libraries

After we created the initial project, we need to add the required libraries. We can achieve this easily with Maven by adding the dependency definitions to our pom.xml file. You can find these definitions on Maven Central. The build block contains the plugin for deploying the application to the Tomcat server.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>at.stefanproell</groupId>
  <artifactId>HibernateHikari</artifactId>
  <packaging>war</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>HibernateHikari Maven Webapp</name>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
      <dependency>
          <groupId>org.apache.tomcat</groupId>
          <artifactId>tomcat-servlet-api</artifactId>
          <version>7.0.50</version>
      </dependency>
      <dependency>
          <groupId>mysql</groupId>
          <artifactId>mysql-connector-java</artifactId>
          <version>5.1.39</version>
      </dependency>
      <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>5.2.0.Final</version>
      </dependency>
      <dependency>
          <groupId>com.zaxxer</groupId>
          <artifactId>HikariCP</artifactId>
          <version>2.4.6</version>
      </dependency>
  </dependencies>
    
  <build>
    <finalName>HibernateHikari</finalName>
      <plugins>
          <plugin>
              <groupId>org.apache.tomcat.maven</groupId>
              <artifactId>tomcat7-maven-plugin</artifactId>
              <version>2.0</version>
              <configuration>
                  <path>/testapp</path>
                  <update>true</update>

                  <url>http://localhost:8080/manager/text</url>
                  <username>admin</username>
                  <password>admin</password>

              </configuration>

          </plugin>
          <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-war-plugin</artifactId>
              <version>2.4</version>

          </plugin>
      </plugins>
  </build>
</project>

Now we have all the libraries we need available and we can begin with implementing the functionality.

The Database Table

As we want to persist random numbers, we need to have a database table, which will store the data. Create the following table in MySQL and ensure that you have a test user available:

CREATE TABLE `TestDB`.`RandomNumberTable` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `randomNumber` INT NOT NULL,
  PRIMARY KEY (`id`));
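
A test user matching the credentials used in the Hibernate configuration below could be created like this; user name and password are the example values from this post.

CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'sEcRet';
GRANT ALL PRIVILEGES ON TestDB.* TO 'testuser'@'localhost';
FLUSH PRIVILEGES;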


POJO Mojo: The Java Class to be Persisted

Hibernate allows us to persist Java objects in the database, by annotating the Java source code. The following Java class is used to store the random numbers that we generate.

@Entity
@Table(name="RandomNumberTable", uniqueConstraints={@UniqueConstraint(columnNames={"id"})})
public class RandomNumberPOJO {

@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
@Column(name="id", nullable=false, unique=true, length=11)
private int id;

@Column(name="randomNumber", nullable=false)
private int randomNumber;

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}

public int getRandomNumber() {
    return randomNumber;
}

public void setRandomNumber(int randomNumber) {
    this.randomNumber = randomNumber;
}

}



The code and also the annotations are straightforward. Now we need to define a way to connect to the database and let Hibernate handle the mapping between the Java class and the database schema we defined before.

Hibernate Configuration

Hibernate looks for the configuration in a file called hibernate.cfg.xml by default. This file is used to provide the connection details for the database.

    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.connection.provider_class">com.zaxxer.hikari.hibernate.HikariConnectionProvider</property>
    <property name="hibernate.hikari.dataSource.url">jdbc:mysql://localhost:3306/TestDB?useSSL=false</property>
    <property name="hibernate.hikari.dataSource.user">testuser</property>
    <property name="hibernate.hikari.dataSource.password">sEcRet</property>
    <property name="hibernate.hikari.dataSourceClassName">com.mysql.jdbc.jdbc2.optional.MysqlDataSource</property>
    <property name="hibernate.hikari.dataSource.cachePrepStmts">true</property>
    <property name="hibernate.hikari.dataSource.prepStmtCacheSize">250</property>
    <property name="hibernate.hikari.dataSource.prepStmtCacheSqlLimit">2048</property>
    <property name="hibernate.hikari.dataSource.useServerPrepStmts">true</property>
    <property name="hibernate.current_session_context_class">thread</property>

</session-factory>
</hibernate-configuration>

The file above contains the most essential settings. We specify the database dialect `org.hibernate.dialect.MySQLDialect`, define the connection provider class (the HikariCP) with `com.zaxxer.hikari.hibernate.HikariConnectionProvider` and provide the URL to our MySQL database (`jdbc:mysql://localhost:3306/TestDB?useSSL=false`) including the username and password for the database connection. Alternatively, you can also define the same information in the hibernate.properties file.

The Session Factory

We need to have a session factory, which initializes the database connection and the connection pool as well as handles the interaction with the database server. We can use the following class, which provides the session object for these tasks.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.jboss.logging.Logger;

@WebListener
public class HibernateSessionFactoryListener implements ServletContextListener {

public final Logger logger = Logger.getLogger(HibernateSessionFactoryListener.class);

public void contextDestroyed(ServletContextEvent servletContextEvent) {
    SessionFactory sessionFactory = (SessionFactory) servletContextEvent.getServletContext().getAttribute("SessionFactory");
    if(sessionFactory != null && !sessionFactory.isClosed()){
        logger.info("Closing sessionFactory");
        sessionFactory.close();
    }
    logger.info("Released Hibernate sessionFactory resource");
}

public void contextInitialized(ServletContextEvent servletContextEvent) {
    Configuration configuration = new Configuration();
    configuration.configure("hibernate.cfg.xml");
    // Add annotated class
    configuration.addAnnotatedClass(RandomNumberPOJO.class);

    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties()).build();
    logger.info("ServiceRegistry created successfully");
    SessionFactory sessionFactory = configuration
            .buildSessionFactory(serviceRegistry);
    logger.info("SessionFactory created successfully");

    servletContextEvent.getServletContext().setAttribute("SessionFactory", sessionFactory);
    logger.info("Hibernate SessionFactory Configured successfully");
}

}



This class provides two so-called context callbacks: one where the session factory gets initialized and a second one where it gets destroyed. The Tomcat servlet container automatically calls these depending on the state of the servlet context. You can see that the filename of the configuration file is provided (`configuration.configure("hibernate.cfg.xml");`) and that we tell Hibernate to map our RandomNumberPOJO class (`configuration.addAnnotatedClass(RandomNumberPOJO.class);`). Now all that is missing is the Web component, which is waiting for our requests.

The Web Component

The last part is the Web component, which we kept as simple as possible.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

import javax.persistence.TypedQuery;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import java.io.IOException;
import java.io.PrintWriter;

import java.util.List;
import java.util.Random;

public class HelloServlet extends HttpServlet {

public void doGet (HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    PrintWriter out = res.getWriter();
    addRandomNumber(req);
    out.println("There are " + countNumbers(req) + " random numbers");

    List<RandomNumberPOJO> numbers = getAllRandomNumbers(req,res);

    out.println("Random Numbers:");
    out.println("----------");

    for(RandomNumberPOJO record:numbers){
        out.println("ID: " + record.getId() + "\t :\t" + record.getRandomNumber());
    }

    out.close();

}

/**
 * Create a new random number and store it the database
 * @param request
 */
private void addRandomNumber(HttpServletRequest request){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");

    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();
    RandomNumberPOJO randomNumber = new RandomNumberPOJO();
    Random rand = new Random();
    int randomInteger = 1 + rand.nextInt((999) + 1);

    randomNumber.setRandomNumber(randomInteger);
    session.save(randomNumber);
    tx.commit();
    session.close();
}

/**
 * Get a list of all RandomNumberPOJO objects
 * @param request
 * @param response
 * @return
 */
private List<RandomNumberPOJO> getAllRandomNumbers(HttpServletRequest request, HttpServletResponse response){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");
    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();
    TypedQuery<RandomNumberPOJO> query = session.createQuery(
            "from RandomNumberPOJO", RandomNumberPOJO.class);

    List<RandomNumberPOJO> numbers =query.getResultList();



    tx.commit();
    session.close();

    return numbers;


}

/**
 * Count records
 * @param request
 * @return
 */
private int countNumbers(HttpServletRequest request){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");
    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();

    String count = session.createQuery("SELECT COUNT(id) FROM RandomNumberPOJO").uniqueResult().toString();

    int rowCount = Integer.parseInt(count);

    tx.commit();
    session.close();
    return rowCount;
}

}



This class provides the actual servlet and is executed whenever a user calls the web application. First, a new RandomNumberPOJO object is instantiated and persisted. We then count how many numbers we already have and then fetch a list of all existing records.

The last step before we can actually run the application is the definition of the web entry points, which we define in the file web.xml. This file is already generated by the Maven archetype and we only need to add a name for our small web service and provide a mapping for the entry class.

<display-name>HikariCP Test App</display-name>

<servlet>
    <servlet-name>hello</servlet-name>
    <servlet-class>HelloServlet</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>hello</servlet-name>
    <url-pattern>/hello</url-pattern>
</servlet-mapping>


Compile and Run

We can then compile and deploy the application with the following command:

mvn clean install org.apache.tomcat.maven:tomcat7-maven-plugin:2.0:deploy -e

This will compile and upload the application to the Tomcat server. We can then open the URL http://localhost:8080/testapp/hello in our browser to create and persist random numbers by refreshing the page. The result will look similar to this:


Calling Back: An Example for Google’s Geocoder Service and Custom Markers

I recently moved and naturally there were a lot of clothes which I do not need (read: do not fit into) any more. Throwing them away would be a waste and luckily, there is a social business called WAMS which (besides a lot of other nice projects) supports reuse and recycling. WAMS provides and maintains containers for collecting clothes at many locations in Tirol. Unfortunately, there is not yet a map available to find them easily. I took this as an opportunity for a little side project in Javascript. I am not affiliated with WAMS, but of course the code and data are open sourced here.

Idea

The idea was quite simple. I used some of the container addresses I found in the flyer and created a custom Google Map showing the locations of the containers. The final result looks like this and a live demo can be found at the Github page.

Retrieve Geolocation Information

The Google API allows retrieving latitude and longitude data for any given address. If the address is found in Google’s database, the server returns a GeocoderResult object containing the geometry information about the found object. This GeocoderGeometry contains the latitude and longitude data of the address. The first step retrieves the data from Google’s API by using the Geocoder class. To do so, the following JSON structure is iterated over and the addresses are fed to the Geocoding service.

{
                "containerStandorte": [
                    {
                        "id": "1",
                        "name": "Pfarre Allerheiligen",
                        "address": "St.-Georgs-Weg 15, 6020, Innsbruck, Austria",
                        "latitude": "",
                        "longitude: "":
                    },
                    {
                        "id": "2",
                        "name": "Endhaltestelle 3er Linie Amras",
                        "address": "Philippine-Welser-Straße 49, 6020, Innsbruck, Austria",
                        "latitude": "",
                        "longitude": ""
            }

The Javascript code for obtaining the data is shown in the following listing:

window.onload = function() {

    // Data
    var wamsData =
 '{ "containerStandorte" : [' +
    '{ "id":"1", "name":"Pfarre Allerheiligen" , "address":"St.-Georgs-Weg 15, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"2", "name":"Endhaltestelle 3er Linie Amras" , "address":"Philippine-Welser-Straße 49, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"3", "name":"DEZ Einkaufszentrum Parkgarage" , "address":"Amraser-See-Straße 56a,6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"4", "name":"Wohnanlage Neue Heimat" , "address":"Geyrstraße 27-29, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"5", "name":"MPREIS Haller Straße" , "address":"Hallerstraße 212, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"6", "name":"Recyclinginsel Novapark" , "address":"Arzlerstraße 43, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"7", "name":"Höhenstraße / Hungerburg (neben Spar)" , "address":"Höhenstraße 125,6020 Innsbruck, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"8", "name":"Recyclinginsel Schneeburggasse" , "address":"Schneeburggasse 116, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"9", "name":"MPreis Fischerhäuslweg 31" , "address":"Fischerhäuslweg 31, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"10", "name":"Pfarre Petrus Canisius" , "address":"Santifallerstraße 5,6020 Innsbruck Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"11", "name":"MPreis Bachlechnerstraße" , "address":"Bachlechnerstraße 46, 6020 Innsbruck" , "latitude":"", "longitude":"" }'
    +' ]}';

    // Google Geocoder Library
    var geocoder = new google.maps.Geocoder();
    // Parse the JSON string into a javascript object.
    var wamsJSON = JSON.parse(wamsData);


    /**
        Iterate over containers and retrieve the geo location for their address.
    */
    function processContainers(){
        // Store amount of containers
        var amountOfContainers = wamsJSON.containerStandorte.length;
        // Iterate over all containers
        for (var i=0;i<amountOfContainers;i++){
            var container = wamsJSON.containerStandorte[i];
            // Encode the address of the container
            geocodeAddress(container, processContainerLocationCallback);
        };
    };

    /**
        Process the results
    */
    function processContainerLocationCallback(container,lat,long){
        wamsJSON = updateJSON(container,lat,long, printJSONCallback);
    }

    /**
        Update the JSON object and store the latitude and longitude information
    */
    function updateJSON(container,lat,long,printJSONCallback){
        // Store amount of containers
        var amountOfContainers = wamsJSON.containerStandorte.length;
        // Iterate over containers
        for (var i=0;i<amountOfContainers;i++){
            // Pick the correct id and store the data
            if(wamsJSON.containerStandorte[i].id==container.id){
                wamsJSON.containerStandorte[i].latitude=lat;
                wamsJSON.containerStandorte[i].longitude=long;
            }
        };
        // When the update is done, call the displayCallback
        printJSONCallback();
        return wamsJSON;
    };

    /*
        Google's Geocoder function takes an address as input and retrieves
        (among other data) the latitude and longitude of the provided address.
        Note that this is an asynchronous call, so the response may take some time.
        Also remember that the processContainerLocationCallback which is given as
        an input parameter is just a variable. A variable which happens to be a function.
    */
    function geocodeAddress(container, processContainerLocationCallback){
        var address = container.address;
        geocoder.geocode( { 'address': address}, function(results, status) {
            // Anonymous function to process results.
            if (status == google.maps.GeocoderStatus.OK) {
                var lat = results[0].geometry.location.lat();
                var long = results[0].geometry.location.lng();
                // When the results have been retrieved, process them in the function processContainerLocationCallback
                processContainerLocationCallback(container, lat,long);
            } else {
                alert("Geocode was not successful for the following reason: " + status);
            }
        });
    };

    // Print the result
    function printJSONCallback(){
        var jsonString = JSON.stringify(wamsJSON, null,4);
        console.log(jsonString);
        document.getElementById("jsonOutput").innerHTML = jsonString;
    }

    // Start processing
    processContainers();
}

As the calls to the Google services are asynchronous, we need to use callbacks which are invoked once the previous function has finished. Callbacks can be tricky and are a bit of a challenge to understand the first time. Especially the Google Geocoder methods require working with several nested callbacks, a situation which is often referred to as callback hell. The code above does the following things:

  1. Iterate over the JSON structure and process each container individually -> function processContainers()
  2. For each container, call Google’s Geocoder and resolve the address to a location -> geocodeAddress(container, processContainerLocationCallback)
  3. After the result has been obtained, process the result. -> processContainerLocationCallback(container,lat,long)
  4. Update the JSON object by looping over all records and search for the correct id. Once the id was found, update latitude and longitude information. -> updateJSON(container,lat,long,printJSONCallback)
  5. Write the result to the Web page -> printJSONCallback()

The missing latitude and longitude values are retrieved and the JSON gets updated. The final result looks like this:

{
    "containerStandorte": [
        {
            "id": "1",
            "name": "Pfarre Allerheiligen",
            "address": "St.-Georgs-Weg 15, 6020, Innsbruck, Austria",
            "latitude": 47.2680316,
            "longitude": 11.355563999999958
        },
        {
            "id": "2",
            "name": "Endhaltestelle 3er Linie Amras",
            "address": "Philippine-Welser-Straße 49, 6020, Innsbruck, Austria",
            "latitude": 47.2589929,
            "longitude": 11.42600379999999
        },
        ...
    ]
}

Now that we have the data ready, we can proceed with the second step.

Placing the Markers

I artistically created a custom marker image which we will use to indicate the location of a clothes container from WAMS.

This image replaces the standard Google marker. Now all that is left is to iterate over the updated JSON object, which now also contains the latitude and longitude data, and place a marker for each container. Note that hovering over the marker displays the address of the container on the map.

// Data for container locations
        var wamsData = '{"containerStandorte":[' +
            '{"id":"1","name":"Pfarre Allerheiligen","address":"St.-Georgs-Weg 15, 6020, Innsbruck, Austria","latitude":47.2680316,"longitude":11.355563999999958},' +
            '{"id":"2","name":"Endhaltestelle 3er Linie Amras","address":"Philippine-Welser-Straße 49, 6020, Innsbruck, Austria","latitude":47.2589929,"longitude":11.42600379999999},' +
            '{"id":"3","name":"DEZ Einkaufszentrum Parkgarage","address":"Amraser-See-Straße 56a,6020 Innsbruck, Austria","latitude":47.2625925,"longitude":11.430842299999995},' +
            '{"id":"4","name":"Wohnanlage Neue Heimat","address":"Geyrstraße 27-29, 6020 Innsbruck, Austria","latitude":47.2614899,"longitude":11.426765700000033},' +
            '{"id":"5","name":"MPREIS Haller Straße","address":"Hallerstraße 212, 6020 Innsbruck, Austria","latitude":47.2769524,"longitude":11.442559599999981},' +
            '{"id":"6","name":"Recyclinginsel Novapark","address":"Arzlerstraße 43, 6020 Innsbruck, Austria","latitude":47.2833947,"longitude":11.424273299999982},' +
            '{"id":"7","name":"Höhenstraße / Hungerburg (neben Spar)","address":"Höhenstraße 125,6020 Innsbruck, 6020, Innsbruck, Austria","latitude":47.2841353,"longitude":11.394666799999982},' +
            '{"id":"8","name":"Recyclinginsel Schneeburggasse","address":"Schneeburggasse 116, 6020 Innsbruck, Austria","latitude":47.2695889,"longitude":11.364059699999984},' +
            '{"id":"9","name":"MPreis Fischerhäuslweg 31","address":"Fischerhäuslweg 31, 6020 Innsbruck, Austria","latitude":47.261875,"longitude":11.364496700000018},' +
            '{"id":"10","name":"Pfarre Petrus Canisius","address":"Santifallerstraße 5,6020 Innsbruck Austria","latitude":47.2635626,"longitude":11.380990800000063},' +
            '{"id":"11","name":"MPreis Bachlechnerstraße","address":"Bachlechnerstraße 46, 6020 Innsbruck","latitude":47.2645067,"longitude":11.376220800000056}' +
        ']}';


        function initialize() {
          var innsbruck = { lat: 47.2656733, lng: 11.3941983 };
          var map = new google.maps.Map(document.getElementById('map'), {
            zoom: 14,
            center: innsbruck
          });

          // Load Google Geocoder Library
          var geocoder = new google.maps.Geocoder();
          // Parse the data into a JSON
          var wamsJSON = JSON.parse(wamsData);
          // Iterate over all containers
          for(var i =0; i < wamsJSON.containerStandorte.length;i++){
              var container = wamsJSON.containerStandorte[i];
              placeMarkerOnMap(geocoder, map, container);
          };
        }

    // Custom marker
    var wamsLogo = {
      url: 'images/wams.png',
      // This marker is 64 pixels wide by 64 pixels high.
      size: new google.maps.Size(64, 64),
      // The origin for this image is (0, 0).
      origin: new google.maps.Point(0, 0),
      // The image is anchored at the point (64, 70).
      anchor: new google.maps.Point(64, 70)
    };

    // Define the shape for the marker
    var shape = {
      coords: [1, 1, 1, 64, 64, 64, 64, 1],
      type: 'poly'
    };

    // Place marker on the map
    function placeMarkerOnMap(geocoder, resultsMap, container) {
        // Create Google position object with latitude and longitude from the container object
        var positionLatLng = new google.maps.LatLng(parseFloat(container.latitude),parseFloat(container.longitude));
        // Create marker with the position, logo and address
        var marker = new google.maps.Marker({
          map: resultsMap,
          position: positionLatLng,
          icon: wamsLogo,
          shape: shape,
          title: container.address,
          animation: google.maps.Animation.DROP
        });
    }
    // Place marker on the map
    google.maps.event.addDomListener(window, 'load', initialize);

Hosting the Result

Github offers a great feature for hosting simple static Web pages. All that is needed is a new orphan branch of your project named gh-pages, as described here. This branch serves as the Web root for all your files and allows you to host Web pages for public projects for free. You can see the result of the project above here.
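As a minimal sketch of that workflow (branch and file names follow the description above, the copied paths are only placeholders):

# Create an empty, unrelated branch that will hold the static page
git checkout --orphan gh-pages
git rm -rf .                      # start with a clean working tree

# Copy or create the static files for the page (paths are only examples)
cp ../static-build/index.html .
cp -r ../static-build/images .
git add .
git commit -m "Publish static map page"

# Push the branch; Github then serves it at https://<user>.github.io/<repository>/
git push origin gh-pages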

Encrypt a USB Drive (or any other partition) Using LUKS

Did you ever want to feel like a secret agent, or do you really need to transport and exchange sensitive data? Encrypting your data is not much effort and can be used to protect a pen drive or any other partition and the data on it from unauthorized access. In the following example you see how to create an encrypted partition on a disk. Note two things: First, if you accidentally encrypt the wrong partition, the data is lost. Forever. So be careful when entering the commands below. Secondly, the method shown below only protects the data at rest. As soon as you decrypt and mount the device, the data can be read by everyone else if you do not use correct permissions.
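Before running any of the commands, double check which block device actually is the drive you want to encrypt; /dev/sdX is used only as a placeholder throughout this post. A quick way to identify the device:

# List all block devices with size, type and mount point
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Optionally inspect the partition table of the candidate device (replace sdX)
sudo fdisk -l /dev/sdX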

Preparation

Prepare a mount point for your data and change ownership.

# Create a mount point
sudo mkdir /media/cryptoUSB
# Set permissions for the owner
sudo chown stefan:stefan /media/cryptoUSB

Create an Encrypted Device

Encrypt the device with LUKS. Note that all data on the partition will be overwritten during this process.

# Create encrypted device 
sudo cryptsetup --verify-passphrase luksFormat /dev/sdX -c aes -s 256 -h sha256

# From the man page:
       --cipher, -c 
              Set the cipher specification string.
       --key-size, -s 
              Sets  key  size in bits. The argument has to be a multiple of 8.
              The possible key-sizes are limited by the cipher and mode used.
       --verify-passphrase, -y
              When interactively asking for a passphrase, ask for it twice and
              complain  if  both  inputs do not match.
       --hash, -h 
              Specifies the passphrase hash for open (for  plain  and  loopaes
              device types).

# Open the Device
sudo cryptsetup luksOpen /dev/sdX cryptoUSB
# Create a file system (ext3)
sudo mkfs -t ext3 -m 1 -O dir_index,filetype,sparse_super /dev/mapper/cryptoUSB
# Add a label
sudo tune2fs -L Crypto-USB /dev/mapper/cryptoUSB
# Close the device
sudo cryptsetup luksClose cryptoUSB

Usage

The usage is pretty simple. With a GUI you will be prompted to decrypt the device. At the command line, use the following commands to open and decrypt the device.

# Open the Device
sudo cryptsetup luksOpen /dev/sdX cryptoUSB
# Mount it
sudo mount /dev/mapper/cryptoUSB /media/cryptoUSB

When you are finished with your secret work, unmount and close the device properly.

sudo umount /media/cryptoUSB 
sudo cryptsetup luksClose cryptoUSB

Secure Automated Backups of a Linux Web Server with Rrsync and Passwordless Key Based Authentication

Backups Automated and Secure

Backing up data is an essential task, yet it can be cumbersome and requires some work. As most people are lazy and avoid tedious tasks wherever possible, automation is key, as it allows us to deal with more interesting work instead. In this article, I describe how a Linux Web server can be backed up in a secure way by using restricted SSH access to the rsync tool. I found a great variety of useful blog posts, which I reuse in this article.

This is what we want to achieve:

  • Secure data transfer via SSH
  • Passwordless authentication via keys
  • Restricted rsync access
  • Backup of all files by using a low privileged user

In this article, I will denote the client which should be backed up as WebServer. The WebServer contains all the important data that we want to keep. The BackupServer is responsible for fetching the data from the WebServer in a pull manner.

On the BackupServer

On the BackupServer, we create a key pair without a passphrase which we can use to authenticate with the WebServer. Details about passwordless authentication are given here.

# Create a passwordless key pair (leave the passphrase empty when prompted)
ssh-keygen -t rsa -f ~/.ssh/rsync-backup.key.private
# ssh-keygen appends .pub to the public key; rename it to match the names used in this article
mv ~/.ssh/rsync-backup.key.private.pub ~/.ssh/rsync-backup.key.public

On the WebServer

We are going to allow a user who authenticated with her private key to rsync sensitive data from our WebServer to the BackupServer. This user should have a low privileged account and still be able to back up data which belongs to other users. This capability comes with a few security threats which need to be mitigated. The standard way to back up data is rsync. The tool can be potentially dangerous, as it allows the user to write data to an arbitrary location if not handled correctly. In order to deal with this issue, a restricted version of rsync exists, which locks the usage of the tool to a declared directory: rrsync.

Obtain Rrsync

You can obtain rrsync from the developer page or extract it from your Ubuntu/Debian distribution as described here. With the following commands you can download the file from the Web page and make it executable.

sudo wget https://ftp.samba.org/pub/unpacked/rsync/support/rrsync -O /usr/bin/rrsync
sudo chmod +x /usr/bin/rrsync

Add a Backup User

First, we create a new user and verify the permissions for the SSH directory.

sudo adduser rsync-backup # Add a new user and select a strong password
su rsync-backup # change to the new account
ssh rsync-backup@localhost # ssh somewhere (e.g. localhost) so that the ~/.ssh directory is created
exit
chmod go-w ~/ # Set permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

Create a Read Only View of the Data You Want to Back Up

I got this concept from this blog post. As we also want to back up data from other users, our backup user (rsync-backup) needs to have read access to this data. As we do not want to change the permissions for the rsync-backup user directly in the file system, we use bindfs to create a read-only view of the data we want to back up. We will create a virtual directory containing all the other directories that we want to back up. This directory is called /mnt/Backups-Rsync-Readonly. Instead of copying all the data into that directory, which would be a waste of space, we link all the other directories into the backup folder and then sync this folder to the BackupServer.

One Time Steps:

The following steps create the directory structure for the backup and set the links to the actual data that we want to back up. With this method, the backup itself needs neither root, sudo nor any other advanced permissions. We simply create a read-only view of the data where the only user with access is rsync-backup.

sudo apt-get install acl bindfs # Install packages
sudo mkdir /mnt/Backups-Rsync-Readonly # Create the base directory
sudo chown -R rsync-backup /mnt/Backups-Rsync-Readonly # Permissions
sudo mkdir /mnt/Backups-Rsync-Readonly/VAR-WWW # Create subdirectory for /var/www data
sudo mkdir /mnt/Backups-Rsync-Readonly/MySQL-Backups # Create subdirectory for MySQL Backups
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/ # Set Access Control List permissions for read only
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/MySQL-Backups
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/VAR-WWW

Testrun

In order to use these directories, we need to mount the folders. We set the permissions for bindfs and establish the link between the data and our virtual backup folders.

sudo bindfs -o perms=0000:u=rD,force-user=rsync-backup /var/www /mnt/Backups-Rsync-Readonly/VAR-WWW
sudo bindfs -o perms=0000:u=rD,force-user=rsync-backup /Backup/MySQL-Dumps /mnt/Backups-Rsync-Readonly/MySQL-Backups

These commands mount the data directories and create the read-only view. Note that these mounts are only valid until you reboot. If the above works and the rsync-backup user can access the folders (see the quick check below), you can add the mount points to fstab to automatically mount them at boot time. Unmount the folders with sudo umount /mnt/Backups-Rsync-Readonly/* before you continue.
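A quick check could look like this (the touch command is expected to fail, since the view is read only):

# Read access as rsync-backup should work ...
sudo -u rsync-backup ls /mnt/Backups-Rsync-Readonly/VAR-WWW
# ... while write access should be denied
sudo -u rsync-backup touch /mnt/Backups-Rsync-Readonly/VAR-WWW/write-test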

Permanently Add the Virtual Folders

You can add the folders to fstab like this:

# Backup bindfs 
/var/www    /mnt/Backups-Rsync-Readonly/VAR-WWW fuse.bindfs perms=0000:u=rD,force-user=rsync-backup 0   0
/Backup/MySQL-Dumps    /mnt/Backups-Rsync-Readonly/MySQL-Backups fuse.bindfs perms=0000:u=rD,force-user=rsync-backup 0   0

Remount the directories with sudo mount -a .
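The views should then show up in the mount table again:

# Verify that the bindfs views are mounted
mount | grep Backups-Rsync-Readonly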

Adding the Keys

In the next step we add the public key from the BackupServer to the authorized_keys file of the rsync-backup user on the WebServer. On the BackupServer, cat the public key and copy the output to the clipboard.

ssh user@backupServer
cat ~/.ssh/rsync-backup.key.public

Switch to the WebServer and log in as the rsync-backup user. Then add the key to the file ~/.ssh/authorized_keys.
The file now looks similar to this:

ssh-rsa AAAAB3N ............ fFiUd rsync-backup@webServer


We then prepend the key with the only command this user should be able to execute: rrsync. We add additional limitations to increase the security of this account: we can provide a source IP address and restrict the SSH features available to this key. The final file contains the following information:

command="/usr/bin/rrsync -ro /mnt/Backups-Rsync-Readonly",from="192.168.0.10",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3N ………… fFiUd rsync-backup@webServer



Now whenever the user rsync-backup connects, the only possible command is rrsync. Rrsync itself is limited to the directory provided and only has read access. We also verify the IP address and restrict the source of the command.
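This can be verified from the BackupServer: an interactive login attempt with the backup key must not result in a shell (assuming the key location from the key generation step above):

# Try to open an interactive session with the backup key
ssh -i ~/.ssh/rsync-backup.key.private rsync-backup@webServer
# Expected: no shell prompt; the forced rrsync command complains and the connection is closed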

Hardening SSH

Additionally, we can force the rsync-backup user to use key based authentication only. We also set the IP address restriction for all SSH connections in the sshd_config.

AllowUsers rsync-backup@192.168.0.10

Match User rsync-backup
    PasswordAuthentication no



Backing Up

Last but not least we can run the backup. To start syncing we log in to the BackupServer and execute the following command. There is no need to provide remote paths, as the only valid path is already defined in the authorized_keys file.

rsync -e "ssh -i /home/backup/.ssh/rsync-backup.key.private" -aLP --chmod=Do+w rsync-backup@webServer: .
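To run the backup truly unattended, the same command can be placed in the crontab of the BackupServer. A sketch, assuming the backups should end up in /home/backup/webserver-backup (the target directory and schedule are just examples):

# Run the pull backup every night at 03:00; -P is dropped since nobody watches a cron job
0 3 * * * rsync -e "ssh -i /home/backup/.ssh/rsync-backup.key.private" -aL --chmod=Do+w rsync-backup@webServer: /home/backup/webserver-backup/ >> /home/backup/rsync-backup.log 2>&1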



Conclusion

This article covers how a backup user can create backups of data owned by other users without having write access to the data. The backup is transferred securely via SSH and can run unattended. The backup user is restricted to using rrsync only and we included IP address verification. The backup user can only create backups of directories we defined earlier.