Selected nerd stuff from computer science students at Uni Ulm

Interview: Dru Lavigne

Dru Lavigne

This time the interview series continues with Dru Lavigne.

Dru is, amongst other things, the Community Manager of the PC-BSD project, a director at the FreeBSD Foundation, and the founder and president of the BSD Certification Group.

She is also a technical writer who has published books on topics surrounding BSD and writes the blog A Year in the Life of a BSD Guru.

Who are you and what do you do?
I currently work at iXsystems as the “Technical Documentation Specialist”. In reality, that means that I get paid to do what I love most, write about BSD. Lately, that means maintaining the documentation for the PC-BSD and the FreeNAS Projects, assisting the FreeBSD Documentation Project in preparing a two volume print edition of the FreeBSD Handbook, and writing a regular column for the upcoming FreeBSD Journal.

Which software or programs do you use most frequently?
At any given point in time, the following apps will be open on my PC-BSD system: Firefox, pidgin, kwrite, and several konsoles. Firefox has at least 20 tabs open, pidgin is logged into #pcbsd, #freenas, #bsdcert, and #bsddocs. make or igor (the FreeBSD automated doc proofreading utility) are most likely running in one konsole, while other konsoles have various man pages open or various commands which I’m testing. Other daily tools, depending upon which doc set I’m working on, include DocBook XML, OpenOffice, the Gimp, Calibre, Acroread, and VirtualBox for testing images. I also spend a fair amount of time in the forums, wikis, and bug tracking systems for the projects that I write documentation for.

Why did you decide to use your particular operating system(s) of choice?
I had been using FreeBSD as my primary desktop since 1996 and pretty much had an installation routine down pat to get everything I needed installed, up, and running whenever I needed new systems. When Kris Moore started the PC-BSD Project, I liked the graphical installer and its ability to set up everything I needed quickly. Sure, I could do it myself, but why waste an hour or so doing that when someone else had already created something to automate the process?

In what manner do you communicate online?
If it’s not in my inbox, it probably doesn’t happen. However, IRC, Facebook, and LinkedIn are convenient for getting an answer to something now before summarizing in an email or actioning an item.

Which folders can be found in your home directory?
A dir for temporary downloads and patches, one for PC-BSD src, one for FreeBSD doc src, one for presentations, one for articles, one for bsdcert stuff, one for each version of the PC-BSD docs, and one for each version of the FreeNAS docs.

Which paper or literature has had the most impact on you?
My favorite O’Reilly books are Unix Power Tools by Peek, Powers, et al, TCP/IP Network Administration by Craig Hunt, and Open Sources: Voices from the Open Source Revolution.

What has had the greatest positive influence on your efficiency?
Hard to say, as my brain tends to naturally gravitate towards the most efficient way of doing anything. I can’t imagine working without an Internet connection though and the times when Internet is not available are frustrating work-wise.

How do you approach the development of a new project?
Speaking from a doc project perspective, I tend to think big picture first and then lay out the details as they are tested and then written. I can quickly visualize a flow, an associated table of contents, and an estimate of the number of pages required; the rest is the actual writing.

Which programming language do you like working with most?
I don’t per se, but can work my way around a Makefile. With regards to text formatting languages, I’ve used LaTeX, DocBook XML, PseudoPOD, groff, mediawiki, tikiwiki, ikiwiki, etc. I can’t say that I have a favorite text formatting language, as each is just yet another set of tags I have to remember when writing text. I do have to be careful to use the correct tags for the specified doc set and to avoid using tags or entities when writing emails or chats :-)

In your opinion, which piece of software should be rewritten from scratch?
No comment on software (as I’m not a developer). However, I daily see docs that need to be ripped out and started from scratch, as they are either so out of date as to be unusable, or their flow doesn’t match how a user actually uses the software. That’s assuming that any docs for that software exist at all.

What would your ideal setup look like?
I like my current setup as it has all of the tools I need. See #2.


Few words about… The seek for a WhatsApp alternative

Since WhatsApp was sold to Facebook for 19 billion dollars, lots of blogs and news sites have been looking for alternatives. In this short comment, I will point out why we all need alternatives, why we need more than one alternative, why this works, and which features our new alternative must have.

Threema, TextSecure, and Telegram are just a few of the so-called WhatsApp competitors these days. But before we go out and look for alternatives, we must understand what the problem with WhatsApp and Facebook is. And before we consider that, we must understand why Zuckerberg paid 19 billion dollars for WhatsApp. I intentionally do not say that WhatsApp is worth that much money. It is only worth that much to Facebook. The big deal shows us what really matters in the information age. Surprise: it’s information itself. Facebook is free to use, so where does all the money come from? Facebook can afford to buy WhatsApp even though it has not a single paying user. This tells us that information is very important and also very expensive. Important for advertising, market research, or insurance companies. Or intelligence agencies. Information about us. Companies make billions of dollars by selling the information they know about us!

The bad thing about this is that we only understand why it can be a problem when it is too late: when knowledge about us is used against us and we suddenly recognize it. Before that, we all agree to the use of our personal information. And that’s bad.

So we note that information is important and we must take care of it.

For example, by not giving a single company that much information. But there is more: power. Facebook not only has our personal information, it has the power of more than one billion users. And there is almost no business competition.

So we note that using one centralized service supports monopolism and helps aggregating information.

So far, we’ve learned about the disadvantages of an information-collecting centralized service. Now let’s have a look at why WhatsApp has so many users even though there are a lot of alternatives. When we read about apps with the potential to compete with WhatsApp, we always stumble upon the word usability. One of the main reasons WhatsApp is so successful is that everyone can use it. You do not even have to register (explicitly). Registration is done almost instantly and implicitly.

So we note that to provide a real alternative, we must make the barrier to using our product very, very low by optimizing its usability. Features like group chats or the ability to send multimedia files would increase acceptance too. Platform support is also very important.

Let’s recap. A chat system should protect our information. This can be done partially by using the right encryption. Partially, because metadata can be very difficult to encrypt: the data exchanged between two chatters can be strongly encrypted, but it is hard to hide who talks to whom (the metadata). If the whole collection of metadata is stored in a single place (or at a single company), we can hide what we are talking about, but not when, to whom, from where, how often, and so on. For the latter, we must first take a look at network topologies. All communication in WhatsApp or Facebook ends up at one server or server cluster (see figure 1). A better alternative is to use multiple independent servers: a decentralized system (see figure 2).


Figure 1: Centralized network topology.

Here, each server can be owned by a different person or company. Communication between them is still possible because the Internet is designed that way. Think of email, for example: there we have the freedom to choose which provider we want to use. On top of that, we could use Tor (a network for anonymizing connection data) to disguise even more of our metadata.


Figure 2: Decentralized network topology.

Another network topology to consider is the peer-to-peer architecture (see figure 3). Skype used to have this before Microsoft took it over. But Skype also fails elsewhere. First, its metadata is centralized. Second, it is owned by one company (Microsoft). Third, it fails because of its closed-source nature. We cannot control or see what is going on inside the system.
So we note that using an open-source, decentralized system is good. Also note that this is where most of the recently discussed alternatives fail completely.


Figure 3: Peer-to-peer network topology.

Another problem with closed source is the denial of choice, for example the choice of crypto algorithms. In an open system, we can use any end-to-end encryption we want. And we want that choice, because weak encryption is not acceptable to us. We also want encryption that guarantees deniability and perfect forward secrecy. Deniability means that nobody can prove that your conversation actually took place. Perfect forward secrecy means that even if someone comes into possession of your password or encryption keys, your past conversations cannot be decrypted afterwards. So we note that we need a system that allows us to use our own clients and our own encryption. Let’s summarize: our chat system must be decentralized, support any client and any end-to-end encryption, be easy to use, and support all available platforms. To make it short: it already exists. It’s called XMPP and was developed in 1999.

Building node-webkit programs with Grunt

The days before Christmas were busy as usual: it’s not just that everyone is hunting for gifts for family and friends; the annual German Youth Team Championship of Chess also begins on 25th December. Together with some friends, I’m trying to broadcast this big event, with more than 500 young participants, to their families at home waiting for results.

In recent years, we used a simple mechanism to provide near-live results: the arbiters got a special email address where they could send their tournament files. On the server side, a small node.js mail server (as mentioned in “Einfacher SMTP-Server zur Aufgabenverarbeitung” [German]) took the proprietary file format, converted it, and imported the results of already finished games. Although this was huge progress compared to the past, when results were only imported once all games had finished, this approach needed an arbiter constantly sending mails around.

Therefore I wanted to try another way: a program that keeps an eye on the tournament file, uploads it once it has changed, and automatically triggers the import of new game results. With just a few days for its development, it was necessary to stay with the same technology stack we had used before for the mail server and tournament file converter: node.js.

As the tournament arbiters aren’t all familiar with command-line tools, a graphical user interface was necessary. A few days earlier, StrongLoop had published a blog post about node-webkit, which allows writing native apps in HTML and JavaScript. That blog post is a good entry to the topic. Nettuts+ recently wrote a nice introduction too. Unlike their approach, I used the Grunt plugin grunt-node-webkit-builder, which takes care of the whole build process. Here’s my project’s setup:

├── dist
├── Gruntfile.js
├── package.json
└── src
    ├── index.html
    ├── package.json
    ├── js
    │   └── index.js
    └── css
        └── style.css

When using grunt-node-webkit-builder, it is necessary to keep the sources of the build tooling (everything in the root directory) separate from the source code of the node-webkit program. Otherwise the build tools (Grunt, you know?) may get bundled into the node-webkit program as well, which leads to large file sizes and slow execution times.

So it’s clear that in /package.json we specify only the dependencies that are necessary for the build process:

{
  "name": "do-my-build",
  "version": "0.0.1",
  "description": "Using Grunt to build my little program",
  "author": "Falco Nogatz <>",
  "private": true,
  "dependencies": {
    "grunt": "~0.4.2",
    "grunt-node-webkit-builder": "~0.1.14"
  }
}

We also have to create the Gruntfile.js:

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('src/package.json'),
    nodewebkit: {
      options: {
        build_dir: './dist',
        // specify what to build
        mac: false,
        win: true,
        linux32: false,
        linux64: true
      },
      src: './src/**/*'
    }
  });

  grunt.loadNpmTasks('grunt-node-webkit-builder');

  grunt.registerTask('default', ['nodewebkit']);
};

The actual node-webkit program can now be written in the /src directory. As mentioned in the tutorials linked above, /src/package.json should be filled with some node-webkit-related fields:

{
  "name": "my-program",
  "main": "index.html",
  "window": {
    "toolbar": false,
    "width": 800,
    "height": 600
  }
}

To build the node-webkit program for the architectures specified in the Gruntfile.js, you simply have to call:

grunt
This downloads the up-to-date node-webkit binaries for the specified architectures and builds an executable program. The result for Windows is simply a .exe file, for Linux an executable file. It contains everything needed to run the program, so the user has to install neither node.js nor Chrome. The builds are located in /dist/releases.

With this setup it was possible to automate the build process and develop the application within a few days. The node-webkit runtime extends some native browser capabilities; for example, it is possible to get the full path of a file selected via an <input type="file">. With that, it was possible to create a graphical user interface to select tournament files and watch for their changes, which then trigger the update process.

Today I learned: vi mode in the shell AND in almost all command-line tools

I am a big fan of the text editor vi. While not exactly easy to learn, it lets you edit text very efficiently once you know your way around it a little. Most shells offer a vi mode (enabled with the command set -o vi). In this mode, the command line can be operated with vi keybindings, which makes working with it significantly more efficient. My problem so far was that the vi mode stopped working as soon as I opened another interactive command-line tool in the shell (e.g. sftp). Recently, however, a friend came to me and said: “It does work!”
Here is the solution. Create a file ~/.inputrc with the following content:
set keymap vi
set editing-mode vi

…and the vi keybindings work in almost all command-line tools :D

Today I learned: cutting SIM cards to size

After getting a new smartphone a few days ago that uses micro-SIM cards, I faced the problem of only owning a regular (mini-)SIM card. Fortunately, I was made aware that these can apparently be cut down to the size of a micro-SIM without problems. At first I was a bit skeptical, but after some internet research I decided to give it a try. Luckily, I owned a foreign micro-SIM card that I could use to compare sizes and contacts. I first cut my old mini-SIM roughly into shape with scissors and did the fine-tuning with fine sandpaper. And indeed: my new micro-SIM card works excellently. According to various internet sources, mobile phone shops charge around 10-20 euros for this, depending on the provider. So you can safely save that money. Note: some mini-SIM cards can apparently also be cut down to the size of a nano-SIM. But I would not bet on that working (especially if the contacts are already larger than the nano-SIM).

Interview: Henning Brauer

Henning Brauer

This time the interview series continues with Henning Brauer (@HenningBrauer).

Amongst other things, Henning is an OpenBSD developer involved in projects like pf, OpenNTPD, and OpenBGPD. pf is a BSD-licensed, advanced packet filter and a default component in OpenBSD. It is comparable to e.g. iptables, though in my opinion pf is a superior, better-designed tool with a clear syntax that makes configuration very comfortable. I found it to be a very nice tool, and it seems I am not the only one: pf has been ported to many other operating systems and is, for example, integrated into Mac OS X Lion. Since it is licensed under the permissive BSD license (as is everything within the OpenBSD source tree), companies can integrate the code into their proprietary systems.

Henning is also the founder and CEO of BSWS, an ISP/MSP based in Hamburg that makes heavy use of free software. As Henning told me, their technology stack consists basically only of free software. I think this is very nice; it always makes me happy to see businesses build upon free software and contribute back to its development.

Who are you and what do you do?
I’m Henning Brauer, 35. I’m the CEO of BS Web Services GmbH, an ISP/MSP here in Hamburg. I have been an OpenBSD developer since 2002, heavily involved with pf – last but not least, I redesigned it completely with Ryan McBride. I started OpenBGPD a good 10 years ago and OpenNTPD a bit thereafter, and the privsep/messaging framework I wrote for bgpd is used by almost all newer daemons in OpenBSD these days. Nowadays I mostly work on the kernel side, the network stack, and pf as an integral part of it. Aside from that, I wrote femail, am a board member of the EuroBSDcon Foundation, and am active in local politics.

Which software or programs do you use most frequently?
I heavily use OpenBSD, which might not come as a surprise. All my laptops run OpenBSD, my workstation at work does, and the vast majority of our servers, routers, firewalls etc run OpenBSD as well. The base system covers a lot of my needs already – webservers are obviously important for my work, all newer setups are on our base nginx, some older ones still on our forked Apache. mysql plays an important role, and unfortunately OpenLDAP as well. Almost all hosts run symon (auto-configured) and most also use femail. LaTeX is used for all documents that we produce.

On the desktop side, I use mutt for email, and both Firefox and Chromium for the web, though the latter is foremost a TweetDeck container. mupdf for most PDFs. I fortunately don’t need an office suite. For my presentations I use MagicPoint.

Why did you decide to use your particular operating system(s) of choice?
In the late 90s we had a bad DoS attack against a webserver running linux, which behaved poorly. I had the attack recorded and replayed it against a couple of other operating systems. FreeBSD behaved well, OpenBSD much better, and since I liked what I saw (I hadn’t looked at OpenBSD really before) that’s what I picked and stayed with.

Today, the choice is easy. OpenBSD is a good fit for almost all tasks I am confronted with, and since I am so much involved I can fix issues when I run into them instead of having to wait for a vendor or a project to react (or just hope for it), really understand what’s going on when things don’t work and fix issues properly instead of applying stupid workarounds that last from 12 to noon. The result is a setup that is very reliable and very secure, which in turn means that our monitoring doesn’t drive us nuts by demanding fixes at the worst possible times – and happy customers.

In what manner do you communicate online?
Email and twitter, foremost.

Which folders can be found in your home directory?
Found by whom? None for almost everybody.

Which paper or literature has had the most impact on you?
I’m not really into tech books. The few I have read over the last couple of years were all books I was involved with, as tech reviewer – “The Book of PF” and “Absolute OpenBSD” are to be mentioned here, both excellent books.

For papers & presentations, I cannot pinpoint one. I regularly go to conferences – EuroBSDcon, BSDCan, and AsiaBSDCon are the standard ones – and visit talks that sound interesting, not just “our” ones. They often bring some kind of enlightenment (as do the Q&A / discussions after my own presentations). I often end up reading papers when researching something, but couldn’t point out a specific one.

What has had the greatest positive influence on your efficiency?
Unix :-)

How do you approach the development of a new project?
I think about it for some time, before I write the first line of code. I need to get clear on the structure, break the task down to many small ones. Then get clear on the APIs, including the strictly internal ones, and THEN start coding. Sometimes talking to other developers helps a lot, we frequently use whiteboards.

The worst thing one can do is to sit down and start coding immediately. Spend time on designing your software, don’t just let it happen. Structure is extremely important, breaking down things into smaller, ideally self-contained blocks.

Which programming language do you like working with most?
Depends on the task. For kernels or high-performance network daemons it is C, of course. For things like web applications, where you really want a higher abstraction level, C would be absolutely inappropriate. I frequently use Perl for company stuff, accompanied by some shell code (the latter obviously not for web stuff).

In your opinion, which piece of software should be rewritten from scratch?
That’s a tough one. I do believe in evolution – look where the constant-revolution approach has led the GNU world: gazillions of similar projects, repeating each other’s faults instead of learning from history. The NIH (Not Invented Here) syndrome is one of the biggest problems in the free software world.

That said, there is a point where evolution is not the right approach. When the base is so bad that you end up rewriting everything anyway, you might as well start from scratch. When there is a fundamental design issue, there is barely a way around starting over.

Let me use an example where I was involved: why did I write femail? It is just a little /usr/sbin/sendmail program that doesn’t have a queue but offloads mail immediately to another mail server via SMTP. There is mini-sendmail doing the same thing. Besides it being GPL and thus not free, I was horrified when I looked at the code. The author brags about it being so small in terms of lines of code – which is pretty damn easy if you use ridiculously long lines instead of the usual 80-character limit. The code is outright unreadable, and the lack of proper indentation doesn’t help either. Unreadable means unreviewable, which in turn has almost always meant buggy as hell. We call that “write-only code”. I then found out that it doesn’t even remotely implement the relevant RFCs, just the most common subset – playing fast and loose. Unusable. So I went on and wrote femail from scratch, which I use in hundreds of installs and which apparently has spread quite widely.

femail has been used as the sendmail-compatible command-line interface in OpenSMTPD – a nice example of our approach: look for existing code before starting from scratch; faults already made elsewhere don’t need to be repeated.

What would your ideal setup look like?
Not sure that involves computers at all…


Self hosted Dropbox killer with SparkleShare and GitLab

In this article I will give you a short guide on how to host your own Dropbox alternative.
For this, I will use SparkleShare and GitLab. SparkleShare is based on Git, Git needs SSH, and GitLab needs Ruby.
This is what you need on the server.
That means you will need your own server with a static IP and root access.
The installation of Git, GitLab, or the SparkleShare server is not part of this tutorial, because there are good guides for setting them up.
What I will show you here is how to put them all together to get a nice self-hosted Dropbox alternative.

SparkleShare alone is basically not more than an automation layer for Git repositories. It tracks all changes and commits/pushes/pulls them automatically. That by itself is already cool for hosting your own Dropbox alternative.
Since it is based on Git, you can even use it for existing repositories, or commit, push, and pull manually with Git. However, it uses Git commit messages internally and therefore might not be fully compatible with plain old Git. That means if you push something manually, the repository will remain unimpaired, but a client’s update function probably won’t work for a manual commit/push. A later auto-commit/push by SparkleShare, however, worked properly when I tested it.
A detailed look at the cross use of SparkleShare and POG (plain old Git) will not be part of this article, but may be the topic of another one :D

Git itself claims not to be very good at handling binary data or big files, so SparkleShare is not very good at that either.

You might ask: why should I also use GitLab when SparkleShare alone serves the purpose? The answer is simple:
Dropbox can handle multiple users sharing the same files, the so-called shared folders.
SparkleShare and Git alone won’t help you here.

Say you have three users: Al, Bud, and Cally. Al, Bud, and Cally share the folder Bundies. Al and Bud share the folder TV.
You then need something to add collaborative structures to your repositories, and here GitLab comes into play.
If you are familiar with GitHub, you won’t have trouble using GitLab.
Here you can define multiple users and repositories where multiple users work together. Say Al, Bud, and Cally are all working on the repository Bundies, and Al and Bud are working on TV. This gives us the shared folders of Dropbox, or at least a hacked version of them.

SparkleShare keeps all repositories that should be tracked in the folder /home/USER/SparkleShare/.
This is well known from Dropbox. However, if Al decides to keep his repository TV outside of the SparkleShare folder, he can use symbolic links (tested on Linux). Let’s say he has the repository TV in /home/Al/Documents/TV;
then he just has to create a symlink in /home/USER/SparkleShare/:

ln -s /home/Al/Documents/TV /home/Al/SparkleShare/TV

SparkleShare does not use any config files here; all folders (or links to folders) in /home/USER/SparkleShare/ will be tracked by SparkleShare.
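If you keep several project folders outside the SparkleShare directory, the symlink trick can be scripted. The following is a hypothetical sketch; the function name and all paths are my own placeholders, not part of SparkleShare:

```shell
# link_into_sparkleshare SRC_DIR SPARKLE_DIR
# creates a symlink in SPARKLE_DIR for every directory in SRC_DIR
# that is not linked yet
link_into_sparkleshare() {
    src="$1"; sparkle="$2"
    mkdir -p "$sparkle"
    for d in "$src"/*/; do
        name=$(basename "$d")
        # note the argument order: ln -s TARGET LINK_NAME
        [ -e "$sparkle/$name" ] || ln -s "$src/$name" "$sparkle/$name"
    done
}
```

Running it again later is harmless: existing links are skipped, so newly created project folders get picked up without touching the old ones.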

SparkleShare provides a client for Windows, Linux, and Mac.

If you are not willing to set up GitLab, a nice feature of SparkleShare is that you can also use GitHub, Bitbucket, and others and sync those repositories.

Installing tmux without root privileges

I have been an enthusiastic tmux user for a long time. Tmux is a terminal multiplexer (like GNU Screen). This means that with tmux you can have several (or very many, or just one) shells in a single terminal, which is incredibly practical in a great number of cases. Among other things, I use tmux for the following purposes:

  • To see several (command-line) programs at the same time. I used to simply open the terminal program several times for this. That approach has its merits, e.g. if you want to use your window manager’s functions to switch between terminals. In the long run, however, tmux has turned out to be more practical, not least because of the following point:
  • To copy and paste text within the same shell or between different shells. Tmux offers wonderful support for this.
  • To scroll back in a shell. Sure, almost all terminal programs can do that too. But with tmux I can also copy text right away. And it also works in terminals that do not run on a graphical interface.
  • To avoid losing my running processes when the terminal is closed for whatever reason. For example, when I log into a server via SSH, the very first thing I do is start a tmux. If the internet connection then drops, the tmux keeps running on the server and I can simply reconnect. The same applies, of course, if you accidentally close your graphical terminal or X11 crashes.

These are all rather small things, but they turn out to be tremendously valuable in daily work. I wouldn’t even know anymore how to get along without my tmux.

Now, tmux is not necessarily available on all machines you have to deal with. The machines in the Linux pool at Uni Ulm, for example, do not have tmux installed. Since I absolutely wanted one, I decided to simply compile it myself. This is not entirely trivial, as tmux has a few dependencies. During my research I came across a script that installs tmux locally, so that no root privileges are required. Many thanks to the author for this help! I did not run the script in its entirety, but applied individual commands from it in modified form, since I had already done some of the steps myself. So it also works wonderfully as a reference.

That’s it for today.
See you soon,

Simple cURL based activity monitor for websites

I wrote a simple activity monitor to notify me about changes on certain websites. The script is executed as a cronjob on a dedicated server and regularly fetches certain URIs. It then runs diff against the last file fetched from that URI. If there is a difference, it is mailed to me. To describe some use cases:

  • A product in an online shop is out of stock; I want to be notified when it is available again.
  • University lectures: I want to be notified when news/exercises/etc. are put on the lecture website.
  • Usenet search: I want to be notified when certain files are available and can be found via a Usenet search site.
  • Event registration: I monitored a Barcamp website to get notified once the registration was available.
  • Monitor changes on certain wiki pages.
  • Regularly check if something can be ordered now.
  • Monitor changes to a certain Etherpad instance.
  • Monitor changes on databases. Some projects offer an HTTP API (e.g. OpenStreetMap). Regularly exporting the database file and running diffs against it shows changes.
  • Monitor newly created mailing lists via the lists interface offered by our university.
  • Monitor the delivery status of packages by fetching the transport provider’s tracking page.

Generally, the idea is simply to monitor content changes on HTTP resources, and thus the script can easily be used for anything that supports HTTP. A simple monitor is set up like this:
./ "" ""

If you want to monitor only a certain part of a website, you can use a preprocess file to filter out content you don’t want to monitor:
./ "" "" "./preprocess/"

I have released the project under an MIT license via GitHub.
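For illustration, here is a minimal sketch of how such a fetch-diff-notify monitor can be built. This is not the released script; the function name, the state directory, and the use of echo instead of the actual mail call are my own assumptions:

```shell
#!/bin/sh
# Minimal sketch of a cURL-based change monitor: fetch a URL, run an
# optional preprocess filter, diff against the previously fetched copy,
# and report changes. State lives under $STATE_DIR.
STATE_DIR="${STATE_DIR:-./monitor-state}"

monitor_url() {
    url="$1"; mailto="$2"; filter="${3:-cat}"
    mkdir -p "$STATE_DIR"
    key=$(printf '%s' "$url" | cksum | cut -d ' ' -f 1)
    last="$STATE_DIR/$key.last"
    curr="$STATE_DIR/$key.curr"

    # fetch and optionally preprocess (curl also understands file:// URLs)
    curl -s "$url" | $filter > "$curr"

    if [ -f "$last" ] && ! diff -u "$last" "$curr" > "$STATE_DIR/$key.diff"; then
        # change detected -- a real script would mail the diff, e.g.:
        #   mail -s "Change: $url" "$mailto" < "$STATE_DIR/$key.diff"
        echo "change detected for $url (diff in $STATE_DIR/$key.diff)"
    fi
    mv "$curr" "$last"
}
```

Run from cron, each invocation compares the freshly fetched page against the stored copy from the previous run, so only actual content changes produce a notification.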

OpenCityCamp 2013 in Ulm

What can be done with government data when it is openly accessible to everyone? Which applications, visualizations, and apps can emerge from it? How can these help make politics more transparent, create added value for citizens’ everyday lives, and enable entirely new forms of journalism?

As in the previous year, the datalove university group is once again organizing a barcamp around these questions at Uni Ulm. In addition to talks/workshops on open data, visualizations, and applications, there will be plenty of wild brainstorming, conceptualizing, and prototyping.

The event takes place on June 8 and 9, 2013 on the premises of Uni Ulm. More information can be found at Registration is possible via

Benjamin Erb [] has been studying Media Informatics since 2006 and is particularly interested in Java, web technologies, ubiquitous computing, cloud computing, distributed systems, and information design.

Raimar Wagner has been studying Computer Science with medicine as an application subject since 2005 and is interested in C++ STL, Boost & Qt programming, scientific visualization, computer vision, and parallel computing concepts.

David Langer has been studying Media Informatics since 2006 and is interested in web development, jQuery, business process management, and Java.

Sebastian Schimmel has been studying Computer Science with medicine as an application subject since 2006 and is interested in low-level hardware aspects, robotics, webOS, C/C++, and UNIX/Linux.

Timo Müller has been studying Media Informatics since 2006. He is interested above all in mobile and ubiquitous computing, systems-level development, distributed systems, and computer vision.

Achim Strauß has been studying Media Informatics since 2006. His interests lie in human-computer interaction as well as web development and UNIX/Linux.

Tobias Schlecht has been studying Media Informatics since 2006 and is mainly interested in software engineering, model-driven architecture, requirements engineering, usability engineering, web technologies, UML2, and Java.

Fabian Groh has been studying Media Informatics since 2006. His areas of interest are computer graphics, computer vision, computational photography, and ubiquitous computing.

Matthias Matousek has been studying Media Informatics since 2007 and is especially interested in scripting languages, real-time systems, and communication.

Michael Müller [] has been studying Media Informatics since 2009. He is interested above all in web technologies, ubiquitous computing, user interfaces, UNIX, and creative coding.

Falco Nogatz [] has been studying Computer Science with mathematics as an application subject since 2010. He is interested in web technologies, programming paradigms, and theoretical foundations.
