Impact has a new homepage

My old employer, Impact Computing, has a new homepage. It looks excellent, with nice photos of the products I helped to develop. So I have to update the link on my work website to

impact it startpage

Funnily enough, the picture was taken at the actual office; the rooms are about 50m to the right (I think), and the people depicted are on the bridge between two skyscrapers. I used that bridge to get from the office to the coffee machine. Excellent view.

Wishing all the best to Impact-It and their new product, EDOCTA. The good old PORTACON still contains my handwriting, and it sells well.

Silly Walks Flashmob

FlashMob Ministry of Silly Walks
Come on Wednesday, 13.02.2008,
punctually at 17:45,
to the bus stop Rathaus.

Pay attention to a well-groomed appearance (e.g. suit, briefcase, bowler hat, …). We will perform the Silly Walk until 17:55 and then disappear again.

For inspiration:
Monty Python’s Ministry of Silly Walks

Please spread this call via e-mail, SMS, phone, …

What should you keep in mind? (from
1. If you go, then join in! Don't just stand there and
2. Only show up at the agreed time (punctually)!
3. Follow the given instructions exactly!
4. The more people know about it, the bigger the community grows. So: spread the word!
5. No bodily harm, property damage, insults, etc.! Flash mobbers
are peaceful and want to have fun. And there is a good reputation to
6. Leave the "action area" quickly and go your own way.
7. If someone asks what this is or who organized it, answer
that you are just there spontaneously.

See you then!
The FlashMobbers.

Author needed for German Article on Semantic Web

The German magazine is covering the topic "next generation web" in an upcoming issue and is looking for an author on the topic of the Semantic Web.

The publisher asked me whether I would write an article myself, but unfortunately I currently have too little time (we have a project review coming up soon) to write a good one. But one or the other reader is an author himself.

Concretely, four pages are planned for the article; one page holds 4,000 characters plus two pictures, so 16,000 to 17,000 characters and 8 to 10 pictures in total. The deadline is 25 February 2008. The publisher pays an honorarium for the text, not to be confused with scientific publications.

If you are a Semantic Web expert and have already published articles, please contact the editor-in-chief Felix Schrader directly; contact details at createordie. As always, it is about the public view of the Semantic Web, so it is important to paint a good picture.

In my own interest: selling Netrunner cards, buying a lens

Again, it's time to let go. After playing the good old Netrunner game from time to time, but not often enough, I want to pass it on. Netrunner is a trading card game designed by Richard Garfield, the creator of Magic: The Gathering. It is full of insider gags from classic cyberpunk (Neuromancer, etc.). Words like "Wilson" or "Chiba" will trigger the right synapses. One player is the evil corporation trying to make big money; the other is a witty hacker trying to steal information from the company by breaking into their network.


My favorite card is “Fortress Architects” because of its text:
“You want us to build that? Not even God has the money to afford that!”

“You’re working for Saburo Arasaka, not God.”

OK, I will keep my favorite cards and enough for two players to gamble from time to time, but I will sell the worthy rare cards (and the others) on ebay. If you are interested in one or the other card, or more, buy them.
At the moment, I am the only one selling Netrunner on ebay.
All revenue goes into a "good-mood" cause: last year I bought myself a Nintendo Wii, this year it's going to be a nice Sigma 30mm f/1.4 lens for my Canon EOS, because my party pics need more light.

Google Data API and GData

Since 2006, Google has been collecting API programming interfaces in the Google Data (GData) project. At their website, you find links to Google Docs, Calendar, Spreadsheets, YouTube, and more.

It is a one-stop place to find interfaces for the various Google services. For Semantic Web developers, it is also a good overview of how Google shapes the interfaces to its web-based applications. Get inspired by the pros.

Especially the GData protocol and data format are worth a look. It is a generic API for getting and querying data, based on RSS 2.0 and Atom.
In the GData reference you find a description of the Atom extensions and a simple query format extending them.

Assuming a feed is hosted at the URI, then elements within the feed can be queried with the following URI:

A kind of "easy-going SPARQL".
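To make the idea concrete, the GData query parameters are plain URL parameters. A minimal sketch follows; the feed URI is a made-up placeholder (the real one is elided above), while `q`, `max-results`, and `updated-min` are standard GData query parameters:

```python
from urllib.parse import urlencode

# Hypothetical feed URI, standing in for the elided one above.
FEED = "http://www.google.com/calendar/feeds/jo@gmail.com/private/full"

def gdata_query(feed, text=None, max_results=None, updated_min=None):
    """Compose a GData query URI from the standard query parameters."""
    params = {}
    if text is not None:
        params["q"] = text                   # full-text query
    if max_results is not None:
        params["max-results"] = max_results  # page size
    if updated_min is not None:
        params["updated-min"] = updated_min  # RFC 3339 timestamp
    return feed + "?" + urlencode(params) if params else feed

print(gdata_query(FEED, text="meeting", max_results=10))
```

The result of such a query is again an Atom feed, containing only the matching entries.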

In their own words:
The Google data APIs provide a simple standard protocol for reading and writing data on the web.

These APIs use either of two standard XML-based syndication formats: Atom or RSS. They also have a feed-publishing system that consists of the Atom publishing protocol plus some extensions (using Atom’s standard extension model) for handling queries.

Many Google services support the Google data API protocol.
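Since GData responses are plain Atom, reading a feed boils down to ordinary XML handling. Here is a minimal sketch using only the Python standard library; the feed content is a made-up stand-in for the body of a real GData HTTP response:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # the Atom XML namespace

# Made-up Atom feed, standing in for a real GData response body.
feed_xml = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>example feed</title>
  <entry><title>first entry</title><id>urn:example:1</id></entry>
  <entry><title>second entry</title><id>urn:example:2</id></entry>
</feed>"""

def entry_titles(xml_text):
    """Return the titles of all entries in an Atom feed document."""
    root = ET.fromstring(xml_text)
    return [e.findtext(ATOM + "title") for e in root.findall(ATOM + "entry")]

print(entry_titles(feed_xml))  # ['first entry', 'second entry']
```

The same loop works on any service that speaks the protocol; only the feed URI changes.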

Combining Rules with SPARQL

A recent blog post by Dan Brickley reminded me that we have a Jena rule engine augmented with SPARQL gathering dust on our shelves. It's years old, but may be interesting for you.

We augmented Jena's rules by adding SPARQL.
Passing parameters is easy: you just use the variables of the rules. Within Jena rules, you can always express graphs using N-Triple axioms, so it is also possible to write RDF files.
Only caveat: no quads.

The code is in this folder:

Download SVN URI:


Here is a snippet of that documentation for you:


After the results have been gathered, inference rules are evaluated against them.
This means that you can define rules by which new information is generated,
based on a declarative syntax described in the
Jena Rule Engine doc.
An example file for these rules can be found in the source, best directly in SVN here:
You can use the existing Jena rules and a special rule that was created by Leo Sauermann to
load additional triples from the gnowsis. This query-rule is defined as follows:

# Load triples from the search storage by a triple pattern.
# The search storage is crawled by gnowsis IF you enabled it in the
# configuration and have crawled a few datasources. If not, this query
# will return nothing.

queryStorage(?subject, ?predicate, ?object, 'search')

# ?subject, ?predicate, ?object: a triple pattern. 
# Leave one of them empty (= an unbound variable like ?_x) and it will try to match 
# the empty slot as a wildcard. The variables are not bound in the pattern and
# cannot be used in the same rule. You have to write additional rules to work
# on the queried triples. 

usage example:
# load all project members and managers
(?project rdf:type org:Project) ->
queryStorage(?project, org:containsMember, ?_y, 'search'),
queryStorage(?project, org:managedBy, ?_z, 'search').

If you want to bind the variables and use them: it is not possible. See the
statement of the Jena developers
about this. But this is not a big problem; you can work around it easily.

Debugging Inference

If you want to tweak your inference rules and don’t want to have gnowsis run the query at all times,
you can use our built-in inference debugging tricks.

  • first: when you run the query for debugging, click the ‘rerun only rules’ link at the bottom of your search
  • second: open the inference file by clicking ‘edit rules file’
    (also found at the bottom of each search result)

The first step brings you into a query mode, where pressing "reload" in the browser
just re-runs the inference and the clustering, but not the search itself. This speeds up
your debugging of inference rules. You will only spot the difference in the address bar
of the browser, which now contains something like http://…/retrieval?cmd=runrules&query=YourQuery

Also, note that syntax errors in your inference code will be logged to the gnowsis
logging system. This is either the message window, pane ‘org.gnowsis.retrieval.RetrievalService’
or your file logging in ~/.gnowsis/data/… .
You will not see syntax errors in your query results, sorry.

Inference and SPARQL combined

SPARQL reference

You can also use SPARQL queries to refine and expand the search results.
The basic syntax is to run a SPARQL query in the head of a rule; the first argument
is the query, escaped with ', and the following arguments are variables that will
be used in the query. The passed variables are interpreted as named variables
in the SPARQL query, named ?x1, ?x2, ?x3, etc.
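To make the ?x1, ?x2 convention concrete, here is a tiny Python sketch of the substitution idea. This is an illustration only, not the actual builtin, which binds values inside the Jena engine rather than by string replacement:

```python
# Illustration of the querySparql binding convention: the n-th extra
# argument replaces the placeholder variable ?xn in the query string.
# (Naive string replacement; the real builtin binds values in the engine.)
def bind_params(query, *args):
    for i, value in enumerate(args, start=1):
        query = query.replace("?x%d" % i, value)
    return query

q = "CONSTRUCT { ?x1 rdfs:label ?l } WHERE { ?x1 rdfs:label ?l }"
print(bind_params(q, "<http://example.org/project42>"))
```

So a rule argument like ?c ends up wherever ?x1 appears, in both the CONSTRUCT template and the WHERE pattern.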

Example for querySparql:

(?a ?b ?c) -> querySparql('
 PREFIX rdfs:    <>
 CONSTRUCT   { ?x1 rdfs:label ?label }
 WHERE       { ?x1 rdfs:label ?label }
', ?c).

The variable named ?x1 will be replaced with the value of ?c.

Note the following tips:

  • literals are escaped using the \’text\’ markup.
  • All arguments passed after the query will be bound into the query using names ?x1, ?x2, …
  • querySparql can only be used in the head of rules.
  • Attention: if you are querying the gnowsis biggraph, you have to add the graph-name to your
    sparql queries.
  • Try out your queries on the debug interface before you use them.
  • Only ‘construct’ queries are supported, not select or describe.
  • Namespace prefixes: inside the SPARQL query, you can use the namespace prefixes defined in the rule file

An example is given now; the task here is to retrieve the members of a project if a project
was in the result.

#note that these namespace prefixes are available in the sparql query
@prefix skos: <>.
@prefix rdfs: <>.
@prefix rdf:  <>.
@prefix owl:  <>.
@prefix retrieve: <>.
@prefix tag: <>.

# get members with SPARQL
# note the special namespace defined inside
(?hit retrieve:item ?project),
(?project rdf:type org:Project) -> querySparql('
PREFIX org: <>
CONSTRUCT {
  ?x1 org:containsMember ?m. ?m rdfs:label ?labelm. ?m rdf:type ?typem.
  ?x1 tag:todoRelateHitTo _:hit .
  _:hit rdf:type retrieve:InferedHit .
  _:hit retrieve:item ?m .
  _:hit retrieve:textSnippet \'member of project\'.
}
WHERE       { graph ?g {
  ?x1 org:containsMember ?m. ?m rdfs:label ?labelm. ?m rdf:type ?typem.
} }
', ?project).

# make the missing relations to the hits
# this is needed because you cannot pass blank nodes into the SPARQL engine.
(?item tag:todoRelateHitTo ?tohit),
(?hit retrieve:item ?item) ->
(?hit retrieve:related ?tohit).
querySparql runs a SPARQL query, replacing the placeholders (?x1, ?x2, ...) in the
query with the passed arguments; the arguments have to be bound. Only 'construct'
queries are supported.

# retrieve a test sparql
(?x rdf:type ?type)
-> querySparql('
CONSTRUCT   { ?x1 rdfs:label ?label }
WHERE       { ?x1 rdfs:label ?label. FILTER (?x1 = foaf:name) }
').

# retrieve with param - bind ?ont to the foaf ontology, it is called x1 in the query
(?ont rdf:type owl:Ontology)
-> querySparql('
CONSTRUCT   { ?p rdfs:isDefinedBy ?x1. ?p rdfs:label ?label. }
WHERE       { ?p rdfs:isDefinedBy ?x1. ?p rdfs:label ?label. }
', ?ont).

# example for gnowsis. note the use of "....{ graph ?g { ...."
(?x rdf:type ?type)
-> querySparql('
CONSTRUCT   { ?x1 rdfs:label ?label }
WHERE       { graph ?g {  ?x1 rdfs:label ?label. FILTER (?x1 = foaf:name) } }
').

Article in "Entwickler Magazin" 2008.1

For the German audience: "Entwickler Magazin" has published an article by me about the Semantic Desktop in issue 2008.1.

Cover of "Entwickler" issue 2008.1

In four pages it explains the foundations of the Semantic Web and the Semantic Desktop, and gives a few links to projects.
Available for € 6.50 at newsstands in Germany/Austria/Switzerland; one of the 20k copies can soon be yours.

The Semantic Desktop turns the PC into a thinking tool. wink wink
We have enough space to store all our e-mails, MP3s, photos, videos, and documents on the desktop. The problem is managing this information. File systems offer only rigid hierarchies. Tim Berners-Lee and the W3C have already thought ahead: people think in concepts, and with RDF and ontologies the Semantic Web offers a standard for annotation and search built on HTTP, URIs, and HTML. The Semantic Desktop thereby moves operating systems and applications away from files, up to the level of thoughts.

To get there, first a little crash course on the Semantic Web, …

brings together Google, Plaxo, and Facebook

A storm in a teacup that is gathering more and more momentum welcomes new members. Before, they were heating up the storm; now individual corporate representatives of Google, Plaxo, and Facebook are sitting at one virtual table.

As announced here, blogged here, and then slashdotted, people working for some interesting ventures have today joined

In the last weeks, for those who missed the event: Robert Scoble used an unreleased app, "Pulse", from Plaxo to gather his contacts from Facebook, and got blocked by Facebook for this. He contacted them and after a while was back in, but the problem is obvious: social websites, and the companies running them, have one capital asset on their books: the data created by us. As "we" were "Person of the Year" in Time, the data of such a celebrity is worth a lot.

Scoble joined (DP) and blogged this, which made me curious to also look at their site and add a few notes about how RDF and the Semantic Web may help them out instead of creating their own standards.
Now that people working for Plaxo, Google, and Facebook have joined the already impressive list of individuals at dataportability, they can really talk about the mission:
To put all existing technologies and initiatives in context to create a reference design for end-to-end Data Portability. To promote that design to the developer, vendor and end-user community.

My biggest fear was that the standards created by DP would be useless, as no big companies were present on their board (unlike the W3C, where nearly all big companies are on board). This has changed now, and I would expect that the effort is indeed relevant to the future of the web.