a useless search engine that looks pretty.
then again, I could just go search for “axe feather”… and then three girls require my attention: Ms Dewey, the feather, and the lady who beats me if I don’t close the laptop pretty soon.
personal weblog of Leo Sauermann
I will be blogging about my Semantic Web PhD for the next months, until I am finished. First, you’ll learn what I did and am doing, and perhaps you can copy something for your own thesis or point me to information I missed. Critique, positive and negative, is warmly welcome.
The first part of my dissertation will be about integrating data into the Semantic Desktop. The problem at hand is that we face data from different sources (files, e-mail, websites) and in different formats (PDF, e-mails, JPG, perhaps some RDF in a FOAF file, or an iCalendar file), and these can change frequently or never. Faced with all this lovely data that can be of use to the user, we are eager to represent it as RDF. There are two main problems when transforming data to RDF: which vocabulary to use for the data, and which URI to use to identify each resource.
In my experience, the second problem is far harder to solve. While it is quite easy to find an RDF(S) vocabulary for e-mails, MP3s, or people (and if you don’t find one on schemaweb.info, btw the only website I never had to bookmark because it’s so obvious, you make up the vocabulary yourself), finding the correct URI to identify the resource can be a longer task.
The trickiest case is identifying files, or things crawled from a bigger database like the Thunderbird address book. For files, there are several possibilities, all of which have been used by me or others.
You can skip this section about typical URIs for files; it’s just an example of what implications the URI may have.
So, after this excursion we know that it’s not straightforward to identify files with a URI. We tried the first two approaches, but I am not happy with them; perhaps I will blog about the latest findings regarding URIs sometime.
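To make the trade-offs concrete, here is a minimal sketch (my own illustration, not gnowsis code; all names are made up) of two common ways to mint a URI for a file: a location-based URI, which survives edits but breaks when the file moves, and a content-hash URI, which survives moves but changes on every edit.

```java
import java.io.File;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch only: class and method names are mine, not from gnowsis.
public class FileUris {

    // Location-based URI: stable across edits, breaks when the file moves.
    public static String locationUri(File f) {
        return f.toURI().toString();
    }

    // Content-based URI: stable across moves, changes on every edit.
    public static String contentUri(byte[] content) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder("urn:sha1:");
            for (byte b : md.digest(content)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-1 is always available
        }
    }
}
```

Neither choice is “right”: which identifier is correct depends on whether the user thinks of the resource as “the file at this place” or as “this exact content”.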
On with metadata integration. Four years ago I needed a way to extract metadata from MP3s, Microsoft Outlook and other files, so I created something called “File Adapters”. They worked very elegantly: you post a query for, say, the title of an MP3 file and get the answer “Numb”. This was done by analysing the subject URI (file://…) and then invoking the right adapter. The adapter looked at the predicate and extracted only that; very neat.

BUT after two years, around 2004, I realised that I need an index of all data anyway to do cool SPARQL queries, because the question “?x mp3:artist ‘U2′” was not possible; for such queries, you need a central index, like Google Desktop or Mac’s Quicksilver (ahh, I mean Spotlight) has. For this, the adapters are still usable, because they can extract the triples bit by bit. But if you fill the index by crawling anyway, you can simplify the whole thing drastically. That’s what we found out the hard way, by implementing it and seeing that the interested students who helped out had many problems with the complicated adapter stuff but were quite quick at writing crawlers. We have written this up in a paper: Leo Sauermann, Sven Schwarz: Gnowsis Adapter Framework: Treating Structured Data Sources as Virtual RDF Graphs. In Proceedings of the ISWC 2005. (bibtex here)

Shortly after finishing this paper (May 2005?), I came to the conclusion that writing these crawlers is a problem many other people have, so I asked the people from x-friend if they would want to do this together with me, but they didn’t answer. I then contacted the Aduna people, who make Autofocus, and, even better for us, they agreed to cooperate on writing adapters and suggested calling the project Aperture. We looked at what we had done before and merged our approaches, basically taking the code Aduna had and putting it into Aperture.
What we have now is an experiment that showed me that accessing the data live is slower and more complicated than using an index, and the easiest way to fill the index is crawling.
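The difference is easy to see in code. A toy sketch (the names are mine, not the gnowsis or Aperture API): once a crawler has filled a central index of triples, a query that starts from the object value, like “?x mp3:artist ‘U2′”, becomes a simple scan, whereas a live adapter would first need a subject URI to know which adapter to invoke.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a toy in-memory triple index filled by a crawler.
public class TripleIndex {

    static final class Triple {
        final String subject, predicate, object;
        Triple(String s, String p, String o) {
            subject = s; predicate = p; object = o;
        }
    }

    private final List<Triple> triples = new ArrayList<Triple>();

    // A crawler calls this for every triple it extracts from a source.
    public void add(String subject, String predicate, String object) {
        triples.add(new Triple(subject, predicate, object));
    }

    // Answers "?x <predicate> <object>" queries, which a live adapter
    // keyed on the subject URI could not answer at all.
    public List<String> subjectsWith(String predicate, String object) {
        List<String> result = new ArrayList<String>();
        for (Triple t : triples) {
            if (t.predicate.equals(predicate) && t.object.equals(object)) {
                result.add(t.subject);
            }
        }
        return result;
    }
}
```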
The problem that is still unsolved is that the Semantic Web is not designed to be crawled. It should consist of distributed information sources that are accessed through technologies like SPARQL. So at some point in the future we will have to rethink which information should be crawled and which not, because it is already available as a SPARQL endpoint. And then it is going to be tricky to distribute SPARQL queries amongst many such endpoints, but that will surely be solved by our Semantic Web services friends.
it has never been my object to record my dreams, just to realize them.
-man ray-
and that’s why Man Ray is cool and I am not. I just dream all day long or read books.
found here:
http://flickr.com/photos/maggie_le_chat/sets/72057594137050128/
found during a longer travel through flickr while searching for photos to illustrate the PowerPoint slides for this talk I have to give… when you take your time, work can be relaxing.
also, click this pic; I can’t embed it here due to restricted copyright.
A way of social metadata and data integration: tagging by humans.
If you have a photo of somebody holding a camera, that somebody may also have uploaded a photo, which can be tagged onto your photo. Now if you again take a photo of someone else…
Here is a nice example of how such a chain can be started. Note the great idea of tagging the related photos with notes on the camera:
http://flickr.com/photos/anjeve/262175558/
(all rights reserved on that one, cannot blog it directly, click link)
I am in the midst of writing my dissertation about “Information representation on the Semantic Desktop”, something I have been doing every day for three years and which I really like doing.
Thomas Roth-Berghofer has written a nice little story on how he sees doctorate studies at our research company. It’s written in German.
I stumbled into this science business when I found no other way to continue work on gnowsis, and after two years of doing it, I somehow got used to it and learned the “way of the force”, at least a little. Still, the biggest problem is writing. As you may notice, I don’t care much about grammar or cool wording. So it’s a long endeavour, and doing real science is even harder. Coming from the systems engineering side (my diploma was done at the Distributed Systems Group in Vienna), my work focuses on the engineering science: how can we build the Semantic Web? Today I meet my doctorate “supervisor” and colleague Thomas Roth-Berghofer to check my status.
I promise I will blog more frequently about my scientific work from now on. For example, about my various publications.
Cygri posted some websites that show how the Semantic Web may work.
He collects them using the del.icio.us tag “semwebintro”, which I am copying from him, so you will find a list with more contributions (or also yours?) here:
The material is practically oriented and is not bloated by theoretical papers on what wishful thingies you might do someday in the future, given a hypothetical Semantic Web. I like it as it is: showcases, demos, FAQ, TimBL.
We are building the desktop Semantic Web server for Nepomuk at the moment, and I had a look at how to use OSGi to start Java services as SOAP services.
In the end, we will start an RDF database, some ontology matchers, text indexing, data crawling (Aperture and Beagle++) and many other things using this code, so wait a little and you will get a really cool Semantic Desktop platform. If everything works fine, it should be cooler than gnowsis 😉
The code will be open-source in December or January, but if you are really interested, I may bundle this as a zip file for you (it’s not Nepomuk-relevant, it’s only a hassle with Eclipse).
UPDATE (12.10.2006): The odyssey below is really odd. Today Christopher Tuot informed me where the compiled bundles of Knopflerfish are: they are in knopflerfish.org\osgi\jars\axis-osgi\axis-osgi_all-0.1.0.jar. I saw them but didn’t realize they were bundles. If I had just imported this bundle into the Eclipse plugins, it would probably have worked within one hour. Although I still don’t know how to add these precompiled bundles to a development project like ours.
here is my odyssey:
_The plan was to run SOAP services from OSGI, as announced:_
Mikhail has already submitted some service for points one and two – this package might help me:
https://dev.nepomuk.semanticdesktop.org/browser/trunk/java/org.ungoverned.osgi.bundle.http
_Result: It works, checkout the newest NEPOMUK from https://dev.nepomuk.semanticdesktop.org/repos/trunk/java/_
It took me exactly three hours; here are the steps I took.
good to read; sounds exactly like what we need.
ha – the SVN is exactly where the doc came from, that’s easy:
I decide to check out the whole SOAP branch somewhere on my disk, outside Eclipse. I go for the whole package because there is an ANT file inside the parent folder of the sub-packages, which seems to indicate that they all belong together.
svn checkout https://www.knopflerfish.org/svn/knopflerfish.org/trunk/osgi/bundles_opt/soap
it’s 5.81 MB, 195 files, that’s nothing.
at this point, I check if Knopflerfish has a compatible license – they use BSD, ok, no problem here.
I try to run the build.xml in the main soap dir. It fails; it needs the other Knopflerfish dependency “commons-logging”.
PROBLEMS. Ok, after all this rubble, I go for the graphical user interface and just start Knopflerfish to get the HTTP services running. This works, and I can install the HTTP and Axis bundles with some clicks there: knopflerfish.org\osgi>java -jar framework.jar
All that hassle tells me: someone has done this before.
I decide to make new plugins for Eclipse OSGI, using the Eclipse IDE, and copy the sources and manifest files from knopflerfish.
at this point, I realise that the Knopflerfish people use Eclipse to code Knopflerfish, so I install the Eclipse IDE plugin to see what it can do for me:
but I learned a lot.
I found no straightforward way to compile Knopflerfish into plugins that can be used conveniently from inside Eclipse, so I just take the source code of the plugins and make new Eclipse plugins from that, copying the manifest files into the Eclipse manifests.
Ok, until now I have these OSGI bundles in my eclipse:
All of them seem to work; I only get one NullPointerException when starting the axis-soap bundle:
java.lang.NullPointerException at org.knopflerfish.bundle.axis.Activator.setupAxis(Activator.java:109)
This is easy: it wants to find the Axis configuration in the resource resources/axis/server-config.wsdd
I add this to the build.properties of the soap bundle, using the classy graphical editor of Eclipse, which rocks.
bin.includes = META-INF/,\ ... resources/axis/
It seems the Eclipse build works differently from the Knopflerfish build process, so I move the contents of resources to the root of the plug-in; most importantly, resources/axis/… is now axis/…
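For reference, the resulting build.properties of the soap bundle looked roughly like this (a sketch from memory; the source and output entries depend on your plug-in layout):

```
# build.properties of the soap bundle (sketch)
source.. = src/
output.. = bin/
bin.includes = META-INF/,\
               .,\
               axis/
```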
_DONE: We can start Java objects as SOAP services_
Go here to see the started SOAP services:
Go here to see the automatically created WSDL files for our example RDF repository component:
All in all, this took three hours. It was hard, but not impossible. I would reckon it proves that our SOAP-in-OSGi approach can rock so hard the keyboards will fly. No guarantee that everything will work out, but this is the only code you have to write to start a SOAP service now:
// inside your BundleActivator
private BundleContext context;

public void start(BundleContext context) throws Exception {
    this.context = context; // keep the context so registerObject can use it
    // assume RDFRepositoryImpl is your Java object that should be accessible via SOAP
    ServiceReference srRA = registerObject("remoteFW", new RDFRepositoryImpl(context));
}

private ServiceReference registerObject(String name, Object obj) {
    Hashtable ht = new Hashtable();
    // the SOAP bundle exposes every service that carries this property
    ht.put("SOAP.service.name", name);
    return context.registerService(obj.getClass().getName(), obj, ht).getReference();
}
I needed a quick way to enter Unicode in Java source-code strings, and hacked some random HTML bit I found on the internet.
http://www.dfki.uni-kl.de/~sauermann/2006/10/charmap.html
press the magic “Java Escapes” button and you get some escaped äs and ös.
Note: it wrecks code points below \u00ff – it emits escapes like \uff, and you have to pad them to \u00ff yourself.
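What the button does can be sketched in a few lines of Java (my own reconstruction of the idea, not the code behind the page): every non-ASCII character is replaced by a \uXXXX escape, always padded to four hex digits, which is exactly the padding issue the note above is about.

```java
// Sketch: convert a string to an ASCII-safe Java string literal body.
public class JavaEscapes {

    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) {
                sb.append(c);
            } else {
                // always pad to four hex digits: \u00e4, never \ue4
                sb.append(String.format("\\u%04x", (int) c));
            }
        }
        return sb.toString();
    }
}
```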
I wondered how to make the Semantic Web fly, so I wandered around looking for possible deployments of Java-based semweb stuff (many semweb apps are written in Java). While reading about webhosting and possibilities to host services like gnowsis as a web application, stumbling around in the world of Tomcat web-hosters, a discussion with a nice line caught my eye:
As someone said, “java is great for engineering next generation
solutions to enable maximization of developer income by means of enhanced buzzword use”.
Point taken. Naively I wondered who the someone might be and googled for the phrase “java is …”, resulting in an estimated 3,220,000 buzzword bullshitters. Oh, I forgot to quote the quote; searching “java is …” with quotes, it boils down to exactly one someone who said it.
What can you learn? As many people pump-fill their web writings with the above buzzwords, they may be highly paid Java people. I can only say:
Java is great for engineering next generation solutions to enable maximization of developer income by means of enhanced buzzword use.
A very nice GUI demo of a relational browser; it looks similar to TouchGraph but moves very smoothly. Danish Nadeem pointed me to it.