A nice little gadget from mikecpeck: FlickrPapr
I just used it to create the new header design for my blog.

old but still good.
personal weblog of Leo Sauermann
The Cypher™ alpha release is a program which generates the .rdf (RDF graph) and .serql (SeRQL query) representation of a natural language input. With robust definition languages, Cypher’s grammar and lexicon can quickly and easily be extended to process highly complex sentences and phrases of any natural language, and can cover any vocabulary. Equipped with Cypher, programmers can now begin building next generation semantic web applications that harness what is already the most widely used tool known to man – natural language.
So, a company is going on the market! Hooray. Best wishes to them, and I sure want to check whether that baby can help gnowsis.
In the latest SVN version of gnowsis, we have various improvements on handling ontologies, validating them and so on.
All information about that is here:
Adding, removing, and updating ontologies is implemented in the PimoService. A convenient interface to these functions is implemented in the web GUI:
A list of ontologies that work with gnowsis is at DomainOntologies.
The implementation of Domain Ontologies is done using named graphs in Sesame.
Read on at Named graphs in Pimo.
Domain ontologies are added/deleted/updated using methods of the PimoService.
You can interact directly with the triples of an ontology in the store, but then you have to take care of inference and the correct context yourself.
The semantics of the PIMO language allow us to verify the integrity of the data. In normal RDF/S semantics, verification is not possible. For example, if the domain of the property knows is set to the class Person, and the property is then used on an instance Rome Business Plan of class Document, RDF/S inference creates the new information that the Document is also a Person. In the PIMO language, domain and range restrictions are instead used to validate the data.
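To make the difference concrete, here is a minimal Python sketch (not gnowsis code; the tiny tuple-based triple store and the function names are invented for illustration). Under RDF/S semantics, using a property entails its domain class as a new type statement; a PIMO-style check instead reports a violation:

```python
# Toy triple store: a set of (subject, predicate, object) tuples.
triples = {
    ("knows", "rdfs:domain", "Person"),
    ("RomeBusinessPlan", "rdf:type", "Document"),
    ("RomeBusinessPlan", "knows", "Paul"),
}

def rdfs_domain_inference(triples):
    """RDF/S semantics: using a property ENTAILS its domain class."""
    domains = {s: o for s, p, o in triples if p == "rdfs:domain"}
    inferred = set(triples)
    for s, p, o in triples:
        if p in domains:
            # RDF/S quietly adds the new type instead of complaining.
            inferred.add((s, "rdf:type", domains[p]))
    return inferred

def pimo_validate(triples):
    """PIMO-style semantics: using a property on a subject whose
    declared type differs from the domain is a validation error.
    (Simplification: no subclass reasoning is performed here.)"""
    domains = {s: o for s, p, o in triples if p == "rdfs:domain"}
    types = {s: o for s, p, o in triples if p == "rdf:type"}
    errors = []
    for s, p, o in triples:
        if p in domains and types.get(s) != domains[p]:
            errors.append(f"{s} used with {p} but is not a {domains[p]}")
    return errors
```

On the example data, RDF/S inference silently adds the triple stating that Rome Business Plan is a Person, while the PIMO-style check reports the misuse of knows as an error.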
The PIMO is checked using a Java object called PimoChecker, which encapsulates a Jena reasoner to do the checking and also performs some additional tricks:
The following rules describe what is validated in the PIMO; a formal description is given in the gnowsis implementation’s PIMO rule file.
The above rules check for semantic modeling errors, i.e. errors made by programmers or human users.
The following rules check whether the inference engine correctly created the closure of the model:
The rules only work when the language constructs and upper ontology are part of the model that is validated. For example, validating Paul’s PIMO is only possible when PIMO-Basic and PIMO-Upper are available to the inference engine; otherwise the definitions of the basic classes and properties are missing.

The validation can be used to restrict updates to the data model so that only valid data can be stored in the database. Alternatively, the model can be validated on a regular basis after changes are made. In the gnowsis prototype, validation was activated during automatic tests of the system, to verify that the software generates valid data in different situations.

Ontologies are also validated during import to the ontology store. Before validating a new ontology, its import declarations have to be satisfied. The test begins by building a temporary ontology model, to which first the ontology under test and then all imported ontologies are added. If an import cannot be satisfied because the required ontology is not already part of the system, either the missing part can be fetched from the internet using the ontology identifier as a URL, or the user can be prompted to import the missing part first. When all imports are satisfied, the new ontology under test is validated and added to the system. A common mistake at this point is to omit the PIMO-Basic and PIMO-Upper import declarations.

Through this strict testing of ontologies, conceptual errors show up at an early stage. Strict usage of import declarations makes dependencies between ontologies explicit, whereas current best practice in the RDF/S-based semantic web community leaves many imports implicit and often unresolved.
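The import-checking procedure described above can be sketched roughly as follows (a hypothetical simplification in Python, not the actual gnowsis implementation; the class and method names are invented for illustration):

```python
class OntologyStore:
    """Toy ontology store: maps ontology id -> (imports, triples)."""

    def __init__(self):
        self.ontologies = {}

    def add(self, onto_id, imports, triples, validate):
        # Refuse the ontology if any (transitive) import is missing.
        # A real system could instead fetch the missing part via its
        # URI, or prompt the user to import it first.
        missing = self._unsatisfied(imports)
        if missing:
            raise ValueError(f"unsatisfied imports: {sorted(missing)}")
        # Build a temporary model: the ontology under test plus all
        # imported ontologies, then validate it (e.g. PIMO checker).
        model = set(triples)
        for imp in self._closure(imports):
            model |= self.ontologies[imp][1]
        validate(model)
        self.ontologies[onto_id] = (set(imports), set(triples))

    def _unsatisfied(self, imports):
        """Collect all transitively imported ids not in the store."""
        seen, todo, missing = set(), list(imports), set()
        while todo:
            imp = todo.pop()
            if imp in seen:
                continue
            seen.add(imp)
            if imp not in self.ontologies:
                missing.add(imp)
            else:
                todo.extend(self.ontologies[imp][0])
        return missing

    def _closure(self, imports):
        """All transitively imported ids that are in the store."""
        seen, todo = set(), list(imports)
        while todo:
            imp = todo.pop()
            if imp in seen or imp not in self.ontologies:
                continue
            seen.add(imp)
            todo.extend(self.ontologies[imp][0])
        return seen
```

Here an unsatisfied import aborts the addition with an error, which mirrors how omitting the PIMO-Basic and PIMO-Upper declarations would surface immediately rather than as a silent conceptual error later.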
I will be giving a seminar on
The Nepomuk Project – about the upcoming Social Semantic Desktop Platform
Date: Thursday September 07, 2006 at 16:00:00
Location: EJ228 (Directions)
Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025-3493
webpage: www.ai.sri.com/seminars/detail.php?id=159
Please come to this seminar; I want to knit new connections between the European and Californian Semantic Web scenes. If you come, send a short note to leo.sauermann@dfki.de and perhaps to Neil Yorke-Smith (nysmith workingat AI.SRI.COM), who is organizing the event together with Jack Park.
Abstract
Different research institutes are working on a vision titled “Semantic Desktop”, a semantically enhanced desktop computer that allows us to access semantic web data and desktop data in a uniform way. The European Union Integrated Project NEPOMUK (http://nepomuk.semanticdesktop.org) started in 2006 and intends to realize and deploy a comprehensive solution – methods, data structures, and a set of tools – for extending the personal computer into a collaborative environment, which improves the state of the art in online collaboration and personal data management and augments the intellect of people by providing and organizing information created by single or group efforts. NEPOMUK brings together researchers, industrial software developers, and representative industrial users. In this talk you will get an introduction to the theory behind the Semantic Desktop, ontologies, databases, user interfaces, and projects that work on this topic. Details about the current open-source implementations are presented and a demo is given. The lecture will finish with a discussion, in which similarities and differences to the OpenIRIS project by SRI will be an important question.
Bio for Leo Sauermann
Leo Sauermann studied information science at the Vienna University of Technology. Under the project name “gnowsis” he merged Personal Information Management with Semantic Web technologies, resulting in a master’s thesis on “Using Semantic Web technologies to build a Semantic Desktop”. Working as a researcher at the DFKI since 2004, he continued the work and now maintains the associated open-source project gnowsis. His research focus is on the Semantic Web and its use in Knowledge Management. In autumn 2003 he started giving talks about his work, and he publishes frequently on the topic. From 1998 to 2002 he worked in several small software companies, including as lead architect at Impact Business Computing, developing mobile CRM solutions. He is an experienced programmer in both Delphi and Java. At the moment he is working on the EU integrated project Nepomuk.
Note for Visitors to SRI
Please arrive at least 10 minutes early in order to sign in and be escorted to the conference room. SRI is located at 333 Ravenswood Avenue in Menlo Park. Visitors may park in the visitors lot in front of Building E, and should call extension 2592 to be escorted to the meeting room. Detailed directions to SRI, as well as maps, are available from the Visiting AIC web page.
The SIMILE team has published another nice reusable component: a timeline visualization.
The advantage of this visualization is that it is programmed completely in JavaScript and is reusable like the Google Maps API.
In their words:
Timeline is a DHTML-based AJAXy widget for visualizing time-based events. It is like Google Maps for time-based information.
An introduction on how to program it is here:
create-timelines
A first glance at it convinces me: this looks easy to use. They published the code in an SVN repository, so packaging it with something else is possible.
Danny Ayers picked up the idea and blogged a nice vision of how to combine geo-data with photo annotation and timelines; read here.
Hello Kaiserslautern folks,
just a quick note, and as always far too late:
There is no “The Great Escape” at the Glockencafe in Kaiserslautern today!
The football World Cup offers enough distraction and intellectual nourishment.
The next Great Escape is on Monday, 7 August 2006 at the Glockencafe, 8:00 in the evening!
I visited one of the three major Mozart exhibitions in Vienna, Mozart – Experiment Aufklärung, put together by the DaPonte Institute of Herbert Lachmayer, a relative of mine. That visit was on 25 June 2006.
So, Mozart was part of the Age of Enlightenment, and the opera “The Magic Flute / Die Zauberflöte” is a multi-layered piece of information: it was part of its time, bringing new ways of storytelling, along with its Freemason hints. So, the setting of the Queen of the Night vs. Sarastro, the temple chief of geek wisdom, is perfect for Burning Man.
So, to write down this idea:
If we get more artist people in, it may be perfect. There are performers like Marisa Lenhardt or Natalie Wilson, who both go to Burning Man and have sung the Queen of the Night.
Things we have to take with us:
Sites we have to find to re-play Zauberflöte:
I already collected some pictures to go with it, more will come:
www.flickr.com
Some say WinFS is dead, but who knows; according to a blog post by some Microsoft dude, plans are changing.
So if you want a semantic file system, join SemFS!
yes, we hope that SemFS will integrate with Nepomuk.
Tim Berners-Lee blogged on Net Neutrality; click the links, get informed:
blog by Tim Berners-Lee
video message
here is the whole message:
When I invented the Web, I didn’t have to ask anyone’s permission.
Now, hundreds of millions of people are using it freely.
I am worried that that is going to end in the USA.
I blogged on net neutrality before, and so did a lot of other people.
(see e.g. Danny Weitzner, SaveTheInternet.com, etc.)
Since then, some telecommunications companies spent a lot of money
on public relations and TV ads, and the US House seems to have
wavered from the path of preserving net neutrality. There has been
some misinformation spread about. So here are some clarifications.
(real video; MPEGs to come)
Net neutrality is this:
If I pay to connect to the Net with
a certain quality of service, and you pay to connect with that
or greater quality of service, then we can communicate at that level.
That’s all. It’s up to the ISPs to make sure they interoperate so that
that happens.
Net Neutrality is NOT asking for the internet for free.
Net Neutrality is NOT saying that one shouldn’t pay more money for high quality of service.
We always have, and we always will.
There have been suggestions that we don’t need legislation because
we haven’t had it. These are nonsense, because in fact we have had
net neutrality in the past — it is only recently that real explicit
threats have occurred.
Control of information is hugely powerful.
In the US, the threat is that companies control what I can access for commercial reasons.
(In China, control is by the government for political reasons.)
There is a very strong short-term incentive for a company
to grab control of TV distribution over the Internet
even though it is against the long-term interests of the industry.
Yes, regulation to keep the Internet open is regulation.
And mostly, the Internet thrives on lack of regulation.
But some basic values have to be preserved.
For example, the market system depends on the rule that you can’t photocopy money.
Democracy depends on freedom of speech.
Freedom of connection, with any application, to any party, is
the fundamental social basis of the Internet, and, now, the society based on it.
Let’s see whether the United States is capable of acting according to its
important values, or whether it is, as so many people are saying,
run by the misguided short-term interests of large corporations.
I hope that Congress can protect net neutrality, so I can
continue to innovate in the internet space. I want to
see the explosion of innovations happening out there on the Web,
so diverse and so exciting, continue unabated.