Semantic Network Technologies

Applications

Personal Agent for Mapping Elements that Look Alike - Pamela

This agent is currently in development. A preliminary paper is available.

Second Life Answer Machine - Slammy

Slammy is an interface to our Answer Machine. The Second Life part consists of only a few lines of script.

Slammy can answer simple questions in natural language: basically the what, who and where questions. It uses the native chat box of Second Life (SL) to communicate with the avatars.

A user can click on the activation button and Slammy starts listening for a question from that user. The next input of that user will be treated as a question. Slammy sends the question to an external server (a web application called slam). This application analyses the question and uses a Semantic Network to look for potential answers. The answer(s) are sent back to Slammy, which will display them.
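The sketch below illustrates this round trip in Python. The endpoint URL, the query parameters and the JSON response format are assumptions made for the example; the actual slam web application may work differently.

# Hypothetical sketch of the round trip Slammy makes for one question.
# The endpoint, parameters and response format are assumptions, not the
# real "slam" interface.
import json
import urllib.parse
import urllib.request

SLAM_URL = "https://example.org/slam/answer"   # placeholder URL

def ask_slam(question: str, avatar_key: str) -> list[str]:
    """Send one chat-line question to the slam server and return the answers."""
    params = urllib.parse.urlencode({"q": question, "avatar": avatar_key})
    with urllib.request.urlopen(f"{SLAM_URL}?{params}") as response:
        payload = json.load(response)          # assumed JSON answer list
    return payload.get("answers", [])

# Slammy would then print each answer back into the Second Life chat box.
for answer in ask_slam("Who is the employer of Ronald?", "avatar-1234"):
    print(answer)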

The interface of Slammy is multilingual. A user can, for example, type "Slammy speak German" in the chat box and Slammy will use German to communicate with that user. This works as long as the first word is "Slammy" and the third word is the name of a language (in whatever language, as long as it is known in the Semantic Network), so "Slammy spreche Français" will work as well. When Slammy doesn't have the requested interface language, or the language cannot be recognized, it will use English. More than 7,000 languages are currently available in the network, but at the time of writing Slammy's interface messages exist in English, French, Dutch, German, Spanish and Italian. Because these messages are nodes in the network, adding another interface language only requires adding a "name" in that language to each of these nodes; once the messages are translated this takes about five minutes.
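A minimal sketch of this convention, assuming a small alias table stands in for the Semantic Network lookup of language names (which in the real system are node names):

# Sketch of the "Slammy speak <language>" convention described above.
# The language sets below are simplified assumptions.
INTERFACE_LANGUAGES = {"english", "french", "dutch", "german", "spanish", "italian"}

# In the real system, language names in any language are looked up as node
# names in the Semantic Network; this alias table stands in for that lookup.
LANGUAGE_ALIASES = {"deutsch": "german", "français": "french", "francais": "french",
                    "nederlands": "dutch", "español": "spanish", "italiano": "italian"}

def interface_language(chat_line: str, current: str = "english") -> str:
    """Return the interface language after processing one chat line."""
    words = chat_line.lower().split()
    # The convention: first word "slammy", third word the name of a language.
    if len(words) >= 3 and words[0] == "slammy":
        requested = LANGUAGE_ALIASES.get(words[2], words[2])
        # Fall back to English when the requested interface language is missing.
        return requested if requested in INTERFACE_LANGUAGES else "english"
    return current

print(interface_language("Slammy spreche Français"))   # french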

Through the Second Life Answer Machine we are currently investigating how people perceive looking for information by asking questions in their own natural language.

How does Slammy analyse questions?

First Slammy tries to identify the language in which the question is asked. Any answers found will be given in that language, which doesn't need to be the same as the language used in Slammy's interface. Next some very frequent words are stripped and the remaining part of the question is analysed. The who, what or where part of the question indicates what kind of answer to look for: a person, a value or a location. Slammy then tries to map one or more of the remaining words to the names of kinds of relationships (predicates) or property types (attribute types). If it finds any, it checks whether there are associated models and keeps the ones that match the kind of answer requested. Finally it looks in the Semantic Network for answers that match the remaining models and for which the avatar has sufficient rights. The answers are formulated in the language of the question and have the form "subject name - predicate name - object name" or "subject name - property type name - value".
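The self-contained toy example below walks through these steps in Python. The tiny "network", the stop-word list and the direct matching of remaining words against statements are illustrative assumptions; language detection and the mapping to predicates, attribute types and models are collapsed into a simple lookup for brevity.

# Toy sketch of the analysis steps described above (not the slam implementation).
STOP_WORDS = {"is", "the", "of", "a", "an"}

# Toy Semantic Network: (subject, predicate/property type, object/value, answer kind)
NETWORK = [
    ("TNO", "is the employer of", "Ronald", "person"),
    ("TNO", "is located in", "The Hague", "location"),
]

QUESTION_KINDS = {"who": "person", "what": "value", "where": "location"}

def answer_question(question: str, avatar_rights=lambda stmt: True) -> list[str]:
    words = [w.strip("?") for w in question.lower().split()]
    # 1. The who/what/where word tells what kind of answer to look for.
    kind = next((QUESTION_KINDS[w] for w in words if w in QUESTION_KINDS), "value")
    # 2. Strip very frequent words; the remainder is matched against the network.
    remaining = [w for w in words if w not in STOP_WORDS and w not in QUESTION_KINDS]
    answers = []
    for subject, link, value, stmt_kind in NETWORK:
        text = f"{subject} {link} {value}".lower()
        # 3. Keep statements of the requested kind, matching the remaining words,
        #    and for which the avatar has sufficient rights.
        if (stmt_kind == kind and all(w in text for w in remaining)
                and avatar_rights((subject, link, value))):
            answers.append(f"{subject} - {link} - {value}")
    return answers

print(answer_question("Who is the employer of Ronald?"))
# ['TNO - is the employer of - Ronald']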

We still have many ideas for improving Slammy, both in its purely linguistic capacities and in how it uses the Semantic Network when looking for answers.


Slammy screenshot

Slammy is currently not public, but you can ask Anthon Masala for a demonstration when he is online.

Information Extraction from Unstructured Sources - SneGate

A lot of information is currently captured in natural language (i.e. unstructured sources like speech and text written by humans). This information holds many statements like “TNO - is the employer of - Ronald”. It would be a giant leap if we could automatically capture these statements and add them to an ever-growing Semantic Network that links everything together. That would lead to structured information on which many analyses could be performed for many different purposes.

A prototype exists that does just that. It tackles two major challenges:

  • Recognizing the statements in the unstructured source
  • Adding the statement parts to the Semantic Network insofar as these parts are new information

The first challenge is handled by using available (open source) tools for Natural Language Processing as well as the content of the existing Semantic Network itself.
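As an illustration only, the sketch below combines one such open source tool (spaCy, with its small English model, both assumptions) with a list of relationship-kind names that would come from the Semantic Network; it is not the prototype's actual recognizer.

# Minimal sketch: combine an NLP tool's named entities with predicate names
# known in the Semantic Network to recognize statements in text.
import spacy

nlp = spacy.load("en_core_web_sm")          # assumed model; requires a prior download
KNOWN_PREDICATES = ["is the employer of"]   # relationship-kind names from the network

def extract_statements(text: str):
    doc = nlp(text)
    entities = [ent.text for ent in doc.ents]   # candidate subjects and objects
    statements = []
    # Look for a known predicate name appearing between two recognized entities.
    for i, subject in enumerate(entities[:-1]):
        for obj in entities[i + 1:]:
            between = text[text.find(subject) + len(subject): text.find(obj)]
            for predicate in KNOWN_PREDICATES:
                if predicate in between:
                    statements.append((subject, predicate, obj))
    return statements

print(extract_statements("TNO is the employer of Ronald."))
# If the NER model tags both names: [('TNO', 'is the employer of', 'Ronald')]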

The second challenge is handled by a highly complex piece of software that does not use a particular ontology for a specific domain. The software is for instance able to identify which “Ronald” is meant in the unstructured source: he could be one of the already known “Ronalds” in the Semantic Network or a new one. The way this is done is similar to the way we humans do it: by recognizing context based on the associations given in the remainder of the unstructured text. The software “zooms in” on the right “Ronald” as more statements are extracted from the unstructured source.
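An illustrative sketch of this context-based disambiguation, assuming a made-up representation of candidate nodes and their existing associations: each known “Ronald” is scored by how many of its associations match the statements extracted so far from the same source.

# Hypothetical candidate nodes and their existing associations in the network.
CANDIDATES = {
    "Ronald (node 17)": {("is employed by", "TNO"), ("lives in", "The Hague")},
    "Ronald (node 52)": {("is employed by", "Acme"), ("plays", "tennis")},
}

def best_match(extracted_associations: set) -> str | None:
    """Return the candidate sharing most associations, or None (probably a new node)."""
    scores = {name: len(assocs & extracted_associations)
              for name, assocs in CANDIDATES.items()}
    name, score = max(scores.items(), key=lambda item: item[1])
    return name if score > 0 else None

# The more statements are extracted, the more context there is to "zoom in" on.
print(best_match({("is employed by", "TNO")}))   # Ronald (node 17)
print(best_match({("plays", "chess")}))          # None -> likely a new Ronald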

These two features (i.e. not using a specific ontology and being able to establish context based on semantics rather than on statistics) make this application one of a kind and beyond the current state of the art.

The current prototype demonstrates the concept but is “frozen” for now. Partners are welcome to take the next steps. These are:

  • tuning the many parameters and thresholds used in the software to get the best results
  • making more use of grammar tools to recognize statements inside unstructured sources
  • adding more generic models to the Semantic Network, like “persons” “are project leaders of” “projects” (a small illustration follows this list)
  • adding more type identification services, for types like “persons” and “projects”
  • optimizing the use of memory and processing power with regard to the real-time establishment of context based on semantics; the current prototype uses a 24-PC grid that builds a cache of possible context parts inside the existing Semantic Network, a technical solution that is still inadequate for real-life applications
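As a small illustration of what a generic (type-level) model could look like, the sketch below uses an assumed, simplified representation in which a model licenses instance-level statements of the matching types; it is not the prototype's internal format.

# Assumed type-level model and instance typing, for illustration only.
GENERIC_MODELS = [("person", "is project leader of", "project")]
INSTANCE_TYPES = {"Ronald": "person", "SneGate": "project", "TNO": "organisation"}

def statement_fits_a_model(subject: str, predicate: str, obj: str) -> bool:
    """Check an extracted statement against the known generic models."""
    return (INSTANCE_TYPES.get(subject), predicate, INSTANCE_TYPES.get(obj)) in GENERIC_MODELS

print(statement_fits_a_model("Ronald", "is project leader of", "SneGate"))  # True
print(statement_fits_a_model("TNO", "is project leader of", "SneGate"))     # False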
Ronald Poell, Tom Rijgersberg
Last modified: 2013-06-30