EWCBR98 - day 2 scientific programme [view the programme]

The scientific programme opened with the first invited speaker, David Aha, who spoke on Reasoning and Learning: Exploiting the Lazy-Eager Dimension. David needs no introduction to anyone who regularly attends CBR conferences or ML meetings. He is without doubt one of the best-read and most scholarly of researchers active in CBR, and he is able to position CBR research in a wider context. The subtitle of his talk was "Exploiting the Timeliness Dimension" or, in plain English, "to be or not to be lazy". What he demonstrated well was that the divide between lazy and eager learning (or retrieval) algorithms is not discrete but really a continuum. Thus, one decides to be more or less lazy and consequently more or less eager. The decision needs to be well informed, as there are benefits and trade-offs between the different approaches. David advised us to remember that "no one algorithm is superior to all others under all circumstances", which he referred to as the "no free lunch theorem".

He gave an excellent overview of the subject, with many pointers to the literature that he advised the audience to look at. Finally, he concluded with a recommendation that the CBR community look at the recent ML work on classifier ensembles [a paper on this subject is available here]. The basic idea is that more than one algorithm is used and the results are triangulated: collectively, the ensemble of algorithms gives you a better classification than any individual algorithm would. This, he felt, was "low-hanging fruit" and could be a very profitable line of research for CBR. So all you prospective PhD students, now you have your thesis subject.
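
To make the ensemble idea concrete, here is a minimal sketch of majority voting over several simple classifiers. It is not from David's talk; the classifiers and toy data are invented purely for illustration.

    # A minimal sketch of a classifier ensemble: several weak classifiers vote,
    # and the majority label wins. The classifiers and data are invented.
    from collections import Counter

    def classify_by_threshold(feature_index, threshold):
        """Return a trivial classifier that labels a case by thresholding one feature."""
        return lambda case: "positive" if case[feature_index] > threshold else "negative"

    def ensemble_classify(classifiers, case):
        """Each classifier votes; the most common label is the ensemble's answer."""
        votes = Counter(clf(case) for clf in classifiers)
        return votes.most_common(1)[0][0]

    classifiers = [
        classify_by_threshold(0, 0.5),
        classify_by_threshold(1, 2.0),
        classify_by_threshold(2, 10.0),
    ]
    case = (0.7, 1.5, 12.0)   # toy case: two of the three classifiers say "positive"
    print(ensemble_classify(classifiers, case))  # -> "positive"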

[photo: David Aha]

[view David's presentation]

Petri Myllymaki from the Complex Systems Computation Group at the University of Helsinki described the use of a Bayesian retrieval algorithm. The experimental results were very convincing, with retrieval being reliable even when only 2-4 case features were used out of 14-15. The use of techniques that can work with subsets of case features was a theme that would be returned to on the following day; it is likely to become increasingly important as case-bases are used in large commercial applications.
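
Petri's algorithm is considerably more sophisticated than this, but the flavour of ranking cases from only the features a query supplies can be sketched in a naive-Bayes style; the case-base, feature names and probabilities below are invented.

    # A naive-Bayes-flavoured sketch of retrieval from a subset of features:
    # each stored case is scored only on the query features that are present.
    # This is an illustration, not Myllymaki's actual algorithm.
    import math

    case_base = {
        "case1": {"colour": "red", "size": "large", "shape": "round"},
        "case2": {"colour": "blue", "size": "small", "shape": "round"},
    }

    def log_score(case_features, query, match_prob=0.9, mismatch_prob=0.1):
        """Sum log-likelihoods over only those features the query actually supplies."""
        score = 0.0
        for feature, value in query.items():
            if feature in case_features:
                p = match_prob if case_features[feature] == value else mismatch_prob
                score += math.log(p)
        return score

    def retrieve(query):
        """Rank cases by score, best first, even if the query is only partial."""
        return sorted(case_base, key=lambda name: log_score(case_base[name], query), reverse=True)

    print(retrieve({"colour": "red", "shape": "round"}))  # only 2 of the 3 features given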

[photo: Petri Myllymaki]

Ralph Bergmann described the similarity measures that tecInno's new CBR-Works tool uses for retrieving objects. This is an excellent example of the very close relationship between academia and industry, which was discussed yesterday. The problem with retrieving objects is that, because of polymorphism, the internal structure of objects can vary (hence you are not necessarily comparing like with like). Ralph described a very pragmatic technique that measures how far apart objects are in the object hierarchy. Although simple, the technique works effectively.
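
As I understood it, the intuition is roughly the following; this sketch is my own, not CBR-Works code, and the toy hierarchy and the exact formula are invented: two objects are more similar the deeper their most specific common ancestor lies in the class taxonomy.

    # A sketch of class-hierarchy similarity: two objects are more similar the
    # deeper their most specific common ancestor lies in the taxonomy.
    hierarchy = {            # child -> parent
        "Vehicle": None,
        "Car": "Vehicle",
        "Truck": "Vehicle",
        "SportsCar": "Car",
        "Saloon": "Car",
    }

    def ancestors(cls):
        """Return the path from a class up to the root, including the class itself."""
        path = []
        while cls is not None:
            path.append(cls)
            cls = hierarchy[cls]
        return path

    def taxonomy_similarity(a, b):
        """Depth of the deepest common ancestor, normalised by the deeper of the two classes."""
        path_a, path_b = ancestors(a), ancestors(b)
        common = [c for c in path_a if c in path_b]
        common_depth = len(ancestors(common[0])) if common else 0
        return common_depth / max(len(path_a), len(path_b))

    print(taxonomy_similarity("SportsCar", "Saloon"))  # share "Car" -> fairly similar
    print(taxonomy_similarity("SportsCar", "Truck"))   # share only "Vehicle" -> less similar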

[photo: Ralph Bergmann]

Bruce McLaren from the University of Pittsburgh provided a change of focus. Coming from the AI and law community, with its much stronger cognitive science input, his paper dealt with the domain of professional ethics. Of great interest was the inclusion in the case representation of an event chronology based on Allen's temporal logic. Interestingly, time is an issue that has yet to really surface in the CBR community, but it will raise all sorts of interesting problems with regard to similarity.
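
For readers unfamiliar with Allen's interval relations, the sketch below computes a handful of the thirteen qualitative relations between two events from their start and end times. It is only illustrative of the kind of chronology information such a case representation could carry; Bruce's representation is richer, and the example events are invented.

    # A sketch of some of Allen's thirteen interval relations, computed from
    # event start/end times.
    def allen_relation(a_start, a_end, b_start, b_end):
        """Return the qualitative Allen relation of interval A to interval B (a subset of the 13)."""
        if a_end < b_start:
            return "before"
        if a_end == b_start:
            return "meets"
        if a_start == b_start and a_end == b_end:
            return "equals"
        if a_start == b_start and a_end < b_end:
            return "starts"
        if a_start > b_start and a_end == b_end:
            return "finishes"
        if a_start > b_start and a_end < b_end:
            return "during"
        if a_start < b_start < a_end < b_end:
            return "overlaps"
        return "inverse relation (swap the intervals to name it)"

    # toy chronology: the advice was given before the contract was signed
    print(allen_relation(1, 3, 5, 9))  # -> "before"
    print(allen_relation(6, 8, 5, 9))  # -> "during"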

[photo: Bruce McLaren]

Bjørnar Tessem from the University of Bergen may have a solution for all you Java programmers struggling with Sun's JDK. His group's system has interrogated the 1,700 Java classes in the JDK to extract information about the classes into cases. The programmer can then create a partial class definition, which is matched against the case-base to retrieve potentially useful classes. Somewhat unusually, they did not take a high-level approach of describing and retrieving classes by their function; instead, the system uses a low-level syntactic approach to describe classes. It would be nice if their system could sit in the background while you programmed and interrupt when it saw you creating a class structurally similar to an existing one.
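
My own guess at what a low-level syntactic match might look like is sketched below: a partial class definition is scored against stored class descriptions by the overlap of method signatures and field types. The miniature case-base and weights are invented, not their extraction of the 1,700 JDK classes.

    # A sketch of syntactic class retrieval: a partial class description is
    # scored against stored class cases by overlap of methods and field types.
    class_cases = {
        "java.util.Vector": {
            "methods": {"add(Object)", "get(int)", "size()", "remove(int)"},
            "fields": {"Object[]", "int"},
        },
        "java.util.Hashtable": {
            "methods": {"put(Object,Object)", "get(Object)", "size()", "remove(Object)"},
            "fields": {"Entry[]", "int"},
        },
    }

    def overlap(a, b):
        """Jaccard overlap of two sets of syntactic features."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def match(partial):
        """Rank stored classes against a partial class definition, best match first."""
        scores = {
            name: 0.7 * overlap(partial["methods"], case["methods"])
                  + 0.3 * overlap(partial["fields"], case["fields"])
            for name, case in class_cases.items()
        }
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    partial_class = {"methods": {"add(Object)", "size()"}, "fields": {"Object[]"}}
    print(match(partial_class))  # Vector should come out on top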

[photo: Bjørnar Tessem]

After lunch David Aha was on again - this time talking about a project that combines rules with CBR in a conversational CBR system (download a  postscript version here). Those of you familiar with Inference's CBR products will know that in addition to the cases simple rules can be defined. The rules match against text strings in the query and automatically answer confirming questions. Currently case-base authors have to create the rules. A tool under development at the Navy Research Labs called NaCoDAE uses an explicit domain model to let the system infer the rules during retrieval. Empirical results improved retrieval efficiency without harming precision. I also think that this technique could be useful in identifying poor case authoring styles in existing case-bases. David's young son Daniel was attending the conference fringe and presumably will be a force to be reckoned with in only a few short years.
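
The rule mechanism can be pictured roughly as below. This is a simplified illustration, not Inference's or NaCoDAE's actual implementation, and the rules and questions are invented: a rule whose trigger string appears in the free-text query automatically answers the corresponding confirming question before the dialogue starts.

    # A simplified illustration of conversational-CBR rules: if a rule's trigger
    # string appears in the user's free-text query, the associated confirming
    # question is answered automatically instead of being asked.
    rules = [
        {"trigger": "printer", "question": "Is the problem with a printer?", "answer": "yes"},
        {"trigger": "won't turn on", "question": "Does the device power up?", "answer": "no"},
    ]

    def apply_rules(query_text):
        """Return the question/answer pairs that the rules can fill in from the query."""
        answered = {}
        text = query_text.lower()
        for rule in rules:
            if rule["trigger"] in text:
                answered[rule["question"]] = rule["answer"]
        return answered

    query = "My printer won't turn on after the office move"
    print(apply_rules(query))  # both confirming questions are answered automatically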

[photo: a baby Aha]

This was my favourite paper at the workshop. David Leake and his student David Wilson have set down a framework by which the community can discuss issues relating to case-base maintenance. In a good scholarly exercise they surveyed the CBR literature for references to case-base maintenance; these were then categorised by how information on a case-base is gathered, how maintenance is timed, why a maintenance activity is triggered, whether maintenance is performed on- or off-line, what in the case-base is maintained, and whether individual cases or the entire case-base is affected by the maintenance activity.

The CBR community can argue about the categorisation and terminology, and neither of the Davids claimed that they necessarily had it perfect, but if you are writing about case-base maintenance you will have to refer to this paper for many years to come - so you might as well download it now (compressed postscript or acrobat).

An interesting point that was raised in the paper refers to Richter's knowledge containers. Each knowledge container in the case-base may need maintenance, but maintaining one may reduce the need to maintain another. For example, maintaining adaptation rules may reduce the need to maintain similarity metrics, or vice versa.
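
Their categorisation is a conceptual framework rather than code, but just to fix the dimensions in mind, here is one way a maintenance policy could be described along them. The field names are my paraphrase of the paper's dimensions and the example values are invented.

    # One way to record a case-base maintenance policy along the dimensions of
    # the Leake & Wilson framework. Field names paraphrase the paper's
    # dimensions; the example values are invented.
    from dataclasses import dataclass

    @dataclass
    class MaintenancePolicy:
        data_collection: str   # how information about the case-base is gathered
        timing: str            # when maintenance happens (periodic, ad hoc, ...)
        triggering: str        # why a maintenance activity is triggered
        integration: str       # performed on-line or off-line
        target: str            # what in the case-base is maintained (cases, indexes, ...)
        scope: str             # individual cases or the whole case-base

    policy = MaintenancePolicy(
        data_collection="usage statistics logged at retrieval time",
        timing="periodic (nightly)",
        triggering="retrieval accuracy drops below a threshold",
        integration="off-line",
        target="cases and similarity weights",
        scope="individual cases",
    )
    print(policy)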

[photos: David Leake & David Wilson]

The afternoon just kept getting better and better. Barry Smyth from University College Dublin presented a new twist on his ideas on modelling case-base competence. This work focused on groups of cases (competence groups) and showed that a group of cases, rather than an individual case, could be the fundamental unit for measuring a case-base's competence. Put simply, dense groups give good coverage, whereas sparse groups or isolated cases indicate poor coverage. Once again the Irish CBR people showed how a simple experimental design can illustrate a point well. The main outcome would seem to be a metric that could be predictive of case-base performance, particularly if combined with other metrics.
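
Roughly, and this is my own simplification of Barry's model with invented coverage sets, each case covers the problems it can solve; cases whose coverage sets overlap chain together into a competence group, and the size and density of those groups say more about overall competence than any single case does.

    # A rough sketch of competence groups: cases whose coverage sets overlap
    # are chained into the same group. In the real model the coverage sets come
    # from solving each case with the rest of the case-base.
    def competence_groups(coverage):
        """Group case names whose coverage sets share at least one solved problem."""
        groups = []
        for case, covered in coverage.items():
            merged = {case}
            touching = [g for g in groups if any(coverage[c] & covered for c in g)]
            for g in touching:
                merged |= g
                groups.remove(g)
            groups.append(merged)
        return groups

    coverage = {
        "c1": {"p1", "p2"},
        "c2": {"p2", "p3"},   # overlaps c1 -> same group
        "c3": {"p7"},         # no overlap -> its own (sparse) group
    }
    print(competence_groups(coverage))  # two groups: c1 and c2 together, c3 alone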

[photo: Barry Smyth]

The fact that case-base maintenance has become an issue shows that CBR is maturing. This is also indicated by the fact that methodological issues are starting to become more important. Sascha Schmitt is part of a large group drawing on the experience of nine industrial projects within INRECA-II (a large European Union funded project) to explore methodological issues. The methodology that Sascha described provides a structured way of guiding the development of a CBR system. Interestingly, the methodology is neither too academic nor too much like a cookbook.

Whilst on this subject, you should check out the newly formed INRECA Institute, which has grown out of the INRECA-II project. The new institute aims to promote the adoption and development of industrial CBR technology and should be supported by the CBR community.

[photo: Sascha Schmitt]

It's always refreshing to see alternative approaches to our problems. Sylvie Salotti has a problem with the heuristic nature of similarity in most CBR systems. She outlined how description logics could be used to design the retrieval and selection tasks of a CBR system. Her case-base is organised using a taxonomy of indexed concepts, and retrieval is performed by the automatic concept classification of the description logic.
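
A toy sketch of the idea as I understood it, not Sylvie's system: a query is classified into a concept taxonomy by subsumption, and the cases indexed under the concepts it falls under are retrieved. The taxonomy, properties and case names below are invented.

    # A toy sketch of classification-based retrieval: a concept subsumes the
    # query if the query has all of the concept's properties; cases indexed
    # under the subsuming concepts are returned, most specific first.
    concepts = {
        "Document": {"has_text"},
        "Contract": {"has_text", "has_parties"},
        "EmploymentContract": {"has_text", "has_parties", "has_salary"},
    }

    case_index = {
        "Document": ["case-misc-1"],
        "Contract": ["case-sale-7"],
        "EmploymentContract": ["case-hire-3"],
    }

    def classify(query_properties):
        """Return every concept that subsumes the query, most specific first."""
        subsuming = [name for name, props in concepts.items() if props <= query_properties]
        return sorted(subsuming, key=lambda name: len(concepts[name]), reverse=True)

    def retrieve(query_properties):
        """Collect the cases indexed under each subsuming concept."""
        return [case for concept in classify(query_properties) for case in case_index[concept]]

    print(retrieve({"has_text", "has_parties", "has_salary", "has_start_date"}))
    # -> case-hire-3 first, then the more general contract and document cases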

[photo: Sylvie Salotti]

I gave the final presentation of the day, which obviously I can't really comment on objectively. The paper is about why VR programming environments make interesting development environments for CBR systems, particularly those where visualisation of cases is an important issue (e.g., in case-based design or case-based training systems). I gave a little live concept demonstration, which thankfully didn't crash, and which everyone in the audience seemed to enjoy.

[photo: some of the audience at EWCBR-98 (why does everybody sit at the back?)]

[photo: Ian Watson]

We then all went out for the conference dinner at the Alexandra Hotel just behind Trinity College. The food was excellent, Padraig Cunningham had to order extra wine for us all and an Irish folk group kept us entertained. Then some of us went to a night club (David Leake looks quiet... but I'm blaming him) and we were forced to dance by some friendly young women who had been following Barry Smyth around all day (more about them later). Finally we were all thrown out and had to go back to Trinity.

[photo: the Alexandra Hotel]

[photo: the night club]
This photo has been deliberately distorted to protect the reputations (and dignity) of certain people.

[go back] [next day]

© ai-cbr, 1998