The Benchathlon Network

Activities


The ultimate goal of the Benchathlon is to set up a favourable context for evaluating CBIR systems. We are proceeding incrementally. At first, we wish to express and discuss a number of recommendations on different aspects of the construction of a CBIR system, so as to facilitate its evaluation.
The key is the standardisation of a number of issues, so that the central entity (us) can construct a benchmark (e.g. a software package) that can easily be "connected" to any system. The structure we are aiming for is that of a distributed CBIR benchmark. It integrates a number of standard components:
  • The data collections;
  • A set of standard queries;
  • A form of ground truth;
  • A benchmark engine;
  • A set of performance measures;
  • A standard access protocol.
These components are organised as shown below.
[Figure: A distributed CBIR benchmark structure]
The structure is inspired by that described in
  • Henning Müller, Wolfgang Müller, David McG. Squire, Stéphane Marchand-Maillet, Thierry Pun. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Pattern Recognition Letters, Vol. 22, No. 5, pp. 593-601, 2001.
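As a rough illustration only, the Python sketch below shows how a benchmark engine might drive a system under test through such a structure. The class and function names, and the way results are returned, are assumptions made for this example; a real engine would communicate over a standard access protocol (MRML, linked from this site, is one candidate).

# A rough sketch of a benchmark engine driving a system under test.
# The classes, method names and result format are assumptions for this example;
# a real engine would speak a standard access protocol (MRML is one candidate).

class CBIRSystemClient:
    """Stand-in for the connection to the system under evaluation."""

    def __init__(self, endpoint):
        self.endpoint = endpoint  # e.g. the address of an MRML-speaking server

    def submit_query(self, query_image_id, max_results=100):
        # A real implementation would send the query over the access protocol
        # and parse the ranked answer list returned by the system.
        raise NotImplementedError("connect this client to an actual CBIR system")


def run_benchmark(client, standard_queries, ground_truth, measure):
    """Run every standard query and score the answers against the ground truth."""
    scores = {}
    for query_id in standard_queries:
        ranked_results = client.submit_query(query_id)  # ranked list of image ids
        relevant = ground_truth[query_id]               # set of relevant image ids
        scores[query_id] = measure(ranked_results, relevant)
    return scores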

These necessary components map onto the following activities or subtasks.

Parallel activities

Creating data collections
The first step in evaluating systems is to find common data to work with. To avoid dealing with copyright policies and their management, we wish to gather data that is free of copyright. Unfortunately, this essentially means that we will need to create our own collection. Further, because different domains impose different constraints, we will need specialised collections.

Creating standard queries
Expertise will then be needed to define standard queries to send to the systems under evaluation. The aim is to define orthogonal challenges that clearly point out the weaknesses and strengths of the evaluated systems. A list of CBIR challenges has been compiled and may serve as a starting point for reflection.
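Purely as an illustration, a standard query set could be encoded along the following lines; the field names, file paths and challenge labels are invented for this example and do not reflect an agreed format.

# Illustrative encoding of a standard query set; the field names, file paths
# and challenge labels are invented for this example.
standard_queries = [
    {"query_id": "q001", "image": "collections/generic/img_0001.jpg",
     "challenge": "illumination change"},
    {"query_id": "q002", "image": "collections/generic/img_0042.jpg",
     "challenge": "viewpoint change"},
    {"query_id": "q003", "image": "collections/medical/scan_0007.png",
     "challenge": "texture similarity"},
]

def queries_by_challenge(queries):
    """Group query ids by challenge, so results can be reported per challenge."""
    groups = {}
    for query in queries:
        groups.setdefault(query["challenge"], []).append(query["query_id"])
    return groups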

Creating ground truth
The Benchathlon has triggered a number of side activities, including image annotation. In our software repository we propose some prototypes for achieving this.
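As a sketch of what the outcome of such annotation work might look like, the snippet below reads a hypothetical relevance file listing, for each standard query, the images judged relevant. The file format is an assumption made for the example, not a Benchathlon standard.

# Sketch of loading ground truth from a simple text file; the format is assumed:
# each line holds a query id followed by the ids of the images judged relevant,
# e.g.  q001 img_0005 img_0117 img_0243

def load_ground_truth(path):
    """Return a dict mapping each query id to the set of relevant image ids."""
    ground_truth = {}
    with open(path) as handle:
        for line in handle:
            fields = line.split()
            if not fields:
                continue
            query_id, relevant_ids = fields[0], fields[1:]
            ground_truth[query_id] = set(relevant_ids)
    return ground_truth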

Defining performance measures
A number of performance metrics are available in the literature. The reference cited above (Müller et al., 2001) proposes a set of measures, mostly based on the notions of precision and recall.
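For concreteness, precision and recall over a ranked result list can be computed as follows; precision at a fixed cut-off is included as one common variant. This is a minimal sketch, not the exact set of measures proposed in the reference.

# Minimal precision and recall for one ranked result list against a set of
# relevant image ids; precision at a cut-off is a common third variant.

def precision(ranked_results, relevant):
    """Fraction of the returned images that are relevant."""
    if not ranked_results:
        return 0.0
    hits = sum(1 for image_id in ranked_results if image_id in relevant)
    return hits / len(ranked_results)

def recall(ranked_results, relevant):
    """Fraction of the relevant images that were returned."""
    if not relevant:
        return 0.0
    hits = sum(1 for image_id in ranked_results if image_id in relevant)
    return hits / len(relevant)

def precision_at(ranked_results, relevant, cutoff):
    """Precision computed on the first `cutoff` results only."""
    return precision(ranked_results[:cutoff], relevant)

Such per-query values can then be averaged over all standard queries, possibly at several cut-offs, to summarise a system with a few numbers.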

Defining an evaluation protocol
The above reference advocates an automated procedure. While this may be the ultimate goal, we may start with a procedure that is easier to set up. Inspiration may come from what happens in TREC.
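One simple, TREC-inspired option would be to have participating groups submit plain-text run files that the benchmark engine scores offline. The sketch below assumes a run-file layout (query id, rank, image id, score) modelled on TREC-style runs; the actual format remains to be defined.

# Sketch of an offline, TREC-inspired evaluation step. Participants would submit
# a plain-text run file with one line per result: query id, rank, image id, score.
# This layout is an assumption modelled on TREC-style runs, not a fixed standard.

def load_run(path):
    """Return a dict mapping each query id to its ranked list of image ids."""
    runs = {}
    with open(path) as handle:
        for line in handle:
            query_id, rank, image_id, _score = line.split()
            runs.setdefault(query_id, []).append((int(rank), image_id))
    return {query_id: [image_id for _, image_id in sorted(results)]
            for query_id, results in runs.items()}

def evaluate_run(run, ground_truth, measure):
    """Score every query of a submitted run with the chosen performance measure."""
    return {query_id: measure(ranked_results, ground_truth.get(query_id, set()))
            for query_id, ranked_results in run.items()}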

Displaying results
This comprises two aspects. First, the result of an evaluation procedure should be clear enough to highlight the strengths and weaknesses of a given system or methodology. Second, while the main aim of benchmarking is not a harsh competition between systems, it may still be of interest to make these results publicly available, so as to compare the efficiency of methodologies as objectively as possible.

Getting groups to participate
This is the hardest task. Although benchmarking is viewed as important, most researchers are not ready to spend time, effort and resources on it. It is worth a try, so join us!

If you are interested in any of these activities, please send me an email.


Our plan

We are following a strict CBIR benchmark construction plan.


Current stage

Take a look at our log book.


Visit also:
  • The Benchathlon is part of the EI Internet Imaging Conference;
  • Our SourceForge source repository;
  • MRML: home of the Multimedia Retrieval Markup Language;
  • GIFT: home of the GNU Image Finding Tool.