CLEF promotes the systematic evaluation of information access systems, primarily through experimentation on shared tasks.

Eight labs are offered at CLEF 2012.

Seven labs will follow a "campaign-style" evaluation practice for specific information access problems in the tradition of past CLEF campaign tracks:

  1. CHiC (Cultural Heritage in CLEF): a benchmarking activity to investigate the systematic, large-scale evaluation of cultural heritage digital libraries and information access systems.
  2. CLEF-IP: a benchmarking activity to investigate IR techniques in the patent domain.
  3. ImageCLEF: a benchmarking activity on the experimental evaluation of image classification and retrieval, focusing on the combination of textual and visual evidence.
  4. INEX: a benchmarking activity on the evaluation of XML retrieval.
  5. PAN: a benchmarking activity on uncovering plagiarism, authorship, and social software misuse.
  6. QA4MRE: a benchmarking activity on the evaluation of Machine Reading systems through Question Answering and Reading Comprehension Tests.
  7. RepLab: a benchmarking activity on reputation management technologies.

One lab will be run as a workshop, organized as a speaking and discussion session, to explore issues of evaluation methodology, metrics, and processes in information access and closely related fields:

  1. CLEFeHealth 2012: a workshop on Cross-Language Evaluation of Methods, Applications, and Resources for eHealth Document Analysis.