
CLEF 2023 JOKER Track:

Automatic Wordplay Analysis

Wednesday 20th September

JOKER sessions take place in ROOM 3 except for the lab overviews.

Lab Overviews (ROOM 1)

CONFERENCE Session 5: Natural Language Processing (Room 1)

Thursday 21st September

JOKER Session 1 (ROOM 3)

Abstract: Interactive computer systems are often designed for some task, with implicit or explicit target notions and quality metrics. Some tasks are work-related, some are less so. How well a system addresses its intended tasks is measured in various ways, but the somewhat more ephemeral and less goal-directed quality aspects of a system are difficult to address and measure well. The target notion of information access systems is “relevance”, and quality benchmarks are based on human-assessed topical relevance of items for a topical information need. Semantic components in language processing are typically evaluated using vocabulary benchmarks which are biased towards topicality. How might the delightfulness and enjoyability of an information system, of a text or a conversation, or an interactive session be measured systematically? Can it even be done?

Bio: Jussi Karlgren is a Principal AI Scientist at Silogen, where he works on quality criteria for large language models, and he is one of the chairs of the newly announced ELOQUENT lab at CLEF which will work on such questions. He has worked on statistical language models, evaluation of information system quality, and human-computer interaction in industrial and academic research groups for more than thirty years. His major research interest is in computational stylistics in language, in finding quantitative descriptions of the many different ways we can address a topic in discourse.

JOKER Session 2 (ROOM 3)

JOKER Session 3 (ROOM 3)

Abstract: Virtual Assistants such as Apple’s Siri and Amazon’s Alexa are well-suited for task-oriented interactions (“Call Jason”), but other interaction types are often beyond their capabilities. One notable example is playful requests: for example, people ask their Virtual Assistant personal questions (“What’s your favorite color?”) or joke with them, sometimes at their expense (“Find Nemo”). Failing to recognize playfulness causes user dissatisfaction and abandonment, destroying the precious rapport with the Virtual Assistant. To address the challenge of automatically detecting playful utterances, we first characterize the different types of playful human-virtual assistant interaction. We introduce a taxonomy of playful requests rooted in theories of humor and refined by analyzing real-world traffic from Alexa. In follow-up work, we focus on equipping AI assistants with the ability to respond in a playful manner to irrational shopping requests. We first evaluate several neural generation models, which lead to unsuitable results, showing that this task is non-trivial. We then devise a simple yet effective solution that utilizes a knowledge graph to generate template-based responses grounded in commonsense. While the commonsense-aware solution is slightly less diverse than the generative models, it provides better responses to playful requests. This emphasizes the gap in commonsense exhibited by neural language models.
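To illustrate the idea of template-based responses grounded in a knowledge graph, here is a minimal sketch. It is not the system described in the talk: the entities, the stored facts, and the response templates are all invented for illustration, and a real system would query a large commonsense knowledge graph rather than a hand-written dictionary.

```python
# Illustrative sketch only: a tiny hand-written "knowledge graph" mapping
# impossible shopping items to a commonsense fact, plus one response template.
KNOWLEDGE_GRAPH = {
    "the moon": "384,400 km away",
    "a dinosaur": "extinct for about 66 million years",
}

def playful_response(item: str) -> str:
    """Return a template-based playful response grounded in a stored fact,
    or a plain fallback when the item is unknown."""
    fact = KNOWLEDGE_GRAPH.get(item)
    if fact is None:
        return f"Sorry, I couldn't find {item} in the catalog."
    # Ground the template with the commonsense fact about the item.
    return f"Sorry, {item} is {fact} -- that makes delivery a bit tricky!"

print(playful_response("the moon"))
```

Because the fact is retrieved rather than generated, the response cannot hallucinate commonsense knowledge, at the cost of less diversity than a neural generator, which mirrors the trade-off described in the abstract.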

Bio: Alexander Libov is an applied science manager at Amazon, leading a research group in the Alexa Shopping organization. Alexander received his B.Sc., M.Sc., and Ph.D. from the Technion - Israel Institute of Technology in the area of distributed systems. During his studies, he worked and interned at Intel, Microsoft, IBM, and Yahoo. After graduating, Alexander spent two years in the Yahoo Research mail team working on mail search query suggestions and top-result selection, leading to several publications in the information retrieval area. At Amazon, Alexander works in the areas of information retrieval, NLP, and computational humor.

Closing and Introduction of CLEF 2024 (Room 1)

All CLEF 2023 JOKER (Automatic Wordplay Analysis) track papers

Including papers by authors unable to present in person or online: