
dc.contributor.author: Kostric, Ivica
dc.contributor.author: Balog, Krisztian
dc.contributor.author: Radlinski, Filip
dc.date.accessioned: 2024-04-17T11:28:20Z
dc.date.available: 2024-04-17T11:28:20Z
dc.date.created: 2023-12-18T22:52:56Z
dc.date.issued: 2023
dc.identifier.citation: Kostric, I., Balog, K., & Radlinski, F. (2023). Generating Usage-related Questions for Preference Elicitation in Conversational Recommender Systems. ACM Transactions on Recommender Systems.
dc.identifier.issn: 2770-6699
dc.identifier.uri: https://hdl.handle.net/11250/3127014
dc.description.abstract: A key distinguishing feature of conversational recommender systems over traditional recommender systems is their ability to elicit user preferences using natural language. Currently, the predominant approach to preference elicitation is to ask questions directly about items or item attributes. Users searching for recommendations may not have deep knowledge of the available options in a given domain. As such, they might not be aware of key attributes or desirable values for them. However, in many settings, talking about the planned use of items does not present any difficulties, even for those who are new to a domain. In this article, we propose a novel approach to preference elicitation by asking implicit questions based on item usage. As one of the main contributions of this work, we develop a multi-stage data annotation protocol using crowdsourcing to create a high-quality labeled training dataset. Another main contribution is the development of four models for the question generation task: two template-based baseline models and two neural text-to-text models. The template-based models use heuristically extracted common patterns found in the training data, while the neural models use the training data to learn to generate questions automatically. Using common metrics from machine translation for automatic evaluation, we show that our approaches are effective in generating elicitation questions, even with limited training data. We further employ human evaluation for comparing the generated questions using both pointwise and pairwise evaluation designs. We find that the human evaluation results are consistent with the automatic ones, allowing us to draw conclusions about the quality of the generated questions with certainty. Finally, we provide a detailed analysis of cases where the models show their limitations.
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Generating Usage-related Questions for Preference Elicitation in Conversational Recommender Systems
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: The authors
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
dc.source.journal: ACM Transactions on Recommender Systems
dc.identifier.doi: 10.1145/3629981
dc.identifier.cristin: 2215216
cristin.ispublished: true
cristin.fulltext: original

