Speaker
Description
Chair: Prof. Janusz Kacprzyk
Abstract:
In recent years, generative AI and its applications have advanced rapidly, as highlighted by the public release of ChatGPT on the Web, which showcased both the technology's potential and its limitations. Large Language Models (LLMs) are one of the core technologies driving generative AI and are currently used across a wide range of NLP tasks, including machine translation, conversational agents, and more. However, LLMs still exhibit certain limitations, notably their inability to fully understand and rely on the contextual knowledge relevant to a specific task.
A typical approach is to use prompting techniques to guide text generation by taking into account so-called in-context information, without modifying the model's parameters. Recently, a branch of research has focused on exploring ways to improve the modeling and control of the process of injecting context (world knowledge) into models. In this talk, I will present some approaches aimed at this goal, and I will discuss the research challenge of creating personal language models—LLMs tailored to the knowledge of a specific user (such as the expertise and language proficiency of individual users or specific user groups).
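As a rough illustration of what "guiding generation through in-context information, without modifying the model's parameters" means in practice, a few-shot prompt can be assembled as plain text that carries the task knowledge. This is a minimal sketch; the translation examples and the helper name are hypothetical, not taken from the talk:

```python
# Minimal sketch of in-context (few-shot) prompting: task knowledge is
# injected purely through the prompt text, with no parameter updates.
# The example pairs and function name below are illustrative placeholders.

def build_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    parts = ["Translate English to French."]
    for source, target in examples:
        parts.append(f"English: {source}\nFrench: {target}")
    # The final, unanswered query is what the model is asked to complete.
    parts.append(f"English: {query}\nFrench:")
    return "\n\n".join(parts)

examples = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
]
prompt = build_prompt(examples, "See you tomorrow.")
print(prompt)
```

The same mechanism generalizes beyond translation: any task demonstrations or user-specific knowledge placed in the prompt shape the output while the underlying model stays frozen.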