Details

Type

Bachelor Thesis
Student Project
Master Thesis

Persons

1

Description

As dashboards have become ubiquitous, ensuring that a broad audience can comprehend them is paramount. One approach is to enhance users' understanding through the application of Large Language Models (LLMs). This approach encompasses several measures:

Tasks

  • Crafting onboarding content by engineering prompts based on the specific keywords of each visualization (see the sketch after this list). 

  • Leveraging the LLM's capabilities to determine the onboarding sequence based on the type, location, and data of each visual. 

  • Dynamically adjusting the narrative to cater to varied user expertise levels. 

  • Implementing a responsive interface where users can seek further clarity, for instance by asking questions like "What's the purpose of the legend?". The LLM-backed onboarding would then adapt its responses, ensuring a personalized and enriched user experience within data visualization platforms. 

  • The generated explanation text could be further enhanced with explanations of words or phrases the user might not know, for example by showing a tooltip when the user hovers over certain parts of the text.

  • Further tasks would involve a user study with expert interviews to evaluate the usability and usefulness of the approach.
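
To make the prompt-engineering idea more concrete, the following minimal sketch (in Python) shows one possible way to assemble an onboarding prompt from a visualization's metadata (type, position, keywords) and the user's expertise level. All names in it (VisSpec, build_onboarding_prompt) are hypothetical illustrations, not part of any existing framework; the concrete design is up to the student.

```python
from dataclasses import dataclass


@dataclass
class VisSpec:
    """Metadata describing a single dashboard visualization (hypothetical schema)."""
    title: str
    chart_type: str        # e.g. "bar chart", "line chart"
    position: str          # e.g. "top left"
    keywords: list[str]    # domain terms appearing in the visual


def build_onboarding_prompt(vis: VisSpec, expertise: str) -> str:
    """Assemble an LLM prompt that explains one visualization,
    adapted to the user's self-reported expertise level."""
    return (
        f"You are guiding a {expertise} user through a dashboard.\n"
        f"Explain the {vis.chart_type} titled '{vis.title}' "
        f"located at the {vis.position} of the dashboard.\n"
        f"Cover these concepts: {', '.join(vis.keywords)}.\n"
        "Keep the explanation short and refer to visual features "
        "(axes, legend, colors) the user can see."
    )


if __name__ == "__main__":
    vis = VisSpec(
        title="Monthly Revenue",
        chart_type="bar chart",
        position="top left",
        keywords=["revenue", "fiscal quarter", "year-over-year growth"],
    )
    prompt = build_onboarding_prompt(vis, expertise="novice")
    print(prompt)  # this string would be sent to the chosen LLM backend
```

A similar prompt, given the metadata of all visualizations at once, could ask the LLM to propose an onboarding sequence, and the expertise parameter could be adjusted dynamically as the user interacts with the system.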

Requirements

  • Interest and knowledge in visualization. 
  • Good programming skills.
  • Creativity and enthusiasm. 

Environment

The project should be implemented as a standalone application, desktop or web-based (to be discussed).

 

Supervision: Vaishali Dhanoa (Pro2Future), Andreas Hinterreiter (JKU Linz), Paul Haferlbauer (Pro2Future), Marc Streit (JKU Linz), Eduard Gröller (TU Wien)

Contact: vaishali.dhanoa@pro2future.at

Related Work:

A Process Model for Dashboard Onboarding (Dhanoa et al., EuroVis 2022)

Improving Language Understanding by Generative Pre-Training (paper introducing GPT)

Language Models are Few-Shot Learners (few-shot learning, GPT-3)

PaLM-E: An Embodied Multimodal Language Model (visual question answering, multimodal models; possibly relevant)

Responsible

For more information, please contact Eduard Gröller.