Information

  • Publication Type: Master Thesis
  • Workgroup(s)/Project(s):
  • Date: November 2021
  • Date (Start): November 2020
  • Date (End): November 2021
  • Second Supervisor: Eduard Gröller
  • Diploma Examination: 15 November 2021
  • Open Access: yes
  • First Supervisor: Manuela Waldner
  • Pages: 110
  • Keywords: machine learning, interactive visualisation

Abstract

In recent years, the use of machine learning (ML) models, and especially deep neural networks, has increased rapidly across many different domains. One of the major challenges when working with ML models is to interpret a model's results correctly and efficiently. In addition, understanding how a model arrived at its conclusions can be complicated even for domain experts in machine learning. For laypeople, ML models are often just black boxes. This lack of understanding of a model and its reasoning often leads users to distrust its predictions.

In this thesis, we work with an ML model trained on event-organisation data. The goal is to create an exploratory visual event-organisation system that enables event organisers to work efficiently with the model. The main user goals in this scenario are to maximise profit and to prepare for the predicted number of visitors. To achieve these goals, users need to perform tasks such as interpreting the prediction for the current input and carrying out what-if analyses to understand the effects of changing parameters. The proposed system incorporates adapted versions of several state-of-the-art model-agnostic interpretation methods, such as partial dependence plots and case-based reasoning. Since model-agnostic methods are independent of the underlying ML model, they provide high flexibility.
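
To make the interpretation idea concrete, the sketch below illustrates the general model-agnostic partial dependence computation in Python: for one input feature, a grid of values is swept, each value is substituted into every row of the data, and the model's predictions are averaged. The predict function, feature layout, and data here are hypothetical placeholders, not the thesis's actual event-organisation model.

    # Minimal sketch of a model-agnostic partial dependence computation.
    # All names and data below are illustrative placeholders.
    import numpy as np

    def partial_dependence(predict, X, feature_idx, grid):
        """Average prediction as a function of one feature, all else held fixed."""
        averages = []
        for value in grid:
            X_mod = X.copy()
            X_mod[:, feature_idx] = value      # overwrite this feature in every row
            averages.append(predict(X_mod).mean())
        return np.array(averages)

    # Toy usage with a stand-in predictor (e.g. predicted visitor count).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 4))                    # 200 past events, 4 features
    predict = lambda X: 100 + 50 * X[:, 0] - 30 * X[:, 2]   # hypothetical model
    grid = np.linspace(0, 1, 11)
    pd_curve = partial_dependence(predict, X, feature_idx=0, grid=grid)

Because the procedure only calls the model's prediction function, it works for any underlying model, which is what makes such methods attractive for a flexible, model-agnostic system.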

Many state-of-the-art approaches for explaining ML models are too complex to be understood by laypeople. Our target group of event organisers cannot be expected to have extensive technical knowledge in machine learning. In this thesis, we seek answers to the following questions: How can we visualise ML predictions for laypeople in a comprehensible way? How can predictions be compared against each other? How can we support users in gaining trust in the ML model? Our event-organisation system was created using a human-centred design approach, with multiple case studies involving potential users throughout the whole development cycle.

BibTeX

@mastersthesis{sbardellati-2021-eveos,
  title =      "Exploratory Visual System for Predictive Machine Learning of
               Event-Organisation Data",
  author =     "Maximilian Sbardellati",
  year =       "2021",
  abstract =   "In recent years, the usage of machine learning (ML) models
               and especially deep neural networks in many different
               domains has increased rapidly. One of the major challenges
               when working with ML models is to correctly and efficiently
               interpret the results given by a model. Additionally,
               understanding how the model came to its conclusions can be a
               very complicated task even for domain experts in the field
               of machine learning. For laypeople, ML models are often just
               black-boxes. The lack of understanding of a model and its
               reasoning often leads to users not trusting the model’s
               predictions.  In this thesis, we work with an ML model
               trained on event-organisation data. The goal is to create an
               exploratory visual event-organisation system that enables
               event organisers to efficiently work with the model. The
               main user goals in this scenario are to maximise profits and
               to be able to prepare for the predicted number of visitors.
               To achieve these goals users need to be able to perform
               tasks like: interpreting the prediction of the current input
               and performing what-if analyses to understand the effects of
               changing parameters. The proposed system incorporates
               adapted versions of multiple state-of-the-art model-agnostic
               interpretation methods like partial dependence plots and
               case-based reasoning. Since model-agnostic methods are
               independent of the ML model, they provide high flexibility. 
               Many state-of-the-art approaches to explain ML models are
               too complex to be understood by laypeople. Our target group
               of event organisers cannot be expected to have a sufficient
               amount of technical knowledge in the field of machine
               learning. In this thesis, we want to find answers to the
               questions: How can we visualise ML predictions to laypeople
               in a comprehensible way? How can predictions be compared
               against each other? How can we support users in gaining
               trust in the ML model? Our event-organisation system is
               created using a human-centred design approach performing
               multiple case studies with potential users during the whole
               development circle.",
  month =      nov,
  pages =      "110",
  address =    "Favoritenstrasse 9-11/E193-02, A-1040 Vienna, Austria",
  school =     "Research Unit of Computer Graphics, Institute of Visual
               Computing and Human-Centered Technology, Faculty of
               Informatics, TU Wien",
  keywords =   "machine learning, interactive visualisation",
  URL =        "https://www.cg.tuwien.ac.at/research/publications/2021/sbardellati-2021-eveos/",
}