
Previous Talks

Speaker: Prof. Renato Pajarola (Head of the Visualization and MultiMedia Lab, Universität Zürich)

Sobol indices and other, more recent quantities of interest are of great aid in sensitivity analysis, uncertainty quantification, and model interpretation. Unfortunately, computing as well as visualizing such indices is still challenging for high-dimensional systems. We propose the tensor train (TT) decomposition as a unified framework for surrogate modeling and sensitivity analysis of independently distributed variables, and introduce the Sobol tensor train (Sobol TT), a compressed data structure that can quickly and approximately answer sophisticated queries over exponentially large sets of Sobol indices. Furthermore, we propose a novel visualization tool that leverages this Sobol TT representation. Our approach efficiently captures the complete global sensitivity information of high-dimensional scalar models, allows interactive aggregation and subselection operations, and yields related Sobol indices and other related quantities at low computational cost. In our three-stage visualization, variable sets to be analyzed can be added or removed interactively. Additionally, a novel hourglass-like diagram presents the relative importance of any single variable or combination of input variables with respect to any composition of the rest of the input variables. We showcase our visualization with several example models, demonstrating the high expressive power and analytical capability made possible by the proposed Sobol TT method.
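For readers unfamiliar with the quantities the talk is about: first-order Sobol indices have a standard Monte Carlo estimator that is independent of the talk's tensor-train machinery. A minimal pure-Python sketch for a toy additive model (the model, function names, and sample sizes here are illustrative, not from the talk):

```python
import random

def model(x1, x2):
    # Toy additive model with uniform inputs on [0, 1]:
    # Var(Y) = 1/12 + 4/12, so the analytic first-order
    # Sobol indices are S1 = 0.2 and S2 = 0.8.
    return x1 + 2.0 * x2

def first_order_sobol(n=100_000, seed=1):
    rng = random.Random(seed)
    # Two independent sample matrices, as in the Saltelli scheme.
    A = [(rng.random(), rng.random()) for _ in range(n)]
    B = [(rng.random(), rng.random()) for _ in range(n)]
    fA = [model(*a) for a in A]
    fB = [model(*b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(2):  # one index per input variable
        # A_B^i: matrix A with column i replaced by column i of B.
        fABi = [model(b[0], a[1]) if i == 0 else model(a[0], b[1])
                for a, b in zip(A, B)]
        # Saltelli-style estimator: S_i ~ E[f(B) * (f(A_B^i) - f(A))] / Var(Y)
        s_i = sum(yb * (yi - ya)
                  for yb, yi, ya in zip(fB, fABi, fA)) / n / var
        indices.append(s_i)
    return indices

s1, s2 = first_order_sobol()
```

The TT approach in the talk replaces exactly this kind of per-index sampling with queries against a compressed representation of all indices at once.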

Details

Category

Duration

45 + 15
Host: Eduard Gröller

Speaker: Prof. Takayuki Itoh (Ochanomizu University, Japan)

Visualization of densely connected graphs has been a long-term research problem, and many studies have applied graph hierarchization schemes.
In this talk, the speaker's own hierarchical graph visualization methods will be presented, along with applications to gene networks and human networks. The talk will also introduce the evaluation criteria that the speaker has proposed for the visualization results of hierarchical graphs, and discuss new approaches to hierarchical graph visualization using these evaluation criteria.

Short Bio:
Takayuki Itoh is a full professor at Ochanomizu University, Japan. He received his Ph.D. degree from Waseda University in 1997. He was a researcher at IBM Research, Tokyo Research Laboratory from 1992 to 2005. He moved to Ochanomizu University as an associate professor in 2005 and has been a full professor there since 2011. He works mainly on information visualization, especially graph, hierarchical, and multidimensional data visualization. He was the general chair of IEEE Pacific Visualization 2018, Graph Drawing 2022, and many other international conferences, and will be one of the short paper chairs of IEEE VIS 2023.

Details

Category

Duration

45 + 15
Host: Renata Raidou

Speaker: Assoc.Prof. Dr. Marianna Zichar (University of Debrecen, Department of Data Science and Visualization)

The purposes of creating 3D models can be very different, the tools we can use are diverse, and their applications are related to many fields.
One of the emerging technologies, namely 3D printing, needs 3D models as well.
The characteristics of these models depend on the manufacturing method as well as on their usage. The talk presents the fields where I work with 3D-printable models (research, education, different projects) and tries to emphasize the benefits of 3D printing.
 
Short Bio:
Marianna Zichar gained her Ph.D. in Mathematics and Computer Science at the University of Debrecen, where she works as an associate professor.
Her original field of interest is GIS (geographic information systems), where she focuses on geovisualization.
Some years ago, she started to deal with 3D printing and modeling after the faculty received a 3D printer.
Together with some colleagues, she has designed a course on 3D printing and modeling, which she teaches regularly to bachelor students. She takes every opportunity to collaborate with researchers from other disciplines (such as pharmacy and engineering) and also introduces this innovative field to students through different projects and workshops.
 

Details

Category

Duration

45 + 15
Host: Manuela Waldner

Speaker: Dr. María Luján Ganuza (Universidad Nacional del Sur, Bahía Blanca, Argentina), Dr. Matías Selzer (Universidad Nacional del Sur, Bahía Blanca, Argentina)

Dr. Matías Selzer and Dr. María Luján Ganuza are part of the VyGLab (Research Laboratory in Visualization and Computer Graphics), at the DCIC (Department of Computer Science and Engineering) at the Universidad Nacional del Sur, Bahía Blanca, Argentina.

In this talk, they will introduce their recent research topics, regarding virtual and augmented reality and multidimensional data visualization.

Details

Category

Duration

20 + 20
Host: Eduard Gröller

Speaker: Prof. Alexander Lex (Scientific Computing and Imaging Institute and the School of Computing at the University of Utah)

Traditional empirical user studies tend to focus on testing aspects of visualizations or perceptual effects that can be fully controlled. Evaluating or comparing complex interactive visualization techniques, in contrast, is much more difficult, as complexity increases confounders. This challenge is aggravated when using crowdsourcing for evaluation, as crowd participants tend to be novices with limited motivation for excelling at a task. In this talk I will introduce methods to run and analyze such studies for complex visualization techniques, including procedural suggestions for crowdsourced studies, design of stimuli for testing, instrumentation of stimuli, and analysis of user behavior based on the data collected. 

Bio

I am an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the School of Computing at the University of Utah. I direct the Visualization Design Lab where we develop visualization methods and systems to help solve today’s scientific problems.

Before joining the University of Utah, I was a lecturer and post-doctoral visualization researcher at Harvard University. I received my PhD, master’s, and undergraduate degrees from Graz University of Technology. In 2011 I was a visiting researcher at Harvard Medical School.

I am the recipient of an NSF CAREER award and multiple best paper awards or best paper honorable mentions at IEEE VIS, ACM CHI, and other conferences. I also received a best dissertation award from my alma mater. 

I co-founded Datavisyn (http://datavisyn.io), a startup company developing visual analytics solutions for the pharmaceutical industry, where I’m currently spending my sabbatical. 

http://alexander-lex.net

Video

Details

Category

Duration

45 + 30
Host: Manuela Waldner

Speaker: Prof. G. Elisabeta Marai (University of Illinois Chicago)

Abstract:

Data visualization research often seeks to help solve real-world problems across application domains, from biomedicine to engineering.
There is considerable merit in such endeavors, which often help advance knowledge in these application domains. Beyond these contributions, as we work alongside domain experts, we also have a unique opportunity to observe qualitatively and analyze how these clients interact with the data through our tools and paradigms. Thus, we have a rare opportunity to better ground data visualization theory in these observations. In this talk, I will examine how working with real-world data and problems can point out specific gaps in our theoretical knowledge, challenge underlying assumptions in the data visualization field, and lead to new insights and theoretical guidelines. I will focus on several theoretical contributions grounded in this experience, from activity-centered design to visual scaffolding, the details-first paradigm, and visual explainability in artificial intelligence. Lastly, I will reflect on the lessons learned through this experience, with particular emphasis on the barriers our field poses to new theoretical contributions.

Bio:

Liz Marai is an associate professor of Computer Science at the University of Illinois at Chicago. Her research interests range from visual-system-related problems that can be robustly solved through automation, to problems that require human experts in the computational loop, and the principles behind this work. Marai's research has been recognized by multiple prestigious awards, including a Test of Time award from the International Society for Computational Biology, several Outstanding Paper awards shared with her students, an NSF CAREER Award and multiple NSF awards, and several multi-site NIH R01 awards as a lead investigator. She has co-authored scientific open-source software adopted by thousands of users, and she is a patent co-author whose algorithms have been embedded in a medical device.

Details

Category

Duration

45 + 30

Speaker: Eric Mörth (Visualization Group, University of Bergen)

Multiparametric imaging in cancer has been shown to be useful for tumor detection. Furthermore, radiomic tumor profiling enables a deeper analysis of tumor phenotypes and of a possible link to tumor aggressiveness. Analyzing complex imaging data in combination with clinical data is not trivial. We enable clinical experts to gain new insights into their multiparametric imaging data as well as cohort data. We include more than 7 modalities in a single view, together with cohort data of more than 100 endometrial cancer patients, including manually performed tumor segmentations. The goal of our contributions is to enable medical experts to obtain a deeper understanding of different tumor types and to define individual treatment for each patient.

Supporting the communication in science as well as between doctors and patients is another challenging task and one of our goals. In our latest contribution we propose a novel approach for authoring, editing, and presenting data-driven scientific narratives using scrollytelling. Our method flexibly integrates common sources such as images, text, and video, but also supports more specialized visualization techniques such as interactive maps or scalar field visualizations.

In this talk, I will present our efforts to scale up medical visualization supporting multi-modal, multi-patient and multi-audience approaches for healthcare data analysis and communication.

 

Speaker: Dr. Alexandra Diehl (Department of Informatics, University of Zurich)

According to the United Nations Office for Disaster Risk Reduction (UNDRR), the indirect economic losses caused by climate-related disasters increased by over 150% during 1998–2017 compared to the period 1978–1997 [1]. Among the most prominent high-impact weather events are flooding, storms, and heatwaves. Scientists need to improve the accuracy and communication of weather forecasting to reduce or even avoid the damage caused by these kinds of weather hazards.

Citizens continuously generate an enormous amount of digital content of diverse kinds, such as blog posts, tweets, photos, and videos. People tend to participate proactively in digital media and communicate such severe weather events on internet channels such as social media, news feeds, and citizen science projects, which represents a huge opportunity to improve current weather forecasting. To engage users in weather forecasting, meteorologists need effective visual communication tools to process the information and make it accessible to citizens.

In this talk, I will present some initial efforts in the visual analysis of citizen-generated data to extract useful information associated with severe weather events and to identify expert users in social networks, as well as a perceptually based visual design of a mobile application for citizen science on high-impact weather events.

[1] P. Wallemacq and R. House. UNISDR and CRED report: Economic losses, poverty, and disasters 1998–2017. Brussels: Centre for Research on the Epidemiology of Disasters (CRED), 31, 2018.

Details

Category

Duration

30 + 15

Speaker: Prof. Dr. Wolfgang Heidrich (Director, Visual Computing Center, King Abdullah University of Science and Technology)

Details

Category

Duration

45 + 30

Speaker: Carmine Elvezio (Columbia University)

Augmented Reality (AR) and Virtual Reality (VR) experiences, collectively known as eXtended Reality (XR), are built on rich, complex real-time interactive systems (RISs) that require the integration of numerous components supporting everything from rendering of virtual content to tracking of objects and people in the real world. Game engines such as Unity and Unreal currently provide a significantly easier pipeline than in the past for integrating the different subsystems of XR applications. But a number of development questions arise when considering how interaction, visualization, rendering, and application logic should interact, as developers are often left to create the “logical glue” on their own, leading to software components with low reusability. In this talk, I present a new software design pattern, the Relay & Responder (R&R) pattern, that attempts to address the concerns found with many traditional object-oriented approaches in XR systems. The R&R pattern simplifies the design of these systems by separating logical components from the communication infrastructure that connects them, while minimizing coupling and facilitating the creation of logical hierarchies that can improve XR application design and module reuse.

Additionally, I explore how this pattern can, across a number of different research development efforts, aid in the creation of powerful and rich XR RISs. I first present related work in XR system design and introduce the R&R pattern. Then I discuss how XR development can be eased by utilizing modular building blocks and present the Mercury Messaging framework (https://github.com/ColumbiaCGUI/MercuryMessaging), which implements the R&R pattern. Next I delve into three new XR systems that explore complex XR RIS designs (including user-study-management modules) using the pattern and framework. I then address the creation of multi-user, networked XR RISs using R&R and Mercury. Finally I end with a discussion on additional considerations, advantages, and limitations of the pattern and framework, in addition to prospective future work that will help improve both.
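As a rough illustration of the decoupling idea behind Relay & Responder: components talk only to a relay, never to each other. This is my own minimal Python sketch of that idea; the actual Mercury Messaging framework is a Unity/C# library, and every class and method name below is hypothetical:

```python
class Relay:
    """Routes messages to registered responders. Senders and receivers
    never hold references to each other, only to the relay."""

    def __init__(self):
        self._responders = {}

    def register(self, topic, responder):
        # Subscribe a responder to a topic.
        self._responders.setdefault(topic, []).append(responder)

    def send(self, topic, payload):
        # Forward the message to every responder subscribed to the topic
        # and collect their replies.
        return [r.respond(topic, payload)
                for r in self._responders.get(topic, [])]


class HighlightResponder:
    """Stand-in for an XR component, e.g. one that highlights an object
    when the user selects it."""

    def respond(self, topic, payload):
        return f"highlight {payload}"


relay = Relay()
relay.register("select", HighlightResponder())
results = relay.send("select", "engine-part-42")
# results == ["highlight engine-part-42"]
```

The point of the pattern, as described in the abstract, is that the `HighlightResponder` here can be swapped, duplicated, or arranged into hierarchies without the sender's code changing.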

Dr. Carmine Elvezio recently received his PhD in CS from Columbia University, studying AR/VR/MR and 3D graphics, and interaction and visualization techniques in the Computer Graphics and User Interfaces Lab, advised by Prof. Steven Feiner. He develops 3D systems across several domains including medicine, remote maintenance, space, music, and rehabilitation, working with many technologies including Microsoft HoloLens, Oculus Rift, Unity, Unreal, and OpenGL. He has participated in projects sponsored by NSF, Google, Verizon, Canon, and NASA, amongst others. He has also contributed to a number of open-source frameworks, including MercuryMessaging (https://github.com/ColumbiaCGUI/MercuryMessaging) and GoblinXNA.

Carmine managed the CGUI Lab at Columbia with Prof. Feiner from 2013 until 2021, where he advised over 150 independent research projects, assisted in teaching courses on 3D user interfaces, AR, and VR, and participated in many multi-disciplinary university initiatives.

Speaker: Gábor Sörös (Nokia Bell Labs, Budapest, Hungary)

Augmented reality (AR) has the potential to become the universal user interface between technology-augmented humans and a technology-augmented world by interactively connecting physical objects and digital information in space and time. Through several examples, I will present how AR technology enables us to reveal invisible processes, to see and hear things that happen at a different time or place, to intuitively configure and control smart environments, and to visualize simulated insights and a beautified reality.

Bio:

Gábor Sörös is a research scientist at Nokia Bell Labs in Budapest. His research interests span from mobile and wearable computer vision, augmented reality (AR), and computational photography towards augmenting humans with wearable technology, among others for interaction with smart things.

He studied electrical engineering in Budapest (HU) and in Karlsruhe (DE) with focus on communication systems, and visual computing in Vienna (AT) with focus on mobile augmented reality. He obtained the PhD degree in computer science at the Institute for Pervasive Computing of ETH Zurich (CH).

During his undergraduate studies, he worked as a student assistant at the Computer and Automation Research Institute of the Hungarian Academy of Sciences. From 2011 to 2015, he was the scientific advisor of the ETH spin-off Scandit on visual code scanning and product AR with mobile and wearable devices. In 2014, he completed an R&D internship on mobile AR at Qualcomm Research. Between 2016 and 2019, he was a postdoctoral researcher at ETH Zurich, where he also worked as the lead engineer of the ETH spin-off Kapanu on AR for dentistry. Since 2019, he has also been a technical advisor to the ETH spin-off Arbrea Labs on AR for cosmetic surgery. He joined Bell Labs in 2019 and is working on augmented intelligence.

Speaker: Prof. Ingrid Hotz (Linköping University)

In this seminar, I will talk about my experiences from first industry contacts that arose from a collaboration with a research group in mechanical engineering. The group's interest lies in the virtual development process of industrial parts, especially the analysis and modeling of fiber-reinforced polymers. Virtual product development based on simulations is today standard in many industrial and university environments. The models are becoming increasingly complex in their geometric design and the materials used. A lot of money and effort is invested in the development of new simulation software and virtual models, with impressive results. However, the analysis of the simulation results is becoming more and more demanding, and comparatively little effort is made to provide tools that exploit the data in its full diversity, from scalars to tensor fields. The collaboration originally focused on the development of novel tensor field visualization methods, but then moved more and more towards applying the entire zoo of basic visualization methods in a specific application. My talk is based on a presentation that I gave at the German industrial meeting on plastics and simulations last year in Munich and on the responses that I received at that meeting.

Short Bio: Ingrid Hotz is currently a professor in scientific visualization at Linköping University. She received a Ph.D. degree in computer science from the University of Kaiserslautern, Germany. After a postdoc at the University of California, Davis, she led an Emmy Noether research group at the Zuse Institute Berlin. For two years she led the scientific visualization group at the German Aerospace Center in Braunschweig. Her research interests include data analysis and scientific visualization, ranging from basic research questions to effective solutions to visualization problems in applications. This includes developing and applying concepts originating from different areas of computer science and mathematics, such as computer graphics, computer vision, dynamical systems, computational geometry, and combinatorial topology.

Details

Category

Duration

45 + 15
Host: Eduard Gröller

Speaker: Prof. Dr. Gerik Scheuermann (Universität Leipzig)

Splats and antisplats are specific flow features where a flow impinges on an immersed wall. They affect the surface stress and play a major role in the heat transfer between fluid and solid wall. On free boundaries, or boundaries between two fluids like water and air, they are also known as upwelling. We present the first algorithm to detect such structures within the velocity field of fluid flows. Furthermore, we demonstrate the relevance and effectiveness of this method by showing results for two turbulent flows simulated by direct numerical simulation (DNS): the flow over a backward-facing step and the flow through a turbine cascade.
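Since a splat corresponds to flow spreading outward along the wall (and an antisplat to flow converging), one plausible building block for such a detector is the divergence of the wall-parallel velocity sampled on a plane near the wall. This is my own illustrative sketch with central differences, not necessarily the authors' algorithm:

```python
def wall_parallel_divergence(u, v, h=1.0):
    """Central-difference divergence du/dx + dv/dy of a 2D velocity
    field (u, v) sampled on a regular grid of spacing h, given as
    lists of rows. Only interior points are computed; the boundary
    stays zero."""
    ny, nx = len(u), len(u[0])
    div = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * h)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * h)
            div[j][i] = dudx + dvdy
    return div

# Synthetic radial outflow around the grid center: a splat-like pattern.
n = 5
u = [[(i - 2) * 0.1 for i in range(n)] for j in range(n)]
v = [[(j - 2) * 0.1 for i in range(n)] for j in range(n)]
div = wall_parallel_divergence(u, v)
# Positive divergence marks splat candidates; negative marks antisplats.
```

For this synthetic outflow the interior divergence is uniformly positive (0.2), which is exactly the signature a splat detector would threshold on.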

Details

Category

Duration

45 + 15
Host: Eduard Gröller

Speaker: Prof. Wenzel Jakob (Realistic Graphics Lab, EPFL Lausanne)

Realism has been a major driving force since the inception of the field of computer graphics, and algorithms that generate photorealistic images using physical simulations are now in widespread use. These algorithms are normally used in a “forward” sense: given an input scene, they produce an output image. In this talk, I will present two recent projects that turn this around, enabling applications to problems including 3D reconstruction, material design, and acquisition.

The first is "Mitsuba 2", a new rendering system that is able to automatically and simultaneously differentiate a complex simulation with respect to millions of parameters, which involves unique challenges related to programming languages, just-in-time compilation, and reverse-mode automatic differentiation. I will discuss several difficult inverse problems that can be solved by the combination of gradient-based optimization and a differentiable simulation: surface/volume reconstruction, caustic design, and scattering compensation for 3D printers.
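The inverse use of a differentiable simulation can be caricatured in a few lines: recover an unknown scene parameter by gradient descent on the mismatch between a rendered and an observed value. This is a toy stand-in with a hand-derived gradient, not the Mitsuba 2 API; in the real system, reverse-mode automatic differentiation supplies the gradient for millions of parameters at once:

```python
def render(albedo):
    # Stand-in "differentiable renderer": one pixel's brightness as a
    # smooth, monotone function of a single material parameter.
    return 0.8 * albedo + 0.1 * albedo ** 2

target = render(0.5)   # "observation" produced by the unknown albedo 0.5

albedo, lr = 0.9, 0.5  # initial guess and gradient-descent step size
for _ in range(200):
    residual = render(albedo) - target
    # d(residual**2)/d(albedo) by the chain rule; reverse-mode AD
    # would produce this derivative automatically.
    grad = 2.0 * residual * (0.8 + 0.2 * albedo)
    albedo -= lr * grad
# albedo has converged close to the true value 0.5
```

The surface/volume reconstruction and caustic design problems mentioned above follow the same loop, just with an image-space loss and vastly more parameters.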

In the second part of the talk, I will present an ongoing effort that aims to build a large database of material representations that encode the interaction of light and matter (e.g. metals, plastics, fabrics, etc.). Capturing this "essence" of a material is a challenging problem both from an optical and a computer science perspective due to the high-dimensional nature of the underlying space. I will show how an inverse approach can help evade the curse of dimensionality to acquire this information in a practical amount of time.

Bio: Wenzel Jakob is an assistant professor at EPFL's School of Computer and Communication Sciences, and is leading the Realistic Graphics Lab (https://rgl.epfl.ch/). His research interests revolve around material appearance modeling, rendering algorithms, and the high-dimensional geometry of light paths. Wenzel is the recipient of the ACM SIGGRAPH Significant Researcher award and the Eurographics Young Researcher Award. He is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of "Physically Based Rendering: From Theory To Implementation". (http://pbrt.org/)

 

Details

Category

Duration

60 + 15
Host: Michael Wimmer

Speaker: Prof. Ioannis Pitas (Aristotle University of Thessaloniki)

The aim of drone cinematography is to develop innovative intelligent single- and multiple-drone platforms for media production to cover outdoor events (e.g., sports) that are typically distributed over large expanses, ranging, for example, from a stadium to an entire city. The drone or drone team, to be managed by the production director and his/her production crew, will have: a) increased multiple drone decisional autonomy, hence allowing event coverage in the time span of around one hour in an outdoor environment and b) improved multiple drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms), enabling it to carry out its mission against errors or crew inaction and to handle emergencies. Such robustness is particularly important, as the drones will operate close to crowds and/or may face environmental hazards (e.g., wind). Therefore, it must be contextually aware and adaptive, towards maximizing shooting creativity and productivity, while minimizing production costs.
Drone vision plays an important role towards this end, covering the following topics: a) drone visual mapping and localization, b) drone visual analysis for target/obstacle/crowd/POI detection, c) 2D/3D target tracking and d) privacy protection technologies in drones (face de-identification).
This lecture will offer an overview of current research efforts on all related topics, ranging from visual semantic world mapping to multiple drone mission planning and control and to drone perception for autonomous target following, tracking and AV shooting.

Short Bio:

Prof. Ioannis Pitas (IEEE fellow, IEEE Distinguished Lecturer, EURASIP fellow) received the Diploma and PhD degree in Electrical Engineering, both from the Aristotle University of Thessaloniki, Greece. Since 1994, he has been a Professor at the Department of Informatics of the same University. He served as a Visiting Professor at several Universities.
His current interests are in the areas of autonomous systems, machine learning, computer vision, and 3D and biomedical imaging. He has published over 1090 papers, contributed to 50 books in his areas of interest, and edited or (co-)authored another 11 books. He has also been a member of the program committees of many scientific conferences and workshops. In the past he served as Associate Editor or co-Editor of 9 international journals and as General or Technical Chair of 4 international conferences. He participated in 69 R&D projects, primarily funded by the European Union, and is/was principal investigator/researcher in 41 such projects. He has 29200+ citations to his work and an h-index of 80+ (Google Scholar). Prof. Pitas leads the big European H2020 R&D project MULTIDRONE: https://multidrone.eu/. He chairs the IEEE Autonomous Systems Initiative (ASI): https://ieeeasi.signalprocessingsociety.org/

 

Details

Category

Duration

45 + 15
Host: Walter Kropatsch

Speaker: Prof. Xiaoru Yuan (Peking University, China)

In this talk, I will introduce a few recent works on tree visualization.

First, I will present a visualization technique for comparing topological structures and node attribute values of multiple trees. I will further introduce GoTree, a declarative grammar supporting the creation of a wide range of tree visualizations. On the application side, visualization and visual analytics of social media will be introduced. The data from social media can be considered as graphs or trees with complex attributes. A few approaches using a map metaphor for social media data visualization will be discussed.

http://vis.pku.edu.cn/yuanxiaoru/index_en.html

Details

Category

Duration

45 + 15
Host: Hsiang-Yun WU

Speaker: Prof. Yingcai Wu (College of Computer Science and Technology, Zhejiang University)

With the rapid development of sensing technologies and wearable devices, large amounts of sports data are acquired daily. These data usually carry a wide spectrum of information and rich knowledge about sports. However, extracting insights from complex sports data has become more challenging for analysts using traditional automatic approaches, such as data mining and statistical analysis. Visual analytics is an emerging research area which aims to support “analytical reasoning facilitated by interactive visual interfaces.” It has proven its value in tackling various important problems in sports science, such as tactics analysis in table tennis and formation analysis in soccer. Visual analytics enables coaches and analysts to cope with complex sports data in an interactive and intuitive manner. In this talk, I will discuss our research experiences in visual analytics of sports data and introduce several recent studies of our group on making sense of sports data through interactive visualization.

Bio:

Yingcai Wu is a ZJU100 Young Professor at the State Key Lab of CAD & CG, College of Computer Science and Technology, Zhejiang University. His main research interests are in visual analytics and human-computer interaction, with a focus on sports analytics, urban computing, and social media analysis. He obtained his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology (HKUST). Prior to his current position, he was a researcher at Microsoft Research Asia, Beijing, China from 2012 to 2015, and a postdoctoral researcher at the University of California, Davis from 2010 to 2012. He has published more than 50 refereed papers, including 28 in IEEE Transactions on Visualization and Computer Graphics (TVCG). Three of his papers have been awarded Honorable Mentions at IEEE VIS (SciVis) 2009, IEEE VIS (VAST) 2014, and IEEE PacificVis 2016. He was a paper co-chair of IEEE Pacific Visualization 2017, ChinaVis 2016, ChinaVis 2017, and VINCI 2014. He was also a guest editor of IEEE TVCG, ACM Transactions on Intelligent Systems and Technology (TIST), and IEEE Transactions on Multimedia.

Details

Category

Duration

45 + 15
Host: Hsiang-Yun WU

Speaker: Dr. Ciril Bohak (University of Ljubljana, Faculty of Computer and Information Science)

We are developing a web-based real-time visualization framework built on top of WebGL 2.0 with a deferred rendering pipeline, supporting mesh geometry data as well as volumetric data. The framework allows merging the rendering outputs of different modalities into a seamless final image. Users can add their own annotations to the 3D representations and share them with other users. The collaborative aspects cover sharing the scene, the rendering parameters, the camera view, and annotations. Users can also chat inside the framework. Recently we have paired the framework with a real-time volumetric path-tracing solution, allowing users to merge mesh geometry and volumetric rendering of data in the same visualization.

About: Ciril is a postdoctoral researcher and teaching assistant in the Laboratory for Computer Graphics and Multimedia, Faculty of Computer and Information Science, University of Ljubljana, Slovenia. His main research interests are computer graphics, game technology, and data visualization. His current research includes real-time medical and biological volumetric data visualization on the web, visualization of geodetic data (LiDAR and ortho-photo), and high-energy physics data visualization (in collaboration with CERN).

Details

Category

Duration

45 + 15
Host: Michael Wimmer

Speaker: Prof. Dr. Thomas Höllt (TU Delft)

High-dimensional single-cell data are nowadays collected routinely for multiple applications in biology. Standard tools for the analysis of these data do not scale well with regard to the number of dimensions or the number of cells. To tackle these issues, we have extended and created new dimensionality reduction techniques such as A-tSNE [1] and HSNE [2,3]. We have implemented these in our integrated single-cell analysis framework Cytosplore and created new interaction methods such as CyteGuide [4] and Focus+Context for HSNE [5].

This presentation will give an overview of the Cytosplore Visual Analytics framework and highlight some of its domain applications.

[1]Approximated and User Steerable tSNE for Progressive Visual Analytics, IEEE Transactions on Visualization and Computer Graphics, 2017
[2] Hierarchical Stochastic Neighbor Embedding, Computer Graphics Forum (Proceedings of EuroVis 2016), 2016
[3] Visual Analysis of Mass Cytometry Data by Hierarchical Stochastic Neighbor Embedding Reveals Rare Cell Types, Nature Communications, 2017
[4] CyteGuide: Visual Guidance for Hierarchical Single-Cell Analysis, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis 2017), 2018
[5] Focus+Context Exploration of Hierarchical Embeddings, Computer Graphics Forum (Proceedings of EuroVis 2019), 2019

Assistant Professor for Visualization at Leiden Computational Biology Center at LUMC
Visiting Researcher at Computer Graphics and Visualization group at TU Delft

Details

Category

Duration

45 + 15
Host: Renata Raidou

Speaker: Prof. Dr. Mario Botsch (Computer Graphics & Geometry Processing Group, Universität Bielefeld)

Digital models of humans are frequently used in computer games or the special effects movie industry. In this talk I will first describe how to efficiently generate realistic avatars through 3D-scanning and template fitting, and demonstrate their advantages over generic avatars in virtual reality scenarios. Medical applications can also benefit from virtual humans. In the context of craniofacial reconstruction I will show how digital head models allow us to estimate possible face shapes from a given skull, and to estimate a person's skull from a surface scan of the face.

Short Bio:

Mario Botsch is professor in the Computer Science Department at Bielefeld University, where he leads the Computer Graphics & Geometry Processing Group. He received his MSc in mathematics from the University of Erlangen-Nürnberg and his PhD in computer science from RWTH Aachen, and did his post-doc studies at ETH Zurich. The focus of his research is the efficient acquisition, optimisation, animation, and visualisation of three-dimensional geometric objects. He is currently investigating 3D-scanning and motion-capturing of humans, modelling and animation of virtual characters, and real-time visualisation in interactive virtual reality scenarios.
 

Details

Category

Duration

60 + 15

Speaker: Andrew Glassner (The Imaginary Institute)

Graphics research into fundamental algorithms and systems is important. But sometimes it's just as important to relax and use our hard-won graphics techniques to help us understand the world, or simulate it, or just have fun creating beautiful imagery. For 10 years Andrew Glassner wrote a bi-monthly column in IEEE Computer Graphics & Applications where the topics ranged from purely speculative to practical. In this talk, we'll quickly survey a half-dozen favorite topics.

Biography:
Dr. Andrew Glassner is a researcher and consultant in computer graphics and machine learning. Glassner has worked at Bell Communications Research, the IBM Watson Research Lab, Xerox PARC, and Microsoft Research. His technical books include the textbook "Principles of Digital Image Synthesis" and the three volumes of "Andrew Glassner's Notebook". He created the "Graphics Gems" series, founded the Journal of Computer Graphics Tools, served as editor-in-chief of ACM Transactions on Graphics, and was Papers Chair for SIGGRAPH '94. Glassner created, wrote, and directed the multiplayer internet game "Dead Air" for Microsoft, as well as the animated short film "Chicken Crossing" and several live-action short films.

Details

Category

Duration

45 + 15

Speaker: Yaghmorasan Benzian (Département d'Informatique, Université Abou Bekr Belkaid-Tlemcen, Algérie)

This talk presents published work on mesh classification and image segmentation. The first work is a mesh classification approach based on region growing and discrete curvature criteria (mean and Gaussian curvatures). The second is a method for medical image segmentation by level sets controlled by fuzzy rules, using local statistical constraints and low-resolution image analysis. The third work presents fuzzy c-means segmentation that also integrates multi-resolution image analysis.
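Fuzzy c-means, used in the third work, assigns each pixel a membership degree to every cluster rather than a hard label. A minimal sketch on 1-D gray values in plain Python (without the fuzzy rules or the multi-resolution analysis the talk adds; initialisation and parameters chosen here for illustration):

```python
def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iters=50, eps=1e-9):
    """Minimal fuzzy c-means on scalar values (e.g. gray levels).

    m > 1 is the fuzzifier; memberships u[i][j] lie in [0, 1] and
    sum to 1 over the clusters j for every value i.
    """
    # Deterministic initialisation: spread centers over the value range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (j + 0.5) / n_clusters for j in range(n_clusters)]
    power = 2.0 / (m - 1.0)
    for _ in range(n_iters):
        # Update memberships from the distances to the current centers.
        u = []
        for x in values:
            d = [abs(x - c) + eps for c in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** power for k in range(n_clusters))
                      for j in range(n_clusters)])
        # Update centers as membership-weighted means.
        centers = [
            sum((u[i][j] ** m) * values[i] for i in range(len(values)))
            / sum(u[i][j] ** m for i in range(len(values)))
            for j in range(n_clusters)
        ]
    return centers, u

centers, memberships = fuzzy_c_means([0.1, 0.15, 0.2, 0.85, 0.9, 0.95])
```

On these six gray values the two centers converge near 0.15 and 0.9, with each value's memberships expressing how strongly it belongs to either cluster.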

Biography:

Mohamed Yaghmorasan Benzian has been with the Computer Science Department at Abou Bekr Belkaid University, Tlemcen, Algeria, since 2001. He received his engineering degree in 1993, his MS degree in 2001, and his PhD in 2017 from the Mohamed Boudiaf University of Science and Technology of Oran. He has been a member of the SIMPA Laboratory at the University of Science and Technology of Oran since 2006. He has published many research papers in national and international conferences. His research interests are image processing, 3D reconstruction, and modeling.

Details

Category

Duration

45 + 15

Speaker: Prof. Tobias Schreck (TU Graz, Institut Computer Graphik und Wissensvisualisierung)

Visual Analytics aims to support data analysis and exploration using interactive data visualization, tightly coupled with automatic data analysis methods. In this talk, we will introduce recent research in Visual Analytics at the Institute of Computer Graphics and Knowledge Visualization at TU Graz. After a brief introduction, we will first present approaches for visual similarity search and regression modeling in time series and scatter plot data, based on user sketches and lenses. Then, we will present approaches for visual analysis of movement data in team sports based on suitable visual data abstractions. In a third part, we will comment on recent research interest in guidance in visual data analysis, and describe our first ideas based on user eye tracking and relevance feedback. A summary concludes the talk.

Details

Category

Duration

50 + 10

Speaker: Prof. Helwig Hauser (University of Bergen)

Visualization is embracing the new paradigm of data science, where hypotheses are formulated on the basis of existing data, for example from medical cohort studies or ensemble simulations. Data scientists are supported both by new methods for the interactive visual exploration of rich datasets and by solutions that integrate automated analysis techniques with interactive visual methods. In this talk, we discuss interactive visual hypothesis generation and recent related work. We also look at interactive visual steering, where interactive visual solutions are used to enter an iterative process of modeling. Furthermore, we attempt a look into the future of potentially upcoming visualization research, in the hope of spawning an interesting discussion.

Biographical Note

Helwig Hauser graduated in 1995 from the Vienna University of Technology (TU Wien) in Austria and finished his PhD project on the visualization of dynamical systems (flow visualization) in 1998. In 2003, he completed his Habilitation at TU Wien, entitled ''Generalizing Focus+Context Visualization''. After first working for TU Wien as assistant and later as assistant professor (1994–), he moved in 2000 to the then new VRVis Research Center (as one of the founding team), where he led the basic research group on interactive visualization until 2003, before becoming the scientific director of VRVis. Since 2007, he has been professor in visualization at the University of Bergen in Norway, where he built up a new research group on visualization; see ii.UiB.no/vis

Details

Category

Duration

45 + 15
Host: Meister Edi Gröller

Speaker: Alexander Keller (NVIDIA)

Synthesizing images that cannot be distinguished from photographs has long been the holy grail of computer graphics. With the path tracing revolution in the movie industry, high-quality image synthesis is finally based on ray tracing. With the advent of hardware for accelerated ray tracing, the challenge now is to simulate light transport in real time. We therefore introduce the relations between the integral equation ruling light transport and reinforcement learning, and then survey the building blocks that will enable global illumination simulation in real time.
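The structural analogy the abstract refers to can be stated compactly: both the rendering equation and the fixed-point equations of reinforcement learning have the form "source term plus recursively weighted value of the next state". A sketch of the correspondence, with symbols chosen here for illustration rather than taken from the talk:

```latex
% Light transport: radiance = emission + scattered incident radiance,
% where h(x,\omega_i) is the point hit from x in direction \omega_i.
L(x,\omega) \;=\; L_e(x,\omega)
  \;+\; \int_{\Omega} f_s(\omega_i, x, \omega)\,\cos\theta_i\;
        L\bigl(h(x,\omega_i), -\omega_i\bigr)\,\mathrm{d}\omega_i

% Reinforcement learning (one continuous Bellman-style form):
% value = immediate reward + discounted expected future value.
Q(s,a) \;=\; r(s,a)
  \;+\; \gamma \int_{A} \pi(a' \mid s')\, Q(s',a')\,\mathrm{d}a'
```

Reading $L$ as a value function, emission as reward, and the scattering term as the transition-weighted expectation is what lets ideas like importance-driven sampling be borrowed from reinforcement learning for light transport simulation.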

Short Bio

Alexander Keller is a director of research at NVIDIA. Before that, he was the Chief Scientist of mental images, where he was responsible for research and the conception of future products and strategies, including the design of the NVIDIA Iray renderer. Prior to industry, he worked as a full professor for computer graphics and scientific computing at Ulm University, where he co-founded the UZWR (Ulmer Zentrum für wissenschaftliches Rechnen) and received an award for excellence in teaching. Alexander Keller holds a PhD, has authored more than 27 granted patents, and has published more than 50 research articles.

Alexander Keller has led and pursued foundational and applied research in the fields of computer graphics, simulation, quasi-Monte Carlo methods, and machine learning for more than 25 years. He has pioneered quasi-Monte Carlo methods for light transport simulation and initiated some of the fastest and most robust ray tracing technologies. His research results are manifested in industry-leading products like mental ray and NVIDIA Iray, aside from many more implementations in academic and professional software and products.

 

Details

Category

Duration

45 + 15