
Previous Talks

Speaker: Gábor Sörös (Nokia Bell Labs, Budapest, Hungary)

Augmented reality (AR) has the potential to become the universal user interface between technology-augmented humans and a technology-augmented world by interactively connecting physical objects and digital information in space and time. I will present several examples of how AR technology enables us to reveal invisible processes, to see and hear things that happen at a different time or place, to intuitively configure and control smart environments, and to visualize simulated insights and a beautified reality.

Bio:

Gábor Sörös is a research scientist at Nokia Bell Labs in Budapest. His research interests span mobile and wearable computer vision, augmented reality (AR), and computational photography, extending towards augmenting humans with wearable technology, among other things for interaction with smart objects.

He studied electrical engineering in Budapest (HU) and Karlsruhe (DE) with a focus on communication systems, and visual computing in Vienna (AT) with a focus on mobile augmented reality. He obtained his PhD degree in computer science at the Institute for Pervasive Computing of ETH Zurich (CH).

During his undergraduate studies, he worked as a student assistant at the Computer and Automation Research Institute of the Hungarian Academy of Sciences. From 2011 to 2015, he was the scientific advisor of the ETH spin-off Scandit on visual code scanning and product AR with mobile and wearable devices. In 2014, he completed an R&D internship on mobile AR at Qualcomm Research. Between 2016 and 2019, he was a postdoctoral researcher at ETH Zurich, and he also worked as the lead engineer of the ETH spin-off Kapanu on AR for dentistry. Since 2019, he has also been a technical advisor to the ETH spin-off Arbrea Labs on AR for cosmetic surgery. He joined Bell Labs in 2019 and is working on augmented intelligence.

Speaker: Prof. Ingrid Hotz (Linköping University)

In this seminar, I will talk about my experiences with first industry contacts that arose from a collaboration with a research group in mechanical engineering. The group's interest lies in the virtual development process of industrial parts, especially the analysis and modeling of fiber-reinforced polymers. Virtual product development based on simulations is today standard in many industrial and university environments. The models are becoming increasingly complex in their geometric design and the materials used. A lot of money and effort is invested in the development of new simulation software and virtual models, with impressive results. However, the analysis of the simulation results is becoming more and more demanding, and comparatively little effort is made to provide tools that exploit the data in its full diversity, from scalars to tensor fields. The collaboration originally focused on the development of novel tensor field visualization methods, but then moved more and more towards applying the entire zoo of basic visualization methods in a specific application. My talk is based on a presentation that I gave at the German industrial meeting on plastics and simulations last year in Munich and on the responses that I got from this meeting.

Short Bio: Ingrid Hotz is currently a Professor of Scientific Visualization at Linköping University. She received a Ph.D. degree in Computer Science from the University of Kaiserslautern, Germany. After a postdoc at the University of California, Davis in the USA, she led an Emmy Noether research group at the Zuse Institute Berlin. For two years she led the scientific visualization group at the German Aerospace Center (DLR) in Braunschweig. Her research interests include data analysis and scientific visualization, ranging from basic research questions to effective solutions to visualization problems in applications. This includes developing and applying concepts originating from different areas of computer science and mathematics, such as computer graphics, computer vision, dynamical systems, computational geometry, and combinatorial topology.

Details

Duration: 45 + 15
Host: Eduard Gröller

Speaker: Prof. Dr. Gerik Scheuermann (Universität Leipzig)

Splats and antisplats are specific flow features where a flow impinges on an immersed wall. They affect the surface stress and play a major role in heat transfer between fluid and solid wall. On free boundaries or boundaries between two fluids, like water and air, they are also known as upwelling. We present the first algorithm to detect such structures within the velocity field of fluid flows. Furthermore, we demonstrate the relevance and effectiveness of this method by showing results for two turbulent flows simulated by direct numerical simulation (DNS): the flow over a backward-facing step and the flow through a turbine cascade.
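The physical signature behind such a detector can be made concrete: a splat is flow arriving at the wall and spreading outward, i.e. negative wall-normal velocity together with positive divergence of the wall-parallel velocity near the wall (an antisplat is the converse). A minimal numerical sketch of that criterion on a synthetic stagnation flow — an illustration of the underlying idea only, not the authors' algorithm:

```python
# Synthetic 2D stagnation ("splat-like") flow above a wall at y = 0:
# u = x is the wall-parallel component, v = -y points toward the wall.
def u(x, y):
    return x

def v(x, y):
    return -y

def wall_parallel_divergence(x, y, h=1e-5):
    # Central finite difference of du/dx in a plane just above the wall.
    return (u(x + h, y) - u(x - h, y)) / (2 * h)

y_near_wall = 0.01
div = wall_parallel_divergence(0.0, y_near_wall)
impinging = v(0.0, y_near_wall) < 0       # flow moves toward the wall
print(div > 0 and impinging)              # positive divergence + impingement -> splat
```

On real DNS data the same two quantities would be evaluated on the simulation grid near the wall instead of from closed-form velocity functions.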

Details

Duration: 45 + 15
Host: Eduard Gröller

Speaker: Prof. Wenzel Jakob (Realistic Graphics Lab, EPFL Lausanne)

Realism has been a major driving force since the inception of the field of computer graphics, and algorithms that generate photorealistic images using physical simulations are now in widespread use. These algorithms are normally used in a "forward" sense: given an input scene, they produce an output image. In this talk, I will present two recent projects that turn this around, enabling applications such as 3D reconstruction, material design, and acquisition.

The first is "Mitsuba 2", a new rendering system that is able to automatically and simultaneously differentiate a complex simulation with respect to millions of parameters, which involves unique challenges related to programming languages, just-in-time compilation, and reverse-mode automatic differentiation. I will discuss several difficult inverse problems that can be solved by the combination of gradient-based optimization and a differentiable simulation: surface/volume reconstruction, caustic design, and scattering compensation for 3D printers.
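The general recipe behind such inverse problems — run a differentiable simulation forward, compare against an observation, and push the error gradient back onto the scene parameters — can be illustrated with a toy one-parameter "simulation". This is a generic sketch of gradient-based inverse optimization; it has nothing to do with Mitsuba 2's actual API, and all names are made up:

```python
# Toy differentiable "simulation": a pixel value is albedo * light intensity.
# We recover the unknown albedo from an observed pixel by gradient descent,
# using the hand-derived gradient (what reverse-mode AD would compute).
light = 2.0
observed = 1.2                      # produced by the unknown albedo 0.6

def forward(albedo):
    return albedo * light           # the forward simulation

def loss_and_grad(albedo):
    residual = forward(albedo) - observed
    loss = residual ** 2            # squared-error image loss
    grad = 2.0 * residual * light   # d(loss)/d(albedo) via the chain rule
    return loss, grad

albedo = 0.0                        # initial guess
for _ in range(100):
    _, g = loss_and_grad(albedo)
    albedo -= 0.1 * g               # gradient-descent step
print(round(albedo, 3))             # converges to 0.6
```

Mitsuba 2's contribution is making this loop work when the forward pass is a full light-transport simulation with millions of parameters, which is where just-in-time compilation and reverse-mode automatic differentiation come in.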

In the second part of the talk, I will present an ongoing effort that aims to build a large database of material representations that encode the interaction of light and matter (e.g. metals, plastics, fabrics, etc.). Capturing this "essence" of a material is a challenging problem from both an optical and a computer science perspective due to the high-dimensional nature of the underlying space. I will show how an inverse approach can help evade the curse of dimensionality and acquire this information in a practical amount of time.

Bio: Wenzel Jakob is an assistant professor at EPFL's School of Computer and Communication Sciences, and is leading the Realistic Graphics Lab (https://rgl.epfl.ch/). His research interests revolve around material appearance modeling, rendering algorithms, and the high-dimensional geometry of light paths. Wenzel is the recipient of the ACM SIGGRAPH Significant Researcher award and the Eurographics Young Researcher Award. He is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of "Physically Based Rendering: From Theory To Implementation" (http://pbrt.org/).

 

Details

Duration: 60 + 15
Host: Michael Wimmer

Speaker: Prof. Ioannis Pitas (Aristotle University of Thessaloniki)

The aim of drone cinematography is to develop innovative intelligent single- and multiple-drone platforms for media production to cover outdoor events (e.g., sports) that are typically distributed over large expanses, ranging, for example, from a stadium to an entire city. The drone or drone team, to be managed by the production director and his/her production crew, will have: a) increased multiple-drone decisional autonomy, allowing event coverage over a time span of around one hour in an outdoor environment, and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance, and emergency landing mechanisms), enabling it to carry out its mission despite errors or crew inaction and to handle emergencies. Such robustness is particularly important, as the drones will operate close to crowds and/or may face environmental hazards (e.g., wind). Therefore, the system must be contextually aware and adaptive, maximizing shooting creativity and productivity while minimizing production costs.
Drone vision plays an important role towards this end, covering the following topics: a) drone visual mapping and localization, b) drone visual analysis for target/obstacle/crowd/POI detection, c) 2D/3D target tracking and d) privacy protection technologies in drones (face de-identification).
This lecture will offer an overview of current research efforts on all related topics, ranging from visual semantic world mapping to multiple drone mission planning and control and to drone perception for autonomous target following, tracking and AV shooting.

Short Bio:

Prof. Ioannis Pitas (IEEE Fellow, IEEE Distinguished Lecturer, EURASIP Fellow) received the Diploma and PhD degrees in Electrical Engineering, both from the Aristotle University of Thessaloniki, Greece. Since 1994, he has been a Professor at the Department of Informatics of the same university. He has served as a Visiting Professor at several universities.
His current interests are in the areas of autonomous systems, machine learning, computer vision, and 3D and biomedical imaging. He has published over 1090 papers, contributed to 50 books in his areas of interest, and edited or (co-)authored another 11 books. He has also been a member of the program committees of many scientific conferences and workshops. In the past he served as Associate Editor or co-Editor of 9 international journals and General or Technical Chair of 4 international conferences. He participated in 69 R&D projects, primarily funded by the European Union, and is/was principal investigator/researcher in 41 such projects. He has 29200+ citations to his work and an h-index of 80+ (Google Scholar). Prof. Pitas leads the big European H2020 R&D project MULTIDRONE: https://multidrone.eu/. He is chair of the IEEE Autonomous Systems Initiative (ASI): https://ieeeasi.signalprocessingsociety.org/

 

Details

Duration: 45 + 15
Host: Walter Kropatsch

Speaker: Prof. Xiaoru Yuan (Peking University, China)

In this talk, I will introduce some recent work on tree visualization.

First, I will present a visualization technique for comparing topological structures and node attribute values of multiple trees. I will further introduce GoTree, a declarative grammar supporting the creation of a wide range of tree visualizations. On the application side, visualization and visual analytics of social media data will be introduced. The data from social media can be considered as graphs or trees with complex attributes. A few approaches using a map metaphor for social media data visualization will be discussed.

http://vis.pku.edu.cn/yuanxiaoru/index_en.html

Details

Duration: 45 + 15
Host: Hsiang-Yun WU

Speaker: Prof. Yingcai Wu (College of Computer Science and Technology, Zhejiang University)

With the rapid development of sensing technologies and wearable devices, large amounts of sports data are acquired daily. The data usually carries a wide spectrum of information and rich knowledge about sports. However, extracting insights from complex sports data has become more challenging for analysts using traditional automatic approaches, such as data mining and statistical analysis. Visual analytics is an emerging research area which aims to support “analytical reasoning facilitated by interactive visual interfaces.” It has proven its value in tackling various important problems in sports science, such as tactics analysis in table tennis and formation analysis in soccer. Visual analytics enables coaches and analysts to cope with complex sports data in an interactive and intuitive manner. In this talk, I will discuss our research experiences in visual analytics of sports data and introduce several recent studies by our group on making sense of sports data through interactive visualization.

Bio:

Yingcai Wu is a ZJU100 Young Professor at the State Key Lab of CAD & CG, College of Computer Science and Technology, Zhejiang University. His main research interests are in visual analytics and human-computer interaction, with a focus on sports analytics, urban computing, and social media analysis. He obtained his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology (HKUST). Prior to his current position, he was a researcher at Microsoft Research Asia, Beijing, China from 2012 to 2015, and a postdoctoral researcher at the University of California, Davis from 2010 to 2012. He has published more than 50 refereed papers, including 28 in IEEE Transactions on Visualization and Computer Graphics (TVCG). Three of his papers were awarded Honorable Mentions at IEEE VIS (SciVis) 2009, IEEE VIS (VAST) 2014, and IEEE PacificVis 2016. He was a paper co-chair of IEEE Pacific Visualization 2017, ChinaVis 2016, ChinaVis 2017, and VINCI 2014, and a guest editor of IEEE TVCG, ACM Transactions on Intelligent Systems and Technology (TIST), and IEEE Transactions on Multimedia.

Details

Duration: 45 + 15
Host: Hsiang-Yun WU

Speaker: Dr. Ciril Bohak (University of Ljubljana, Faculty of Computer and Information Science)

We are developing a web-based real-time visualization framework built on top of WebGL 2.0, with a deferred rendering pipeline supporting mesh geometry data as well as volumetric data. The framework allows merging the rendering outputs of different modalities into a seamless final image. Users can add their own annotations to the 3D representations and share them with other users. The collaborative aspects cover sharing the scene, the rendering parameters, the camera view, and annotations; users can also chat inside the framework. Recently, we have paired the framework with a real-time volumetric path-tracing solution, allowing users to merge mesh geometry and volumetric rendering of data in the same visualization.

About: Ciril is a postdoctoral researcher and teaching assistant in the Laboratory for Computer Graphics and Multimedia, Faculty of Computer and Information Science, University of Ljubljana, Slovenia. His main research interests are computer graphics, game technology, and data visualization. His current research includes real-time medical and biological volumetric data visualization on the web, visualization of geodetic data (LiDAR and ortho-photo), and high-energy physics data visualization (in collaboration with CERN).

Details

Duration: 45 + 15
Host: Michael Wimmer

Speaker: Prof. Dr. Thomas Höllt (TU Delft)

High dimensional single-cell data is nowadays collected routinely for multiple applications in biology. Standard tools for the analysis of these data do not scale well with regard to the number of dimensions or the number of cells. To tackle these issues, we have extended and created new dimensionality reduction techniques such as A-tSNE[1] and HSNE[2,3]. We have implemented these in our integrated single-cell analysis framework Cytosplore and created new interaction methods such as CyteGuide[4] and Focus+Context for HSNE[5].

This presentation will give an overview of the Cytosplore Visual Analytics framework and highlight some of its domain applications.

[1] Approximated and User Steerable tSNE for Progressive Visual Analytics, IEEE Transactions on Visualization and Computer Graphics, 2017
[2] Hierarchical Stochastic Neighbor Embedding, Computer Graphics Forum (Proceedings of EuroVis 2016), 2016
[3] Visual Analysis of Mass Cytometry Data by Hierarchical Stochastic Neighbor Embedding Reveals Rare Cell Types, Nature Communications, 2017
[4] CyteGuide: Visual Guidance for Hierarchical Single-Cell Analysis, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis 2017), 2018
[5] Focus+Context Exploration of Hierarchical Embeddings, Computer Graphics Forum (Proceedings of EuroVis 2019), 2019

Assistant Professor for Visualization at Leiden Computational Biology Center at LUMC
Visiting Researcher at Computer Graphics and Visualization group at TU Delft

Details

Duration: 45 + 15
Host: Renata Raidou

Speaker: Prof. Dr. Mario Botsch (Computer Graphics & Geometry Processing Group, Universität Bielefeld)

Digital models of humans are frequently used in computer games or the special-effects movie industry. In this talk I will first describe how to efficiently generate realistic avatars through 3D scanning and template fitting, and demonstrate their advantages over generic avatars in virtual reality scenarios. Medical applications can also benefit from virtual humans. In the context of craniofacial reconstruction I will show how digital head models allow us to estimate possible face shapes from a given skull, and to estimate a person's skull from a surface scan of the face.

Short Bio:

Mario Botsch is a professor in the Computer Science Department at Bielefeld University, where he leads the Computer Graphics & Geometry Processing Group. He received his MSc in mathematics from the University of Erlangen-Nürnberg and his PhD in computer science from RWTH Aachen, and did his post-doc studies at ETH Zurich. The focus of his research is the efficient acquisition, optimisation, animation, and visualisation of three-dimensional geometric objects. He is currently investigating 3D-scanning and motion-capturing of humans, modelling and animation of virtual characters, and real-time visualisation in interactive virtual reality scenarios.
 

Details

Duration: 60 + 15

Speaker: Andrew Glassner (The Imaginary Institute)

Graphics research into fundamental algorithms and systems is important. But sometimes it's just as important to relax and use our hard-won graphics techniques to help us understand the world, or simulate it, or just have fun creating beautiful imagery. For 10 years Andrew Glassner wrote a bi-monthly column in IEEE Computer Graphics & Applications where the topics ranged from the purely speculative to the practical. In this talk, we'll quickly survey a half-dozen favorite topics.

Biography:
Dr. Andrew Glassner is a researcher and consultant in computer graphics and machine learning. Glassner has worked at Bell Communications Research, the IBM Watson Research Lab, Xerox PARC, and Microsoft Research. His technical books include the textbook "Principles of Digital Image Synthesis" and the three volumes of "Andrew Glassner's Notebook". He created the "Graphics Gems" series, founded the Journal of Computer Graphics Tools, served as editor-in-chief of ACM Transactions on Graphics, and was Papers Chair for SIGGRAPH '94. Glassner created, wrote, and directed the multiplayer Internet game "Dead Air" for Microsoft, as well as the animated short film "Chicken Crossing" and several live-action short films.

Details

Duration: 45 + 15

Speaker: Yaghmorasan Benzian (Département d'Informatique, Université Abou Bekr Belkaid-Tlemcen, Algérie)

This talk presents published works on mesh classification and image segmentation. The first work presents a mesh classification approach based on region growing and a discrete curvature criterion (mean and Gaussian curvatures). The second work addresses medical image segmentation by level sets controlled by fuzzy rules; the method uses local statistical constraints and low-resolution image analysis. The third work presents fuzzy c-means segmentation, likewise integrating multi-resolution image analysis.

Biography:

Mohamed Yaghmorasan Benzian has been with the computer science department at Abou Bekr Belkaid University, Tlemcen, Algeria, since 2001. He received his engineering degree in 1993, his MS degree in 2001, and his PhD in 2017 from the Mohamed Boudiaf University of Science and Technology of Oran. He has been a member of the SIMPA Laboratory at the University of Science and Technology of Oran since 2006. He has published research papers in national and international conferences. His research interests are image processing, 3D reconstruction, and modeling.

Details

Duration: 45 + 15

Speaker: Prof. Tobias Schreck (TU Graz, Institut Computer Graphik und Wissensvisualisierung)

Visual Analytics aims to support data analysis and exploration using interactive data visualization, tightly coupled with automatic data analysis methods. In this talk, we will introduce recent research in Visual Analytics at the Institute of Computer Graphics and Knowledge Visualization at TU Graz. After a brief introduction, we will first present approaches for visual similarity search and regression modeling in time series and scatter plot data, based on user sketches and lenses. Then, we will present approaches for visual analysis of movement data in team sports based on suitable visual data abstractions. In a third part, we will comment on recent research interest in guidance in visual data analysis, and describe our first ideas based on user eye tracking and relevance feedback. A summary concludes the talk.

Details

Duration: 50 + 10

Speaker: Prof. Helwig Hauser (University of Bergen)

Visualization is embracing the new paradigm of data science, where hypotheses are formulated on the basis of existing data, for example from medical cohort studies or ensemble simulations. New methods for the interactive visual exploration of rich datasets support data scientists, as do solutions based on the integration of automated analysis techniques with interactive visual methods. In this talk, we discuss interactive visual hypothesis generation and recent related work. We also look at interactive visual steering, where interactive visual solutions are used to enter an iterative process of modeling. Furthermore, an attempt to look into the future of visualization research is also included, with the hope of spawning an interesting discussion.

Biographical Note

Helwig Hauser graduated in 1995 from the Vienna University of Technology (TU Wien) in Austria and finished his PhD project on the visualization of dynamical systems (flow visualization) in 1998. In 2003, he did his Habilitation at TU Wien, entitled "Generalizing Focus+Context Visualization". After first working for TU Wien as an assistant and later as an assistant professor (1994–), he moved to the then-new VRVis Research Center in 2000 (having been one of the founding team), where he led the basic research group on interactive visualization (until 2003) before becoming the scientific director of VRVis. Since 2007, he has been a professor of visualization at the University of Bergen in Norway, where he built up a new research group on visualization; see ii.UiB.no/vis

Details

Duration: 45 + 15
Host: Meister Edi Gröller

Speaker: Alexander Keller (NVIDIA)

Synthesizing images that cannot be distinguished from photographs has long been the holy grail of computer graphics. With the path tracing revolution in the movie industry, high-quality image synthesis is finally based on ray tracing. With the advent of hardware for accelerated ray tracing, the challenge now is to simulate light transport in real time. We therefore introduce the relation between the integral equation governing light transport and reinforcement learning, and then survey the building blocks that will enable global illumination simulation in real time.
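The analogy between light transport and reinforcement learning mentioned above can be sketched compactly: the rendering equation and the Q-learning fixed point (written here in expectation form) share the same "source term plus weighted recursion" structure. This is a simplified presentation, not necessarily the speaker's exact formulation:

```latex
% Rendering equation: radiance L leaving point x in direction omega
L(x,\omega) = L_e(x,\omega)
    + \int_{\mathcal{S}^2} f_s(\omega_i, x, \omega)\,\cos\theta_i\,
      L\bigl(h(x,\omega_i), -\omega_i\bigr)\,\mathrm{d}\omega_i

% Q-learning fixed point in expectation form: value of action a in state s
Q(s,a) = r(s,a)
    + \gamma \int_{\mathcal{A}} \pi(a' \mid s')\, Q(s',a')\,\mathrm{d}a'
```

Identifying positions with states, directions with actions, emitted radiance with reward, and the cosine-weighted BSDF with the policy-weighted discount maps one equation onto the other; this correspondence is what allows reinforcement-learning machinery to guide importance sampling in light transport simulation.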

Short Bio

Alexander Keller is a director of research at NVIDIA. Before that, he was the Chief Scientist of mental images, where he was responsible for research and the conception of future products and strategies, including the design of the NVIDIA Iray renderer. Prior to industry, he worked as a full professor of computer graphics and scientific computing at Ulm University, where he co-founded the UZWR (Ulmer Zentrum für wissenschaftliches Rechnen) and received an award for excellence in teaching. Alexander Keller holds a PhD, authored more than 27 granted patents, and published more than 50 research articles.

Alexander Keller has led and pursued foundational and applied research in the fields of computer graphics, simulation, quasi-Monte Carlo methods, and machine learning for more than 25 years. He pioneered quasi-Monte Carlo methods for light transport simulation and initiated some of the fastest and most robust ray tracing technologies. His research results are manifested in industry-leading products like mental ray and NVIDIA Iray, aside from many more implementations in academic and professional software and products.

 

Details

Duration: 45 + 15

Speaker: Prof. Art Olson (The Scripps Research Institute, CA USA)

The ability to create structural models of cells and cellular components at the molecular level is being driven by advances ranging from proteomics and expression profiling to ever more powerful imaging approaches. It is being enabled by technology leaps in computation, informatics, and visualization. 

Our CellPACK program is a tool for generating structural models of cellular environments at the molecular and atomic level. Recently, we have developed a GPU-based implementation of CellPACK that speeds up the process by orders of magnitude, in what we have termed “instant packing.” This enables interactive exploration and manipulation of components of the packings. Visualization of these models is likewise enabled by GPU-based efficient representations, renderings, and levels of detail. The ability to model complex cellular components, such as a bacterial nucleoid, and distinct phases is a significant challenge that we continue to work on. Our lab's recent lattice-based method for rapidly producing bacterial nucleoids is a prototype for other rule-based generative structure builders. The use of GPU-based physics engines enables real-time interaction with dynamic models. FleX, a real-time constraint solver from NVIDIA, is capable of interactive constraint minimization of up to 1 million particles in real time. Such systems can quickly resolve clashes and identify interactions in the crowded cellular environment.
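The clash resolution such particle-based solvers perform can be reduced to a single projection step: overlapping spheres are pushed apart along their center line until they just touch, repeated over all pairs until the system relaxes. A minimal sketch of one such projection — a generic position-based step, not CellPACK's or FleX's actual code:

```python
import math

def resolve_clash(p, q, radius):
    # Push two equal-radius spheres apart symmetrically until they just touch.
    d = math.dist(p, q)
    overlap = 2 * radius - d
    if overlap <= 0:
        return p, q                      # no clash: leave positions unchanged
    direction = [(b - a) / d for a, b in zip(p, q)]  # unit vector p -> q
    half = overlap / 2
    p_new = [a - half * u for a, u in zip(p, direction)]
    q_new = [b + half * u for b, u in zip(q, direction)]
    return p_new, q_new

p2, q2 = resolve_clash([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], radius=1.0)
print(round(math.dist(p2, q2), 6))       # 2.0: the spheres now just touch
```

A GPU solver performs millions of such projections per frame in parallel, which is what makes interactive minimization of crowded molecular packings feasible.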

In a parallel effort to CellPACK, we have developed CellPAINT, which has its origins in David Goodsell's watercolor paintings of cellular environments and processes. CellPAINT uses a painting metaphor to enable the creation of Goodsell-like images interactively within a Unity game engine. These images can be animated using a simple Brownian-motion-based diffusion model. Recently, we have expanded this interactive interface into 3D and have also implemented it in a virtual reality environment.

The talk will include live interactive demonstrations of the current state of our software.

Details

Duration: 40 + 10
Host: Ivan Viola

Speaker: Prof. Martin Eisemann (Institut für Informatik, TH Köln)

In this talk I will introduce our work on how to use visualization and visual analytics techniques to solve problems in physically-based rendering, specifically in Monte Carlo rendering. We show possible applications in various areas, such as improving rendering speed, reducing error, or optimizing object placement.

For more information see Martin Eisemann's webpage: https://www.th-koeln.de/personen/martin.eisemann/
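For orientation, the central object in Monte Carlo rendering is a random estimator whose error decays with the sample count; visual-analysis tools of the kind described here inspect exactly such error behavior. A generic sketch of the estimator (not the speaker's system):

```python
import math
import random

def mc_estimate(f, n, rng):
    # Unbiased Monte Carlo estimate of the integral of f over [0, 1]:
    # average f at n uniformly random sample points.
    return sum(f(rng.random()) for _ in range(n)) / n

truth = 1.0 - math.cos(1.0)          # exact integral of sin over [0, 1]
rng = random.Random(0)
for n in (100, 10_000):
    err = abs(mc_estimate(math.sin, n, rng) - truth)
    print(f"n={n:6d}  |error|={err:.5f}")   # error shrinks roughly as 1/sqrt(n)
```

In a renderer, f is the light-transport integrand per pixel; visualizing where and why the per-pixel error stays large is one way such analysis tools guide speed and error improvements.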

Speaker: Gaël McGill (Harvard Medical School & Digizyme Inc.)

Biovisualization is a field that combines the complexities of science, the technical rigor of programming, the challenges of effective teaching and the creative possibilities of art and design.  It is often used in one of two ways: 1) to explore and extract meaningful patterns for data analysis and 2) to communicate and engage various audiences.  One of the most powerful yet little-recognized benefits of visualization, however, is the way it synthesizes our knowledge, externalizes our mental models of the science and thereby makes our assumptions explicit.  Despite cognitive research that informs us about how visualizations impact target audiences like students, little attention is given to the thought process behind crafting visualizations and how it impacts those involved in planning and production.  Many designers and animators report anecdotally that scientists with whom they collaborate gain new insights into their science as a result of navigating this process: "visual thinking" triggered during the planning of a visualization is thought to put familiar data into a new light.  This presentation will draw on a range of example projects for scientists, science museums, public broadcasting, publishers, software developers and students, and provide an overview of tools, techniques, cognitive research and design practices aimed at maximizing the impact of visualizations in both research and education.

Gaël McGill, Ph.D.

Founder & CEO, Digizyme Inc.

Faculty, Department of Biological Chemistry and Molecular Pharmacology
Director of Molecular Visualization, Center for Molecular & Cellular Dynamics, Harvard Medical School

Co-author & Digital Director, E.O. Wilson’s Life on Earth

Dr. Gaël McGill’s federally-funded research and teaching at Harvard Medical School focuses on visualization design and assessment methods in science education and communication. He is also founder & CEO of Digizyme, Inc. (www.digizyme.com) a firm dedicated to the visualization and communication of science, through which he has designed, programmed and art directed over 130 web and visualization projects for biotechnology, pharmaceuticals, medical device companies, science museums, research institutes and hospitals.  Dr. McGill has also developed curricula and instructional multimedia for students ranging from middle school to graduate students, and recently co-authored and served as digital director of Apple’s flagship digital textbook E.O. Wilson’s Life on Earth featured with the release of iBooks Author software. He is the creator of the scientific visualization online community portal Clarafi.com (originally molecularmovies.com), the Molecular Maya (mMaya) software toolkit and has contributed to leading Maya and ZBrush textbooks for Wiley/SYBEX Publishing. Dr. McGill was also a board member of the Vesalius Trust and remains an advisor to several biotechnology and device companies. After his B.A. summa cum laude in Biology, Music, and Art History from Swarthmore College, and Ph.D. at Harvard Medical School as a Howard Hughes Medical Institute and Sandoz Pharmaceuticals fellow, Dr. McGill completed his postdoctoral work at the Dana Farber Cancer Institute studying tumor cell apoptosis and melanoma.

Details

Duration: 45 + 15
Host: Ivan Viola

Speaker: Tatiana von Landesberger (Technische Universität Darmstadt, Interactive Graphics Systems Group)

Data visualization communicates the data to the user for exploring unknown datasets, for confirming an assumed hypothesis about a dataset and for presenting results of an analysis. The data can stem directly from measurements, from simulations or can be a result of data modelling.

Nowadays, there are many tools and libraries to visualize data, such as Tableau, Gephi, ESRI or D3. However, “simply” plotting the data on the screen may lead to several problems when reading and interpreting the data. Examples include data overplotting, “getting lost” in the data space during exploration, cognitive overload, and difficulties in conveying data uncertainty.

The lecture will explain recent techniques for dealing with these visualization challenges. The techniques will be exemplified on data from various application domains such as transportation, journalism or biology.

Biography: Tatiana Landesberger von Antburg leads the Visual Search and Analysis group at the Interactive Graphics Systems Group, TU Darmstadt. She finished her PhD in 2010 at the same university. Her research interests cover visual analysis of spatio-temporal and network data from various application domains such as finance, transportation, journalism or biology.

Homepage: http://www.gris.tu-darmstadt.de/research/vissearch/index.en.htm

Details

Category

Duration

45 + 15

Speaker: Marwan Abdellah (École polytechnique fédérale de Lausanne - EPFL)


Abstract

The mammalian brain is a significant source of inspiration and challenge: it is the most complex phenomenon in the known universe, and its function depends on communication between countless structures spanning various spatial and temporal scales. The last century has witnessed massive efforts to reveal the intricacies of the brain and to understand its function and dysfunction. Despite these efforts, our comprehension of the brain's underlying mechanisms remains incomplete. A comprehensive understanding requires collaborative efforts and profound insights into its structure and function across multiple levels of organization, from genetic principles to whole-brain systems. Understanding the hidden aspects of the mammalian brain relying solely on wet-lab experiments has proven extremely limiting and time-consuming. The data produced by such experiments concern various levels of biological organization, and the search space for unknown data is so broad that it is debatable whether classical in vivo and in vitro experiments can provide enough results to answer all the questions in a reasonable time, unless a more systematic approach is followed. Such an approach requires integrating this data into a unifying multi-level system, which would allow us to build on previous knowledge and accelerate neuroscience research towards a single target: understanding the human brain.

The Blue Brain Project was founded in 2005, heralding a paradigm shift in neuroscience based on the fundamental insights of in silico research. This pioneering endeavor aims at integrating fragmented neuroscience knowledge from in vitro data in order to build detailed, multi-scale, and biologically accurate unifying digital models of rodent, and ultimately human, brains. In 2015, the Blue Brain Project achieved a long-awaited breakthrough: a first-draft digital reconstruction of the microcircuitry of the somatosensory cortex of the juvenile rat. This reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm³ containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning.

One approach to validating several structural and functional aspects of this model is to image it, in silico, expecting to reach the same observations and findings a neurobiologist would in the wet lab. This mission entails defining a set of imaging experiments and then simulating them on a bio-physically plausible basis, assuming the existence of accurate, biologically detailed physical models of the cortical tissue and of the imaging modalities employed to visualize it. This seminar addresses a novel visualization technology called In Silico Brain Imaging, or simply, how to image digital brain reconstructions simulated on supercomputers.
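This is, of course, not Blue Brain code, but the emission-absorption light-transport model underlying such physically-based imaging can be illustrated with a toy ray march along a single ray through a synthetic "tissue" profile (all values invented):

```python
import numpy as np

def ray_march(density, emission, step=1.0):
    """Accumulate emitted light along one ray, attenuated by the
    material in front of it (standard emission-absorption model)."""
    transmittance = 1.0  # fraction of light still reaching the camera
    radiance = 0.0       # accumulated image intensity for this ray
    # March front to back through the samples along the ray.
    for d, e in zip(density, emission):
        absorption = 1.0 - np.exp(-d * step)   # Beer-Lambert attenuation
        radiance += transmittance * e * absorption
        transmittance *= np.exp(-d * step)
    return radiance, transmittance

# Synthetic ray: empty space, then a fluorescent blob, then empty space.
density = np.array([0.0, 0.0, 2.0, 2.0, 0.0])
emission = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
radiance, transmittance = ray_march(density, emission)
```

Real in silico microscopy adds scattering, optics, and detector models on top of this basic transport integral; the sketch only shows the core accumulation step.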

Biography

Marwan Abdellah is a Scientific Visualization Engineer and Post-Doctoral Fellow in the Scientific Visualization Section of the Computing Division of the Blue Brain Project. Marwan obtained his PhD in Neuroscience from EPFL in 2017. He received his Bachelor's and Master's degrees from the Biomedical Engineering Department, School of Engineering, at Cairo University, Egypt. His current research focuses on building accurate computational models of brain imaging technologies using physically-plausible visualization methods.

Marwan joined the Blue Brain Project in 2011 as a Software Engineer, where he led the multimedia generation workflows and contributed to the development of the software applications required for interactive visualisation of neocortical digital models. His research interests include scientific visualisation, physically-plausible rendering, computer graphics and modelling, medical imaging, high-performance computing, and in silico neuroscience.

Selected Publications

* M. Abdellah, J. Hernando, N. Antille, S. Eilemann, S. Lapere, H. Markram, F. Schürmann. NeuroMorphoVis: a collaborative framework for analysis and visualization of neuronal morphology skeletons reconstructed from microscopy stacks. Bioinformatics (Oxford), in press.

* M. Abdellah, J. Hernando, N. Antille, S. Eilemann, H. Markram, F. Schürmann. Reconstruction and visualization of large-scale volumetric models of neocortical circuits for physically-plausible in silico optical studies. BMC Bioinformatics, 2017.

* M. Abdellah, A. Bilgili, S. Eilemann, J. Shillcock, H. Markram, F. Schürmann. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation. BMC Bioinformatics 18 (Suppl 2): S8, 2017.

* M. Abdellah, A. Bilgili, S. Eilemann, H. Markram, F. Schürmann. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models. BMC Bioinformatics 16 (Suppl 11): S8, 2015.

* M. Abdellah, A. Abdelaziz, E. E. Ali, S. Abdelaziz, A. Sayed, M. I. Owis, A. Eldeib. Parallel generation of digitally reconstructed radiographs on heterogeneous multi-GPU workstations. Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3953–3956, 2016.

* H. Markram, E. Muller, S. Ramaswamy, M. W. Reimann, M. Abdellah, C. Aguado Sanchez, et al. Reconstruction and simulation of neocortical microcircuitry. Cell, 2015.

* S. Ramaswamy, J.-D. Courcol, M. Abdellah, S. Adaszewski, N. Antille, S. Arsever, G. Antoine, et al. The Neocortical Microcircuit Collaboration Portal: A Resource for Rat Somatosensory Cortex. Frontiers in Neural Circuits 9, 2015.

Details

Category

Duration

45 + 15
Host: Ivan Viola

Speaker: Karsten Schatz (Visualization Research Center, University of Stuttgart)

Over the last decades, our understanding of biomolecules like DNA and proteins has steadily grown. This has led to engineering processes that can produce specialized variants of these molecules, e.g., to solve tasks other than those intended by nature. To perform these engineering processes, biochemists need to understand how the molecules work and how changes in their underlying structure lead to changes in reaction behavior. To that end, simulations and experiments are performed that generate vast amounts of data. Analyzing this many-faceted data can be a cumbersome task if it is not supported by suitable analysis tools. This is where biomolecular visualization comes into play. Specialized visualization techniques and algorithms for biomolecules can lead domain scientists to faster insights, which is why many such techniques have been developed in recent years. Despite this variety of methods, many tasks are still not supported by current visualization techniques. This talk presents some of the work performed at VISUS in the field of biomolecular visualization, including methods for uncertainty visualization on protein cartoon renderings, interactive exploration of tera-scale datasets, and the abstracted visualization of enzymatic binding sites.

CV:
Karsten Schatz received an MSc in computer science from the University of Stuttgart. Since 2016, he has been a PhD student at the Visualization Research Center of the University of Stuttgart (VISUS). His research interests include the interactive visualization of protein-solvent systems as well as of other biomolecules. Additionally, he is interested in the interactive visualization of tera-scale data sets.

Details

Category

Duration

30 + 15
Host: Ivan Viola

Speaker: Ove Daae Lampe (Christian Michelsen Research AS, Bergen, Norway)

ENLWEB, short for Enlighten web, is a web framework for interactive visual analysis and data exploration.
It is connected to the Jupyter Notebook environment in order to enable a tight coupling between loading, transforming, fusing, and filtering data on the one hand and visualization on the other. I will present the workflow in two cases: one from maritime traffic, and one on multidisciplinary geological data from the EU project EPOS.
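ENLWEB itself is not shown here, but the notebook-side half of such a coupling, preparing and filtering data before handing it to linked visualization views, can be sketched with pandas on hypothetical AIS-style maritime records (all column names and values invented):

```python
import pandas as pd

# Hypothetical maritime-traffic records, loaded as one would in a
# Jupyter notebook cell before passing them to the visualization layer.
raw = pd.DataFrame({
    "vessel": ["A", "A", "B", "B", "C"],
    "speed_knots": [12.0, 0.2, 8.5, 9.1, 0.0],
    "lat": [60.1, 60.1, 59.8, 59.9, 60.0],
    "lon": [5.2, 5.2, 5.0, 5.1, 5.3],
})

# Transform: flag positions where the vessel is effectively stationary
# (e.g. moored), using an invented 0.5-knot threshold.
raw["moving"] = raw["speed_knots"] > 0.5

# Filter: keep only moving vessels; this filtered frame is what would
# be handed to the linked views for brushing and further exploration.
moving = raw[raw["moving"]]
print(moving["vessel"].unique())
```

The point of the tight notebook coupling is that each such cell re-runs cheaply, so the filtered frame and the visualization stay in sync during exploration.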

Details

Category

Duration

45 + 15

Speaker: Prof. Victoria Interrante (University of Minnesota )

In this talk, I will present an overview of my lab’s recent and historical research into strategies for enabling the more effective use of VR in applications related to architectural design and evaluation, focusing on two specific topic areas: (1) facilitating accurate spatial perception; (2) supporting effective interpersonal communication. I will touch on our efforts related to the use of self-avatars and autonomous intelligent agents, our investigations into the relative requirements of visual and experiential realism, and our assessments of the extent to which various locomotion methods support users in maintaining an accurate mental map of the space through which they are traveling.

Short bio: Victoria Interrante is a Professor in the Department of Computer Science and Engineering at the University of Minnesota and Director of the University-wide Center for Cognitive Sciences. She received her PhD in 1996 from the University of North Carolina at Chapel Hill, where she was advised by Drs. Henry Fuchs and Stephen Pizer, and was a 1999 recipient of the US Presidential Early Career Award for Scientists and Engineers, one of the highest honors bestowed by the US government on junior researchers. Professor Interrante has extensive research experience in the fields of Virtual Reality and Data Visualization. She is currently serving as co-Editor-in-Chief of ACM Transactions on Applied Perception and as a member of the Steering Committee of the IEEE Virtual Reality conference.

Speaker: Dr. Alexandra Diehl (University of Konstanz)

Volunteered Geographic Information (VGI) is information provided by citizens with different cultural backgrounds, experiences, interests, and levels of education. They contribute, maintain, exchange, and share spatiotemporal data driven by a wide variety of interests and in different formats, for example maps, images, and text.

Extracting meaningful VGI from these heterogeneous and sometimes contradictory datasets is challenging. One way to start tackling this challenge is by understanding the different levels of uncertainty in the data and during the visual analysis process.

In this talk, I will present my current efforts in analyzing the uncertainty of social media data throughout the Visual Analytics process: from the raw data, through data transformation and visual mapping, to interaction.

Speaker: Dr. Tobias Isenberg (Team AVIZ Inria-Saclay / Université Paris Saclay)

I will talk about the challenges for the interaction with 3D data using direct manipulation input devices such as tactile displays and tangible devices. I will address, in particular, data navigation & selection as two fundamental interaction techniques, and demonstrate example interaction designs for both. In addition, I will discuss what happens when immersive 3D display environments are combined with mobile displays for input, in particular in a stereoscopic interaction context. Finally, I will present some ideas for the potential combination of tactile with tangible input techniques to create hybrid interactions.
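As background for the selection problem, and not any specific technique from the talk, a common first step in touch-based 3D selection is unprojecting the 2D touch point into a pick ray in camera space. A minimal sketch, with the function and parameter names invented:

```python
import numpy as np

def touch_to_ray(px, py, width, height, fov_y_deg=60.0):
    """Map a 2D touch point (in pixels) to a unit pick ray in camera
    space. Concrete selection techniques (lassos, structure-aware
    selection, ...) then build on such a ray or a swept region."""
    aspect = width / height
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)
    # Normalized device coordinates in [-1, 1]; screen y grows downward,
    # so it is flipped here.
    ndc_x = 2.0 * px / width - 1.0
    ndc_y = 1.0 - 2.0 * py / height
    # Camera looks down -z; scale x/y by the frustum extents at z = -1.
    direction = np.array([ndc_x * half_h * aspect, ndc_y * half_h, -1.0])
    return direction / np.linalg.norm(direction)

# A touch at the screen center looks straight down the view axis.
center = touch_to_ray(400, 300, 800, 600)
```

Intersecting this ray with the data then yields the picked object; the challenge the talk addresses is that for dense 3D data, a single imprecise ray is rarely enough.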

Images (see the respective "pictures" section on the pages & click on the thumbnails):
https://tobias.isenberg.cc/VideosAndDemos/Yu2010FDT
https://tobias.isenberg.cc/VideosAndDemos/Klein2012DSD
https://tobias.isenberg.cc/VideosAndDemos/Yu2012ESA
https://tobias.isenberg.cc/VideosAndDemos/Yu2016CEE
https://tobias.isenberg.cc/VideosAndDemos/Besancon2017PGF
https://tobias.isenberg.cc/VideosAndDemos/Besancon2017HTT

Biography:
Tobias Isenberg is a senior research scientist at Inria, France. He received his doctoral degree from the University of Magdeburg, Germany, in 2004. Previously he held positions as post-doctoral fellow at the University of Calgary, Canada, and as assistant professor at the University of Groningen, the Netherlands. His research interests comprise topics in scientific visualization, illustrative and non-photorealistic rendering, and interactive visualization techniques. He is particularly interested in interactive visualization environments for 3D spatial data that rely on novel input paradigms such as tactile screens and tangible devices.

Mug shot:
http://tobias.isenberg.cc/Pictures/Myself-TobiSelectionSquarelarge2
 

Details

Category

Duration

45 + 30