Find the upcoming dates on this page.

Previous Talks

Speaker: Prof. Art Olson (The Scripps Research Institute, CA USA)

The ability to create structural models of cells and cellular components at the molecular level is being driven by advances ranging from proteomics and expression profiling to ever more powerful imaging approaches. It is being enabled by technology leaps in computation, informatics, and visualization. 

Our CellPACK program is a tool for generating structural models of cellular environments at the molecular and atomic levels. Recently we have developed a GPU-based implementation of CellPACK that speeds up the process by orders of magnitude, in what we have termed “instant packing.” This enables interactive exploration and manipulation of the components of the packings. Visualization of these models is likewise enabled by efficient GPU-based representations, renderings, and levels of detail. The ability to model complex cellular components such as the bacterial nucleoid and distinct phases is a significant challenge that we continue to work on. Our lab’s recent lattice-based method for rapidly producing bacterial nucleoids is a prototype for other rule-based generative structure builders. The use of GPU-based physics engines enables real-time interaction with dynamic models. FleX, a real-time constraint solver from NVIDIA, is capable of interactive constraint minimization of up to one million particles in real time. Such systems can quickly resolve clashes and identify interactions in the crowded cellular environment.

In a parallel effort to CellPACK, we have developed CellPAINT, which has its origins in David Goodsell’s watercolor paintings of cellular environments and processes. CellPAINT uses a painting metaphor to enable the interactive creation of Goodsell-like images within the Unity game engine. These images can be animated using a simple Brownian motion-based diffusion model. Recently we have expanded this interactive interface into 3D and have also implemented it in a virtual reality environment.
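The diffusion model mentioned above is simple enough to sketch. The snippet below is a minimal, hedged illustration of Brownian (diffusive) motion of 2D particles on a canvas, not CellPAINT's actual implementation; the per-particle diffusion coefficients and canvas size are made-up values for demonstration.

```python
import numpy as np

def brownian_step(positions, diffusion_coeffs, dt, bounds, rng):
    """Advance 2D particle positions by one Brownian (diffusion) step.

    positions        : (N, 2) array of particle centres
    diffusion_coeffs : (N,) per-particle diffusion coefficients
                       (larger molecules diffuse more slowly)
    dt               : time step
    bounds           : (width, height) of the canvas; particles are clamped to it
    """
    # Per-axis displacement standard deviation of free diffusion: sqrt(2 * D * dt)
    sigma = np.sqrt(2.0 * diffusion_coeffs * dt)[:, None]
    positions = positions + rng.normal(0.0, 1.0, positions.shape) * sigma
    return np.clip(positions, 0.0, np.asarray(bounds, dtype=float))

# Example: animate 200 hypothetical "molecules" on a 512 x 512 canvas for 100 frames.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 512, size=(200, 2))
D = rng.uniform(1.0, 10.0, size=200)   # made-up per-type diffusion coefficients
for frame in range(100):
    pos = brownian_step(pos, D, dt=0.1, bounds=(512, 512), rng=rng)
```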

The talk will include live interactive demonstrations of the current state of our software.

Details

Category

Duration

40 + 10
Host: Ivan Viola

Speaker: Prof. Martin Eisemann (Institut für Informatik, TH Köln)

In this talk I will introduce our work on using visualization and visual analytics techniques to solve problems in physically-based rendering, specifically in Monte Carlo rendering. We show possible applications in various areas, such as improving rendering speed, reducing error, or optimizing object placement. For more information, see Martin Eisemann's webpage: https://www.th-koeln.de/personen/martin.eisemann/
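To make concrete what visual analysis of Monte Carlo rendering can operate on, here is a hedged toy example, not the speaker's tooling: a plain Monte Carlo estimator that records per-sample contributions and the running error, i.e., exactly the kind of per-sample data that such visualization techniques can then explore.

```python
import numpy as np

def mc_integrate(f, n_samples, reference, rng):
    """Toy Monte Carlo estimator of the mean of f over [0, 1) with uniform sampling.

    Returns per-sample data (positions, contributions, running estimate,
    running absolute error) that a visual analytics tool could inspect."""
    xs = rng.uniform(0.0, 1.0, n_samples)
    contributions = f(xs)                                  # one "radiance" sample each
    running_mean = np.cumsum(contributions) / np.arange(1, n_samples + 1)
    running_error = np.abs(running_mean - reference)
    return xs, contributions, running_mean, running_error

rng = np.random.default_rng(42)
f = lambda x: np.sin(np.pi * x)                            # stand-in for a pixel integrand
xs, contrib, est, err = mc_integrate(f, 10_000, reference=2.0 / np.pi, rng=rng)
print(f"estimate after 10k samples: {est[-1]:.5f}, abs. error: {err[-1]:.2e}")
```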

Speaker: Gaël McGill (Harvard Medical School & Digizyme Inc.)

Biovisualization is a field that combines the complexities of science, the technical rigor of programming, the challenges of effective teaching, and the creative possibilities of art and design.  It is often used in one of two ways: 1) to explore and extract meaningful patterns for data analysis, and 2) to communicate with and engage various audiences.  One of the most powerful yet little-recognized benefits of visualization, however, is the way it synthesizes our knowledge, externalizes our mental models of the science, and thereby makes our assumptions explicit.  Despite cognitive research that informs us on how visualizations impact target audiences such as students, little attention is given to the thought process behind crafting visualizations and how it affects those involved in planning and production.  Many designers and animators report anecdotally that scientists with whom they collaborate gain new insights into their science as a result of navigating this process: the "visual thinking" triggered during the planning of a visualization is thought to put familiar data into a new light.  This presentation will draw on a range of example projects for scientists, science museums, public broadcasting, publishers, software developers, and students, and provide an overview of tools, techniques, cognitive research, and design practices aimed at maximizing the impact of visualizations in both research and education.

Gaël McGill, Ph.D.

Founder & CEO, Digizyme Inc.

Faculty, Department of Biological Chemistry and Molecular Pharmacology; Director of Molecular Visualization, Center for Molecular & Cellular Dynamics, Harvard Medical School

Co-author & Digital Director, E.O. Wilson’s Life on Earth

Dr. Gaël McGill’s federally funded research and teaching at Harvard Medical School focuses on visualization design and assessment methods in science education and communication. He is also founder & CEO of Digizyme, Inc. (www.digizyme.com), a firm dedicated to the visualization and communication of science, through which he has designed, programmed, and art-directed over 130 web and visualization projects for biotechnology, pharmaceutical, and medical device companies, science museums, research institutes, and hospitals.  Dr. McGill has also developed curricula and instructional multimedia for students ranging from middle school to graduate level, and recently co-authored and served as digital director of Apple’s flagship digital textbook E.O. Wilson’s Life on Earth, featured with the release of the iBooks Author software. He is the creator of the scientific visualization online community portal Clarafi.com (originally molecularmovies.com) and of the Molecular Maya (mMaya) software toolkit, and has contributed to leading Maya and ZBrush textbooks for Wiley/SYBEX Publishing. Dr. McGill was also a board member of the Vesalius Trust and remains an advisor to several biotechnology and device companies. After his B.A. summa cum laude in Biology, Music, and Art History from Swarthmore College, and his Ph.D. at Harvard Medical School as a Howard Hughes Medical Institute and Sandoz Pharmaceuticals fellow, Dr. McGill completed his postdoctoral work at the Dana-Farber Cancer Institute studying tumor cell apoptosis and melanoma.

Details

Category

Duration

45 + 15
Host: Ivan Viola

Speaker: Tatiana von Landesberger (Technische Universität Darmstadt, Interactive Graphics Systems Group)

Data visualization communicates data to the user for exploring unknown datasets, for confirming an assumed hypothesis about a dataset, and for presenting the results of an analysis. The data can stem directly from measurements or simulations, or can be the result of data modelling.

Nowadays, there are many tools and libraries for visualizing data, such as Tableau, Gephi, ESRI, or D3. However, “simply” plotting the data on the screen may lead to several problems when reading and interpreting the data. Examples include data overplotting, “getting lost” in the data space during exploration, cognitive overload, and conveying data uncertainty.

The lecture will explain recent techniques for dealing with these visualization challenges. The techniques will be exemplified on data from various application domains such as transportation, journalism, and biology.
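For readers unfamiliar with the overplotting problem mentioned above, the following hedged sketch (plain matplotlib, not any of the tools named in the abstract) shows two standard remedies on a synthetic dense scatter plot: alpha blending and hexagonal binning.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic dense scatter data (100k points) that would overplot badly.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x + rng.normal(scale=0.5, size=100_000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(x, y, s=2, alpha=0.05)              # translucent points reveal density
ax1.set_title("alpha blending")
ax2.hexbin(x, y, gridsize=60, cmap="viridis")   # aggregate points into hexagonal bins
ax2.set_title("hexagonal binning")
plt.tight_layout()
plt.show()
```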

Biography: Tatiana Landesberger von Antburg leads the Visual Search and Analysis group at the Interactive Graphics Systems Group, TU Darmstadt, where she completed her PhD in 2010. Her research interests cover the visual analysis of spatio-temporal and network data from various application domains such as finance, transportation, journalism, and biology.

Homepage: http://www.gris.tu-darmstadt.de/research/vissearch/index.en.htm

Details

Category

Duration

45 + 15

Speaker: Dr. Marwan Abdellah (Blue Brain Project, École polytechnique fédérale de Lausanne - EPFL)

Abstract

The mammalian brain is a significant source of inspiration and challenge; it is the most complex phenomenon in the known universe, whose function depends on the communication between countless structures spanning various spatial and temporal scales. The last century has witnessed massive efforts to reveal the intricacies of the brain in order to understand its function and dysfunction. Despite these efforts, our comprehension of the underlying mechanisms of the brain remains incomplete. A comprehensive understanding essentially requires collaborative efforts and profound insights into its structure and function across multiple levels of organization, from genetic principles to whole-brain systems. Understanding the hidden aspects of the mammalian brain by relying solely on wet-lab experiments has proven to be extremely limiting and time-consuming. The data produced by such experiments concern various levels of biological organization. The search space for unknown data is so broad that it is debatable whether classical in vivo and in vitro experiments can provide enough results to answer all the questions in a reasonable time, unless a more systematic approach is followed. This approach requires integrating the data into a unifying multi-level system, which would allow us to build on previous knowledge and accelerate neuroscience research towards one and only one target: understanding the human brain.

The Blue Brain Project was founded in 2005, heralding a paradigm shift in neuroscience based on the fundamental insights of in silico research. This pioneering endeavor aims at integrating fragmented neuroscience knowledge from in vitro data in order to build detailed, multi-scale, and biologically accurate unifying digital models of rodent, and ultimately human, brains. In 2015, the Blue Brain Project achieved a long-awaited breakthrough: a first-draft digital reconstruction of the microcircuitry of the somatosensory cortex of the juvenile rat. This reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm³ containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning.

One approach to validating several structural and functional aspects of this model is to image it in silico, expecting to reach the same observations and findings a neurobiologist can reach in the wet lab. This mission entails defining a set of imaging experiments and then simulating them on a biophysically plausible basis, assuming the existence of accurate, biologically detailed physical models of the cortical tissue and of the imaging modalities employed to visualize it. This seminar addresses a novel visualization technology called in silico brain imaging, or simply, how to image digital brain reconstructions simulated on supercomputers.

Biography

Marwan Abdellah is a Scientific Visualization Engineer and Post-Doctoral Fellow in the Scientific Visualization Section of the Computing Division of the Blue Brain Project. Marwan obtained his PhD in Neuroscience from EPFL in 2017. He received his Bachelor's and Master's degrees from the Biomedical Engineering Department, School of Engineering, at Cairo University, Egypt. His current research focuses on building accurate computational models of brain imaging technologies using physically plausible visualization methods.

Marwan joined the Blue Brain Project in 2011 as a Software Engineer, where he led the multimedia generation workflows and contributed to the development of the software applications required for the interactive visualisation of neocortical digital models. His research interests include scientific visualisation, physically plausible rendering, computer graphics and modelling, medical imaging, high-performance computing, and in silico neuroscience.

Selected Publications

* M. Abdellah, J. Hernando, N. Antille, S. Eilemann, S. Lapere, H. Markram, F. Schürmann. NeuroMorphoVis: a collaborative framework for analysis and visualization of neuronal morphology skeletons reconstructed from microscopy stacks. Bioinformatics (Oxford), in press.

* M. Abdellah, J. Hernando, N. Antille, S. Eilemann, H. Markram, F. Schürmann. Reconstruction and visualization of large-scale volumetric models of neocortical circuits for physically-plausible in silico optical studies. BMC Bioinformatics, 13 September 2017.

* M. Abdellah, A. Bilgili, S. Eilemann, J. Shillcock, H. Markram, F. Schürmann. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation. BMC Bioinformatics 18, no. Suppl 2 (2017): S8.

* M. Abdellah, A. Bilgili, S. Eilemann, H. Markram, F. Schürmann. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models. BMC Bioinformatics 16, no. Suppl 11 (2015): S8.

* M. Abdellah, A. Abdelaziz, E. E. Ali, S. Abdelaziz, A. Sayed, M. I. Owis, A. Eldeib. Parallel generation of digitally reconstructed radiographs on heterogeneous multi-GPU workstations. In: 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016, pp. 3953–3956.

* H. Markram, E. Muller, S. Ramaswamy, M. W. Reimann, M. Abdellah, C. Aguado Sanchez, et al. Reconstruction and simulation of neocortical microcircuitry. Cell, 2015.

* S. Ramaswamy, J.-D. Courcol, M. Abdellah, S. Adaszewski, N. Antille, S. Arsever, G. Antoine, et al. The Neocortical Microcircuit Collaboration Portal: A Resource for Rat Somatosensory Cortex. Frontiers in Neural Circuits 9 (2015).

Details

Category

Duration

45 + 15
Host: Ivan Viola

Speaker: Karsten Schatz (Visualization Research Center, University of Stuttgart)

Over the last decades, our understanding of biomolecules such as DNA and proteins has been on a permanent rise. This has led to engineering processes that are able to produce specialized variants of these molecules, e.g., to solve tasks other than those intended by nature. To perform these engineering processes, biochemists need to understand how the molecules work and how changes in their underlying structure lead to changes in reaction behavior. To that end, simulations and experiments are performed that generate vast amounts of data. The analysis of these many-faceted data can be a cumbersome task if not supported by suitable analysis tools. This is where biomolecular visualization comes into play. Specialized visualization techniques and algorithms for biomolecules can lead domain scientists to faster insights, and many such techniques have been developed in recent years. Despite the number of available methods, there are still many tasks that are not supported by current visualization techniques. This talk presents some of the work performed at VISUS in the field of biomolecular visualization, including methods for uncertainty visualization on protein cartoon renderings, interactive exploration of tera-scale datasets, and the abstracted visualization of enzymatic binding sites.

CV:
Karsten Schatz received an MSc in computer science from the University of Stuttgart. Since 2016 he has been a PhD student at the Visualization Research Center of the University of Stuttgart (VISUS). His research interests include the interactive visualization of protein-solvent systems as well as the visualization of other biomolecules. Additionally, he is interested in the interactive visualization of tera-scale data sets.

Details

Category

Duration

30 + 15
Host: Ivan Viola

Speaker: Ove Daae Lampe (Christian Michelsen Research AS, Bergen, Norway)

ENLWEB, short for Enlighten Web, is a web framework for interactive visual analysis and data exploration.
It connects to the Jupyter Notebook environment in order to tightly couple the loading, transformation, fusion, and filtering of data with its visualization. I will present the workflow in two cases: one on maritime traffic and one on multidisciplinary geological data from the EU project EPOS.
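As an illustration of the load-transform-filter-visualize coupling described above, here is a minimal notebook-style sketch using pandas and ipywidgets; it is not the ENLWEB/Enlighten Web API (which is not shown here), and the vessel-track data are invented for the example.

```python
# Minimal notebook-style sketch: couple an interactive filter to a plot.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

# Hypothetical vessel-track data standing in for a maritime-traffic case.
rng = np.random.default_rng(1)
tracks = pd.DataFrame({
    "lon":   rng.uniform(4.0, 6.0, 5_000),
    "lat":   rng.uniform(58.0, 61.0, 5_000),
    "speed": rng.gamma(shape=2.0, scale=5.0, size=5_000),   # knots
})

@interact(min_speed=FloatSlider(min=0, max=40, step=1, value=10))
def plot_filtered(min_speed):
    subset = tracks[tracks["speed"] >= min_speed]            # interactive filter step
    plt.figure(figsize=(5, 4))
    plt.scatter(subset["lon"], subset["lat"], s=3, alpha=0.4)
    plt.title(f"{len(subset)} track points with speed >= {min_speed} kn")
    plt.show()
```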

Details

Category

Duration

45 + 15

Speaker: Prof. Victoria Interrante (University of Minnesota )

In this talk, I will present an overview of my lab’s recent and historical research into strategies for enabling the more effective use of VR in applications related to architectural design and evaluation, focusing on two specific topic areas: (1) facilitating accurate spatial perception and (2) supporting effective interpersonal communication. I will touch on our efforts related to the use of self-avatars and autonomous intelligent agents, our investigations into the relative requirements of visual and experiential realism, and our assessments of the extent to which various locomotion methods support users in maintaining an accurate mental map of the space through which they are traveling.

Short bio: Victoria Interrante is a Professor in the Department of Computer Science and Engineering at the University of Minnesota and Director of the University-wide Center for Cognitive Sciences. She received her PhD in 1996 from the University of North Carolina at Chapel Hill, where she was advised by Drs. Henry Fuchs and Stephen Pizer, and was a 1999 recipient of the US Presidential Early Career Award for Scientists and Engineers, one of the highest honors bestowed by the US government on junior researchers. Professor Interrante has extensive research experience in the fields of virtual reality and data visualization. She is currently serving as co-Editor-in-Chief of the ACM Transactions on Applied Perception and as a member of the Steering Committee of the IEEE Virtual Reality conference.

Speaker: Dr. Alexandra Diehl (University of Konstanz)

Volunteered Geographic Information (VGI) is information provided by citizens with different cultural backgrounds, experiences, interests, and levels of education. They contribute, maintain, exchange, and share spatiotemporal data driven by a wide variety of interests and in different formats, for example maps, images, and text.

Extracting meaningful VGI from these heterogeneous and sometimes contradictory datasets is challenging. One way to start tackling this challenge is by understanding the different levels of uncertainty in the data and throughout the visual analytics process.

In this talk, I will present my current efforts in analyzing the uncertainty of social media data throughout the visual analytics process: from the raw data, through data transformation and visual mapping, to interaction.

Speaker: Dr. Tobias Isenberg (Team AVIZ Inria-Saclay / Université Paris Saclay)

I will talk about the challenges for the interaction with 3D data using direct manipulation input devices such as tactile displays and tangible devices. I will address, in particular, data navigation & selection as two fundamental interaction techniques, and demonstrate example interaction designs for both. In addition, I will discuss what happens when immersive 3D display environments are combined with mobile displays for input, in particular in a stereoscopic interaction context. Finally, I will present some ideas for the potential combination of tactile with tangible input techniques to create hybrid interactions.

Images (see the respective "pictures" section on the pages & click on the thumbnails):
https://tobias.isenberg.cc/VideosAndDemos/Yu2010FDT
https://tobias.isenberg.cc/VideosAndDemos/Klein2012DSD
https://tobias.isenberg.cc/VideosAndDemos/Yu2012ESA
https://tobias.isenberg.cc/VideosAndDemos/Yu2016CEE
https://tobias.isenberg.cc/VideosAndDemos/Besancon2017PGF
https://tobias.isenberg.cc/VideosAndDemos/Besancon2017HTT

Biography:
Tobias Isenberg is a senior research scientist at Inria, France. He received his doctoral degree from the University of Magdeburg, Germany, in 2004. Previously he held positions as post-doctoral fellow at the University of Calgary, Canada, and as assistant professor at the University of Groningen, the Netherlands. His research interests comprise topics in scientific visualization, illustrative and non-photorealistic rendering, and interactive visualization techniques. He is particularly interested in interactive visualization environments for 3D spatial data that rely on novel input paradigms such as tactile screens and tangible devices.

Mug shot:
http://tobias.isenberg.cc/Pictures/Myself-TobiSelectionSquarelarge2
 

Details

Category

Duration

45 + 30

Speaker: Dr. Björn Sommer (Universität Konstanz)

Multiscale modeling and visualization of cellular environments is an important topic from a scientific as well as an educational perspective. It plays an important role in analyzing and understanding metabolic processes, structural molecular complexes, and the targeting of drugs.
The CELLmicrocosmos project combines different information layers for multiple purposes:
At the molecular level, the MembraneEditor is used by many projects to model heterogeneous membranes as a basis for molecular simulations and analyses [SDGS11]. Sharing some parallels with cellVIEW [LAPV15], the CellExplorer is a software tool that can be used to visualize and explore cell environments at the mesoscopic level. Combined with the PathwayIntegration, cytological networks can be localized and integrated into these cell environments [KoGS16, SKSH10]. In recent years we have developed a number of new cytological visualization approaches that can be explored on multiple scales: from the local computer, to web browsers, to mobile phones and head-mounted displays, and to large-scale virtual environments such as the CAVE2 [FNTT13, KoGS16, SBHG14, SHKC16, SWXC15]. In this context we are currently working with the CeBiTec Bielefeld on the visualization of a Chlamydomonas reinhardtii cell.

[FNTT13] FEBRETTI, Alessandro; NISHIMOTO, Arthur; THIGPEN, Terrance; TALANDIS, Jonas; LONG, Lance; PIRTLE, J. D.; PETERKA, Tom; VERLO, Alan; et al.: CAVE2: a hybrid reality environment for immersive simulation and information analysis. In: IS&T/SPIE Electronic Imaging: International Society for Optics and Photonics, 2013, pp. 864903-1–864903-12

[KoGS16] KOVANCI, Gökhan; GHAFFAR, Mehmood; SOMMER, Björn: Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios. In: Journal of Integrative Bioinformatics 13 (2016), no. 4, p. 298

[LAPV15] LE MUZIC, Mathieu; AUTIN, Ludovic; PARULEK, Julius; VIOLA, Ivan: cellVIEW: a tool for illustrative and multi-scale rendering of large biomolecular datasets. In: Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine: Eurographics Association, 2015, pp. 61–70

[SBHG14] SOMMER, Björn; BENDER, Christian; HOPPE, Tobias; GAMROTH, Christian; JELONEK, Lukas: Stereoscopic cell visualization: from mesoscopic to molecular scale. In: Electronic Imaging, Proceedings of Stereoscopic Displays and Applications XXVIII 23 (2014), no. 1, pp. 011007-1–011007-10

[SDGS11] SOMMER, B.; DINGERSEN, T.; GAMROTH, C.; SCHNEIDER, S. E.; RUBERT, S.; KRÜGER, J.; DIETZ, K. J.: CELLmicrocosmos 2.2 MembraneEditor: a modular interactive shape-based software approach to solve heterogeneous Membrane Packing Problems. In: Journal of Chemical Information and Modeling 51 (2011), no. 5, pp. 1165–1182

[SHKC16] SOMMER, Björn; HAMACHER, Andreas; KALUZA, Owen; CZAUDERNA, Tobias; KLAPPERSTÜCK, Matthias; BIERE, Niklas; CIVICO, Marco; THOMAS, Bruce; et al.: Stereoscopic Space Map – Semi-immersive Configuration of 3D-stereoscopic Tours in Multi-display Environments. In: Electronic Imaging, Proceedings of Stereoscopic Displays and Applications XXVII 2016 (2016), no. 5, pp. 1–9

[SKSH10] SOMMER, Björn; KÜNSEMÖLLER, Jörn; SAND, Norbert; HUSEMANN, Arne; RUMMING, Madis; KORMEIER, Benjamin: CELLmicrocosmos 4.1: an interactive approach to integrating spatially localized metabolic networks into a virtual 3D cell environment. In: FRED, Ana; FILIPE, Joaquim; GAMBOA, Hugo (eds.): BIOSTEC 2010, 2010, pp. 90–95

[SWXC15] SOMMER, Björn; WANG, Stephen Jia; XU, Lifeng; CHEN, Ming; SCHREIBER, Falk: Hybrid-Dimensional Visualization and Interaction – Integrating 2D and 3D Visualization with Semi-Immersive Navigation Techniques. In: Big Data Visual Analytics (BDVA), 2015: IEEE, 2015, pp. 1–8

Details

Category

Duration

45 + 30
Host: Edi Gröller

Speaker: Prof. Dr. Bing-Yu Chen (National Taiwan University (NTU))

In this talk, I will focus on three projects related to interactive media visualization: Dynamic Media Assemblage (IEEE TCSVT 2013), SmartPlayer (ACM CHI 2009), and Outside-In (ACM UIST 2017). Dynamic Media Assemblage is a new presentation and summarization method for images and videos on a 2D canvas. Instead of using keyframes of the videos to generate a still-image summarization, our method allows the videos to play simultaneously on the canvas while utilizing the limited space efficiently. The technique uses an efficient iterative packing algorithm and is therefore well-suited for real-time interactive manipulation of media files within the assemblages, such as insertion, deletion, and rearrangement. SmartPlayer is a new video interaction model called adaptive fast-forwarding that helps people quickly browse videos with predefined semantic rules. This model is designed around the metaphor of "scenic car driving," in which the driver slows down near areas of interest and speeds through unexciting areas (a minimal sketch of such an interest-to-speed mapping follows after the bio below). Outside-In is a visualization technique that reintroduces off-screen regions of interest (ROIs) into the main screen as spatial picture-in-picture (PIP) previews. The geometry of the preview windows further encodes an ROI’s relative location vis-à-vis the main screen view, allowing for effective navigation. Two applications of Outside-In are demonstrated: 360-degree video navigation with touchscreens, and live telepresence.

Short Bio

Bing-Yu Chen received the B.S. and M.S. degrees in Computer Science and Information Engineering from National Taiwan University, Taipei, Taiwan, in 1995 and 1997, respectively, and the Ph.D. degree in Information Science from The University of Tokyo, Tokyo, Japan, in 2003. He is currently a Professor with the Department of Information Management, the Department of Computer Science and Information Engineering, and the Graduate Institute of Networking and Multimedia of National Taiwan University (NTU), and also an Associate Director of the NTU IoX Research Center (formerly the Intel-NTU Connected Context Computing Center), an Associate Dean and EiMBA Director of the NTU Management College, and the Director of the NTU Creativity and Entrepreneurship Program. He was a Visiting Researcher and Professor at The University of Tokyo in 2012 and 2016. His current research interests include Human-Computer Interaction, Computer Graphics, and Image Processing. He is a senior member of ACM and IEEE. He has been the Chair of the ACM SIGGRAPH Taipei Chapter since 2015, the Executive Supervisor of the ACM SIGCHI Taipei Chapter since 2016, and a steering committee member of Pacific Graphics since 2011. He is also General Co-Chair of Pacific Graphics 2017 and ACM MobileHCI 2019, both to be held in Taipei, Taiwan.
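As promised above, here is a hedged stand-in for the interest-to-speed mapping behind adaptive fast-forwarding; it is not SmartPlayer's actual model, only a simple linear mapping from a per-frame interest score in [0, 1] to a playback speed.

```python
import numpy as np

def playback_speed(interest, min_speed=1.0, max_speed=8.0):
    """Map per-frame interest scores in [0, 1] to a playback speed.

    High interest -> play near normal speed; low interest -> fast-forward,
    mimicking the "scenic car driving" metaphor. The exact mapping used by
    SmartPlayer is not reproduced here; this is a simple linear stand-in."""
    interest = np.clip(np.asarray(interest, dtype=float), 0.0, 1.0)
    return max_speed - (max_speed - min_speed) * interest

# Hypothetical interest curve: a quiet stretch followed by an event of interest.
scores = np.concatenate([np.full(100, 0.1), np.linspace(0.1, 1.0, 50), np.full(50, 0.9)])
speeds = playback_speed(scores)
print(f"speed in quiet part: {speeds[0]:.1f}x, speed at the event: {speeds[-1]:.1f}x")
```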

 

Details

Category

Duration

45 + 30
Host: Hsiang-Yun WU

Speaker: Prof. Claudio Delrieux (Universidad Nacional del Sur - Bahia Blanca - Argentina)


Modeling and rendering complex materials is among the central concerns of Computer Graphics, and the entertainment industry is always eager to feature photorealistic and attractive emulations of our everyday life. However, accurate modeling and rendering of materials with complex mesostructure (for instance, porous materials) has not received as much attention as other topics in the literature. In this presentation we will review some proposals for modeling and rendering these kinds of materials.

As a modeling case, bread turns out to be a structurally complex material, for which the eye is particularly sensitive in spotting improper models, making adequate bread modeling a difficult task. We developed an accurate computational bread-baking model that allows us to faithfully represent the geometric mesostructure and the appearance of bread throughout its making process. This is achieved by carefully simulating the conditions during proving and baking to obtain a realistic-looking result. Some of the generative steps in the process can easily be adapted to model other kinds of porous materials (e.g., stones or sponges).

Regarding rendering, a remarkable property of porous materials is how water or other liquids on their surface significantly alter their BRDFs, which in turn produces subtle or overt changes in their visual features. For this reason, rendering materials that change their appearance when wet continues to be challenging. We are developing a principled and comprehensive technique to model and render the changes in appearance of absorbent materials under humid conditions. This includes a method to solve the interaction between the fluid and the solid model, the fluid diffusion within the solid porous medium, and a physically based rendering model that adequately simulates the light transfer behavior under these conditions. Additional features of this model are geometry that is easy to represent and interact with, and reasonable rendering times using off-the-shelf GPUs.
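As a rough illustration of the fluid-diffusion component mentioned above (not the speaker's actual model), the sketch below diffuses a moisture field on a 2D grid with an explicit finite-difference scheme; the resulting per-texel wetness could then drive an appearance change such as darkening or a BRDF blend.

```python
import numpy as np

def diffuse_moisture(wetness, diffusivity, dt, steps):
    """Explicit finite-difference diffusion of a 2D moisture field, standing in
    for fluid transport inside a porous material. Periodic boundaries are used
    for brevity; stability requires dt * diffusivity <= 0.25 for unit spacing."""
    w = wetness.copy()
    for _ in range(steps):
        laplacian = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                     np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0 * w)
        w += dt * diffusivity * laplacian
    return w

# A droplet of water placed on a dry 128 x 128 patch of material.
field = np.zeros((128, 128))
field[60:68, 60:68] = 1.0
wet = diffuse_moisture(field, diffusivity=1.0, dt=0.2, steps=500)
# 'wet' could now modulate a per-texel darkening/BRDF change in the renderer.
```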

Short Bio:

Claudio Delrieux holds a BS in Electrical Engineering and a PhD in Computer Science. He is currently a full professor and PI at the Electrical and Computer Engineering Department of the Universidad Nacional del Sur (Argentina), a fellow of the National Council of Science and Technology of Argentina (CONICET), and Chair of the Imaging Sciences Laboratory. His current interests are image and video processing, computer graphics, scientific visualization, and artificial intelligence. He is the author of more than 45 refereed journal papers and more than 100 refereed international conference papers. He has supervised 10 PhD theses (with another 10 in progress) and 5 MSc theses (3 ongoing), and is PI of 3 ongoing research projects and 5 ongoing applied research and technology transfer agreements.

Details

Category

Duration

45 + 15

Speaker: Prof. Kwan-Liu MA (University of California-Davis)

Abstract

In the era of Big Data, visual analytics has become an important tool for scientific research, engineering design, and critical decision making. The design of a visual analytics solution must take into account the data characteristics, the media used, the users, the tasks to support, etc., each of which presents some unique requirements and challenges. These challenges demand new technical approaches and design considerations. I will discuss them using research results that my group has produced as examples.

Short Bio

Kwan-Liu Ma is a professor of computer science and the chair of the Graduate Group in Computer Science (GGCS) at the University of California-Davis, where he directs VIDI Labs and UC Davis Center of Excellence for Visualization. His research spans the fields of visualization, computer graphics, high-performance computing, and user interface design. Professor Ma received his PhD in computer science from the University of Utah in 1993. During 1993-1999, he was with ICASE/NASA Langley Research Center as a research scientist. He joined UC Davis in 1999. Professor Ma received numerous recognitions for his research contributions such as the NSF Presidential Early-Career Research Award (PECASE) in 2000, the UC Davis College of Engineering's Outstanding Mid-Career Research Faculty Award in 2007, and the 2013 IEEE VGTC Visualization Technical Achievement Award. He was elected an IEEE Fellow in 2012.

Details

Category
Host: Ivan Viola

Speaker: Prof. Chris Weaver (University of Oklahoma)

Abstract

What if we could create and manipulate data directly inside visualizations? How would the visual representation of data affect what we can do to it? How could interaction allow us to express our ideas as evolving data? Building on well-known systems and principles of interactive visualization design, I will offer a glimpse of visualization as an expressive workspace for observing and interpreting the world interactively, and present recent progress on building the foundations of such a workspace.

Short Bio

Chris Weaver is an Associate Professor in the School of Computer Science at the University of Oklahoma. He has a B.S. in Chemistry and Math from Michigan State University, an M.S. and Ph.D. in Computer Science from the University of Wisconsin-Madison, and spent 3 years as a post-doc with the GeoVISTA Center at Penn State University. In 2013 he served as Conference Chair of the IEEE Conference on Information Visualization. His research is supported in part by a 2014 NSF CAREER award and focuses on bringing people, data, and visualization together, with a special interest in supporting scholarship and learning in the humanities.

Details

Category
Host: Manuela Waldner

Speaker: Prof. Mateu Sbert (Tianjin University, Tianjin, China) (Guest talk)

In this talk I will review the application of information theory to aesthetics. Information theory, developed by Claude Shannon, provides powerful tools to study the information content of any kind of data. We will use these tools to study the low-level information content of Van Gogh paintings, and show that our results match well with the analyses of art critics.
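As a concrete example of the kind of low-level information measure involved, the following hedged sketch computes the Shannon entropy of an image's intensity histogram; it is a generic illustration, not the specific measures used in the speaker's analysis of the Van Gogh paintings.

```python
import numpy as np

def shannon_entropy(image, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram:
    H = -sum_i p_i * log2(p_i), a simple 'information content' measure."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins, taking 0 * log 0 := 0
    return float(-np.sum(p * np.log2(p)))

# Example with synthetic images; for a real painting, load the pixels with
# e.g. imageio.imread(...) and convert to grayscale in [0, 1] first.
rng = np.random.default_rng(7)
flat = np.full((256, 256), 0.5)                # uniform canvas: entropy ~ 0 bits
noisy = rng.uniform(0.0, 1.0, (256, 256))      # uniform noise: entropy ~ 8 bits
print(shannon_entropy(flat), shannon_entropy(noisy))
```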

Details

Category

Duration

45+15
Host: Ivan Viola

Speaker: Dr. Tanja Gesell (Department of Structural and Computational Biology, University of Vienna)

Structural concepts are important on many levels for molecular biology. In this talk, I will additionally underline the importance of visualization methods for structural research, while also juxtaposing information visualization as an engineering field with artistic representation as a field of fine arts.

As a first example, I will define a phylogenetic structure based on a simulation framework for (molecular) sequence evolution. On the one hand, I will discuss scientific applications, for example for selecting and filtering non-coding RNA gene candidates. From the perspective of information visualization, tools that allow for the interactive exploration of candidates and their genomic regions have been validated and evaluated. On the other hand, I will discuss some of the points and questions that arise regarding the incompleteness of any structure’s description and visualization.

As a second example, I will present our most recent development, a chemical descriptor system for small molecules: the Shannon entropy descriptor.

Details

Category

Duration

45 + 15

Speaker: Prof. Anders Ynnerman (Department of Science and Technology Linköping University, Norrköping Sweden)

This talk will show how data visualization can be used to provide public visitor venues, such as museums, science centres, and zoos, with unique interactive learning experiences. The talk will focus on how to bridge the distance from basic research to implementation in the galleries, and will discuss issues such as interaction design and storytelling. The talk will take its starting point in volumetric medical data captured with the latest modalities. By combining visualization techniques with technologies such as interactive multi-touch tables and intuitive user interfaces, visitors can conduct guided browsing of large volumetric image data. The visitors themselves then become the explorers of the normally invisible interior of unique artefacts and subjects. Demonstrations of the Inside Explorer software will be used as examples. The talk will then discuss the use of large-scale immersive environments, such as dome theatres, for science communication. The unique technical and design challenges in producing content for both playback and interactive demonstrations will be discussed. Examples will be taken from shows produced at the Norrköping Visualization Center. A live demo of the new NASA-funded OpenSpace software initiative will conclude the talk.

Details

Category

Duration

45 + 15
Host: Ivan Viola

Speaker: Prof. Arthur J. Olson (The Scripps Research Institute, US)

Biology has become accessible to an understanding of processes that span from atom to organism. As such, we now have the opportunity to model a spatio-temporal picture of living systems at the molecular level. In our recent work we attempt to create, interact with, and communicate physical representations of complex molecular environments. I will discuss the challenges and demonstrate three levels of interaction with complex molecular environments: 1) human perceptual and cognitive interaction with complex structural information; 2) interaction and integration of multiple data sources to construct cellular environments at the molecular level; and 3) interaction of software tools that can bridge the disparate disciplines needed to explore, analyze, and communicate a holistic molecular view of living systems. In order to increase our understanding of and interaction with complex molecular structural information, we have combined two evolving computer technologies, 3D printing and augmented reality [1]. We create custom tangible molecular models and track their manipulation with real-time video, superimposing text and graphics onto the models to enhance their information content and to drive interactive computation. We have developed automated technologies to construct the crowded molecular environment of living cells from structural information at multiple scales as well as bioinformatics information on levels of protein expression and other data [2]. We can populate cytoplasm, membranes, and organelles within the same structural volume to generate cellular environments that synthesize our current knowledge of such systems. Examples of applications of this technology will be discussed. The communication of complex structural information requires extensive scientific knowledge as well as expertise in creating clear visualizations. We have developed a method of combining scientific modeling environments with professional-grade 3D modeling and animation programs such as Maya, Cinema4D, and Blender [3], as well as the Unity game engine. This gives both scientists and professional illustrators access to the best tools to create and communicate the science and the art of the molecular cell.

[1] Gillet, A., Sanner, M., Stoffler, D., Olson, A.J. (2005) Tangible interfaces for structural molecular biology. Structure 13:483–491.

[2] Johnson, G.T., Autin, L., Al-Alusi, M., Goodsell, D.S., Sanner, M.F., Olson, A.J. (2015) cellPACK: a virtual mesoscope to model and visualize structural systems biology. Nat Methods 12(1):85–91.

[3] Autin, L., Johnson, G., Hake, J., Olson, A.J., Sanner, M.F. (2012) uPy: A Ubiquitous CG Python API with Biological-Modeling Applications. Computer Graphics & Applications 32(5):50–61.

Details

Category

Duration

45 + 15
Host: Ivan Viola

Speaker: Prof. Kwan-Liu Ma (University of California-Davis)

Visualization is a powerful exploration and storytelling tool for large, complex, multidimensional data. The design of a visualization solution must take into account the data characteristics, the media used, and the purpose of the visualization, each of which presents some unique challenges. These challenges suggest new topics for visualization research. I will discuss some of these topics and present related research results produced by my group.

Biography:

Kwan-Liu Ma is a professor of computer science and the chair of the Graduate Group in Computer Science (GGCS) at the University of California-Davis, where he directs VIDI Labs and UC Davis Center of Excellence for Visualization. His research spans the fields of visualization, computer graphics, high-performance computing, and user interface design. Professor Ma received his PhD in computer science from the University of Utah in 1993. During 1993-1999, he was with ICASE/NASA Langley Research Center as a research scientist. He joined UC Davis in 1999. Professor Ma received numerous recognitions for his research contributions such as the NSF Presidential Early-Career Research Award (PECASE) in 2000, the UC Davis College of Engineering's Outstanding Mid-Career Research Faculty Award in 2007, and the 2013 IEEE VGTC Visualization Technical Achievement Award. He was elected an IEEE Fellow in 2012.

Details

Category

Duration

60 + 15
Host: Ivan Viola

Speaker: Assist.-Prof. Dr. Marc Streit (JOHANNES KEPLER UNIVERSITY LINZ, Institute of Computer Graphics)

The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this talk, I will introduce CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author 'Vistories', visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. I will also discuss how the CLUE approach can be integrated into visualization tools. Finally, I will also demonstrate the general applicability of the model in multiple usage scenarios, including an example from molecular biology that illustrates how Vistories could be used in scientific journals.
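To make the provenance-capture idea more tangible, here is a minimal, hedged data-structure sketch in the spirit of CLUE: every exploration action creates a state node linked to its predecessor, so earlier states can be revisited, annotated, and strung into a simple story. The class and method names are invented for illustration and are not the actual CLUE/Vistories API.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class StateNode:
    action: str                      # e.g. "filter genes", "change color map"
    payload: Any                     # parameters or a snapshot of the vis state
    parent: Optional["StateNode"] = None
    annotation: str = ""

class ProvenanceGraph:
    def __init__(self):
        self.nodes: list[StateNode] = []
        self.current: Optional[StateNode] = None

    def capture(self, action: str, payload: Any) -> StateNode:
        node = StateNode(action, payload, parent=self.current)
        self.nodes.append(node)
        self.current = node          # exploration continues from the new state
        return node

    def jump_to(self, node: StateNode) -> None:
        self.current = node          # revisit and branch off from any earlier state

    def story(self, keyframes: list[StateNode]) -> list[tuple[str, str]]:
        return [(n.action, n.annotation) for n in keyframes]   # a simple "story"

graph = ProvenanceGraph()
a = graph.capture("load dataset", {"file": "expression.csv"})
b = graph.capture("filter", {"min_fold_change": 2.0})
b.annotation = "Only strongly regulated genes are kept."
print(graph.story([a, b]))
```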

Details

Category

Duration

45 + 15

Speaker: Scott A. Mitchell (Sandia National Laboratories)

I'll describe VoroCrust, the first algorithm for simultaneous surface reconstruction and volumetric Voronoi meshing. By surface reconstruction, I mean that weighted sample points are created on a smooth manifold, and we are tasked with building a mesh (triangulation) containing those points that approximates the surface. By Voronoi meshing, I mean that we create Voronoi cells that are well-shaped polytopal decompositions of the spaces inside and outside the manifold. By "simultaneous", I mean that the surface mesh is the interface of the two volume meshes.

VoroCrust meshes are distinguished from the usual approach of clipping Voronoi cells by the manifold, which results in many extra surface vertices beyond the original samples, and may result in non-planar, non-convex, or even non-star-shaped cells.

The VoroCrust algorithm is similar to the famous "power crust." Unlike the power crust, our output Voronoi cells are unweighted and have good aspect ratio. Moreover, there is complete freedom of how to mesh the volume far from the surface. Most of the reconstructed surface is composed of Delaunay triangles with small circumcircle radius, and all samples are vertices. In the presence of slivers, the reconstruction lies inside the sliver, interpolating between its upper and lower pair of bounding triangles, and introducing Steiner vertices.

Details

Category

Duration

45 + 15
Host: SO, MW

Speaker: Thomas Hoellt (Delft University of Technology)

To understand how the immune system works, one needs to have a clear picture of its cellular composition and the cells’ corresponding properties and functionality. Mass cytometry is a novel technique for determining the properties of single cells in unprecedented detail. This amount of detail allows for much finer differentiation, but also comes at the cost of more complex analysis. In this work, we present Cytosplore, which implements an interactive workflow for analyzing mass cytometry data in an integrated system, providing multiple linked views that show different levels of detail and enabling the rapid definition of known and unknown cell types. Cytosplore handles millions of cells, each represented as a high-dimensional data point, facilitates hypothesis generation and confirmation, and provides a significant speed-up of the current workflow.
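As a hedged illustration of the kind of embedding step such a workflow relies on (Cytosplore uses its own scalable implementations, which are not reproduced here), the sketch below projects synthetic high-dimensional "cells" to 2D with scikit-learn's t-SNE.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic "cells": three populations in a 30-marker measurement space.
populations = [rng.normal(loc=c, scale=1.0, size=(500, 30)) for c in (0.0, 3.0, 6.0)]
cells = np.vstack(populations)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(cells)
print(embedding.shape)   # (1500, 2) -- one 2D point per cell, ready for plotting
```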

Details

Category

Duration

35 + 10
Host: MEG

Speaker: Marcel Breeuwer (Eindhoven University of Technology)

This presentation will first briefly discuss the relevance of using medical image analysis and visualization in health care, and thereafter present a number of example clinical applications of image analysis and visualization in the domains of cardiovascular, neurological and oncological disease. The focus will be on using magnetic resonance imaging (MRI).

Despite the enormous amount of medical imaging research and development performed over the last decades, only a very limited number of applications are currently routinely used in clinical practice. The road from idea to a clinically adopted and widely used application will be reviewed in order to provide insight into the many steps needed to develop and introduce truly meaningful innovations.

Details

Category

Duration

45 + 15
Host: MEG

Speaker: Yingcai Wu (Zhejiang University)

Online service providers, such as Twitter, Amazon, Google, and Wikipedia, generate huge volumes of user behavior data daily, in which valuable patterns and correlations of user behaviors are hidden. For companies, effective analysis of the behavior data allows them to learn more about their customers on an unprecedented scale to improve customer relations and develop social media marketing strategies. For governments, effective tracking of the behavior data allows them to detect and predict critical events to make proper decisions in a timely manner. However, analysis of the behavior data is challenging due to the enormous amount of data and the heterogeneity of information. In my talk, I will discuss the challenges of the research on visual behavior analytics, and then give some examples of applying interactive visualization techniques to making sense of the behavior data. 

Details

Category

Duration

45 + 15
Host: IV