The fourth meeting of Czech and Slovak computer graphics and computer vision people in the mountains. The event will take place on February 1–4, 2017, in the Beskydy mountains, in a nice hotel:
The goal of the meeting is to encourage the exchange of ideas between researchers and practitioners in the fields of computer graphics and computer vision from across the Czech and Slovak Republics and beyond. Many optional outdoor activities will be possible during the day, followed by fruitful talks, discussions, and socializing in the afternoons and evenings.
The expected price is around 90 EUR per person (3 nights, 3x half board). Please let us know if you are interested in attending the event, and please also include your talk proposal (deadline: 30 November 2016).
Manuel M. Oliveira is an Associate Professor of Computer Science at the Federal University of Rio Grande do Sul (UFRGS), in Brazil. He received his PhD from the University of North Carolina at Chapel Hill, in 2000. Before joining UFRGS in 2002, he was an Assistant Professor of Computer Science at the State University of New York at Stony Brook (2000 to 2002). In the 2009-2010 academic year, he was a Visiting Associate Professor at the MIT Media Lab. His research interests cover most aspects of computer graphics, but especially the frontiers among graphics, image processing, and vision (both human and machine). In these areas, he has contributed a variety of techniques including relief texture mapping, real-time filtering in high-dimensional spaces, efficient algorithms for Hough transform, new physiologically-based models for color perception and pupil-light reflex, and novel interactive techniques for measuring visual acuity. Dr. Oliveira was program co-chair of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2010 (I3D 2010), and general co-chair of ACM I3D 2009. He is an Associate Editor of IEEE TVCG and IEEE CG&A, and a member of the CIE Technical Committee TC1-89 “Enhancement of Images for Colour Defective Observers”. He received the ACM Recognition of Service Award in 2009 and in 2010.
Edge-aware filtering is a fundamental building block for a wide range of image and video processing and computer graphics applications. These include detail enhancement, denoising, stylization and non-photorealistic effects, recoloring and colorization, tone mapping, and photon-map filtering, to name a few. While an edge-preserving filter can be implemented as a convolution with a spatially-invariant kernel in high-dimensional space, performing such an operation is computationally expensive, preventing its use in interactive and real-time scenarios. The talk will present two recent techniques we have developed for efficiently performing edge-aware filtering. The first is based on a domain transform that defines an isometry between curves on 2-D image manifolds (embedded in n-D space) and the real line. High-dimensional geodesic filtering is then performed in linear time as a sequence of 1-D filtering steps using a spatially-invariant kernel. The second technique focuses on Euclidean filtering. It works by sampling and filtering the input signal using a set of 2-D manifolds adapted to the original data. It essentially queries the value of a multivariate function in n-D by interpolating several scattered samples using normalized convolution. Its cost is linear both in the number of pixels and in the dimensionality of the space in which the filter operates. These techniques have many desirable features. In particular, they are significantly faster than previous approaches, supporting high-dimensional filtering of images and videos in real time. In the talk, I will show several examples illustrating their use in image and video processing and computer graphics applications.
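To make the first idea concrete, the 1-D recursive step at the heart of a domain-transform filter can be sketched as follows. This is an illustrative re-implementation in Python/NumPy under common conventions (spatial and range standard deviations named sigma_s and sigma_r), not the authors' code: the signal is warped into a domain where distances grow across strong edges, and a simple recursive exponential filter applied there smooths flat regions while crossing edges only weakly.

```python
import numpy as np

def domain_transform_filter_1d(signal, sigma_s=10.0, sigma_r=0.3):
    """Edge-aware 1-D smoothing via a domain transform (sketch)."""
    I = np.asarray(signal, dtype=float)
    # Discrete derivative of the domain transform: 1 + (sigma_s/sigma_r)*|I'|.
    # Distances between neighbors become large across strong edges.
    dct = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(I))
    # Feedback coefficient of the recursive exponential filter.
    a = np.exp(-np.sqrt(2.0) / sigma_s)
    weights = a ** dct  # near zero across edges, close to `a` in flat regions
    # Causal (left-to-right) and anti-causal (right-to-left) passes
    # give a symmetric smoothing response.
    J = I.copy()
    for n in range(1, len(J)):
        J[n] += weights[n - 1] * (J[n - 1] - J[n])
    for n in range(len(J) - 2, -1, -1):
        J[n] += weights[n] * (J[n + 1] - J[n])
    return J
```

For a 2-D image, the same 1-D step would be applied alternately along rows and columns; the point of the sketch is only that each pass is linear-time with a spatially-invariant kernel, as described above.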
Elmar Eisemann is a professor at TU Delft, heading the Computer Graphics and Visualization Group. Before that, he was an associate professor at Telecom ParisTech (until 2012) and a senior scientist heading a research group in the Cluster of Excellence (Saarland University / MPI Informatik) (until 2009). He studied at the École Normale Supérieure in Paris (2001-2005) and received his PhD from the University of Grenoble at INRIA Rhône-Alpes (2005-2008). He spent several research visits abroad, at the Massachusetts Institute of Technology (2003), the University of Illinois Urbana-Champaign (2006), and Adobe Systems Inc. (2007, 2008). His interests include real-time and perceptual rendering, alternative representations, shadow algorithms, global illumination, and GPU acceleration techniques. He coauthored the book “Real-time shadows” and participated in various committees and editorial boards. He was local organizer of EGSR 2010 and 2012 and HPG 2012, and is paper chair of HPG 2015. His work received several distinction awards and he was honored with the Eurographics Young Researcher Award 2011.
Computer Graphics has become a discipline of utmost importance for many domains. Besides the entertainment sector, a large variety of areas, including the medical domain, geoscience, and architecture, can profit from recent advances in rendering and visualization. As most user interaction in real-time applications relies on visual feedback, it is important to reduce the involved computations to a minimum. In this talk, we will discuss a variety of challenges that are common to many applications. Among others, we will describe recent solutions for representing highly-detailed volumetric data sets, focusing on rendering performance as well as memory efficiency. Furthermore, to obtain convincing virtual environments, we will discuss the reproduction of physical phenomena in real time. We will also illustrate the importance of incorporating findings about the human visual system to improve performance and quality, before concluding with an outlook on the future.
Bernd Bickel holds a Master's degree in Computer Science (2006) and a Ph.D. degree (2010) from ETH Zurich. From 2011 to 2012, Bernd was a visiting professor at TU Berlin, and in 2012 he became a research scientist and research group leader at Disney Research. In early 2015 he joined IST Austria, where he is an Assistant Professor, heading the Computer Graphics and Digital Fabrication group. He is a computer scientist interested in computer graphics and its overlap with animation, biomechanics, material science, and digital fabrication. His main objective is to push the boundaries of how digital content can be efficiently created, simulated, and reproduced.
While access to 3D-printing technology is becoming ubiquitous and provides revolutionary possibilities for fabricating complex, functional, multi-material objects with stunning properties, its potential impact is currently significantly limited by the lack of efficient and intuitive methods for content creation. Existing tools are usually restricted to expert users, have been developed based on the capabilities of traditional manufacturing processes, and do not sufficiently take fabrication constraints into account. Scientifically, we face the fundamental challenge that existing simulation techniques and design approaches for predicting the physical properties of materials and objects at the resolution of modern 3D printers are too slow and do not scale with increasing object complexity. The problem is extremely challenging because real-world materials exhibit extraordinary variety and complexity.
I will describe recent progress in the area of computational fabrication towards more intuitive design. In particular, I will focus on simulation approaches that combine data-driven and physically-based modeling, providing both the required speed and accuracy through smart precomputations and tailored simulation techniques that operate on the data. I will also discuss strategies that allow us to characterize the design space, and computational approaches to navigate it. All approaches will be illustrated with examples addressing important properties of real-world objects, such as appearance, deformation, or sensing capabilities.
Duties: scientific program, selection of beer and everything else.