The seventh meeting of the Czech and Slovak computer graphics and computer vision people in the mountains. The event will take place January 29 - February 1, 2020 in the Ore Mountains (Krušné hory) in a nice hotel.
The goal of the meeting is to encourage exchange of ideas between researchers and practitioners in the fields of computer graphics and computer vision from across the Czech and Slovak Republics and beyond. There will be many (optional) outdoor activities possible during the day and fruitful talks, discussions and socializing in the afternoons and evenings.
The expected price is around 165 EUR per person (3 nights, 3x half board). Please let us know if you are interested in attending the event, and please also include your talk proposal (deadline: November 25, 2019).
Q: How do I register?
A: Send an e-mail to Martin and Jarda stating that you would like to participate. In your e-mail, include the title and a short abstract of your offered 20-minute talk. Sending an abstract does not guarantee that you will be selected to present, but the title and abstract are nevertheless a strict requirement for registration, intended to keep the standard of the presented material consistently high.
Q: I am from a company and I do not have anything “sciency” enough to show. May I still come?
A: By all means! We are eager to learn what your company works on and what you work on in the company, what interesting open problems you might have, etc. That said, no shameless corporate advertising, please.
Q: What is the conference fee and how do I pay it?
A: There is no conference fee per se. All you need to do is pay for your food and accommodation on the spot at the hotel, which we expect to be around 4200 CZK = 165 EUR per person for the three nights. We will *not* be collecting your money.
Q: Do I need to look for accommodation?
A: Accommodation is taken care of for you - everyone is staying in the same hotel (http://www.hotelpraha.cz/krusne-hory/) which we have pre-booked. You will pay on the spot after you arrive. Should you have any special requirements concerning accommodation or food, please contact Radka Hacklova.
Q: Do I need to take care of my travel arrangements?
A: Yes, travel is up to you. We will send an info email about this to all the registered participants in due time.
Q: Summing it all up, what’s the timeline between now and the conference?
A: Easy. You send us your abstract before November 25th. We confirm your registration. By the end of November, we will assemble the program from the offered talks and post it online. At the beginning of December, we will send an email with practical information to all registered participants, leaving you almost two months to arrange your travel. You are expected to arrive at the hotel by the afternoon of January 29th; dinner is at 6 pm and the conference program starts at 7 pm.
Eduard Gröller is Professor at the Institute of Visual Computing & Human-Centered Technology (VC&HCT), TU Wien, where he heads the Research Unit of Computer Graphics. He is a scientific proponent and key researcher of the VRVis research center (http://www.vrvis.at/). The center performs applied research in visualization, rendering, and visual analysis. Dr. Gröller is Adjunct Professor of Computer Science at the University of Bergen, Norway. His research interests include computer graphics, visualization, and visual computing. He became a fellow of the Eurographics association in 2009. Dr. Gröller is the recipient of the Eurographics 2015 Outstanding Technical Contributions Award and of the IEEE VGTC 2019 Technical Achievement Award.
Visualization and visual computing use computer-supported, interactive, visual representations of (abstract) data to amplify cognition. In recent years, data complexity concerning volume, veracity, velocity, and variety has increased considerably. This is due to new data sources as well as the availability of uncertainty, error, and tolerance information. Instead of individual objects, entire sets, collections, and ensembles are visually investigated. This calls for visual analyses, comparative visualization, quantitative visualization, scalable visualization, and linked/integrated views. The simultaneous exploration and visualization of spatial and abstract information is an important case in point. Several examples from the computational sciences will be discussed in detail: parameter studies of dataset series; visual analytics for the exploration and assessment of segmentation errors; quantitative visual analytics with structured brushing and linked statistics; and visual comparison of 3D volumes through space-filling curves. Given the amplified data variability, interactive visual data analyses are likely to gain in importance in the future. Research challenges and directions are sketched at the end of the talk.
Jiri Bittner is an associate professor at the Department of Computer Graphics and Interaction of the Czech Technical University in Prague. He received his Ph.D. in 2003 from the same institution and then he worked for several years as a researcher at TU Wien. His research interests include visibility computations, real-time rendering, spatial data structures, and global illumination. He participated in a number of national and international research projects and several commercial projects dealing with real-time rendering of complex scenes.
During the last decade, Bounding Volume Hierarchies (BVHs) have established themselves as the most popular acceleration data structure for both GPU- and CPU-based ray tracers. The considerable research effort invested in BVHs recently culminated in the integration of this concept into the hardware-accelerated ray tracing APIs available on consumer-grade GPUs.
In my talk, I will briefly survey the principles of efficient usage of BVHs for ray tracing. I will present a fast GPU method for the bottom-up construction of BVHs that provides an excellent balance between the build times and achievable trace performance. I will also show how to optimize the BVHs directly on the GPU to maximize their trace performance. Further, I will discuss a flexible parallel BVH construction method for CPUs based on the idea of iterative BVH construction. Finally, I will show how to improve the BVH traversal performance by using ray classification with compact sequences of traversal entry points. I conclude my talk by discussing open problems related to BVHs.
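The core idea behind BVH-accelerated ray tracing can be illustrated with a toy sketch: primitives are enclosed in a tree of axis-aligned bounding boxes, and a ray skips every subtree whose box it misses. The following minimal Python illustration is purely didactic and not the speaker's method; all names are hypothetical, and it uses a naive top-down median split rather than the SAH-based and bottom-up GPU builders the talk discusses.

```python
# Toy BVH over axis-aligned bounding boxes (AABBs), for illustration only.

class AABB:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    @staticmethod
    def union(a, b):
        return AABB([min(x, y) for x, y in zip(a.lo, b.lo)],
                    [max(x, y) for x, y in zip(a.hi, b.hi)])

    def hit(self, origin, inv_dir):
        # Slab test: clip the ray's parameter interval against each axis.
        t0, t1 = 0.0, float("inf")
        for o, inv, lo, hi in zip(origin, inv_dir, self.lo, self.hi):
            ta, tb = (lo - o) * inv, (hi - o) * inv
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
        return t0 <= t1

class Node:
    def __init__(self, box, left=None, right=None, prim=None):
        self.box, self.left, self.right, self.prim = box, left, right, prim

def build(prims):
    # prims: list of (AABB, primitive_id).  Naive median split on x;
    # production builders optimize the surface area heuristic (SAH) instead.
    if len(prims) == 1:
        box, idx = prims[0]
        return Node(box, prim=idx)
    prims = sorted(prims, key=lambda p: p[0].lo[0])
    mid = len(prims) // 2
    left, right = build(prims[:mid]), build(prims[mid:])
    return Node(AABB.union(left.box, right.box), left, right)

def intersect(node, origin, inv_dir, hits):
    # Depth-first traversal: a missed box culls its entire subtree.
    if not node.box.hit(origin, inv_dir):
        return
    if node.prim is not None:
        hits.append(node.prim)
        return
    intersect(node.left, origin, inv_dir, hits)
    intersect(node.right, origin, inv_dir, hits)
```

The traversal is the part the talk's techniques accelerate further: real ray tracers use an explicit stack instead of recursion, order child visits by entry distance, and (per the abstract) can start traversal from precomputed entry points rather than the root.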
Christian Theobalt is a Professor of Computer Science and the head of the research group "Graphics, Vision, & Video" at the Max-Planck-Institute (MPI) for Informatics, Saarbrücken, Germany. He is also a Professor of Computer Science at Saarland University, Germany. From 2007 until 2009 he was a Visiting Assistant Professor in the Department of Computer Science at Stanford University. He received his MSc degree in Artificial Intelligence from the University of Edinburgh, his Diplom (MS) degree in Computer Science from Saarland University, and his PhD (Dr.-Ing.) from the Max-Planck-Institute for Informatics.
In his research he looks at algorithmic problems that lie at the intersection of computer graphics, computer vision, and machine learning, such as: static and dynamic 3D scene reconstruction, marker-less motion and performance capture, virtual and augmented reality, computer animation, appearance and reflectance modelling, intrinsic video and inverse rendering, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering. He is also interested in using reconstruction techniques for human-computer interaction. For his work, he received several awards, including the Otto Hahn Medal of the Max-Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, the German Pattern Recognition Award 2012, and the Karl Heinz Beckurts Award in 2017. He received two ERC grants, an ERC Starting Grant in 2013 and an ERC Consolidator Grant in 2017. In 2015, he was elected as one of the top 40 innovation leaders under 40 in Germany by the business magazine Capital. Christian Theobalt is also a co-founder of an award-winning spin-off company from his group - www.thecaptury.com - that is commercializing one of the most advanced solutions for marker-less motion capture.
New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, where they are used to build highly realistic models of virtual human actors or real-world environments. Furthermore, efficient, reliable, and highly accurate dynamic scene reconstruction is nowadays important for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, immersive telepresence, and even video editing.
The development of dynamic scene reconstruction methods has been a long-standing challenge in computer graphics and computer vision. Recently, the field has seen important progress: new methods were developed that capture - without markers or scene instrumentation - rather detailed models of individual moving humans or general deforming surfaces from video recordings, and even simple models of appearance and lighting. However, despite this recent progress, current technology is still starkly constrained in many ways. Many of today's state-of-the-art methods are still niche solutions designed to work only under specific constrained conditions, for instance only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.
In this talk, I will present some of our recent work on detailed marker-less dynamic scene reconstruction and performance capture, in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (like our VNect and XNect approaches) and hands that work in more general environments, even in real-time and with a single camera. I will also show some of our work on high-quality face performance capture and face reenactment. Here, I will illustrate the benefits of both model-based and learning-based approaches, and show how different ways of joining the forces of the two open up new possibilities.
Duties: scientific program, selection of beer and everything else.