High Visual Computing 2019

The sixth meeting of the Czech and Slovak computer graphics and computer vision people in the mountains. The event will take place January 30 - February 2, 2019 in the Bohemian Forest (Šumava) at a nice hotel, Hotel Zadov (see the map below).

The goal of the meeting is to encourage exchange of ideas between researchers and practitioners in the fields of computer graphics and computer vision from across the Czech and Slovak Republics and beyond. There will be many (optional) outdoor activities possible during the day and fruitful talks, discussions and socializing in the afternoons and evenings.

The expected price is around 100 EUR per person (3 nights, 3x half board). Please let us know should you be interested in attending the event (please provide also your talk proposal, deadline: November 12, 2018).

Frequently Asked Questions:

Q: How do I register?
A: Send an e-mail to Martin and Jarda stating that you would like to participate. In your e-mail, include the title and a short abstract of your offered 20-minute talk. We will confirm the registration by replying to your email (yes, it is this personal). Sending your abstract does not necessarily mean you will be selected to present, but the title and abstract are still a strict requirement for registration, intended to keep the standard of the presented material consistently high.

Q: Do I need to look for accommodation?
A: Accommodation is taken care of for you - everyone is staying in the same hotel, which we have pre-booked. You will pay on the spot after you arrive. Should you have any special requirements concerning accommodation or food, please talk to Radka Hacklova.

Q: What is the conference fee and how do I pay it?
A: There is no conference fee per se. All you need to do is pay for your food and accommodation on the spot at the hotel, which we expect to be around 100 EUR per person for the three nights. We will not be collecting your money.

Q: Do I need to take care of my travel arrangements?
A: Yes, travel is up to you. We will send an info email about this to all the registered participants in due time.

Q: Summing it all up, what’s the timeline between now and the conference?
A: Easy. You send us your abstract before Nov 12th. We confirm your registration. By the end of November, we will assemble the program from the offered talks and post it online. At the beginning of December, we will send an email with practical information to all registered participants. You will have almost two months to arrange your travel. You are expected to arrive at the hotel by the afternoon of January 30th; we'll have dinner at 6 pm and the conference program will start at 7 pm.


30.1.2019 Wednesday:

10:00 - 17:00
optional socializing outdoors
18:00 - 19:00
dinner
19:00 - 19:10
opening
19:10 - 20:00
invited talk 1: Vincent Lepetit, University of Bordeaux, France: 3D Rigid and Articulated Object Registration for Robotics and Augmented Reality
20:15 - 21:30
talks 1 - 3:
21:30 - 02:00
socializing indoors

31.1.2019 Thursday:

7:30 - 9:30
breakfast
10:00 - 17:00
socializing outdoors
17:00 - 17:50
talks 4 - 5:
  • Filip Škola: Virtual reality environments for motor imagery brain-computer interface training facilitation
  • Thorsten Herfet: Enabling Multiview- and Light Field-Video for Veridical Visual Experiences
18:00 - 19:10
dinner
19:10 - 20:00
invited talk 2: Christian Theobalt, MPII Saarbrücken, Germany: Capturing and Editing Models of the Real World in Motion
20:15 - 21:30
talks 6 - 8:
21:30 - 02:00
socializing indoors

1.2.2019 Friday:

7:30 - 9:30
breakfast
10:00 - 17:00
socializing outdoors
17:00 - 17:50
talks 9 - 10:
  • Tobias Rittig: Geometry-Aware Scattering Compensation for 3D Printing
  • Thomas Auzinger: Computational Nanofabrication - How to color objects with transparent plastics
18:00 - 19:10
dinner
19:10 - 20:00
invited talk 3: Wenzel Jakob, School of Computer and Communication Sciences, EPFL, Switzerland: Capturing and rendering the world of materials
20:15 - 21:30
talks 11 - 13:
  • Vlastimil Havran: On the Advancement of BTF Measurement on Site
  • Dan Meister: Parallel Locally-Ordered Clustering for Bounding Volume Hierarchy Construction
  • Asen Atanasov: Adaptive Environment Sampling on CPU and GPU
21:30 - 02:00
socializing indoors

Invited Speakers:

Speaker 1: Vincent Lepetit, University of Bordeaux, France

Vincent Lepetit

Dr. Vincent Lepetit is a Full Professor at the LaBRI, University of Bordeaux. He also supervises a research group in Computer Vision for Augmented Reality at the Institute for Computer Graphics and Vision, TU Graz. He received the PhD degree in Computer Vision in 2001 from the University of Nancy, France, after working in the ISA INRIA team. He then joined the Virtual Reality Lab at EPFL as a post-doctoral fellow and became a founding member of the Computer Vision Laboratory. He became a Professor at TU Graz in February 2014, and at the University of Bordeaux in January 2017. His research is at the interface between Machine Learning and 3D Computer Vision, with applications to 3D hand pose estimation, feature point detection and description, and 3D object and camera registration from images. In particular, he introduced with his colleagues methods such as Ferns, BRIEF, LINE-MOD, and DeepPrior for feature point matching and 3D object recognition.

3D Rigid and Articulated Object Registration for Robotics and Augmented Reality

I will present our approach to 3D registration of rigid and articulated objects from monocular color images or depth maps. We first introduce a "holistic" approach that relies on a representation of a 3D pose suitable to Deep Networks and on a feedback loop. We also show how to tackle the domain gap between real images and synthetic images, in order to use synthetic images to train our models. Finally, I will present our recent extension to deal with large partial occlusions.

Speaker 2: Christian Theobalt, MPII Saarbrücken, Germany

Christian Theobalt

Christian Theobalt is a Professor of Computer Science and the head of the research group "Graphics, Vision, and Video" at the Max-Planck-Institute (MPI) for Informatics, Saarbrücken, Germany. He is also a Professor of Computer Science at Saarland University, Germany. From 2007 until 2009 he was a Visiting Assistant Professor in the Department of Computer Science at Stanford University. He received his MSc degree in Artificial Intelligence from the University of Edinburgh, his Diplom (MS) degree in Computer Science from Saarland University, and his PhD (Dr.-Ing.) from Saarland University and the Max-Planck-Institute for Informatics.
In his research he looks at algorithmic problems that lie at the intersection of Computer Graphics, Computer Vision and machine learning, such as: static and dynamic 3D scene reconstruction, marker-less motion and performance capture, virtual and augmented reality, computer animation, appearance and reflectance modelling, intrinsic video and inverse rendering, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering. He is also interested in using reconstruction techniques for human computer interaction.
For his work, he has received several awards, including the Otto Hahn Medal of the Max-Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, the German Pattern Recognition Award 2012, and the Karl Heinz Beckurts Award in 2017. He has received two ERC grants, among the most prestigious and competitive individual research grants in Europe: an ERC Starting Grant in 2013 and an ERC Consolidator Grant in 2017. In 2015, he was elected as one of the top 40 innovation leaders under 40 in Germany by the business magazine Capital. Christian Theobalt is also a co-founder of an award-winning spin-off company from his group that is commercializing one of the most advanced solutions for marker-less motion and performance capture.

Capturing and Editing Models of the Real World in Motion

New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, where they are used to build highly realistic models of virtual human actors or real-world environments. Furthermore, efficient, reliable, and highly accurate dynamic scene reconstruction is nowadays an important prerequisite for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.

The development of dynamic scene reconstruction methods has been a long standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods were developed that capture - without markers or scene instrumentation - rather detailed models of individual moving humans or general deforming surfaces from video recordings, and capture even simple models of appearance and lighting. However, despite this recent progress, the field is still at an early stage, and current technology is still starkly constrained in many ways. Many of today's state-of-the-art methods are still niche solutions that are designed to work under very constrained conditions, for instance: only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.

In this talk, I will present some of our recent works on detailed marker-less dynamic scene reconstruction and performance capture in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (like our VNECT approach) and hands that work in more general environments, and even in real-time and with one camera. I will also show some of our work on high-quality face performance capture and face reenactment. Here, I will also illustrate the benefits of both model-based and learning-based approaches and show how different ways to join the forces of the two open up new possibilities.


Speaker 3: Wenzel Jakob, School of Computer and Communication Sciences, EPFL, Switzerland

Wenzel Jakob

Wenzel Jakob is an assistant professor at EPFL’s School of Computer and Communication Sciences, where he leads the Realistic Graphics Lab. His research interests revolve around material appearance modeling, rendering algorithms, and the high-dimensional geometry of light paths. Wenzel Jakob is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of “Physically Based Rendering: From Theory To Implementation”.

Capturing and rendering the world of materials

One of the key ingredients of any realistic rendering system is a description of the way in which light interacts with objects, typically modeled via the Bidirectional Reflectance Distribution Function (BRDF). Unfortunately, real-world BRDF data remains extremely scarce due to the difficulty of acquiring it: a BRDF measurement requires scanning a four-dimensional domain at high resolution, an infeasibly time-consuming process.
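To get a feel for why a dense 4D scan is infeasible, here is a rough back-of-the-envelope sketch. The angular resolution and per-sample measurement time below are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope: cost of densely sampling a 4D BRDF domain.
# A BRDF depends on an incoming and an outgoing direction, i.e. four
# angles in total. Assume (illustratively) 180 samples per angular axis
# and 1 ms of measurement time per sample.

samples_per_axis = 180            # assumed angular resolution
dims = 4                          # (theta_in, phi_in, theta_out, phi_out)
total_samples = samples_per_axis ** dims

seconds_per_sample = 1e-3         # assumed gonioreflectometer speed
days = total_samples * seconds_per_sample / 86400

print(f"{total_samples:,} samples")        # over a billion samples
print(f"~{days:.0f} days of measurement")  # roughly 12 days per material
```

Even under these optimistic assumptions, one material takes on the order of weeks to scan densely, which is why an adaptive parameterization that concentrates samples in the "interesting" regions of the 4D space makes such a difference.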

In this talk, I'll showcase our ongoing work on assembling a large library of materials including metals, fabrics, organic substances like wood or plant leaves, etc. The key idea to work around the curse of dimensionality is an adaptive parameterization, which automatically warps the 4D space so that most of the volume maps to “interesting” regions. Starting with a review of BRDF models and microfacet theory, I'll explain the new model, as well as the optical measurement apparatus that we used to conduct the measurements.

Important Dates:

  • Deadline for talk proposals: November 12, 2018
  • Meeting: January 30 - February 2, 2019


Hotel Zadov, Czech Republic:

Programme and Organization Committee:

Martin Čadík, Jaroslav Křivánek

Duties: scientific program, selection of beer and everything else.