The fifth meeting of the Czech and Slovak computer graphics and computer vision people in the mountains. The event will take place January 31 - February 3, 2018 in the Low Tatras, Slovakia, in a nice hotel.
The goal of the meeting is to encourage exchange of ideas between researchers and practitioners in the fields of computer graphics and computer vision from across the Czech and Slovak Republics and beyond. There will be many (optional) outdoor activities possible during the day and fruitful talks, discussions and socializing in the afternoons and evenings.
The expected price is around 150 EUR per person (3 nights, 3x half board). Please let us know if you are interested in attending the event (please also provide your talk proposal; deadline: November 12, 2017).
Matthias Hullin is a professor of Digital Material Appearance at the University of Bonn, a post he took after research appointments at the Max Planck Center for Visual Computing and Communication and the University of British Columbia. He obtained a doctorate in computer science from Saarland University (2010) with a dissertation that was awarded the Otto Hahn Medal of the Max Planck Society, and a Diplom in physics from the University of Kaiserslautern (2007). His research focuses on the interface between computer graphics and the physics of light.
A large part of computer graphics deals with the computationally efficient modeling and simulation of light propagating across many scales. The result of this simulation, by default, is meant to be consumed by humans, and so the goal is to achieve a high degree of "plausibility" or "realism". Ideally, however, the outcome of any graphics algorithm should be close to a reliable physical prediction. A central aspect of my research is dedicated to converting computer graphics methodology into a set of technical devices for solving problems in other fields. In my talk, I will illustrate this notion of "inverse computer graphics" by a selection of use cases. Examples include the calibration of free-form optical systems, the tracking of moving objects outside the line of sight, and the 3-dimensional reconstruction of fluid mixing processes.
Prof. Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London. He received his PhD degree from Stanford University under the guidance of Leonidas Guibas. His research interests include shape analysis, computational design and fabrication, and geometry processing. Niloy received the ACM Siggraph Significant New Researcher Award in 2013 and the BCS Roger Needham Award in 2015. His work has twice been featured as a research highlight in the Communications of the ACM, received the Best Paper Award at the ACM Symposium on Geometry Processing 2014, and an Honourable Mention at Eurographics 2014. Besides research, Niloy is an active DIYer and loves reading, bouldering, and cooking. More details at http://geometry.cs.ucl.ac.uk/.
Discovering 3D arrangements of objects from single indoor images is important given its many applications, including interior design, content creation, etc. Although heavily researched in recent years, existing approaches break down under medium or heavy occlusion, as the core object detection module starts failing in the absence of directly visible cues. Instead, we take into account holistic contextual 3D information, exploiting the fact that objects in indoor scenes co-occur mostly in typical near-regular configurations. First, we use a neural network trained on real annotated indoor images to extract 2D keypoints, and feed them to a 3D candidate object generation stage. Then, we solve a global selection problem among these 3D candidates using pairwise co-occurrence statistics discovered from a large 3D scene database. We iterate the process, allowing candidates with low keypoint response to be incrementally detected based on the locations of the already discovered nearby objects. We demonstrate significant performance improvement over combinations of state-of-the-art methods, especially for scenes with moderately to severely occluded objects.
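The selection-and-iteration idea in the abstract can be illustrated with a minimal greedy sketch. This is not the authors' method; the function names, scoring rule, and threshold are hypothetical, and the real system solves a global selection problem rather than a greedy one. The sketch only shows how a weakly detected object can be pulled in once a strongly co-occurring neighbour is already selected:

```python
# Hypothetical illustration (not the actual algorithm from the talk):
# given 3D candidate objects with unary keypoint-response scores and a table
# of pairwise co-occurrence statistics, greedily grow a consistent subset.
def select_candidates(candidates, cooccurrence, threshold=0.5):
    """candidates: dict name -> keypoint-response score in [0, 1].
    cooccurrence: dict frozenset({a, b}) -> pairwise score in [0, 1]."""
    selected = []
    remaining = sorted(candidates, key=candidates.get, reverse=True)
    while remaining:
        best, best_score = None, threshold
        for c in remaining:
            # Combine unary evidence with context from already-selected objects,
            # so a low-response candidate can pass once its neighbours are in.
            context = [cooccurrence.get(frozenset({c, s}), 0.0) for s in selected]
            score = candidates[c] + (max(context) if context else 0.0)
            if score > best_score:
                best, best_score = c, score
        if best is None:  # nothing left clears the threshold
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a strong "bed" detection, a weak "nightstand" (response 0.2) is recovered via its high co-occurrence with beds, while an out-of-context "toaster" stays rejected.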
Ondřej Chum received the MSc degree in computer science from Charles University, Prague, in 2001 and the PhD degree from the Czech Technical University in Prague in 2005. From 2005 to 2006, he was a research fellow at the Centre for Machine Perception, Czech Technical University. From 2006 to 2007 he was a post-doc at the Visual Geometry Group, University of Oxford, UK. He is now an associate professor back at the Centre for Machine Perception. His research interests include object recognition, large-scale image and particular-object retrieval, invariant feature detection, and RANSAC-type optimization. He has co-organized the "25 Years of RANSAC" workshop in conjunction with CVPR 2006, the Computer Vision Winter Workshop 2006, and the Vision and Sports Summer School (VS3) in Prague in 2012 and 2014. He was the recipient of the runner-up award for the "2012 Outstanding Young Researcher in Image & Vision Computing" by the Journal of Image and Vision Computing, given to researchers within seven years of their PhD, and of the Best Paper Prize at the British Machine Vision Conference in 2002. In 2013, he was awarded an ERC-CZ grant.
Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement comes at the cost of extensive manual annotation, required either to train from scratch or to fine-tune for the target task. In the talk, I will address fine-tuning of CNNs for image retrieval and for sketch-based image retrieval in a fully automated manner.
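One common ingredient of such retrieval fine-tuning is a pairwise loss on descriptor embeddings, computed over automatically mined matching and non-matching image pairs. The following is a minimal sketch of a standard contrastive loss, not the speaker's actual code; the margin value and function name are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch (not the talk's implementation): the contrastive loss
# often used to fine-tune a CNN embedding for retrieval, driven by
# automatically mined matching / non-matching image pairs.
def contrastive_loss(f1, f2, label, margin=0.7):
    """f1, f2: descriptor vectors; label: 1 = matching pair, 0 = non-matching;
    margin: how far apart non-matching descriptors should be pushed."""
    d = np.linalg.norm(f1 - f2)
    if label == 1:
        return 0.5 * d ** 2                      # pull matching pairs together
    return 0.5 * max(0.0, margin - d) ** 2       # push non-matching pairs apart
```

A non-matching pair already farther apart than the margin contributes zero loss, so training effort concentrates on hard negatives.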
Krystian Mikolajczyk did his undergraduate studies at the University of Science and Technology (AGH) in Krakow, Poland. He completed his PhD degree at the Institut National Polytechnique de Grenoble, France. He then worked as a research assistant at INRIA, the University of Oxford, and the Technical University of Darmstadt (Germany), before joining the University of Surrey as a Lecturer, and Imperial College London as a Reader in 2015. His main area of expertise is image and video recognition, in particular problems related to matching, representation, and learning. He has participated in a number of EU and UK projects in the area of image and video analysis. He publishes in computer vision, pattern recognition, and machine learning forums. He has served in various roles at major international conferences, co-chairing the British Machine Vision Conference in 2012 and 2017 and the IEEE International Conference on Advanced Video and Signal-Based Surveillance in 2013. In 2014 he received the Longuet-Higgins Prize awarded by the Technical Committee on Pattern Analysis and Machine Intelligence of the IEEE Computer Society.
Matching local image features has been an active area of research for many decades. As in all other computer vision tasks, CNN-based techniques have significantly pushed the limits of the state of the art, but how good is local matching these days, actually? Are there any remaining challenges relevant to practical applications? I will discuss some recent results in detecting and matching keypoints, as well as how we are approaching the problem of multi- and cross-modal matching.
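The classical baseline that learned matchers are measured against is nearest-neighbour descriptor matching with Lowe's ratio test. A minimal sketch, assuming plain Euclidean descriptors; the function name and the conventional 0.8 threshold are illustrative choices, not specifics from the talk:

```python
import numpy as np

# Minimal sketch of classical local feature matching with Lowe's ratio test:
# accept a match only when the nearest neighbour is clearly closer than the
# second nearest, rejecting ambiguous correspondences.
def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """desc_a: (n, d) array of descriptors; desc_b: (m, d), m >= 2.
    Returns a list of (i, j) index pairs of unambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all of B
        j1, j2 = np.argsort(dists)[:2]               # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:            # ratio test
            matches.append((i, int(j1)))
    return matches
```

Repetitive structures (windows, tiles) produce near-identical second neighbours and are filtered out by the test, which is one reason such scenes remain a practical challenge.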
Duties: scientific program, selection of beer and everything else.