The ninth meeting of the Czech and Slovak image processing, computer graphics and computer vision people in the mountains. The event will take place January 31st - February 3rd, 2024 in Hrubý Jeseník (aka High Ash Mountains, Altvatergebirge, Wysoki Jesionik), Czechia, at Resort Dlouhé Stráně: https://www.hotelds.cz/en/ (see the map).
The goal of the meeting is to encourage the exchange of ideas between researchers and practitioners in the fields of image processing, computer graphics and computer vision from across the Czech and Slovak Republics and beyond. There will be many (optional) outdoor activities possible during the day, and fruitful talks, discussions and socializing in the afternoons and evenings.
The expected price is around 180 EUR per person (3 nights, 3x half board). Please let us know should you be interested in attending the event (please also provide your talk proposal - extended deadline: December 15th, 2023 - and book the hotel yourself, deadline: January 10th, 2024).
Q: How do I register?
A:
Please fill out the form. While registering, you will be asked to include the title and a short abstract of your offered 20-minute talk. Sending your abstract does not necessarily mean you will be selected to present, but the title and abstract are still a strict requirement for the registration - intended to keep the standard of the presented material consistently high. Remember: one person registered = one talk title/abstract. Once you fill out the form and pay the deposit to the hotel, consider the registration confirmed.
Q: I am from a company and I do not have anything "sciency" enough to show. May I still come?
A: By all means! We are eager to learn what your company works on and what you work on in the company, what interesting open problems you might have, etc. That said, no shameless corporate advertising, please.
Q: What is the conference fee and how do I pay it?
A: There is no conference fee per se. All you need to do is pay for your food and accommodation on the spot at the hotel, which we expect to be around 180 EUR per person for the three nights. We will *not* be collecting your money.
Q: Do I need to look for accommodation?
A: Yes, please book your room by sending an email to obchod@hotelds.cz (CC to hacklova@sisal.mff.cuni.cz) with the subject HiVisComp 2024. Please indicate the dates of your stay and, in the case of family groups, the number of persons (adults and children), as well as the billing address to which you wish to have your invoice issued. Everyone is staying in the same hotel, which is pre-booked for us. You will pay a deposit in advance and the rest on the spot after you arrive. Should you have any special requirements concerning accommodation or food, please contact Radka Hacklova.
Q: Do I need to take care of my travel arrangements?
A: Yes, travel is up to you. We will send an info email about this to all the registered participants in due time.
Q: Summing it all up, what’s the timeline between now and the conference?
A: Easy. You fill out the registration form, including the offered talk abstract, before the submission deadline. Then you need to pay a deposit to the hotel by January 10, 2024 at the latest. By the end of November, we will assemble the program from the offered talks and post it online. At the beginning of December, we will send an email with practical information to all registered participants. You will have almost two months to arrange your travel. You are expected to arrive at the hotel by the afternoon of January 31st; we will have dinner at 6pm and the conference program will start at 7pm.
Rafał K. Mantiuk is a Professor of Graphics and Displays at the Department of Computer Science and Technology, University of Cambridge (UK). He received his Ph.D. from the Max Planck Institute for Computer Science (Germany). His recent interests focus on computational displays, rendering and imaging algorithms that adapt to human visual performance and deliver the best image quality given limited resources, such as computation time or bandwidth. He contributed to early work on high dynamic range imaging, including quality metrics (HDR-VDP), video compression and tone mapping. More recently, he led an ERC-funded project on a capture and display system that passed the visual Turing test - 3D objects were reproduced with a fidelity that made them indistinguishable from their real counterparts.
Today's computer graphics techniques make it possible to create imagery that is hardly distinguishable from photographs. However, a photograph is clearly no match for an actual real-world scene. I argue that the next big challenge is to achieve perceptual realism by creating artificial imagery that would be hard to distinguish from reality. This requires profound changes in the entire imaging pipeline, from acquisition and rendering to display, with a strong focus on visual perception.
In this talk, I will outline the challenges that we need to overcome to reach perceptual realism and will give several examples of solutions that bring us close to that goal. This will include the work on the high-dynamic-range multi-focal stereo display, which allowed us to achieve perceptual parity with the real world - a level of fidelity at which observers confused a real object with its virtual counterpart. I will also talk about visual models and metrics, which let us quantify results in graphics and control rendering methods.
Bedrich Benes is a Professor of Computer Science at Purdue University. He received his Ph.D. from Czech Technical University in Prague in 1998. Bedrich is a Fellow of the European Association for Computer Graphics (Eurographics) and a senior member of ACM and IEEE. He is the editor-in-chief of Elsevier Graphical Models, and he was papers co-chair of Eurographics 2017. Dr. Benes works in generative methods for geometry synthesis and deep learning, focusing on procedural and inverse procedural modeling, simulation of natural phenomena, and additive manufacturing. He has published over 200 research papers and has been sponsored by the National Science Foundation, NASA, Adobe Research, Intel, Siemens, Samsung, the Department of Energy, and Ford Inc., among others. Bedrich is a Purdue University faculty scholar.
Trees are among the most visually appealing and complex structures in nature. They have an immeasurable effect on humans as an essential part of the atmosphere, and they also have a substantial positive effect on our well-being. Computer science has tried to capture and decipher tree shape and its development for over forty years, with one of the goals being to build the tree digital twin - a computer representation that responds to the environment, is simulation-ready, and can be used to answer "what-if" scenarios.
We will present several recent algorithms that provide deep neural representations of mathematical models of trees, including their development. We will show how tree representations can be learned from data, particularly from vegetation captured using LiDAR or single images. We will discuss how L-systems can be learned by a transformer and how a novel deep representation can encapsulate a tree's environmental and growth parameters by learning them from simulated data.
Prof. Ariel Shamir is a faculty member and the former Dean of the Efi Arazi School of Computer Science at Reichman University (the Interdisciplinary Center) in Israel. He received his Ph.D. in computer science in 2000 from the Hebrew University of Jerusalem and spent two years as a postdoc at the University of Texas at Austin. Prof. Shamir has numerous publications and a number of patents. He was named one of the most highly cited researchers on the Thomson Reuters list in 2015, was the recipient of several best paper awards at SIGGRAPH, has co-authored a paper chosen as one of the seminal computer graphics papers, and received the AsiaGraphics Outstanding Technical Contributions Award in 2023. Prof. Shamir has broad commercial experience consulting for various companies. He specializes in computer graphics, image processing, computer vision and machine learning. He is a member of the ACM SIGGRAPH, IEEE Computer, AsiaGraphics and EuroGraphics associations.
Visual abstractions constitute a large part of human creation and communication: they are used for illustrations, instructions and explanations, as well as for pure enjoyment and entertainment. For these reasons, the ability to understand and create visual abstractions is also crucial for the development of Artificial Intelligence (AI) agents and algorithms. In this talk I will present several recent efforts to understand and create visual abstractions. In CLIPasso, we show how to convert an image of an object to a sketch, allowing for varying levels of abstraction while preserving its key visual features. In CLIPascene, we disentangle abstraction into two different axes, fidelity and simplicity, and show how to convert a whole scene image into a sketch with different types and levels of abstraction. In these works we rely on large vision and language models to support semantic understanding. I will also demonstrate how such models can be used in semantic typography to create word-as-image illustrations automatically, and in design, to decompose a visual concept, represented as a set of images, into different visual aspects encoded in a hierarchical tree structure for exploration and inspiration. Several of these works have won awards at SIGGRAPH and SIGGRAPH Asia, and were developed jointly with Yael Vinker and other colleagues.
Duties: scientific program, selection of beer and everything else.