Block-Seminar on Deep Learning

apl. Prof. Olaf Ronneberger (Google DeepMind)

In this seminar you will learn about recent developments in deep learning, with a focus on images and videos and their combination with other modalities such as language. The surprising emergent capabilities of large language models (like GPT-4) open up new design spaces. Many classic computer vision tasks can be translated into the language domain and (partially) solved there, as sketched below. Understanding the current capabilities, the shortcomings, and the approaches in the language domain will be essential for future computer vision research. The selected papers this year therefore focus on the key concepts used in today's large language models as well as on approaches to combine computer vision with language.
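As one illustration of casting a vision task into the language domain, the sketch below performs zero-shot image classification with CLIP: the candidate labels are written as text prompts and scored against the image, so no fixed classification head is needed. This is a minimal sketch for orientation only, not part of the seminar material; the model checkpoint, image path, and label set are illustrative assumptions.

    # Zero-shot image classification via text prompts (CLIP): one common way to
    # recast a classic vision task in the language domain. The checkpoint,
    # image path, and labels below are illustrative assumptions.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("example.jpg")  # any test image
    labels = ["cat", "dog", "car"]
    prompts = [f"a photo of a {label}" for label in labels]

    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image  # image-text similarity, shape (1, 3)
    probs = logits.softmax(dim=-1)[0]          # probabilities over the candidate labels
    print(dict(zip(labels, probs.tolist())))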

For each paper, one student will perform a detailed investigation of the paper and its background and give a presentation (time limit: 35-40 minutes). The presentation is followed by a discussion with all participants about the merits and limitations of the respective paper. You will learn to read and understand contemporary research papers, to give a good oral presentation, to ask questions, and to openly discuss a research problem. The maximum number of students that can participate in the seminar is 10.

The introduction meeting (together with Thomas Brox's seminar) will be in person, while the mid-semester meeting will be online. The block seminar itself will be in person to give you the chance to practise your real-world presentation skills and to have livelier discussions.


Contact person: Karim Farid

Blockseminar: (2 SWS)
In person.
Date: tbd, between mid-February and mid-April 2026.

Room: tbd

Beginning: If you want to participate, attend the mandatory introduction meeting (held jointly with the Seminar on Current Works in Computer Vision) on October 15th, 14:00, register in HisInOne, and submit your paper preferences before October 21st.

Mid-Semester Lecture: (tbd, via video conference) Introduction to Generative models by apl. Prof. Olaf Ronneberger (Google DeepMind)

Recommended semester: 6 (Bachelor), any (Master)
Requirements: Background in computer vision

Remarks: This course is offered to both Bachelor and Master students. The language of this course is English. All presentations must be given in English.

Topics will be assigned for both seminars via preference voting. If there are more interested students than places, first priority will be given to students who attended the introduction meeting. Afterwards, we follow the assignments of the HisInOne system. We want to avoid students grabbing a topic and then dropping out during the semester, so please take at least a coarse look at all available papers to make an informed decision before you commit. If you do not attend the meeting (or do not send a paper preference) but choose this seminar together with only other overbooked seminars in HisInOne, you may end up without a seminar place this semester.

Students who only need the attendance credit (failed SL from a previous semester) need not send a paper preference; just reply with "SL only".

All participants must read all papers and answer a few questions. The questions will be available here. The answers must be sent to the corresponding advisor before the beginning of the seminar. We highly recommend reading and understanding all papers before you start preparing your presentation.

[Figure: Vision model architecture from Qwen3-VL]

Material from Thomas Brox's seminar:

Papers:

The seminar has space for 10 students.

No | Paper title and link | Comments | Student | Advisor
B1 | Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model | | Yashi Singh |
B2 | OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows | | Philipp Laur |
B3 | Hunyuan3D 2.1: From Images to High-Fidelity 3D Assets with Production-Ready PBR Material | | Noah Berg |
B4 | InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency | We will select a part of this paper | Manuel Di Biase |
B5 | gpt-oss-120b & gpt-oss-20b Model Card | see also https://openai.com/open-models/ | Louis Geiger |
B6 | OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning | | Luca Porta |
B7 | Video models are zero-shot learners and reasoners | see also https://video-zero-shot.github.io/ | Luis Felipe Barragan Rodriguez |
B8 | HunyuanImage 3.0 Technical Report | | Paul Luca Schakau |
B9 | Qwen3-VL | see also https://arxiv.org/abs/2505.09388 | Ritika Gupta |
B10 | RLP: Reinforcement as a Pretraining Objective | see also https://research.nvidia.com/labs/adlr/RLP/ | Sharmila Karki |