The AI for Content Creation (AICC) workshop at CVPR 2021 brings together researchers in computer vision, machine learning, and AI. Content creation has many important applications, ranging from virtual reality and videography to gaming, retail, and advertising. Recent progress in deep learning and machine learning has turned hours of manual, painstaking content creation work into minutes or seconds of automated work. For instance, generative adversarial networks (GANs) have been used to produce photorealistic images of items such as shoes, bags, and other articles of clothing, interior/industrial designs, and even scenes in computer games. Neural networks can create impressive and accurate slow-motion sequences from videos captured at standard frame rates, sidestepping the need for specialized and expensive hardware. Style transfer algorithms can convincingly render the content of one image in the style of another, offering unique opportunities for generating additional, more diverse training data, as well as for creating awe-inspiring artistic images. Learned priors can also be combined with explicit geometric constraints, enabling realistic and visually pleasing solutions to traditional problems such as novel view synthesis, particularly in the more complex case of view extrapolation.
AI for content creation lies at the intersection of the graphics, computer vision, and design communities. However, researchers and professionals in these fields may not be aware of its full potential and inner workings. As such, the workshop comprises two parts: techniques for content creation and applications for content creation. The workshop has three goals:
More broadly, we hope that the workshop will serve as a forum to discuss the latest topics in content creation and the challenges that vision and learning researchers can help solve.
Welcome!
Deqing Sun (Google),
Sanja Fidler (UToronto / NVIDIA),
Lu Jiang (Google),
Angjoo Kanazawa (UC Berkeley),
Ming-Yu Liu (NVIDIA),
Cynthia Lu (Adobe),
Kalyan Sunkavalli (Adobe),
James Tompkin (Brown),
Weilong Yang (Waymo).
Click ▶ to jump to each talk!
| Time (PDT) | Repeat viewing (26th) | Talk |
|---|---|---|
| 13:00 | 01:00 | Sergey Tulyakov (Snap Research) |
| 13:30 | 01:30 | Devi Parikh (Georgia Tech) |
| 14:00 | 02:00 | Emily Denton (Google) |
| 14:30 | 02:30 | Jon Barron (Google) |
| 15:00 | 03:00 | Oral session 2: Jahn et al., High-Resolution Complex Scene Synthesis with Transformers |
| 15:10 | 03:10 | Oral session 2: Mordvintsev et al., Texture Generation with Neural Cellular Automata |
| 15:20 | 03:20 | Poster session 2 |