AI for Content Creation Workshop

CVPR 2021

June 25th



Summary

The AI for Content Creation (AICC) workshop at CVPR 2021 brings together researchers in computer vision, machine learning, and AI. Content creation has important applications ranging from virtual reality and videography to gaming, retail, and advertising. Recent progress in deep learning and machine learning techniques has made it possible to turn hours of manual, painstaking content creation work into minutes or seconds of automated work. For instance, generative adversarial networks (GANs) have been used to produce photorealistic images of items such as shoes, bags, and other articles of clothing, of interior and industrial designs, and even of computer game scenes. Neural networks can create impressive and accurate slow-motion sequences from videos captured at standard frame rates, side-stepping the need for specialized and expensive hardware. Style transfer algorithms can convincingly render the content of one image in the style of another, offering unique opportunities for generating additional, more diverse training data, in addition to creating awe-inspiring, artistic images. Learned priors can also be combined with explicit geometric constraints, allowing for realistic and visually pleasing solutions to traditional problems such as novel view synthesis, particularly in the more complex case of view extrapolation.

AI for content creation lies at the intersection of the graphics, computer vision, and design communities. However, researchers and professionals in these fields may not be aware of its full potential and inner workings. As such, the workshop comprises two parts: techniques for content creation and applications for content creation. The workshop has three goals:

  1. To cover introductory concepts that help interested researchers from other fields get started in this exciting area.
  2. To present success stories that show how deep learning can be used for content creation.
  3. To discuss pain points that designers face when using content creation tools.

More broadly, we hope that the workshop will serve as a forum to discuss the latest topics in content creation and the challenges that vision and learning researchers can help solve.

Welcome! -
Deqing Sun (Google),
Sanja Fidler (UToronto / NVIDIA),
Lu Jiang (Google),
Angjoo Kanazawa (UC Berkeley),
Ming-Yu Liu (NVIDIA),
Cynthia Lu (Adobe),
Kalyan Sunkavalli (Adobe),
James Tompkin (Brown),
Weilong Yang (Waymo).



Video Recording (YouTube Playlist)

Morning session:


Time (PDT)  Repeat viewing (PDT)  Session
08:00       20:00                 Welcome and introductions
08:15       20:15                 Kristen Grauman (UT Austin)
08:45       20:45                 Matthias Nießner (TU Munich)
09:15       21:15                 Ira Kemelmacher-Shlizerman (University of Washington)
10:00       22:00                 Tali Dekel (Weizmann Institute of Science)
10:30       22:30                 Raquel Urtasun (Waabi, University of Toronto)
11:00       23:00                 Oral session 1: Kips et al., Deep Graphics Encoder for Real Time Video Makeup Synthesis from Example
11:10       23:10                 Oral session 1: Mejjati et al., GaussiGAN: Controllable Image Synthesis with 3D Gaussians from Unposed Silhouettes
11:20       23:20                 Poster session 1


Afternoon session:


Time (PDT)  Repeat viewing (PDT, June 26th)  Session
13:00       01:00                            Sergey Tulyakov (Snap Research)
13:30       01:30                            Devi Parikh (Georgia Tech)
14:00       02:00                            Emily Denton (Google)
14:30       02:30                            Jon Barron (Google)
15:00       03:00                            Oral session 2: Jahn et al., High-Resolution Complex Scene Synthesis with Transformers
15:10       03:10                            Oral session 2: Mordvintsev et al., Texture Generation with Neural Cellular Automata
15:20       03:20                            Poster session 2

Awards

Best paper


All accepted works (in random order)

Papers (8 pages)

Extended abstracts (4 pages)

Papers (8 pages)—also in other proceedings



Previous Workshops