AI for Content Creation Workshop

CVPR 2020

June 15th

CVPR Virtual Conference Website: HERE

This contains links to all content and live interactions (video, text chat) for June 15th.

The AI for Content Creation workshop (AICCW) at CVPR 2020 brings together researchers in computer vision, machine learning, and AI. Content creation has many important applications, ranging from virtual reality and videography to gaming, retail, and advertising. Recent progress in deep learning and machine learning has turned hours of manual, painstaking content creation work into minutes or seconds of automated work. For instance, generative adversarial networks (GANs) have been used to produce photorealistic images of items such as shoes, bags, and other articles of clothing, of interior and industrial designs, and even of computer game scenes. Neural networks can create impressive and accurate slow-motion sequences from videos captured at standard frame rates, side-stepping the need for specialized and expensive hardware. Style transfer algorithms can convincingly render the content of one image in the style of another, offering unique opportunities for generating additional, more diverse training data, in addition to creating awe-inspiring, artistic images. Learned priors can also be combined with explicit geometric constraints, allowing for realistic and visually pleasing solutions to traditional problems such as novel view synthesis, particularly in the harder case of view extrapolation.

AI for content creation lies at the intersection of the graphics, computer vision, and design communities. However, researchers and professionals in these fields may not be aware of its full potential and inner workings. As such, the workshop comprises two parts: techniques for content creation and applications for content creation. The workshop has three goals:

  1. To cover some introductory concepts to help interested researchers from other fields get started in this exciting new area.
  2. To present selected success cases to advertise how deep learning can be used for content creation.
  3. To have invited designers discuss the pain points they face when using content creation tools.

More broadly, we hope that the workshop will serve as a forum to discuss the latest topics in content creation and the challenges that vision and learning researchers can help solve.

- Deqing Sun, Ming-Yu Liu, Lu Jiang, James Tompkin, Weilong Yang, and Kalyan Sunkavalli.


Accepted works (in random order)

Extended abstracts (4 pages)

Papers (8 pages)

Papers (8 pages)—also in other proceedings


As CVPR 2020 is now virtual due to COVID-19, the workshop will be virtual as well. More details coming soon!

Submission Instructions

We call for papers (8 pages, not including references) and extended abstracts (4 pages, not including references) to be showcased in a poster session, as well as for interactive demos, for the AI for Content Creation Workshop at CVPR 2020. Authors of accepted papers and extended abstracts will be asked to post their submissions on arXiv. Neither papers nor extended abstracts are archival, and they will not be included in the CVPR 2020 proceedings (authors should be aware that some conferences, e.g., ECCV, consider peer-reviewed works longer than 4 pages to be in violation of their double-submission policies). We will accept work in progress, work that has not been published elsewhere, and work that has been recently published elsewhere, including at CVPR 2020. In the interest of fostering a free exchange of ideas, we welcome both novel and previously published work.

Paper submissions are double blind and must use the CVPR template.

Paper submission deadline: March 31, 2020, 11:59 PM PST (extended from March 20, 2020)
Acceptance notification: April 18, 2020
Submission Website:

The best paper and the best demo will be acknowledged with a Titan RTX GPU (kindly provided by our sponsors).

We seek contributions on a variety of aspects of content creation, including but not limited to the following areas:

These include domains and applications for content creation: