Next Generation Generative Models for Vision Related Tasks with Dr. Rupayan Mallick
Abstract: High-quality image generation and synthesis are of utmost importance in many computer vision tasks such as inpainting, relighting, and other image editing and generation tasks. While generative models (adversarial networks and autoencoders) have long been the state of the art, diffusion models have gained recent attention. These models are trained by adding Gaussian noise to the data and then learning to recover/denoise the noisy data. This workshop will begin by exploring the basics of generative models, including GANs and VAEs. We will then discuss diffusion models. After describing the basics, we will do some hands-on activities using these models to generate examples from applications such as inpainting and adding photorealistic effects. A basic understanding of Python programming is expected. Familiarity with neural networks would be helpful but is not required. Workshop activities will be conducted in Python.
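The noising step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration of the forward diffusion process, assuming a standard linear beta schedule; the step count and beta range are illustrative choices, not the workshop's actual settings.

```python
import numpy as np

# Illustrative linear noise schedule (assumed values, not from the workshop).
num_steps = 1000
betas = np.linspace(1e-4, 0.02, num_steps)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative signal-retention factor at each step t

def add_noise(x0, t, rng):
    """Sample a noised version of x0 at step t:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).
    A diffusion model is trained to predict eps (or x0) from x_t."""
    eps = rng.standard_normal(x0.shape)  # Gaussian noise
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = np.ones((8, 8))            # a toy "image"
xt, eps = add_noise(x0, 999, rng)  # at large t, x_t is almost pure noise
```

At small t the sample stays close to the original data; at the final step the signal coefficient is nearly zero, so the sample is essentially Gaussian noise, which is what makes the denoising objective well defined at every step.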