Old Movie Colorization Using Deep Learning Techniques

Abstract

Colors can express the emotions of a scene even without a word being spoken. This work applies deep learning techniques to the automatic colorization of old black-and-white movies. Colorization is a one-to-many task: a single-channel image is mapped to a three-channel image, so many solutions are possible. However, if a model understands a scene well and is given proper domain knowledge, only a few plausible solutions remain. We use Generative Adversarial Networks (GANs), in which two networks trying to overpower each other end up making each other better at their respective jobs, to generate realistic color images. To express a scene, multiple parts of an image must work in harmony with distant parts. We translate this non-local harmony into our model by adding a Self-Attention module alongside CNNs; this module resolves the problem of color bleeding in many instances. We have also created our own dataset whose domain is close to that of most old black-and-white movies.
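As a supplementary illustration of the Self-Attention idea mentioned above, the sketch below shows a minimal self-attention block over CNN feature maps, in the style popularized by SAGAN. It is not the paper's exact module: the channel-reduction factor of 8, the learned blending scalar `gamma`, and all layer names are assumptions made for this example.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Minimal spatial self-attention over CNN feature maps (SAGAN-style sketch).

    Every spatial position attends to every other position, letting the
    network coordinate colors across distant parts of the image.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        # The //8 channel reduction is an illustrative assumption.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learned blend weight; starting at 0 means the block is initially
        # an identity mapping and attention is phased in during training.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = torch.softmax(q @ k, dim=-1)           # (b, h*w, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


# Usage: the block preserves the feature-map shape, so it can be dropped
# between convolutional layers of a colorization generator.
feats = torch.randn(1, 64, 16, 16)
out = SelfAttention(64)(feats)
```

Because the output shape matches the input shape, the module can be inserted at any intermediate layer of the generator without changing the surrounding architecture.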
