Fast Style Transfer

Convert photos and videos to artwork

Using this project you can stylize any photo or video in the style of famous paintings using Neural Style Transfer.


Neural Style Transfer was first introduced in the paper “A Neural Algorithm of Artistic Style” by Gatys et al., released in 2015. It is an image transformation technique that modifies one image to match the style of another. Given two inputs, a content image and a style image, it generates a third image that keeps the contents of the content image while adopting the style (textures) of the style image. If the style image is a painting, the generated output looks like the content image painted in that style.
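In the Gatys et al. formulation, style is captured by Gram matrices of feature maps (channel-to-channel correlations), and the style loss compares these matrices between the generated and style images. A minimal numpy sketch of that idea (the function names are illustrative, not from this repository):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (H, W, C):
    channel-by-channel correlations that encode texture/style."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)   # flatten spatial dimensions
    return f.T @ f / (h * w)         # (C, C), normalized by spatial size

def style_loss(gen_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_feats)
    g_style = gram_matrix(style_feats)
    return np.mean((g_gen - g_style) ** 2)
```

In the full algorithm this loss is computed over feature maps from several layers of a pretrained network (VGG in the paper) and minimized, together with a content loss, by gradient descent on the pixels of the generated image.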

This project implements two style transfer techniques. The first, proposed by Gatys et al., introduced style transfer in 2015. The second was proposed by Justin Johnson in the paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution"; it trains a feed-forward autoencoder network to map an input image to a stylized image using the same perceptual losses described above. The advantage is that once a network has been trained for one style, it can stylize many images efficiently without optimizing each input image, which makes it fast and usable for stylizing videos.
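The practical difference is that the fast method replaces the per-image optimization loop with a single forward pass. A hedged sketch of what inference looks like (the helper names and the `model` callable are assumptions for illustration, not this repository's API):

```python
import numpy as np

def preprocess(img):
    """Scale a uint8 RGB image to float32 in [0, 1] and add a batch dim."""
    return img.astype(np.float32)[np.newaxis] / 255.0

def deprocess(batch):
    """Inverse of preprocess: clip to [0, 1] and convert back to uint8."""
    return np.rint(np.clip(batch[0], 0.0, 1.0) * 255.0).astype(np.uint8)

def stylize(model, img):
    """One forward pass through a trained transformation network --
    no per-image optimization, so it is fast enough for video frames."""
    return deprocess(model(preprocess(img)))
```

For video, the same `stylize` call is simply applied frame by frame, which is only feasible because each frame costs one forward pass rather than hundreds of optimization steps.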

I have also written posts explaining these two papers along with code; refer to them to learn more about how the techniques work.



  • For inference (generating images), any system will work, but the size of the output image is limited by the system: larger images need more memory to process. A GPU is not a must for inference, but having one is advantageous.
  • For training, a GPU is a must, with tensorflow-gpu and CUDA installed.
  • If you have no local GPU access but want to train a new style, open the notebook Fast_Style_Transfer_Colab.ipynb in Colab and train there. Google Drive is used for saving model checkpoints. You can trust this notebook, but I take no responsibility for data loss from Google Drive; before running, check the model checkpoint save path, as it can overwrite existing data with the same name.
  • Training takes around 6 hours in Colab for 2 epochs.


  • tensorflow-gpu>=2.0 or tensorflow>=2.0
  • numpy
  • matplotlib
  • pillow
  • opencv-python

This implementation is tested with tensorflow-gpu 2.0 and tensorflow-gpu 2.2 on Windows 10 and Linux.

Get Started

  • Install Python 3 or Anaconda. For detailed steps, follow the installation guide for Python 3 or Anaconda
  • Install the above packages via pip or conda. For detailed steps, follow the guides for pip and conda
  • Download some pretrained models trained on different painting styles to start playing without needing to train the network
  • Copy and unzip the checkpoints inside data/models
  • Run the scripts for image and video stylization
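The setup steps above can be sketched as shell commands; the archive name below is only an example placeholder for whichever pretrained checkpoint you downloaded:

```shell
# Install the dependencies listed above (use tensorflow-gpu if you have a GPU)
pip install "tensorflow>=2.0" numpy matplotlib pillow opencv-python

# Unpack the downloaded pretrained checkpoints into the expected folder
mkdir -p data/models
unzip checkpoints.zip -d data/models   # archive name is an example

# Then run the repository's image/video stylization scripts as described below
```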

Additional guides:

If you get stuck on Get Started steps 1 and 2, follow these additional resources.

Usage Instructions

  • Download the GitHub repository
  • Follow the README guide for using the application


Example outputs (images/videos omitted): style transfer on Jack Sparrow, on Kido from Inazuma Eleven, on a live webcam feed, and on a video.