
Enhanced PULSE: EPULSE

EPULSE turns a low-resolution face (256x256) into a somewhat realistic high-resolution face (1024x1024).

  • Added a FaceNet identity loss to keep the generated face similar to the input (see the sketch below the install command)
  • Faces are aligned to 256x256
  • Default trainable noise layers: 20
  • Default steps: 1000

Note: facenet-pytorch is required.

pip install facenet-pytorch
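As a rough illustration of the kind of identity loss this adds, here is a minimal sketch using facenet-pytorch's pretrained InceptionResnetV1. The function name, input scaling, and weighting are illustrative assumptions, not the exact code in this repository.

import torch
import torch.nn.functional as F
from facenet_pytorch import InceptionResnetV1

# Pretrained FaceNet embedder (512-d embeddings), frozen during optimization.
facenet = InceptionResnetV1(pretrained='vggface2').eval()
for p in facenet.parameters():
    p.requires_grad_(False)

def facenet_loss(generated, reference):
    """Identity loss: 1 - cosine similarity between FaceNet embeddings.

    `generated` and `reference` are float tensors of shape (N, 3, H, W)
    scaled to roughly [-1, 1]; both are resized to the 160x160 input
    size FaceNet was trained on.
    """
    g = F.interpolate(generated, size=160, mode='bilinear', align_corners=False)
    r = F.interpolate(reference, size=160, mode='bilinear', align_corners=False)
    return (1 - F.cosine_similarity(facenet(g), facenet(r), dim=1)).mean()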

Pre-trained models to download

How to run?

  1. put your LR photos into realpics and resize them to 1024 (a sketch is shown after this list)
  2. run align_face.py to align the faces into the input folder
  3. run run.py
  4. check your results in the runs folder
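For step 1, a small sketch that bicubically resizes every image in realpics to 1024x1024 with Pillow; the exact resizing method and file handling are up to you and not prescribed by this repository.

from pathlib import Path
from PIL import Image

# Resize every photo in realpics/ to 1024x1024 before alignment.
for path in Path('realpics').glob('*'):
    if path.suffix.lower() not in {'.jpg', '.jpeg', '.png'}:
        continue
    img = Image.open(path).convert('RGB')
    img = img.resize((1024, 1024), Image.BICUBIC)
    img.save(path)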

Alex Wei, 2020-8-23

Preview

  • Low Resolution: 256x256

Transformation Preview

  • High Resolution: 1024x1024

Transformation Preview

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

Code accompanying CVPR'20 paper of the same title. Paper link: https://arxiv.org/pdf/2003.03808.pdf

NOTE

We have noticed a lot of concern that PULSE will be used to identify individuals whose faces have been blurred out. We want to emphasize that this is impossible: PULSE makes imaginary faces of people who do not exist, and these should not be confused with real people. It will not help identify or reconstruct the original image.

We also want to address concerns of bias in PULSE. We have now included a new section in the paper and an accompanying model card directly addressing this bias.


Transformation Previews


What does it do?

Given a low-resolution input image, PULSE searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly.

Transformation Preview
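The core idea can be sketched in a few lines of PyTorch: starting from a random latent, repeatedly generate an image, downscale it, and nudge the latent so the downscaled output matches the low-resolution input. The generator below is a stand-in; PULSE itself uses a pretrained StyleGAN and constrains the search to high-density regions of its latent space.

import torch
import torch.nn.functional as F

# Stand-in generator: any latent -> image network; PULSE uses pretrained StyleGAN.
generator = torch.nn.Sequential(
    torch.nn.Linear(512, 3 * 64 * 64),
    torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 64, 64)),
)

lr_image = torch.rand(1, 3, 16, 16)           # the low-resolution input
latent = torch.randn(1, 512, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    hr_guess = generator(latent)              # candidate high-resolution image
    downscaled = F.interpolate(hr_guess, size=16, mode='bicubic', align_corners=False)
    loss = F.mse_loss(downscaled, lr_image)   # "downscales correctly" constraint
    loss.backward()
    optimizer.step()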

Usage

The main file of interest for applying PULSE is run.py. A full list of arguments with descriptions can be found in that file; here we describe those relevant to getting started.

Prereqs

You will need to install cmake first (required for dlib, which is used for face alignment). Currently the code only works with CUDA installed (and therefore requires an appropriate GPU) and has been tested on Linux and Windows. For the full set of required Python packages, create a Conda environment from the provided YAML, e.g.

conda env create -f pulse.yml

or (Anaconda on Windows):

conda env create -n pulse -f pulse.yml
conda activate pulse

In some environments (e.g. on Windows), you may have to edit pulse.yml to remove the version-specific hash on each dependency, and to drop any dependency that still throws an error after running conda env create... (such as readline). For example, change

dependencies:
  - blas=1.0=mkl
  ...

to

dependencies:
  - blas=1.0
  ...

Finally, you will need an internet connection the first time you run the code, as it will automatically download the relevant pretrained models from Google Drive (if they have already been downloaded, the local copies are used). If the public Google Drive is out of capacity, add the files to your own Google Drive instead; then take the file IDs from the share URLs and use them to replace the IDs in the https://drive.google.com/uc?id=ID links in align_face.py and PULSE.py.

Data

By default, input data for run.py should be placed in ./input/ (though this can be changed). This assumes the faces have already been aligned and downscaled. If your data is not already in this form, place it in realpics and run align_face.py, which will do both automatically. (Again, all directories can be changed via command-line arguments if that is more convenient.) At this stage you will pick a downscaling factor.

Note that if your data is already low resolution, downscaling it further will retain very little information. In this case, you may wish to bicubically upsample (usually to 1024x1024) and allow align_face.py to do the downscaling for you.
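To make the downscaling factor concrete, a tiny hedged example: with a factor of 32 (an illustrative choice, not a default of this repository), a 1024x1024 aligned face becomes the 32x32 low-resolution input that PULSE will try to match.

import torch
import torch.nn.functional as F

factor = 32                               # illustrative; use the factor you chose for align_face.py
hr = torch.rand(1, 3, 1024, 1024)         # stand-in for an aligned 1024x1024 face tensor
lr = F.interpolate(hr, scale_factor=1 / factor, mode='bicubic', align_corners=False)
print(lr.shape)                           # torch.Size([1, 3, 32, 32])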

Applying PULSE

Once your data is appropriately formatted, all you need to do is

python run.py

Enjoy!
