DeepDigitalFilm
DigitalFilm: Use a neural network to simulate film style.
"DigitalFilm" Digital Film
Use a neural network to simulate film style.
Explore the documentation of this project »
View the demo · Report a bug · Propose a new feature
This README.md is for developers and users. (简体中文)
Table of Contents
- Sample
- Run Demo
- Training the model
- Installation steps
- Overall architecture
- Dataset
- File directory description
- Version Control
- Author
- Copyright
Sample
Sample conversion results (before/after images) are shown in the project repository.
Run Demo
The width and height of the input photo must be divisible by 32 (a crop sketch follows the option list below).
python digitalFilm.py [-v/-h/-g] -i <input> -o <output> -m <model>
- -v print version information
- -h print help information
- -g graphical image selection
- -i input image directory
- -o output image directory
- -m model directory
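Because of the divisible-by-32 requirement, a photo may need to be cropped first. A minimal sketch using Pillow; the helper name and file names are illustrative, not part of this project:

```python
from PIL import Image

def crop_to_multiple_of_32(src: str, dst: str) -> None:
    """Center-crop an image so its width and height are divisible by 32."""
    img = Image.open(src)
    w, h = img.size
    new_w, new_h = w - w % 32, h - h % 32          # largest multiples of 32 that fit
    left, top = (w - new_w) // 2, (h - new_h) // 2  # center the crop box
    img.crop((left, top, left + new_w, top + new_h)).save(dst)

crop_to_multiple_of_32("photo.jpg", "photo_32.jpg")
```

A run could then look like `python digitalFilm.py -i photo_32.jpg -o film.jpg -m model.pth`, where all paths are placeholders.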
Training the model
To train the model, use cyclegan.ipynb directly, but you need to download the pre-trained ResNet18 model in advance. Prepare the digital photos and the film photos in two separate folders. Trained models are included in the Releases.
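For example, the pre-trained ResNet18 weights can be fetched with torchvision (a sketch; the project may load them differently, and the dataset folder names in the comments are assumptions, not fixed by the project):

```python
import torchvision.models as models

# Download the ImageNet-pretrained ResNet18 weights (torchvision caches them locally).
resnet18 = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Assumed layout for the two training folders (names are illustrative):
#   dataset/digital/  digital photos (domain A)
#   dataset/film/     film photos (domain B)
```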
Installation steps
git clone https://github.com/SongZihui-sudo/digitalFilm.git
It is best to first create a conda environment and then install the dependencies.
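For example (the environment name and Python version are arbitrary choices, not project requirements):

conda create -n digitalFilm python=3.10
conda activate digitalFilm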
pip install -r requirement.txt
Overall architecture
Converting digital photos to film style can be regarded as an image style transfer task, so the overall architecture adopts the CycleGAN network (see pytorch-CycleGAN-and-pix2pix). In addition, large paired collections of digital and film-style photos are difficult to obtain, so an unsupervised approach is adopted and the model is trained on unpaired data, as sketched below.
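The key ingredient is CycleGAN's cycle-consistency loss, which lets unpaired photos supervise each other. This is a generic sketch of the technique, not the project's exact code; the generator names G_d2f / G_f2d and the weight lam are assumptions:

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_d2f: nn.Module, G_f2d: nn.Module,
                           digital: torch.Tensor, film: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """Cycle-consistency term that makes unpaired training possible."""
    # Translate each batch to the other domain and back again.
    rec_digital = G_f2d(G_d2f(digital))  # digital -> film -> digital
    rec_film = G_d2f(G_f2d(film))        # film -> digital -> film
    # Each photo only has to reconstruct itself, so no paired data is required.
    return lam * (l1(rec_digital, digital) + l1(rec_film, film))
```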
Dataset
The dataset consists of dual-source image data: the main part is high-quality digital photos taken with a Xiaomi 13 Ultra phone, and the rest is selected from a professional HDR image dataset. The film samples are collected from the Internet.
File directory description
- DigitalFilm.ipynb is used to train the model
- app is a demo
- digitalFilm.py is the command-line conversion tool (see Run Demo)
- mynet.py
- mynet2.py
Version Control
This project uses Git for version management. You can view the available versions in the repository.
Author
SongZihui-sudo: 151122876@qq.com
Zhihu: Dr.who · QQ: 1751122876
*You can also view all the developers involved in the project in the list of contributors.*
Copyright
This project is licensed under GPLv3. For details, please refer to LICENSE.txt.


