2x_bndl_animefilm_v1_DAT2
Been sitting on this model for a while, as I had hoped to keep tweaking the dataset and make further adjustments, but the rate at which I've been training models has slowed down significantly, so I decided to release it. It performs very well on some sources, with very strong color debleed and natural restoration. It's best used as a teacher model on any series it works well on. I recommend scaling your input to a height of 540 pixels (or in that ballpark) with bicubic, as that's what all LRs were resized to.
How do I distill this model into a smaller architecture?
There's a variety of ways you could go about this, but this guide outlines one way I would personally do it.
Dataset Building
Prerequisites
Ensure you have the following tools installed: ffmpeg, dupeguru, PepeDP, chaiNNer, and traiNNer-redux.
Note: PepeDP has an alternative method for frame extraction and deduplication that is worth looking into. I personally haven't used it much, which is why this guide isn't built around that method.
1. Frame Extraction
Use ffmpeg to extract frames from your source video. The command you use will depend on your video's resolution.
If your source is already around 720x540:
```
ffmpeg -i your_input.mkv -vf fps=1 yourfolder/output_%04d.png
```

If your source has the correct aspect ratio but the wrong resolution:

```
ffmpeg -i your_input.mkv -vf fps=1,scale=-1:540 -sws_flags bicubic yourfolder/output_%04d.png
```
If your frames come out in a different aspect ratio than in normal playback, I'd recommend taking a screenshot in something like MPV to gauge the true resolution and adjusting scale= accordingly. This model was trained on frames resized to a height of 540, though it's not uncommon to find releases cropped to something like 708x531, which would work as well.
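If you're extracting from a whole season, a short script keeps the commands consistent across episodes. A minimal sketch using Python's subprocess (the folder layout and `.mkv` glob are placeholder assumptions; the fps and scale values mirror the commands above):

```python
import subprocess
from pathlib import Path

def extraction_cmd(src, out_dir, height=540, fps=1):
    """Build the ffmpeg argv for one episode; height=None skips rescaling."""
    vf = f"fps={fps}" if height is None else f"fps={fps},scale=-1:{height}"
    return [
        "ffmpeg", "-i", str(src),
        "-vf", vf, "-sws_flags", "bicubic",
        str(out_dir / f"{src.stem}_%04d.png"),
    ]

def extract_all(episodes_dir, out_dir):
    """Run the extraction command over every .mkv in a folder."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path(episodes_dir).glob("*.mkv")):
        subprocess.run(extraction_cmd(src, Path(out_dir)), check=True)
```

Prefixing each frame with the episode's stem (`src.stem`) also avoids filename collisions when everything lands in one output folder.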
2. Deduplication
Your extracted frames will contain many duplicates. Use dupeguru to remove them.
The UI is pretty straightforward; I'd recommend a filter strength of 40% (under More Options).
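dupeguru works on perceptual similarity rather than exact file matches. If you'd rather script this step, the core idea is a perceptual hash: shrink each frame to a small grayscale grid, set one bit per pixel depending on whether it's brighter than the mean, and drop frames whose hashes are within a small Hamming distance of one already kept. A pure-Python sketch of that idea on pixel grids (not dupeguru's actual algorithm; real use would decode frames with a library like Pillow and hash 8x8 downscales):

```python
def average_hash(pixels):
    """Average-hash: one bit per pixel, set when brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def dedupe(frames, max_dist=5):
    """Keep a frame only if its hash differs enough from all kept frames."""
    kept = []
    for name, px in sorted(frames.items()):
        h = average_hash(px)
        if all(hamming(h, kh) > max_dist for _, kh in kept):
            kept.append((name, h))
    return [name for name, _ in kept]
```

The `max_dist` threshold plays the same role as dupeguru's filter strength: higher values discard more near-duplicates.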
3. Smart Cropping
Finally, we'll use the BestTile script from PepeDP to crop the most information-dense parts of your frames. This will speed up the creation of synthetic high-resolution images down the line and improve training effectiveness.
- Create a new `.py` file using the example script from the PepeDP Wiki.
- Modify the script to point to your input and output folders. Remember to escape backslashes in your paths (e.g., `G:\\training\\dataset`).
- Set the tile size to `256` and run.
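The idea behind this kind of smart cropping is simple enough to sketch: slide a tile window over the frame and keep the tile whose contents carry the most information, here scored by pixel variance (an illustrative stand-in on grayscale grids; BestTile's actual scoring may differ):

```python
def tile_variance(pixels, top, left, size):
    """Pixel variance of the size x size tile at (top, left)."""
    vals = [pixels[r][c]
            for r in range(top, top + size)
            for c in range(left, left + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def best_tile(pixels, size, stride=None):
    """Return (top, left) of the highest-variance tile in the frame."""
    stride = stride or size
    h, w = len(pixels), len(pixels[0])
    candidates = [(r, c)
                  for r in range(0, h - size + 1, stride)
                  for c in range(0, w - size + 1, stride)]
    return max(candidates, key=lambda rc: tile_variance(pixels, rc[0], rc[1], size))
```

A flat region (sky, solid background) scores near zero variance, so detailed areas like linework and faces win out, which is exactly what you want in a training crop.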
4. Creating the Synthetic HRs
Now, with your folder of 256x256 LRs, run batch inference with the model over the folder in chaiNNer and save the results to a new folder; these are your HRs.
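Before moving on to training, it's worth sanity-checking that every LR has a matching HR (batch inference occasionally skips or fails on individual files). A small sketch, assuming paired folders share filenames:

```python
from pathlib import Path

def check_pairs(lr_dir, hr_dir, ext=".png"):
    """Return (LR names with no matching HR, HR names with no matching LR)."""
    lr = {p.stem for p in Path(lr_dir).glob(f"*{ext}")}
    hr = {p.stem for p in Path(hr_dir).glob(f"*{ext}")}
    return lr - hr, hr - lr
```

If either set comes back non-empty, re-run inference on the missing files (or delete the orphans) before training, since mismatched pairs will break dataset loading.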
Training
Thankfully, guides for training already exist. For the next steps, please refer to the traiNNer-redux wiki.
Troubleshooting
General Python Issues: For help with installing packages or basic Python troubleshooting, LLMs like GPT, Claude, or Gemini are excellent tools. This guide assumes a working knowledge of Python environments.
Training-Specific Issues: If you run into problems during training, I'd recommend searching the message history in the Enhance Everything Discord server. If you can't find a solution, feel free to ask a question in the #training channel.