file_path stringlengths 29 100 | content stringlengths 0 26.3k |
|---|---|
ManimML_helblazer811/LICENSE.md | MIT License
Copyright (c) 2022 Alec Helbling
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
ManimML_helblazer811/setup.py | from setuptools import setup, find_packages
setup(
name="manim_ml",
version="0.0.17",
description=("Machine Learning Animations in python using Manim."),
packages=find_packages(),
)
|
ManimML_helblazer811/Readme.md | # ManimML
<a href="https://github.com/helblazer811/ManimMachineLearning">
<img src="assets/readme/ManimMLLogo.gif">
</a>
[](https://github.com/helblazer811/ManimMachineLearning/blob/main/LICENSE.md)
[](https://img.shields.io/github/v/release/helblazer811/ManimMachineLearning)
[](https://pepy.tech/project/manim-ml)
ManimML is a project focused on providing animations and visualizations of common machine learning concepts with the [Manim Community Library](https://www.manim.community/). Please check out [our paper](https://arxiv.org/abs/2306.17108). We want this project to be a compilation of primitive visualizations that can be easily combined to create videos about complex machine learning concepts. Additionally, we want to provide a set of abstractions which allow users to focus on explanations instead of software engineering.
*A sneak peak ...*
<img src="assets/readme/convolutional_neural_network.gif">
## Table of Contents
- [ManimML](#manimml)
- [Table of Contents](#table-of-contents)
- [Getting Started](#getting-started)
- [Installation](#installation)
- [First Neural Network](#first-neural-network)
- [Guide](#guide)
- [Setting Up a Scene](#setting-up-a-scene)
- [A Simple Feed Forward Network](#a-simple-feed-forward-network)
- [Animating the Forward Pass](#animating-the-forward-pass)
- [Convolutional Neural Networks](#convolutional-neural-networks)
- [Convolutional Neural Network with an Image](#convolutional-neural-network-with-an-image)
- [Max Pooling](#max-pooling)
- [Activation Functions](#activation-functions)
- [More Complex Animations: Neural Network Dropout](#more-complex-animations-neural-network-dropout)
- [Citation](#citation)
## Getting Started
### Installation
First you will want to [install manim](https://docs.manim.community/en/stable/installation.html). Make sure it is the Manim Community edition, and not the original 3Blue1Brown Manim version.
Then install the package form source or
`pip install manim_ml`. Note: some recent features may only available if you install from source.
### First Neural Network
This is a visualization of a Convolutional Neural Network. The code needed to generate this visualization is shown below.
```python
from manim import *
from manim_ml.neural_network import Convolutional2DLayer, FeedForwardLayer, NeuralNetwork
# This changes the resolution of our rendered videos
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
# Here we define our basic scene
class BasicScene(ThreeDScene):
# The code for generating our scene goes here
def construct(self):
# Make the neural network
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
# Play animation
self.play(forward_pass)
```
You can generate the above video by copying the above code into a file called `example.py` and running the following in your command line (assuming everything is installed properly):
```bash
$ manim -pql example.py
```
The above generates a low resolution rendering, you can improve the resolution (at the cost of slowing down rendering speed) by running:
```bash
$ manim -pqh example.py
```
<img src="assets/readme/convolutional_neural_network.gif">
## Guide
This is a more in depth guide showing how to use various features of ManimML (Note: ManimML is still under development so some features may change, and documentation is lacking).
### Setting Up a Scene
In Manim all of your visualizations and animations belong inside of a `Scene`. You can make a scene by extending the `Scene` class or the `ThreeDScene` class if your animation has 3D content (as does our example). Add the following code to a python module called `example.py`.
```python
from manim import *
# Import modules here
class BasicScene(ThreeDScene):
def construct(self):
# Your code goes here
text = Text("Your first scene!")
self.add(text)
```
In order to render the scene we will run the following in the command line:
```bash
$ manim -pq -l example.py
```
<img src="assets/readme/setting_up_a_scene.png">
This will generate an image file in low quality (use `-h` for high quality).
For the rest of the tutorial the code snippets will need to be copied into the body of the `construct` function.
### A Simple Feed Forward Network
With ManimML we can easily visualize a simple feed forward neural network.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
nn = NeuralNetwork([
FeedForwardLayer(num_nodes=3),
FeedForwardLayer(num_nodes=5),
FeedForwardLayer(num_nodes=3)
])
self.add(nn)
```
In the above code we create a `NeuralNetwork` object and pass a list of layers to it. For each feed forward layer we specify the number of nodes. ManimML will automatically piece together the individual layers into a single neural network. We call `self.add(nn)` in the body of the scene's `construct` method in order to add the neural network to the scene.
The majority of ManimML neural network objects and functions can be imported directly from `manim_ml.neural_network`.
We can now render a still frame image of the scene by running:
```bash
$ manim -pql example.py
```
<img src="assets/readme/a_simple_feed_forward_neural_network.png">
### Animating the Forward Pass
We can automatically render the forward pass of a neural network by creating the animation with the `neural_network.make_forward_pass_animation` method and play the animation in our scene with `self.play(animation)`.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
# Make the neural network
nn = NeuralNetwork([
FeedForwardLayer(num_nodes=3),
FeedForwardLayer(num_nodes=5),
FeedForwardLayer(num_nodes=3)
])
self.add(nn)
# Make the animation
forward_pass_animation = nn.make_forward_pass_animation()
# Play the animation
self.play(forward_pass_animation)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/animating_the_forward_pass.gif">
### Convolutional Neural Networks
ManimML supports visualizations of Convolutional Neural Networks. You can specify the number of feature maps, feature map size, and filter size as follows `Convolutional2DLayer(num_feature_maps, feature_map_size, filter_size)`. There are a number of other style parameters that we can change as well(documentation coming soon).
Here is a multi-layer convolutional neural network. If you are unfamiliar with convolutional networks [this overview](https://cs231n.github.io/convolutional-networks/) is a great resource. Additionally, [CNN Explainer](https://poloclub.github.io/cnn-explainer/) is a great interactive tool for understanding CNNs, all in the browser.
When specifying CNNs it is important for the feature map sizes and filter dimensions of adjacent layers match up.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32), # Note the default stride is 1.
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/convolutional_neural_network.gif">
And there we have it, a convolutional neural network.
### Convolutional Neural Network with an Image
We can also animate an image being fed into a convolutional neural network by specifiying an `ImageLayer` before the first convolutional layer.
```python
import numpy as np
from PIL import Image
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer, ImageLayer
image = Image.open("digit.jpeg") # You will need to download an image of a digit.
numpy_image = np.asarray(image)
nn = NeuralNetwork([
ImageLayer(numpy_image, height=1.5),
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32), # Note the default stride is 1.
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/convolutional_neural_network_with_an_image.gif">
### Max Pooling
A common operation in deep learning is the 2D Max Pooling operation, which reduces the size of convolutional feature maps. We can visualize max pooling with the `MaxPooling2DLayer`.
```python
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, MaxPooling2DLayer
# Make neural network
nn = NeuralNetwork([
Convolutional2DLayer(1, 8),
Convolutional2DLayer(3, 6, 3),
MaxPooling2DLayer(kernel_size=2),
Convolutional2DLayer(5, 2, 2),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.wait(1)
self.play(forward_pass)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/max_pooling.gif">
### Activation Functions
Activation functions apply non-linarities to the outputs of neural networks. They have different shapes, and it is useful to be able to visualize the functions. I added the ability to visualize activation functions over `FeedForwardLayer` and `Convolutional2DLayer` by passing an argument as follows:
```python
layer = FeedForwardLayer(num_nodes=3, activation_function="ReLU")
```
We can add these to larger neural network as follows:
```python
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, FeedForwardLayer
# Make nn
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32, activation_function="ReLU"),
FeedForwardLayer(3, activation_function="Sigmoid"),
],
layer_spacing=0.25,
)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/activation_functions.gif">
### More Complex Animations: Neural Network Dropout
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
from manim_ml.neural_network.animations.dropout import make_neural_network_dropout_animation
# Make nn
nn = NeuralNetwork([
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(4),
],
layer_spacing=0.4,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
self.play(
make_neural_network_dropout_animation(
nn, dropout_rate=0.25, do_forward_pass=True
)
)
self.wait(1)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/dropout.gif">
## Citation
If you found ManimML useful please cite it below!
```
@misc{helbling2023manimml,
title={ManimML: Communicating Machine Learning Architectures with Animation},
author={Alec Helbling and Duen Horng and Chau},
year={2023},
eprint={2306.17108},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
ManimML_helblazer811/.github/FUNDING.yml | # These are supported funding model platforms
github: [helblazer811] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
|
ManimML_helblazer811/.github/workflows/black.yml | name: Lint
on: [push, pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: psf/black@stable
|
ManimML_helblazer811/manim_ml/__init__.py | from argparse import Namespace
from manim import *
import manim
from manim_ml.utils.colorschemes.colorschemes import light_mode, dark_mode, ColorScheme
class ManimMLConfig:
def __init__(self, default_color_scheme=dark_mode):
self._color_scheme = default_color_scheme
self.three_d_config = Namespace(
three_d_x_rotation = 90 * DEGREES,
three_d_y_rotation = 0 * DEGREES,
rotation_angle = 75 * DEGREES,
rotation_axis = [0.02, 1.0, 0.0]
# rotation_axis = [0.0, 0.9, 0.0]
#rotation_axis = [0.0, 0.9, 0.0]
)
@property
def color_scheme(self):
return self._color_scheme
@color_scheme.setter
def color_scheme(self, value):
if isinstance(value, str):
if value == "dark_mode":
self._color_scheme = dark_mode
elif value == "light_mode":
self._color_scheme = light_mode
else:
raise ValueError(
"Color scheme must be either 'dark_mode' or 'light_mode'"
)
elif isinstance(value, ColorScheme):
self._color_scheme = value
manim.config.background_color = self.color_scheme.background_color
# These are accesible from the manim_ml namespace
config = ManimMLConfig() |
ManimML_helblazer811/manim_ml/scene.py | from manim import *
class ManimML3DScene(ThreeDScene):
"""
This is a wrapper class for the Manim ThreeDScene
Note: the primary purpose of this is to make it so
that everything inside of a layer
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def play(self):
""" """
pass
|
ManimML_helblazer811/manim_ml/diffusion/mcmc.py | """
Tool for animating Markov Chain Monte Carlo simulations in 2D.
"""
from manim import *
import matplotlib
import matplotlib.pyplot as plt
from manim_ml.utils.mobjects.plotting import convert_matplotlib_figure_to_image_mobject
import numpy as np
import scipy
import scipy.stats
from tqdm import tqdm
import seaborn as sns
from manim_ml.utils.mobjects.probability import GaussianDistribution
######################## MCMC Algorithms #########################
def gaussian_proposal(x, sigma=0.3):
"""
Gaussian proposal distribution.
Draw new parameters from Gaussian distribution with
mean at current position and standard deviation sigma.
Since the mean is the current position and the standard
deviation is fixed. This proposal is symmetric so the ratio
of proposal densities is 1.
Parameters
----------
x : np.ndarray or list
point to center proposal around
sigma : float, optional
standard deviation of gaussian for proposal, by default 0.1
Returns
-------
np.ndarray
propossed point
"""
# Draw x_star
x_star = x + np.random.randn(len(x)) * sigma
# proposal ratio factor is 1 since jump is symmetric
qxx = 1
return (x_star, qxx)
class MultidimensionalGaussianPosterior:
"""
N-Dimensional Gaussian distribution with
mu ~ Normal(0, 10)
var ~ LogNormal(0, 1.5)
Prior on mean is U(-500, 500)
"""
def __init__(self, ndim=2, seed=12345, scale=3, mu=None, var=None):
"""_summary_
Parameters
----------
ndim : int, optional
_description_, by default 2
seed : int, optional
_description_, by default 12345
scale : int, optional
_description_, by default 10
"""
np.random.seed(seed)
self.scale = scale
if var is None:
self.var = 10 ** (np.random.randn(ndim) * 1.5)
else:
self.var = var
if mu is None:
self.mu = scipy.stats.norm(loc=0, scale=self.scale).rvs(ndim)
else:
self.mu = mu
def __call__(self, x):
"""
Call multivariate normal posterior.
"""
if np.all(x < 500) and np.all(x > -500):
return scipy.stats.multivariate_normal(mean=self.mu, cov=self.var).logpdf(x)
else:
return -1e6
def metropolis_hastings_sampler(
log_prob_fn=MultidimensionalGaussianPosterior(),
prop_fn=gaussian_proposal,
initial_location: np.ndarray = np.array([0, 0]),
iterations=25,
warm_up=0,
ndim=2,
sampling_seed=1
):
"""Samples using a Metropolis-Hastings sampler.
Parameters
----------
log_prob_fn : function, optional
Function to compute log-posterior, by default MultidimensionalGaussianPosterior
prop_fn : function, optional
Function to compute proposal location, by default gaussian_proposal
initial_location : np.ndarray, optional
initial location for the chain
iterations : int, optional
number of iterations of the markov chain, by default 100
warm_up : int, optional,
number of warm up iterations
Returns
-------
samples : np.ndarray
numpy array of 2D samples of length `iterations`
warm_up_samples : np.ndarray
numpy array of 2D warm up samples of length `warm_up`
candidate_samples: np.ndarray
numpy array of the candidate samples for each time step
"""
np.random.seed(sampling_seed)
# initialize chain, acceptance rate and lnprob
chain = np.zeros((iterations, ndim))
proposals = np.zeros((iterations, ndim))
lnprob = np.zeros(iterations)
accept_rate = np.zeros(iterations)
# first samples
chain[0] = initial_location
proposals[0] = initial_location
lnprob0 = log_prob_fn(initial_location)
lnprob[0] = lnprob0
# start loop
x0 = initial_location
naccept = 0
for ii in range(1, iterations):
# propose
x_star, factor = prop_fn(x0)
# draw random uniform number
u = np.random.uniform(0, 1)
# compute hastings ratio
lnprob_star = log_prob_fn(x_star)
H = np.exp(lnprob_star - lnprob0) * factor
# accept/reject step (update acceptance counter)
if u < H:
x0 = x_star
lnprob0 = lnprob_star
naccept += 1
# update chain
chain[ii] = x0
proposals[ii] = x_star
lnprob[ii] = lnprob0
accept_rate[ii] = naccept / ii
return chain, np.array([]), proposals
#################### MCMC Visualization Tools ######################
def make_dist_image_mobject_from_samples(samples, ylim, xlim):
# Make the plot
matplotlib.use('Agg')
plt.figure(figsize=(10,10), dpi=100)
print(np.shape(samples[:, 0]))
displot = sns.displot(
x=samples[:, 0],
y=samples[:, 1],
cmap="Reds",
kind="kde",
norm=matplotlib.colors.LogNorm()
)
plt.ylim(ylim[0], ylim[1])
plt.xlim(xlim[0], xlim[1])
plt.axis('off')
fig = displot.fig
image_mobject = convert_matplotlib_figure_to_image_mobject(fig)
return image_mobject
class Uncreate(Create):
def __init__(
self,
mobject,
reverse_rate_function: bool = True,
introducer: bool = True,
remover: bool = True,
**kwargs,
) -> None:
super().__init__(
mobject,
reverse_rate_function=reverse_rate_function,
introducer=introducer,
remover=remover,
**kwargs,
)
class MCMCAxes(Group):
"""Container object for visualizing MCMC on a 2D axis"""
def __init__(
self,
dot_color=BLUE,
dot_radius=0.02,
accept_line_color=GREEN,
reject_line_color=RED,
line_color=BLUE,
line_stroke_width=2,
x_range=[-3, 3],
y_range=[-3, 3],
x_length=5,
y_length=5
):
super().__init__()
self.dot_color = dot_color
self.dot_radius = dot_radius
self.accept_line_color = accept_line_color
self.reject_line_color = reject_line_color
self.line_color = line_color
self.line_stroke_width = line_stroke_width
# Make the axes
self.x_length = x_length
self.y_length = y_length
self.x_range = x_range
self.y_range = y_range
self.axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=x_length,
y_length=y_length,
x_axis_config={"stroke_opacity": 0.0},
y_axis_config={"stroke_opacity": 0.0},
tips=False,
)
self.add(self.axes)
@override_animation(Create)
def _create_override(self, **kwargs):
"""Overrides Create animation"""
return AnimationGroup(Create(self.axes))
def visualize_gaussian_proposal_about_point(self, mean, cov=None) -> AnimationGroup:
"""Creates a Gaussian distribution about a certain point
Parameters
----------
mean : np.ndarray
mean of proposal distribution
cov : np.ndarray
covariance matrix of proposal distribution
Returns
-------
AnimationGroup
animation of creating the proposal Gaussian distribution
"""
gaussian = GaussianDistribution(
axes=self.axes, mean=mean, cov=cov, dist_theme="gaussian"
)
create_guassian = Create(gaussian)
return create_guassian
def make_transition_animation(
self,
start_point,
end_point,
candidate_point,
show_dots=True,
run_time=0.1
) -> AnimationGroup:
"""Makes an transition animation for a single point on a Markov Chain
Parameters
----------
start_point: Dot
Start point of the transition
end_point : Dot
End point of the transition
show_dots: boolean, optional
Whether or not to show the dots
Returns
-------
AnimationGroup
Animation of the transition from start to end
"""
start_location = self.axes.point_to_coords(start_point.get_center())
end_location = self.axes.point_to_coords(end_point.get_center())
candidate_location = self.axes.point_to_coords(candidate_point.get_center())
# Figure out if a point is accepted or rejected
# point_is_rejected = not candidate_location == end_location
point_is_rejected = False
if point_is_rejected:
return AnimationGroup(), Dot().set_opacity(0.0)
else:
create_end_point = Create(end_point)
line = Line(
start_point,
end_point,
color=self.line_color,
stroke_width=self.line_stroke_width,
buff=-0.1
)
create_line = Create(line)
if show_dots:
return AnimationGroup(
create_end_point,
create_line,
lag_ratio=1.0,
run_time=run_time
), line
else:
return AnimationGroup(
create_line,
lag_ratio=1.0,
run_time=run_time
), line
def show_ground_truth_gaussian(self, distribution):
""" """
mean = distribution.mu
var = np.eye(2) * distribution.var
distribution_drawing = GaussianDistribution(
self.axes, mean, var, dist_theme="gaussian"
).set_opacity(0.2)
return AnimationGroup(Create(distribution_drawing))
def visualize_metropolis_hastings_chain_sampling(
self,
log_prob_fn=MultidimensionalGaussianPosterior(),
prop_fn=gaussian_proposal,
show_dots=False,
true_samples=None,
sampling_kwargs={},
):
"""
Makes an animation for visualizing a 2D markov chain using
metropolis hastings samplings
Parameters
----------
axes : manim.mobject.graphing.coordinate_systems.Axes
Manim 2D axes to plot the chain on
log_prob_fn : function, optional
Function to compute log-posterior, by default MultidmensionalGaussianPosterior
prop_fn : function, optional
Function to compute proposal location, by default gaussian_proposal
initial_location : list, optional
initial location for the markov chain, by default None
show_dots : bool, optional
whether or not to show the dots on the screen, by default False
iterations : int, optional
number of iterations of the markov chain, by default 100
Returns
-------
animation : AnimationGroup
animation for creating the markov chain
"""
# Compute the chain samples using a Metropolis Hastings Sampler
mcmc_samples, warm_up_samples, candidate_samples = metropolis_hastings_sampler(
log_prob_fn=log_prob_fn,
prop_fn=prop_fn,
**sampling_kwargs
)
# print(f"MCMC samples: {mcmc_samples}")
# print(f"Candidate samples: {candidate_samples}")
# Make the animation for visualizing the chain
transition_animations = []
# Place the initial point
current_point = mcmc_samples[0]
current_point = Dot(
self.axes.coords_to_point(current_point[0], current_point[1]),
color=self.dot_color,
radius=self.dot_radius,
)
create_initial_point = Create(current_point)
transition_animations.append(create_initial_point)
# Show the initial point's proposal distribution
# NOTE: visualize the warm up and the iterations
lines = []
warmup_points = []
num_iterations = len(mcmc_samples) + len(warm_up_samples)
for iteration in tqdm(range(1, num_iterations)):
next_sample = mcmc_samples[iteration]
# print(f"Next sample: {next_sample}")
candidate_sample = candidate_samples[iteration - 1]
# Make the next point
next_point = Dot(
self.axes.coords_to_point(
next_sample[0],
next_sample[1]
),
color=self.dot_color,
radius=self.dot_radius,
)
candidate_point = Dot(
self.axes.coords_to_point(
candidate_sample[0],
candidate_sample[1]
),
color=self.dot_color,
radius=self.dot_radius,
)
# Make a transition animation
transition_animation, line = self.make_transition_animation(
current_point, next_point, candidate_point
)
# Save assets
lines.append(line)
if iteration < len(warm_up_samples):
warmup_points.append(candidate_point)
# Add the transition animation
transition_animations.append(transition_animation)
# Setup for next iteration
current_point = next_point
# Overall MCMC animation
# 1. Fade in the distribution
image_mobject = make_dist_image_mobject_from_samples(
true_samples,
xlim=(self.x_range[0], self.x_range[1]),
ylim=(self.y_range[0], self.y_range[1])
)
image_mobject.scale_to_fit_height(
self.y_length
)
image_mobject.move_to(self.axes)
fade_in_distribution = FadeIn(
image_mobject,
run_time=0.5
)
# 2. Start sampling the chain
chain_sampling_animation = AnimationGroup(
*transition_animations,
lag_ratio=1.0,
run_time=5.0
)
# 3. Convert the chain to points, excluding the warmup
lines = VGroup(*lines)
warm_up_points = VGroup(*warmup_points)
fade_out_lines_and_warmup = AnimationGroup(
Uncreate(lines),
Uncreate(warm_up_points),
lag_ratio=0.0
)
# Make the final animation
animation_group = Succession(
fade_in_distribution,
chain_sampling_animation,
fade_out_lines_and_warmup,
lag_ratio=1.0
)
return animation_group
|
ManimML_helblazer811/manim_ml/utils/__init__.py | |
ManimML_helblazer811/manim_ml/utils/colorschemes/__init__.py | from manim_ml.utils.colorschemes.colorschemes import light_mode, dark_mode |
ManimML_helblazer811/manim_ml/utils/colorschemes/colorschemes.py | from manim import *
from dataclasses import dataclass
@dataclass
class ColorScheme:
primary_color: str
secondary_color: str
active_color: str
text_color: str
background_color: str
dark_mode = ColorScheme(
primary_color=BLUE,
secondary_color=WHITE,
active_color=ORANGE,
text_color=WHITE,
background_color=BLACK
)
light_mode = ColorScheme(
primary_color=BLUE,
secondary_color=BLACK,
active_color=ORANGE,
text_color=BLACK,
background_color=WHITE
)
|
ManimML_helblazer811/manim_ml/utils/mobjects/__init__.py | |
ManimML_helblazer811/manim_ml/utils/mobjects/connections.py | import numpy as np
from manim import *
class NetworkConnection(VGroup):
"""
This class allows for creating connections
between locations in a network
"""
direction_vector_map = {"up": UP, "down": DOWN, "left": LEFT, "right": RIGHT}
def __init__(
self,
start_mobject,
end_mobject,
arc_direction="straight",
buffer=0.0,
arc_distance=0.2,
stroke_width=2.0,
color=WHITE,
active_color=ORANGE,
):
"""Creates an arrow with right angles in it connecting
two mobjects.
Parameters
----------
start_mobject : Mobject
Mobject where the start of the connection is from
end_mobject : Mobject
Mobject where the end of the connection goes to
arc_direction : str, optional
direction that the connection arcs, by default "straight"
buffer : float, optional
amount of space between the connection and mobjects at the end
arc_distance : float, optional
Distance from start and end mobject that the arc bends
stroke_width : float, optional
Stroke width of the connection
color : [float], optional
Color of the connection
active_color : [float], optional
Color of active animations for this mobject
"""
super().__init__()
assert arc_direction in ["straight", "up", "down", "left", "right"]
self.start_mobject = start_mobject
self.end_mobject = end_mobject
self.arc_direction = arc_direction
self.buffer = buffer
self.arc_distance = arc_distance
self.stroke_width = stroke_width
self.color = color
self.active_color = active_color
self.make_mobjects()
def make_mobjects(self):
"""Makes the submobjects"""
if self.start_mobject.get_center()[0] < self.end_mobject.get_center()[0]:
left_mobject = self.start_mobject
right_mobject = self.end_mobject
else:
right_mobject = self.start_mobject
left_mobject = self.end_mobject
if self.arc_direction == "straight":
# Make an arrow
self.straight_arrow = Arrow(
start=left_mobject.get_right() + np.array([self.buffer, 0.0, 0.0]),
end=right_mobject.get_left() + np.array([-1 * self.buffer, 0.0, 0.0]),
color=WHITE,
fill_color=WHITE,
stroke_opacity=1.0,
buff=0.0,
)
self.add(self.straight_arrow)
else:
# Figure out the direction of the arc
direction_vector = NetworkConnection.direction_vector_map[
self.arc_direction
]
# Based on the position of the start and end layer, and direction
# figure out how large to make each line
# Whichever mobject has a critical point the farthest
# distance in the direction_vector direction we will use that end
left_mobject_critical_point = left_mobject.get_critical_point(direction_vector)
right_mobject_critical_point = right_mobject.get_critical_point(direction_vector)
# Take the dot product of each
# These dot products correspond to the orthogonal projection
# onto the direction vectors
left_dot_product = np.dot(
left_mobject_critical_point,
direction_vector
)
right_dot_product = np.dot(
right_mobject_critical_point,
direction_vector
)
extra_distance = abs(left_dot_product - right_dot_product)
# The difference between the dot products
if left_dot_product < right_dot_product:
right_is_farthest = False
else:
right_is_farthest = True
# Make the start arc piece
start_line_start = left_mobject.get_critical_point(direction_vector)
start_line_start += direction_vector * self.buffer
start_line_end = start_line_start + direction_vector * self.arc_distance
if not right_is_farthest:
start_line_end = start_line_end + direction_vector * extra_distance
self.start_line = Line(
start_line_start,
start_line_end,
color=self.color,
stroke_width=self.stroke_width,
)
# Make the end arc piece with an arrow
end_line_end = right_mobject.get_critical_point(direction_vector)
end_line_end += direction_vector * self.buffer
end_line_start = end_line_end + direction_vector * self.arc_distance
if right_is_farthest:
end_line_start = end_line_start + direction_vector * extra_distance
self.end_arrow = Arrow(
start=end_line_start,
end=end_line_end,
color=WHITE,
fill_color=WHITE,
stroke_opacity=1.0,
buff=0.0,
)
# Make the middle arc piece
self.middle_line = Line(
start_line_end,
end_line_start,
color=self.color,
stroke_width=self.stroke_width,
)
# Add the mobjects
self.add(
self.start_line,
self.middle_line,
self.end_arrow,
)
@override_animation(ShowPassingFlash)
def _override_passing_flash(self, run_time=1.0, time_width=0.2):
"""Passing flash animation"""
if self.arc_direction == "straight":
return ShowPassingFlash(
self.straight_arrow.copy().set_color(self.active_color),
time_width=time_width,
)
else:
# Animate the start line
start_line_animation = ShowPassingFlash(
self.start_line.copy().set_color(self.active_color),
time_width=time_width,
)
# Animate the middle line
middle_line_animation = ShowPassingFlash(
self.middle_line.copy().set_color(self.active_color),
time_width=time_width,
)
# Animate the end line
end_line_animation = ShowPassingFlash(
self.end_arrow.copy().set_color(self.active_color),
time_width=time_width,
)
return AnimationGroup(
start_line_animation,
middle_line_animation,
end_line_animation,
lag_ratio=1.0,
run_time=run_time,
)
|
ManimML_helblazer811/manim_ml/utils/mobjects/image.py | from manim import *
import numpy as np
from PIL import Image
class GrayscaleImageMobject(Group):
"""Mobject for creating images in Manim from numpy arrays"""
def __init__(self, numpy_image, height=2.3):
super().__init__()
self.numpy_image = numpy_image
assert len(np.shape(self.numpy_image)) == 2
input_image = self.numpy_image[None, :, :]
# Convert grayscale to rgb version of grayscale
input_image = np.repeat(input_image, 3, axis=0)
input_image = np.rollaxis(input_image, 0, start=3)
self.image_mobject = ImageMobject(
input_image,
image_mode="RBG",
)
self.add(self.image_mobject)
self.image_mobject.set_resampling_algorithm(
RESAMPLING_ALGORITHMS["nearest"]
)
self.image_mobject.scale_to_fit_height(height)
@classmethod
def from_path(cls, path, height=2.3):
"""Loads image from path"""
image = Image.open(path)
numpy_image = np.asarray(image)
return cls(numpy_image, height=height)
@override_animation(Create)
def create(self, run_time=2):
return FadeIn(self)
def scale(self, scale_factor, **kwargs):
"""Scales the image mobject"""
# super().scale(scale_factor)
# height = self.height
self.image_mobject.scale(scale_factor)
# self.scale_to_fit_height(2)
# self.apply_points_function_about_point(
# lambda points: scale_factor * points, **kwargs
# )
def set_opacity(self, opacity):
"""Set the opacity"""
self.image_mobject.set_opacity(opacity)
class LabeledColorImage(Group):
"""Labeled Color Image"""
def __init__(
self, image, color=RED, label="Positive", stroke_width=5, font_size=24, buff=0.2
):
super().__init__()
self.image = image
self.color = color
self.label = label
self.stroke_width = stroke_width
self.font_size = font_size
text = Text(label, font_size=self.font_size)
text.next_to(self.image, UP, buff=buff)
rectangle = SurroundingRectangle(
self.image, color=color, buff=0.0, stroke_width=self.stroke_width
)
self.add(text)
self.add(rectangle)
self.add(self.image)
|
ManimML_helblazer811/manim_ml/utils/mobjects/list_group.py | from manim import *
class ListGroup(Mobject):
"""Indexable Group with traditional list operations"""
def __init__(self, *layers):
super().__init__()
self.items = [*layers]
def __getitem__(self, indices):
"""Traditional list indexing"""
return self.items[indices]
def insert(self, index, item):
"""Inserts item at index"""
self.items.insert(index, item)
self.submobjects = self.items
def remove_at_index(self, index):
"""Removes item at index"""
if index > len(self.items):
raise Exception(f"ListGroup index out of range: {index}")
item = self.items[index]
del self.items[index]
self.submobjects = self.items
return item
def remove_at_indices(self, indices):
"""Removes items at indices"""
items = []
for index in indices:
item = self.remove_at_index(index)
items.append(item)
return items
def remove(self, item):
"""Removes first instance of item"""
self.items.remove(item)
self.submobjects = self.items
return item
def get(self, index):
"""Gets item at index"""
return self.items[index]
def add(self, item):
"""Adds to end"""
self.items.append(item)
self.submobjects = self.items
def replace(self, index, item):
"""Replaces item at index"""
self.items[index] = item
self.submobjects = self.items
def index_of(self, item):
"""Returns index of item if it exists"""
for index, obj in enumerate(self.items):
if item is obj:
return index
return -1
def __len__(self):
"""Length of items"""
return len(self.items)
def set_z_index(self, z_index_value, family=True):
"""Sets z index of all values in ListGroup"""
for item in self.items:
item.set_z_index(z_index_value, family=True)
def __iter__(self):
self.current_index = -1
return self
def __next__(self): # Python 2: def next(self)
self.current_index += 1
if self.current_index < len(self.items):
return self.items[self.current_index]
raise StopIteration
def __repr__(self):
return f"ListGroup({self.items})"
|
ManimML_helblazer811/manim_ml/utils/mobjects/probability.py | from manim import *
import numpy as np
import math
class GaussianDistribution(VGroup):
"""Object for drawing a Gaussian distribution"""
def __init__(
self, axes, mean=None, cov=None, dist_theme="gaussian", color=ORANGE, **kwargs
):
super(VGroup, self).__init__(**kwargs)
self.axes = axes
self.mean = mean
self.cov = cov
self.dist_theme = dist_theme
self.color = color
if mean is None:
self.mean = np.array([0.0, 0.0])
if cov is None:
self.cov = np.array([[1, 0], [0, 1]])
# Make the Gaussian
if self.dist_theme is "gaussian":
self.ellipses = self.construct_gaussian_distribution(
self.mean, self.cov, color=self.color
)
self.add(self.ellipses)
elif self.dist_theme is "ellipse":
self.ellipses = self.construct_simple_gaussian_ellipse(
self.mean, self.cov, color=self.color
)
self.add(self.ellipses)
else:
raise Exception(f"Uncrecognized distribution theme: {self.dist_theme}")
"""
@override_animation(Create)
def _create_gaussian_distribution(self):
return Create(self)
"""
def compute_covariance_rotation_and_scale(self, covariance):
def eigsorted(cov):
"""
Eigenvalues and eigenvectors of the covariance matrix.
"""
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:, order]
def cov_ellipse(cov, nstd):
"""
Source: http://stackoverflow.com/a/12321306/1391441
"""
vals, vecs = eigsorted(cov)
theta = np.degrees(np.arctan2(*vecs[:, 0][::-1]))
# Width and height are "full" widths, not radius
width, height = 2 * nstd * np.sqrt(vals)
return width, height, theta
width, height, angle = cov_ellipse(covariance, 1)
scale_factor = (
np.abs(self.axes.x_range[0] - self.axes.x_range[1]) / self.axes.x_length
)
width /= scale_factor
height /= scale_factor
return angle, width, height
def construct_gaussian_distribution(
self, mean, covariance, color=ORANGE, num_ellipses=4
):
"""Returns a 2d Gaussian distribution object with given mean and covariance"""
# map mean and covariance to frame coordinates
mean = self.axes.coords_to_point(*mean)
# Figure out the scale and angle of rotation
rotation, width, height = self.compute_covariance_rotation_and_scale(covariance)
# Make covariance ellipses
opacity = 0.0
ellipses = VGroup()
for ellipse_number in range(num_ellipses):
opacity += 1.0 / num_ellipses
ellipse_width = width * (1 - opacity)
ellipse_height = height * (1 - opacity)
ellipse = Ellipse(
width=ellipse_width,
height=ellipse_height,
color=color,
fill_opacity=opacity,
stroke_width=2.0,
)
ellipse.move_to(mean)
ellipse.rotate(rotation)
ellipses.add(ellipse)
return ellipses
def construct_simple_gaussian_ellipse(self, mean, covariance, color=ORANGE):
"""Returns a 2d Gaussian distribution object with given mean and covariance"""
# Map mean and covariance to frame coordinates
mean = self.axes.coords_to_point(*mean)
angle, width, height = self.compute_covariance_rotation_and_scale(covariance)
# Make covariance ellipses
ellipses = VGroup()
opacity = 0.4
ellipse = Ellipse(
width=width,
height=height,
color=color,
fill_opacity=opacity,
stroke_width=1.0,
)
ellipse.move_to(mean)
ellipse.rotate(angle)
ellipses.add(ellipse)
ellipses.set_z_index(3)
return ellipses
|
ManimML_helblazer811/manim_ml/utils/mobjects/plotting.py | from manim import *
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image
import io
def convert_matplotlib_figure_to_image_mobject(fig, dpi=200):
"""Takes a matplotlib figure and makes an image mobject from it
Parameters
----------
fig : matplotlib figure
matplotlib figure
"""
fig.tight_layout(pad=0)
# plt.axis('off')
fig.canvas.draw()
# Save data into a buffer
image_buffer = io.BytesIO()
plt.savefig(image_buffer, format='png', dpi=dpi)
# Reopen in PIL and convert to numpy
image = Image.open(image_buffer)
image = np.array(image)
# Convert it to an image mobject
image_mobject = ImageMobject(image, image_mode="RGB")
return image_mobject |
ManimML_helblazer811/manim_ml/utils/mobjects/gridded_rectangle.py | from manim import *
import numpy as np
class GriddedRectangle(VGroup):
"""Rectangle object with grid lines"""
def __init__(
self,
color=ORANGE,
height=2.0,
width=4.0,
mark_paths_closed=True,
close_new_points=True,
grid_xstep=None,
grid_ystep=None,
grid_stroke_width=0.0, # DEFAULT_STROKE_WIDTH/2,
grid_stroke_color=ORANGE,
grid_stroke_opacity=1.0,
stroke_width=2.0,
fill_opacity=0.2,
show_grid_lines=False,
dotted_lines=False,
**kwargs
):
super().__init__()
# Fields
self.color = color
self.mark_paths_closed = mark_paths_closed
self.close_new_points = close_new_points
self.grid_xstep = grid_xstep
self.grid_ystep = grid_ystep
self.grid_stroke_width = grid_stroke_width
self.grid_stroke_color = grid_stroke_color
self.grid_stroke_opacity = grid_stroke_opacity if show_grid_lines else 0.0
self.stroke_width = stroke_width
self.rotation_angles = [0, 0, 0]
self.show_grid_lines = show_grid_lines
self.untransformed_width = width
self.untransformed_height = height
self.dotted_lines = dotted_lines
# Make rectangle
if self.dotted_lines:
no_border_rectangle = Rectangle(
width=width,
height=height,
color=color,
fill_color=color,
stroke_opacity=0.0,
fill_opacity=fill_opacity,
shade_in_3d=True,
)
self.rectangle = no_border_rectangle
border_rectangle = Rectangle(
width=width,
height=height,
color=color,
fill_color=color,
fill_opacity=fill_opacity,
shade_in_3d=True,
stroke_width=stroke_width,
)
self.dotted_lines = DashedVMobject(
border_rectangle,
num_dashes=int((width + height) / 2) * 20,
)
self.add(self.dotted_lines)
else:
self.rectangle = Rectangle(
width=width,
height=height,
color=color,
stroke_width=stroke_width,
fill_color=color,
fill_opacity=fill_opacity,
shade_in_3d=True,
)
self.add(self.rectangle)
# Make grid lines
grid_lines = self.make_grid_lines()
self.add(grid_lines)
# Make corner rectangles
self.corners_dict = self.make_corners_dict()
self.add(*self.corners_dict.values())
def make_corners_dict(self):
"""Make corners dictionary"""
corners_dict = {
"top_right": Dot(
self.rectangle.get_corner([1, 1, 0]), fill_opacity=0.0, radius=0.0
),
"top_left": Dot(
self.rectangle.get_corner([-1, 1, 0]), fill_opacity=0.0, radius=0.0
),
"bottom_left": Dot(
self.rectangle.get_corner([-1, -1, 0]), fill_opacity=0.0, radius=0.0
),
"bottom_right": Dot(
self.rectangle.get_corner([1, -1, 0]), fill_opacity=0.0, radius=0.0
),
}
return corners_dict
def get_corners_dict(self):
"""Returns a dictionary of the corners"""
# Sort points through clockwise rotation of a vector in the xy plane
return self.corners_dict
def make_grid_lines(self):
"""Make grid lines in rectangle"""
grid_lines = VGroup()
v = self.rectangle.get_vertices()
if self.grid_xstep is not None:
grid_xstep = abs(self.grid_xstep)
count = int(self.width / grid_xstep)
grid = VGroup(
*(
Line(
v[1] + i * grid_xstep * RIGHT,
v[1] + i * grid_xstep * RIGHT + self.height * DOWN,
stroke_color=self.grid_stroke_color,
stroke_width=self.grid_stroke_width,
stroke_opacity=self.grid_stroke_opacity,
shade_in_3d=True,
)
for i in range(1, count)
)
)
grid_lines.add(grid)
if self.grid_ystep is not None:
grid_ystep = abs(self.grid_ystep)
count = int(self.height / grid_ystep)
grid = VGroup(
*(
Line(
v[1] + i * grid_ystep * DOWN,
v[1] + i * grid_ystep * DOWN + self.width * RIGHT,
stroke_color=self.grid_stroke_color,
stroke_width=self.grid_stroke_width,
stroke_opacity=self.grid_stroke_opacity,
)
for i in range(1, count)
)
)
grid_lines.add(grid)
return grid_lines
def get_center(self):
return self.rectangle.get_center()
def get_normal_vector(self):
vertex_1 = self.rectangle.get_vertices()[0]
vertex_2 = self.rectangle.get_vertices()[1]
vertex_3 = self.rectangle.get_vertices()[2]
# First vector
normal_vector = np.cross((vertex_1 - vertex_2), (vertex_1 - vertex_3))
return normal_vector
def set_color(self, color):
"""Sets the color of the gridded rectangle"""
self.color = color
self.rectangle.set_color(color)
self.rectangle.set_stroke_color(color)
|
ManimML_helblazer811/manim_ml/utils/testing/frames_comparison.py | from __future__ import annotations
import functools
import inspect
from pathlib import Path
from typing import Callable
from _pytest.fixtures import FixtureRequest
from manim import Scene
from manim._config import tempconfig
from manim._config.utils import ManimConfig
from manim.camera.three_d_camera import ThreeDCamera
from manim.renderer.cairo_renderer import CairoRenderer
from manim.scene.three_d_scene import ThreeDScene
from manim.utils.testing._frames_testers import _ControlDataWriter, _FramesTester
from manim.utils.testing._test_class_makers import (
DummySceneFileWriter,
_make_scene_file_writer_class,
_make_test_renderer_class,
_make_test_scene_class,
)
SCENE_PARAMETER_NAME = "scene"
_tests_root_dir_path = Path(__file__).absolute().parents[2]
print(f"Tests root path: {_tests_root_dir_path}")
PATH_CONTROL_DATA = _tests_root_dir_path / Path("control_data", "graphical_units_data")
def frames_comparison(
func=None,
*,
last_frame: bool = True,
renderer_class=CairoRenderer,
base_scene=Scene,
**custom_config,
):
"""Compares the frames generated by the test with control frames previously registered.
If there is no control frames for this test, the test will fail. To generate
control frames for a given test, pass ``--set_test`` flag to pytest
while running the test.
Note that this decorator can be use with or without parentheses.
Parameters
----------
last_frame
whether the test should test the last frame, by default True.
renderer_class
The base renderer to use (OpenGLRenderer/CairoRenderer), by default CairoRenderer
base_scene
The base class for the scene (ThreeDScene, etc.), by default Scene
.. warning::
By default, last_frame is True, which means that only the last frame is tested.
If the scene has a moving animation, then the test must set last_frame to False.
"""
def decorator_maker(tested_scene_construct):
if (
SCENE_PARAMETER_NAME
not in inspect.getfullargspec(tested_scene_construct).args
):
raise Exception(
f"Invalid graphical test function test function : must have '{SCENE_PARAMETER_NAME}'as one of the parameters.",
)
# Exclude "scene" from the argument list of the signature.
old_sig = inspect.signature(
functools.partial(tested_scene_construct, scene=None),
)
if "__module_test__" not in tested_scene_construct.__globals__:
raise Exception(
"There is no module test name indicated for the graphical unit test. You have to declare __module_test__ in the test file.",
)
module_name = tested_scene_construct.__globals__.get("__module_test__")
test_name = tested_scene_construct.__name__[len("test_") :]
@functools.wraps(tested_scene_construct)
# The "request" parameter is meant to be used as a fixture by pytest. See below.
def wrapper(*args, request: FixtureRequest, tmp_path, **kwargs):
# Wraps the test_function to a construct method, to "freeze" the eventual additional arguments (parametrizations fixtures).
construct = functools.partial(tested_scene_construct, *args, **kwargs)
# Kwargs contains the eventual parametrization arguments.
# This modifies the test_name so that it is defined by the parametrization
# arguments too.
# Example: if "length" is parametrized from 0 to 20, the kwargs
# will be once with {"length" : 1}, etc.
test_name_with_param = test_name + "_".join(
f"_{str(tup[0])}[{str(tup[1])}]" for tup in kwargs.items()
)
config_tests = _config_test(last_frame)
config_tests["text_dir"] = tmp_path
config_tests["tex_dir"] = tmp_path
if last_frame:
config_tests["frame_rate"] = 1
config_tests["dry_run"] = True
setting_test = request.config.getoption("--set_test")
try:
test_file_path = tested_scene_construct.__globals__["__file__"]
except Exception:
test_file_path = None
real_test = _make_test_comparing_frames(
file_path=_control_data_path(
test_file_path,
module_name,
test_name_with_param,
setting_test,
),
base_scene=base_scene,
construct=construct,
renderer_class=renderer_class,
is_set_test_data_test=setting_test,
last_frame=last_frame,
show_diff=request.config.getoption("--show_diff"),
size_frame=(config_tests["pixel_height"], config_tests["pixel_width"]),
)
# Isolate the config used for the test, to avoid modifying the global config during the test run.
with tempconfig({**config_tests, **custom_config}):
real_test()
parameters = list(old_sig.parameters.values())
# Adds "request" param into the signature of the wrapper, to use the associated pytest fixture.
# This fixture is needed to have access to flags value and pytest's config. See above.
if "request" not in old_sig.parameters:
parameters += [inspect.Parameter("request", inspect.Parameter.KEYWORD_ONLY)]
if "tmp_path" not in old_sig.parameters:
parameters += [
inspect.Parameter("tmp_path", inspect.Parameter.KEYWORD_ONLY),
]
new_sig = old_sig.replace(parameters=parameters)
wrapper.__signature__ = new_sig
# Reach a bit into pytest internals to hoist the marks from our wrapped
# function.
setattr(wrapper, "pytestmark", [])
new_marks = getattr(tested_scene_construct, "pytestmark", [])
wrapper.pytestmark = new_marks
return wrapper
# Case where the decorator is called with and without parentheses.
# If func is None, callabl(None) returns False
if callable(func):
return decorator_maker(func)
return decorator_maker
def _make_test_comparing_frames(
file_path: Path,
base_scene: type[Scene],
construct: Callable[[Scene], None],
renderer_class: type, # Renderer type, there is no superclass renderer yet .....
is_set_test_data_test: bool,
last_frame: bool,
show_diff: bool,
size_frame: tuple,
) -> Callable[[], None]:
"""Create the real pytest test that will fail if the frames mismatch.
Parameters
----------
file_path
The path of the control frames.
base_scene
The base scene class.
construct
The construct method (= the test function)
renderer_class
The renderer base class.
show_diff
whether to visually show_diff (see --show_diff)
Returns
-------
Callable[[], None]
The pytest test.
"""
if is_set_test_data_test:
frames_tester = _ControlDataWriter(file_path, size_frame=size_frame)
else:
frames_tester = _FramesTester(file_path, show_diff=show_diff)
file_writer_class = (
_make_scene_file_writer_class(frames_tester)
if not last_frame
else DummySceneFileWriter
)
testRenderer = _make_test_renderer_class(renderer_class)
def real_test():
with frames_tester.testing():
sceneTested = _make_test_scene_class(
base_scene=base_scene,
construct_test=construct,
# NOTE this is really ugly but it's due to the very bad design of the two renderers.
# If you pass a custom renderer to the Scene, the Camera class given as an argument in the Scene
# is not passed to the renderer. See __init__ of Scene.
# This potentially prevents OpenGL testing.
test_renderer=testRenderer(file_writer_class=file_writer_class)
if base_scene is not ThreeDScene
else testRenderer(
file_writer_class=file_writer_class,
camera_class=ThreeDCamera,
), # testRenderer(file_writer_class=file_writer_class),
)
scene_tested = sceneTested(skip_animations=True)
scene_tested.render()
if last_frame:
frames_tester.check_frame(-1, scene_tested.renderer.get_frame())
return real_test
def _control_data_path(
test_file_path: str | None, module_name: str, test_name: str, setting_test: bool
) -> Path:
if test_file_path is None:
# For some reason, path to test file containing @frames_comparison could not
# be determined. Use local directory instead.
test_file_path = __file__
path = Path(test_file_path).absolute().parent / "control_data" / module_name
if setting_test:
# Create the directory if not existing.
path.mkdir(exist_ok=True)
if not setting_test and not path.exists():
raise Exception(f"The control frames directory can't be found in {path}")
path = (path / test_name).with_suffix(".npz")
if not setting_test and not path.is_file():
raise Exception(
f"The control frame for the test {test_name} cannot be found in {path.parent}. "
"Make sure you generated the control frames first.",
)
return path
def _config_test(last_frame: bool) -> ManimConfig:
return ManimConfig().digest_file(
str(
Path(__file__).parent
/ (
"config_graphical_tests_monoframe.cfg"
if last_frame
else "config_graphical_tests_multiframes.cfg"
),
),
)
|
ManimML_helblazer811/manim_ml/utils/testing/doc_directive.py | r"""
A directive for including Manim videos in a Sphinx document
"""
from __future__ import annotations
import csv
import itertools as it
import os
import re
import shutil
import sys
from pathlib import Path
from timeit import timeit
import jinja2
from docutils import nodes
from docutils.parsers.rst import Directive, directives # type: ignore
from docutils.statemachine import StringList
from manim import QUALITIES
classnamedict = {}
class SkipManimNode(nodes.Admonition, nodes.Element):
"""Auxiliary node class that is used when the ``skip-manim`` tag is present
or ``.pot`` files are being built.
Skips rendering the manim directive and outputs a placeholder instead.
"""
pass
def visit(self, node, name=""):
self.visit_admonition(node, name)
if not isinstance(node[0], nodes.title):
node.insert(0, nodes.title("skip-manim", "Example Placeholder"))
def depart(self, node):
self.depart_admonition(node)
def process_name_list(option_input: str, reference_type: str) -> list[str]:
r"""Reformats a string of space separated class names
as a list of strings containing valid Sphinx references.
Tests
-----
::
>>> process_name_list("Tex TexTemplate", "class")
[':class:`~.Tex`', ':class:`~.TexTemplate`']
>>> process_name_list("Scene.play Mobject.rotate", "func")
[':func:`~.Scene.play`', ':func:`~.Mobject.rotate`']
"""
return [f":{reference_type}:`~.{name}`" for name in option_input.split()]
class ManimDirective(Directive):
r"""The manim directive, rendering videos while building
the documentation.
See the module docstring for documentation.
"""
has_content = True
required_arguments = 1
optional_arguments = 0
option_spec = {
"hide_source": bool,
"no_autoplay": bool,
"quality": lambda arg: directives.choice(
arg,
("low", "medium", "high", "fourk"),
),
"save_as_gif": bool,
"save_last_frame": bool,
"ref_modules": lambda arg: process_name_list(arg, "mod"),
"ref_classes": lambda arg: process_name_list(arg, "class"),
"ref_functions": lambda arg: process_name_list(arg, "func"),
"ref_methods": lambda arg: process_name_list(arg, "meth"),
}
final_argument_whitespace = True
def run(self):
# Rendering is skipped if the tag skip-manim is present,
# or if we are making the pot-files
should_skip = (
"skip-manim" in self.state.document.settings.env.app.builder.tags.tags
or self.state.document.settings.env.app.builder.name == "gettext"
)
if should_skip:
node = SkipManimNode()
self.state.nested_parse(
StringList(
[
f"Placeholder block for ``{self.arguments[0]}``.",
"",
".. code-block:: python",
"",
]
+ [" " + line for line in self.content]
),
self.content_offset,
node,
)
return [node]
from manim import config, tempconfig
global classnamedict
clsname = self.arguments[0]
if clsname not in classnamedict:
classnamedict[clsname] = 1
else:
classnamedict[clsname] += 1
hide_source = "hide_source" in self.options
no_autoplay = "no_autoplay" in self.options
save_as_gif = "save_as_gif" in self.options
save_last_frame = "save_last_frame" in self.options
assert not (save_as_gif and save_last_frame)
ref_content = (
self.options.get("ref_modules", [])
+ self.options.get("ref_classes", [])
+ self.options.get("ref_functions", [])
+ self.options.get("ref_methods", [])
)
if ref_content:
ref_block = "References: " + " ".join(ref_content)
else:
ref_block = ""
if "quality" in self.options:
quality = f'{self.options["quality"]}_quality'
else:
quality = "example_quality"
frame_rate = QUALITIES[quality]["frame_rate"]
pixel_height = QUALITIES[quality]["pixel_height"]
pixel_width = QUALITIES[quality]["pixel_width"]
state_machine = self.state_machine
document = state_machine.document
source_file_name = Path(document.attributes["source"])
source_rel_name = source_file_name.relative_to(setup.confdir)
source_rel_dir = source_rel_name.parents[0]
dest_dir = Path(setup.app.builder.outdir, source_rel_dir).absolute()
if not dest_dir.exists():
dest_dir.mkdir(parents=True, exist_ok=True)
source_block = [
".. code-block:: python",
"",
" from manim import *\n",
*(" " + line for line in self.content),
]
source_block = "\n".join(source_block)
config.media_dir = (Path(setup.confdir) / "media").absolute()
config.images_dir = "{media_dir}/images"
config.video_dir = "{media_dir}/videos/{quality}"
output_file = f"{clsname}-{classnamedict[clsname]}"
config.assets_dir = Path("_static")
config.progress_bar = "none"
config.verbosity = "WARNING"
example_config = {
"frame_rate": frame_rate,
"no_autoplay": no_autoplay,
"pixel_height": pixel_height,
"pixel_width": pixel_width,
"save_last_frame": save_last_frame,
"write_to_movie": not save_last_frame,
"output_file": output_file,
}
if save_last_frame:
example_config["format"] = None
if save_as_gif:
example_config["format"] = "gif"
user_code = self.content
if user_code[0].startswith(">>> "): # check whether block comes from doctest
user_code = [
line[4:] for line in user_code if line.startswith((">>> ", "... "))
]
code = [
"from manim import *",
*user_code,
f"{clsname}().render()",
]
with tempconfig(example_config):
run_time = timeit(lambda: exec("\n".join(code), globals()), number=1)
video_dir = config.get_dir("video_dir")
images_dir = config.get_dir("images_dir")
_write_rendering_stats(
clsname,
run_time,
self.state.document.settings.env.docname,
)
# copy video file to output directory
if not (save_as_gif or save_last_frame):
filename = f"{output_file}.mp4"
filesrc = video_dir / filename
destfile = Path(dest_dir, filename)
shutil.copyfile(filesrc, destfile)
elif save_as_gif:
filename = f"{output_file}.gif"
filesrc = video_dir / filename
elif save_last_frame:
filename = f"{output_file}.png"
filesrc = images_dir / filename
else:
raise ValueError("Invalid combination of render flags received.")
rendered_template = jinja2.Template(TEMPLATE).render(
clsname=clsname,
clsname_lowercase=clsname.lower(),
hide_source=hide_source,
filesrc_rel=Path(filesrc).relative_to(setup.confdir).as_posix(),
no_autoplay=no_autoplay,
output_file=output_file,
save_last_frame=save_last_frame,
save_as_gif=save_as_gif,
source_block=source_block,
ref_block=ref_block,
)
state_machine.insert_input(
rendered_template.split("\n"),
source=document.attributes["source"],
)
return []
rendering_times_file_path = Path("../rendering_times.csv")
def _write_rendering_stats(scene_name, run_time, file_name):
with rendering_times_file_path.open("a") as file:
csv.writer(file).writerow(
[
re.sub(r"^(reference\/)|(manim\.)", "", file_name),
scene_name,
"%.3f" % run_time,
],
)
def _log_rendering_times(*args):
if rendering_times_file_path.exists():
with rendering_times_file_path.open() as file:
data = list(csv.reader(file))
if len(data) == 0:
sys.exit()
print("\nRendering Summary\n-----------------\n")
max_file_length = max(len(row[0]) for row in data)
for key, group in it.groupby(data, key=lambda row: row[0]):
key = key.ljust(max_file_length + 1, ".")
group = list(group)
if len(group) == 1:
row = group[0]
print(f"{key}{row[2].rjust(7, '.')}s {row[1]}")
continue
time_sum = sum(float(row[2]) for row in group)
print(
f"{key}{f'{time_sum:.3f}'.rjust(7, '.')}s => {len(group)} EXAMPLES",
)
for row in group:
print(f"{' '*(max_file_length)} {row[2].rjust(7)}s {row[1]}")
print("")
def _delete_rendering_times(*args):
if rendering_times_file_path.exists():
rendering_times_file_path.unlink()
def setup(app):
app.add_node(SkipManimNode, html=(visit, depart))
setup.app = app
setup.config = app.config
setup.confdir = app.confdir
app.add_directive("manim", ManimDirective)
app.connect("builder-inited", _delete_rendering_times)
app.connect("build-finished", _log_rendering_times)
metadata = {"parallel_read_safe": False, "parallel_write_safe": True}
return metadata
TEMPLATE = r"""
{% if not hide_source %}
.. raw:: html
<div id="{{ clsname_lowercase }}" class="admonition admonition-manim-example">
<p class="admonition-title">Example: {{ clsname }} <a class="headerlink" href="#{{ clsname_lowercase }}">¶</a></p>
{% endif %}
{% if not (save_as_gif or save_last_frame) %}
.. raw:: html
<video
class="manim-video"
controls
loop
{{ '' if no_autoplay else 'autoplay' }}
src="./{{ output_file }}.mp4">
</video>
{% elif save_as_gif %}
.. image:: /{{ filesrc_rel }}
:align: center
{% elif save_last_frame %}
.. image:: /{{ filesrc_rel }}
:align: center
{% endif %}
{% if not hide_source %}
{{ source_block }}
{{ ref_block }}
.. raw:: html
</div>
{% endif %}
"""
|
ManimML_helblazer811/manim_ml/decision_tree/decision_tree_surface.py | from manim import *
import numpy as np
from collections import deque
from sklearn.tree import _tree as ctree
class AABB:
"""Axis-aligned bounding box"""
def __init__(self, n_features):
self.limits = np.array([[-np.inf, np.inf]] * n_features)
def split(self, f, v):
left = AABB(self.limits.shape[0])
right = AABB(self.limits.shape[0])
left.limits = self.limits.copy()
right.limits = self.limits.copy()
left.limits[f, 1] = v
right.limits[f, 0] = v
return left, right
def tree_bounds(tree, n_features=None):
"""Compute final decision rule for each node in tree"""
if n_features is None:
n_features = np.max(tree.feature) + 1
aabbs = [AABB(n_features) for _ in range(tree.node_count)]
queue = deque([0])
while queue:
i = queue.pop()
l = tree.children_left[i]
r = tree.children_right[i]
if l != ctree.TREE_LEAF:
aabbs[l], aabbs[r] = aabbs[i].split(tree.feature[i], tree.threshold[i])
queue.extend([l, r])
return aabbs
def compute_decision_areas(
tree_classifier,
maxrange,
x=0,
y=1,
n_features=None
):
"""Extract decision areas.
tree_classifier: Instance of a sklearn.tree.DecisionTreeClassifier
maxrange: values to insert for [left, right, top, bottom] if the interval is open (+/-inf)
x: index of the feature that goes on the x axis
y: index of the feature that goes on the y axis
n_features: override autodetection of number of features
"""
tree = tree_classifier.tree_
aabbs = tree_bounds(tree, n_features)
maxrange = np.array(maxrange)
rectangles = []
for i in range(len(aabbs)):
if tree.children_left[i] != ctree.TREE_LEAF:
continue
l = aabbs[i].limits
r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1], np.argmax(tree.value[i])]
# clip out of bounds indices
"""
if r[0] < maxrange[0]:
r[0] = maxrange[0]
if r[1] > maxrange[1]:
r[1] = maxrange[1]
if r[2] < maxrange[2]:
r[2] = maxrange[2]
if r[3] > maxrange[3]:
r[3] = maxrange[3]
print(r)
"""
rectangles.append(r)
rectangles = np.array(rectangles)
rectangles[:, [0, 2]] = np.maximum(rectangles[:, [0, 2]], maxrange[0::2])
rectangles[:, [1, 3]] = np.minimum(rectangles[:, [1, 3]], maxrange[1::2])
return rectangles
def plot_areas(rectangles):
for rect in rectangles:
color = ["b", "r"][int(rect[4])]
print(rect[0], rect[1], rect[2] - rect[0], rect[3] - rect[1])
rp = Rectangle(
[rect[0], rect[2]],
rect[1] - rect[0],
rect[3] - rect[2],
color=color,
alpha=0.3,
)
plt.gca().add_artist(rp)
def merge_overlapping_polygons(all_polygons, colors=[BLUE, GREEN, ORANGE]):
# get all polygons of each color
polygon_dict = {
str(BLUE).lower(): [],
str(GREEN).lower(): [],
str(ORANGE).lower(): [],
}
for polygon in all_polygons:
print(polygon_dict)
polygon_dict[str(polygon.color).lower()].append(polygon)
return_polygons = []
for color in colors:
color = str(color).lower()
polygons = polygon_dict[color]
points = set()
for polygon in polygons:
vertices = polygon.get_vertices().tolist()
vertices = [tuple(vert) for vert in vertices]
for pt in vertices:
if pt in points: # Shared vertice, remove it.
points.remove(pt)
else:
points.add(pt)
points = list(points)
sort_x = sorted(points)
sort_y = sorted(points, key=lambda x: x[1])
edges_h = {}
edges_v = {}
i = 0
while i < len(points):
curr_y = sort_y[i][1]
while i < len(points) and sort_y[i][1] == curr_y:
edges_h[sort_y[i]] = sort_y[i + 1]
edges_h[sort_y[i + 1]] = sort_y[i]
i += 2
i = 0
while i < len(points):
curr_x = sort_x[i][0]
while i < len(points) and sort_x[i][0] == curr_x:
edges_v[sort_x[i]] = sort_x[i + 1]
edges_v[sort_x[i + 1]] = sort_x[i]
i += 2
# Get all the polygons.
while edges_h:
# We can start with any point.
polygon = [(edges_h.popitem()[0], 0)]
while True:
curr, e = polygon[-1]
if e == 0:
next_vertex = edges_v.pop(curr)
polygon.append((next_vertex, 1))
else:
next_vertex = edges_h.pop(curr)
polygon.append((next_vertex, 0))
if polygon[-1] == polygon[0]:
# Closed polygon
polygon.pop()
break
# Remove implementation-markers from the polygon.
poly = [point for point, _ in polygon]
for vertex in poly:
if vertex in edges_h:
edges_h.pop(vertex)
if vertex in edges_v:
edges_v.pop(vertex)
polygon = Polygon(*poly, color=color, fill_opacity=0.3, stroke_opacity=1.0)
return_polygons.append(polygon)
return return_polygons
class IrisDatasetPlot(VGroup):
def __init__(self, iris):
points = iris.data[:, 0:2]
labels = iris.feature_names
targets = iris.target
# Make points
self.point_group = self._make_point_group(points, targets)
# Make axes
self.axes_group = self._make_axes_group(points, labels)
# Make legend
self.legend_group = self._make_legend(
[BLUE, ORANGE, GREEN], iris.target_names, self.axes_group
)
# Make title
# title_text = "Iris Dataset Plot"
# self.title = Text(title_text).match_y(self.axes_group).shift([0.5, self.axes_group.height / 2 + 0.5, 0])
# Make all group
self.all_group = Group(self.point_group, self.axes_group, self.legend_group)
# scale the groups
self.point_group.scale(1.6)
self.point_group.match_x(self.axes_group)
self.point_group.match_y(self.axes_group)
self.point_group.shift([0.2, 0, 0])
self.axes_group.scale(0.7)
self.all_group.shift([0, 0.2, 0])
@override_animation(Create)
def create_animation(self):
animation_group = AnimationGroup(
# Perform the animations
Create(self.point_group, run_time=2),
Wait(0.5),
Create(self.axes_group, run_time=2),
# add title
# Create(self.title),
Create(self.legend_group),
)
return animation_group
def _make_point_group(self, points, targets, class_colors=[BLUE, ORANGE, GREEN]):
point_group = VGroup()
for point_index, point in enumerate(points):
# draw the dot
current_target = targets[point_index]
color = class_colors[current_target]
dot = Dot(point=np.array([point[0], point[1], 0])).set_color(color)
dot.scale(0.5)
point_group.add(dot)
return point_group
def _make_legend(self, class_colors, feature_labels, axes):
legend_group = VGroup()
# Make Text
setosa = Text("Setosa", color=BLUE)
verisicolor = Text("Verisicolor", color=ORANGE)
virginica = Text("Virginica", color=GREEN)
labels = VGroup(setosa, verisicolor, virginica).arrange(
direction=RIGHT, aligned_edge=LEFT, buff=2.0
)
labels.scale(0.5)
legend_group.add(labels)
# surrounding rectangle
surrounding_rectangle = SurroundingRectangle(labels, color=WHITE)
surrounding_rectangle.move_to(labels)
legend_group.add(surrounding_rectangle)
# shift the legend group
legend_group.move_to(axes)
legend_group.shift([0, -3.0, 0])
legend_group.match_x(axes[0][0])
return legend_group
def _make_axes_group(self, points, labels, font="Source Han Sans", font_scale=0.75):
axes_group = VGroup()
# make the axes
x_range = [
np.amin(points, axis=0)[0] - 0.2,
np.amax(points, axis=0)[0] - 0.2,
0.5,
]
y_range = [np.amin(points, axis=0)[1] - 0.2, np.amax(points, axis=0)[1], 0.5]
axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=9,
y_length=6.5,
# axis_config={"number_scale_value":0.75, "include_numbers":True},
tips=False,
).shift([0.5, 0.25, 0])
axes_group.add(axes)
# make axis labels
# x_label
x_label = (
Text(labels[0], font=font)
.match_y(axes.get_axes()[0])
.shift([0.5, -0.75, 0])
.scale(font_scale)
)
axes_group.add(x_label)
# y_label
y_label = (
Text(labels[1], font=font)
.match_x(axes.get_axes()[1])
.shift([-0.75, 0, 0])
.rotate(np.pi / 2)
.scale(font_scale)
)
axes_group.add(y_label)
return axes_group
class DecisionTreeSurface(VGroup):
def __init__(self, tree_clf, data, axes, class_colors=[BLUE, ORANGE, GREEN]):
# take the tree and construct the surface from it
self.tree_clf = tree_clf
self.data = data
self.axes = axes
self.class_colors = class_colors
self.surface_rectangles = self.generate_surface_rectangles()
def generate_surface_rectangles(self):
# compute data bounds
left = np.amin(self.data[:, 0]) - 0.2
right = np.amax(self.data[:, 0]) - 0.2
top = np.amax(self.data[:, 1])
bottom = np.amin(self.data[:, 1]) - 0.2
maxrange = [left, right, bottom, top]
rectangles = compute_decision_areas(
self.tree_clf, maxrange, x=0, y=1, n_features=2
)
# turn the rectangle objects into manim rectangles
def convert_rectangle_to_polygon(rect):
# get the points for the rectangle in the plot coordinate frame
bottom_left = [rect[0], rect[3]]
bottom_right = [rect[1], rect[3]]
top_right = [rect[1], rect[2]]
top_left = [rect[0], rect[2]]
# convert those points into the entire manim coordinates
bottom_left_coord = self.axes.coords_to_point(*bottom_left)
bottom_right_coord = self.axes.coords_to_point(*bottom_right)
top_right_coord = self.axes.coords_to_point(*top_right)
top_left_coord = self.axes.coords_to_point(*top_left)
points = [
bottom_left_coord,
bottom_right_coord,
top_right_coord,
top_left_coord,
]
# construct a polygon object from those manim coordinates
rectangle = Polygon(
*points, color=color, fill_opacity=0.3, stroke_opacity=0.0
)
return rectangle
manim_rectangles = []
for rect in rectangles:
color = self.class_colors[int(rect[4])]
rectangle = convert_rectangle_to_polygon(rect)
manim_rectangles.append(rectangle)
manim_rectangles = merge_overlapping_polygons(
manim_rectangles, colors=[BLUE, GREEN, ORANGE]
)
return manim_rectangles
@override_animation(Create)
def create_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Create(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Uncreate)
def uncreate_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Uncreate(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
def make_split_to_animation_map(self):
"""
Returns a dictionary mapping a given split
node to an animation to be played
"""
# Create an initial decision tree surface
# Go through each split node
# 1. Make a line split animation
# 2. Create the relevant classification areas
# and transform the old ones to them
pass
|
ManimML_helblazer811/manim_ml/decision_tree/helpers.py | def compute_node_depths(tree):
"""Computes the depths of nodes for level order traversal"""
def depth(node_index, current_node_index=0):
"""Compute the height of a node"""
if current_node_index == node_index:
return 0
elif (
tree.children_left[current_node_index]
== tree.children_right[current_node_index]
):
return -1
else:
# Compute the height of each subtree
l_depth = depth(node_index, tree.children_left[current_node_index])
r_depth = depth(node_index, tree.children_right[current_node_index])
# The index is only in one of them
if l_depth != -1:
return l_depth + 1
elif r_depth != -1:
return r_depth + 1
else:
return -1
node_depths = [depth(index) for index in range(tree.node_count)]
return node_depths
def compute_level_order_traversal(tree):
"""Computes level order traversal of a sklearn tree"""
def depth(node_index, current_node_index=0):
"""Compute the height of a node"""
if current_node_index == node_index:
return 0
elif (
tree.children_left[current_node_index]
== tree.children_right[current_node_index]
):
return -1
else:
# Compute the height of each subtree
l_depth = depth(node_index, tree.children_left[current_node_index])
r_depth = depth(node_index, tree.children_right[current_node_index])
# The index is only in one of them
if l_depth != -1:
return l_depth + 1
elif r_depth != -1:
return r_depth + 1
else:
return -1
node_depths = [(index, depth(index)) for index in range(tree.node_count)]
node_depths = sorted(node_depths, key=lambda x: x[1])
sorted_inds = [node_depth[0] for node_depth in node_depths]
return sorted_inds
def compute_bfs_traversal(tree):
"""Traverses the tree in BFS order and returns the nodes in order"""
traversal_order = []
tree_root_index = 0
queue = [tree_root_index]
while len(queue) > 0:
current_index = queue.pop(0)
traversal_order.append(current_index)
left_child_index = tree.children_left[node_index]
right_child_index = tree.children_right[node_index]
is_leaf_node = left_child_index == right_child_index
if not is_leaf_node:
queue.append(left_child_index)
queue.append(right_child_index)
return traversal_order
def compute_best_first_traversal(tree):
"""Traverses the tree according to the best split first order"""
pass
def compute_node_to_parent_mapping(tree):
"""Returns a hashmap mapping node indices to their parent indices"""
node_to_parent = {0: -1} # Root has no parent
num_nodes = tree.node_count
for node_index in range(num_nodes):
# Explore left children
left_child_node_index = tree.children_left[node_index]
if left_child_node_index != -1:
node_to_parent[left_child_node_index] = node_index
# Explore right children
right_child_node_index = tree.children_right[node_index]
if right_child_node_index != -1:
node_to_parent[right_child_node_index] = node_index
return node_to_parent
|
ManimML_helblazer811/manim_ml/decision_tree/decision_tree.py | """
Module for visualizing decision trees in Manim.
It parses a decision tree classifier from sklearn.
TODO return a map from nodes to split animation for BFS tree expansion
TODO reimplement the decision 2D decision tree surface drawing.
"""
from manim import *
from manim_ml.decision_tree.decision_tree_surface import (
compute_decision_areas,
merge_overlapping_polygons,
)
import manim_ml.decision_tree.helpers as helpers
import numpy as np
from PIL import Image
class LeafNode(Group):
"""Leaf node in tree"""
def __init__(
self, class_index, display_type="image", class_image_paths=[], class_colors=[]
):
super().__init__()
self.display_type = display_type
self.class_image_paths = class_image_paths
self.class_colors = class_colors
assert self.display_type in ["image", "text"]
if self.display_type == "image":
self._construct_image_node(class_index)
else:
raise NotImplementedError()
def _construct_image_node(self, class_index):
"""Make an image node"""
# Get image
image_path = self.class_image_paths[class_index]
pil_image = Image.open(image_path)
node = ImageMobject(pil_image)
node.scale(1.5)
rectangle = Rectangle(
width=node.width + 0.05,
height=node.height + 0.05,
color=self.class_colors[class_index],
stroke_width=6,
)
rectangle.move_to(node.get_center())
rectangle.shift([-0.02, 0.02, 0])
self.add(rectangle)
self.add(node)
class SplitNode(VGroup):
"""Node for splitting decision in tree"""
def __init__(self, feature, threshold):
super().__init__()
node_text = f"{feature}\n<= {threshold:.2f} cm"
# Draw decision text
decision_text = Text(node_text, color=WHITE)
# Draw the surrounding box
bounding_box = SurroundingRectangle(decision_text, buff=0.3, color=WHITE)
self.add(bounding_box)
self.add(decision_text)
class DecisionTreeDiagram(Group):
"""Decision Tree Diagram Class for Manim"""
def __init__(
self,
sklearn_tree,
feature_names=None,
class_names=None,
class_images_paths=None,
class_colors=[RED, GREEN, BLUE],
):
super().__init__()
self.tree = sklearn_tree
self.feature_names = feature_names
self.class_names = class_names
self.class_image_paths = class_images_paths
self.class_colors = class_colors
# Make graph container for the tree
self.tree_group, self.nodes_map, self.edge_map = self._make_tree()
self.add(self.tree_group)
def _make_node(
self,
node_index,
):
"""Make node"""
is_split_node = (
self.tree.children_left[node_index] != self.tree.children_right[node_index]
)
if is_split_node:
node_feature = self.tree.feature[node_index]
node_threshold = self.tree.threshold[node_index]
node = SplitNode(self.feature_names[node_feature], node_threshold)
else:
# Get the most abundant class for the given leaf node
# Make the leaf node object
tree_class_index = np.argmax(self.tree.value[node_index])
node = LeafNode(
class_index=tree_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
return node
def _make_connection(self, top, bottom, is_leaf=False):
"""Make a connection from top to bottom"""
top_node_bottom_location = top.get_center()
top_node_bottom_location[1] -= top.height / 2
bottom_node_top_location = bottom.get_center()
bottom_node_top_location[1] += bottom.height / 2
line = Line(top_node_bottom_location, bottom_node_top_location, color=WHITE)
return line
def _make_tree(self):
"""Construct the tree diagram"""
tree_group = Group()
max_depth = self.tree.max_depth
# Make the root node
nodes_map = {}
root_node = self._make_node(
node_index=0,
)
nodes_map[0] = root_node
tree_group.add(root_node)
# Save some information
node_height = root_node.height
node_width = root_node.width
scale_factor = 1.0
edge_map = {}
# tree height
tree_height = scale_factor * node_height * max_depth
tree_width = scale_factor * 2**max_depth * node_width
# traverse tree
def recurse(node_index, depth, direction, parent_object, parent_node):
# make the node object
is_leaf = (
self.tree.children_left[node_index]
== self.tree.children_right[node_index]
)
node_object = self._make_node(node_index=node_index)
nodes_map[node_index] = node_object
node_height = node_object.height
# set the node position
direction_factor = -1 if direction == "left" else 1
shift_right_amount = (
0.9 * direction_factor * scale_factor * tree_width / (2**depth) / 2
)
if is_leaf:
shift_down_amount = -1.0 * scale_factor * node_height
else:
shift_down_amount = -1.8 * scale_factor * node_height
node_object.match_x(parent_object).match_y(parent_object).shift(
[shift_right_amount, shift_down_amount, 0]
)
tree_group.add(node_object)
# make a connection
connection = self._make_connection(
parent_object, node_object, is_leaf=is_leaf
)
edge_name = str(parent_node) + "," + str(node_index)
edge_map[edge_name] = connection
tree_group.add(connection)
# recurse
if not is_leaf:
recurse(
self.tree.children_left[node_index],
depth + 1,
"left",
node_object,
node_index,
)
recurse(
self.tree.children_right[node_index],
depth + 1,
"right",
node_object,
node_index,
)
recurse(self.tree.children_left[0], 1, "left", root_node, 0)
recurse(self.tree.children_right[0], 1, "right", root_node, 0)
tree_group.scale(0.35)
return tree_group, nodes_map, edge_map
def create_level_order_expansion_decision_tree(self, tree):
"""Expands the decision tree in level order"""
raise NotImplementedError()
def create_bfs_expansion_decision_tree(self, tree):
"""Expands the tree using BFS"""
animations = []
split_node_animations = {} # Dictionary mapping split node to animation
# Compute parent mapping
parent_mapping = helpers.compute_node_to_parent_mapping(self.tree)
# Create the root node as most common class
placeholder_class_nodes = {}
root_node_class_index = np.argmax(
self.tree.value[0]
)
root_placeholder_node = LeafNode(
class_index=root_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
root_placeholder_node.move_to(self.nodes_map[0])
placeholder_class_nodes[0] = root_placeholder_node
root_create_animation = AnimationGroup(
FadeIn(root_placeholder_node),
lag_ratio=0.0
)
animations.append(root_create_animation)
# Iterate through the nodes
queue = [0]
while len(queue) > 0:
node_index = queue.pop(0)
# Check if a node is a split node or not
left_child_index = self.tree.children_left[node_index]
right_child_index = self.tree.children_right[node_index]
is_leaf_node = left_child_index == right_child_index
if not is_leaf_node:
# Remove the currently placeholder class node
fade_out_animation = FadeOut(
placeholder_class_nodes[node_index]
)
animations.append(fade_out_animation)
# Fade in the split node
fade_in_animation = FadeIn(
self.nodes_map[node_index]
)
animations.append(fade_in_animation)
# Split the node by creating the children and connecting them
# to the parent
# Handle left child
assert left_child_index in self.nodes_map.keys()
left_node = self.nodes_map[left_child_index]
left_parent_edge = self.edge_map[f"{node_index},{left_child_index}"]
# Get the children of the left node
left_node_left_index = self.tree.children_left[left_child_index]
left_node_right_index = self.tree.children_right[left_child_index]
left_is_leaf = left_node_left_index == left_node_right_index
if left_is_leaf:
# If a child is a leaf then just create it
left_animation = FadeIn(left_node)
else:
# If the child is a split node find the dominant class and make a temp
left_node_class_index = np.argmax(
self.tree.value[left_child_index]
)
new_leaf_node = LeafNode(
class_index=left_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
new_leaf_node.move_to(self.nodes_map[leaf_child_index])
placeholder_class_nodes[left_child_index] = new_leaf_node
left_animation = AnimationGroup(
FadeIn(new_leaf_node),
Create(left_parent_edge),
lag_ratio=0.0
)
# Handle right child
assert right_child_index in self.nodes_map.keys()
right_node = self.nodes_map[right_child_index]
right_parent_edge = self.edge_map[f"{node_index},{right_child_index}"]
# Get the children of the left node
right_node_left_index = self.tree.children_left[right_child_index]
right_node_right_index = self.tree.children_right[right_child_index]
right_is_leaf = right_node_left_index == right_node_right_index
if right_is_leaf:
# If a child is a leaf then just create it
right_animation = FadeIn(right_node)
else:
# If the child is a split node find the dominant class and make a temp
right_node_class_index = np.argmax(
self.tree.value[right_child_index]
)
new_leaf_node = LeafNode(
class_index=right_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
placeholder_class_nodes[right_child_index] = new_leaf_node
right_animation = AnimationGroup(
FadeIn(new_leaf_node),
Create(right_parent_edge),
lag_ratio=0.0
)
# Combine the animations
split_animation = AnimationGroup(
left_animation,
right_animation,
lag_ratio=0.0,
)
animations.append(split_animation)
# Add the split animation to the split node dict
split_node_animations[node_index] = split_animation
# Add the children to the queue
if left_child_index != -1:
queue.append(left_child_index)
if right_child_index != -1:
queue.append(right_child_index)
return Succession(
*animations,
lag_ratio=1.0
), split_node_animations
def make_expand_tree_animation(self, node_expand_order):
"""
Make an animation for expanding the decision tree
Shows each split node as a leaf node initially, and
then when it comes up shows it as a split node. The
reason for this is for purposes of animating each of the
splits in a decision surface.
"""
# Show the root node as a leaf node
# Iterate through the nodes in the traversal order
for node_index in node_expand_order[1:]:
# Figure out if it is a leaf or not
# If it is not a leaf then remove the placeholder leaf node
# then show the split node
# If it is a leaf then just show the leaf node
pass
pass
@override_animation(Create)
def create_decision_tree(self, traversal_order="bfs"):
"""Makes a create animation for the decision tree"""
# Comptue the node expand order
if traversal_order == "level":
node_expand_order = helpers.compute_level_order_traversal(self.tree)
elif traversal_order == "bfs":
node_expand_order = helpers.compute_bfs_traversal(self.tree)
else:
raise Exception(f"Uncrecognized traversal: {traversal_order}")
# Make the animation
expand_tree_animation = self.make_expand_tree_animation(node_expand_order)
return expand_tree_animation
class DecisionTreeContainer():
"""Connects the DecisionTreeDiagram to the DecisionTreeEmbedding"""
def __init__(self, sklearn_tree, points, classes):
self.sklearn_tree = sklearn_tree
self.points = points
self.classes = classes
def make_unfold_tree_animation(self):
"""Unfolds the tree through an in order traversal
This animations unfolds the tree diagram as well as showing the splitting
of a shaded region in the Decision Tree embedding.
"""
# Draw points in the embedding
# Start the tree splitting animation
pass
|
ManimML_helblazer811/manim_ml/neural_network/neural_network.py | """Neural Network Manim Visualization
This module is responsible for generating a neural network visualization with
manim, specifically a fully connected neural network diagram.
Example:
# Specify how many nodes are in each node layer
layer_node_count = [5, 3, 5]
# Create the object with default style settings
NeuralNetwork(layer_node_count)
"""
import textwrap
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.utils.mobjects.connections import NetworkConnection
import numpy as np
from manim import *
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.util import get_connective_layer
from manim_ml.utils.mobjects.list_group import ListGroup
from manim_ml.neural_network.animations.neural_network_transformations import (
InsertLayer,
RemoveLayer,
)
import manim_ml
class NeuralNetwork(Group):
"""Neural Network Visualization Container Class"""
def __init__(
self,
input_layers,
layer_spacing=0.2,
animation_dot_color=manim_ml.config.color_scheme.active_color,
edge_width=2.5,
dot_radius=0.03,
title=" ",
layout="linear",
layout_direction="left_to_right",
debug_mode=False
):
super(Group, self).__init__()
self.input_layers_dict = self.make_input_layers_dict(input_layers)
self.input_layers = ListGroup(*self.input_layers_dict.values())
self.edge_width = edge_width
self.layer_spacing = layer_spacing
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.title_text = title
self.created = False
self.layout = layout
self.layout_direction = layout_direction
self.debug_mode = debug_mode
# TODO take layer_node_count [0, (1, 2), 0]
# and make it have explicit distinct subspaces
# Construct all of the layers
self._construct_input_layers()
# Place the layers
self._place_layers(layout=layout, layout_direction=layout_direction)
# Make the connective layers
self.connective_layers, self.all_layers = self._construct_connective_layers()
# Place the connective layers
self._place_connective_layers()
# Make overhead title
self.title = Text(
self.title_text,
font_size=DEFAULT_FONT_SIZE / 2
)
self.title.next_to(self, UP * self.layer_spacing, buff=0.25)
self.add(self.title)
# Place layers at correct z index
self.connective_layers.set_z_index(2)
self.input_layers.set_z_index(3)
# Center the whole diagram by default
self.all_layers.move_to(ORIGIN)
self.add(self.all_layers)
# Make container for connections
self.connections = []
# Print neural network
print(repr(self))
def make_input_layers_dict(self, input_layers):
"""Make dictionary of input layers"""
if isinstance(input_layers, dict):
# If input layers is dictionary then return it
return input_layers
elif isinstance(input_layers, list):
# If input layers is a list then make a dictionary with default
return_dict = {}
for layer_index, input_layer in enumerate(input_layers):
return_dict[f"layer{layer_index}"] = input_layer
return return_dict
else:
raise Exception(f"Uncrecognized input layers type: {type(input_layers)}")
def add_connection(
self,
start_mobject_or_name,
end_mobject_or_name,
connection_style="default",
connection_position="bottom",
arc_direction="down"
):
"""Add connection from start layer to end layer"""
assert connection_style in ["default"]
if connection_style == "default":
# Make arrow connection from start layer to end layer
# Add the connection
if isinstance(start_mobject_or_name, Mobject):
input_mobject = start_mobject_or_name
else:
input_mobject = self.input_layers_dict[start_mobject_or_name]
if isinstance(end_mobject_or_name, Mobject):
output_mobject = end_mobject_or_name
else:
output_mobject = self.input_layers_dict[end_mobject_or_name]
connection = NetworkConnection(
input_mobject,
output_mobject,
arc_direction=arc_direction,
buffer=0.05
)
self.connections.append(connection)
self.add(connection)
def _construct_input_layers(self):
"""Constructs each of the input layers in context
of their adjacent layers"""
prev_layer = None
next_layer = None
# Go through all the input layers and run their construct method
print("Constructing layers")
for layer_index in range(len(self.input_layers)):
current_layer = self.input_layers[layer_index]
print(f"Current layer: {current_layer}")
if layer_index < len(self.input_layers) - 1:
next_layer = self.input_layers[layer_index + 1]
if layer_index > 0:
prev_layer = self.input_layers[layer_index - 1]
# Run the construct layer method for each
current_layer.construct_layer(
prev_layer,
next_layer,
debug_mode=self.debug_mode
)
def _place_layers(
self,
layout="linear",
layout_direction="top_to_bottom"
):
"""Creates the neural network"""
# TODO implement more sophisticated custom layouts
# Default: Linear layout
for layer_index in range(1, len(self.input_layers)):
previous_layer = self.input_layers[layer_index - 1]
current_layer = self.input_layers[layer_index]
current_layer.move_to(previous_layer.get_center())
if layout_direction == "left_to_right":
x_shift = (
previous_layer.get_width() / 2
+ current_layer.get_width() / 2
+ self.layer_spacing
)
shift_vector = np.array([x_shift, 0, 0])
elif layout_direction == "top_to_bottom":
y_shift = -(
(previous_layer.get_width() / 2 + current_layer.get_width() / 2)
+ self.layer_spacing
)
shift_vector = np.array([0, y_shift, 0])
else:
raise Exception(f"Unrecognized layout direction: {layout_direction}")
current_layer.shift(shift_vector)
# After all layers have been placed place their activation functions
layer_max_height = max([layer.get_height() for layer in self.input_layers])
for current_layer in self.input_layers:
# Place activation function
if hasattr(current_layer, "activation_function"):
if not current_layer.activation_function is None:
# Get max height of layer
up_movement = np.array([
0,
layer_max_height / 2
+ current_layer.activation_function.get_height() / 2
+ 0.5 * self.layer_spacing,
0,
])
current_layer.activation_function.move_to(
current_layer,
)
current_layer.activation_function.match_y(
self.input_layers[0]
)
current_layer.activation_function.shift(
up_movement
)
self.add(
current_layer.activation_function
)
def _construct_connective_layers(self):
"""Draws connecting lines between layers"""
connective_layers = ListGroup()
all_layers = ListGroup()
for layer_index in range(len(self.input_layers) - 1):
current_layer = self.input_layers[layer_index]
# Add the layer to the list of layers
all_layers.add(current_layer)
next_layer = self.input_layers[layer_index + 1]
# Check if layer is actually a nested NeuralNetwork
if isinstance(current_layer, NeuralNetwork):
# Last layer of the current layer
current_layer = current_layer.all_layers[-1]
if isinstance(next_layer, NeuralNetwork):
# First layer of the next layer
next_layer = next_layer.all_layers[0]
# Find connective layer with correct layer pair
connective_layer = get_connective_layer(current_layer, next_layer)
connective_layers.add(connective_layer)
# Construct the connective layer
connective_layer.construct_layer(current_layer, next_layer)
# Add the layer to the list of layers
all_layers.add(connective_layer)
# Add final layer
all_layers.add(self.input_layers[-1])
# Handle layering
return connective_layers, all_layers
def _place_connective_layers(self):
"""Places the connective layers
"""
# Place each of the connective layers halfway between the adjacent layers
for connective_layer in self.connective_layers:
layer_midpoint = (
connective_layer.input_layer.get_center() +
connective_layer.output_layer.get_center()
) / 2
connective_layer.move_to(layer_midpoint)
def insert_layer(self, layer, insert_index):
"""Inserts a layer at the given index"""
neural_network = self
insert_animation = InsertLayer(layer, insert_index, neural_network)
return insert_animation
def remove_layer(self, layer):
"""Removes layer object if it exists"""
neural_network = self
return RemoveLayer(layer, neural_network, layer_spacing=self.layer_spacing)
def replace_layer(self, old_layer, new_layer):
"""Replaces given layer object"""
raise NotImplementedError()
remove_animation = self.remove_layer(insert_index)
insert_animation = self.insert_layer(layer, insert_index)
# Make the animation
animation_group = AnimationGroup(
FadeOut(self.all_layers[insert_index]), FadeIn(layer), lag_ratio=1.0
)
return animation_group
def make_forward_pass_animation(
self,
run_time=None,
passing_flash=True,
layer_args={},
per_layer_animations=False,
**kwargs
):
"""Generates an animation for feed forward propagation"""
all_animations = []
per_layer_animation_map = {}
per_layer_runtime = (
run_time / len(self.all_layers) if not run_time is None else None
)
for layer_index, layer in enumerate(self.all_layers):
# Get the layer args
if isinstance(layer, ConnectiveLayer):
"""
NOTE: By default a connective layer will get the combined
layer_args of the layers it is connecting and itself.
"""
before_layer_args = {}
current_layer_args = {}
after_layer_args = {}
if layer.input_layer in layer_args:
before_layer_args = layer_args[layer.input_layer]
if layer in layer_args:
current_layer_args = layer_args[layer]
if layer.output_layer in layer_args:
after_layer_args = layer_args[layer.output_layer]
# Merge the two dicts
current_layer_args = {
**before_layer_args,
**current_layer_args,
**after_layer_args,
}
else:
current_layer_args = {}
if layer in layer_args:
current_layer_args = layer_args[layer]
# Perform the forward pass of the current layer
layer_forward_pass = layer.make_forward_pass_animation(
layer_args=current_layer_args, run_time=per_layer_runtime, **kwargs
)
# Animate a forward pass for incoming connections
connection_input_pass = AnimationGroup()
for connection in self.connections:
if isinstance(layer, ConnectiveLayer):
output_layer = layer.output_layer
if connection.end_mobject == output_layer:
connection_input_pass = ShowPassingFlash(
connection,
run_time=layer_forward_pass.run_time,
time_width=0.2,
)
break
layer_forward_pass = AnimationGroup(
layer_forward_pass,
connection_input_pass,
lag_ratio=0.0
)
all_animations.append(layer_forward_pass)
# Add the animation to per layer animation
per_layer_animation_map[layer] = layer_forward_pass
# Make the animation group
animation_group = Succession(*all_animations, lag_ratio=1.0)
if per_layer_animations:
return per_layer_animation_map
else:
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
"""Overrides Create animation"""
# Stop the neural network from being created twice
if self.created:
return AnimationGroup()
self.created = True
animations = []
# Create the overhead title
animations.append(Create(self.title))
# Create each layer one by one
for layer in self.all_layers:
layer_animation = Create(layer)
# Make titles
create_title = Create(layer.title)
# Create layer animation group
animation_group = AnimationGroup(layer_animation, create_title)
animations.append(animation_group)
animation_group = AnimationGroup(*animations, lag_ratio=1.0)
return animation_group
def set_z_index(self, z_index_value: float, family=False):
"""Overriden set_z_index"""
# Setting family=False stops sub-neural networks from inheriting parent z_index
for layer in self.all_layers:
if not isinstance(NeuralNetwork):
layer.set_z_index(z_index_value)
def scale(self, scale_factor, **kwargs):
"""Overriden scale"""
prior_center = self.get_center()
for layer in self.all_layers:
layer.scale(scale_factor, **kwargs)
# Place layers with scaled spacing
self.layer_spacing *= scale_factor
# self.connective_layers, self.all_layers = self._construct_connective_layers()
self._place_layers(layout=self.layout, layout_direction=self.layout_direction)
self._place_connective_layers()
# Scale the title
self.remove(self.title)
self.title.scale(scale_factor, **kwargs)
self.title.next_to(self, UP, buff=0.25 * scale_factor)
self.add(self.title)
self.move_to(prior_center)
def filter_layers(self, function):
"""Filters layers of the network given function"""
layers_to_return = []
for layer in self.all_layers:
func_out = function(layer)
assert isinstance(
func_out, bool
), "Filter layers function returned a non-boolean type."
if func_out:
layers_to_return.append(layer)
return layers_to_return
def __repr__(self, metadata=["z_index", "title_text"]):
"""Print string representation of layers"""
inner_string = ""
for layer in self.all_layers:
inner_string += f"{repr(layer)}("
for key in metadata:
value = getattr(layer, key)
if not value is "":
inner_string += f"{key}={value}, "
inner_string += "),\n"
inner_string = textwrap.indent(inner_string, " ")
string_repr = "NeuralNetwork([\n" + inner_string + "])"
return string_repr
|
ManimML_helblazer811/manim_ml/neural_network/__init__.py | from manim_ml.neural_network.neural_network import NeuralNetwork
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
Convolutional2DToConvolutional2D,
)
from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.convolutional_2d_to_max_pooling_2d import (
Convolutional2DToMaxPooling2D,
)
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.embedding_to_feed_forward import (
EmbeddingToFeedForward,
)
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.neural_network.layers.feed_forward_to_embedding import (
FeedForwardToEmbedding,
)
from manim_ml.neural_network.layers.feed_forward_to_feed_forward import (
FeedForwardToFeedForward,
)
from manim_ml.neural_network.layers.feed_forward_to_image import FeedForwardToImage
from manim_ml.neural_network.layers.feed_forward_to_vector import FeedForwardToVector
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image_to_convolutional_2d import (
ImageToConvolutional2DLayer,
)
from manim_ml.neural_network.layers.image_to_feed_forward import ImageToFeedForward
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.max_pooling_2d_to_convolutional_2d import (
MaxPooling2DToConvolutional2D,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.paired_query_to_feed_forward import (
PairedQueryToFeedForward,
)
from manim_ml.neural_network.layers.paired_query import PairedQueryLayer
from manim_ml.neural_network.layers.triplet_to_feed_forward import TripletToFeedForward
from manim_ml.neural_network.layers.triplet import TripletLayer
from manim_ml.neural_network.layers.vector import VectorLayer
from manim_ml.neural_network.layers.math_operation_layer import MathOperationLayer |
ManimML_helblazer811/manim_ml/neural_network/layers/math_operation_layer.py | from manim import *
from manim_ml.neural_network.activation_functions import get_activation_function_by_name
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class MathOperationLayer(VGroupNeuralNetworkLayer):
"""Handles rendering a layer for a neural network"""
valid_operations = ["+", "-", "*", "/"]
def __init__(
self,
operation_type: str,
node_radius=0.5,
node_color=BLUE,
node_stroke_width=2.0,
active_color=ORANGE,
activation_function=None,
font_size=20,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
# Ensure operation type is valid
assert operation_type in MathOperationLayer.valid_operations
self.operation_type = operation_type
self.node_radius = node_radius
self.node_color = node_color
self.node_stroke_width = node_stroke_width
self.active_color = active_color
self.font_size = font_size
self.activation_function = activation_function
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Creates the neural network layer"""
# Draw the operation
self.operation_text = Text(
self.operation_type,
font_size=self.font_size
)
self.add(self.operation_text)
# Make the surrounding circle
self.surrounding_circle = Circle(
color=self.node_color,
stroke_width=self.node_stroke_width
).surround(self.operation_text)
self.add(self.surrounding_circle)
# Make the activation function
self.construct_activation_function()
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_activation_function(self):
"""Construct the activation function"""
# Add the activation function
if not self.activation_function is None:
# Check if it is a string
if isinstance(self.activation_function, str):
activation_function = get_activation_function_by_name(
self.activation_function
)()
else:
assert isinstance(self.activation_function, ActivationFunction)
activation_function = self.activation_function
# Plot the function above the rest of the layer
self.activation_function = activation_function
self.add(self.activation_function)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes the forward pass animation
Parameters
----------
layer_args : dict, optional
layer specific arguments, by default {}
Returns
-------
AnimationGroup
Forward pass animation
"""
# Make highlight animation
succession = Succession(
ApplyMethod(
self.surrounding_circle.set_color,
self.active_color,
run_time=0.25
),
Wait(1.0),
ApplyMethod(
self.surrounding_circle.set_color,
self.node_color,
run_time=0.25
),
)
# Animate the activation function
if not self.activation_function is None:
animation_group = AnimationGroup(
succession,
self.activation_function.make_evaluate_animation(),
lag_ratio=0.0,
)
return animation_group
else:
return succession
def get_center(self):
return self.surrounding_circle.get_center()
def get_left(self):
return self.surrounding_circle.get_left()
def get_right(self):
return self.surrounding_circle.get_right()
def move_to(self, mobject_or_point):
"""Moves the center of the layer to the given mobject or point"""
layer_center = self.surrounding_circle.get_center()
if isinstance(mobject_or_point, Mobject):
target_center = mobject_or_point.get_center()
else:
target_center = mobject_or_point
self.shift(target_center - layer_center) |
ManimML_helblazer811/manim_ml/neural_network/layers/embedding.py | from manim import *
from manim_ml.utils.mobjects.probability import GaussianDistribution
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class EmbeddingLayer(VGroupNeuralNetworkLayer):
"""NeuralNetwork embedding object that can show probability distributions"""
def __init__(
self,
point_radius=0.02,
mean=np.array([0, 0]),
covariance=np.array([[1.0, 0], [0, 1.0]]),
dist_theme="gaussian",
paired_query_mode=False,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.mean = mean
self.covariance = covariance
self.gaussian_distributions = VGroup()
self.add(self.gaussian_distributions)
self.point_radius = point_radius
self.dist_theme = dist_theme
self.paired_query_mode = paired_query_mode
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
self.axes = Axes(
tips=False,
x_length=0.8,
y_length=0.8,
x_range=(-1.4, 1.4),
y_range=(-1.8, 1.8),
x_axis_config={"include_ticks": False, "stroke_width": 0.0},
y_axis_config={"include_ticks": False, "stroke_width": 0.0},
)
self.add(self.axes)
self.axes.move_to(self.get_center())
# Make point cloud
self.point_cloud = self.construct_gaussian_point_cloud(
self.mean, self.covariance
)
self.add(self.point_cloud)
# Make latent distribution
self.latent_distribution = GaussianDistribution(
self.axes, mean=self.mean, cov=self.covariance
) # Use defaults
super().construct_layer(input_layer, output_layer, **kwargs)
def add_gaussian_distribution(self, gaussian_distribution):
"""Adds given GaussianDistribution to the list"""
self.gaussian_distributions.add(gaussian_distribution)
return Create(gaussian_distribution)
def remove_gaussian_distribution(self, gaussian_distribution):
"""Removes the given gaussian distribution from the embedding"""
for gaussian in self.gaussian_distributions:
if gaussian == gaussian_distribution:
self.gaussian_distributions.remove(gaussian_distribution)
return FadeOut(gaussian)
def sample_point_location_from_distribution(self):
"""Samples from the current latent distribution"""
mean = self.latent_distribution.mean
cov = self.latent_distribution.cov
point = np.random.multivariate_normal(mean, cov)
# Make dot at correct location
location = self.axes.coords_to_point(point[0], point[1])
return location
def get_distribution_location(self):
"""Returns mean of latent distribution in axes frame"""
return self.axes.coords_to_point(self.latent_distribution.mean)
def construct_gaussian_point_cloud(
self, mean, covariance, point_color=WHITE, num_points=400
):
"""Plots points sampled from a Gaussian with the given mean and covariance"""
# Sample points from a Gaussian
np.random.seed(5)
points = np.random.multivariate_normal(mean, covariance, num_points)
# Add each point to the axes
point_dots = VGroup()
for point in points:
point_location = self.axes.coords_to_point(*point)
dot = Dot(point_location, color=point_color, radius=self.point_radius / 2)
dot.set_z_index(-1)
point_dots.add(dot)
return point_dots
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass animation"""
animations = []
if "triplet_args" in layer_args:
triplet_args = layer_args["triplet_args"]
positive_dist_args = triplet_args["positive_dist"]
negative_dist_args = triplet_args["negative_dist"]
anchor_dist_args = triplet_args["anchor_dist"]
# Create each dist
anchor_dist = GaussianDistribution(self.axes, **anchor_dist_args)
animations.append(Create(anchor_dist))
positive_dist = GaussianDistribution(self.axes, **positive_dist_args)
animations.append(Create(positive_dist))
negative_dist = GaussianDistribution(self.axes, **negative_dist_args)
animations.append(Create(negative_dist))
# Draw edges in between anchor and positive, anchor and negative
anchor_positive = Line(
anchor_dist.get_center(),
positive_dist.get_center(),
color=GOLD,
stroke_width=DEFAULT_STROKE_WIDTH / 2,
)
anchor_positive.set_z_index(3)
animations.append(Create(anchor_positive))
anchor_negative = Line(
anchor_dist.get_center(),
negative_dist.get_center(),
color=GOLD,
stroke_width=DEFAULT_STROKE_WIDTH / 2,
)
anchor_negative.set_z_index(3)
animations.append(Create(anchor_negative))
elif not self.paired_query_mode:
# Normal embedding mode
if "dist_args" in layer_args:
scale_factor = 1.0
if "scale_factor" in layer_args:
scale_factor = layer_args["scale_factor"]
self.latent_distribution = GaussianDistribution(
self.axes, **layer_args["dist_args"]
).scale(scale_factor)
else:
# Make ellipse object corresponding to the latent distribution
# self.latent_distribution = GaussianDistribution(
# self.axes,
# dist_theme=self.dist_theme,
# cov=np.array([[0.8, 0], [0.0, 0.8]])
# )
pass
# Create animation
create_distribution = Create(self.latent_distribution)
animations.append(create_distribution)
else:
# Paired Query Mode
assert "positive_dist_args" in layer_args
assert "negative_dist_args" in layer_args
positive_dist_args = layer_args["positive_dist_args"]
negative_dist_args = layer_args["negative_dist_args"]
# Handle logic for embedding a paired query into the embedding layer
positive_dist = GaussianDistribution(self.axes, **positive_dist_args)
self.gaussian_distributions.add(positive_dist)
negative_dist = GaussianDistribution(self.axes, **negative_dist_args)
self.gaussian_distributions.add(negative_dist)
animations.append(Create(positive_dist))
animations.append(Create(negative_dist))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
# Plot each point at once
point_animations = []
for point in self.point_cloud:
point_animations.append(GrowFromCenter(point))
point_animation = AnimationGroup(*point_animations, lag_ratio=1.0, run_time=2.5)
return point_animation
class NeuralNetworkEmbeddingTestScene(Scene):
def construct(self):
nne = EmbeddingLayer()
mean = np.array([0, 0])
cov = np.array([[5.0, 1.0], [0.0, 1.0]])
point_cloud = nne.construct_gaussian_point_cloud(mean, cov)
nne.add(point_cloud)
gaussian = nne.construct_gaussian_distribution(mean, cov)
nne.add(gaussian)
self.add(nne)
|
ManimML_helblazer811/manim_ml/neural_network/layers/paired_query.py | from manim import *
from manim_ml.neural_network.layers.parent_layers import NeuralNetworkLayer
from manim_ml.utils.mobjects.image import GrayscaleImageMobject, LabeledColorImage
import numpy as np
class PairedQueryLayer(NeuralNetworkLayer):
"""Paired Query Layer"""
def __init__(
self, positive, negative, stroke_width=5, font_size=18, spacing=0.5, **kwargs
):
super().__init__(**kwargs)
self.positive = positive
self.negative = negative
self.font_size = font_size
self.spacing = spacing
self.stroke_width = stroke_width
# Make the assets
self.assets = self.make_assets()
self.add(self.assets)
self.add(self.title)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
@classmethod
def from_paths(cls, positive_path, negative_path, grayscale=True, **kwargs):
"""Creates a query using the paths"""
# Load images from path
if grayscale:
positive = GrayscaleImageMobject.from_path(positive_path)
negative = GrayscaleImageMobject.from_path(negative_path)
else:
positive = ImageMobject(positive_path)
negative = ImageMobject(negative_path)
# Make the layer
query_layer = cls(positive, negative, **kwargs)
return query_layer
def make_assets(self):
"""
Constructs the assets needed for a query layer
"""
# Handle positive
positive_group = LabeledColorImage(
self.positive,
color=BLUE,
label="Positive",
font_size=self.font_size,
stroke_width=self.stroke_width,
)
# Handle negative
negative_group = LabeledColorImage(
self.negative,
color=RED,
label="Negative",
font_size=self.font_size,
stroke_width=self.stroke_width,
)
# Distribute the groups uniformly vertically
assets = Group(positive_group, negative_group)
assets.arrange(DOWN, buff=self.spacing)
return assets
@override_animation(Create)
def _create_override(self):
# TODO make Create animation that is custom
return FadeIn(self.assets)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass for query"""
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_math_operation.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.math_operation_layer import MathOperationLayer
from manim_ml.utils.mobjects.connections import NetworkConnection
class FeedForwardToMathOperation(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = MathOperationLayer
def __init__(
self,
input_layer,
output_layer,
active_color=ORANGE,
**kwargs
):
self.active_color = active_color
super().__init__(input_layer, output_layer, **kwargs)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
# Draw an arrow from the output of the feed forward layer to the
# input of the math operation layer
self.connection = NetworkConnection(
self.input_layer,
self.output_layer,
arc_direction="straight",
buffer=0.05
)
self.add(self.connection)
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
# Make flashing pass animation on arrow
passing_flash = ShowPassingFlash(
self.connection.copy().set_color(self.active_color)
)
return passing_flash
|
ManimML_helblazer811/manim_ml/neural_network/layers/image_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class ImageToFeedForward(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = ImageLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = output_layer
self.image_layer = input_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
dots = []
image_mobject = self.image_layer.image_mobject
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
image_location, radius=self.dot_radius, color=self.animation_dot_color
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
animations.append(per_node_succession)
dots.append(new_dot)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/vector.py | from manim import *
import random
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class VectorLayer(VGroupNeuralNetworkLayer):
"""Shows a vector"""
def __init__(self, num_values, value_func=lambda: random.uniform(0, 1), **kwargs):
super().__init__(**kwargs)
self.num_values = num_values
self.value_func = value_func
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
super().construct_layer(input_layer, output_layer, **kwargs)
# Make the vector
self.vector_label = self.make_vector()
self.add(self.vector_label)
def make_vector(self):
"""Makes the vector"""
if False:
# TODO install Latex
values = np.array([self.value_func() for i in range(self.num_values)])
values = values[None, :].T
vector = Matrix(values)
vector_label = Text(f"[{self.value_func():.2f}]")
vector_label.scale(0.3)
return vector_label
def make_forward_pass_animation(self, layer_args={}, **kwargs):
return AnimationGroup()
@override_animation(Create)
def _create_override(self):
"""Create animation"""
return Write(self.vector_label)
|
ManimML_helblazer811/manim_ml/neural_network/layers/__init__.py | from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.convolutional_2d_to_max_pooling_2d import (
Convolutional2DToMaxPooling2D,
)
from manim_ml.neural_network.layers.image_to_convolutional_2d import (
ImageToConvolutional2DLayer,
)
from manim_ml.neural_network.layers.max_pooling_2d_to_convolutional_2d import (
MaxPooling2DToConvolutional2D,
)
from manim_ml.neural_network.layers.max_pooling_2d_to_feed_forward import (
MaxPooling2DToFeedForward,
)
from .convolutional_2d_to_convolutional_2d import Convolutional2DToConvolutional2D
from .convolutional_2d import Convolutional2DLayer
from .feed_forward_to_vector import FeedForwardToVector
from .paired_query_to_feed_forward import PairedQueryToFeedForward
from .embedding_to_feed_forward import EmbeddingToFeedForward
from .embedding import EmbeddingLayer
from .feed_forward_to_embedding import FeedForwardToEmbedding
from .feed_forward_to_feed_forward import FeedForwardToFeedForward
from .feed_forward_to_image import FeedForwardToImage
from .feed_forward import FeedForwardLayer
from .image_to_feed_forward import ImageToFeedForward
from .image import ImageLayer
from .parent_layers import ConnectiveLayer, NeuralNetworkLayer
from .triplet import TripletLayer
from .triplet_to_feed_forward import TripletToFeedForward
from .paired_query import PairedQueryLayer
from .paired_query_to_feed_forward import PairedQueryToFeedForward
from .max_pooling_2d import MaxPooling2DLayer
from .feed_forward_to_math_operation import FeedForwardToMathOperation
connective_layers_list = (
EmbeddingToFeedForward,
FeedForwardToEmbedding,
FeedForwardToFeedForward,
FeedForwardToImage,
ImageToFeedForward,
PairedQueryToFeedForward,
TripletToFeedForward,
PairedQueryToFeedForward,
FeedForwardToVector,
Convolutional2DToConvolutional2D,
ImageToConvolutional2DLayer,
Convolutional2DToFeedForward,
Convolutional2DToMaxPooling2D,
MaxPooling2DToConvolutional2D,
MaxPooling2DToFeedForward,
FeedForwardToMathOperation
)
|
ManimML_helblazer811/manim_ml/neural_network/layers/util.py | import warnings
from manim import *
from manim_ml.neural_network.layers.parent_layers import BlankConnective, ThreeDLayer
from manim_ml.neural_network.layers import connective_layers_list
def get_connective_layer(input_layer, output_layer):
"""
Deduces the relevant connective layer
"""
connective_layer_class = None
for candidate_class in connective_layers_list:
input_class = candidate_class.input_class
output_class = candidate_class.output_class
if isinstance(input_layer, input_class) and isinstance(
output_layer, output_class
):
connective_layer_class = candidate_class
break
if connective_layer_class is None:
connective_layer_class = BlankConnective
warnings.warn(
f"Unrecognized input/output class pair: {input_layer} and {output_layer}"
)
# Make the instance now
connective_layer = connective_layer_class(input_layer, output_layer)
return connective_layer
|
ManimML_helblazer811/manim_ml/neural_network/layers/max_pooling_2d_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
class MaxPooling2DToFeedForward(Convolutional2DToFeedForward):
"""Feed Forward to Embedding Layer"""
input_class = MaxPooling2DLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer: MaxPooling2DLayer,
output_layer: FeedForwardLayer,
passing_flash_color=ORANGE,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_image.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class FeedForwardToImage(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = ImageLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = input_layer
self.image_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
image_mobject = self.image_layer.image_mobject
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
node.get_center(),
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(image_location),
)
animations.append(per_node_succession)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward.py | from manim import *
from manim_ml.neural_network.activation_functions import get_activation_function_by_name
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
import manim_ml
class FeedForwardLayer(VGroupNeuralNetworkLayer):
"""Handles rendering a layer for a neural network"""
def __init__(
self,
num_nodes,
layer_buffer=SMALL_BUFF / 2,
node_radius=0.08,
node_color=manim_ml.config.color_scheme.primary_color,
node_outline_color=manim_ml.config.color_scheme.secondary_color,
rectangle_color=manim_ml.config.color_scheme.secondary_color,
node_spacing=0.3,
rectangle_fill_color=manim_ml.config.color_scheme.background_color,
node_stroke_width=2.0,
rectangle_stroke_width=2.0,
animation_dot_color=manim_ml.config.color_scheme.active_color,
activation_function=None,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.num_nodes = num_nodes
self.layer_buffer = layer_buffer
self.node_radius = node_radius
self.node_color = node_color
self.node_stroke_width = node_stroke_width
self.node_outline_color = node_outline_color
self.rectangle_stroke_width = rectangle_stroke_width
self.rectangle_color = rectangle_color
self.node_spacing = node_spacing
self.rectangle_fill_color = rectangle_fill_color
self.animation_dot_color = animation_dot_color
self.activation_function = activation_function
self.node_group = VGroup()
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Creates the neural network layer"""
# Add Nodes
for node_number in range(self.num_nodes):
node_object = Circle(
radius=self.node_radius,
color=self.node_color,
stroke_width=self.node_stroke_width,
)
self.node_group.add(node_object)
# Space the nodes
# Assumes Vertical orientation
for node_index, node_object in enumerate(self.node_group):
location = node_index * self.node_spacing
node_object.move_to([0, location, 0])
# Create Surrounding Rectangle
self.surrounding_rectangle = SurroundingRectangle(
self.node_group,
color=self.rectangle_color,
fill_color=self.rectangle_fill_color,
fill_opacity=1.0,
buff=self.layer_buffer,
stroke_width=self.rectangle_stroke_width,
)
self.surrounding_rectangle.set_z_index(1)
# Add the objects to the class
self.add(self.surrounding_rectangle, self.node_group)
self.construct_activation_function()
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_activation_function(self):
"""Construct the activation function"""
# Add the activation function
if not self.activation_function is None:
# Check if it is a string
if isinstance(self.activation_function, str):
activation_function = get_activation_function_by_name(
self.activation_function
)()
else:
assert isinstance(self.activation_function, ActivationFunction)
activation_function = self.activation_function
# Plot the function above the rest of the layer
self.activation_function = activation_function
self.add(self.activation_function)
def make_dropout_forward_pass_animation(self, layer_args, **kwargs):
"""Makes a forward pass animation with dropout"""
# Make sure proper dropout information was passed
assert "dropout_node_indices" in layer_args
dropout_node_indices = layer_args["dropout_node_indices"]
# Only highlight nodes that were note dropped out
nodes_to_highlight = []
for index, node in enumerate(self.node_group):
if not index in dropout_node_indices:
nodes_to_highlight.append(node)
nodes_to_highlight = VGroup(*nodes_to_highlight)
# Make highlight animation
succession = Succession(
ApplyMethod(
nodes_to_highlight.set_color, self.animation_dot_color, run_time=0.25
),
Wait(1.0),
ApplyMethod(nodes_to_highlight.set_color, self.node_color, run_time=0.25),
)
return succession
def make_forward_pass_animation(self, layer_args={}, **kwargs):
# Check if dropout is a thing
if "dropout_node_indices" in layer_args:
# Drop out certain nodes
return self.make_dropout_forward_pass_animation(
layer_args=layer_args, **kwargs
)
else:
# Make highlight animation
succession = Succession(
ApplyMethod(
self.node_group.set_color, self.animation_dot_color, run_time=0.25
),
Wait(1.0),
ApplyMethod(self.node_group.set_color, self.node_color, run_time=0.25),
)
if not self.activation_function is None:
animation_group = AnimationGroup(
succession,
self.activation_function.make_evaluate_animation(),
lag_ratio=0.0,
)
return animation_group
else:
return succession
@override_animation(Create)
def _create_override(self, **kwargs):
animations = []
animations.append(Create(self.surrounding_rectangle))
for node in self.node_group:
animations.append(Create(node))
animation_group = AnimationGroup(*animations, lag_ratio=0.0)
return animation_group
def get_height(self):
return self.surrounding_rectangle.get_height()
def get_center(self):
return self.surrounding_rectangle.get_center()
def get_left(self):
return self.surrounding_rectangle.get_left()
def get_right(self):
return self.surrounding_rectangle.get_right()
def move_to(self, mobject_or_point):
"""Moves the center of the layer to the given mobject or point"""
layer_center = self.surrounding_rectangle.get_center()
if isinstance(mobject_or_point, Mobject):
target_center = mobject_or_point.get_center()
else:
target_center = mobject_or_point
self.shift(target_center - layer_center) |
ManimML_helblazer811/manim_ml/neural_network/layers/parent_layers.py | from manim import *
from abc import ABC, abstractmethod
class NeuralNetworkLayer(ABC, Group):
"""Abstract Neural Network Layer class"""
def __init__(self, text=None, *args, **kwargs):
super(Group, self).__init__()
self.title_text = kwargs["title"] if "title" in kwargs else " "
if "title" in kwargs:
self.title = Text(self.title_text, font_size=DEFAULT_FONT_SIZE // 3).scale(0.6)
self.title.next_to(self, UP, 1.2)
else:
self.title = Group()
# self.add(self.title)
@abstractmethod
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
"""Constructs the layer at network construction time
Parameters
----------
input_layer : NeuralNetworkLayer
preceding layer
output_layer : NeuralNetworkLayer
following layer
"""
if "debug_mode" in kwargs and kwargs["debug_mode"]:
self.add(SurroundingRectangle(self))
@abstractmethod
def make_forward_pass_animation(self, layer_args={}, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return Succession()
def __repr__(self):
return f"{type(self).__name__}"
class VGroupNeuralNetworkLayer(NeuralNetworkLayer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# self.camera = camera
@abstractmethod
def make_forward_pass_animation(self, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return super()._create_override()
class ThreeDLayer(ABC):
"""Abstract class for 3D layers"""
pass
# Angle of ThreeD layers is static context
class ConnectiveLayer(VGroupNeuralNetworkLayer):
"""Forward pass animation for a given pair of layers"""
@abstractmethod
def __init__(self, input_layer, output_layer, **kwargs):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.input_layer = input_layer
self.output_layer = output_layer
# Handle input and output class
# assert isinstance(input_layer, self.input_class), f"{input_layer}, {self.input_class}"
# assert isinstance(output_layer, self.output_class), f"{output_layer}, {self.output_class}"
@abstractmethod
def make_forward_pass_animation(self, run_time=2.0, layer_args={}, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return super()._create_override()
def __repr__(self):
return (
f"{self.__class__.__name__}("
+ f"input_layer={self.input_layer.__class__.__name__},"
+ f"output_layer={self.output_layer.__class__.__name__},"
+ ")"
)
class BlankConnective(ConnectiveLayer):
"""Connective layer to be used when the given pair of layers is undefined"""
def __init__(self, input_layer, output_layer, **kwargs):
super().__init__(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, run_time=1.5, layer_args={}, **kwargs):
return AnimationGroup(run_time=run_time)
@override_animation(Create)
def _create_override(self):
return super()._create_override()
|
ManimML_helblazer811/manim_ml/neural_network/layers/triplet_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.triplet import TripletLayer
class TripletToFeedForward(ConnectiveLayer):
"""TripletLayer to FeedForward layer"""
input_class = TripletLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.02,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = output_layer
self.triplet_layer = input_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
# Loop through each image
images = [
self.triplet_layer.anchor,
self.triplet_layer.positive,
self.triplet_layer.negative,
]
for image_mobject in images:
image_animations = []
dots = []
# Move dots from each image to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
image_location,
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
image_animations.append(per_node_succession)
dots.append(new_dot)
animations.append(AnimationGroup(*image_animations))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/image.py | from manim import *
import numpy as np
from PIL import Image
from manim_ml.utils.mobjects.image import GrayscaleImageMobject
from manim_ml.neural_network.layers.parent_layers import NeuralNetworkLayer
class ImageLayer(NeuralNetworkLayer):
"""Single Image Layer for Neural Network"""
def __init__(
self,
numpy_image,
height=1.5,
show_image_on_create=True,
**kwargs
):
super().__init__(**kwargs)
self.image_height = height
self.numpy_image = numpy_image
self.show_image_on_create = show_image_on_create
def construct_layer(self, input_layer, output_layer, **kwargs):
"""Construct layer method
Parameters
----------
input_layer :
Input layer
output_layer :
Output layer
"""
if len(np.shape(self.numpy_image)) == 2:
# Assumed Grayscale
self.num_channels = 1
self.image_mobject = GrayscaleImageMobject(
self.numpy_image,
height=self.image_height
)
elif len(np.shape(self.numpy_image)) == 3:
# Assumed RGB
self.num_channels = 3
self.image_mobject = ImageMobject(self.numpy_image).scale_to_fit_height(
self.image_height
)
self.add(self.image_mobject)
super().construct_layer(input_layer, output_layer, **kwargs)
@classmethod
def from_path(cls, image_path, grayscale=True, **kwargs):
"""Creates a query using the paths"""
# Load images from path
image = Image.open(image_path)
numpy_image = np.asarray(image)
# Make the layer
image_layer = cls(numpy_image, **kwargs)
return image_layer
@override_animation(Create)
def _create_override(self, **kwargs):
debug_mode = False
if debug_mode:
return FadeIn(SurroundingRectangle(self.image_mobject))
if self.show_image_on_create:
return FadeIn(self.image_mobject)
else:
return AnimationGroup()
def make_forward_pass_animation(self, layer_args={}, **kwargs):
return AnimationGroup()
def get_right(self):
"""Override get right"""
return self.image_mobject.get_right()
def scale(self, scale_factor, **kwargs):
"""Scales the image mobject"""
self.image_mobject.scale(scale_factor)
@property
def width(self):
return self.image_mobject.width
@property
def height(self):
return self.image_mobject.height
|
ManimML_helblazer811/manim_ml/neural_network/layers/convolutional_2d_to_max_pooling_2d.py | import random
from manim import *
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
get_rotated_shift_vectors,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
import manim_ml
class Uncreate(Create):
def __init__(
self,
mobject,
reverse_rate_function: bool = True,
introducer: bool = True,
remover: bool = True,
**kwargs,
) -> None:
super().__init__(
mobject,
reverse_rate_function=reverse_rate_function,
introducer=introducer,
remover=remover,
**kwargs,
)
class Convolutional2DToMaxPooling2D(ConnectiveLayer, ThreeDLayer):
"""Feed Forward to Embedding Layer"""
input_class = Convolutional2DLayer
output_class = MaxPooling2DLayer
def __init__(
self,
input_layer: Convolutional2DLayer,
output_layer: MaxPooling2DLayer,
active_color=ORANGE,
**kwargs,
):
super().__init__(input_layer, output_layer, **kwargs)
self.active_color = active_color
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, run_time=1.5, **kwargs):
"""Forward pass animation from conv2d to max pooling"""
cell_width = self.input_layer.cell_width
feature_map_size = self.input_layer.feature_map_size
kernel_size = self.output_layer.kernel_size
feature_maps = self.input_layer.feature_maps
grid_stroke_width = 1.0
# Make all of the kernel gridded rectangles
create_gridded_rectangle_animations = []
create_and_remove_cell_animations = []
transform_gridded_rectangle_animations = []
remove_gridded_rectangle_animations = []
for feature_map_index, feature_map in enumerate(feature_maps):
# 1. Draw gridded rectangle with kernel_size x kernel_size
# box regions over the input feature maps.
gridded_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width * feature_map_size[1],
width=cell_width * feature_map_size[0],
grid_xstep=cell_width * kernel_size,
grid_ystep=cell_width * kernel_size,
grid_stroke_width=grid_stroke_width,
grid_stroke_color=self.active_color,
show_grid_lines=True,
)
gridded_rectangle.set_z_index(10)
# 2. Randomly highlight one of the cells in the kernel.
highlighted_cells = []
num_cells_in_kernel = kernel_size * kernel_size
num_x_kernels = int(feature_map_size[0] / kernel_size)
num_y_kernels = int(feature_map_size[1] / kernel_size)
for kernel_x in range(0, num_x_kernels):
for kernel_y in range(0, num_y_kernels):
# Choose a random cell index
cell_index = random.randint(0, num_cells_in_kernel - 1)
# Make a rectangle in that cell
cell_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width,
width=cell_width,
fill_opacity=0.7,
stroke_width=0.0,
z_index=10,
)
# Move to the correct location
kernel_shift_vector = [
kernel_size * cell_width * kernel_x,
-1 * kernel_size * cell_width * kernel_y,
0,
]
cell_shift_vector = [
(cell_index % kernel_size) * cell_width,
-1 * int(cell_index / kernel_size) * cell_width,
0,
]
cell_rectangle.next_to(
gridded_rectangle.get_corners_dict()["top_left"],
submobject_to_align=cell_rectangle.get_corners_dict()[
"top_left"
],
buff=0.0,
)
cell_rectangle.shift(kernel_shift_vector)
cell_rectangle.shift(cell_shift_vector)
highlighted_cells.append(cell_rectangle)
# Rotate the gridded rectangles so they match the angle
# of the conv maps
gridded_rectangle_group = VGroup(gridded_rectangle, *highlighted_cells)
gridded_rectangle_group.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=gridded_rectangle.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis,
)
gridded_rectangle_group.next_to(
feature_map.get_corners_dict()["top_left"],
submobject_to_align=gridded_rectangle.get_corners_dict()["top_left"],
buff=0.0,
)
# 3. Make a create gridded rectangle
create_rectangle = Create(
gridded_rectangle,
)
create_gridded_rectangle_animations.append(create_rectangle)
# 4. Create and fade out highlighted cells
create_group = AnimationGroup(
*[Create(highlighted_cell) for highlighted_cell in highlighted_cells],
lag_ratio=1.0,
)
uncreate_group = AnimationGroup(
*[Uncreate(highlighted_cell) for highlighted_cell in highlighted_cells],
lag_ratio=0.0,
)
create_and_remove_cell_animation = Succession(
create_group, Wait(1.0), uncreate_group
)
create_and_remove_cell_animations.append(create_and_remove_cell_animation)
# 5. Move and resize the gridded rectangle to the output
# feature maps.
output_gridded_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width * feature_map_size[1] / 2,
width=cell_width * feature_map_size[0] / 2,
grid_xstep=cell_width,
grid_ystep=cell_width,
grid_stroke_width=grid_stroke_width,
grid_stroke_color=self.active_color,
show_grid_lines=True,
)
output_gridded_rectangle.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=output_gridded_rectangle.get_center(),
axis=manim_ml.three_d_config.rotation_axis,
)
output_gridded_rectangle.move_to(
self.output_layer.feature_maps[feature_map_index].copy()
)
transform_rectangle = ReplacementTransform(
gridded_rectangle,
output_gridded_rectangle,
introducer=True,
)
transform_gridded_rectangle_animations.append(
transform_rectangle,
)
"""
Succession(
Uncreate(gridded_rectangle),
transform_rectangle,
lag_ratio=1.0
)
"""
# 6. Make the gridded feature map(s) disappear.
remove_gridded_rectangle_animations.append(
Uncreate(gridded_rectangle_group)
)
create_gridded_rectangle_animation = AnimationGroup(
*create_gridded_rectangle_animations
)
create_and_remove_cell_animation = AnimationGroup(
*create_and_remove_cell_animations
)
transform_gridded_rectangle_animation = AnimationGroup(
*transform_gridded_rectangle_animations
)
remove_gridded_rectangle_animation = AnimationGroup(
*remove_gridded_rectangle_animations
)
return Succession(
create_gridded_rectangle_animation,
Wait(1),
create_and_remove_cell_animation,
transform_gridded_rectangle_animation,
Wait(1),
remove_gridded_rectangle_animation,
lag_ratio=1.0,
)
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_vector.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.vector import VectorLayer
class FeedForwardToVector(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = VectorLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = input_layer
self.vector_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
destination = self.vector_layer.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
node.get_center(),
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(destination),
)
animations.append(per_node_succession)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_feed_forward.py | from typing import List, Union
import numpy as np
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
import manim_ml
class FeedForwardToFeedForward(ConnectiveLayer):
"""Layer for connecting FeedForward layer to FeedForwardLayer"""
input_class = FeedForwardLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
passing_flash=True,
dot_radius=0.05,
animation_dot_color=manim_ml.config.color_scheme.active_color,
edge_color=manim_ml.config.color_scheme.secondary_color,
edge_width=1.5,
camera=None,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.passing_flash = passing_flash
self.edge_color = edge_color
self.dot_radius = dot_radius
self.animation_dot_color = animation_dot_color
self.edge_width = edge_width
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
self.edges = self.construct_edges()
self.add(self.edges)
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_edges(self):
# Go through each node in the two layers and make a connecting line
edges = []
for node_i in self.input_layer.node_group:
for node_j in self.output_layer.node_group:
line = Line(
node_i.get_center(),
node_j.get_center(),
color=self.edge_color,
stroke_width=self.edge_width,
)
edges.append(line)
edges = VGroup(*edges)
return edges
@override_animation(FadeOut)
def _fadeout_animation(self):
animations = []
for edge in self.edges:
animations.append(FadeOut(edge))
animation_group = AnimationGroup(*animations)
return animation_group
def make_forward_pass_animation(
self, layer_args={}, run_time=1, feed_forward_dropout=0.0, **kwargs
):
"""Animation for passing information from one FeedForwardLayer to the next"""
path_animations = []
dots = []
for edge_index, edge in enumerate(self.edges):
if (
not "edge_indices_to_dropout" in layer_args
or not edge_index in layer_args["edge_indices_to_dropout"]
):
dot = Dot(
color=self.animation_dot_color,
fill_opacity=1.0,
radius=self.dot_radius,
)
# Add to dots group
dots.append(dot)
# Make the animation
if self.passing_flash:
copy_edge = edge.copy()
anim = ShowPassingFlash(
copy_edge.set_color(self.animation_dot_color), time_width=0.3
)
else:
anim = MoveAlongPath(
dot, edge, run_time=run_time, rate_function=sigmoid
)
path_animations.append(anim)
if not self.passing_flash:
dots = VGroup(*dots)
self.add(dots)
path_animations = AnimationGroup(*path_animations)
return path_animations
def modify_edge_colors(self, colors=None, magnitudes=None, color_scheme="inferno"):
"""Changes the colors of edges"""
# TODO implement
pass
def modify_edge_stroke_widths(self, widths):
"""Changes the widths of the edges"""
assert len(widths) > 0
# Note: 1d-arrays are assumed to be in row major order
widths = np.array(widths)
widths = np.flatten(widths)
# Check thickness size
assert np.shape(widths)[0] == len(self.edges)
# Make animation
animations = []
for index, edge in enumerate(self.edges):
width = widths[index]
change_width = edge.animate.set_stroke_width(width)
animations.append(change_width)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
animations = []
for edge in self.edges:
animations.append(Create(edge))
animation_group = AnimationGroup(*animations, lag_ratio=0.0)
return animation_group
|
ManimML_helblazer811/manim_ml/neural_network/layers/image_to_convolutional_2d.py | import numpy as np
from manim import *
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import (
ThreeDLayer,
VGroupNeuralNetworkLayer,
)
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
import manim_ml
class ImageToConvolutional2DLayer(VGroupNeuralNetworkLayer, ThreeDLayer):
"""Handles rendering a convolutional layer for a nn"""
input_class = ImageLayer
output_class = Convolutional2DLayer
def __init__(
self, input_layer: ImageLayer, output_layer: Convolutional2DLayer, **kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.input_layer = input_layer
self.output_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, run_time=5, layer_args={}, **kwargs):
"""Maps image to convolutional layer"""
# Transform the image from the input layer to the
num_image_channels = self.input_layer.num_channels
if num_image_channels == 1 or num_image_channels == 3: # TODO fix this later
return self.grayscale_image_forward_pass_animation()
elif num_image_channels == 3:
return self.rbg_image_forward_pass_animation()
else:
raise Exception(
f"Unrecognized number of image channels: {num_image_channels}"
)
def rbg_image_forward_pass_animation(self):
"""Handles forward pass animation for 3 channel image"""
image_mobject = self.input_layer.image_mobject
# TODO get each color channel and turn it into an image
# TODO create image mobjects for each channel and transform
# it to the feature maps of the output_layer
raise NotImplementedError()
def grayscale_image_forward_pass_animation(self):
"""Handles forward pass animation for 1 channel image"""
animations = []
image_mobject = self.input_layer.image_mobject
target_feature_map = self.output_layer.feature_maps[0]
# Map image mobject to feature map
# Make rotation of image
rotation = ApplyMethod(
image_mobject.rotate,
manim_ml.config.three_d_config.rotation_angle,
manim_ml.config.three_d_config.rotation_axis,
image_mobject.get_center(),
run_time=0.5,
)
"""
x_rotation = ApplyMethod(
image_mobject.rotate,
ThreeDLayer.three_d_x_rotation,
[1, 0, 0],
image_mobject.get_center(),
run_time=0.5
)
y_rotation = ApplyMethod(
image_mobject.rotate,
ThreeDLayer.three_d_y_rotation,
[0, 1, 0],
image_mobject.get_center(),
run_time=0.5
)
"""
# Set opacity
set_opacity = ApplyMethod(image_mobject.set_opacity, 0.2, run_time=0.5)
# Scale the max of width or height to the
# width of the feature_map
def scale_image_func(image_mobject):
max_width_height = max(image_mobject.width, image_mobject.height)
scale_factor = target_feature_map.untransformed_width / max_width_height
image_mobject.scale(scale_factor)
return image_mobject
scale_image = ApplyFunction(scale_image_func, image_mobject)
# scale_image = ApplyMethod(image_mobject.scale, scale_factor, run_time=0.5)
# Move the image
move_image = ApplyMethod(image_mobject.move_to, target_feature_map)
# Compose the animations
animation = Succession(
rotation,
scale_image,
set_opacity,
move_image,
)
return animation
def scale(self, scale_factor, **kwargs):
super().scale(scale_factor, **kwargs)
@override_animation(Create)
def _create_override(self, **kwargs):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/max_pooling_2d_to_convolutional_2d.py | import numpy as np
from manim import *
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
Convolutional2DToConvolutional2D,
Filters,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim.utils.space_ops import rotation_matrix
class MaxPooling2DToConvolutional2D(Convolutional2DToConvolutional2D):
"""Feed Forward to Embedding Layer"""
input_class = MaxPooling2DLayer
output_class = Convolutional2DLayer
def __init__(
self,
input_layer: MaxPooling2DLayer,
output_layer: Convolutional2DLayer,
passing_flash_color=ORANGE,
cell_width=1.0,
stroke_width=2.0,
show_grid_lines=False,
**kwargs
):
input_layer.num_feature_maps = output_layer.num_feature_maps
super().__init__(input_layer, output_layer, **kwargs)
self.passing_flash_color = passing_flash_color
self.cell_width = cell_width
self.stroke_width = stroke_width
self.show_grid_lines = show_grid_lines
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Constructs the MaxPooling to Convolution3D layer
Parameters
----------
input_layer : NeuralNetworkLayer
input layer
output_layer : NeuralNetworkLayer
output layer
"""
super().construct_layer(input_layer, output_layer, **kwargs)
|
ManimML_helblazer811/manim_ml/neural_network/layers/paired_query_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.paired_query import PairedQueryLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class PairedQueryToFeedForward(ConnectiveLayer):
"""PairedQuery layer to FeedForward layer"""
input_class = PairedQueryLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.02,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.paired_query_layer = input_layer
self.feed_forward_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
# Loop through each image
images = [self.paired_query_layer.positive, self.paired_query_layer.negative]
for image_mobject in images:
image_animations = []
dots = []
# Move dots from each image to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
image_location,
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
image_animations.append(per_node_succession)
dots.append(new_dot)
animations.append(AnimationGroup(*image_animations))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/triplet.py | from manim import *
from manim_ml.neural_network.layers import NeuralNetworkLayer
from manim_ml.utils.mobjects.image import GrayscaleImageMobject, LabeledColorImage
import numpy as np
class TripletLayer(NeuralNetworkLayer):
"""Shows triplet images"""
def __init__(
self,
anchor,
positive,
negative,
stroke_width=5,
font_size=22,
buff=0.2,
**kwargs
):
super().__init__(**kwargs)
self.anchor = anchor
self.positive = positive
self.negative = negative
self.buff = buff
self.stroke_width = stroke_width
self.font_size = font_size
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
# Make the assets
self.assets = self.make_assets()
self.add(self.assets)
super().construct_layer(input_layer, output_layer, **kwargs)
@classmethod
def from_paths(
cls,
anchor_path,
positive_path,
negative_path,
grayscale=True,
font_size=22,
buff=0.2,
):
"""Creates a triplet using the anchor paths"""
# Load images from path
if grayscale:
anchor = GrayscaleImageMobject.from_path(anchor_path)
positive = GrayscaleImageMobject.from_path(positive_path)
negative = GrayscaleImageMobject.from_path(negative_path)
else:
anchor = ImageMobject(anchor_path)
positive = ImageMobject(positive_path)
negative = ImageMobject(negative_path)
# Make the layer
triplet_layer = cls(anchor, positive, negative, font_size=font_size, buff=buff)
return triplet_layer
def make_assets(self):
"""
Constructs the assets needed for a triplet layer
"""
# Handle anchor
anchor_group = LabeledColorImage(
self.anchor,
color=WHITE,
label="Anchor",
stroke_width=self.stroke_width,
font_size=self.font_size,
buff=self.buff,
)
# Handle positive
positive_group = LabeledColorImage(
self.positive,
color=GREEN,
label="Positive",
stroke_width=self.stroke_width,
font_size=self.font_size,
buff=self.buff,
)
# Handle negative
negative_group = LabeledColorImage(
self.negative,
color=RED,
label="Negative",
stroke_width=self.stroke_width,
font_size=self.font_size,
buff=self.buff,
)
# Distribute the groups uniformly vertically
assets = Group(anchor_group, positive_group, negative_group)
assets.arrange(DOWN, buff=1.5)
return assets
@override_animation(Create)
def _create_override(self):
# TODO make Create animation that is custom
return FadeIn(self.assets)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass for triplet"""
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/convolutional_2d_to_convolutional_2d.py | import numpy as np
from manim import *
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
import manim_ml
from manim.utils.space_ops import rotation_matrix
def get_rotated_shift_vectors(input_layer, normalized=False):
"""Rotates the shift vectors"""
# Make base shift vectors
right_shift = np.array([input_layer.cell_width, 0, 0])
down_shift = np.array([0, -input_layer.cell_width, 0])
# Make rotation matrix
rot_mat = rotation_matrix(
manim_ml.config.three_d_config.rotation_angle,
manim_ml.config.three_d_config.rotation_axis)
# Rotate the vectors
right_shift = np.dot(right_shift, rot_mat.T)
down_shift = np.dot(down_shift, rot_mat.T)
# Normalize the vectors
if normalized:
right_shift = right_shift / np.linalg.norm(right_shift)
down_shift = down_shift / np.linalg.norm(down_shift)
return right_shift, down_shift
class Filters(VGroup):
"""Group for showing a collection of filters connecting two layers"""
def __init__(
self,
input_layer,
output_layer,
line_color=ORANGE,
cell_width=1.0,
stroke_width=2.0,
show_grid_lines=False,
output_feature_map_to_connect=None, # None means all at once
):
super().__init__()
self.input_layer = input_layer
self.output_layer = output_layer
self.line_color = line_color
self.cell_width = cell_width
self.stroke_width = stroke_width
self.show_grid_lines = show_grid_lines
self.output_feature_map_to_connect = output_feature_map_to_connect
# Make the filter
self.input_rectangles = self.make_input_feature_map_rectangles()
# self.input_rectangles.set_z_index(5)
# self.add(self.input_rectangles)
self.output_rectangles = self.make_output_feature_map_rectangles()
# self.output_rectangles.set_z_index(5)
# self.add(self.output_rectangles)
self.connective_lines = self.make_connective_lines()
# self.connective_lines.set_z_index(5)
# self.add(self.connective_lines)
def make_input_feature_map_rectangles(self):
rectangles = []
rectangle_width = (
self.output_layer.filter_size[0] * self.output_layer.cell_width
)
rectangle_height = (
self.output_layer.filter_size[1] * self.output_layer.cell_width
)
filter_color = self.output_layer.filter_color
for index, feature_map in enumerate(self.input_layer.feature_maps):
rectangle = GriddedRectangle(
width=rectangle_width,
height=rectangle_height,
fill_color=filter_color,
stroke_color=filter_color,
fill_opacity=0.2,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=filter_color,
show_grid_lines=self.show_grid_lines,
)
# normal_vector = rectangle.get_normal_vector()
rectangle.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=rectangle.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis,
)
# Move the rectangle to the corner of the feature map
rectangle.next_to(
feature_map.get_corners_dict()["top_left"],
submobject_to_align=rectangle.get_corners_dict()["top_left"],
buff=0.0
# aligned_edge=feature_map.get_corners_dict()["top_left"].get_center()
)
rectangle.set_z_index(5)
rectangles.append(rectangle)
feature_map_rectangles = VGroup(*rectangles)
return feature_map_rectangles
def make_output_feature_map_rectangles(self):
rectangles = []
rectangle_width = self.output_layer.cell_width
rectangle_height = self.output_layer.cell_width
filter_color = self.output_layer.filter_color
right_shift, down_shift = get_rotated_shift_vectors(self.input_layer)
left_shift = -1 * right_shift
for index, feature_map in enumerate(self.output_layer.feature_maps):
# Make sure current feature map is the right filter
if not self.output_feature_map_to_connect is None:
if index != self.output_feature_map_to_connect:
continue
# Make the rectangle
rectangle = GriddedRectangle(
width=rectangle_width,
height=rectangle_height,
fill_color=filter_color,
fill_opacity=0.2,
stroke_color=filter_color,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=filter_color,
show_grid_lines=self.show_grid_lines,
)
# Rotate the rectangle
rectangle.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=rectangle.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis,
)
# Move the rectangle to the corner location
rectangle.next_to(
feature_map.get_corners_dict()["top_left"],
submobject_to_align=rectangle.get_corners_dict()["top_left"],
buff=0.0
# aligned_edge=feature_map.get_corners_dict()["top_left"].get_center()
)
# Shift based on the amount of output layer padding
rectangle.shift(
self.output_layer.padding[0] * right_shift,
)
rectangle.shift(
self.output_layer.padding[1] * down_shift,
)
rectangles.append(rectangle)
feature_map_rectangles = VGroup(*rectangles)
return feature_map_rectangles
def make_connective_lines(self):
"""Lines connecting input filter with output node"""
corner_names = ["top_left", "bottom_left", "top_right", "bottom_right"]
def make_input_connective_lines():
"""Makes connective lines between the corners of the input filters"""
first_input_rectangle = self.input_rectangles[0]
last_input_rectangle = self.input_rectangles[-1]
# Get the corner dots for each rectangle
first_input_corners = first_input_rectangle.get_corners_dict()
last_input_corners = last_input_rectangle.get_corners_dict()
# Iterate through each corner and make the lines
lines = []
for corner_name in corner_names:
line = Line(
first_input_corners[corner_name].get_center(),
last_input_corners[corner_name].get_center(),
color=self.line_color,
stroke_width=self.stroke_width,
)
lines.append(line)
return VGroup(*lines)
def make_output_connective_lines():
"""Makes connective lines between the corners of the output filters"""
first_output_rectangle = self.output_rectangles[0]
last_output_rectangle = self.output_rectangles[-1]
# Get the corner dots for each rectangle
first_output_corners = first_output_rectangle.get_corners_dict()
last_output_corners = last_output_rectangle.get_corners_dict()
# Iterate through each corner and make the lines
lines = []
for corner_name in corner_names:
line = Line(
first_output_corners[corner_name].get_center(),
last_output_corners[corner_name].get_center(),
color=self.line_color,
stroke_width=self.stroke_width,
)
lines.append(line)
return VGroup(*lines)
def make_input_to_output_connective_lines():
"""Make connective lines between last input filter and first output filter"""
# Choose the correct feature map to link to
input_rectangle = self.input_rectangles[-1]
output_rectangle = self.output_rectangles[0]
# Get the corner dots for each rectangle
input_corners = input_rectangle.get_corners_dict()
output_corners = output_rectangle.get_corners_dict()
# Iterate through each corner and make the lines
lines = []
for corner_name in corner_names:
line = Line(
input_corners[corner_name].get_center(),
output_corners[corner_name].get_center(),
color=self.line_color,
stroke_width=self.stroke_width,
)
lines.append(line)
return VGroup(*lines)
input_lines = make_input_connective_lines()
output_lines = make_output_connective_lines()
input_output_lines = make_input_to_output_connective_lines()
connective_lines = VGroup(*input_lines, *output_lines, *input_output_lines)
return connective_lines
@override_animation(Create)
def _create_override(self, **kwargs):
"""
NOTE This create override animation
is a workaround to make sure that the filter
does not show up in the scene before the create animation.
Without this override the filters were shown at the beginning
of the neural network forward pass animation
instead of just when the filters were supposed to appear.
I think this is a bug with Succession in the core
Manim Community Library.
TODO Fix this
"""
def add_content(object):
object.add(self.input_rectangles)
object.add(self.connective_lines)
object.add(self.output_rectangles)
return object
return ApplyFunction(add_content, self)
return AnimationGroup(
Create(self.input_rectangles),
Create(self.connective_lines),
Create(self.output_rectangles),
lag_ratio=0.0,
)
def make_pulse_animation(self, shift_amount):
"""Make animation of the filter pulsing"""
passing_flash = ShowPassingFlash(
self.connective_lines.shift(shift_amount).set_stroke_width(
self.stroke_width * 1.5
),
time_width=0.2,
color=RED,
z_index=10,
)
return passing_flash
class Convolutional2DToConvolutional2D(ConnectiveLayer, ThreeDLayer):
"""Feed Forward to Embedding Layer"""
input_class = Convolutional2DLayer
output_class = Convolutional2DLayer
def __init__(
self,
input_layer: Convolutional2DLayer,
output_layer: Convolutional2DLayer,
color=ORANGE,
filter_opacity=0.3,
line_color=ORANGE,
active_color=ORANGE,
cell_width=0.2,
show_grid_lines=True,
highlight_color=ORANGE,
**kwargs,
):
super().__init__(
input_layer,
output_layer,
**kwargs,
)
self.color = color
self.filter_color = self.output_layer.filter_color
self.filter_size = self.output_layer.filter_size
self.feature_map_size = self.input_layer.feature_map_size
self.num_input_feature_maps = self.input_layer.num_feature_maps
self.num_output_feature_maps = self.output_layer.num_feature_maps
self.cell_width = self.output_layer.cell_width
self.stride = self.output_layer.stride
self.padding = self.input_layer.padding
self.filter_opacity = filter_opacity
self.cell_width = cell_width
self.line_color = line_color
self.active_color = active_color
self.show_grid_lines = show_grid_lines
self.highlight_color = highlight_color
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def animate_filters_all_at_once(self, filters):
"""Animates each of the filters all at once"""
animations = []
# Make filters
filters = Filters(
self.input_layer,
self.output_layer,
line_color=self.color,
cell_width=self.cell_width,
show_grid_lines=self.show_grid_lines,
output_feature_map_to_connect=None, # None means all at once
)
animations.append(Create(filters))
# Get the rotated shift vectors
right_shift, down_shift = get_rotated_shift_vectors(self.input_layer)
left_shift = -1 * right_shift
# Make the animation
num_y_moves = int(
(self.feature_map_size[1] - self.filter_size[1]) / self.stride
+ self.padding[1] * 2
)
num_x_moves = int(
(self.feature_map_size[0] - self.filter_size[0]) / self.stride
+ self.padding[0] * 2
)
for y_move in range(num_y_moves):
# Go right num_x_moves
for x_move in range(num_x_moves):
# Shift right
shift_animation = ApplyMethod(filters.shift, self.stride * right_shift)
# shift_animation = self.animate.shift(right_shift)
animations.append(shift_animation)
# Go back left num_x_moves and down one
shift_amount = (
self.stride * num_x_moves * left_shift + self.stride * down_shift
)
# Make the animation
shift_animation = ApplyMethod(filters.shift, shift_amount)
animations.append(shift_animation)
# Do last row move right
for x_move in range(num_x_moves):
# Shift right
shift_animation = ApplyMethod(filters.shift, self.stride * right_shift)
# shift_animation = self.animate.shift(right_shift)
animations.append(shift_animation)
# Remove the filters
animations.append(FadeOut(filters))
return Succession(*animations, lag_ratio=1.0)
def animate_filters_one_at_a_time(self, highlight_active_feature_map=True):
"""Animates each of the filters one at a time"""
animations = []
output_feature_maps = self.output_layer.feature_maps
for feature_map_index in range(len(output_feature_maps)):
# Make filters
filters = Filters(
self.input_layer,
self.output_layer,
line_color=self.color,
cell_width=self.cell_width,
show_grid_lines=self.show_grid_lines,
output_feature_map_to_connect=feature_map_index, # None means all at once
)
animations.append(Create(filters))
# Highlight the feature map
if highlight_active_feature_map:
feature_map = output_feature_maps[feature_map_index]
original_feature_map_color = feature_map.color
# Change the output feature map colors
change_color_animations = []
change_color_animations.append(
ApplyMethod(feature_map.set_color, self.highlight_color)
)
# Change the input feature map colors
input_feature_maps = self.input_layer.feature_maps
for input_feature_map in input_feature_maps:
change_color_animations.append(
ApplyMethod(input_feature_map.set_color, self.highlight_color)
)
# Combine the animations
animations.append(
AnimationGroup(*change_color_animations, lag_ratio=0.0)
)
# Get the rotated shift vectors
right_shift, down_shift = get_rotated_shift_vectors(self.input_layer)
left_shift = -1 * right_shift
# Make the animation
num_y_moves = int(
(self.feature_map_size[1] - self.filter_size[1]) / self.stride
+ self.padding[1] * 2
)
num_x_moves = int(
(self.feature_map_size[0] - self.filter_size[0]) / self.stride
+ self.padding[0] * 2
)
for y_move in range(num_y_moves):
# Go right num_x_moves
for x_move in range(num_x_moves):
# Make a pulse animation for the corners
"""
pulse_animation = filters.make_pulse_animation(
shift_amount=shift_amount
)
animations.append(pulse_animation)
"""
z_index_animation = ApplyMethod(filters.set_z_index, 5)
animations.append(z_index_animation)
# Shift right
shift_animation = ApplyMethod(
filters.shift, self.stride * right_shift
)
# shift_animation = self.animate.shift(right_shift)
animations.append(shift_animation)
# Go back left num_x_moves and down one
shift_amount = (
self.stride * num_x_moves * left_shift + self.stride * down_shift
)
# Make the animation
shift_animation = ApplyMethod(filters.shift, shift_amount)
animations.append(shift_animation)
# Do last row move right
for x_move in range(num_x_moves):
# Shift right
shift_animation = ApplyMethod(filters.shift, self.stride * right_shift)
# shift_animation = self.animate.shift(right_shift)
animations.append(shift_animation)
# Remove the filters
animations.append(FadeOut(filters))
# Un-highlight the feature map
if highlight_active_feature_map:
feature_map = output_feature_maps[feature_map_index]
# Change the output feature map colors
change_color_animations = []
change_color_animations.append(
ApplyMethod(feature_map.set_color, original_feature_map_color)
)
# Change the input feature map colors
input_feature_maps = self.input_layer.feature_maps
for input_feature_map in input_feature_maps:
change_color_animations.append(
ApplyMethod(
input_feature_map.set_color, original_feature_map_color
)
)
# Combine the animations
animations.append(
AnimationGroup(*change_color_animations, lag_ratio=0.0)
)
return Succession(*animations, lag_ratio=1.0)
def make_forward_pass_animation(
self,
layer_args={},
all_filters_at_once=False,
highlight_active_feature_map=True,
run_time=10.5,
**kwargs,
):
"""Forward pass animation from conv2d to conv2d"""
print(f"All filters at once: {all_filters_at_once}")
# Make filter shifting animations
if all_filters_at_once:
return self.animate_filters_all_at_once()
else:
return self.animate_filters_one_at_a_time(
highlight_active_feature_map=highlight_active_feature_map
)
def scale(self, scale_factor, **kwargs):
self.cell_width *= scale_factor
super().scale(scale_factor, **kwargs)
@override_animation(Create)
def _create_override(self, **kwargs):
return Succession()
|
ManimML_helblazer811/manim_ml/neural_network/layers/convolutional_2d_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
class Convolutional2DToFeedForward(ConnectiveLayer, ThreeDLayer):
"""Feed Forward to Embedding Layer"""
input_class = Convolutional2DLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer: Convolutional2DLayer,
output_layer: FeedForwardLayer,
passing_flash_color=ORANGE,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.passing_flash_color = passing_flash_color
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, run_time=1.5, **kwargs):
"""Forward pass animation from conv2d to conv2d"""
animations = []
# Get input layer final feature map
final_feature_map = self.input_layer.feature_maps[-1]
# Get output layer nodes
feed_forward_nodes = self.output_layer.node_group
# Go through each corner
corners = final_feature_map.get_corners_dict().values()
for corner in corners:
# Go through each node
for node in feed_forward_nodes:
line = Line(corner, node, stroke_width=1.0)
line.set_z_index(self.output_layer.node_group.get_z_index())
anim = ShowPassingFlash(
line.set_color(self.passing_flash_color), time_width=0.2
)
animations.append(anim)
return AnimationGroup(*animations)
|
ManimML_helblazer811/manim_ml/neural_network/layers/convolutional_2d.py | from typing import Union
from manim_ml.neural_network.activation_functions import get_activation_function_by_name
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
import numpy as np
from manim import *
import manim_ml
from manim_ml.neural_network.layers.parent_layers import (
ThreeDLayer,
VGroupNeuralNetworkLayer,
)
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
class FeatureMap(VGroup):
"""Class for making a feature map"""
def __init__(
self,
color=ORANGE,
feature_map_size=None,
fill_color=ORANGE,
fill_opacity=0.2,
cell_width=0.2,
padding=(0, 0),
stroke_width=2.0,
show_grid_lines=False,
padding_dashed=False,
):
super().__init__()
self.color = color
self.feature_map_size = feature_map_size
self.fill_color = fill_color
self.fill_opacity = fill_opacity
self.cell_width = cell_width
self.padding = padding
self.stroke_width = stroke_width
self.show_grid_lines = show_grid_lines
self.padding_dashed = padding_dashed
# Check if we have non-zero padding
if padding[0] > 0 or padding[1] > 0:
# Make the exterior rectangle dashed
width_with_padding = (
self.feature_map_size[0] + self.padding[0] * 2
) * self.cell_width
height_with_padding = (
self.feature_map_size[1] + self.padding[1] * 2
) * self.cell_width
self.untransformed_width = width_with_padding
self.untransformed_height = height_with_padding
self.exterior_rectangle = GriddedRectangle(
color=self.color,
width=width_with_padding,
height=height_with_padding,
fill_color=self.color,
fill_opacity=self.fill_opacity,
stroke_color=self.color,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=self.color,
show_grid_lines=self.show_grid_lines,
dotted_lines=self.padding_dashed,
)
self.add(self.exterior_rectangle)
# Add an interior rectangle with no fill color
self.interior_rectangle = GriddedRectangle(
color=self.color,
fill_opacity=0.0,
width=self.feature_map_size[0] * self.cell_width,
height=self.feature_map_size[1] * self.cell_width,
stroke_width=self.stroke_width,
)
self.add(self.interior_rectangle)
else:
# Just make an exterior rectangle with no dashes.
self.untransformed_height = (self.feature_map_size[1] * self.cell_width,)
self.untransformed_width = (self.feature_map_size[0] * self.cell_width,)
# Make the exterior rectangle
self.exterior_rectangle = GriddedRectangle(
color=self.color,
height=self.feature_map_size[1] * self.cell_width,
width=self.feature_map_size[0] * self.cell_width,
fill_color=self.color,
fill_opacity=self.fill_opacity,
stroke_color=self.color,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=self.color,
show_grid_lines=self.show_grid_lines,
)
self.add(self.exterior_rectangle)
def get_corners_dict(self):
"""Returns a dictionary of the corners"""
# Sort points through clockwise rotation of a vector in the xy plane
return self.exterior_rectangle.get_corners_dict()
class Convolutional2DLayer(VGroupNeuralNetworkLayer, ThreeDLayer):
"""Handles rendering a convolutional layer for a nn"""
def __init__(
self,
num_feature_maps,
feature_map_size=None,
filter_size=None,
cell_width=0.2,
filter_spacing=0.1,
color=BLUE,
active_color=ORANGE,
filter_color=ORANGE,
show_grid_lines=False,
fill_opacity=0.3,
stride=1,
stroke_width=2.0,
activation_function=None,
padding=0,
padding_dashed=True,
**kwargs,
):
super().__init__(**kwargs)
self.num_feature_maps = num_feature_maps
self.filter_color = filter_color
if isinstance(padding, tuple):
assert len(padding) == 2
self.padding = padding
elif isinstance(padding, int):
self.padding = (padding, padding)
else:
raise Exception(f"Unrecognized type for padding: {type(padding)}")
if isinstance(feature_map_size, int):
self.feature_map_size = (feature_map_size, feature_map_size)
else:
self.feature_map_size = feature_map_size
if isinstance(filter_size, int):
self.filter_size = (filter_size, filter_size)
else:
self.filter_size = filter_size
self.cell_width = cell_width
self.filter_spacing = filter_spacing
self.color = color
self.active_color = active_color
self.stride = stride
self.stroke_width = stroke_width
self.show_grid_lines = show_grid_lines
self.activation_function = activation_function
self.fill_opacity = fill_opacity
self.padding_dashed = padding_dashed
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
# Make the feature maps
self.feature_maps = self.construct_feature_maps()
self.add(self.feature_maps)
# Rotate stuff properly
# normal_vector = self.feature_maps[0].get_normal_vector()
self.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=self.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis,
)
self.construct_activation_function()
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_activation_function(self):
"""Construct the activation function"""
# Add the activation function
if not self.activation_function is None:
# Check if it is a string
if isinstance(self.activation_function, str):
activation_function = get_activation_function_by_name(
self.activation_function
)()
else:
assert isinstance(self.activation_function, ActivationFunction)
activation_function = self.activation_function
# Plot the function above the rest of the layer
self.activation_function = activation_function
self.add(self.activation_function)
def construct_feature_maps(self):
"""Creates the neural network layer"""
# Draw rectangles that are filled in with opacity
feature_maps = []
for filter_index in range(self.num_feature_maps):
# Check if we need to add padding
"""
feature_map = GriddedRectangle(
color=self.color,
height=self.feature_map_size[1] * self.cell_width,
width=self.feature_map_size[0] * self.cell_width,
fill_color=self.color,
fill_opacity=self.fill_opacity,
stroke_color=self.color,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=self.color,
show_grid_lines=self.show_grid_lines,
)
"""
# feature_map = GriddedRectangle()
feature_map = FeatureMap(
color=self.color,
feature_map_size=self.feature_map_size,
cell_width=self.cell_width,
fill_color=self.color,
fill_opacity=self.fill_opacity,
padding=self.padding,
padding_dashed=self.padding_dashed,
)
# Move the feature map
feature_map.move_to([0, 0, filter_index * self.filter_spacing])
# rectangle.set_z_index(4)
feature_maps.append(feature_map)
return VGroup(*feature_maps)
def highlight_and_unhighlight_feature_maps(self):
"""Highlights then unhighlights feature maps"""
return Succession(
ApplyMethod(self.feature_maps.set_color, self.active_color),
ApplyMethod(self.feature_maps.set_color, self.color),
)
def make_forward_pass_animation(self, run_time=5, layer_args={}, **kwargs):
"""Convolution forward pass animation"""
# Note: most of this animation is done in the Convolution3DToConvolution3D layer
if not self.activation_function is None:
animation_group = AnimationGroup(
self.activation_function.make_evaluate_animation(),
self.highlight_and_unhighlight_feature_maps(),
lag_ratio=0.0,
)
else:
animation_group = AnimationGroup()
return animation_group
def scale(self, scale_factor, **kwargs):
self.cell_width *= scale_factor
super().scale(scale_factor, **kwargs)
def get_center(self):
"""Overrides function for getting center
The reason for this is so that the center calculation
does not include the activation function.
"""
return self.feature_maps.get_center()
def get_width(self):
"""Overrides get width function"""
return self.feature_maps.length_over_dim(0)
def get_height(self):
"""Overrides get height function"""
return self.feature_maps.length_over_dim(1)
def move_to(self, mobject_or_point):
"""Moves the center of the layer to the given mobject or point"""
layer_center = self.feature_maps.get_center()
if isinstance(mobject_or_point, Mobject):
target_center = mobject_or_point.get_center()
else:
target_center = mobject_or_point
self.shift(target_center - layer_center)
@override_animation(Create)
def _create_override(self, **kwargs):
return FadeIn(self.feature_maps)
|
ManimML_helblazer811/manim_ml/neural_network/layers/max_pooling_2d.py | from manim import *
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
from manim_ml.neural_network.layers.parent_layers import (
ThreeDLayer,
VGroupNeuralNetworkLayer,
)
import manim_ml
class MaxPooling2DLayer(VGroupNeuralNetworkLayer, ThreeDLayer):
"""Max pooling layer for Convolutional2DLayer
Note: This is for a Convolutional2DLayer even though
it is called MaxPooling2DLayer because the 2D corresponds
to the 2 spatial dimensions of the convolution.
"""
def __init__(
self,
kernel_size=2,
stride=1,
cell_highlight_color=ORANGE,
cell_width=0.2,
filter_spacing=0.1,
color=BLUE,
show_grid_lines=False,
stroke_width=2.0,
**kwargs
):
"""Layer object for animating 2D Convolution Max Pooling
Parameters
----------
kernel_size : int or tuple, optional
Width/Height of max pooling kernel, by default 2
stride : int, optional
Stride of the max pooling operation, by default 1
"""
super().__init__(**kwargs)
self.kernel_size = kernel_size
self.stride = stride
self.cell_highlight_color = cell_highlight_color
self.cell_width = cell_width
self.filter_spacing = filter_spacing
self.color = color
self.show_grid_lines = show_grid_lines
self.stroke_width = stroke_width
self.padding = (0, 0)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
# Make the output feature maps
self.feature_maps = self._make_output_feature_maps(
input_layer.num_feature_maps, input_layer.feature_map_size
)
self.add(self.feature_maps)
self.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=self.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis
)
self.feature_map_size = (
input_layer.feature_map_size[0] / self.kernel_size,
input_layer.feature_map_size[1] / self.kernel_size,
)
super().construct_layer(input_layer, output_layer, **kwargs)
def _make_output_feature_maps(self, num_input_feature_maps, input_feature_map_size):
"""Makes a set of output feature maps"""
# Compute the size of the feature maps
output_feature_map_size = (
input_feature_map_size[0] / self.kernel_size,
input_feature_map_size[1] / self.kernel_size,
)
# Draw rectangles that are filled in with opacity
feature_maps = []
for filter_index in range(num_input_feature_maps):
rectangle = GriddedRectangle(
color=self.color,
height=output_feature_map_size[1] * self.cell_width,
width=output_feature_map_size[0] * self.cell_width,
fill_color=self.color,
fill_opacity=0.2,
stroke_color=self.color,
stroke_width=self.stroke_width,
grid_xstep=self.cell_width,
grid_ystep=self.cell_width,
grid_stroke_width=self.stroke_width / 2,
grid_stroke_color=self.color,
show_grid_lines=self.show_grid_lines,
)
# Move the feature map
rectangle.move_to([0, 0, filter_index * self.filter_spacing])
# rectangle.set_z_index(4)
feature_maps.append(rectangle)
return VGroup(*feature_maps)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes forward pass of Max Pooling Layer.
Parameters
----------
layer_args : dict, optional
_description_, by default {}
"""
return AnimationGroup()
@override_animation(Create)
def _create_override(self, **kwargs):
"""Create animation for the MaxPooling operation"""
pass
|
ManimML_helblazer811/manim_ml/neural_network/layers/embedding_to_feed_forward.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
class EmbeddingToFeedForward(ConnectiveLayer):
"""Feed Forward to Embedding Layer"""
input_class = EmbeddingLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.03,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.feed_forward_layer = output_layer
self.embedding_layer = input_layer
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, run_time=1.5, **kwargs):
"""Makes dots diverge from the given location and move the decoder"""
# Find point to converge on by sampling from gaussian distribution
location = self.embedding_layer.sample_point_location_from_distribution()
# Move to location
animations = []
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
dots = []
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
location, radius=self.dot_radius, color=self.animation_dot_color
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
animations.append(per_node_succession)
dots.append(new_dot)
# Follow up with remove animations
remove_animations = []
for dot in dots:
remove_animations.append(FadeOut(dot))
remove_animations = AnimationGroup(*remove_animations, run_time=0.2)
animations = AnimationGroup(*animations)
animation_group = Succession(animations, remove_animations, lag_ratio=1.0)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_embedding.py | from manim import *
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class FeedForwardToEmbedding(ConnectiveLayer):
"""Feed Forward to Embedding Layer"""
input_class = FeedForwardLayer
output_class = EmbeddingLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.03,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.feed_forward_layer = input_layer
self.embedding_layer = output_layer
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, run_time=1.5, **kwargs):
"""Makes dots converge on a specific location"""
# Find point to converge on by sampling from gaussian distribution
location = self.embedding_layer.sample_point_location_from_distribution()
# Set the embedding layer latent distribution
# Move to location
animations = []
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
dots = []
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
node.get_center(),
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(location),
)
animations.append(per_node_succession)
dots.append(new_dot)
self.dots = VGroup(*dots)
self.add(self.dots)
# Follow up with remove animations
remove_animations = []
for dot in dots:
remove_animations.append(FadeOut(dot))
self.remove(self.dots)
remove_animations = AnimationGroup(*remove_animations, run_time=0.2)
animations = AnimationGroup(*animations)
animation_group = Succession(animations, remove_animations, lag_ratio=1.0)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/architectures/__init__.py | |
ManimML_helblazer811/manim_ml/neural_network/architectures/feed_forward.py | import manim_ml
from manim_ml.neural_network.neural_network import NeuralNetwork
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
class FeedForwardNeuralNetwork(NeuralNetwork):
"""NeuralNetwork with just feed forward layers"""
def __init__(
self,
layer_node_count,
node_radius=0.08,
node_color=manim_ml.config.color_scheme.primary_color,
**kwargs
):
# construct layer
layers = []
for num_nodes in layer_node_count:
layer = FeedForwardLayer(
num_nodes, node_color=node_color, node_radius=node_radius
)
layers.append(layer)
# call super class
super().__init__(layers, **kwargs)
|
ManimML_helblazer811/manim_ml/neural_network/architectures/variational_autoencoder.py | """Variational Autoencoder Manim Visualizations
In this module I define Manim visualizations for Variational Autoencoders
and Traditional Autoencoders.
"""
from manim import *
import numpy as np
from PIL import Image
from manim_ml.neural_network.layers import FeedForwardLayer, EmbeddingLayer, ImageLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
class VariationalAutoencoder(VGroup):
"""Variational Autoencoder Manim Visualization"""
def __init__(
self,
encoder_nodes_per_layer=[5, 3],
decoder_nodes_per_layer=[3, 5],
point_color=BLUE,
dot_radius=0.05,
ellipse_stroke_width=1.0,
layer_spacing=0.5,
):
super(VGroup, self).__init__()
self.encoder_nodes_per_layer = encoder_nodes_per_layer
self.decoder_nodes_per_layer = decoder_nodes_per_layer
self.point_color = point_color
self.dot_radius = dot_radius
self.layer_spacing = layer_spacing
self.ellipse_stroke_width = ellipse_stroke_width
# Make the VMobjects
self.neural_network, self.embedding_layer = self._construct_neural_network()
def _construct_neural_network(self):
"""Makes the VAE encoder, embedding layer, and decoder"""
embedding_layer = EmbeddingLayer()
neural_network = NeuralNetwork(
[
FeedForwardLayer(5),
FeedForwardLayer(3),
embedding_layer,
FeedForwardLayer(3),
FeedForwardLayer(5),
]
)
return neural_network, embedding_layer
@override_animation(Create)
def _create_vae(self):
return Create(self.neural_network)
def make_triplet_forward_pass(self, triplet):
pass
def make_image_forward_pass(self, input_image, output_image, run_time=1.5):
"""Override forward pass animation specific to a VAE"""
# Make a wrapper NN with images
wrapper_neural_network = NeuralNetwork(
[ImageLayer(input_image), self.neural_network, ImageLayer(output_image)]
)
# Make animation
animation_group = AnimationGroup(
Create(wrapper_neural_network),
wrapper_neural_network.make_forward_pass_animation(),
lag_ratio=1.0,
)
return animation_group
|
ManimML_helblazer811/manim_ml/neural_network/activation_functions/__init__.py | from manim_ml.neural_network.activation_functions.relu import ReLUFunction
from manim_ml.neural_network.activation_functions.sigmoid import SigmoidFunction
name_to_activation_function_map = {"ReLU": ReLUFunction, "Sigmoid": SigmoidFunction}
def get_activation_function_by_name(name):
assert (
name in name_to_activation_function_map.keys()
), f"Unrecognized activation function {name}"
return name_to_activation_function_map[name]
|
ManimML_helblazer811/manim_ml/neural_network/activation_functions/sigmoid.py | from manim import *
import numpy as np
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
class SigmoidFunction(ActivationFunction):
"""Sigmoid Activation Function"""
def __init__(self, function_name="Sigmoid", x_range=[-5, 5], y_range=[0, 1]):
super().__init__(function_name, x_range, y_range)
def apply_function(self, x_val):
return 1 / (1 + np.exp(-1 * x_val))
|
ManimML_helblazer811/manim_ml/neural_network/activation_functions/activation_function.py | from manim import *
from abc import ABC, abstractmethod
import random
import manim_ml.neural_network.activation_functions.relu as relu
import manim_ml
class ActivationFunction(ABC, VGroup):
"""Abstract parent class for defining activation functions"""
def __init__(
self,
function_name=None,
x_range=[-1, 1],
y_range=[-1, 1],
x_length=0.5,
y_length=0.3,
show_function_name=True,
active_color=manim_ml.config.color_scheme.active_color,
plot_color=manim_ml.config.color_scheme.primary_color,
rectangle_color=manim_ml.config.color_scheme.secondary_color,
):
super(VGroup, self).__init__()
self.function_name = function_name
self.x_range = x_range
self.y_range = y_range
self.x_length = x_length
self.y_length = y_length
self.show_function_name = show_function_name
self.active_color = active_color
self.plot_color = plot_color
self.rectangle_color = rectangle_color
self.construct_activation_function()
def construct_activation_function(self):
"""Makes the activation function"""
# Make an axis
self.axes = Axes(
x_range=self.x_range,
y_range=self.y_range,
x_length=self.x_length,
y_length=self.y_length,
tips=False,
axis_config={
"include_numbers": False,
"stroke_width": 0.5,
"include_ticks": False,
"color": self.rectangle_color
},
)
self.add(self.axes)
# Surround the axis with a rounded rectangle.
self.surrounding_rectangle = SurroundingRectangle(
self.axes,
corner_radius=0.05,
buff=0.05,
stroke_width=2.0,
stroke_color=self.rectangle_color,
)
self.add(self.surrounding_rectangle)
# Plot function on axis by applying it and showing in given range
self.graph = self.axes.plot(
lambda x: self.apply_function(x),
x_range=self.x_range,
stroke_color=self.plot_color,
stroke_width=2.0,
)
self.add(self.graph)
# Add the function name
if self.show_function_name:
function_name_text = Text(
self.function_name, font_size=12, font="sans-serif"
)
function_name_text.next_to(self.axes, UP * 0.5)
self.add(function_name_text)
@abstractmethod
def apply_function(self, x_val):
"""Evaluates function at given x_val"""
if x_val == None:
x_val = random.uniform(self.x_range[0], self.x_range[1])
def make_evaluate_animation(self, x_val=None):
"""Evaluates the function at a random point in the x_range"""
# Highlight the graph
# TODO: Evaluate the function at the x_val and show a highlighted dot
animation_group = Succession(
AnimationGroup(
ApplyMethod(self.graph.set_color, self.active_color),
ApplyMethod(
self.surrounding_rectangle.set_stroke_color, self.active_color
),
lag_ratio=0.0,
),
Wait(1),
AnimationGroup(
ApplyMethod(self.graph.set_color, self.plot_color),
ApplyMethod(
self.surrounding_rectangle.set_stroke_color, self.rectangle_color
),
lag_ratio=0.0,
),
lag_ratio=1.0,
)
return animation_group
|
ManimML_helblazer811/manim_ml/neural_network/activation_functions/relu.py | from manim import *
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
class ReLUFunction(ActivationFunction):
"""Rectified Linear Unit Activation Function"""
def __init__(self, function_name="ReLU", x_range=[-1, 1], y_range=[-1, 1]):
super().__init__(function_name, x_range, y_range)
def apply_function(self, x_val):
if x_val < 0:
return 0
else:
return x_val
|
ManimML_helblazer811/manim_ml/neural_network/animations/__init__.py | |
ManimML_helblazer811/manim_ml/neural_network/animations/neural_network_transformations.py | """
Transformations for manipulating a neural network object.
"""
from manim import *
from manim_ml.neural_network.layers.util import get_connective_layer
class RemoveLayer(AnimationGroup):
"""
Animation for removing a layer from a neural network.
Note: I needed to do something strange for creating the new connective layer.
The issue with creating it initially is that the positions of the sides of the
connective layer depend upon the location of the moved layers **after** the
move animations are performed. However, all of these animations are performed
after the animations have been created. This means that the animation depends upon
the state of the neural network layers after previous animations have been run.
To fix this issue I needed to use an UpdateFromFunc.
"""
def __init__(self, layer, neural_network, layer_spacing=0.2):
self.layer = layer
self.neural_network = neural_network
self.layer_spacing = layer_spacing
# Get the before and after layers
layers_tuple = self.get_connective_layers()
self.before_layer = layers_tuple[0]
self.after_layer = layers_tuple[1]
self.before_connective = layers_tuple[2]
self.after_connective = layers_tuple[3]
# Make the animations
remove_animations = self.make_remove_animation()
move_animations = self.make_move_animation()
new_connective_animation = self.make_new_connective_animation()
# Add all of the animations to the group
animations_list = [remove_animations, move_animations, new_connective_animation]
super().__init__(*animations_list, lag_ratio=1.0)
def get_connective_layers(self):
"""Gets the connective layers before and after self.layer"""
# Get layer index
layer_index = self.neural_network.all_layers.index_of(self.layer)
if layer_index == -1:
raise Exception("Layer object not found")
# Get the layers before and after
before_layer = None
after_layer = None
before_connective = None
after_connective = None
if layer_index - 2 >= 0:
before_layer = self.neural_network.all_layers[layer_index - 2]
before_connective = self.neural_network.all_layers[layer_index - 1]
if layer_index + 2 < len(self.neural_network.all_layers):
after_layer = self.neural_network.all_layers[layer_index + 2]
after_connective = self.neural_network.all_layers[layer_index + 1]
return before_layer, after_layer, before_connective, after_connective
def make_remove_animation(self):
"""Removes layer and the surrounding connective layers"""
remove_layer_animation = self.make_remove_layer_animation()
remove_connective_animation = self.make_remove_connective_layers_animation()
# Remove animations
remove_animations = AnimationGroup(
remove_layer_animation, remove_connective_animation
)
return remove_animations
def make_remove_layer_animation(self):
"""Removes the layer"""
# Remove the layer
self.neural_network.all_layers.remove(self.layer)
# Fade out the removed layer
fade_out_removed = FadeOut(self.layer)
return fade_out_removed
def make_remove_connective_layers_animation(self):
"""Removes the connective layers before and after layer if they exist"""
# Fade out the removed connective layers
fade_out_before_connective = AnimationGroup()
if not self.before_connective is None:
self.neural_network.all_layers.remove(self.before_connective)
fade_out_before_connective = FadeOut(self.before_connective)
fade_out_after_connective = AnimationGroup()
if not self.after_connective is None:
self.neural_network.all_layers.remove(self.after_connective)
fade_out_after_connective = FadeOut(self.after_connective)
# Group items
remove_connective_group = AnimationGroup(
fade_out_after_connective, fade_out_before_connective
)
return remove_connective_group
def make_move_animation(self):
"""Collapses layers"""
# Animate the movements
move_before_layers = AnimationGroup()
shift_right_amount = None
if not self.before_layer is None:
# Compute shift amount
layer_dist = np.abs(
self.layer.get_center() - self.before_layer.get_right()
)[0]
shift_right_amount = np.array([layer_dist - self.layer_spacing / 2, 0, 0])
# Shift all layers before forward
before_layer_index = self.neural_network.all_layers.index_of(
self.before_layer
)
layers_before = Group(
*self.neural_network.all_layers[: before_layer_index + 1]
)
move_before_layers = layers_before.animate.shift(shift_right_amount)
move_after_layers = AnimationGroup()
shift_left_amount = None
if not self.after_layer is None:
layer_dist = np.abs(self.after_layer.get_left() - self.layer.get_center())[
0
]
shift_left_amount = np.array([-layer_dist + self.layer_spacing / 2, 0, 0])
# Shift all layers after backward
after_layer_index = self.neural_network.all_layers.index_of(
self.after_layer
)
layers_after = Group(*self.neural_network.all_layers[after_layer_index:])
move_after_layers = layers_after.animate.shift(shift_left_amount)
# Group the move animations
move_group = AnimationGroup(move_before_layers, move_after_layers)
return move_group
def make_new_connective_animation(self):
"""Makes new connective layer"""
self.anim_count = 0
def create_new_connective(neural_network):
"""
Creates new connective layer
This is a closure that creates a new connective layer and animates it.
"""
self.anim_count += 1
if self.anim_count == 1:
if not self.before_layer is None and not self.after_layer is None:
print(neural_network)
new_connective_class = get_connective_layer(
self.before_layer, self.after_layer
)
before_layer_index = (
neural_network.all_layers.index_of(self.before_layer) + 1
)
neural_network.all_layers.insert(before_layer_index, new_connective)
print(neural_network)
update_func_anim = UpdateFromFunc(self.neural_network, create_new_connective)
return update_func_anim
class InsertLayer(AnimationGroup):
"""Animation for inserting layer at given index"""
def __init__(self, layer, index, neural_network):
self.layer = layer
self.index = index
self.neural_network = neural_network
# Check valid index
assert index < len(self.neural_network.all_layers)
# Layers before and after
self.layers_before = self.neural_network.all_layers[: self.index]
self.layers_after = self.neural_network.all_layers[self.index :]
# Get the non-connective layer before and after
if len(self.layers_before) > 0:
self.layer_before = self.layers_before[-2]
if len(self.layers_after) > 0:
self.layer_after = self.layers_after[0]
# Move layer
if not self.layer_after is None:
self.layer.move_to(self.layer_after)
# Make animations
(
self.old_connective_layer,
remove_connective_layer,
) = self.remove_connective_layer_animation()
move_layers = self.make_move_layers_animation()
create_layer = self.make_create_layer_animation()
# create_connective_layers = self.make_create_connective_layers()
animations = [
remove_connective_layer,
move_layers,
create_layer,
# create_connective_layers
]
super().__init__(*animations, lag_ratio=1.0)
def get_connective_layer_widths(self):
"""Gets the widths of the connective layers"""
# Make the layers
before_connective = None
after_connective = None
# Get the connective layer objects
if len(self.layers_before) > 0:
before_connective = get_connective_layer(self.layer_before, self.layer)
if len(self.layers_after) > 0:
after_connective = get_connective_layer(self.layer, self.layer_after)
# Compute the widths
before_connective_width = 0
if not before_connective is None:
before_connective_width = before_connective.width
after_connective_width = 0
if not after_connective is None:
after_connective_width = after_connective.width
return before_connective_width, after_connective_width
def remove_connective_layer_animation(self):
"""Removes the connective layer before the insertion index"""
# Check if connective layer before exists
if len(self.layers_before) > 0:
removed_connective = self.layers_before[-1]
self.layers_before.remove(removed_connective)
self.neural_network.all_layers.remove(removed_connective)
# Make remove animation
remove_animation = FadeOut(removed_connective)
return removed_connective, remove_animation
return None, AnimationGroup()
def make_move_layers_animation(self):
"""Shifts layers before and after"""
(
before_connective_width,
after_connective_width,
) = self.get_connective_layer_widths()
old_connective_width = 0
if not self.old_connective_layer is None:
old_connective_width = self.old_connective_layer.width
# Before layer shift
before_shift_animation = AnimationGroup()
if len(self.layers_before) > 0:
before_shift = np.array(
[
-self.layer.width / 2
- before_connective_width
+ old_connective_width,
0,
0,
]
)
# Shift layers before
before_shift_animation = Group(*self.layers_before).animate.shift(
before_shift
)
# After layer shift
after_shift_animation = AnimationGroup()
if len(self.layers_after) > 0:
after_shift = np.array(
[self.layer.width / 2 + after_connective_width, 0, 0]
)
# Shift layers after
after_shift_animation = Group(*self.layers_after).animate.shift(after_shift)
# Make animation group
shift_animations = AnimationGroup(before_shift_animation, after_shift_animation)
return shift_animations
def make_create_layer_animation(self):
"""Animates the creation of the layer"""
return Create(self.layer)
def make_create_connective_layers_animation(
self, before_connective, after_connective
):
"""Create connective layers"""
# Make the layers
before_connective = None
after_connective = None
# Get the connective layer objects
if len(self.layers_before) > 0:
before_connective = get_connective_layer(self.layers_before[-1], self.layer)
if len(self.layers_after) > 0:
after_connective = get_connective_layer(self.layers_after[0], self.layer)
# Insert the layers
# Make the animation
animation_group = AnimationGroup(
Create(before_connective), Create(after_connective)
)
return animation_group
|
ManimML_helblazer811/manim_ml/neural_network/animations/dropout.py | """
Code for making a dropout animation for the
feed forward layers of a neural network.
"""
from manim import *
import random
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.feed_forward_to_feed_forward import (
FeedForwardToFeedForward,
)
class XMark(VGroup):
def __init__(self, stroke_width=1.0, color=GRAY):
super().__init__()
line_one = Line(
[-0.1, 0.1, 0],
[0.1, -0.1, 0],
stroke_width=1.0,
stroke_color=color,
z_index=4,
)
self.add(line_one)
line_two = Line(
[-0.1, -0.1, 0],
[0.1, 0.1, 0],
stroke_width=1.0,
stroke_color=color,
z_index=4,
)
self.add(line_two)
def get_edges_to_drop_out(layer: FeedForwardToFeedForward, layers_to_nodes_to_drop_out):
"""Returns edges to drop out for a given FeedForwardToFeedForward layer"""
prev_layer = layer.input_layer
next_layer = layer.output_layer
# Get the nodes to dropout in previous layer
prev_layer_nodes_to_dropout = layers_to_nodes_to_drop_out[prev_layer]
next_layer_nodes_to_dropout = layers_to_nodes_to_drop_out[next_layer]
# Compute the edges to dropout
edges_to_dropout = []
edge_indices_to_dropout = []
for edge_index, edge in enumerate(layer.edges):
prev_node_index = int(edge_index / next_layer.num_nodes)
next_node_index = edge_index % next_layer.num_nodes
# Check if the edges should be dropped out
if (
prev_node_index in prev_layer_nodes_to_dropout
or next_node_index in next_layer_nodes_to_dropout
):
edges_to_dropout.append(edge)
edge_indices_to_dropout.append(edge_index)
return edges_to_dropout, edge_indices_to_dropout
def make_pre_dropout_animation(
neural_network,
layers_to_nodes_to_drop_out,
dropped_out_color=GRAY,
dropped_out_opacity=0.2,
):
"""Makes an animation that sets up the NN layer for dropout"""
animations = []
# Go through the network and get the FeedForwardLayer instances
feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardLayer)
)
# Go through the network and get the FeedForwardToFeedForwardLayer instances
feed_forward_to_feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardToFeedForward)
)
# Get the edges to drop out
layers_to_edges_to_dropout = {}
for layer in feed_forward_to_feed_forward_layers:
layers_to_edges_to_dropout[layer], _ = get_edges_to_drop_out(
layer, layers_to_nodes_to_drop_out
)
# Dim the colors of the edges
dim_edge_colors_animations = []
for layer in layers_to_edges_to_dropout.keys():
edges_to_drop_out = layers_to_edges_to_dropout[layer]
# Make color dimming animation
for edge_index, edge in enumerate(edges_to_drop_out):
"""
def modify_edge(edge):
edge.set_stroke_color(dropped_out_color)
edge.set_stroke_width(0.6)
edge.set_stroke_opacity(dropped_out_opacity)
return edge
dim_edge = ApplyFunction(
modify_edge,
edge
)
"""
dim_edge_colors_animations.append(FadeOut(edge))
dim_edge_colors_animation = AnimationGroup(
*dim_edge_colors_animations, lag_ratio=0.0
)
# Dim the colors of the nodes
dim_nodes_animations = []
x_marks = []
for layer in layers_to_nodes_to_drop_out.keys():
nodes_to_dropout = layers_to_nodes_to_drop_out[layer]
# Make an X over each node
for node_index, node in enumerate(layer.node_group):
if node_index in nodes_to_dropout:
x_mark = XMark()
x_marks.append(x_mark)
x_mark.move_to(node.get_center())
create_x = Create(x_mark)
dim_nodes_animations.append(create_x)
dim_nodes_animation = AnimationGroup(*dim_nodes_animations, lag_ratio=0.0)
animation_group = AnimationGroup(
dim_edge_colors_animation,
dim_nodes_animation,
)
return animation_group, x_marks
def make_post_dropout_animation(
neural_network,
layers_to_nodes_to_drop_out,
x_marks,
):
"""Returns the NN to normal after dropout"""
# Go through the network and get the FeedForwardLayer instances
feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardLayer)
)
# Go through the network and get the FeedForwardToFeedForwardLayer instances
feed_forward_to_feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardToFeedForward)
)
# Get the edges to drop out
layers_to_edges_to_dropout = {}
for layer in feed_forward_to_feed_forward_layers:
layers_to_edges_to_dropout[layer], _ = get_edges_to_drop_out(
layer, layers_to_nodes_to_drop_out
)
# Remove the x marks
uncreate_animations = []
for x_mark in x_marks:
uncreate_x_mark = Uncreate(x_mark)
uncreate_animations.append(uncreate_x_mark)
uncreate_x_marks = AnimationGroup(*uncreate_animations, lag_ratio=0.0)
# Add the edges back
create_edge_animations = []
for layer in layers_to_edges_to_dropout.keys():
edges_to_drop_out = layers_to_edges_to_dropout[layer]
# Make color dimming animation
for edge_index, edge in enumerate(edges_to_drop_out):
edge_copy = edge.copy()
edges_to_drop_out[edge_index] = edge_copy
create_edge_animations.append(FadeIn(edge_copy))
create_edge_animation = AnimationGroup(*create_edge_animations, lag_ratio=0.0)
return AnimationGroup(uncreate_x_marks, create_edge_animation, lag_ratio=0.0)
def make_forward_pass_with_dropout_animation(
neural_network,
layers_to_nodes_to_drop_out,
):
"""Makes forward pass animation with dropout"""
layer_args = {}
# Go through the network and get the FeedForwardLayer instances
feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardLayer)
)
# Go through the network and get the FeedForwardToFeedForwardLayer instances
feed_forward_to_feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardToFeedForward)
)
# Iterate through network and get feed forward layers
for layer in feed_forward_layers:
layer_args[layer] = {"dropout_node_indices": layers_to_nodes_to_drop_out[layer]}
for layer in feed_forward_to_feed_forward_layers:
_, edge_indices = get_edges_to_drop_out(layer, layers_to_nodes_to_drop_out)
layer_args[layer] = {"edge_indices_to_dropout": edge_indices}
return neural_network.make_forward_pass_animation(layer_args=layer_args)
def make_neural_network_dropout_animation(
neural_network, dropout_rate=0.5, do_forward_pass=True, last_layer_stable=False, first_layer_stable=False, seed=None
):
"""
Makes a dropout animation for a given neural network.
NOTE Does dropout only on feed forward layers.
1. Does dropout
2. If `do_forward_pass` then do forward pass animation
3. Revert network to pre-dropout appearance
"""
# Go through the network and get the FeedForwardLayer instances
if seed is not None:
random.seed(seed)
feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardLayer)
)
# Go through the network and get the FeedForwardToFeedForwardLayer instances
feed_forward_to_feed_forward_layers = neural_network.filter_layers(
lambda layer: isinstance(layer, FeedForwardToFeedForward)
)
# Get random nodes to drop out for each FeedForward Layer
layers_to_nodes_to_drop_out = {}
for idx, feed_forward_layer in enumerate(feed_forward_layers):
num_nodes = feed_forward_layer.num_nodes
nodes_to_drop_out = []
# Compute random probability that each node is dropped out
for node_index in range(num_nodes):
dropout_prob = random.random()
if last_layer_stable and idx==len(feed_forward_layers)-1:
continue
if first_layer_stable and idx==0:
continue
if dropout_prob < dropout_rate:
nodes_to_drop_out.append(node_index)
# Add the mapping to the dict
layers_to_nodes_to_drop_out[feed_forward_layer] = nodes_to_drop_out
# Make the animation
pre_dropout_animation, x_marks = make_pre_dropout_animation(
neural_network, layers_to_nodes_to_drop_out
)
if do_forward_pass:
forward_pass_animation = make_forward_pass_with_dropout_animation(
neural_network, layers_to_nodes_to_drop_out
)
else:
forward_pass_animation = AnimationGroup()
post_dropout_animation = make_post_dropout_animation(
neural_network, layers_to_nodes_to_drop_out, x_marks
)
# Combine the animations into one
return Succession(
pre_dropout_animation, forward_pass_animation, post_dropout_animation
)
|
ManimML_helblazer811/examples/neocognitron/neocognitron.py | from manim import *
from manim_ml.neural_network import NeuralNetwork
from manim_ml.neural_network.layers.parent_layers import NeuralNetworkLayer, ConnectiveLayer, ThreeDLayer
import manim_ml
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 10.5
config.frame_width = 10.5
class NeocognitronFilter(VGroup):
"""
Filter connecting the S and C Cells of a neocognitron layer.
"""
def __init__(
self,
input_cell,
output_cell,
receptive_field_radius,
outline_color=BLUE,
active_color=ORANGE,
**kwargs
):
super().__init__(**kwargs)
self.outline_color = outline_color
self.active_color = active_color
self.input_cell = input_cell
self.output_cell = output_cell
# Draw the receptive field
# Make the points of a equilateral triangle in the plane of the
# cell about a random center
# center_point = input_cell.get_center()
shift_amount_x = np.random.uniform(
-(input_cell.get_height()/2 - receptive_field_radius - 0.01),
input_cell.get_height()/2 - receptive_field_radius - 0.01,
)
shift_amount_y = np.random.uniform(
-(input_cell.get_height()/2 - receptive_field_radius - 0.01),
input_cell.get_height()/2 - receptive_field_radius - 0.01,
)
# center_point += np.array([shift_amount_x, shift_amount_y, 0])
# # Make the triangle points
# side_length = np.sqrt(3) * receptive_field_radius
# normal_vector = np.cross(
# input_cell.get_left() - input_cell.get_center(),
# input_cell.get_top() - input_cell.get_center(),
# )
# Get vector in the plane of the cell
# vector_in_plane = input_cell.get_left() - input_cell.get_center()
# point_a = center_point + vector_in_plane * receptive_field_radius
# # rotate the vector 120 degrees about the vector normal to the cell
# vector_in_plane = rotate_vector(vector_in_plane, PI/3, normal_vector)
# point_b = center_point + vector_in_plane * receptive_field_radius
# # rotate the vector 120 degrees about the vector normal to the cell
# vector_in_plane = rotate_vector(vector_in_plane, PI/3, normal_vector)
# point_c = center_point + vector_in_plane * receptive_field_radius
# self.receptive_field = Circle.from_three_points(
# point_a,
# point_b,
# point_c,
# color=outline_color,
# stroke_width=2.0,
# )
self.receptive_field = Circle(
radius=receptive_field_radius,
color=outline_color,
stroke_width=3.0,
)
self.add(self.receptive_field)
# Move receptive field to a random point within the cell
# minus the radius of the receptive field
self.receptive_field.move_to(
input_cell
)
# Shift a random amount in the x and y direction within
self.receptive_field.shift(
np.array([shift_amount_x, shift_amount_y, 0])
)
# Choose a random point on the c_cell
shift_amount_x = np.random.uniform(
-(output_cell.get_height()/2 - receptive_field_radius - 0.01),
output_cell.get_height()/2 - receptive_field_radius - 0.01,
)
shift_amount_y = np.random.uniform(
-(output_cell.get_height()/2 - receptive_field_radius - 0.01),
output_cell.get_height()/2 - receptive_field_radius - 0.01,
)
self.dot = Dot(
color=outline_color,
radius=0.04
)
self.dot.move_to(output_cell.get_center() + np.array([shift_amount_x, shift_amount_y, 0]))
self.add(self.dot)
# Make lines connecting receptive field to the dot
self.lines = VGroup()
self.lines.add(
Line(
self.receptive_field.get_center() + np.array([0, receptive_field_radius, 0]),
self.dot,
color=outline_color,
stroke_width=3.0,
)
)
self.lines.add(
Line(
self.receptive_field.get_center() - np.array([0, receptive_field_radius, 0]),
self.dot,
color=outline_color,
stroke_width=3.0,
)
)
self.add(self.lines)
def make_filter_pulse_animation(self, **kwargs):
succession = Succession(
ApplyMethod(
self.receptive_field.set_color,
self.active_color,
run_time=0.25
),
ApplyMethod(
self.receptive_field.set_color,
self.outline_color,
run_time=0.25
),
ShowPassingFlash(
self.lines.copy().set_color(self.active_color),
time_width=0.5,
),
ApplyMethod(
self.dot.set_color,
self.active_color,
run_time=0.25
),
ApplyMethod(
self.dot.set_color,
self.outline_color,
run_time=0.25
),
)
return succession
class NeocognitronLayer(NeuralNetworkLayer, ThreeDLayer):
def __init__(
self,
num_cells,
cell_height,
receptive_field_radius,
layer_name,
active_color=manim_ml.config.color_scheme.active_color,
cell_color=manim_ml.config.color_scheme.secondary_color,
outline_color=manim_ml.config.color_scheme.primary_color,
show_outline=True,
**kwargs
):
super().__init__(**kwargs)
self.num_cells = num_cells
self.cell_height = cell_height
self.receptive_field_radius = receptive_field_radius
self.layer_name = layer_name
self.active_color = active_color
self.cell_color = cell_color
self.outline_color = outline_color
self.show_outline = show_outline
self.plane_label = Text(layer_name).scale(0.4)
def make_cell_plane(self, layer_name, cell_buffer=0.1, stroke_width=2.0):
"""Makes a plane of cells.
"""
cell_plane = VGroup()
for cell_index in range(self.num_cells):
# Draw the cell box
cell_box = Rectangle(
width=self.cell_height,
height=self.cell_height,
color=self.cell_color,
fill_color=self.cell_color,
fill_opacity=0.3,
stroke_width=stroke_width,
)
if cell_index > 0:
cell_box.next_to(cell_plane[-1], DOWN, buff=cell_buffer)
cell_plane.add(cell_box)
# Draw the outline
if self.show_outline:
self.plane_outline = SurroundingRectangle(
cell_plane,
color=self.cell_color,
buff=cell_buffer,
stroke_width=stroke_width,
)
cell_plane.add(
self.plane_outline
)
# Draw a label above the container box
self.plane_label.next_to(cell_plane, UP, buff=0.2)
cell_plane.add(self.plane_label)
return cell_plane
def construct_layer(self, input_layer, output_layer, **kwargs):
# Make the Cell layer
self.cell_plane = self.make_cell_plane(self.layer_name)
self.add(self.cell_plane)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass for query"""
# # Pulse and un pulse the cell plane rectangle
flash_outline_animations = []
for cell in self.cell_plane:
flash_outline_animations.append(
Succession(
ApplyMethod(
cell.set_stroke_color,
self.active_color,
run_time=0.25
),
ApplyMethod(
cell.set_stroke_color,
self.outline_color,
run_time=0.25
)
)
)
return AnimationGroup(
*flash_outline_animations,
lag_ratio=0.0
)
class NeocognitronToNeocognitronLayer(ConnectiveLayer):
input_class = NeocognitronLayer
output_class = NeocognitronLayer
def __init__(self, input_layer, output_layer, **kwargs):
super().__init__(input_layer, output_layer, **kwargs)
def construct_layer(self, input_layer, output_layer, **kwargs):
self.filters = []
for cell_index in range(input_layer.num_cells):
input_cell = input_layer.cell_plane[cell_index]
# Randomly choose a cell from the output layer
output_cell = output_layer.cell_plane[
np.random.randint(0, output_layer.num_cells)
]
# Make the filter
filter = NeocognitronFilter(
input_cell=input_cell,
output_cell=output_cell,
receptive_field_radius=input_layer.receptive_field_radius,
outline_color=self.input_layer.outline_color
)
# filter = NeocognitronFilter(
# outline_color=input_layer.outline_color
# )
self.filters.append(filter)
self.add(VGroup(*self.filters))
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass for query"""
filter_pulses = []
for filter in self.filters:
filter_pulses.append(
filter.make_filter_pulse_animation()
)
return AnimationGroup(
*filter_pulses
)
manim_ml.neural_network.layers.util.register_custom_connective_layer(
NeocognitronToNeocognitronLayer,
)
class Neocognitron(NeuralNetwork):
def __init__(
self,
camera,
cells_per_layer=[4, 5, 4, 4, 3, 3, 5, 5],
cell_heights=[0.8, 0.8, 0.8*0.75, 0.8*0.75, 0.8*0.75**2, 0.8*0.75**2, 0.8*0.75**3, 0.02],
layer_names=["S1", "C1", "S2", "C2", "S3", "C3", "S4", "C4"],
receptive_field_sizes=[0.2, 0.2, 0.2*0.75, 0.2*0.75, 0.2*0.75**2, 0.2*0.75**2, 0.2*0.75**3, 0.0],
):
self.cells_per_layer = cells_per_layer
self.cell_heights = cell_heights
self.layer_names = layer_names
self.receptive_field_sizes = receptive_field_sizes
# Make the neo-cognitron input layer
input_layers = []
input_layers.append(
NeocognitronLayer(
1,
1.5,
0.2,
"U0",
show_outline=False,
)
)
# Make the neo-cognitron layers
for i in range(len(cells_per_layer)):
layer = NeocognitronLayer(
cells_per_layer[i],
cell_heights[i],
receptive_field_sizes[i],
layer_names[i]
)
input_layers.append(layer)
# Make all of the layer labels fixed in frame
for layer in input_layers:
if isinstance(layer, NeocognitronLayer):
# camera.add_fixed_orientation_mobjects(layer.plane_label)
pass
all_layers = []
for layer_index in range(len(input_layers) - 1):
input_layer = input_layers[layer_index]
all_layers.append(input_layer)
output_layer = input_layers[layer_index + 1]
connective_layer = NeocognitronToNeocognitronLayer(
input_layer,
output_layer
)
all_layers.append(connective_layer)
all_layers.append(input_layers[-1])
def custom_layout_func(neural_network):
# Insert the connective layers
# Pass the layers to neural network super class
# Rotate pairs of layers
layer_index = 1
while layer_index < len(input_layers):
prev_layer = input_layers[layer_index - 1]
s_layer = input_layers[layer_index]
s_layer.next_to(prev_layer, RIGHT, buff=0.0)
s_layer.shift(RIGHT * 0.4)
c_layer = input_layers[layer_index + 1]
c_layer.next_to(s_layer, RIGHT, buff=0.0)
c_layer.shift(RIGHT * 0.2)
# Rotate the pair of layers
group = Group(s_layer, c_layer)
group.move_to(np.array([group.get_center()[0], 0, 0]))
# group.shift(layer_index * 3 * np.array([0, 0, 1.0]))
# group.rotate(40 * DEGREES, axis=UP, about_point=group.get_center())
# c_layer.rotate(40*DEGREES, axis=UP, about_point=group.get_center())
# s_layer.shift(
# layer_index // 2 * np.array([0, 0, 0.3])
# )
# c_layer.shift(
# layer_index // 2 * np.array([0, 0, 0.3])
# )
layer_index += 2
neural_network.move_to(ORIGIN)
super().__init__(
all_layers,
layer_spacing=0.5,
custom_layout_func=custom_layout_func
)
class Scene(ThreeDScene):
def construct(self):
neocognitron = Neocognitron(self.camera)
neocognitron.shift(DOWN*0.5)
self.add(neocognitron)
title = Text("Neocognitron").scale(1)
self.add_fixed_in_frame_mobjects(title)
title.next_to(neocognitron, UP, buff=0.4)
self.add(title)
"""
self.play(
neocognitron.make_forward_pass_animation()
)
"""
print(self.camera.gamma)
print(self.camera.theta)
print(self.camera.phi)
neocognitron.rotate(90*DEGREES, axis=RIGHT)
neocognitron.shift(np.array([0, 0, -0.2]))
# title.rotate(90*DEGREES, axis=RIGHT)
# self.set_camera_orientation(phi=-15*DEGREES)
# vec = np.array([0.0, 0.2, 0.0])
# vec /= np.linalg.norm(vec)
# x, y, z = vec[0], vec[1], vec[2]
# theta = np.arccos(z)
# phi = np.arctan(y / x)
self.set_camera_orientation(phi=90 * DEGREES, theta=-70*DEGREES, gamma=0*DEGREES)
# self.set_camera_orientation(theta=theta, phi=phi)
forward_pass = neocognitron.make_forward_pass_animation()
self.play(forward_pass)
|
ManimML_helblazer811/examples/diffusion_process/diffusion_process.py | """
Shows video of diffusion process.
"""
import manim_ml
from manim import *
from PIL import Image
import os
from diffusers import StableDiffusionPipeline
import numpy as np
import scipy
from manim_ml.diffusion.mcmc import metropolis_hastings_sampler, gaussian_proposal
from manim_ml.diffusion.random_walk import RandomWalk
from manim_ml.utils.mobjects.probability import make_dist_image_mobject_from_samples
def generate_stable_diffusion_images(prompt, num_inference_steps=30):
"""Generates a list of progressively denoised images using Stable Diffusion
Parameters
----------
num_inference_steps : int, optional
_description_, by default 30
"""
pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
image_list = []
# Save image callback
def save_image_callback(step, timestep, latents):
print("Saving image callback")
# Decode the latents
image = pipeline.vae.decode(latents)
image_list.append(image)
# Generate the image
pipeline(prompt, num_inference_steps=num_inference_steps, callback=save_image_callback)
return image_list
def make_time_schedule_bar(num_time_steps=30):
"""Makes a bar that gets moved back and forth according to the diffusion model time"""
# Draw time bar with initial value
time_bar = NumberLine(
[0, num_time_steps], length=25, stroke_width=10, include_ticks=False, include_numbers=False,
color=manim_ml.config.color_scheme.secondary_color
)
time_bar.shift(4.5 * DOWN)
current_time = ValueTracker(0.3)
time_point = time_bar.number_to_point(current_time.get_value())
time_dot = Dot(time_point, color=manim_ml.config.color_scheme.secondary_color, radius=0.2)
label_location = time_bar.number_to_point(1.0)
label_location -= DOWN * 0.1
label_text = MathTex("t", color=manim_ml.config.color_scheme.text_color).scale(1.5)
# label_text = Text("time")
label_text.move_to(time_bar.get_center())
label_text.shift(DOWN * 0.5)
# Make an updater for the dot
def dot_updater(time_dot):
# Get location on time_bar
point_loc = time_bar.number_to_point(current_time.get_value())
time_dot.move_to(point_loc)
time_dot.add_updater(dot_updater)
return time_bar, time_dot, label_text, current_time
def make_2d_diffusion_space():
"""Makes a 2D axis where the diffusion random walk happens in.
There is also a distribution of points representing the true distribution
of images.
"""
axes_group = Group()
x_range = [-8, 8]
y_range = [-8, 8]
y_length = 13
x_length = 13
# Make an axis
axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=x_length,
y_length=y_length,
# x_axis_config={"stroke_opacity": 1.0, "stroke_color": WHITE},
# y_axis_config={"stroke_opacity": 1.0, "stroke_color": WHITE},
# tips=True,
)
# Make the distribution for the true images as some gaussian mixture in
# the bottom right
gaussian_a = np.random.multivariate_normal(
mean=[3.0, -3.2],
cov=[[1.0, 0.0], [0.0, 1.0]],
size=(100)
)
gaussian_b = np.random.multivariate_normal(
mean=[3.0, 3.0],
cov=[[1.0, 0.0], [0.0, 1.0]],
size=(100)
)
gaussian_c = np.random.multivariate_normal(
mean=[0.0, -1.6],
cov=[[1.0, 0.0], [0.0, 1.0]],
size=(200)
)
all_gaussian_samples = np.concatenate([gaussian_a, gaussian_b, gaussian_c], axis=0)
print(f"Shape of all gaussian samples: {all_gaussian_samples.shape}")
image_mobject = make_dist_image_mobject_from_samples(
all_gaussian_samples,
xlim=(x_range[0], x_range[1]),
ylim=(y_range[0], y_range[1])
)
image_mobject.scale_to_fit_height(
axes.get_height()
)
image_mobject.move_to(axes)
axes_group.add(axes)
axes_group.add(image_mobject)
return axes_group
def generate_forward_and_reverse_chains(start_point, target_point, num_inference_steps=30):
"""Here basically we want to generate a forward and reverse chain
for the diffusion model where the reverse chain starts at the end of the forward
chain, and the end of the reverse chain ends somewhere within a certain radius of the
reverse chain.
This can be done in a sortof hacky way by doing metropolis hastings from a start point
to a distribution centered about the start point and vica versa.
Parameters
----------
start_point : _type_
_description_
"""
def start_dist_log_prob_fn(x):
"""Log probability of the start distribution"""
# Make it a gaussian in top left of the 2D space
gaussian_pdf = scipy.stats.multivariate_normal(
mean=target_point,
cov=[1.0, 1.0]
).pdf(x)
return np.log(gaussian_pdf)
def end_dist_log_prob_fn(x):
gaussian_pdf = scipy.stats.multivariate_normal(
mean=start_point,
cov=[1.0, 1.0]
).pdf(x)
return np.log(gaussian_pdf)
forward_chain, _, _ = metropolis_hastings_sampler(
log_prob_fn=start_dist_log_prob_fn,
prop_fn=gaussian_proposal,
iterations=num_inference_steps,
initial_location=start_point
)
end_point = forward_chain[-1]
reverse_chain, _, _ = metropolis_hastings_sampler(
log_prob_fn=end_dist_log_prob_fn,
prop_fn=gaussian_proposal,
iterations=num_inference_steps,
initial_location=end_point
)
return forward_chain, reverse_chain
# Make the scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 30.0
config.frame_width = 30.0
class DiffusionProcess(Scene):
def construct(self):
# Parameters
num_inference_steps = 50
image_save_dir = "diffusion_images"
prompt = "An oil painting of a dragon."
start_time = 0
# Compute the stable diffusion images
if len(os.listdir(image_save_dir)) < num_inference_steps:
stable_diffusion_images = generate_stable_diffusion_images(
prompt,
num_inference_steps=num_inference_steps
)
for index, image in enumerate(stable_diffusion_images):
image.save(f"{image_save_dir}/{index}.png")
else:
stable_diffusion_images = [Image.open(f"{image_save_dir}/{i}.png") for i in range(num_inference_steps)]
# Reverse order of list to be in line with theory timesteps
stable_diffusion_images = stable_diffusion_images[::-1]
# Add the initial location of the first image
start_image = stable_diffusion_images[start_time]
image_mobject = ImageMobject(start_image)
image_mobject.scale(0.55)
image_mobject.shift(LEFT * 7.5)
self.add(image_mobject)
# Make the time schedule bar
time_bar, time_dot, time_label, current_time = make_time_schedule_bar(num_time_steps=num_inference_steps)
# Place the bar at the bottom of the screen
time_bar.move_to(image_mobject.get_bottom() + DOWN * 2)
time_bar.set_x(0)
self.add(time_bar)
# Place the time label below the time bar
self.add(time_label)
time_label.next_to(time_bar, DOWN, buff=0.5)
# Add 0 and T labels above the bar
zero_label = Text("0")
zero_label.next_to(time_bar.get_left(), UP, buff=0.5)
self.add(zero_label)
t_label = Text("T")
t_label.next_to(time_bar.get_right(), UP, buff=0.5)
self.add(t_label)
# Move the time dot to zero
time_dot.set_value(0)
time_dot.move_to(time_bar.number_to_point(0))
self.add(time_dot)
# Add the prompt above the image
paragraph_prompt = '"An oil painting of\n a flaming dragon."'
text_prompt = Paragraph(paragraph_prompt, alignment="center", font_size=64)
text_prompt.next_to(image_mobject, UP, buff=0.6)
self.add(text_prompt)
# Generate the chain data
forward_chain, reverse_chain = generate_forward_and_reverse_chains(
start_point=[3.0, -3.0],
target_point=[-7.0, -4.0],
num_inference_steps=num_inference_steps
)
# Make the axes that the distribution and chains go on
axes, axes_group = make_2d_diffusion_space()
# axes_group.match_height(image_mobject)
axes_group.shift(RIGHT * 7.5)
axes.shift(RIGHT * 7.5)
self.add(axes_group)
# Add title below distribution
title = Text("Distribution of Real Images", font_size=48, color=RED)
title.move_to(axes_group)
title.shift(DOWN * 6)
line = Line(
title.get_top(),
axes_group.get_bottom() - 4 * DOWN,
)
self.add(title)
self.add(line)
# Add a title above the plot
title = Text("Reverse Diffusion Process", font_size=64)
title.move_to(axes)
title.match_y(text_prompt)
self.add(title)
# First do the forward noise process of adding noise to an image
forward_random_walk = RandomWalk(forward_chain, axes=axes)
# Sync up forward random walk
synced_animations = []
forward_random_walk_animation = forward_random_walk.animate_random_walk()
print(len(forward_random_walk_animation.animations))
for timestep, transition_animation in enumerate(forward_random_walk_animation.animations):
# Make an animation moving the time bar
# time_bar_animation = current_time.animate.set_value(timestep)
time_bar_animation = current_time.animate.set_value(timestep)
# Make an animation replacing the current image with the timestep image
new_image = ImageMobject(stable_diffusion_images[timestep])
new_image = new_image.set_height(image_mobject.get_height())
new_image.move_to(image_mobject)
image_mobject = new_image
replace_image_animation = AnimationGroup(
FadeOut(image_mobject),
FadeIn(new_image),
lag_ratio=1.0
)
# Sync them together
# synced_animations.append(
self.play(
Succession(
transition_animation,
time_bar_animation,
replace_image_animation,
lag_ratio=0.0
)
)
# Fade out the random walk
self.play(forward_random_walk.fade_out_random_walk())
# self.play(
# Succession(
# *synced_animations,
# )
# )
new_title = Text("Forward Diffusion Process", font_size=64)
new_title.move_to(title)
self.play(ReplacementTransform(title, new_title))
# Second do the reverse noise process of removing noise from the image
backward_random_walk = RandomWalk(reverse_chain, axes=axes)
# Sync up forward random walk
synced_animations = []
backward_random_walk_animation = backward_random_walk.animate_random_walk()
for timestep, transition_animation in enumerate(backward_random_walk_animation.animations):
timestep = num_inference_steps - timestep - 1
if timestep == num_inference_steps - 1:
continue
# Make an animation moving the time bar
# time_bar_animation = time_dot.animate.set_value(timestep)
time_bar_animation = current_time.animate.set_value(timestep)
# Make an animation replacing the current image with the timestep image
new_image = ImageMobject(stable_diffusion_images[timestep])
new_image = new_image.set_height(image_mobject.get_height())
new_image.move_to(image_mobject)
image_mobject = new_image
replace_image_animation = AnimationGroup(
FadeOut(image_mobject),
FadeIn(new_image),
lag_ratio=1.0
)
# Sync them together
# synced_animations.append(
self.play(
Succession(
transition_animation,
time_bar_animation,
replace_image_animation,
lag_ratio=0.0
)
)
# self.play(
# Succession(
# *synced_animations,
# )
# )
|
ManimML_helblazer811/examples/basic_neural_network/residual_block.py | from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.math_operation_layer import MathOperationLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 6.0
config.frame_width = 6.0
def make_code_snippet():
code_str = """
nn = NeuralNetwork({
"feed_forward_1": FeedForwardLayer(3),
"feed_forward_2": FeedForwardLayer(3, activation_function="ReLU"),
"feed_forward_3": FeedForwardLayer(3),
"sum_operation": MathOperationLayer("+", activation_function="ReLU"),
})
nn.add_connection("feed_forward_1", "sum_operation")
self.play(nn.make_forward_pass_animation())
"""
code = Code(
code=code_str,
tab_width=4,
background_stroke_width=1,
background_stroke_color=WHITE,
insert_line_no=False,
style="monokai",
# background="window",
language="py",
)
code.scale(0.38)
return code
class CombinedScene(ThreeDScene):
def construct(self):
# Add the network
nn = NeuralNetwork({
"feed_forward_1": FeedForwardLayer(3),
"feed_forward_2": FeedForwardLayer(3, activation_function="ReLU"),
"feed_forward_3": FeedForwardLayer(3),
"sum_operation": MathOperationLayer("+", activation_function="ReLU"),
},
layer_spacing=0.38
)
# Make connections
input_blank_dot = Dot(
nn.input_layers_dict["feed_forward_1"].get_left() - np.array([0.65, 0.0, 0.0])
)
nn.add_connection(input_blank_dot, "feed_forward_1", arc_direction="straight")
nn.add_connection("feed_forward_1", "sum_operation")
output_blank_dot = Dot(
nn.input_layers_dict["sum_operation"].get_right() + np.array([0.65, 0.0, 0.0])
)
nn.add_connection("sum_operation", output_blank_dot, arc_direction="straight")
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Make code snippet
code = make_code_snippet()
code.next_to(nn, DOWN)
self.add(code)
# Group it all
group = Group(nn, code)
group.move_to(ORIGIN)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.wait(1)
self.play(forward_pass) |
ManimML_helblazer811/examples/basic_neural_network/basic_neural_network.py | from manim import *
from manim_ml.neural_network.layers import FeedForwardLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
class NeuralNetworkScene(Scene):
"""Test Scene for the Neural Network"""
def construct(self):
# Make the Layer object
layers = [FeedForwardLayer(3), FeedForwardLayer(5), FeedForwardLayer(3)]
nn = NeuralNetwork(layers)
nn.scale(2)
nn.move_to(ORIGIN)
# Make Animation
self.add(nn)
# self.play(Create(nn))
forward_propagation_animation = nn.make_forward_pass_animation(
run_time=5, passing_flash=True
)
self.play(forward_propagation_animation)
|
ManimML_helblazer811/examples/code_snippet/vae_code_landscape.py | from manim import *
from manim_ml.neural_network.layers import FeedForwardLayer, ImageLayer, EmbeddingLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
from PIL import Image
import numpy as np
config.pixel_height = 720
config.pixel_width = 720
config.frame_height = 6.0
config.frame_width = 6.0
class VAECodeSnippetScene(Scene):
def make_code_snippet(self):
code_str = """
# Make Neural Network
nn = NeuralNetwork([
ImageLayer(numpy_image, height=1.2),
FeedForwardLayer(5),
FeedForwardLayer(3),
EmbeddingLayer(),
FeedForwardLayer(3),
FeedForwardLayer(5),
ImageLayer(numpy_image, height=1.2),
], layer_spacing=0.1)
# Play animation
self.play(nn.make_forward_pass_animation())
"""
code = Code(
code=code_str,
tab_width=4,
background_stroke_width=1,
# background_stroke_color=WHITE,
insert_line_no=False,
background="window",
# font="Monospace",
style="one-dark",
language="py",
)
return code
def construct(self):
image = Image.open("../../tests/images/image.jpeg")
numpy_image = np.asarray(image)
embedding_layer = EmbeddingLayer(dist_theme="ellipse", point_radius=0.04).scale(
1.0
)
# Make nn
nn = NeuralNetwork(
[
ImageLayer(numpy_image, height=1.2),
FeedForwardLayer(5),
FeedForwardLayer(3),
embedding_layer,
FeedForwardLayer(3),
FeedForwardLayer(5),
ImageLayer(numpy_image, height=1.2),
],
layer_spacing=0.1,
)
nn.scale(1.1)
# Center the nn
nn.move_to(ORIGIN)
# nn.rotate(-PI/2)
# nn.all_layers[0].image_mobject.rotate(PI/2)
# nn.all_layers[0].image_mobject.shift([0, -0.4, 0])
# nn.all_layers[-1].image_mobject.rotate(PI/2)
# nn.all_layers[-1].image_mobject.shift([0, -0.4, 0])
nn.shift([0, -1.4, 0])
self.add(nn)
# Make code snippet
code_snippet = self.make_code_snippet()
code_snippet.scale(0.52)
code_snippet.shift([0, 1.25, 0])
# code_snippet.shift([-1.25, 0, 0])
self.add(code_snippet)
# Play animation
self.play(nn.make_forward_pass_animation(), run_time=10)
if __name__ == "__main__":
"""Render all scenes"""
# Neural Network
nn_scene = VAECodeSnippetScene()
nn_scene.render()
|
ManimML_helblazer811/examples/code_snippet/image_nn_code_snippet.py | from manim import *
from manim_ml.neural_network.layers import FeedForwardLayer, ImageLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
from PIL import Image
import numpy as np
config.pixel_height = 720
config.pixel_width = 1280
config.frame_height = 6.0
config.frame_width = 6.0
class ImageNeuralNetworkScene(Scene):
def make_code_snippet(self):
code_str = """
# Make image object
image = Image.open('images/image.jpeg')
numpy_image = np.asarray(image)
# Make Neural Network
layers = [
ImageLayer(numpy_image, height=1.4),
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(3)
]
nn = NeuralNetwork(layers)
self.add(nn)
# Play animation
self.play(
nn.make_forward_pass_animation()
)
"""
code = Code(
code=code_str,
tab_width=4,
background_stroke_width=1,
background_stroke_color=WHITE,
insert_line_no=False,
style="monokai",
# background="window",
language="py",
)
code.scale(0.2)
return code
def construct(self):
image = Image.open("../../tests/images/image.jpeg")
numpy_image = np.asarray(image)
# Make nn
layers = [
ImageLayer(numpy_image, height=1.4),
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(3),
FeedForwardLayer(6),
]
nn = NeuralNetwork(layers)
nn.scale(0.9)
# Center the nn
nn.move_to(ORIGIN)
nn.rotate(-PI / 2)
nn.layers[0].image_mobject.rotate(PI / 2)
nn.layers[0].image_mobject.shift([0, -0.4, 0])
nn.shift([1.5, 0.3, 0])
self.add(nn)
# Make code snippet
code_snippet = self.make_code_snippet()
code_snippet.scale(1.9)
code_snippet.shift([-1.25, 0, 0])
self.add(code_snippet)
# Play animation
self.play(nn.make_forward_pass_animation(run_time=10))
if __name__ == "__main__":
"""Render all scenes"""
# Feed Forward Neural Network
ffnn_scene = FeedForwardNeuralNetworkScene()
ffnn_scene.render()
# Neural Network
nn_scene = NeuralNetworkScene()
nn_scene.render()
|
ManimML_helblazer811/examples/code_snippet/vae_nn_code_snippet.py | from manim import *
from manim_ml.neural_network.layers import FeedForwardLayer, ImageLayer, EmbeddingLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
from PIL import Image
import numpy as np
config.pixel_height = 720
config.pixel_width = 1280
config.frame_height = 6.0
config.frame_width = 6.0
class VAECodeSnippetScene(Scene):
def make_code_snippet(self):
code_str = """
# Make image object
image = Image.open('images/image.jpeg')
numpy_image = np.asarray(image)
# Make Neural Network
nn = NeuralNetwork([
ImageLayer(numpy_image, height=1.2),
FeedForwardLayer(5),
FeedForwardLayer(3),
EmbeddingLayer(),
FeedForwardLayer(3),
FeedForwardLayer(5),
ImageLayer(numpy_image, height=1.2),
], layer_spacing=0.1)
self.add(nn)
# Play animation
self.play(
nn.make_forward_pass_animation()
)
"""
code = Code(
code=code_str,
tab_width=4,
background_stroke_width=1,
# background_stroke_color=WHITE,
insert_line_no=False,
background="window",
# font="Monospace",
style="one-dark",
language="py",
)
code.scale(0.2)
return code
def construct(self):
image = Image.open("../../tests/images/image.jpeg")
numpy_image = np.asarray(image)
embedding_layer = EmbeddingLayer(dist_theme="ellipse", point_radius=0.04).scale(
1.0
)
# Make nn
nn = NeuralNetwork(
[
ImageLayer(numpy_image, height=1.0),
FeedForwardLayer(5),
FeedForwardLayer(3),
embedding_layer,
FeedForwardLayer(3),
FeedForwardLayer(5),
ImageLayer(numpy_image, height=1.0),
],
layer_spacing=0.1,
)
nn.scale(0.65)
# Center the nn
nn.move_to(ORIGIN)
nn.rotate(-PI / 2)
nn.all_layers[0].image_mobject.rotate(PI / 2)
# nn.all_layers[0].image_mobject.shift([0, -0.4, 0])
nn.all_layers[-1].image_mobject.rotate(PI / 2)
# nn.all_layers[-1].image_mobject.shift([0, -0.4, 0])
nn.shift([1.5, 0.0, 0])
self.add(nn)
# Make code snippet
code_snippet = self.make_code_snippet()
code_snippet.scale(1.9)
code_snippet.shift([-1.25, 0, 0])
self.add(code_snippet)
# Play animation
self.play(nn.make_forward_pass_animation(), run_time=10)
if __name__ == "__main__":
"""Render all scenes"""
# Neural Network
nn_scene = VAECodeSnippetScene()
nn_scene.render()
|
ManimML_helblazer811/examples/paper_visualizations/oracle_guidance/oracle_guidance.py | """
Here is a animated explanatory figure for the "Oracle Guided Image Synthesis with Relative Queries" paper.
"""
from pathlib import Path
from manim import *
from manim_ml.neural_network.layers import triplet
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.paired_query import PairedQueryLayer
from manim_ml.neural_network.layers.triplet import TripletLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
from manim_ml.neural_network.layers import FeedForwardLayer, EmbeddingLayer
from manim_ml.neural_network.layers.util import get_connective_layer
import os
from manim_ml.utils.mobjects.probability import GaussianDistribution
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 6.0
config.frame_width = 6.0
ROOT_DIR = Path(__file__).parents[3]
class Localizer:
"""
Holds the localizer object, which contains the queries, images, etc.
needed to represent a localization run.
"""
def __init__(self, axes):
# Set dummy values for these
self.index = -1
self.axes = axes
self.num_queries = 3
self.assets_path = ROOT_DIR / "assets/oracle_guidance"
self.ground_truth_image_path = self.assets_path / "ground_truth.jpg"
self.ground_truth_location = np.array([2, 3])
# Prior distribution
print("initial gaussian")
self.prior_distribution = GaussianDistribution(
self.axes,
mean=np.array([0.0, 0.0]),
cov=np.array([[3, 0], [0, 3]]),
dist_theme="ellipse",
color=GREEN,
)
# Define the query images and embedded locations
# Contains image paths [(positive_path, negative_path), ...]
self.query_image_paths = [
(
os.path.join(self.assets_path, "positive_1.jpg"),
os.path.join(self.assets_path, "negative_1.jpg"),
),
(
os.path.join(self.assets_path, "positive_2.jpg"),
os.path.join(self.assets_path, "negative_2.jpg"),
),
(
os.path.join(self.assets_path, "positive_3.jpg"),
os.path.join(self.assets_path, "negative_3.jpg"),
),
]
# Contains 2D locations for each image [([2, 3], [2, 4]), ...]
self.query_locations = [
(np.array([-1, -1]), np.array([1, 1])),
(np.array([1, -1]), np.array([-1, 1])),
(np.array([0.3, -0.6]), np.array([-0.5, 0.7])),
]
# Make the covariances for each query
self.query_covariances = [
(np.array([[0.3, 0], [0.0, 0.2]]), np.array([[0.2, 0], [0.0, 0.2]])),
(np.array([[0.2, 0], [0.0, 0.2]]), np.array([[0.2, 0], [0.0, 0.2]])),
(np.array([[0.2, 0], [0.0, 0.2]]), np.array([[0.2, 0], [0.0, 0.2]])),
]
# Posterior distributions over time GaussianDistribution objects
self.posterior_distributions = [
GaussianDistribution(
self.axes,
dist_theme="ellipse",
color=GREEN,
mean=np.array([-0.3, -0.3]),
cov=np.array([[5, -4], [-4, 6]]),
).scale(0.6),
GaussianDistribution(
self.axes,
dist_theme="ellipse",
color=GREEN,
mean=np.array([0.25, -0.25]),
cov=np.array([[3, -2], [-2, 4]]),
).scale(0.35),
GaussianDistribution(
self.axes,
dist_theme="ellipse",
color=GREEN,
mean=np.array([0.4, -0.35]),
cov=np.array([[1, 0], [0, 1]]),
).scale(0.3),
]
# Some assumptions
assert len(self.query_locations) == len(self.query_image_paths)
assert len(self.query_locations) == len(self.posterior_distributions)
def __iter__(self):
return self
def __next__(self):
"""Steps through each localization time instance"""
if self.index < len(self.query_image_paths):
self.index += 1
else:
raise StopIteration
# Return query_paths, query_locations, posterior
out_tuple = (
self.query_image_paths[self.index],
self.query_locations[self.index],
self.posterior_distributions[self.index],
self.query_covariances[self.index],
)
return out_tuple
class OracleGuidanceVisualization(Scene):
def __init__(self):
super().__init__()
self.neural_network, self.embedding_layer = self.make_vae()
self.localizer = iter(Localizer(self.embedding_layer.axes))
self.subtitle = None
self.title = None
# Set image paths
# VAE embedding animation image paths
self.assets_path = ROOT_DIR / "assets/oracle_guidance"
self.input_embed_image_path = os.path.join(self.assets_path, "input_image.jpg")
self.output_embed_image_path = os.path.join(
self.assets_path, "output_image.jpg"
)
def make_vae(self):
"""Makes a simple VAE architecture"""
embedding_layer = EmbeddingLayer(dist_theme="ellipse")
self.encoder = NeuralNetwork(
[
FeedForwardLayer(5),
FeedForwardLayer(3),
embedding_layer,
]
)
self.decoder = NeuralNetwork(
[
FeedForwardLayer(3),
FeedForwardLayer(5),
]
)
neural_network = NeuralNetwork([self.encoder, self.decoder])
neural_network.shift(DOWN * 0.4)
return neural_network, embedding_layer
@override_animation(Create)
def _create_animation(self):
animation_group = AnimationGroup(Create(self.neural_network))
return animation_group
def insert_at_start(self, layer, create=True):
"""Inserts a layer at the beggining of the network"""
# Note: Does not move the rest of the network
current_first_layer = self.encoder.all_layers[0]
# Get connective layer
connective_layer = get_connective_layer(layer, current_first_layer)
# Insert both layers
self.encoder.all_layers.insert(0, layer)
self.encoder.all_layers.insert(1, connective_layer)
# Move layers to the correct location
# TODO: Fix this cause its hacky
layer.shift(DOWN * 0.4)
layer.shift(LEFT * 2.35)
# Make insert animation
if not create:
animation_group = AnimationGroup(Create(connective_layer))
else:
animation_group = AnimationGroup(Create(layer), Create(connective_layer))
self.play(animation_group)
def remove_start_layer(self):
"""Removes the first layer of the network"""
first_layer = self.encoder.all_layers.remove_at_index(0)
first_connective = self.encoder.all_layers.remove_at_index(0)
# Make remove animations
animation_group = AnimationGroup(
FadeOut(first_layer), FadeOut(first_connective)
)
self.play(animation_group)
def insert_at_end(self, layer):
"""Inserts the given layer at the end of the network"""
current_last_layer = self.decoder.all_layers[-1]
# Get connective layer
connective_layer = get_connective_layer(current_last_layer, layer)
# Insert both layers
self.decoder.all_layers.add(connective_layer)
self.decoder.all_layers.add(layer)
# Move layers to the correct location
# TODO: Fix this cause its hacky
layer.shift(DOWN * 0.4)
layer.shift(RIGHT * 2.35)
# Make insert animation
animation_group = AnimationGroup(Create(layer), Create(connective_layer))
self.play(animation_group)
def remove_end_layer(self):
"""Removes the lsat layer of the network"""
first_layer = self.decoder.all_layers.remove_at_index(-1)
first_connective = self.decoder.all_layers.remove_at_index(-1)
# Make remove animations
animation_group = AnimationGroup(
FadeOut(first_layer), FadeOut(first_connective)
)
self.play(animation_group)
def change_title(self, text, title_location=np.array([0, 1.25, 0]), font_size=24):
"""Changes title to the given text"""
if self.title is None:
self.title = Text(text, font_size=font_size)
self.title.move_to(title_location)
self.add(self.title)
self.play(Write(self.title), run_time=1)
self.wait(1)
return
self.play(Unwrite(self.title))
new_title = Text(text, font_size=font_size)
new_title.move_to(self.title)
self.title = new_title
self.wait(0.1)
self.play(Write(self.title))
def change_subtitle(self, text, title_location=np.array([0, 0.9, 0]), font_size=20):
"""Changes subtitle to the next algorithm step"""
if self.subtitle is None:
self.subtitle = Text(text, font_size=font_size)
self.subtitle.move_to(title_location)
self.play(Write(self.subtitle))
return
self.play(Unwrite(self.subtitle))
new_title = Text(text, font_size=font_size)
new_title.move_to(title_location)
self.subtitle = new_title
self.wait(0.1)
self.play(Write(self.subtitle))
def make_embed_input_image_animation(self, input_image_path, output_image_path):
"""Makes embed input image animation"""
# insert the input image at the begginging
input_image_layer = ImageLayer.from_path(input_image_path)
input_image_layer.scale(0.6)
current_first_layer = self.encoder.all_layers[0]
# Get connective layer
connective_layer = get_connective_layer(input_image_layer, current_first_layer)
# Insert both layers
self.encoder.all_layers.insert(0, input_image_layer)
self.encoder.all_layers.insert(1, connective_layer)
# Move layers to the correct location
# TODO: Fix this cause its hacky
input_image_layer.shift(DOWN * 0.4)
input_image_layer.shift(LEFT * 2.35)
# Play full forward pass
forward_pass = self.neural_network.make_forward_pass_animation(
layer_args={
self.encoder: {
self.embedding_layer: {
"dist_args": {
"cov": np.array([[1.5, 0], [0, 1.5]]),
"mean": np.array([0.5, 0.5]),
"dist_theme": "ellipse",
"color": ORANGE,
}
}
}
}
)
self.play(forward_pass)
# insert the output image at the end
output_image_layer = ImageLayer.from_path(output_image_path)
output_image_layer.scale(0.6)
self.insert_at_end(output_image_layer)
# Remove the input and output layers
self.remove_start_layer()
self.remove_end_layer()
# Remove the latent distribution
self.play(FadeOut(self.embedding_layer.latent_distribution))
def make_localization_time_step(self, old_posterior):
"""
Performs one query update for the localization procedure
Procedure:
a. Embed query input images
b. Oracle is asked a query
c. Query is embedded
d. Show posterior update
e. Show current recomendation
"""
# Helper functions
def embed_query_to_latent_space(query_locations, query_covariance):
"""Makes animation for a paired query"""
# Assumes first layer of neural network is a PairedQueryLayer
# Make the embedding animation
# Wait
self.play(Wait(1))
# Embed the query to latent space
self.change_subtitle("3. Embed the Query in Latent Space")
# Make forward pass animation
self.embedding_layer.paired_query_mode = True
# Make embedding embed query animation
embed_query_animation = self.encoder.make_forward_pass_animation(
run_time=5,
layer_args={
self.embedding_layer: {
"positive_dist_args": {
"cov": query_covariance[0],
"mean": query_locations[0],
"dist_theme": "ellipse",
"color": BLUE,
},
"negative_dist_args": {
"cov": query_covariance[1],
"mean": query_locations[1],
"dist_theme": "ellipse",
"color": RED,
},
}
},
)
self.play(embed_query_animation)
# Access localizer information
query_paths, query_locations, posterior_distribution, query_covariances = next(
self.localizer
)
positive_path, negative_path = query_paths
# Make subtitle for present user with query
self.change_subtitle("2. Present User with Query")
# Insert the layer into the encoder
query_layer = PairedQueryLayer.from_paths(
positive_path, negative_path, grayscale=False
)
query_layer.scale(0.5)
self.insert_at_start(query_layer)
# Embed query to latent space
query_to_latent_space_animation = embed_query_to_latent_space(
query_locations, query_covariances
)
# Wait
self.play(Wait(1))
# Update the posterior
self.change_subtitle("4. Update the Posterior")
# Remove the old posterior
self.play(ReplacementTransform(old_posterior, posterior_distribution))
"""
self.play(
self.embedding_layer.remove_gaussian_distribution(self.localizer.posterior_distribution)
)
"""
# self.embedding_layer.add_gaussian_distribution(posterior_distribution)
# self.localizer.posterior_distribution = posterior_distribution
# Remove query layer
self.remove_start_layer()
# Remove query ellipses
fade_outs = []
for dist in self.embedding_layer.gaussian_distributions:
self.embedding_layer.gaussian_distributions.remove(dist)
fade_outs.append(FadeOut(dist))
if not len(fade_outs) == 0:
fade_outs = AnimationGroup(*fade_outs)
self.play(fade_outs)
return posterior_distribution
def make_generate_estimate_animation(self, estimate_image_path):
"""Makes the generate estimate animation"""
# Change embedding layer mode
self.embedding_layer.paired_query_mode = False
# Sample from posterior distribution
# self.embedding_layer.latent_distribution = self.localizer.posterior_distribution
emb_to_ff_ind = self.neural_network.all_layers.index_of(self.encoder)
embedding_to_ff = self.neural_network.all_layers[emb_to_ff_ind + 1]
self.play(embedding_to_ff.make_forward_pass_animation())
# Pass through decoder
self.play(self.decoder.make_forward_pass_animation(), run_time=1)
# Create Image layer after the decoder
output_image_layer = ImageLayer.from_path(estimate_image_path)
output_image_layer.scale(0.5)
self.insert_at_end(output_image_layer)
# Wait
self.wait(1)
# Remove the image at the end
print(self.neural_network)
self.remove_end_layer()
def make_triplet_forward_animation(self):
"""Make triplet forward animation"""
# Make triplet layer
anchor_path = os.path.join(self.assets_path, "anchor.jpg")
positive_path = os.path.join(self.assets_path, "positive.jpg")
negative_path = os.path.join(self.assets_path, "negative.jpg")
triplet_layer = TripletLayer.from_paths(
anchor_path,
positive_path,
negative_path,
grayscale=False,
font_size=100,
buff=1.05,
)
triplet_layer.scale(0.10)
self.insert_at_start(triplet_layer)
# Make latent triplet animation
self.play(
self.encoder.make_forward_pass_animation(
layer_args={
self.embedding_layer: {
"triplet_args": {
"anchor_dist": {
"cov": np.array([[0.3, 0], [0, 0.3]]),
"mean": np.array([0.7, 1.4]),
"dist_theme": "ellipse",
"color": BLUE,
},
"positive_dist": {
"cov": np.array([[0.2, 0], [0, 0.2]]),
"mean": np.array([0.8, -0.4]),
"dist_theme": "ellipse",
"color": GREEN,
},
"negative_dist": {
"cov": np.array([[0.4, 0], [0, 0.25]]),
"mean": np.array([-1, -1.2]),
"dist_theme": "ellipse",
"color": RED,
},
}
}
},
run_time=3,
)
)
def construct(self):
"""
Makes the whole visualization.
1. Create the Architecture
a. Create the traditional VAE architecture with images
2. The Localization Procedure
3. The Training Procedure
"""
# 1. Create the Architecture
self.neural_network.scale(1.2)
create_vae = Create(self.neural_network)
self.play(create_vae, run_time=3)
# Make changing title
self.change_title("Oracle Guided Image Synthesis\n with Relative Queries")
# 2. The Localization Procedure
self.change_title("The Localization Procedure")
# Make algorithm subtitle
self.change_subtitle("Algorithm Steps")
# Wait
self.play(Wait(1))
# Make prior distribution subtitle
self.change_subtitle("1. Calculate Prior Distribution")
# Draw the prior distribution
self.play(Create(self.localizer.prior_distribution))
old_posterior = self.localizer.prior_distribution
# For N queries update the posterior
for query_index in range(self.localizer.num_queries):
# Make localization iteration
old_posterior = self.make_localization_time_step(old_posterior)
self.play(Wait(1))
if not query_index == self.localizer.num_queries - 1:
# Repeat
self.change_subtitle("5. Repeat")
# Wait a second
self.play(Wait(1))
# Generate final estimate
self.change_subtitle("5. Generate Estimate Image")
# Generate an estimate image
estimate_image_path = os.path.join(self.assets_path, "estimate_image.jpg")
self.make_generate_estimate_animation(estimate_image_path)
self.wait(1)
# Remove old posterior
self.play(FadeOut(old_posterior))
# 3. The Training Procedure
self.change_title("The Training Procedure")
# Make training animation
# Do an Image forward pass
self.change_subtitle("1. Unsupervised Image Reconstruction")
self.make_embed_input_image_animation(
self.input_embed_image_path, self.output_embed_image_path
)
self.wait(1)
# Do triplet forward pass
self.change_subtitle("2. Triplet Loss in Latent Space")
self.make_triplet_forward_animation()
self.wait(1)
|
ManimML_helblazer811/examples/mcmc/warmup_mcmc.py | from manim import *
import scipy.stats
from manim_ml.diffusion.mcmc import MCMCAxes
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('dark_background')
# Make the specific scene
config.pixel_height = 720
config.pixel_width = 720
config.frame_height = 7.0
config.frame_width = 7.0
class MCMCWarmupScene(Scene):
def construct(self):
# Define the Gaussian Mixture likelihood
def gaussian_mm_logpdf(x):
"""Gaussian Mixture Model Log PDF"""
# Choose two arbitrary Gaussians
# Big Gaussian
big_gaussian_pdf = scipy.stats.multivariate_normal(
mean=[-0.5, -0.5],
cov=[1.0, 1.0]
).pdf(x)
# Little Gaussian
little_gaussian_pdf = scipy.stats.multivariate_normal(
mean=[2.3, 1.9],
cov=[0.3, 0.3]
).pdf(x)
# Sum their likelihoods and take the log
logpdf = np.log(big_gaussian_pdf + little_gaussian_pdf)
return logpdf
# Generate a bunch of true samples
true_samples = []
# Generate samples for little gaussian
little_gaussian_samples = np.random.multivariate_normal(
mean=[2.3, 1.9],
cov=[[0.3, 0.0], [0.0, 0.3]],
size=(10000)
)
big_gaussian_samples = np.random.multivariate_normal(
mean=[-0.5, -0.5],
cov=[[1.0, 0.0], [0.0, 1.0]],
size=(10000)
)
true_samples = np.concatenate((little_gaussian_samples, big_gaussian_samples))
# Make the MCMC axes
axes = MCMCAxes(
x_range=[-5, 5],
y_range=[-5, 5],
x_length=7.0,
y_length=7.0
)
axes.move_to(ORIGIN)
self.play(
Create(axes)
)
# Make the chain sampling animation
chain_sampling_animation = axes.visualize_metropolis_hastings_chain_sampling(
log_prob_fn=gaussian_mm_logpdf,
true_samples=true_samples,
sampling_kwargs={
"iterations": 2000,
"warm_up": 50,
"initial_location": np.array([-3.5, 3.5]),
"sampling_seed": 4
},
)
self.play(chain_sampling_animation)
self.wait(3)
|
ManimML_helblazer811/examples/disentanglement/disentanglement.py | """This module is dedicated to visualizing VAE disentanglement"""
from pathlib import Path
from manim import *
from manim_ml.neural_network.layers import FeedForwardLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
import pickle
ROOT_DIR = Path(__file__).parents[2]
def construct_image_mobject(input_image, height=2.3):
"""Constructs an ImageMobject from a numpy grayscale image"""
# Convert image to rgb
if len(input_image.shape) == 2:
input_image = np.repeat(input_image, 3, axis=0)
input_image = np.rollaxis(input_image, 0, start=3)
# Make the ImageMobject
image_mobject = ImageMobject(input_image, image_mode="RGB")
image_mobject.set_resampling_algorithm(RESAMPLING_ALGORITHMS["nearest"])
image_mobject.height = height
return image_mobject
class DisentanglementVisualization(VGroup):
def __init__(
self,
model_path=ROOT_DIR
/ "examples/variational_autoencoder/autoencoder_models/saved_models/model_dim2.pth",
image_height=0.35,
):
self.model_path = model_path
self.image_height = image_height
# Load disentanglement image objects
with open(
ROOT_DIR
/ "examples/variational_autoencoder/autoencoder_models/disentanglement.pkl",
"rb",
) as f:
self.image_handler = pickle.load(f)
def make_disentanglement_generation_animation(self):
animation_list = []
for image_index, image in enumerate(self.image_handler["images"]):
image_mobject = construct_image_mobject(image, height=self.image_height)
r, c = self.image_handler["bin_indices"][image_index]
# Move the image to the correct location
r_offset = -1.2
c_offset = 0.25
image_location = [
c_offset + c * self.image_height,
r_offset + r * self.image_height,
0,
]
image_mobject.move_to(image_location)
animation_list.append(FadeIn(image_mobject))
generation_animation = AnimationGroup(*animation_list[::-1], lag_ratio=1.0)
return generation_animation
config.pixel_height = 720
config.pixel_width = 1280
config.frame_height = 5.0
config.frame_width = 5.0
class DisentanglementScene(Scene):
"""Disentanglement Scene Object"""
def _construct_embedding(self, point_color=BLUE, dot_radius=0.05):
"""Makes a Gaussian-like embedding"""
embedding = VGroup()
# Sample points from a Gaussian
num_points = 200
standard_deviation = [0.6, 0.8]
mean = [0, 0]
points = np.random.normal(mean, standard_deviation, size=(num_points, 2))
# Make an axes
embedding.axes = Axes(
x_range=[-3, 3],
y_range=[-3, 3],
x_length=2.2,
y_length=2.2,
tips=False,
)
# Add each point to the axes
self.point_dots = VGroup()
for point in points:
point_location = embedding.axes.coords_to_point(*point)
dot = Dot(point_location, color=point_color, radius=dot_radius / 2)
self.point_dots.add(dot)
embedding.add(self.point_dots)
return embedding
def construct(self):
# Make the VAE decoder
vae_decoder = NeuralNetwork(
[
FeedForwardLayer(3),
FeedForwardLayer(5),
],
layer_spacing=0.55,
)
vae_decoder.shift([-0.55, 0, 0])
self.play(Create(vae_decoder), run_time=1)
# Make the embedding
embedding = self._construct_embedding()
embedding.scale(0.9)
embedding.move_to(vae_decoder.get_left())
embedding.shift([-0.85, 0, 0])
self.play(Create(embedding))
# Make disentanglment visualization
disentanglement = DisentanglementVisualization()
disentanglement_animation = (
disentanglement.make_disentanglement_generation_animation()
)
self.play(disentanglement_animation, run_time=3)
self.play(Wait(2))
|
ManimML_helblazer811/examples/interpolation/interpolation.py | """Visualization of VAE Interpolation"""
import sys
import os
sys.path.append(os.environ["PROJECT_ROOT"])
from manim import *
import pickle
import numpy as np
import manim_ml.neural_network as neural_network
import examples.variational_autoencoder.variational_autoencoder as variational_autoencoder
"""
The VAE Scene for the twitter video.
"""
config.pixel_height = 720
config.pixel_width = 1280
config.frame_height = 6.0
config.frame_width = 6.0
# Set random seed so point distribution is constant
np.random.seed(1)
class InterpolationScene(MovingCameraScene):
"""Scene object for a Variational Autoencoder and Autoencoder"""
def construct(self):
# Set Scene config
vae = variational_autoencoder.VariationalAutoencoder(
dot_radius=0.035, layer_spacing=0.5
)
vae.move_to(ORIGIN)
vae.encoder.shift(LEFT * 0.5)
vae.decoder.shift(RIGHT * 0.5)
mnist_image_handler = variational_autoencoder.MNISTImageHandler()
image_pair = mnist_image_handler.image_pairs[3]
# Make forward pass animation and DO NOT run it
forward_pass_animation = vae.make_forward_pass_animation(image_pair)
# Make the interpolation animation
interpolation_images = mnist_image_handler.interpolation_images
interpolation_animation = vae.make_interpolation_animation(interpolation_images)
embedding_zoom_animation = self.camera.auto_zoom(
[vae.embedding, vae.decoder, vae.output_image], margin=0.5
)
# Make animations
forward_pass_animations = []
for i in range(7):
anim = vae.decoder.make_forward_propagation_animation(run_time=0.5)
forward_pass_animations.append(anim)
forward_pass_animation_group = AnimationGroup(
*forward_pass_animations, lag_ratio=1.0
)
# Make forward pass animations
self.play(Create(vae), run_time=1.5)
self.play(FadeOut(vae.encoder), run_time=1.0)
self.play(embedding_zoom_animation, run_time=1.5)
interpolation_animation = AnimationGroup(
forward_pass_animation_group, interpolation_animation
)
self.play(interpolation_animation, run_time=9.0)
|
ManimML_helblazer811/examples/logo/logo.py | """
Logo for Manim Machine Learning
"""
from manim import *
import manim_ml
manim_ml.config.color_scheme = "light_mode"
from manim_ml.neural_network.architectures.feed_forward import FeedForwardNeuralNetwork
config.pixel_height = 1000
config.pixel_width = 1000
config.frame_height = 4.0
config.frame_width = 4.0
class ManimMLLogo(Scene):
def construct(self):
self.text = Text("ManimML", color=manim_ml.config.color_scheme.text_color)
self.text.scale(1.0)
self.neural_network = FeedForwardNeuralNetwork(
[3, 5, 3, 6, 3], layer_spacing=0.3, node_color=BLUE
)
self.neural_network.scale(1.0)
self.neural_network.move_to(self.text.get_bottom())
self.neural_network.shift(1.25 * DOWN)
self.logo_group = Group(self.text, self.neural_network)
self.logo_group.scale(1.0)
self.logo_group.move_to(ORIGIN)
self.play(Write(self.text), run_time=1.0)
self.play(Create(self.neural_network), run_time=3.0)
# self.surrounding_rectangle = SurroundingRectangle(self.logo_group, buff=0.3, color=BLUE)
underline = Underline(self.text, color=BLACK)
animation_group = AnimationGroup(
self.neural_network.make_forward_pass_animation(run_time=5),
Create(underline),
# Create(self.surrounding_rectangle)
)
# self.surrounding_rectangle = SurroundingRectangle(self.logo_group, buff=0.3, color=BLUE)
underline = Underline(self.text, color=BLACK)
animation_group = AnimationGroup(
self.neural_network.make_forward_pass_animation(run_time=5),
Create(underline),
# Create(self.surrounding_rectangle)
)
self.play(animation_group, runtime=5.0)
self.wait(5)
|
ManimML_helblazer811/examples/logo/website_logo.py | """
Logo for Manim Machine Learning
"""
from manim import *
from manim_ml.neural_network.neural_network import FeedForwardNeuralNetwork
config.pixel_height = 400
config.pixel_width = 600
config.frame_height = 8.0
config.frame_width = 10.0
class ManimMLLogo(Scene):
def construct(self):
self.neural_network = FeedForwardNeuralNetwork(
[3, 5, 3, 5], layer_spacing=0.6, node_color=BLUE, edge_width=6
)
self.neural_network.scale(3)
self.neural_network.move_to(ORIGIN)
self.play(Create(self.neural_network))
# self.surrounding_rectangle = SurroundingRectangle(self.logo_group, buff=0.3, color=BLUE)
animation_group = AnimationGroup(
self.neural_network.make_forward_pass_animation(run_time=5),
)
self.play(animation_group)
self.wait(5)
|
ManimML_helblazer811/examples/logo/wide_logo.py | """
Logo for Manim Machine Learning
"""
from manim import *
import manim_ml
manim_ml.config.color_scheme = "light_mode"
from manim_ml.neural_network.architectures.feed_forward import FeedForwardNeuralNetwork
config.pixel_height = 1000
config.pixel_width = 2000
config.frame_height = 4.0
config.frame_width = 8.0
class ManimMLLogo(Scene):
def construct(self):
self.text = Text("ManimML", color=manim_ml.config.color_scheme.text_color)
self.text.scale(1.0)
self.neural_network = FeedForwardNeuralNetwork(
[3, 5, 3, 6, 3], layer_spacing=0.3, node_color=BLUE
)
self.neural_network.scale(0.8)
self.neural_network.next_to(self.text, RIGHT, buff=0.5)
# self.neural_network.move_to(self.text.get_right())
# self.neural_network.shift(1.25 * DOWN)
self.logo_group = Group(self.text, self.neural_network)
self.logo_group.scale(1.0)
self.logo_group.move_to(ORIGIN)
self.play(Write(self.text), run_time=1.0)
self.play(Create(self.neural_network), run_time=3.0)
# self.surrounding_rectangle = SurroundingRectangle(self.logo_group, buff=0.3, color=BLUE)
underline = Underline(self.text, color=BLACK)
animation_group = AnimationGroup(
self.neural_network.make_forward_pass_animation(run_time=5),
Create(underline),
# Create(self.surrounding_rectangle)
)
# self.surrounding_rectangle = SurroundingRectangle(self.logo_group, buff=0.3, color=BLUE)
underline = Underline(self.text, color=BLACK)
animation_group = AnimationGroup(
self.neural_network.make_forward_pass_animation(run_time=5),
Create(underline),
# Create(self.surrounding_rectangle)
)
self.play(animation_group, runtime=5.0)
self.wait(5)
|
ManimML_helblazer811/examples/lenet/lenet.py | from pathlib import Path
from manim import *
from PIL import Image
import numpy as np
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 20.0
config.frame_width = 20.0
ROOT_DIR = Path(__file__).parents[2]
class CombinedScene(ThreeDScene):
def construct(self):
image = Image.open(ROOT_DIR / "assets/mnist/digit.jpeg")
numpy_image = np.asarray(image)
# Make nn
nn = NeuralNetwork(
[
ImageLayer(numpy_image, height=4.5),
Convolutional2DLayer(1, 28),
Convolutional2DLayer(6, 28, 5),
MaxPooling2DLayer(kernel_size=2),
Convolutional2DLayer(16, 10, 5),
MaxPooling2DLayer(kernel_size=2),
FeedForwardLayer(8),
FeedForwardLayer(3),
FeedForwardLayer(2),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Make code snippet
# code = make_code_snippet()
# code.next_to(nn, DOWN)
# self.add(code)
# Group it all
# group = Group(nn, code)
# group.move_to(ORIGIN)
nn.move_to(ORIGIN)
# Play animation
# forward_pass = nn.make_forward_pass_animation()
# self.wait(1)
# self.play(forward_pass)
|
ManimML_helblazer811/examples/translation_equivariance/translation_equivariance.py | from manim import *
from PIL import Image
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import ThreeDLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 800
config.frame_height = 6.0
config.frame_width = 6.0
class CombinedScene(ThreeDScene):
def construct(self):
image = Image.open("../../assets/doggo.jpeg")
numpy_image = np.asarray(image)
# Rotate the Three D layer position
ThreeDLayer.rotation_angle = 15 * DEGREES
ThreeDLayer.rotation_axis = [1, -1.0, 0]
# Make nn
nn = NeuralNetwork(
[
ImageLayer(numpy_image, height=1.5),
Convolutional2DLayer(1, 7, 7, 3, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 5, 1, 1, filter_spacing=0.18),
],
layer_spacing=0.25,
layout_direction="top_to_bottom",
)
# Center the nn
nn.move_to(ORIGIN)
nn.scale(1.5)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation(
highlight_active_feature_map=True,
)
self.wait(1)
self.play(forward_pass)
|
ManimML_helblazer811/examples/cross_attention_vis/cross_attention_vis.py | """
Here I thought it would be interesting to visualize the cross attention
maps produced by the UNet of a text-to-image diffusion model.
The key thing I want to show is how images and text are broken up into tokens (patches for images),
and those tokens are used to compute a cross attention score, which can be interpreted
as a 2D heatmap over the image patches for each text token.
Necessary operations:
1. [X] Split an image into patches and "expand" the patches outward to highlight that they are
separate.
2. Split text into tokens using the tokenizer and display them.
3. Compute the cross attention scores (using DAAM) for each word/token and display them as a heatmap.
4. Overlay the heatmap over the image patches. Show the overlaying as a transition animation.
"""
import torch
import cv2
from manim import *
import numpy as np
from typing import List
from daam import trace, set_seed
from diffusers import StableDiffusionPipeline
import torchvision
import matplotlib.pyplot as plt
class ImagePatches(Group):
"""Container object for a grid of ImageMobjects."""
def __init__(self, image: ImageMobject, grid_width=4):
"""
Parameters
----------
image : ImageMobject
image to split into patches
grid_width : int, optional
number of patches per row, by default 4
"""
self.image = image
self.grid_width = grid_width
super(Group, self).__init__()
self.patch_dict = self._split_image_into_patches(image, grid_width)
def _split_image_into_patches(self, image, grid_width):
"""Splits the images into a set of patches
Parameters
----------
image : ImageMobject
image to split into patches
grid_width : int
number of patches per row
"""
patch_dict = {}
# Get a pixel array of the image mobject
pixel_array = image.pixel_array
# Get the height and width of the image
height, width = pixel_array.shape[:2]
# Split the image into an equal width grid of patches
assert height == width, "Image must be square"
assert height % grid_width == 0, "Image width must be divisible by grid_width"
patch_width, patch_height = width // grid_width, height // grid_width
for patch_i in range(self.grid_width):
for patch_j in range(self.grid_width):
# Get patch pixels
i_start, i_end = patch_i * patch_width, (patch_i + 1) * patch_width
j_start, j_end = patch_j * patch_height, (patch_j + 1) * patch_height
patch_pixels = pixel_array[i_start:i_end, j_start:j_end, :]
# Make the patch
image_patch = ImageMobject(patch_pixels)
# Move the patch to the correct location on the old image
image_width = image.get_width()
image_center = image.get_center()
image_patch.scale(image_width / grid_width / image_patch.get_width())
patch_manim_space_width = image_patch.get_width()
patch_center = image_center
patch_center += (patch_j - self.grid_width / 2 + 0.5) * patch_manim_space_width * RIGHT
# patch_center = image_center - (patch_i - self.grid_width / 2 + 0.5) * patch_manim_space_width * RIGHT
patch_center -= (patch_i - self.grid_width / 2 + 0.5) * patch_manim_space_width * UP
# + (patch_j - self.grid_width / 2) * patch_height / 2 * UP
image_patch.move_to(patch_center)
self.add(image_patch)
patch_dict[(patch_i, patch_j)] = image_patch
return patch_dict
class ExpandPatches(Animation):
def __init__(self, image_patches: ImagePatches, expansion_scale=2.0):
"""
Parameters
----------
image_patches : ImagePatches
set of image patches
expansion_scale : float, optional
scale factor for expansion, by default 2.0
"""
self.image_patches = image_patches
self.expansion_scale = expansion_scale
super().__init__(image_patches)
def interpolate_submobject(self, submobject, starting_submobject, alpha):
"""
Parameters
----------
submobject : ImageMobject
current patch
starting_submobject : ImageMobject
starting patch
alpha : float
interpolation alpha
"""
patch = submobject
starting_patch_center = starting_submobject.get_center()
image_center = self.image_patches.image.get_center()
# Start image vector
starting_patch_vector = starting_patch_center - image_center
final_image_vector = image_center + starting_patch_vector * self.expansion_scale
# Interpolate vectors
current_location = alpha * final_image_vector + (1 - alpha) * starting_patch_center
# # Need to compute the direction of expansion as the unit vector from the original image center
# # and patch center.
patch.move_to(current_location)
class TokenizedText(Group):
"""Tokenizes the given text and displays the tokens as a list of Text Mobjects."""
def __init__(self, text, tokenizer=None):
self.text = text
if not tokenizer is None:
self.tokenizer = tokenizer
else:
# TODO load default stable diffusion tokenizer here
raise NotImplementedError("Tokenizer must be provided")
self.token_strings = self._tokenize_text(text)
self.token_mobjects = self.make_text_mobjects(self.token_strings)
super(Group, self).__init__()
self.add(*self.token_mobjects)
self.arrange(RIGHT, buff=-0.05, aligned_edge=UP)
token_dict = {}
for token_index, token_string in enumerate(self.token_strings):
token_dict[token_string] = self.token_mobjects[token_index]
self.token_dict = token_dict
def _tokenize_text(self, text):
"""Tokenize the text using the tokenizer.
Parameters
----------
text : str
text to tokenize
"""
tokens = self.tokenizer.tokenize(text)
tokens = [token.replace("</w>", "") for token in tokens]
return tokens
def make_text_mobjects(self, tokens_list: List[str]):
"""Converts the tokens into a list of TextMobjects."""
# Make the tokens
# Insert phantom
tokens_list = ["l" + token + "g" for token in tokens_list]
token_objects = [
Text(token, t2c={'[-1:]': '#000000', '[0:1]': "#000000"}, font_size=64)
for token in tokens_list
]
return token_objects
def compute_stable_diffusion_cross_attention_heatmaps(
pipe,
prompt: str,
seed: int = 2,
map_resolution=(32, 32)
):
"""Computes the cross attention heatmaps for the given prompt.
Parameters
----------
prompt : str
the prompt
seed : int, optional
random seed, by default 0
map_resolution : tuple, optional
resolution to downscale maps to, by default (16, 16)
Returns
-------
_type_
_description_
"""
# Get tokens
tokens = pipe.tokenizer.tokenize(prompt)
tokens = [token.replace("</w>", "") for token in tokens]
# Set torch seeds
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
gen = set_seed(seed) # for reproducibility
heatmap_dict = {}
image = None
with torch.cuda.amp.autocast(dtype=torch.float16), torch.no_grad():
with trace(pipe) as tc:
out = pipe(prompt, num_inference_steps=30, generator=gen)
image = out[0][0]
global_heat_map = tc.compute_global_heat_map()
for token in tokens:
word_heat_map = global_heat_map.compute_word_heat_map(token)
heatmap = word_heat_map.heatmap
# Downscale the heatmap
heatmap = heatmap.unsqueeze(0).unsqueeze(0)
# Save the heatmap
heatmap = torchvision.transforms.Resize(
map_resolution,
interpolation=torchvision.transforms.InterpolationMode.NEAREST
)(heatmap)
heatmap = heatmap.squeeze(0).squeeze(0)
# heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
# plt.imshow(heatmap)
# plt.savefig(f"{token}.png")
# Convert heatmap to rgb color
# normalize
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min())
cmap = plt.get_cmap('inferno')
heatmap = cmap(heatmap) * 255
print(heatmap)
# print(heatmap)
# Make an image mobject for each heatmap
print(heatmap.shape)
heatmap = ImageMobject(heatmap)
heatmap.set_resampling_algorithm(RESAMPLING_ALGORITHMS["nearest"])
heatmap_dict[token] = heatmap
return image, heatmap_dict
# Make the scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 30.0
config.frame_width = 30.0
class StableDiffusionCrossAttentionScene(Scene):
def construct(self):
# Load the pipeline
model_id = 'stabilityai/stable-diffusion-2-base'
pipe = StableDiffusionPipeline.from_pretrained(model_id)
prompt = "Astronaut riding a horse on the moon"
# Compute the images and heatmaps
image, heatmap_dict = compute_stable_diffusion_cross_attention_heatmaps(pipe, prompt)
# 1. Show an image appearing
image_mobject = ImageMobject(image)
image_mobject.shift(DOWN)
image_mobject.scale(0.7)
image_mobject.shift(LEFT * 7)
self.add(image_mobject)
# 1. Show a text prompt and the corresponding generated image
paragraph_prompt = '"Astronaut riding a\nhorse on the moon"'
text_prompt = Paragraph(paragraph_prompt, alignment="center", font_size=64)
text_prompt.next_to(image_mobject, UP, buff=1.5)
prompt_title = Text("Prompt", font_size=72)
prompt_title.next_to(text_prompt, UP, buff=0.5)
self.play(Create(prompt_title))
self.play(Create(text_prompt))
# Make an arrow from the text to the image
arrow = Arrow(text_prompt.get_bottom(), image_mobject.get_top(), buff=0.5)
self.play(GrowArrow(arrow))
self.wait(1)
self.remove(arrow)
# 2. Show the image being split into patches
# Make the patches
patches = ImagePatches(image_mobject, grid_width=32)
# Expand and contract
self.remove(image_mobject)
self.play(ExpandPatches(patches, expansion_scale=1.2))
# Play arrows for visual tokens
visual_token_label = Text("Visual Tokens", font_size=72)
visual_token_label.next_to(image_mobject, DOWN, buff=1.5)
# Draw arrows
arrow_animations = []
arrows = []
for i in range(patches.grid_width):
patch = patches.patch_dict[(patches.grid_width - 1, i)]
arrow = Line(visual_token_label.get_top(), patch.get_bottom(), buff=0.3)
arrow_animations.append(
Create(
arrow
)
)
arrows.append(arrow)
self.play(AnimationGroup(*arrow_animations, FadeIn(visual_token_label), lag_ratio=0))
self.wait(1)
self.play(FadeOut(*arrows, visual_token_label))
self.play(ExpandPatches(patches, expansion_scale=1/1.2))
# 3. Show the text prompt and image being tokenized.
tokenized_text = TokenizedText(prompt, pipe.tokenizer)
tokenized_text.shift(RIGHT * 7)
tokenized_text.shift(DOWN * 7.5)
self.play(FadeIn(tokenized_text))
# Plot token label
token_label = Text("Textual Tokens", font_size=72)
token_label.next_to(tokenized_text, DOWN, buff=0.5)
arrow_animations = []
self.play(Create(token_label))
arrows = []
for token_name, token_mobject in tokenized_text.token_dict.items():
arrow = Line(token_label.get_top(), token_mobject.get_bottom(), buff=0.3)
arrow_animations.append(
Create(
arrow
)
)
arrows.append(arrow)
self.play(AnimationGroup(*arrow_animations, lag_ratio=0))
self.wait(1)
self.play(FadeOut(*arrows, token_label))
# 4. Show the heatmap corresponding to the cross attention map for each image.
surrounding_rectangle = SurroundingRectangle(tokenized_text.token_dict["astronaut"], buff=0.1)
self.add(surrounding_rectangle)
# Show the heatmap for "astronaut"
astronaut_heatmap = heatmap_dict["astronaut"]
astronaut_heatmap.height = image_mobject.get_height()
astronaut_heatmap.shift(RIGHT * 7)
astronaut_heatmap.shift(DOWN)
self.play(FadeIn(astronaut_heatmap))
self.wait(3)
self.remove(surrounding_rectangle)
surrounding_rectangle = SurroundingRectangle(tokenized_text.token_dict["horse"], buff=0.1)
self.add(surrounding_rectangle)
# Show the heatmap for "horse"
horse_heatmap = heatmap_dict["horse"]
horse_heatmap.height = image_mobject.get_height()
horse_heatmap.move_to(astronaut_heatmap)
self.play(FadeOut(astronaut_heatmap))
self.play(FadeIn(horse_heatmap))
self.wait(3)
self.remove(surrounding_rectangle)
surrounding_rectangle = SurroundingRectangle(tokenized_text.token_dict["riding"], buff=0.1)
self.add(surrounding_rectangle)
# Show the heatmap for "riding"
riding_heatmap = heatmap_dict["riding"]
riding_heatmap.height = image_mobject.get_height()
riding_heatmap.move_to(horse_heatmap)
self.play(FadeOut(horse_heatmap))
self.play(FadeIn(riding_heatmap))
self.wait(3)
|
ManimML_helblazer811/examples/decision_tree/split_scene.py | from sklearn import datasets
from decision_tree_surface import *
from manim import *
from sklearn.tree import DecisionTreeClassifier
from scipy.stats import entropy
import math
from PIL import Image
iris = datasets.load_iris()
font = "Source Han Sans"
font_scale = 0.75
images = [
Image.open("iris_dataset/SetosaFlower.jpeg"),
Image.open("iris_dataset/VeriscolorFlower.jpeg"),
Image.open("iris_dataset/VirginicaFlower.jpeg"),
]
def entropy(class_labels, base=2):
# compute the class counts
unique, counts = np.unique(class_labels, return_counts=True)
dictionary = dict(zip(unique, counts))
total = 0.0
num_samples = len(class_labels)
for class_index in range(0, 3):
if not class_index in dictionary:
continue
prob = dictionary[class_index] / num_samples
total += prob * math.log(prob, base)
# higher set
return -total
def merge_overlapping_polygons(all_polygons, colors=[BLUE, GREEN, ORANGE]):
# get all polygons of each color
polygon_dict = {
str(BLUE).lower(): [],
str(GREEN).lower(): [],
str(ORANGE).lower(): [],
}
for polygon in all_polygons:
print(polygon_dict)
polygon_dict[str(polygon.color).lower()].append(polygon)
return_polygons = []
for color in colors:
color = str(color).lower()
polygons = polygon_dict[color]
points = set()
for polygon in polygons:
vertices = polygon.get_vertices().tolist()
vertices = [tuple(vert) for vert in vertices]
for pt in vertices:
if pt in points: # Shared vertice, remove it.
points.remove(pt)
else:
points.add(pt)
points = list(points)
sort_x = sorted(points)
sort_y = sorted(points, key=lambda x: x[1])
edges_h = {}
edges_v = {}
i = 0
while i < len(points):
curr_y = sort_y[i][1]
while i < len(points) and sort_y[i][1] == curr_y:
edges_h[sort_y[i]] = sort_y[i + 1]
edges_h[sort_y[i + 1]] = sort_y[i]
i += 2
i = 0
while i < len(points):
curr_x = sort_x[i][0]
while i < len(points) and sort_x[i][0] == curr_x:
edges_v[sort_x[i]] = sort_x[i + 1]
edges_v[sort_x[i + 1]] = sort_x[i]
i += 2
# Get all the polygons.
while edges_h:
# We can start with any point.
polygon = [(edges_h.popitem()[0], 0)]
while True:
curr, e = polygon[-1]
if e == 0:
next_vertex = edges_v.pop(curr)
polygon.append((next_vertex, 1))
else:
next_vertex = edges_h.pop(curr)
polygon.append((next_vertex, 0))
if polygon[-1] == polygon[0]:
# Closed polygon
polygon.pop()
break
# Remove implementation-markers from the polygon.
poly = [point for point, _ in polygon]
for vertex in poly:
if vertex in edges_h:
edges_h.pop(vertex)
if vertex in edges_v:
edges_v.pop(vertex)
polygon = Polygon(*poly, color=color, fill_opacity=0.3, stroke_opacity=1.0)
return_polygons.append(polygon)
return return_polygons
class IrisDatasetPlot(VGroup):
def __init__(self):
points = iris.data[:, 0:2]
labels = iris.feature_names
targets = iris.target
# Make points
self.point_group = self._make_point_group(points, targets)
# Make axes
self.axes_group = self._make_axes_group(points, labels)
# Make legend
self.legend_group = self._make_legend(
[BLUE, ORANGE, GREEN], iris.target_names, self.axes_group
)
# Make title
# title_text = "Iris Dataset Plot"
# self.title = Text(title_text).match_y(self.axes_group).shift([0.5, self.axes_group.height / 2 + 0.5, 0])
# Make all group
self.all_group = Group(self.point_group, self.axes_group, self.legend_group)
# scale the groups
self.point_group.scale(1.6)
self.point_group.match_x(self.axes_group)
self.point_group.match_y(self.axes_group)
self.point_group.shift([0.2, 0, 0])
self.axes_group.scale(0.7)
self.all_group.shift([0, 0.2, 0])
@override_animation(Create)
def create_animation(self):
animation_group = AnimationGroup(
# Perform the animations
Create(self.point_group, run_time=2),
Wait(0.5),
Create(self.axes_group, run_time=2),
# add title
# Create(self.title),
Create(self.legend_group),
)
return animation_group
def _make_point_group(self, points, targets, class_colors=[BLUE, ORANGE, GREEN]):
point_group = VGroup()
for point_index, point in enumerate(points):
# draw the dot
current_target = targets[point_index]
color = class_colors[current_target]
dot = Dot(point=np.array([point[0], point[1], 0])).set_color(color)
dot.scale(0.5)
point_group.add(dot)
return point_group
def _make_legend(self, class_colors, feature_labels, axes):
legend_group = VGroup()
# Make Text
setosa = Text("Setosa", color=BLUE)
verisicolor = Text("Verisicolor", color=ORANGE)
virginica = Text("Virginica", color=GREEN)
labels = VGroup(setosa, verisicolor, virginica).arrange(
direction=RIGHT, aligned_edge=LEFT, buff=2.0
)
labels.scale(0.5)
legend_group.add(labels)
# surrounding rectangle
surrounding_rectangle = SurroundingRectangle(labels, color=WHITE)
surrounding_rectangle.move_to(labels)
legend_group.add(surrounding_rectangle)
# shift the legend group
legend_group.move_to(axes)
legend_group.shift([0, -3.0, 0])
legend_group.match_x(axes[0][0])
return legend_group
def _make_axes_group(self, points, labels):
axes_group = VGroup()
# make the axes
x_range = [
np.amin(points, axis=0)[0] - 0.2,
np.amax(points, axis=0)[0] - 0.2,
0.5,
]
y_range = [np.amin(points, axis=0)[1] - 0.2, np.amax(points, axis=0)[1], 0.5]
axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=9,
y_length=6.5,
# axis_config={"number_scale_value":0.75, "include_numbers":True},
tips=False,
).shift([0.5, 0.25, 0])
axes_group.add(axes)
# make axis labels
# x_label
x_label = (
Text(labels[0], font=font)
.match_y(axes.get_axes()[0])
.shift([0.5, -0.75, 0])
.scale(font_scale)
)
axes_group.add(x_label)
# y_label
y_label = (
Text(labels[1], font=font)
.match_x(axes.get_axes()[1])
.shift([-0.75, 0, 0])
.rotate(np.pi / 2)
.scale(font_scale)
)
axes_group.add(y_label)
return axes_group
class DecisionTreeSurface(VGroup):
def __init__(self, tree_clf, data, axes, class_colors=[BLUE, ORANGE, GREEN]):
# take the tree and construct the surface from it
self.tree_clf = tree_clf
self.data = data
self.axes = axes
self.class_colors = class_colors
self.surface_rectangles = self.generate_surface_rectangles()
def generate_surface_rectangles(self):
# compute data bounds
left = np.amin(self.data[:, 0]) - 0.2
right = np.amax(self.data[:, 0]) - 0.2
top = np.amax(self.data[:, 1])
bottom = np.amin(self.data[:, 1]) - 0.2
maxrange = [left, right, bottom, top]
rectangles = decision_areas(self.tree_clf, maxrange, x=0, y=1, n_features=2)
# turn the rectangle objects into manim rectangles
def convert_rectangle_to_polygon(rect):
# get the points for the rectangle in the plot coordinate frame
bottom_left = [rect[0], rect[3]]
bottom_right = [rect[1], rect[3]]
top_right = [rect[1], rect[2]]
top_left = [rect[0], rect[2]]
# convert those points into the entire manim coordinates
bottom_left_coord = self.axes.coords_to_point(*bottom_left)
bottom_right_coord = self.axes.coords_to_point(*bottom_right)
top_right_coord = self.axes.coords_to_point(*top_right)
top_left_coord = self.axes.coords_to_point(*top_left)
points = [
bottom_left_coord,
bottom_right_coord,
top_right_coord,
top_left_coord,
]
# construct a polygon object from those manim coordinates
rectangle = Polygon(
*points, color=color, fill_opacity=0.3, stroke_opacity=0.0
)
return rectangle
manim_rectangles = []
for rect in rectangles:
color = self.class_colors[int(rect[4])]
rectangle = convert_rectangle_to_polygon(rect)
manim_rectangles.append(rectangle)
manim_rectangles = merge_overlapping_polygons(
manim_rectangles, colors=[BLUE, GREEN, ORANGE]
)
return manim_rectangles
@override_animation(Create)
def create_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Create(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Uncreate)
def uncreate_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Uncreate(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
class DecisionTree:
"""
Draw a single tree node
"""
def _make_node(
self,
feature,
threshold,
values,
is_leaf=False,
depth=0,
leaf_colors=[BLUE, ORANGE, GREEN],
):
if not is_leaf:
node_text = f"{feature}\n <= {threshold:.3f} cm"
# draw decision text
decision_text = Text(node_text, color=WHITE)
# draw a box
bounding_box = SurroundingRectangle(decision_text, buff=0.3, color=WHITE)
node = VGroup()
node.add(bounding_box)
node.add(decision_text)
# return the node
else:
# plot the appropriate image
class_index = np.argmax(values)
# get image
pil_image = images[class_index]
leaf_group = Group()
node = ImageMobject(pil_image)
node.scale(1.5)
rectangle = Rectangle(
width=node.width + 0.05,
height=node.height + 0.05,
color=leaf_colors[class_index],
stroke_width=6,
)
rectangle.move_to(node.get_center())
rectangle.shift([-0.02, 0.02, 0])
leaf_group.add(rectangle)
leaf_group.add(node)
node = leaf_group
return node
def _make_connection(self, top, bottom, is_leaf=False):
top_node_bottom_location = top.get_center()
top_node_bottom_location[1] -= top.height / 2
bottom_node_top_location = bottom.get_center()
bottom_node_top_location[1] += bottom.height / 2
line = Line(top_node_bottom_location, bottom_node_top_location, color=WHITE)
return line
def _make_tree(self, tree, feature_names=["Sepal Length", "Sepal Width"]):
tree_group = Group()
max_depth = tree.max_depth
# make the base node
feature_name = feature_names[tree.feature[0]]
threshold = tree.threshold[0]
values = tree.value[0]
nodes_map = {}
root_node = self._make_node(feature_name, threshold, values, depth=0)
nodes_map[0] = root_node
tree_group.add(root_node)
# save some information
node_height = root_node.height
node_width = root_node.width
scale_factor = 1.0
edge_map = {}
# tree height
tree_height = scale_factor * node_height * max_depth
tree_width = scale_factor * 2**max_depth * node_width
# traverse tree
def recurse(node, depth, direction, parent_object, parent_node):
# make sure it is a valid node
# make the node object
is_leaf = tree.children_left[node] == tree.children_right[node]
feature_name = feature_names[tree.feature[node]]
threshold = tree.threshold[node]
values = tree.value[node]
node_object = self._make_node(
feature_name, threshold, values, depth=depth, is_leaf=is_leaf
)
nodes_map[node] = node_object
node_height = node_object.height
# set the node position
direction_factor = -1 if direction == "left" else 1
shift_right_amount = (
0.8 * direction_factor * scale_factor * tree_width / (2**depth) / 2
)
if is_leaf:
shift_down_amount = -1.0 * scale_factor * node_height
else:
shift_down_amount = -1.8 * scale_factor * node_height
node_object.match_x(parent_object).match_y(parent_object).shift(
[shift_right_amount, shift_down_amount, 0]
)
tree_group.add(node_object)
# make a connection
connection = self._make_connection(
parent_object, node_object, is_leaf=is_leaf
)
edge_name = str(parent_node) + "," + str(node)
edge_map[edge_name] = connection
tree_group.add(connection)
# recurse
if not is_leaf:
recurse(tree.children_left[node], depth + 1, "left", node_object, node)
recurse(
tree.children_right[node], depth + 1, "right", node_object, node
)
recurse(tree.children_left[0], 1, "left", root_node, 0)
recurse(tree.children_right[0], 1, "right", root_node, 0)
tree_group.scale(0.35)
return tree_group, nodes_map, edge_map
def color_example_path(
self, tree_group, nodes_map, tree, edge_map, example, color=YELLOW, thickness=2
):
# get decision path
decision_path = tree.decision_path(example)[0]
path_indices = decision_path.indices
# highlight edges
for node_index in range(0, len(path_indices) - 1):
current_val = path_indices[node_index]
next_val = path_indices[node_index + 1]
edge_str = str(current_val) + "," + str(next_val)
edge = edge_map[edge_str]
animation_two = AnimationGroup(
nodes_map[current_val].animate.set_color(color)
)
self.play(animation_two, run_time=0.5)
animation_one = AnimationGroup(
edge.animate.set_color(color),
# edge.animate.set_stroke_width(4),
)
self.play(animation_one, run_time=0.5)
# surround the bottom image
last_path_index = path_indices[-1]
last_path_rectangle = nodes_map[last_path_index][0]
self.play(last_path_rectangle.animate.set_color(color))
def create_sklearn_tree(self, max_tree_depth=1):
# learn the decision tree
iris = load_iris()
tree = learn_iris_decision_tree(iris, max_depth=max_tree_depth)
feature_names = iris.feature_names[0:2]
return tree.tree_
def make_tree(self, max_tree_depth=2):
sklearn_tree = self.create_sklearn_tree()
# make the tree
tree_group, nodes_map, edge_map = self._make_tree(
sklearn_tree.tree_, feature_names
)
tree_group.shift([0, 5.5, 0])
return tree_group
# self.add(tree_group)
# self.play(SpinInFromNothing(tree_group), run_time=3)
# self.color_example_path(tree_group, nodes_map, tree, edge_map, iris.data[None, 0, 0:2])
class DecisionTreeSplitScene(Scene):
def make_decision_tree_classifier(self, max_depth=4):
decision_tree = DecisionTreeClassifier(
random_state=1, max_depth=max_depth, max_leaf_nodes=8
)
decision_tree = decision_tree.fit(iris.data[:, :2], iris.target)
# output the decisioin tree in some format
return decision_tree
def make_split_animation(self, data, classes, data_labels, main_axes):
"""
def make_entropy_animation_and_plot(dim=0, num_entropy_values=50):
# calculate the entropy values
axes_group = VGroup()
# make axes
range_vals = [np.amin(data, axis=0)[dim], np.amax(data, axis=0)[dim]]
axes = Axes(x_range=range_vals,
y_range=[0, 1.0],
x_length=9,
y_length=4,
# axis_config={"number_scale_value":0.75, "include_numbers":True},
tips=False,
)
axes_group.add(axes)
# make axis labels
# x_label
x_label = Text(data_labels[dim], font=font) \
.match_y(axes.get_axes()[0]) \
.shift([0.5, -0.75, 0]) \
.scale(font_scale*1.2)
axes_group.add(x_label)
# y_label
y_label = Text("Information Gain", font=font) \
.match_x(axes.get_axes()[1]) \
.shift([-0.75, 0, 0]) \
.rotate(np.pi / 2) \
.scale(font_scale * 1.2)
axes_group.add(y_label)
# line animation
information_gains = []
def entropy_function(split_value):
# lower entropy
lower_set = np.nonzero(data[:, dim] <= split_value)[0]
lower_set = classes[lower_set]
lower_entropy = entropy(lower_set)
# higher entropy
higher_set = np.nonzero(data[:, dim] > split_value)[0]
higher_set = classes[higher_set]
higher_entropy = entropy(higher_set)
# calculate entropies
all_entropy = entropy(classes, base=2)
lower_entropy = entropy(lower_set, base=2)
higher_entropy = entropy(higher_set, base=2)
mean_entropy = (lower_entropy + higher_entropy) / 2
# calculate information gain
lower_prob = len(lower_set) / len(data[:, dim])
higher_prob = len(higher_set) / len(data[:, dim])
info_gain = all_entropy - (lower_prob * lower_entropy + higher_prob * higher_entropy)
information_gains.append((split_value, info_gain))
return info_gain
data_range = np.amin(data[:, dim]), np.amax(data[:, dim])
entropy_graph = axes.get_graph(
entropy_function,
# color=RED,
# x_range=data_range
)
axes_group.add(entropy_graph)
axes_group.shift([4.0, 2, 0])
axes_group.scale(0.5)
dot_animation = Dot(color=WHITE)
axes_group.add(dot_animation)
# make animations
animation_group = AnimationGroup(
Create(axes_group, run_time=2),
Wait(3),
MoveAlongPath(dot_animation, entropy_graph, run_time=20, rate_func=rate_functions.ease_in_out_quad),
Wait(2)
)
return axes_group, animation_group, information_gains
"""
def make_split_line_animation(dim=0):
# make a line along one of the dims and move it up and down
origin_coord = [
np.amin(data, axis=0)[0] - 0.2,
np.amin(data, axis=0)[1] - 0.2,
]
origin_point = main_axes.coords_to_point(*origin_coord)
top_left_coord = [origin_coord[0], np.amax(data, axis=0)[1]]
bottom_right_coord = [np.amax(data, axis=0)[0] - 0.2, origin_coord[1]]
if dim == 0:
other_coord = top_left_coord
moving_line_coord = bottom_right_coord
else:
other_coord = bottom_right_coord
moving_line_coord = top_left_coord
other_point = main_axes.coords_to_point(*other_coord)
moving_line_point = main_axes.coords_to_point(*moving_line_coord)
moving_line = Line(origin_point, other_point, color=RED)
movement_line = Line(origin_point, moving_line_point)
if dim == 0:
movement_line.shift([0, moving_line.height / 2, 0])
else:
movement_line.shift([moving_line.width / 2, 0, 0])
# move the moving line along the movement line
animation = MoveAlongPath(
moving_line,
movement_line,
run_time=20,
rate_func=rate_functions.ease_in_out_quad,
)
return animation, moving_line
# plot the line in white then make it invisible
# make an animation along the line
# make a
# axes_one_group, top_animation_group, info_gains = make_entropy_animation_and_plot(dim=0)
line_movement, first_moving_line = make_split_line_animation(dim=0)
# axes_two_group, bottom_animation_group, _ = make_entropy_animation_and_plot(dim=1)
second_line_movement, second_moving_line = make_split_line_animation(dim=1)
# axes_two_group.shift([0, -3, 0])
animation_group_one = AnimationGroup(
# top_animation_group,
line_movement,
)
animation_group_two = AnimationGroup(
# bottom_animation_group,
second_line_movement,
)
"""
both_axes_group = VGroup(
axes_one_group,
axes_two_group
)
"""
return (
animation_group_one,
animation_group_two,
first_moving_line,
second_moving_line,
None,
None,
)
# both_axes_group, \
# info_gains
def construct(self):
# make the points
iris_dataset_plot = IrisDatasetPlot()
iris_dataset_plot.all_group.scale(1.0)
iris_dataset_plot.all_group.shift([-3, 0.2, 0])
# make the entropy line graph
# entropy_line_graph = self.draw_entropy_line_graph()
# arrange the plots
# do animations
self.play(Create(iris_dataset_plot))
# make the decision tree classifier
decision_tree_classifier = self.make_decision_tree_classifier()
decision_tree_surface = DecisionTreeSurface(
decision_tree_classifier, iris.data, iris_dataset_plot.axes_group[0]
)
self.play(Create(decision_tree_surface))
self.wait(3)
self.play(Uncreate(decision_tree_surface))
main_axes = iris_dataset_plot.axes_group[0]
(
split_animation_one,
split_animation_two,
first_moving_line,
second_moving_line,
both_axes_group,
info_gains,
) = self.make_split_animation(
iris.data[:, 0:2], iris.target, iris.feature_names, main_axes
)
self.play(split_animation_one)
self.wait(0.1)
self.play(Uncreate(first_moving_line))
self.wait(3)
self.play(split_animation_two)
self.wait(0.1)
self.play(Uncreate(second_moving_line))
self.wait(0.1)
# highlight the maximum on top
# sort by second key
"""
highest_info_gain = sorted(info_gains, key=lambda x: x[1])[-1]
highest_info_gain_point = both_axes_group[0][0].coords_to_point(*highest_info_gain)
highlighted_peak = Dot(highest_info_gain_point, color=YELLOW)
# get location of highest info gain point
highest_info_gain_point_in_iris_graph = iris_dataset_plot.axes_group[0].coords_to_point(*[highest_info_gain[0], 0])
first_moving_line.start[0] = highest_info_gain_point_in_iris_graph[0]
first_moving_line.end[0] = highest_info_gain_point_in_iris_graph[0]
self.play(Create(highlighted_peak))
self.play(Create(first_moving_line))
text = Text("Highest Information Gain")
text.scale(0.4)
text.move_to(highlighted_peak)
text.shift([0, 0.5, 0])
self.play(Create(text))
"""
self.wait(1)
# draw the basic tree
decision_tree_classifier = self.make_decision_tree_classifier(max_depth=1)
decision_tree_surface = DecisionTreeSurface(
decision_tree_classifier, iris.data, iris_dataset_plot.axes_group[0]
)
decision_tree_graph, _, _ = DecisionTree()._make_tree(
decision_tree_classifier.tree_
)
decision_tree_graph.match_y(iris_dataset_plot.axes_group)
decision_tree_graph.shift([4, 0, 0])
self.play(Create(decision_tree_surface))
uncreate_animation = AnimationGroup(
# Uncreate(both_axes_group),
# Uncreate(highlighted_peak),
Uncreate(second_moving_line),
# Unwrite(text)
)
self.play(uncreate_animation)
self.wait(0.5)
self.play(FadeIn(decision_tree_graph))
# self.play(FadeIn(highlighted_peak))
self.wait(5)
|
ManimML_helblazer811/examples/decision_tree/decision_tree_surface.py | import numpy as np
from collections import deque
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import _tree as ctree
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
class AABB:
"""Axis-aligned bounding box"""
def __init__(self, n_features):
self.limits = np.array([[-np.inf, np.inf]] * n_features)
def split(self, f, v):
left = AABB(self.limits.shape[0])
right = AABB(self.limits.shape[0])
left.limits = self.limits.copy()
right.limits = self.limits.copy()
left.limits[f, 1] = v
right.limits[f, 0] = v
return left, right
def tree_bounds(tree, n_features=None):
"""Compute final decision rule for each node in tree"""
if n_features is None:
n_features = np.max(tree.feature) + 1
aabbs = [AABB(n_features) for _ in range(tree.node_count)]
queue = deque([0])
while queue:
i = queue.pop()
l = tree.children_left[i]
r = tree.children_right[i]
if l != ctree.TREE_LEAF:
aabbs[l], aabbs[r] = aabbs[i].split(tree.feature[i], tree.threshold[i])
queue.extend([l, r])
return aabbs
def decision_areas(tree_classifier, maxrange, x=0, y=1, n_features=None):
"""Extract decision areas.
tree_classifier: Instance of a sklearn.tree.DecisionTreeClassifier
maxrange: values to insert for [left, right, top, bottom] if the interval is open (+/-inf)
x: index of the feature that goes on the x axis
y: index of the feature that goes on the y axis
n_features: override autodetection of number of features
"""
tree = tree_classifier.tree_
aabbs = tree_bounds(tree, n_features)
maxrange = np.array(maxrange)
rectangles = []
for i in range(len(aabbs)):
if tree.children_left[i] != ctree.TREE_LEAF:
continue
l = aabbs[i].limits
r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1], np.argmax(tree.value[i])]
# clip out of bounds indices
"""
if r[0] < maxrange[0]:
r[0] = maxrange[0]
if r[1] > maxrange[1]:
r[1] = maxrange[1]
if r[2] < maxrange[2]:
r[2] = maxrange[2]
if r[3] > maxrange[3]:
r[3] = maxrange[3]
print(r)
"""
rectangles.append(r)
rectangles = np.array(rectangles)
rectangles[:, [0, 2]] = np.maximum(rectangles[:, [0, 2]], maxrange[0::2])
rectangles[:, [1, 3]] = np.minimum(rectangles[:, [1, 3]], maxrange[1::2])
return rectangles
def plot_areas(rectangles):
for rect in rectangles:
color = ["b", "r"][int(rect[4])]
print(rect[0], rect[1], rect[2] - rect[0], rect[3] - rect[1])
rp = Rectangle(
[rect[0], rect[2]],
rect[1] - rect[0],
rect[3] - rect[2],
color=color,
alpha=0.3,
)
plt.gca().add_artist(rp)
|
ManimML_helblazer811/examples/variational_autoencoder/variational_autoencoder.py | """Autoencoder Manim Visualizations
In this module I define Manim visualizations for Variational Autoencoders
and Traditional Autoencoders.
"""
from pathlib import Path
from manim import *
import numpy as np
from PIL import Image
from manim_ml.neural_network.layers import EmbeddingLayer
from manim_ml.neural_network.layers import FeedForwardLayer
from manim_ml.neural_network.layers import ImageLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
ROOT_DIR = Path(__file__).parents[2]
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class VAEScene(Scene):
"""Scene object for a Variational Autoencoder and Autoencoder"""
def construct(self):
numpy_image = np.asarray(Image.open(ROOT_DIR / "assets/mnist/digit.jpeg"))
vae = NeuralNetwork(
[
ImageLayer(numpy_image, height=1.4),
FeedForwardLayer(5),
FeedForwardLayer(3),
EmbeddingLayer(dist_theme="ellipse"),
FeedForwardLayer(3),
FeedForwardLayer(5),
ImageLayer(numpy_image, height=1.4),
]
)
self.play(Create(vae))
self.play(vae.make_forward_pass_animation(run_time=15))
|
ManimML_helblazer811/examples/variational_autoencoder/autoencoder_models/generate_interpolation.py | import torch
from variational_autoencoder import VAE, load_dataset
import matplotlib.pyplot as plt
from torchvision import datasets
from torchvision import transforms
from tqdm import tqdm
import numpy as np
import pickle
# Load model
vae = VAE(latent_dim=16)
vae.load_state_dict(torch.load("saved_models/model.pth"))
dataset = load_dataset()
# Generate reconstructions
num_images = 50
image_pairs = []
save_object = {"interpolation_path": [], "interpolation_images": []}
# Make interpolation path
image_a, image_b = dataset[0][0], dataset[1][0]
image_a = image_a.view(32 * 32)
image_b = image_b.view(32 * 32)
z_a, _, _, _ = vae.forward(image_a)
z_a = z_a.detach().cpu().numpy()
z_b, _, _, _ = vae.forward(image_b)
z_b = z_b.detach().cpu().numpy()
interpolation_path = np.linspace(z_a, z_b, num=num_images)
# interpolation_path[:, 4] = np.linspace(-3, 3, num=num_images)
save_object["interpolation_path"] = interpolation_path
for i in range(num_images):
# Generate
z = torch.Tensor(interpolation_path[i]).unsqueeze(0)
gen_image = vae.decode(z).detach().numpy()
gen_image = np.reshape(gen_image, (32, 32)) * 255
save_object["interpolation_images"].append(gen_image)
fig, axs = plt.subplots(num_images, 1, figsize=(1, num_images))
image_pairs = []
for i in range(num_images):
recon_image = save_object["interpolation_images"][i]
# Add to plot
axs[i].imshow(recon_image)
# Perform intrpolations
with open("interpolations.pkl", "wb") as f:
pickle.dump(save_object, f)
plt.show()
|
ManimML_helblazer811/examples/variational_autoencoder/autoencoder_models/__init__.py | |
ManimML_helblazer811/examples/variational_autoencoder/autoencoder_models/generate_disentanglement.py | import pickle
import sys
import os
sys.path.append(os.environ["PROJECT_ROOT"])
from autoencoder_models.variational_autoencoder import (
VAE,
load_dataset,
load_vae_from_path,
)
import matplotlib.pyplot as plt
import numpy as np
import torch
import scipy
import scipy.stats
import cv2
def binned_images(model_path, num_x_bins=6, plot=False):
latent_dim = 2
model = load_vae_from_path(model_path, latent_dim)
image_dataset = load_dataset(digit=2)
# Compute embedding
num_images = 500
embedding = []
images = []
for i in range(num_images):
image, _ = image_dataset[i]
mean, _, recon, _ = model.forward(image)
mean = mean.detach().numpy()
recon = recon.detach().numpy()
recon = recon.reshape(32, 32)
images.append(recon.squeeze())
if latent_dim > 2:
mean = mean[:2]
embedding.append(mean)
images = np.stack(images)
tsne_points = np.array(embedding)
tsne_points = (tsne_points - tsne_points.mean(axis=0)) / (tsne_points.std(axis=0))
# make vis
num_points = np.shape(tsne_points)[0]
x_min = np.amin(tsne_points.T[0])
y_min = np.amin(tsne_points.T[1])
y_max = np.amax(tsne_points.T[1])
x_max = np.amax(tsne_points.T[0])
# make the bins from the ranges
# to keep it square the same width is used for x and y dim
x_bins, step = np.linspace(x_min, x_max, num_x_bins, retstep=True)
x_bins = x_bins.astype(float)
num_y_bins = np.absolute(np.ceil((y_max - y_min) / step)).astype(int)
y_bins = np.linspace(y_min, y_max, num_y_bins)
# sort the tsne_points into a 2d histogram
tsne_points = tsne_points.squeeze()
hist_obj = scipy.stats.binned_statistic_dd(
tsne_points,
np.arange(num_points),
statistic="count",
bins=[x_bins, y_bins],
expand_binnumbers=True,
)
# sample one point from each bucket
binnumbers = hist_obj.binnumber
num_x_bins = np.amax(binnumbers[0]) + 1
num_y_bins = np.amax(binnumbers[1]) + 1
binnumbers = binnumbers.T
# some places have no value in a region
used_mask = np.zeros((num_y_bins, num_x_bins))
image_bins = np.zeros(
(num_y_bins, num_x_bins, 3, np.shape(images)[2], np.shape(images)[2])
)
for i, bin_num in enumerate(list(binnumbers)):
used_mask[bin_num[1], bin_num[0]] = 1
image_bins[bin_num[1], bin_num[0]] = images[i]
# plot a grid of the images
fig, axs = plt.subplots(
nrows=np.shape(y_bins)[0],
ncols=np.shape(x_bins)[0],
constrained_layout=False,
dpi=50,
)
images = []
bin_indices = []
for y in range(num_y_bins):
for x in range(num_x_bins):
if used_mask[y, x] > 0.0:
image = np.uint8(image_bins[y][x].squeeze() * 255)
image = np.rollaxis(image, 0, 3)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
axs[num_y_bins - 1 - y][x].imshow(image)
images.append(image)
bin_indices.append((y, x))
axs[y, x].axis("off")
if plot:
plt.axis("off")
plt.show()
else:
return images, bin_indices
def generate_disentanglement(model_path="saved_models/model_dim2.pth"):
"""Generates disentanglement visualization and serializes it"""
# Disentanglement object
disentanglement_object = {}
# Make Disentanglement
images, bin_indices = binned_images(model_path)
disentanglement_object["images"] = images
disentanglement_object["bin_indices"] = bin_indices
# Serialize Images
with open("disentanglement.pkl", "wb") as f:
pickle.dump(disentanglement_object, f)
if __name__ == "__main__":
plot = False
if plot:
model_path = "saved_models/model_dim2.pth"
# uniform_image_sample(model_path)
binned_images(model_path)
else:
generate_disentanglement()
|
ManimML_helblazer811/examples/variational_autoencoder/autoencoder_models/generate_images.py | import torch
from variational_autoencoder import VAE
import matplotlib.pyplot as plt
from torchvision import datasets
from torchvision import transforms
from tqdm import tqdm
import numpy as np
import pickle
# Load model
vae = VAE(latent_dim=16)
vae.load_state_dict(torch.load("saved_models/model.pth"))
# Transforms images to a PyTorch Tensor
tensor_transform = transforms.ToTensor()
# Download the MNIST Dataset
dataset = datasets.MNIST(
root="./data", train=True, download=True, transform=tensor_transform
)
# Generate reconstructions
num_recons = 10
fig, axs = plt.subplots(num_recons, 2, figsize=(2, num_recons))
image_pairs = []
for i in range(num_recons):
base_image, _ = dataset[i]
base_image = base_image.reshape(-1, 28 * 28)
_, _, recon_image, _ = vae.forward(base_image)
base_image = base_image.detach().numpy()
base_image = np.reshape(base_image, (28, 28)) * 255
recon_image = recon_image.detach().numpy()
recon_image = np.reshape(recon_image, (28, 28)) * 255
# Add to plot
axs[i][0].imshow(base_image)
axs[i][1].imshow(recon_image)
# image pairs
image_pairs.append((base_image, recon_image))
with open("image_pairs.pkl", "wb") as f:
pickle.dump(image_pairs, f)
plt.show()
|
ManimML_helblazer811/examples/variational_autoencoder/autoencoder_models/variational_autoencoder.py | import torch
from torchvision import datasets
from torchvision import transforms
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
from tqdm import tqdm
import math
"""
These are utility functions that help to calculate the input and output
sizes of convolutional neural networks
"""
def num2tuple(num):
return num if isinstance(num, tuple) else (num, num)
def conv2d_output_shape(h_w, kernel_size=1, stride=1, pad=0, dilation=1):
h_w, kernel_size, stride, pad, dilation = (
num2tuple(h_w),
num2tuple(kernel_size),
num2tuple(stride),
num2tuple(pad),
num2tuple(dilation),
)
pad = num2tuple(pad[0]), num2tuple(pad[1])
h = math.floor(
(h_w[0] + sum(pad[0]) - dilation[0] * (kernel_size[0] - 1) - 1) / stride[0] + 1
)
w = math.floor(
(h_w[1] + sum(pad[1]) - dilation[1] * (kernel_size[1] - 1) - 1) / stride[1] + 1
)
return h, w
def convtransp2d_output_shape(
h_w, kernel_size=1, stride=1, pad=0, dilation=1, out_pad=0
):
h_w, kernel_size, stride, pad, dilation, out_pad = (
num2tuple(h_w),
num2tuple(kernel_size),
num2tuple(stride),
num2tuple(pad),
num2tuple(dilation),
num2tuple(out_pad),
)
pad = num2tuple(pad[0]), num2tuple(pad[1])
h = (
(h_w[0] - 1) * stride[0]
- sum(pad[0])
+ dialation[0] * (kernel_size[0] - 1)
+ out_pad[0]
+ 1
)
w = (
(h_w[1] - 1) * stride[1]
- sum(pad[1])
+ dialation[1] * (kernel_size[1] - 1)
+ out_pad[1]
+ 1
)
return h, w
def conv2d_get_padding(h_w_in, h_w_out, kernel_size=1, stride=1, dilation=1):
h_w_in, h_w_out, kernel_size, stride, dilation = (
num2tuple(h_w_in),
num2tuple(h_w_out),
num2tuple(kernel_size),
num2tuple(stride),
num2tuple(dilation),
)
p_h = (
(h_w_out[0] - 1) * stride[0]
- h_w_in[0]
+ dilation[0] * (kernel_size[0] - 1)
+ 1
)
p_w = (
(h_w_out[1] - 1) * stride[1]
- h_w_in[1]
+ dilation[1] * (kernel_size[1] - 1)
+ 1
)
return (math.floor(p_h / 2), math.ceil(p_h / 2)), (
math.floor(p_w / 2),
math.ceil(p_w / 2),
)
def convtransp2d_get_padding(
h_w_in, h_w_out, kernel_size=1, stride=1, dilation=1, out_pad=0
):
h_w_in, h_w_out, kernel_size, stride, dilation, out_pad = (
num2tuple(h_w_in),
num2tuple(h_w_out),
num2tuple(kernel_size),
num2tuple(stride),
num2tuple(dilation),
num2tuple(out_pad),
)
p_h = (
-(
h_w_out[0]
- 1
- out_pad[0]
- dilation[0] * (kernel_size[0] - 1)
- (h_w_in[0] - 1) * stride[0]
)
/ 2
)
p_w = (
-(
h_w_out[1]
- 1
- out_pad[1]
- dilation[1] * (kernel_size[1] - 1)
- (h_w_in[1] - 1) * stride[1]
)
/ 2
)
return (math.floor(p_h / 2), math.ceil(p_h / 2)), (
math.floor(p_w / 2),
math.ceil(p_w / 2),
)
def load_dataset(train=True, digit=None):
# Transforms images to a PyTorch Tensor
tensor_transform = transforms.Compose([transforms.Pad(2), transforms.ToTensor()])
# Download the MNIST Dataset
dataset = datasets.MNIST(
root="./data", train=train, download=True, transform=tensor_transform
)
# Load specific image
if not digit is None:
idx = dataset.train_labels == digit
dataset.targets = dataset.targets[idx]
dataset.data = dataset.data[idx]
return dataset
def load_vae_from_path(path, latent_dim):
model = VAE(latent_dim)
model.load_state_dict(torch.load(path))
return model
# Creating a PyTorch class
# 28*28 ==> 9 ==> 28*28
class VAE(torch.nn.Module):
def __init__(self, latent_dim=5, layer_count=4, channels=1):
super().__init__()
self.latent_dim = latent_dim
self.in_shape = 32
self.layer_count = layer_count
self.channels = channels
self.d = 128
mul = 1
inputs = self.channels
out_sizes = [(self.in_shape, self.in_shape)]
for i in range(self.layer_count):
setattr(self, "conv%d" % (i + 1), nn.Conv2d(inputs, self.d * mul, 4, 2, 1))
setattr(self, "conv%d_bn" % (i + 1), nn.BatchNorm2d(self.d * mul))
h_w = (out_sizes[-1][-1], out_sizes[-1][-1])
out_sizes.append(
conv2d_output_shape(h_w, kernel_size=4, stride=2, pad=1, dilation=1)
)
inputs = self.d * mul
mul *= 2
self.d_max = inputs
self.last_size = out_sizes[-1][-1]
self.num_linear = self.last_size**2 * self.d_max
# Encoder linear layers
self.encoder_mean_linear = nn.Linear(self.num_linear, self.latent_dim)
self.encoder_logvar_linear = nn.Linear(self.num_linear, self.latent_dim)
# Decoder linear layer
self.decoder_linear = nn.Linear(self.latent_dim, self.num_linear)
mul = inputs // self.d // 2
for i in range(1, self.layer_count):
setattr(
self,
"deconv%d" % (i + 1),
nn.ConvTranspose2d(inputs, self.d * mul, 4, 2, 1),
)
setattr(self, "deconv%d_bn" % (i + 1), nn.BatchNorm2d(self.d * mul))
inputs = self.d * mul
mul //= 2
setattr(
self,
"deconv%d" % (self.layer_count + 1),
nn.ConvTranspose2d(inputs, self.channels, 4, 2, 1),
)
def encode(self, x):
if len(x.shape) < 3:
x = x.unsqueeze(0)
if len(x.shape) < 4:
x = x.unsqueeze(1)
batch_size = x.shape[0]
for i in range(self.layer_count):
x = F.relu(
getattr(self, "conv%d_bn" % (i + 1))(
getattr(self, "conv%d" % (i + 1))(x)
)
)
x = x.view(batch_size, -1)
mean = self.encoder_mean_linear(x)
logvar = self.encoder_logvar_linear(x)
return mean, logvar
def decode(self, x):
x = x.view(x.shape[0], self.latent_dim)
x = self.decoder_linear(x)
x = x.view(x.shape[0], self.d_max, self.last_size, self.last_size)
# x = self.deconv1_bn(x)
x = F.leaky_relu(x, 0.2)
for i in range(1, self.layer_count):
x = F.leaky_relu(
getattr(self, "deconv%d_bn" % (i + 1))(
getattr(self, "deconv%d" % (i + 1))(x)
),
0.2,
)
x = getattr(self, "deconv%d" % (self.layer_count + 1))(x)
x = torch.sigmoid(x)
return x
def forward(self, x):
batch_size = x.shape[0]
mean, logvar = self.encode(x)
eps = torch.randn(batch_size, self.latent_dim)
z = mean + torch.exp(logvar / 2) * eps
reconstructed = self.decode(z)
return mean, logvar, reconstructed, x
def train_model(latent_dim=16, plot=True, digit=1, epochs=200):
dataset = load_dataset(train=True, digit=digit)
# DataLoader is used to load the dataset
# for training
loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=32, shuffle=True)
# Model Initialization
model = VAE(latent_dim=latent_dim)
# Validation using MSE Loss function
def loss_function(mean, log_var, reconstructed, original, kl_beta=0.0001):
kl = torch.mean(
-0.5 * torch.sum(1 + log_var - mean**2 - log_var.exp(), dim=1), dim=0
)
recon = torch.nn.functional.mse_loss(reconstructed, original)
# print(f"KL Error {kl}, Recon Error {recon}")
return kl_beta * kl + recon
# Using an Adam Optimizer with lr = 0.1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0e-8)
outputs = []
losses = []
for epoch in tqdm(range(epochs)):
for (image, _) in loader:
# Output of Autoencoder
mean, log_var, reconstructed, image = model(image)
# Calculating the loss function
loss = loss_function(mean, log_var, reconstructed, image)
# The gradients are set to zero,
# the the gradient is computed and stored.
# .step() performs parameter update
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Storing the losses in a list for plotting
if torch.isnan(loss):
raise Exception()
losses.append(loss.detach().cpu())
outputs.append((epochs, image, reconstructed))
torch.save(
model.state_dict(),
os.path.join(
os.environ["PROJECT_ROOT"],
f"examples/variational_autoencoder/autoencoder_model/saved_models/model_dim{latent_dim}.pth",
),
)
if plot:
# Defining the Plot Style
plt.style.use("fivethirtyeight")
plt.xlabel("Iterations")
plt.ylabel("Loss")
# Plotting the last 100 values
plt.plot(losses)
plt.show()
if __name__ == "__main__":
train_model(latent_dim=2, digit=2, epochs=40)
|
ManimML_helblazer811/examples/gan/gan.py | import random
from pathlib import Path
from PIL import Image
from manim import *
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.vector import VectorLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
ROOT_DIR = Path(__file__).parents[2]
config.pixel_height = 1080
config.pixel_width = 1080
config.frame_height = 8.3
config.frame_width = 8.3
class GAN(Mobject):
"""Generative Adversarial Network"""
def __init__(self):
super().__init__()
self.make_entities()
self.place_entities()
self.titles = self.make_titles()
def make_entities(self, image_height=1.2):
"""Makes all of the network entities"""
# Make the fake image layer
default_image = Image.open(ROOT_DIR / "assets/gan/fake_image.png")
numpy_image = np.asarray(default_image)
self.fake_image_layer = ImageLayer(
numpy_image, height=image_height, show_image_on_create=False
)
# Make the Generator Network
self.generator = NeuralNetwork(
[
EmbeddingLayer(covariance=np.array([[3.0, 0], [0, 3.0]])).scale(1.3),
FeedForwardLayer(3),
FeedForwardLayer(5),
self.fake_image_layer,
],
layer_spacing=0.1,
)
self.add(self.generator)
# Make the Discriminator
self.discriminator = NeuralNetwork(
[
FeedForwardLayer(5),
FeedForwardLayer(1),
VectorLayer(1, value_func=lambda: random.uniform(0, 1)),
],
layer_spacing=0.1,
)
self.add(self.discriminator)
# Make Ground Truth Dataset
default_image = Image.open(ROOT_DIR / "assets/gan/real_image.jpg")
numpy_image = np.asarray(default_image)
self.ground_truth_layer = ImageLayer(numpy_image, height=image_height)
self.add(self.ground_truth_layer)
self.scale(1)
def place_entities(self):
"""Positions entities in correct places"""
# Place relative to generator
# Place the ground_truth image layer
self.ground_truth_layer.next_to(self.fake_image_layer, DOWN, 0.8)
# Group the images
image_group = Group(self.ground_truth_layer, self.fake_image_layer)
# Move the discriminator to the right of thee generator
self.discriminator.next_to(self.generator, RIGHT, 0.2)
self.discriminator.match_y(image_group)
# Move the discriminator to the height of the center of the image_group
# self.discriminator.match_y(image_group)
# self.ground_truth_layer.next_to(self.fake_image_layer, DOWN, 0.5)
def make_titles(self):
"""Makes titles for the different entities"""
titles = VGroup()
self.ground_truth_layer_title = Text("Real Image").scale(0.3)
self.ground_truth_layer_title.next_to(self.ground_truth_layer, UP, 0.1)
self.add(self.ground_truth_layer_title)
titles.add(self.ground_truth_layer_title)
self.fake_image_layer_title = Text("Fake Image").scale(0.3)
self.fake_image_layer_title.next_to(self.fake_image_layer, UP, 0.1)
self.add(self.fake_image_layer_title)
titles.add(self.fake_image_layer_title)
# Overhead title
overhead_title = Text("Generative Adversarial Network").scale(0.75)
overhead_title.shift(np.array([0, 3.5, 0]))
titles.add(overhead_title)
# Probability title
self.probability_title = Text("Probability").scale(0.5)
self.probability_title.move_to(self.discriminator.input_layers[-2])
self.probability_title.shift(UP)
self.probability_title.shift(RIGHT * 1.05)
titles.add(self.probability_title)
return titles
def make_highlight_generator_rectangle(self):
"""Returns animation that highlights the generators contents"""
group = VGroup()
generator_surrounding_group = Group(self.generator, self.fake_image_layer_title)
generator_surrounding_rectangle = SurroundingRectangle(
generator_surrounding_group, buff=0.1, stroke_width=4.0, color="#0FFF50"
)
group.add(generator_surrounding_rectangle)
title = Text("Generator").scale(0.5)
title.next_to(generator_surrounding_rectangle, UP, 0.2)
group.add(title)
return group
def make_highlight_discriminator_rectangle(self):
"""Makes a rectangle for highlighting the discriminator"""
discriminator_group = Group(
self.discriminator,
self.fake_image_layer,
self.ground_truth_layer,
self.fake_image_layer_title,
self.probability_title,
)
group = VGroup()
discriminator_surrounding_rectangle = SurroundingRectangle(
discriminator_group, buff=0.05, stroke_width=4.0, color="#0FFF50"
)
group.add(discriminator_surrounding_rectangle)
title = Text("Discriminator").scale(0.5)
title.next_to(discriminator_surrounding_rectangle, UP, 0.2)
group.add(title)
return group
def make_generator_forward_pass(self):
"""Makes forward pass of the generator"""
forward_pass = self.generator.make_forward_pass_animation(dist_theme="ellipse")
return forward_pass
def make_discriminator_forward_pass(self):
"""Makes forward pass of the discriminator"""
disc_forward = self.discriminator.make_forward_pass_animation()
return disc_forward
@override_animation(Create)
def _create_override(self):
"""Overrides create"""
animation_group = AnimationGroup(
Create(self.generator),
Create(self.discriminator),
Create(self.ground_truth_layer),
Create(self.titles),
)
return animation_group
class GANScene(Scene):
"""GAN Scene"""
def construct(self):
gan = GAN().scale(1.70)
gan.move_to(ORIGIN)
gan.shift(DOWN * 0.35)
gan.shift(LEFT * 0.1)
self.play(Create(gan), run_time=3)
# Highlight generator
highlight_generator_rectangle = gan.make_highlight_generator_rectangle()
self.play(Create(highlight_generator_rectangle), run_time=1)
# Generator forward pass
gen_forward_pass = gan.make_generator_forward_pass()
self.play(gen_forward_pass, run_time=5)
# Fade out generator highlight
self.play(Uncreate(highlight_generator_rectangle), run_time=1)
# Highlight discriminator
highlight_discriminator_rectangle = gan.make_highlight_discriminator_rectangle()
self.play(Create(highlight_discriminator_rectangle), run_time=1)
# Discriminator forward pass
discriminator_forward_pass = gan.make_discriminator_forward_pass()
self.play(discriminator_forward_pass, run_time=5)
# Unhighlight discriminator
self.play(Uncreate(highlight_discriminator_rectangle), run_time=1)
|
ManimML_helblazer811/examples/readme_example/first_neural_network.py | from manim import *
from manim_ml.neural_network import (
Convolutional2DLayer,
FeedForwardLayer,
NeuralNetwork,
)
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
|
ManimML_helblazer811/examples/readme_example/convolutional_neural_networks.py | from manim import *
from manim_ml.neural_network import (
Convolutional2DLayer,
FeedForwardLayer,
NeuralNetwork,
)
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(ChangeSpeed(forward_pass, speedinfo={}), run_time=10)
|
ManimML_helblazer811/examples/readme_example/activation_functions.py | from manim import *
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 6.0
config.frame_width = 6.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, filter_spacing=0.32),
Convolutional2DLayer(
3, 5, 3, filter_spacing=0.32, activation_function="ReLU"
),
FeedForwardLayer(3, activation_function="Sigmoid"),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(ChangeSpeed(forward_pass, speedinfo={}), run_time=10)
self.wait(1)
|
ManimML_helblazer811/examples/readme_example/neural_network_dropout.py | from manim import *
from manim_ml.neural_network.animations.dropout import (
make_neural_network_dropout_animation,
)
from manim_ml.neural_network import FeedForwardLayer, NeuralNetwork
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 5.0
config.frame_width = 5.0
class DropoutNeuralNetworkScene(Scene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
FeedForwardLayer(3, rectangle_color=BLUE),
FeedForwardLayer(5, rectangle_color=BLUE),
FeedForwardLayer(3, rectangle_color=BLUE),
FeedForwardLayer(5, rectangle_color=BLUE),
FeedForwardLayer(4, rectangle_color=BLUE),
],
layer_spacing=0.4,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
self.play(
make_neural_network_dropout_animation(
nn, dropout_rate=0.25, do_forward_pass=True
)
)
self.wait(1)
|
ManimML_helblazer811/examples/readme_example/convolutional_neural_network_with_images.py | from manim import *
from PIL import Image
import numpy as np
from manim_ml.neural_network import (
Convolutional2DLayer,
FeedForwardLayer,
NeuralNetwork,
ImageLayer,
)
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
image = Image.open("../../assets/mnist/digit.jpeg")
numpy_image = np.asarray(image)
# Make nn
nn = NeuralNetwork(
[
ImageLayer(numpy_image, height=1.5),
Convolutional2DLayer(1, 7, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(ChangeSpeed(forward_pass, speedinfo={}), run_time=10)
self.wait(1)
|
ManimML_helblazer811/examples/readme_example/animating_the_forward_pass.py | from manim import *
from manim_ml.neural_network import (
Convolutional2DLayer,
FeedForwardLayer,
NeuralNetwork,
)
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
|
ManimML_helblazer811/examples/readme_example/a_simple_feed_forward_network.py | from manim import *
from manim_ml.neural_network import FeedForwardLayer, NeuralNetwork
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1200
config.frame_height = 4.0
config.frame_width = 4.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
FeedForwardLayer(num_nodes=3),
FeedForwardLayer(num_nodes=5),
FeedForwardLayer(num_nodes=3),
]
)
self.add(nn)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
|
ManimML_helblazer811/examples/readme_example/max_pooling.py | from manim import *
from PIL import Image
import numpy as np
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1900
config.frame_height = 6.0
config.frame_width = 6.0
class MaxPoolingScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 8),
Convolutional2DLayer(3, 6, 3),
MaxPooling2DLayer(kernel_size=2),
Convolutional2DLayer(5, 2, 2),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(ChangeSpeed(forward_pass, speedinfo={}), run_time=10)
self.wait(1)
|
ManimML_helblazer811/examples/readme_example/setting_up_a_scene.py | from manim import *
# Import modules here
class BasicScene(ThreeDScene):
def construct(self):
# Your code goes here
text = Text("Your first scene!")
self.add(text)
|
ManimML_helblazer811/examples/readme_example/example.py | from manim import *
from manim_ml.neural_network import (
Convolutional2DLayer,
FeedForwardLayer,
NeuralNetwork,
)
# Make the specific scene
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
class CombinedScene(ThreeDScene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
|
ManimML_helblazer811/examples/readme_example/old_example.py | from manim import *
from PIL import Image
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.neural_network import NeuralNetwork
class ConvolutionalNetworkScene(Scene):
def construct(self):
# Make nn
nn = NeuralNetwork(
[
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
self.play(nn.make_forward_pass_animation())
|
ManimML_helblazer811/examples/epsilon_nn_graph/epsilon_nn_graph.py | """
Example where I draw an epsilon nearest neighbor graph animation
"""
from cProfile import label
from manim import *
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering
import numpy as np
# Make the specific scene
config.pixel_height = 1200
config.pixel_width = 1200
config.frame_height = 12.0
config.frame_width = 12.0
def make_moon_points(num_samples=100, noise=0.1, random_seed=1):
"""Make two half moon point shapes"""
# Make sure the points are normalized
X, y = make_moons(n_samples=num_samples, noise=noise, random_state=random_seed)
X -= np.mean(X, axis=0)
X /= np.std(X, axis=0)
X[:, 1] += 0.3
# X[:, 0] /= 2 # squeeze width
return X
def make_epsilon_balls(epsilon_value, points, axes, ball_color=RED, opacity=0.0):
"""Draws epsilon balls"""
balls = []
for point in points:
ball = Circle(epsilon_value, color=ball_color, fill_opacity=opacity)
global_location = axes.coords_to_point(*point)
ball.move_to(global_location)
balls.append(ball)
return VGroup(*balls)
def make_epsilon_graph(epsilon_value, dots, points, edge_color=ORANGE):
"""Makes an epsilon nearest neighbor graph for the given dots"""
# First compute the adjacency matrix from the epsilon value and the points
num_dots = len(dots)
adjacency_matrix = np.zeros((num_dots, num_dots))
# Note: just doing lower triangular matrix
for i in range(num_dots):
for j in range(i):
dist = np.linalg.norm(dots[i].get_center() - dots[j].get_center())
is_connected = 1 if dist < epsilon_value else 0
adjacency_matrix[i, j] = is_connected
# Draw a graph based on the adjacency matrix
edges = []
for i in range(num_dots):
for j in range(i):
is_connected = adjacency_matrix[i, j]
if is_connected:
# Draw a connection between the corresponding dots
dot_a = dots[i]
dot_b = dots[j]
edge = Line(
dot_a.get_center(),
dot_b.get_center(),
color=edge_color,
stroke_width=3,
)
edges.append(edge)
return VGroup(*edges), adjacency_matrix
def perform_spectral_clustering(adjacency_matrix):
"""Performs spectral clustering given adjacency matrix"""
clustering = SpectralClustering(
n_clusters=2, affinity="precomputed", random_state=0
).fit(adjacency_matrix)
labels = clustering.labels_
return labels
def make_color_change_animation(labels, dots, colors=[ORANGE, GREEN]):
"""Makes a color change animation"""
anims = []
for index in range(len(labels)):
color = colors[labels[index]]
dot = dots[index]
anims.append(dot.animate.set_color(color))
return AnimationGroup(*anims, lag_ratio=0.0)
class EpsilonNearestNeighborScene(Scene):
def construct(
self,
num_points=200,
dot_radius=0.1,
dot_color=BLUE,
ball_color=WHITE,
noise=0.1,
ball_opacity=0.0,
random_seed=2,
):
# Make moon shape points
# Note: dot is the drawing object and point is the math concept
moon_points = make_moon_points(
num_samples=num_points, noise=noise, random_seed=random_seed
)
# Make an axes
axes = Axes(
x_range=[-6, 6, 1],
y_range=[-6, 6, 1],
x_length=12,
y_length=12,
tips=False,
axis_config={"stroke_color": "#000000"},
)
axes.scale(2.2)
self.add(axes)
# Draw points
dots = []
for point in moon_points:
axes_location = axes.coords_to_point(*point)
dot = Dot(axes_location, color=dot_color, radius=dot_radius, z_index=1)
dots.append(dot)
dots = VGroup(*dots)
self.play(Create(dots))
# Draw epsilon bar with initial value
epsilon_bar = NumberLine(
[0, 2], length=8, stroke_width=2, include_ticks=False, include_numbers=False
)
epsilon_bar.shift(4.5 * DOWN)
self.play(Create(epsilon_bar))
current_epsilon = ValueTracker(0.3)
epsilon_point = epsilon_bar.number_to_point(current_epsilon.get_value())
epsilon_dot = Dot(epsilon_point)
self.add(epsilon_dot)
label_location = epsilon_bar.number_to_point(1.0)
label_location -= DOWN * 0.1
label_text = MathTex("\epsilon").scale(1.5)
# label_text = Text("Epsilon")
label_text.move_to(epsilon_bar.get_center())
label_text.shift(DOWN * 0.5)
self.add(label_text)
# Make an updater for the dot
def dot_updater(epsilon_dot):
# Get location on epsilon_bar
point_loc = epsilon_bar.number_to_point(current_epsilon.get_value())
epsilon_dot.move_to(point_loc)
epsilon_dot.add_updater(dot_updater)
# Make the epsilon balls
epsilon_balls = make_epsilon_balls(
current_epsilon.get_value(),
moon_points,
axes,
ball_color=ball_color,
opacity=ball_opacity,
)
# Set up updater for radius of balls
def epsilon_balls_updater(epsilon_balls):
for ball in epsilon_balls:
ball.set_width(current_epsilon.get_value())
# Turn epsilon up and down
epsilon_balls.add_updater(epsilon_balls_updater)
# Fade in the initial balls
self.play(FadeIn(epsilon_balls), lag_ratio=0.0)
# Iterate through different values of epsilon
for value in [1.5, 0.5, 0.9]:
self.play(current_epsilon.animate.set_value(value), run_time=2.5)
# Perform clustering
epsilon_value = 0.9
# Show connecting graph
epsilon_graph, adjacency_matrix = make_epsilon_graph(
current_epsilon.get_value(), dots, moon_points, edge_color=WHITE
)
self.play(FadeOut(epsilon_balls))
self.play(FadeIn(epsilon_graph))
# Fade out balls
self.play(Wait(1.5))
# Perform clustering
labels = perform_spectral_clustering(adjacency_matrix)
# Change the colors of the dots
color_change_animation = make_color_change_animation(labels, dots)
self.play(color_change_animation)
# Fade out graph edges
self.play(FadeOut(epsilon_graph))
self.play(Wait(5.0))
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.