| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
hidden_states,
temb=None,
encoder_hidden_states=None,
attention_mask=None,
full_mask=None,
face_mask=None,
lip_mask=None,
audio_embedding=None,
motion_scale=None,
):
"""
Defines the forward pass for th... |
Defines the forward pass for the CrossAttnDownBlock3D class.
Parameters:
- hidden_states : torch.Tensor
The input tensor to the block.
temb : torch.Tensor, optional
The timestep embeddings.
encoder_hidden_states : torch.T... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def forward(
self,
hidden_states,
temb=None,
encoder_hidden_states=None,
):
"""
forward method for the DownBlock3D class.
Args:
hidden_states (Tensor): The input tensor to the DownBlock3D layer.
temb (Tensor, optional): The tok... |
forward method for the DownBlock3D class.
Args:
hidden_states (Tensor): The input tensor to the DownBlock3D layer.
temb (Tensor, optional): The timestep embeddings.
encoder_hidden_states (Tensor, optional): The hidden states from the encod... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def forward(
self,
hidden_states,
res_hidden_states_tuple,
temb=None,
encoder_hidden_states=None,
upsample_size=None,
attention_mask=None,
full_mask=None,
face_mask=None,
lip_mask=None,
audio_embedding=None,
motion_scale=Non... |
Forward pass for the CrossAttnUpBlock3D class.
Args:
self (CrossAttnUpBlock3D): An instance of the CrossAttnUpBlock3D class.
hidden_states (Tensor): The input hidden states tensor.
res_hidden_states_tuple (Tuple[Tensor]): A tuple of residual hidden states tensors.
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def forward(
self,
hidden_states,
res_hidden_states_tuple,
temb=None,
upsample_size=None,
encoder_hidden_states=None,
):
"""
Forward pass for the UpBlock3D class.
Args:
self (UpBlock3D): An instance of the UpBlock3D class.
... |
Forward pass for the UpBlock3D class.
Args:
self (UpBlock3D): An instance of the UpBlock3D class.
hidden_states (Tensor): The input hidden states tensor.
res_hidden_states_tuple (Tuple[Tensor]): A tuple of residual hidden states tensors.
temb (Tensor, op... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def forward(
self,
input_values,
seq_len,
attention_mask=None,
mask_time_indices=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
"""
Forward pass of the Wav2Vec model.
Args:
self: The i... |
Forward pass of the Wav2Vec model.
Args:
self: The instance of the model.
input_values: The input values (waveform) to the model.
seq_len: The sequence length of the input values.
attention_mask: Attention mask to be used for the model.
mask_... | forward | python | jdh-algo/JoyHallo | joyhallo/models/wav2vec.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/wav2vec.py | MIT |
def feature_extract(
self,
input_values,
seq_len,
):
"""
Extracts features from the input values and returns the extracted features.
Parameters:
input_values (torch.Tensor): The input values to be processed.
seq_len (torch.Tensor): The sequence length... |
Extracts features from the input values and returns the extracted features.
Parameters:
input_values (torch.Tensor): The input values to be processed.
seq_len (torch.Tensor): The sequence lengths of the input values.
Returns:
extracted_features (torch.Tensor): The extr... | feature_extract | python | jdh-algo/JoyHallo | joyhallo/models/wav2vec.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/wav2vec.py | MIT |
def encode(
self,
extract_features,
attention_mask=None,
mask_time_indices=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
"""
Encodes the input features into the output space.
Args:
extract_fe... |
Encodes the input features into the output space.
Args:
extract_features (torch.Tensor): The extracted features from the audio signal.
attention_mask (torch.Tensor, optional): Attention mask to be used for padding.
mask_time_indices (torch.Tensor, optional): Masked ... | encode | python | jdh-algo/JoyHallo | joyhallo/models/wav2vec.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/wav2vec.py | MIT |
def linear_interpolation(features, seq_len):
"""
Transpose the features and linearly interpolate them to the target sequence length.
Args:
features (torch.Tensor): The extracted features to be interpolated.
seq_len (torch.Tensor): The sequence lengths of the features.
Returns:
torch.Tensor: The interpolated featu... |
Transpose the features and linearly interpolate them to the target sequence length.
Args:
features (torch.Tensor): The extracted features to be interpolated.
seq_len (torch.Tensor): The sequence lengths of the features.
Returns:
torch.Tensor: The interpolated features.
| linear_interpolation | python | jdh-algo/JoyHallo | joyhallo/models/wav2vec.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/wav2vec.py | MIT |
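The `linear_interpolation` docstring describes resampling feature vectors along the time axis. The original operates on `torch.Tensor` inputs; the sketch below is an illustrative, list-based assumption of the same idea (the function name and index-based math here are not taken from the repo).

```python
def linear_interpolate(features, target_len):
    """Linearly resample a time-major list of feature vectors to target_len
    steps. Plain-Python stand-in for the torch-based version."""
    src_len = len(features)
    if src_len == target_len:
        return [row[:] for row in features]
    out = []
    for i in range(target_len):
        # Map output index i to a fractional position in the source sequence.
        pos = i * (src_len - 1) / (target_len - 1) if target_len > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, src_len - 1)
        frac = pos - lo
        # Blend the two neighboring feature vectors.
        out.append([a * (1 - frac) + b * frac
                    for a, b in zip(features[lo], features[hi])])
    return out
```

Resampling `[[0.0], [2.0]]` to three steps yields the midpoint `[1.0]` between the endpoints.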
def filter_non_none(dict_obj: Dict):
"""
Filters out key-value pairs from the given dictionary where the value is None.
Args:
dict_obj (Dict): The dictionary to be filtered.
Returns:
Dict: The dictionary with key-value pairs removed where the value was None.
This function creates ... |
Filters out key-value pairs from the given dictionary where the value is None.
Args:
dict_obj (Dict): The dictionary to be filtered.
Returns:
Dict: The dictionary with key-value pairs removed where the value was None.
This function creates a new dictionary containing only the key-val... | filter_non_none | python | jdh-algo/JoyHallo | joyhallo/utils/config.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/config.py | MIT |
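The `filter_non_none` docstring fully specifies the behavior, which amounts to a single dict comprehension; a minimal sketch:

```python
def filter_non_none(dict_obj):
    """Return a new dict without keys whose value is None.

    Note that falsy-but-not-None values (0, "", False) are kept.
    """
    return {k: v for k, v in dict_obj.items() if v is not None}
```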
def seed_everything(seed):
"""
Seeds all random number generators to ensure reproducibility.
Args:
seed (int): The seed value to set for all random number generators.
"""
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed % (2**32))
random.seed(seed) |
Seeds all random number generators to ensure reproducibility.
Args:
seed (int): The seed value to set for all random number generators.
| seed_everything | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
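The `seed_everything` code above seeds `torch`, CUDA, NumPy, and Python's `random`; NumPy takes `seed % (2**32)` because it only accepts 32-bit seeds. A stdlib-only sketch of the reproducibility guarantee (seeding `random` alone, as an illustration):

```python
import random

def seed_stdlib(seed):
    """Seed Python's RNG; the project version additionally seeds
    torch, CUDA, and numpy (with seed % 2**32)."""
    random.seed(seed)

# Re-seeding with the same value replays the same random stream.
seed_stdlib(1234)
first = [random.random() for _ in range(3)]
seed_stdlib(1234)
second = [random.random() for _ in range(3)]
```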
def import_filename(filename):
"""
Import a module from a given file location.
Args:
filename (str): The path to the file containing the module to be imported.
Returns:
module: The imported module.
Raises:
ImportError: If the module cannot be imported.
Example:
... |
Import a module from a given file location.
Args:
filename (str): The path to the file containing the module to be imported.
Returns:
module: The imported module.
Raises:
ImportError: If the module cannot be imported.
Example:
>>> imported_module = import_filenam... | import_filename | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def delete_additional_ckpt(base_path, num_keep):
"""
Deletes additional checkpoint files in the given directory.
Args:
base_path (str): The path to the directory containing the checkpoint files.
num_keep (int): The number of most recent checkpoint files to keep.
Returns:
None
... |
Deletes additional checkpoint files in the given directory.
Args:
base_path (str): The path to the directory containing the checkpoint files.
num_keep (int): The number of most recent checkpoint files to keep.
Returns:
None
Raises:
FileNotFoundError: If the base_path ... | delete_additional_ckpt | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
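The core of `delete_additional_ckpt` is selecting which checkpoint directories to drop while keeping the `num_keep` most recent. A filesystem-free sketch of that selection, assuming diffusers-style `checkpoint-<step>` directory names (the naming pattern is an assumption, not confirmed by the truncated row):

```python
import re

def checkpoints_to_delete(names, num_keep):
    """Given directory names like 'checkpoint-1500', return the names that
    would be deleted, keeping the num_keep highest-step checkpoints."""
    ckpts = [n for n in names if re.fullmatch(r"checkpoint-\d+", n)]
    # Sort numerically by step so 'checkpoint-900' < 'checkpoint-1500'.
    ckpts.sort(key=lambda n: int(n.split("-")[1]))
    return ckpts[:-num_keep] if num_keep > 0 else ckpts
```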
def save_videos_from_pil(pil_images, path, fps=8):
"""
Save a sequence of images as a video using the Pillow library.
Args:
pil_images (List[PIL.Image]): A list of PIL.Image objects representing the frames of the video.
path (str): The output file path for the video.
fps (int, optio... |
Save a sequence of images as a video using the Pillow library.
Args:
pil_images (List[PIL.Image]): A list of PIL.Image objects representing the frames of the video.
path (str): The output file path for the video.
fps (int, optional): The frames per second rate of the video. Defaults to... | save_videos_from_pil | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def save_videos_grid(videos: torch.Tensor, path: str, rescale=False, n_rows=6, fps=8):
"""
Save a grid of videos as an animation or video.
Args:
videos (torch.Tensor): A tensor of shape (batch_size, channels, time, height, width)
containing the videos to save.
path (str): The pa... |
Save a grid of videos as an animation or video.
Args:
videos (torch.Tensor): A tensor of shape (batch_size, channels, time, height, width)
containing the videos to save.
path (str): The path to save the video grid. Supported formats are .mp4, .avi, and .gif.
rescale (bool, ... | save_videos_grid | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def read_frames(video_path):
"""
Reads video frames from a given video file.
Args:
video_path (str): The path to the video file.
Returns:
container (av.container.InputContainer): The input container object
containing the video stream.
... |
Reads video frames from a given video file.
Args:
video_path (str): The path to the video file.
Returns:
container (av.container.InputContainer): The input container object
containing the video stream.
Raises:
FileNotFoundErr... | read_frames | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_fps(video_path):
"""
Get the frame rate (FPS) of a video file.
Args:
video_path (str): The path to the video file.
Returns:
int: The frame rate (FPS) of the video file.
"""
container = av.open(video_path)
video_stream = next(s for s in container.streams if s.type ==... |
Get the frame rate (FPS) of a video file.
Args:
video_path (str): The path to the video file.
Returns:
int: The frame rate (FPS) of the video file.
| get_fps | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def tensor_to_video(tensor, output_video_file, audio_source, fps=25):
"""
Converts a Tensor with shape [c, f, h, w] into a video and adds an audio track from the specified audio file.
Args:
tensor (Tensor): The Tensor to be converted, shaped [c, f, h, w].
output_video_file (str): The file p... |
Converts a Tensor with shape [c, f, h, w] into a video and adds an audio track from the specified audio file.
Args:
tensor (Tensor): The Tensor to be converted, shaped [c, f, h, w].
output_video_file (str): The file path where the output video will be saved.
audio_source (str): The pat... | tensor_to_video | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def compute_face_landmarks(detection_result, h, w):
"""
Compute face landmarks from a detection result.
Args:
detection_result (mediapipe.solutions.face_mesh.FaceMesh): The detection result containing face landmarks.
h (int): The height of the video frame.
w (int): The width of the ... |
Compute face landmarks from a detection result.
Args:
detection_result (mediapipe.solutions.face_mesh.FaceMesh): The detection result containing face landmarks.
h (int): The height of the video frame.
w (int): The width of the video frame.
Returns:
face_landmarks_list (lis... | compute_face_landmarks | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_landmark(file):
"""
This function takes a file as input and returns the facial landmarks detected in the file.
Args:
file (str): The path to the file containing the video or image to be processed.
Returns:
Tuple[List[float], List[float]]: A tuple containing two lists of floats ... |
This function takes a file as input and returns the facial landmarks detected in the file.
Args:
file (str): The path to the file containing the video or image to be processed.
Returns:
Tuple[List[float], List[float]]: A tuple containing two lists of floats representing the x and y coordi... | get_landmark | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_landmark_overframes(landmark_model, frames_path):
"""
This function iterates over frames and returns the facial landmarks detected in each frame.
Args:
landmark_model: mediapipe landmark model instance
frames_path (str): The path to the video frames.
Returns:
List[List[float]... |
This function iterates over frames and returns the facial landmarks detected in each frame.
Args:
landmark_model: mediapipe landmark model instance
frames_path (str): The path to the video frames.
Returns:
List[List[float], float, float]: A List containing two lists of floats representi... | get_landmark_overframes | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_lip_mask(landmarks, height, width, out_path=None, expand_ratio=2.0):
"""
Extracts the lip region from the given landmarks and saves it as an image.
Parameters:
landmarks (numpy.ndarray): Array of facial landmarks.
height (int): Height of the output lip mask image.
width (int... |
Extracts the lip region from the given landmarks and saves it as an image.
Parameters:
landmarks (numpy.ndarray): Array of facial landmarks.
height (int): Height of the output lip mask image.
width (int): Width of the output lip mask image.
out_path (pathlib.Path): Path to save... | get_lip_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_union_lip_mask(landmarks, height, width, expand_ratio=1):
"""
Extracts the lip region from the given landmarks and saves it as an image.
Parameters:
landmarks (numpy.ndarray): Array of facial landmarks.
height (int): Height of the output lip mask image.
width (int): Width of... |
Extracts the lip region from the given landmarks and saves it as an image.
Parameters:
landmarks (numpy.ndarray): Array of facial landmarks.
height (int): Height of the output lip mask image.
width (int): Width of the output lip mask image.
expand_ratio (float): Expand ratio of... | get_union_lip_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_face_mask(landmarks, height, width, out_path=None, expand_ratio=1.2):
"""
Generate a face mask based on the given landmarks.
Args:
landmarks (numpy.ndarray): The landmarks of the face.
height (int): The height of the output face mask image.
width (int): The width of the outp... |
Generate a face mask based on the given landmarks.
Args:
landmarks (numpy.ndarray): The landmarks of the face.
height (int): The height of the output face mask image.
width (int): The width of the output face mask image.
out_path (pathlib.Path): The path to save the face mask i... | get_face_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_union_face_mask(landmarks, height, width, expand_ratio=1):
"""
Generate a face mask based on the given landmarks.
Args:
landmarks (numpy.ndarray): The landmarks of the face.
height (int): The height of the output face mask image.
width (int): The width of the output face mas... |
Generate a face mask based on the given landmarks.
Args:
landmarks (numpy.ndarray): The landmarks of the face.
height (int): The height of the output face mask image.
width (int): The width of the output face mask image.
expand_ratio (float): Expand ratio of mask.
Returns:
... | get_union_face_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_mask(file, cache_dir, face_expand_raio):
"""
Generate a face mask based on the given landmarks and save it to the specified cache directory.
Args:
file (str): The path to the file containing the landmarks.
cache_dir (str): The directory to save the generated face mask.
Returns:... |
Generate a face mask based on the given landmarks and save it to the specified cache directory.
Args:
file (str): The path to the file containing the landmarks.
cache_dir (str): The directory to save the generated face mask.
Returns:
None
| get_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def expand_region(region, image_w, image_h, expand_ratio=1.0):
"""
Expand the given region by a specified ratio.
Args:
region (tuple): A tuple containing the coordinates (min_x, max_x, min_y, max_y) of the region.
image_w (int): The width of the image.
image_h (int): The height of th... |
Expand the given region by a specified ratio.
Args:
region (tuple): A tuple containing the coordinates (min_x, max_x, min_y, max_y) of the region.
image_w (int): The width of the image.
image_h (int): The height of the image.
expand_ratio (float, optional): The ratio by which th... | expand_region | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
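The `expand_region` docstring describes scaling a bounding region about its center and clamping it to the image. A sketch of that documented behavior (the exact rounding used in the repo is an assumption):

```python
def expand_region(region, image_w, image_h, expand_ratio=1.0):
    """Expand (min_x, max_x, min_y, max_y) about its center by expand_ratio,
    clamping the result to the image bounds."""
    min_x, max_x, min_y, max_y = region
    cx, cy = (min_x + max_x) / 2, (min_y + max_y) / 2
    half_w = (max_x - min_x) * expand_ratio / 2
    half_h = (max_y - min_y) * expand_ratio / 2
    # Clamp each edge so the expanded box stays inside the image.
    return (max(0, int(cx - half_w)), min(image_w, int(cx + half_w)),
            max(0, int(cy - half_h)), min(image_h, int(cy + half_h)))
```

Doubling a 20x20 box centered at (20, 20) gives a 40x40 box, and clamping kicks in once the expanded edges would leave the image.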
def get_blur_mask(file_path, output_file_path, resize_dim=(64, 64), kernel_size=(101, 101)):
"""
Read, resize, blur, normalize, and save an image.
Parameters:
file_path (str): Path to the input image file.
output_file_path (str): Path to save the blurred mask image.
resize_dim (tuple)... |
Read, resize, blur, normalize, and save an image.
Parameters:
file_path (str): Path to the input image file.
output_file_path (str): Path to save the blurred mask image.
resize_dim (tuple): Dimensions to resize the images to.
kernel_size (tuple): Size of the kernel to use for Gaussia... | get_blur_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def blur_mask(mask, resize_dim=(64, 64), kernel_size=(51, 51)):
"""
Resize, blur, and normalize a mask.
Parameters:
mask (numpy.ndarray): The input mask to process.
resize_dim (tuple): Dimensions to resize the images to.
kernel_size (tuple): Size of the kernel to use for Gaussian bl... |
Resize, blur, and normalize a mask.
Parameters:
mask (numpy.ndarray): The input mask to process.
resize_dim (tuple): Dimensions to resize the images to.
kernel_size (tuple): Size of the kernel to use for Gaussian blur.
| blur_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_background_mask(file_path, output_file_path):
"""
Read an image, invert its values, and save the result.
Parameters:
file_path (str): Path to the input image file.
output_file_path (str): Path to save the inverted image.
"""
# Read the image
image = cv2.imread(... |
Read an image, invert its values, and save the result.
Parameters:
file_path (str): Path to the input image file.
output_file_path (str): Path to save the inverted image.
| get_background_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_sep_face_mask(file_path1, file_path2, output_file_path):
"""
Read two images, subtract the second one from the first, and save the result.
Parameters:
file_path1 (str): Path to the first mask image.
file_path2 (str): Path to the second mask image.
output_file_path (str): Path to save the subtracted image.
"""
# Read the images
mask1 = cv2.imread(file_path1, c... |
Read two images, subtract the second one from the first, and save the result.
Parameters:
file_path1 (str): Path to the first mask image.
file_path2 (str): Path to the second mask image.
output_file_path (str): Path to save the subtracted image.
| get_sep_face_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def load_checkpoint(cfg, save_dir, accelerator):
"""
Load the most recent checkpoint from the specified directory.
This function loads the latest checkpoint from the `save_dir` if the `resume_from_checkpoint` parameter is set to "latest".
If a specific checkpoint is provided in `resume_from_checkpoint`... |
Load the most recent checkpoint from the specified directory.
This function loads the latest checkpoint from the `save_dir` if the `resume_from_checkpoint` parameter is set to "latest".
If a specific checkpoint is provided in `resume_from_checkpoint`, it loads that checkpoint. If no checkpoint is found,
... | load_checkpoint | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def compute_snr(noise_scheduler, timesteps):
"""
Computes SNR as per
https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/
521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L847-L849
"""
alphas_cumprod = noise_scheduler.alphas_cumprod
sqrt_alph... |
Computes SNR as per
https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/
521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L847-L849
| compute_snr | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
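The Min-SNR reference linked in the `compute_snr` docstring defines SNR(t) = alpha_bar_t / (1 - alpha_bar_t), computed via the square-root terms. A plain-float sketch of the torch code (the tensor gathering and broadcasting of the original are omitted):

```python
def compute_snr_values(alphas_cumprod, timesteps):
    """SNR(t) = alpha_bar_t / (1 - alpha_bar_t), expressed through the
    sqrt terms as in the Min-SNR reference implementation."""
    out = []
    for t in timesteps:
        a = alphas_cumprod[t]
        sqrt_a = a ** 0.5
        sqrt_one_minus_a = (1.0 - a) ** 0.5
        out.append((sqrt_a / sqrt_one_minus_a) ** 2)
    return out
```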
def extract_audio_from_videos(video_path: Path, audio_output_path: Path) -> Path:
"""
Extract audio from a video file and save it as a WAV file.
This function uses ffmpeg to extract the audio stream from a given video file and saves it as a WAV file
in the specified output directory.
Args:
... |
Extract audio from a video file and save it as a WAV file.
This function uses ffmpeg to extract the audio stream from a given video file and saves it as a WAV file
in the specified output directory.
Args:
video_path (Path): The path to the input video file.
output_dir (Path): The dire... | extract_audio_from_videos | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def convert_video_to_images(video_path: Path, output_dir: Path) -> Path:
"""
Convert a video file into a sequence of images.
This function uses ffmpeg to convert each frame of the given video file into an image. The images are saved
in a directory named after the video file stem under the specified out... |
Convert a video file into a sequence of images.
This function uses ffmpeg to convert each frame of the given video file into an image. The images are saved
in a directory named after the video file stem under the specified output directory.
Args:
video_path (Path): The path to the input video... | convert_video_to_images | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def get_union_mask(masks):
"""
Compute the union of a list of masks.
This function takes a list of masks and computes their union by taking the maximum value at each pixel location.
Additionally, it finds the bounding box of the non-zero regions in the mask and sets the bounding box area to white.
... |
Compute the union of a list of masks.
This function takes a list of masks and computes their union by taking the maximum value at each pixel location.
Additionally, it finds the bounding box of the non-zero regions in the mask and sets the bounding box area to white.
Args:
masks (list of np.n... | get_union_mask | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
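The union step described for `get_union_mask` is an elementwise maximum over the stacked masks. A nested-list sketch of that step (the original works on NumPy arrays and additionally whitens the bounding box of the result, which is omitted here):

```python
def union_masks(masks):
    """Elementwise maximum over a list of equal-shape 2D masks."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[max(m[y][x] for m in masks) for x in range(w)]
            for y in range(h)]
```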
def move_final_checkpoint(save_dir, module_dir, prefix):
"""
Move the final checkpoint file to the save directory.
This function identifies the latest checkpoint file based on the given prefix and moves it to the specified save directory.
Args:
save_dir (str): The directory where the final che... |
Move the final checkpoint file to the save directory.
This function identifies the latest checkpoint file based on the given prefix and moves it to the specified save directory.
Args:
save_dir (str): The directory where the final checkpoint file should be saved.
module_dir (str): The dire... | move_final_checkpoint | python | jdh-algo/JoyHallo | joyhallo/utils/util.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/utils/util.py | MIT |
def forward(
self,
noisy_latents: torch.Tensor,
timesteps: torch.Tensor,
ref_image_latents: torch.Tensor,
face_emb: torch.Tensor,
audio_emb: torch.Tensor,
mask: torch.Tensor,
full_mask: torch.Tensor,
face_mask: torch.Tensor,
lip_mask: torch... |
simple docstring to prevent pylint error
| forward | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def get_attention_mask(mask: torch.Tensor, weight_dtype: torch.dtype) -> torch.Tensor:
"""
Rearrange the mask tensors to the required format.
Args:
mask (torch.Tensor): The input mask tensor.
weight_dtype (torch.dtype): The data type for the mask tensor.
Returns:
torch.Tensor: ... |
Rearrange the mask tensors to the required format.
Args:
mask (torch.Tensor): The input mask tensor.
weight_dtype (torch.dtype): The data type for the mask tensor.
Returns:
torch.Tensor: The rearranged mask tensor.
| get_attention_mask | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def get_noise_scheduler(cfg: argparse.Namespace) -> Tuple[DDIMScheduler, DDIMScheduler]:
"""
Create noise scheduler for training.
Args:
cfg (argparse.Namespace): Configuration object.
Returns:
Tuple[DDIMScheduler, DDIMScheduler]: Train noise scheduler and validation noise scheduler.
... |
Create noise scheduler for training.
Args:
cfg (argparse.Namespace): Configuration object.
Returns:
Tuple[DDIMScheduler, DDIMScheduler]: Train noise scheduler and validation noise scheduler.
| get_noise_scheduler | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def process_audio_emb(audio_emb: torch.Tensor) -> torch.Tensor:
"""
Process the audio embedding to concatenate with other tensors.
Parameters:
audio_emb (torch.Tensor): The audio embedding tensor to process.
Returns:
concatenated_tensors (List[torch.Tensor]): The concatenated tensor li... |
Process the audio embedding to concatenate with other tensors.
Parameters:
audio_emb (torch.Tensor): The audio embedding tensor to process.
Returns:
concatenated_tensors (List[torch.Tensor]): The concatenated tensor list.
| process_audio_emb | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
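The `process_audio_emb` docstring describes gathering neighboring audio embeddings into per-frame context windows. A list-based sketch, assuming a 5-frame window (offsets -2..+2, clamped at the ends) in line with the Hallo codebase this repo derives from; the window size is an assumption, not stated in the truncated row:

```python
def context_windows(audio_emb):
    """For each frame i, gather embeddings at offsets -2..+2, clamping
    indices to [0, n-1] so edge frames repeat their nearest neighbor."""
    n = len(audio_emb)
    return [[audio_emb[max(0, min(i + j, n - 1))] for j in range(-2, 3)]
            for i in range(n)]
```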
def log_validation(
accelerator: Accelerator,
vae: AutoencoderKL,
net: Net,
scheduler: DDIMScheduler,
width: int,
height: int,
clip_length: int = 24,
generator: torch.Generator = None,
cfg: dict = None,
save_dir: str = None,
global_step: int = 0,
times: int = None,
fa... |
Log validation video during the training process.
Args:
accelerator (Accelerator): The accelerator for distributed training.
vae (AutoencoderKL): The autoencoder model.
net (Net): The main neural network model.
scheduler (DDIMScheduler): The scheduler for noise.
width (... | log_validation | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def get_model(cfg: argparse.Namespace) -> None:
"""
Trains the model using the given configuration (cfg).
Args:
cfg (dict): The configuration dictionary containing the parameters for training.
Notes:
- This function trains the model using the given configuration.
- It initializ... |
Trains the model using the given configuration (cfg).
Args:
cfg (dict): The configuration dictionary containing the parameters for training.
Notes:
- This function trains the model using the given configuration.
- It initializes the necessary components for training, such as the p... | get_model | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def load_config(config_path: str) -> dict:
"""
Loads the configuration file.
Args:
config_path (str): Path to the configuration file.
Returns:
dict: The configuration dictionary.
"""
if config_path.endswith(".yaml"):
return OmegaConf.load(config_path)
if config_pat... |
Loads the configuration file.
Args:
config_path (str): Path to the configuration file.
Returns:
dict: The configuration dictionary.
| load_config | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def predict(image, audio, pose_weight, face_weight, lip_weight, face_expand_ratio, progress=gr.Progress(track_tqdm=True)):
"""
Create a gradio interface with the configs.
"""
_ = progress
config = {
'ref_img_path': image,
'audio_path': audio,
'pose_weight': pose_weight,
... |
Create a gradio interface with the configs.
| predict | python | jdh-algo/JoyHallo | scripts/app.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/app.py | MIT |
def setup_directories(video_path: Path) -> dict:
"""
Setup directories for storing processed files.
Args:
video_path (Path): Path to the video file.
Returns:
dict: A dictionary containing paths for various directories.
"""
base_dir = video_path.parent.parent
dirs = {
... |
Setup directories for storing processed files.
Args:
video_path (Path): Path to the video file.
Returns:
dict: A dictionary containing paths for various directories.
| setup_directories | python | jdh-algo/JoyHallo | scripts/data_preprocess.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/data_preprocess.py | MIT |
def process_single_video(video_path: Path,
output_dir: Path,
image_processor: ImageProcessorForDataProcessing,
audio_processor: AudioProcessor,
step: int) -> None:
"""
Process a single video file.
Args:
... |
Process a single video file.
Args:
video_path (Path): Path to the video file.
output_dir (Path): Directory to save the output.
image_processor (ImageProcessorForDataProcessing): Image processor object.
audio_processor (AudioProcessor): Audio processor object.
gpu_status... | process_single_video | python | jdh-algo/JoyHallo | scripts/data_preprocess.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/data_preprocess.py | MIT |
def process_all_videos(input_video_list: List[Path], output_dir: Path, step: int) -> None:
"""
Process all videos in the input list.
Args:
input_video_list (List[Path]): List of video paths to process.
output_dir (Path): Directory to save the output.
gpu_status (bool): Whether to us... |
Process all videos in the input list.
Args:
input_video_list (List[Path]): List of video paths to process.
output_dir (Path): Directory to save the output.
step (int): The preprocessing step to run.
| process_all_videos | python | jdh-algo/JoyHallo | scripts/data_preprocess.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/data_preprocess.py | MIT |
def get_video_paths(source_dir: Path, parallelism: int, rank: int) -> List[Path]:
"""
Get paths of videos to process, partitioned for parallel processing.
Args:
source_dir (Path): Source directory containing videos.
parallelism (int): Level of parallelism.
rank (int): Rank for distr... |
Get paths of videos to process, partitioned for parallel processing.
Args:
source_dir (Path): Source directory containing videos.
parallelism (int): Level of parallelism.
rank (int): Rank for distributed processing.
Returns:
List[Path]: List of video paths to process.
| get_video_paths | python | jdh-algo/JoyHallo | scripts/data_preprocess.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/data_preprocess.py | MIT |
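The `get_video_paths` row above partitions a video list across parallel workers by rank. A minimal stand-alone sketch of that partitioning; the stride-based scheme (`rank::parallelism` over a sorted list) is an assumption about how the repo shards, not a confirmed detail:

```python
from pathlib import Path
from typing import List

def partition_paths(paths: List[Path], parallelism: int, rank: int) -> List[Path]:
    """Give worker `rank` every `parallelism`-th path from a sorted list,
    so all workers together cover the list exactly once with no overlap."""
    if not 0 <= rank < parallelism:
        raise ValueError("rank must satisfy 0 <= rank < parallelism")
    return sorted(paths)[rank::parallelism]
```

Sorting first keeps the assignment deterministic across runs regardless of filesystem enumeration order.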
def construct_meta_info(frames_dir_path: Path) -> dict:
"""
Construct meta information for a given frames directory.
Args:
frames_dir_path (Path): The path to the frames directory.
Returns:
dict: A dictionary containing the meta information for the frames directory, or None if the requ... |
Construct meta information for a given frames directory.
Args:
frames_dir_path (Path): The path to the frames directory.
Returns:
dict: A dictionary containing the meta information for the frames directory, or None if the required files do not exist.
| construct_meta_info | python | jdh-algo/JoyHallo | scripts/extract_meta_info_stage1.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/extract_meta_info_stage1.py | MIT |
def main():
"""
Main function to extract meta info for training.
"""
parser = argparse.ArgumentParser()
parser.add_argument("-r", "--root_path", type=str,
required=True, help="Root path of the video directories")
parser.add_argument("-n", "--dataset_name", type=str,
... |
Main function to extract meta info for training.
| main | python | jdh-algo/JoyHallo | scripts/extract_meta_info_stage1.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/extract_meta_info_stage1.py | MIT |
def extract_meta_info(video_path: str) -> dict:
"""
Extract meta information for a given video file.
Args:
video_path (str): The path to the video file.
Returns:
dict: A dictionary containing the meta information for the video.
"""
mask_path = construct_paths(
video_pat... |
Extract meta information for a given video file.
Args:
video_path (str): The path to the video file.
Returns:
dict: A dictionary containing the meta information for the video.
| extract_meta_info | python | jdh-algo/JoyHallo | scripts/extract_meta_info_stage2.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/extract_meta_info_stage2.py | MIT |
def forward(
self,
noisy_latents,
timesteps,
ref_image_latents,
face_emb,
face_mask,
uncond_fwd: bool = False,
):
"""
Forward pass of the model.
Args:
self (Net): The model instance.
noisy_latents (torch.Tensor):... |
Forward pass of the model.
Args:
self (Net): The model instance.
noisy_latents (torch.Tensor): Noisy latents.
timesteps (torch.Tensor): Timesteps.
ref_image_latents (torch.Tensor): Reference image latents.
face_emb (torch.Tensor): Face embeddi... | forward | python | jdh-algo/JoyHallo | scripts/train_stage1_alltrain.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/train_stage1_alltrain.py | MIT |
def get_noise_scheduler(cfg: argparse.Namespace):
"""
Create noise scheduler for training
Args:
cfg (omegaconf.dictconfig.DictConfig): Configuration object.
Returns:
train noise scheduler and val noise scheduler
"""
sched_kwargs = OmegaConf.to_container(cfg.noise_scheduler_kwar... |
Create noise scheduler for training
Args:
cfg (omegaconf.dictconfig.DictConfig): Configuration object.
Returns:
train noise scheduler and val noise scheduler
| get_noise_scheduler | python | jdh-algo/JoyHallo | scripts/train_stage1_alltrain.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/train_stage1_alltrain.py | MIT |
def log_validation(
vae,
net,
scheduler,
accelerator,
width,
height,
imageproj,
cfg,
save_dir,
global_step,
face_analysis_model_path,
):
"""
Log validation generation image.
Args:
vae (nn.Module): Variational Autoencoder model.
net (Net): Main mod... |
Log validation generation image.
Args:
vae (nn.Module): Variational Autoencoder model.
net (Net): Main model.
scheduler (diffusers.SchedulerMixin): Noise scheduler.
accelerator (accelerate.Accelerator): Accelerator for training.
width (int): Width of the input images.
... | log_validation | python | jdh-algo/JoyHallo | scripts/train_stage1_alltrain.py | https://github.com/jdh-algo/JoyHallo/blob/master/scripts/train_stage1_alltrain.py | MIT |
def preprocess(
self, video_path: Path | None, image_path: Path | None
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Loads and preprocesses a video and an image.
If either path is None, no preprocessing will be done for that input.
Args:
video_path: Path to the vid... |
Loads and preprocesses a video and an image.
If either path is None, no preprocessing will be done for that input.
Args:
video_path: Path to the video file to load
image_path: Path to the image file to load
Returns:
A tuple containing:
... | preprocess | python | THUDM/CogVideo | finetune/datasets/i2v_dataset.py | https://github.com/THUDM/CogVideo/blob/master/finetune/datasets/i2v_dataset.py | Apache-2.0 |
def preprocess_image_with_resize(
image_path: Path | str,
height: int,
width: int,
) -> torch.Tensor:
"""
Loads and resizes a single image.
Args:
image_path: Path to the image file.
height: Target height for resizing.
width: Target width for resizing.
Returns:
... |
Loads and resizes a single image.
Args:
image_path: Path to the image file.
height: Target height for resizing.
width: Target width for resizing.
Returns:
torch.Tensor: Image tensor with shape [C, H, W] where:
C = number of channels (3 for RGB)
H = ... | preprocess_image_with_resize | python | THUDM/CogVideo | finetune/datasets/utils.py | https://github.com/THUDM/CogVideo/blob/master/finetune/datasets/utils.py | Apache-2.0 |
def preprocess_video_with_resize(
video_path: Path | str,
max_num_frames: int,
height: int,
width: int,
) -> torch.Tensor:
"""
Loads and resizes a single video.
The function processes the video through these steps:
1. If video frame count > max_num_frames, downsample frames evenly
... |
Loads and resizes a single video.
The function processes the video through these steps:
1. If video frame count > max_num_frames, downsample frames evenly
2. If video dimensions don't match (height, width), resize frames
Args:
video_path: Path to the video file.
max_num_frames... | preprocess_video_with_resize | python | THUDM/CogVideo | finetune/datasets/utils.py | https://github.com/THUDM/CogVideo/blob/master/finetune/datasets/utils.py | Apache-2.0 |
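`preprocess_video_with_resize` above downsamples frames "evenly" when the clip exceeds `max_num_frames`. A sketch of just the index-selection step (the decode/resize I/O is omitted; evenly spaced indices via `np.linspace` is an assumption about what "evenly" means here):

```python
import numpy as np

def evenly_sampled_indices(num_frames: int, max_num_frames: int) -> np.ndarray:
    """Indices of frames to keep: all of them if the clip is short enough,
    otherwise max_num_frames indices spread evenly from first to last frame."""
    if num_frames <= max_num_frames:
        return np.arange(num_frames)
    return np.linspace(0, num_frames - 1, max_num_frames).round().astype(int)
```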
def preprocess_video_with_buckets(
video_path: Path,
resolution_buckets: List[Tuple[int, int, int]],
) -> torch.Tensor:
"""
Args:
video_path: Path to the video file.
resolution_buckets: List of tuples (num_frames, height, width) representing
available resolution buckets.
... |
Args:
video_path: Path to the video file.
resolution_buckets: List of tuples (num_frames, height, width) representing
available resolution buckets.
Returns:
torch.Tensor: Video tensor with shape [F, C, H, W] where:
F = number of frames
C = number of ... | preprocess_video_with_buckets | python | THUDM/CogVideo | finetune/datasets/utils.py | https://github.com/THUDM/CogVideo/blob/master/finetune/datasets/utils.py | Apache-2.0 |
def register(model_name: str, training_type: Literal["lora", "sft"], trainer_cls: Trainer):
"""Register a model and its associated functions for a specific training type.
Args:
model_name (str): Name of the model to register (e.g. "cogvideox-5b")
training_type (Literal["lora", "sft"]): Type of ... | Register a model and its associated functions for a specific training type.
Args:
model_name (str): Name of the model to register (e.g. "cogvideox-5b")
training_type (Literal["lora", "sft"]): Type of training - either "lora" or "sft"
trainer_cls (Trainer): Trainer class to register.
| register | python | THUDM/CogVideo | finetune/models/utils.py | https://github.com/THUDM/CogVideo/blob/master/finetune/models/utils.py | Apache-2.0 |
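The `register` row describes a model/trainer registry keyed by model name and training type. A minimal self-contained version; the `SUPPORTED_MODELS` nested-dict layout and the duplicate check are assumptions for illustration:

```python
from typing import Literal

# Hypothetical registry: {model_name: {training_type: trainer_cls}}
SUPPORTED_MODELS: dict = {}

def register(model_name: str, training_type: Literal["lora", "sft"], trainer_cls: type) -> None:
    """Register trainer_cls under (model_name, training_type); reject duplicates."""
    by_type = SUPPORTED_MODELS.setdefault(model_name, {})
    if training_type in by_type:
        raise ValueError(f"{model_name}/{training_type} already registered")
    by_type[training_type] = trainer_cls

def get_trainer(model_name: str, training_type: str) -> type:
    """Look a registered trainer class back up."""
    return SUPPORTED_MODELS[model_name][training_type]
```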
def validation_step(
self, eval_data: Dict[str, Any], pipe: CogVideoXImageToVideoPipeline
) -> List[Tuple[str, Image.Image | List[Image.Image]]]:
"""
Return the data that needs to be saved. For videos, the data format is List[PIL],
and for images, the data format is PIL
"""
... |
Return the data that needs to be saved. For videos, the data format is List[PIL],
and for images, the data format is PIL
| validation_step | python | THUDM/CogVideo | finetune/models/cogvideox_i2v/lora_trainer.py | https://github.com/THUDM/CogVideo/blob/master/finetune/models/cogvideox_i2v/lora_trainer.py | Apache-2.0 |
def parse_args(cls):
"""Parse command line arguments and return Args instance"""
parser = argparse.ArgumentParser()
# Required arguments
parser.add_argument("--model_path", type=str, required=True)
parser.add_argument("--model_name", type=str, required=True)
parser.add_ar... | Parse command line arguments and return Args instance | parse_args | python | THUDM/CogVideo | finetune/schemas/args.py | https://github.com/THUDM/CogVideo/blob/master/finetune/schemas/args.py | Apache-2.0 |
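The `parse_args` row shows the required-flag pattern with `argparse`. A reduced, runnable sketch keeping only the two required flags visible in the row; the `--training_type` option is a hypothetical extra added to show the defaulted-optional pattern:

```python
import argparse

def parse_args(argv=None) -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    # Required arguments, as in the row above.
    parser.add_argument("--model_path", type=str, required=True)
    parser.add_argument("--model_name", type=str, required=True)
    # Hypothetical optional argument with a default, so downstream code can rely on it.
    parser.add_argument("--training_type", choices=["lora", "sft"], default="lora")
    return parser.parse_args(argv)
```

Passing `argv` explicitly (instead of reading `sys.argv`) keeps the function testable.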
def cast_training_params(model: Union[torch.nn.Module, List[torch.nn.Module]], dtype=torch.float32):
"""
Casts the training parameters of the model to the specified data type.
Args:
model: The PyTorch model whose parameters will be cast.
dtype: The data type to which the model parameters wi... |
Casts the training parameters of the model to the specified data type.
Args:
model: The PyTorch model whose parameters will be cast.
dtype: The data type to which the model parameters will be cast.
| cast_training_params | python | THUDM/CogVideo | finetune/utils/torch_utils.py | https://github.com/THUDM/CogVideo/blob/master/finetune/utils/torch_utils.py | Apache-2.0 |
def generate_video(
prompt: str,
model_path: str,
output_path: str = "./output.mp4",
num_inference_steps: int = 50,
guidance_scale: float = 6.0,
num_videos_per_prompt: int = 1,
quantization_scheme: str = "fp8",
dtype: torch.dtype = torch.bfloat16,
num_frames: int = 81,
fps: int =... |
Generates a video based on the given prompt and saves it to the specified path.
Parameters:
- prompt (str): The description of the video to be generated.
- model_path (str): The path of the pre-trained model to be used.
- output_path (str): The path where the generated video will be saved.
- n... | generate_video | python | THUDM/CogVideo | inference/cli_demo_quantization.py | https://github.com/THUDM/CogVideo/blob/master/inference/cli_demo_quantization.py | Apache-2.0 |
def encode_video(model_path, video_path, dtype, device):
"""
Loads a pre-trained AutoencoderKLCogVideoX model and encodes the video frames.
Parameters:
- model_path (str): The path to the pre-trained model.
- video_path (str): The path to the video file.
- dtype (torch.dtype): The data type for... |
Loads a pre-trained AutoencoderKLCogVideoX model and encodes the video frames.
Parameters:
- model_path (str): The path to the pre-trained model.
- video_path (str): The path to the video file.
- dtype (torch.dtype): The data type for computation.
- device (str): The device to use for computat... | encode_video | python | THUDM/CogVideo | inference/cli_vae_demo.py | https://github.com/THUDM/CogVideo/blob/master/inference/cli_vae_demo.py | Apache-2.0 |
def decode_video(model_path, encoded_tensor_path, dtype, device):
"""
Loads a pre-trained AutoencoderKLCogVideoX model and decodes the encoded video frames.
Parameters:
- model_path (str): The path to the pre-trained model.
- encoded_tensor_path (str): The path to the encoded tensor file.
- dty... |
Loads a pre-trained AutoencoderKLCogVideoX model and decodes the encoded video frames.
Parameters:
- model_path (str): The path to the pre-trained model.
- encoded_tensor_path (str): The path to the encoded tensor file.
- dtype (torch.dtype): The data type for computation.
- device (str): The ... | decode_video | python | THUDM/CogVideo | inference/cli_vae_demo.py | https://github.com/THUDM/CogVideo/blob/master/inference/cli_vae_demo.py | Apache-2.0 |
def save_video(tensor, output_path):
"""
Saves the video frames to a video file.
Parameters:
- tensor (torch.Tensor): The video frames' tensor.
- output_path (str): The path to save the output video.
"""
tensor = tensor.to(dtype=torch.float32)
frames = tensor[0].squeeze(0).permute(1, 2,... |
Saves the video frames to a video file.
Parameters:
- tensor (torch.Tensor): The video frames' tensor.
- output_path (str): The path to save the output video.
| save_video | python | THUDM/CogVideo | inference/cli_vae_demo.py | https://github.com/THUDM/CogVideo/blob/master/inference/cli_vae_demo.py | Apache-2.0 |
def convert_prompt(prompt: str, retry_times: int = 3, type: str = "t2v", image_path: str = None):
"""
Convert a prompt to a format that can be used by the model for inference
"""
client = OpenAI()
## If you using with Azure OpenAI, please uncomment the below line and comment the above line
# cl... |
Convert a prompt to a format that can be used by the model for inference
| convert_prompt | python | THUDM/CogVideo | inference/convert_demo.py | https://github.com/THUDM/CogVideo/blob/master/inference/convert_demo.py | Apache-2.0 |
def read_video(
filename: str,
start_pts: Union[float, Fraction] = 0,
end_pts: Optional[Union[float, Fraction]] = None,
pts_unit: str = "pts",
output_format: str = "THWC",
) -> Tuple[torch.Tensor, torch.Tensor, Dict[str, Any]]:
"""
Reads a video from a file, returning both the video frames a... |
Reads a video from a file, returning both the video frames and the audio frames
Args:
filename (str): path to the video file
start_pts (int if pts_unit = 'pts', float / Fraction if pts_unit = 'sec', optional):
The start presentation time of the video
end_pts (int if pts_uni... | read_video | python | THUDM/CogVideo | sat/data_video.py | https://github.com/THUDM/CogVideo/blob/master/sat/data_video.py | Apache-2.0 |
def process_video(
video_path,
image_size=None,
duration=None,
num_frames=4,
wanted_fps=None,
actual_fps=None,
skip_frms_num=0.0,
nb_read_frames=None,
):
"""
video_path: str or io.BytesIO
image_size: .
duration: preknow the duration to speed up by seeking to sampled start... |
video_path: str or io.BytesIO
image_size: .
duration: the duration, known in advance, to speed up loading by seeking to the sampled start. TODO: bypass if unknown.
num_frames: wanted num_frames.
wanted_fps: .
skip_frms_num: ignore the first and the last xx frames, avoiding transitions.
| process_video | python | THUDM/CogVideo | sat/data_video.py | https://github.com/THUDM/CogVideo/blob/master/sat/data_video.py | Apache-2.0 |
def __init__(self, data_dir, video_size, fps, max_num_frames, skip_frms_num=3):
"""
skip_frms_num: ignore the first and the last xx frames, avoiding transitions.
"""
super(SFTDataset, self).__init__()
self.video_size = video_size
self.fps = fps
self.max_num_frame... |
skip_frms_num: ignore the first and the last xx frames, avoiding transitions.
| __init__ | python | THUDM/CogVideo | sat/data_video.py | https://github.com/THUDM/CogVideo/blob/master/sat/data_video.py | Apache-2.0 |
def log_conditionings(self, batch: Dict, n: int) -> Dict:
"""
Defines heuristics to log different conditionings.
These can be lists of strings (text-to-image), tensors, ints, ...
"""
image_h, image_w = batch[self.input_key].shape[3:]
log = dict()
for embedder in ... |
Defines heuristics to log different conditionings.
These can be lists of strings (text-to-image), tensors, ints, ...
| log_conditionings | python | THUDM/CogVideo | sat/diffusion_video.py | https://github.com/THUDM/CogVideo/blob/master/sat/diffusion_video.py | Apache-2.0 |
def get_3d_sincos_pos_embed(
embed_dim,
grid_height,
grid_width,
t_size,
cls_token=False,
height_interpolation=1.0,
width_interpolation=1.0,
time_interpolation=1.0,
):
"""
grid_size: int of the grid height and width
t_size: int of the temporal size
return:
pos_embed: ... |
grid_size: int of the grid height and width
t_size: int of the temporal size
return:
pos_embed: [t_size*grid_size * grid_size, embed_dim] or [1+t_size*grid_size * grid_size, embed_dim]
(w/ or w/o cls_token)
| get_3d_sincos_pos_embed | python | THUDM/CogVideo | sat/dit_video_concat.py | https://github.com/THUDM/CogVideo/blob/master/sat/dit_video_concat.py | Apache-2.0 |
def get_2d_sincos_pos_embed(embed_dim, grid_height, grid_width, cls_token=False, extra_tokens=0):
"""
grid_size: int of the grid height and width
return:
pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
"""
grid_h = np.arange(grid_height, dt... |
grid_size: int of the grid height and width
return:
pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
| get_2d_sincos_pos_embed | python | THUDM/CogVideo | sat/dit_video_concat.py | https://github.com/THUDM/CogVideo/blob/master/sat/dit_video_concat.py | Apache-2.0 |
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
"""
assert embed_dim % 2 == 0
omega = np.arange(embed_dim // 2, dtype=np.float64)
omega /= embed_dim / 2.0
omega = 1.... |
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
| get_1d_sincos_pos_embed_from_grid | python | THUDM/CogVideo | sat/dit_video_concat.py | https://github.com/THUDM/CogVideo/blob/master/sat/dit_video_concat.py | Apache-2.0 |
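The row above is the classic 1-D sin/cos positional embedding (frequencies `1/10000^(2i/D)`, sine half concatenated with cosine half). A self-contained numpy version consistent with the visible lines:

```python
import numpy as np

def sincos_pos_embed_1d(embed_dim: int, pos: np.ndarray) -> np.ndarray:
    """pos: (M,) positions -> (M, embed_dim) embedding: sin half, then cos half."""
    assert embed_dim % 2 == 0
    omega = np.arange(embed_dim // 2, dtype=np.float64) / (embed_dim / 2.0)
    omega = 1.0 / 10000**omega                                   # (D/2,) frequencies
    angles = np.einsum("m,d->md", pos.astype(np.float64), omega)  # (M, D/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (M, D)
```

The 2-D and 3-D variants in the neighboring rows are built by concatenating this 1-D embedding per grid axis.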
def is_power_of_two(n):
"""
chat.openai.com/chat
Return True if n is a power of 2, otherwise return False.
The function is_power_of_two takes an integer n as input and returns True if n is a power of 2, otherwise it returns False.
The function works by first checking if n is less than or equal to 0... |
chat.openai.com/chat
Return True if n is a power of 2, otherwise return False.
The function is_power_of_two takes an integer n as input and returns True if n is a power of 2, otherwise it returns False.
The function works by first checking if n is less than or equal to 0. If n is less than or equal to... | is_power_of_two | python | THUDM/CogVideo | sat/sgm/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/util.py | Apache-2.0 |
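The check the docstring above describes in prose is the standard bit trick, in full:

```python
def is_power_of_two(n: int) -> bool:
    """A positive integer is a power of two iff it has exactly one set bit:
    n & (n - 1) clears the lowest set bit, leaving zero only in that case."""
    return n > 0 and (n & (n - 1)) == 0
```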
def append_dims(x, target_dims):
"""Appends dimensions to the end of a tensor until it has target_dims dimensions."""
dims_to_append = target_dims - x.ndim
if dims_to_append < 0:
raise ValueError(f"input has {x.ndim} dims but target_dims is {target_dims}, which is less")
return x[(...,) + (None,... | Appends dimensions to the end of a tensor until it has target_dims dimensions. | append_dims | python | THUDM/CogVideo | sat/sgm/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/util.py | Apache-2.0 |
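The return line in this row is truncated mid-expression; the visible prefix suggests the standard `x[(...,) + (None,) * k]` trailing-axis idiom. A numpy sketch of the same helper (the completion of the elided tail is an assumption):

```python
import numpy as np

def append_dims(x: np.ndarray, target_dims: int) -> np.ndarray:
    """Append size-1 trailing axes until x has target_dims dimensions."""
    dims_to_append = target_dims - x.ndim
    if dims_to_append < 0:
        raise ValueError(
            f"input has {x.ndim} dims but target_dims is {target_dims}, which is less"
        )
    # (...,) keeps all existing axes; each None adds a new length-1 axis.
    return x[(...,) + (None,) * dims_to_append]
```

This is commonly used in diffusion code to broadcast per-sample scalars (e.g. sigmas) against image tensors.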
def get_configs_path() -> str:
"""
Get the `configs` directory.
For a working copy, this is the one in the root of the repository,
but for an installed copy, it's in the `sgm` package (see pyproject.toml).
"""
this_dir = os.path.dirname(__file__)
candidates = (
os.path.join(this_dir,... |
Get the `configs` directory.
For a working copy, this is the one in the root of the repository,
but for an installed copy, it's in the `sgm` package (see pyproject.toml).
| get_configs_path | python | THUDM/CogVideo | sat/sgm/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/util.py | Apache-2.0 |
def get_nested_attribute(obj, attribute_path, depth=None, return_key=False):
"""
Will return the result of a recursive get attribute call.
E.g.:
a.b.c
= getattr(getattr(a, "b"), "c")
= get_nested_attribute(a, "b.c")
If any part of the attribute call is an integer x with current o... |
Will return the result of a recursive get attribute call.
E.g.:
a.b.c
= getattr(getattr(a, "b"), "c")
= get_nested_attribute(a, "b.c")
If any part of the attribute call is an integer x with current obj a, will
try to call a[x] instead of a.x first.
| get_nested_attribute | python | THUDM/CogVideo | sat/sgm/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/util.py | Apache-2.0 |
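A stand-alone version matching the behavior the `get_nested_attribute` docstring describes (dotted path, numeric segments tried as `obj[x]` before `obj.x`); the `depth` and `return_key` options from the row are omitted:

```python
def get_nested_attribute(obj, attribute_path: str):
    """Resolve "b.c" as getattr(getattr(obj, "b"), "c"); a purely numeric
    segment x is first tried as obj[int(x)] before falling back to getattr."""
    for part in attribute_path.split("."):
        if part.isdigit():
            try:
                obj = obj[int(part)]
                continue
            except (TypeError, KeyError, IndexError):
                pass
        obj = getattr(obj, part)
    return obj
```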
def pytorch_worker_info(group=None): # sourcery skip: use-contextlib-suppress
"""Return node and worker info for PyTorch and some distributed environments."""
rank = 0
world_size = 1
worker = 0
num_workers = 1
try:
import torch.distributed
if torch.distributed.is_available() an... | Return node and worker info for PyTorch and some distributed environments. | pytorch_worker_info | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
def pytorch_worker_seed(group=None):
"""Compute a distinct, deterministic RNG seed for each worker and node."""
rank, world_size, worker, num_workers = pytorch_worker_info(group=group)
return rank * 1000 + worker | Compute a distinct, deterministic RNG seed for each worker and node. | pytorch_worker_seed | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
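The seed row composes the distributed rank and dataloader worker id into one deterministic seed. Reproduced stand-alone, with the implicit assumption made explicit: the formula only stays collision-free while each rank has fewer than 1000 workers:

```python
def pytorch_worker_seed(rank: int, worker: int) -> int:
    """Distinct deterministic seed per (rank, worker) pair, assuming fewer
    than 1000 dataloader workers per rank so the two fields cannot overlap."""
    return rank * 1000 + worker
```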
def tar_file_iterator_with_meta(
fileobj,
meta_names,
skip_meta=r"__[^/]*__($|/)",
suffix=None,
handler=reraise_exception,
meta_stream=None,
):
"""Iterate over tar file, yielding filename, content pairs for the given tar stream.
:param fileobj: byte stream suitable for tarfile
:para... | Iterate over tar file, yielding filename, content pairs for the given tar stream.
:param fileobj: byte stream suitable for tarfile
:param meta_names: key of different items in meta file
:param skip_meta: regexp for keys that are skipped entirely (Default value = r"__[^/]*__($|/)")
| tar_file_iterator_with_meta | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
def tar_file_expander_with_meta(data, meta_names, handler=reraise_exception):
"""Expand a stream of open tar files into a stream of tar file contents.
This returns an iterator over (filename, file_contents).
"""
for source in data:
url = source["url"]
try:
assert isinstance(... | Expand a stream of open tar files into a stream of tar file contents.
This returns an iterator over (filename, file_contents).
| tar_file_expander_with_meta | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
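`tar_file_expander_with_meta` turns open tar streams into (filename, contents) pairs. A minimal stdlib sketch of that inner iteration, without the meta-file merging or skip-regexp handling from the row:

```python
import io
import tarfile

def tar_file_iterator(fileobj):
    """Yield (member_name, raw_bytes) for every regular file in a tar stream.
    mode "r|*" reads sequentially, so this works on non-seekable streams too."""
    with tarfile.open(fileobj=fileobj, mode="r|*") as stream:
        for member in stream:
            if member.isfile():
                yield member.name, stream.extractfile(member).read()
```

In webdataset-style pipelines, consecutive members sharing a basename are then grouped into one training sample.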
def url_opener(
data,
handler,
**kw,
):
"""Open URLs and yield a stream of url+stream pairs.
Args:
data: iterator over dict(url=...)
handler: exception handler.
kw: keyword arguments for gopen.gopen.
Yields:
a stream of url+stream pairs.
"""
for sample i... | Open URLs and yield a stream of url+stream pairs.
Args:
data: iterator over dict(url=...)
handler: exception handler.
kw: keyword arguments for gopen.gopen.
Yields:
a stream of url+stream pairs.
| url_opener | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
def gopen_rclone(url, mode="rb", bufsize=1024 * 1024 * 32):
"""Open a URL with `curl`.
:param url: rclone url, e.g. data:bucket1/foo.tar. data should be configured.
:param mode: file mode
:param bufsize: buffer size
"""
url = url.replace("rclone://", "")
if mode[0] == "r":
cmd = f"r... | Open a URL with `curl`.
:param url: rclone url, e.g. data:bucket1/foo.tar. data should be configured.
:param mode: file mode
:param bufsize: buffer size
| gopen_rclone | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
def gopen_boto3(url, mode="rb", bufsize=8192 * 2):
"""Open a URL with boto3 API.
:param url: boto3 url, e.g. boto3://bucket1/foo.tar. data should be configured.
:param mode: file mode
:param bufsize: buffer size
"""
import boto3
# boto3.set_stream_logger('botocore', level='DEBUG')
if u... | Open a URL with boto3 API.
:param url: boto3 url, e.g. boto3://bucket1/foo.tar. data should be configured.
:param mode: file mode
:param bufsize: buffer size
| gopen_boto3 | python | THUDM/CogVideo | sat/sgm/webds.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/webds.py | Apache-2.0 |
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module |
Zero out the parameters of a module and return it.
| zero_module | python | THUDM/CogVideo | sat/sgm/modules/attention.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/attention.py | Apache-2.0 |
def get_timestep_embedding(timesteps, embedding_dim):
"""
This matches the implementation in Denoising Diffusion Probabilistic Models:
From Fairseq.
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attenti... |
This matches the implementation in Denoising Diffusion Probabilistic Models:
From Fairseq.
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attention Is All You Need".
| get_timestep_embedding | python | THUDM/CogVideo | sat/sgm/modules/cp_enc_dec.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/cp_enc_dec.py | Apache-2.0 |
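The sinusoidal timestep embedding the row above describes (the fairseq/DDPM variant, with a geometric frequency ladder), as a numpy sketch; for brevity it assumes `embedding_dim` is even and at least 4, skipping the odd-dimension zero-padding branch:

```python
import numpy as np

def timestep_embedding(timesteps: np.ndarray, embedding_dim: int) -> np.ndarray:
    """timesteps: (N,) -> (N, embedding_dim); sin half then cos half."""
    half = embedding_dim // 2
    # Frequencies decay geometrically from 1 down to 1/10000.
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / (half - 1))
    args = timesteps.astype(np.float64)[:, None] * freqs[None, :]
    return np.concatenate([np.sin(args), np.cos(args)], axis=1)
```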
def forward(self, fmap, cond: Tensor):
"""
notation
b - batch
n - convs
o - output
i - input
k - kernel
"""
b = fmap.shape[0]
# prepare weights for modulation
weights = self.weights
# do the modulation, demodulation, as... |
notation
b - batch
n - convs
o - output
i - input
k - kernel
| forward | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/magvit2_pytorch.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/magvit2_pytorch.py | Apache-2.0 |
def __init__(self, input_nc=3, ndf=64, n_layers=3, use_actnorm=False):
"""Construct a PatchGAN discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the last conv layer
n_layers (int) -- the nu... | Construct a PatchGAN discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the last conv layer
n_layers (int) -- the number of conv layers in the discriminator
norm_layer -- normalizat... | __init__ | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/lpips/model/model.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/lpips/model/model.py | Apache-2.0 |
def quantize(self, z: Tensor) -> Tensor:
"""Quantizes z, returns quantized zhat, same shape as z."""
quantized = round_ste(self.bound(z))
half_width = self._levels // 2 # Renormalize to [-1, 1].
return quantized / half_width | Quantizes z, returns quantized zhat, same shape as z. | quantize | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | Apache-2.0 |
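Finite scalar quantization (FSQ) rounds a bounded latent to one of `levels` uniform values per channel, then renormalizes to [-1, 1]. A numpy sketch of the forward math only: a plain clip stands in for the layer's smooth `bound`, and the straight-through gradient (`round_ste`) is omitted — both are assumptions of this simplification:

```python
import numpy as np

def fsq_quantize(z: np.ndarray, levels: int) -> np.ndarray:
    """Round each entry to one of `levels` uniform code values in [-1, 1]."""
    half_width = levels // 2
    bounded = np.clip(z, -1.0, 1.0) * half_width  # map to [-half_width, half_width]
    return np.round(bounded) / half_width         # snap to grid, renormalize
```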
def codes_to_indices(self, zhat: Tensor) -> Tensor:
"""Converts a `code` to an index in the codebook."""
assert zhat.shape[-1] == self.codebook_dim
zhat = self._scale_and_shift(zhat)
return (zhat * self._basis).sum(dim=-1).to(int32) | Converts a `code` to an index in the codebook. | codes_to_indices | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | Apache-2.0 |
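`codes_to_indices` reads per-channel codes as digits of a mixed-radix number, with the basis being the cumulative product of the level counts. A stand-alone numpy sketch of that conversion (the `_scale_and_shift` step is inlined as `z * hw + hw`):

```python
import numpy as np

def codes_to_indices(zhat: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """zhat: (..., C) codes in [-1, 1]; levels: (C,) level count per channel.
    Shift each channel back to digits in {0..L-1}, then combine mixed-radix."""
    half_width = levels // 2
    digits = (zhat * half_width + half_width).astype(np.int64)
    basis = np.concatenate([[1], np.cumprod(levels[:-1])])
    return (digits * basis).sum(axis=-1)
```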
def forward(self, z: Tensor) -> Tensor:
"""
einstein notation
b - batch
n - sequence (or flattened spatial dimensions)
d - feature dimension
c - number of codebook dim
"""
is_img_or_video = z.ndim >= 4
# standardize image or video into (batch, se... |
einstein notation
b - batch
n - sequence (or flattened spatial dimensions)
d - feature dimension
c - number of codebook dim
| forward | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/regularizers/finite_scalar_quantization.py | Apache-2.0 |
def forward(
self,
x,
inv_temperature=100.0,
return_loss_breakdown=False,
mask=None,
):
"""
einstein notation
b - batch
n - sequence (or flattened spatial dimensions)
d - feature dimension, which is also log2(codebook size)
c - ... |
einstein notation
b - batch
n - sequence (or flattened spatial dimensions)
d - feature dimension, which is also log2(codebook size)
c - number of codebook dim
| forward | python | THUDM/CogVideo | sat/sgm/modules/autoencoding/regularizers/lookup_free_quantization.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/autoencoding/regularizers/lookup_free_quantization.py | Apache-2.0 |
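Lookup-free quantization (LFQ) treats each latent dimension as one bit — quantize to sign, and read the positive bits as a binary codebook index. A numpy sketch of that core (the entropy/commit losses and masking from the row are omitted):

```python
import numpy as np

def lfq_quantize(z: np.ndarray):
    """Quantize each channel to ±1; channel i contributes 2**i to the index
    when positive. Returns (codes, indices)."""
    codes = np.where(z > 0, 1.0, -1.0)       # (..., D) in {-1, +1}
    bits = (z > 0).astype(np.int64)          # (..., D) in {0, 1}
    powers = 2 ** np.arange(z.shape[-1])
    return codes, (bits * powers).sum(axis=-1)
```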
def _find_children(
model,
search_class: List[Type[nn.Module]] = [nn.Linear],
):
"""
Find all modules of a certain class (or union of classes).
Returns all matching modules, along with the parent of those modules and the
names they are referenced by.
"""
# For each target find every li... |
Find all modules of a certain class (or union of classes).
Returns all matching modules, along with the parent of those modules and the
names they are referenced by.
| _find_children | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/lora.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/lora.py | Apache-2.0 |
def _find_modules_v2(
model,
ancestor_class: Optional[Set[str]] = None,
search_class: List[Type[nn.Module]] = [nn.Linear],
exclude_children_of: Optional[List[Type[nn.Module]]] = [
LoRACompatibleLinear,
LoRACompatibleConv,
LoRALinearLayer,
LoRAConv2dLayer,
],
):
""... |
Find all modules of a certain class (or union of classes) that are direct or
indirect descendants of other modules of a certain class (or union of classes).
Returns all matching modules, along with the parent of those modules and the
names they are referenced by.
| _find_modules_v2 | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/lora.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/lora.py | Apache-2.0 |
def forward(self, x, emb):
"""
Apply the module to `x` given `emb` timestep embeddings.
""" |
Apply the module to `x` given `emb` timestep embeddings.
| forward | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |
def count_flops_attn(model, _x, y):
"""
A counter for the `thop` package to count the operations in an
attention operation.
Meant to be used like:
macs, params = thop.profile(
model,
inputs=(inputs, timestamps),
custom_ops={QKVAttention: QKVAttention.count_flo... |
A counter for the `thop` package to count the operations in an
attention operation.
Meant to be used like:
macs, params = thop.profile(
model,
inputs=(inputs, timestamps),
custom_ops={QKVAttention: QKVAttention.count_flops},
)
| count_flops_attn | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |
def forward(self, qkv):
"""
Apply QKV attention.
:param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
"""
bs, width, length = qkv.shape
assert width % (3 * self.n_heads) == 0
ch = width // (3 ... |
Apply QKV attention.
:param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
| forward | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |
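The QKVAttention rows implement standard scaled dot-product attention over a fused `[N, H*3*C, T]` tensor. A numpy sketch of the same computation; the repo's trick of splitting the `1/sqrt(d)` scale between q and k is replaced here by applying it once to the logits:

```python
import numpy as np

def qkv_attention(qkv: np.ndarray, n_heads: int) -> np.ndarray:
    """qkv: (N, H*3*C, T) fused projections -> (N, H*C, T) attended values."""
    n, width, t = qkv.shape
    assert width % (3 * n_heads) == 0
    ch = width // (3 * n_heads)
    q, k, v = np.split(qkv, 3, axis=1)             # each (N, H*C, T)
    q = q.reshape(n * n_heads, ch, t)
    k = k.reshape(n * n_heads, ch, t)
    v = v.reshape(n * n_heads, ch, t)
    logits = np.einsum("bct,bcs->bts", q, k) / np.sqrt(ch)
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    out = np.einsum("bts,bcs->bct", w, v)          # weighted sum of values
    return out.reshape(n, n_heads * ch, t)
```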
def forward(self, qkv):
"""
Apply QKV attention.
:param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
"""
bs, width, length = qkv.shape
assert width % (3 * self.n_heads) == 0
ch = width // (3 ... |
Apply QKV attention.
:param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
| forward | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |
def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
"""
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:param context: conditioning plugged in via crossattn
:param y: an [N] Ten... |
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:param context: conditioning plugged in via crossattn
:param y: an [N] Tensor of labels, if class-conditional.
:return: an [N x C x ...] Tensor of ... | forward | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |