| repo_name (string, 7-71 chars) | file_path (string, 5-118 chars) | context (list) | import_statement (string, 45-12.5k chars) | token_num (int64, 641-99.4k) | cropped_code (string, 44-17k chars) | all_code (string, 43-754k chars) | next_line (string, 2-330 chars) | gold_snippet_index (int64, 0-68) | created_at (string, 25 chars) | level (string, 9 classes) |
|---|---|---|---|---|---|---|---|---|---|---|
boweniac/autogan | autogan/agents/universal_agent.py | [
{
"identifier": "AgentSwitch",
"path": "autogan/agents/agent_switch.py",
"snippet": "class AgentSwitch:\n def __init__(\n self,\n organizational_structure: List,\n task_tag: Optional[str] = \"/task\",\n opening_speaker: Optional[any] = None,\n de... | import re
from collections import defaultdict
from typing import Optional, Dict, Any
from autogan.agents.agent_switch import AgentSwitch
from autogan.utils.compressed_messages_utils import compressed_messages
from autogan.utils.compressed_text_utils import compressed_text_universal
from autogan.oai.config_utils import AgentConfig
from autogan.oai.count_tokens_utils import count_text_tokens
from autogan.oai.generate_utils import generate_chat_completion
from autogan.utils.environment_utils import environment_info
from autogan.utils.response import default_response_func
from termcolor import colored | 9,965 | return x
class UniversalAgent:
def __init__(
self,
name: str,
agent_config: Optional[Dict] = None,
duty: Optional[str] = None,
work_flow: Optional[str] = None,
use_tool: Optional[str] = None, # only | join
super_rich: Optional[str] = None, # auto | on | off
stream_mode: Optional[bool] = None,
):
"""Agent base class
Each agent can communicate with other agents in the current department and the leader of the subordinate department to complete tasks together.
每个 agent 可与当前部门的其他 agent 以及下级部门的 leader 沟通,协作完成任务。
To provide functions beyond the modeling capabilities for the agent, you can override the tool_function method.
想要为 agent 提供模型能力之外的功能,可以通过重写 tool_function 方法来实现。
:param name: The agent name should be unique in the organizational structure.
agent name 在组织架构中应当是唯一的。
:param agent_config: The agent configuration includes:
agent 配置包括:
- main_model: The LLM configuration of the agent's main body.
agent 主体的 LLM 配置。
- summary_model: The LLM configuration used for compressing context and generating text summaries.
用于压缩上下文以及生成文本摘要的 LLM 配置。
- request_interval_time: The interval time of LLM requests.
LLM 请求间隔时间。
- request_timeout: The timeout of LLM requests.
LLM 请求超时时间。
- max_retries: The maximum number of retries for LLM requests.
LLM 请求最大重试次数。
:param duty: Used to explain one's job responsibilities to other agents.
用于向其他 agent 说明自己的工作职责。
:param work_flow: Defines the workflow of the agent.
定义 agent 的工作流程。
:param use_tool: Defines the mode of the agent using the tool_function:
定义 agent 使用 tool_function 的模式:
- None: Do not use the tool function.
不使用工具函数。
- only: Do not use the LLM, only use the tool function to generate results.
不使用 LLM,仅使用工具函数生成结果。
- join: The content generated by the LLM will be used as the input parameter for the tool_function.
LLM 生成的内容将作为 tool_function 的输入参数
:param super_rich: Whether to enable the deep thought function. When enabled,
it uses a set of analysis processes to refine the output of the agent. However,
this can increase the number of tokens used, so it is not recommended for use with the gpt-4 model.
The name "super_rich" is a reminder that using this function with gpt-4 can be expensive,
even more so than Elon Musk's earning speed.
是否开启深思功能,开启后会使用一套分析流程来收敛 agent 的输出结果,但这样做会增加 tokens 的消耗,因此不建议在gpt-4模型下使用。
之所以这个参数叫 super_rich ,是为了提醒用户,如果在 gpt-4 下使用,其花钱的速度可能会超过马斯克赚钱的速度。
- auto: Disable for GPT-4, enable for other models
在 gpt-4下禁用,其他模型开启
- on: Always enabled
始终开启
- off: Always disabled
始终关闭
:param stream_mode: Whether to enable stream mode.
是否开启流模式。
"""
self.name = name
self.agent_config = AgentConfig(agent_config) if agent_config else None
self.duty = duty
self.super_rich = super_rich # auto | on | off
self.stream_mode = stream_mode
self.response_func = default_response_func # Used to return results to the interface or terminal.
self.workmates = "" # relevant personnel's name and duty
self.pipeline = "" # In a linear workflow, this is the next person to communicate with.
# Translate the session ID of the pusher into the sub-session ID of the receiver.
self.sub_to_main_task_id = defaultdict(str)
# Translate the session id of the sender into the superior session id of the receiver.
self.main_to_sub_task_id = defaultdict(str)
self._work_flow = work_flow
self._use_tool = use_tool # only | join
self._conversation_messages = defaultdict(list)  # key: task id, value: conversation history
self._conversation_focus = defaultdict(dict)  # key: task id, value: {"task_issuer": "", "task_content": ""}
def set_agent_config(self, agent_config: Dict):
self.agent_config = AgentConfig(agent_config)
def new_task(self, switch: AgentSwitch, task_id: str, sender_name: str, content: str,
completion_tokens: int):
"""Accept tasks posted by other agent.
:param switch: AgentSwitch object
:param task_id: New task id
:param sender_name: Task Issuer's Name
:param content: Task content
:param completion_tokens: Task content tokens
"""
# Avoid excessively long task content
if (self._use_tool != "only" and completion_tokens >
self.agent_config.main_model_config.max_messages_tokens * 0.5):
self._push_to_switch(switch, task_id, "The task is too long", 5)
# Cache task information to maintain focus during task execution
task_content = content.replace(f"@{self.name}", "please help me")
task_content = task_content.replace(f"{switch.task_tag}", "")
self._conversation_focus[task_id] = {'task_issuer': sender_name, 'task_content': task_content}
# Start the generation process
self._generate_process(switch, task_id, sender_name, content, completion_tokens)
def receive(self, switch: AgentSwitch, task_id: str, sender_name: str, content: str,
completion_tokens: int):
"""Receive messages sent by other agents (excluding new task requests)
:param switch: AgentSwitch object
:param task_id: Task id
:param sender_name: Name of the agent sending the message
:param content: Message content
:param completion_tokens: Message content tokens
"""
if self._use_tool != "only":
safe_size = self.agent_config.main_model_config.max_messages_tokens
if completion_tokens > safe_size:
# 如消息内容过长,则对其进行压缩
|
try:
    from termcolor import colored
except ImportError:
    def colored(x, *args, **kwargs):
        return x
| compressed_text, total_tokens = compressed_text_universal( | 2 | 2023-12-06 03:24:34+00:00 | 12k |
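The row that ends above follows the dataset's eleven-column schema. A minimal Python mirror of it can be sketched as below; the class name `RepoCompletionRow` and the abbreviated field values are my own, filled in from this first row, and are not part of any dataset tooling:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class RepoCompletionRow:
    """One row of the repo-level next-line completion dataset.

    `context` holds retrieved cross-file snippets; `gold_snippet_index`
    points at the snippet relevant to the ground-truth `next_line`.
    """
    repo_name: str            # e.g. "boweniac/autogan"
    file_path: str            # file being completed
    context: List[Dict]       # [{"identifier", "path", "snippet"}, ...]
    import_statement: str     # imports visible at the completion point
    token_num: int            # context size in tokens
    cropped_code: str         # code immediately preceding the target line
    all_code: str             # the full source file
    next_line: str            # ground-truth completion
    gold_snippet_index: int   # index into `context`
    created_at: str           # snapshot timestamp
    level: str                # difficulty bucket (one of 9 classes)


row = RepoCompletionRow(
    repo_name="boweniac/autogan",
    file_path="autogan/agents/universal_agent.py",
    context=[{"identifier": "AgentSwitch",
              "path": "autogan/agents/agent_switch.py",
              "snippet": "class AgentSwitch: ..."}],
    import_statement="import re",
    token_num=9965,
    cropped_code="...",
    all_code="...",
    next_line="compressed_text, total_tokens = compressed_text_universal(",
    gold_snippet_index=2,
    created_at="2023-12-06 03:24:34+00:00",
    level="12k",
)

# A model prediction is scored against `next_line`, e.g. by exact match:
pred = "compressed_text, total_tokens = compressed_text_universal("
print(pred.strip() == row.next_line.strip())  # -> True
```

The `cropped_code` and `all_code` fields are shown truncated here; in the actual rows they carry the file contents reproduced above.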
JingHao99/IDR-Ingredients-oriented-Degradation-Reformulation | inference.py | [
{
"identifier": "AverageMeter",
"path": "utils/metric_util.py",
"snippet": "class AverageMeter():\r\n \"\"\" Computes and stores the average and current value \"\"\"\r\n\r\n def __init__(self):\r\n self.reset()\r\n\r\n def reset(self):\r\n \"\"\" Reset all statistics \"\"\"\r\n ... | import argparse
import subprocess
import numpy as np
import os
import torch
import torch.nn as nn
import logging
from tqdm import tqdm
from PIL import Image
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from torchvision.transforms import ToPILImage, Compose, RandomCrop, ToTensor
from utils.metric_util import AverageMeter
from utils.tensor_op import save_img_tensor, save_image_tensor
from utils.util import mkdir, setup_logger
from utils.data_util import crop_HWC_img, random_augmentation, tensor2img
from metrics.psnr_ssim import compute_psnr_ssim, calculate_psnr, calculate_ssim
from models.archs.IDR_restormer_arch import IDR_restormer | 7,611 | y = np.zeros_like(x)
y[:,1:,:] += x_diffx
y[:,:-1,:] += x_diffx
y[1:,:,:] += x_diffy
y[:-1,:,:] += x_diffy
y = np.sum(y,2)/3
y /= 4
return y[:,:,None].astype(np.float32)
def __getitem__(self, idx):
degraded_path = self.ids[idx]
clean_path = self._get_gt_path(degraded_path)
degraded_img = crop_HWC_img(np.array(Image.open(degraded_path).convert('RGB')), base=32)
clean_img = crop_HWC_img(np.array(Image.open(clean_path).convert('RGB')), base=32)
clean_img, degraded_img = self.toTensor(clean_img), self.toTensor(degraded_img)
degraded_name = degraded_path.split('/')[-1][:-4]
return [degraded_name], degraded_img, clean_img
def __len__(self):
return self.length
def test_Denoise(net, dataset, task="CBSD68", sigma=15,save_img=True):
logger = logging.getLogger('base')
output_path = opt.output_path + 'denoise/' + str(sigma) + '/'
# subprocess.check_output(['mkdir', '-p', output_path])
mkdir(output_path)
dataset.set_dataset(task)
dataset.set_sigma(sigma)
testloader = DataLoader(dataset, batch_size=1, pin_memory=True, shuffle=False, num_workers=0)
psnr = AverageMeter()
ssim = AverageMeter()
with torch.no_grad():
for ([clean_name], degrad_patch, clean_patch) in tqdm(testloader):
degrad_patch, clean_patch = degrad_patch.cuda(), clean_patch.cuda()
restored = net(degrad_patch)
if type(restored) == list:
restored = restored[0]
temp_psnr, temp_ssim, N = compute_psnr_ssim(restored, clean_patch)
psnr.update(temp_psnr, N)
ssim.update(temp_ssim, N)
if save_img:
save_image_tensor(restored, output_path + clean_name[0] + '.png')
logger.info("Denoise sigma=%d: psnr: %.2f, ssim: %.4f" % (sigma, psnr.avg, ssim.avg))
def test_Derain_Dehaze(net, dataset, task="derain",save_img=True):
logger = logging.getLogger('base')
output_path = opt.output_path + task + '/'
# subprocess.check_output(['mkdir', '-p', output_path])
mkdir(output_path)
dataset.set_dataset(task)
testloader = DataLoader(dataset, batch_size=1, pin_memory=True, shuffle=False, num_workers=0)
psnr = AverageMeter()
ssim = AverageMeter()
with torch.no_grad():
for ([degraded_name], degrad_patch, clean_patch) in tqdm(testloader):
degrad_patch, clean_patch = degrad_patch.cuda(), clean_patch.cuda()
restored = net(degrad_patch)
if type(restored) == list:
restored = restored[0]
temp_psnr, temp_ssim, N = compute_psnr_ssim(restored, clean_patch)
N = degrad_patch.shape[0]
psnr.update(temp_psnr, N)
ssim.update(temp_ssim, N)
if save_img:
save_image_tensor(restored, output_path + degraded_name[0] + '.png')
logger.info("PSNR: %.2f, SSIM: %.4f" % (psnr.avg, ssim.avg))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Input Parameters
parser.add_argument('--cuda', type=int, default=0)
parser.add_argument('--mode', type=int, default=0,
help='0 for 5 tasks, 1 for denoising details, 2 for unknowing UDC')
parser.add_argument('--denoise_CBSD68_path', type=str, default="", help='save path of test noisy images')
parser.add_argument('--denoise_urban100_path', type=str, default="", help='save path of test noisy images')
parser.add_argument('--denoise_Kodak24_path', type=str, default="", help='save path of test noisy images')
parser.add_argument('--derain_path', type=str, default="", help='save path of test raining images')
parser.add_argument('--dehaze_path', type=str, default="", help='save path of test hazy images')
parser.add_argument('--deblur_path', type=str, default="", help='save path of test blur images')
parser.add_argument('--low_light_path', type=str, default="", help='save path of test low-light images')
parser.add_argument('--udc_T_path', type=str, default="", help='save path of test udc Toled images')
parser.add_argument('--udc_P_path', type=str, default="", help='save path of test udc Poled images')
parser.add_argument('--output_path', type=str, default="./results/visualization", help='output save path')
parser.add_argument('--ckpt_path', type=str, default="", help='checkpoint save path')
parser.add_argument('--log_path', type=str, default="./results/log", help='checkpoint save path')
opt = parser.parse_args()
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.set_device(opt.cuda)
denoise_set = DenoiseTestDataset(opt)
derain_set = DerainDehazeDataset(opt)
# Make network
net = IDR_restormer(inp_channels=3, out_channels=3, dim=24, num_blocks=[2,3,3,4], num_refinement_blocks=2, heads=[1,2,4,8], ffn_expansion_factor=2.66, bias=False, LayerNorm_type='WithBias', num_degra_queries = 24, keep_degra=48)
net = net.cuda()
net.eval()
net.load_state_dict(torch.load(opt.ckpt_path, map_location=torch.device(opt.cuda)))
|
class DenoiseTestDataset(Dataset):
def __init__(self, args, dataset="CBSD68"):
super(DenoiseTestDataset, self).__init__()
self.args = args
self.clean_ids = []
self.sigma = 15
self.dataset_dict = {'CBSD68': 0, 'urban100': 1, 'Kodak24':2}
self.set_dataset(dataset)
self.toTensor = ToTensor()
def _init_clean_ids(self):
if self.task_idx == 0:
self.clean_ids = []
name_list = os.listdir(self.args.denoise_CBSD68_path)
self.clean_ids += [self.args.denoise_CBSD68_path + id_ for id_ in name_list]
elif self.task_idx == 1:
self.clean_ids = []
name_list = os.listdir(self.args.denoise_urban100_path)
self.clean_ids += [self.args.denoise_urban100_path + id_ for id_ in name_list]
elif self.task_idx == 2:
self.clean_ids = []
name_list = os.listdir(self.args.denoise_Kodak24_path)
self.clean_ids += [self.args.denoise_Kodak24_path + id_ for id_ in name_list]
self.num_clean = len(self.clean_ids)
def set_dataset(self, dataset):
self.task_idx = self.dataset_dict[dataset]
self._init_clean_ids()
def _add_gaussian_noise(self, clean_patch):
noise = np.random.randn(*clean_patch.shape)
noisy_patch = np.clip(clean_patch + noise * self.sigma, 0, 255).astype(np.uint8)
return noisy_patch, clean_patch
def _edgeComputation(self,x):
x_diffx = np.abs(x[:,1:,:] - x[:,:-1,:])
x_diffy = np.abs(x[1:,:,:] - x[:-1,:,:])
y = np.zeros_like(x)
y[:,1:,:] += x_diffx
y[:,:-1,:] += x_diffx
y[1:,:,:] += x_diffy
y[:-1,:,:] += x_diffy
y = np.sum(y,2)/3
y /= 4
return y[:,:,None].astype(np.float32)
def set_sigma(self, sigma):
self.sigma = sigma
def __getitem__(self, clean_id):
clean_img = crop_HWC_img(np.array(Image.open(self.clean_ids[clean_id]).convert('RGB')), base=32)
clean_name = self.clean_ids[clean_id].split("/")[-1].split('.')[0]
noisy_img, _ = self._add_gaussian_noise(clean_img)
clean_img, noisy_img = self.toTensor(clean_img), self.toTensor(noisy_img)
return [clean_name], noisy_img, clean_img
def __len__(self):
return self.num_clean
class DerainDehazeDataset(Dataset):
def __init__(self, args, task="derain"):
super(DerainDehazeDataset, self).__init__()
self.ids = []
self.task_idx = 0
self.args = args
self.task_dict = {'derain': 0, 'dehaze': 1, 'deblur':2, 'low-light':3, 'UDC_T':4, 'UDC_P':5}
self.toTensor = ToTensor()
self.set_dataset(task)
def _init_input_ids(self):
if self.task_idx == 0:
self.ids = []
name_list = os.listdir(self.args.derain_path + 'input/')
self.ids += [self.args.derain_path + 'input/' + id_ for id_ in name_list]
elif self.task_idx == 1:
self.ids = []
name_list = os.listdir(self.args.dehaze_path + 'input/')
self.ids += [self.args.dehaze_path + 'input/' + id_ for id_ in name_list]
elif self.task_idx == 2:
self.ids = []
name_list = os.listdir(self.args.deblur_path + 'input/')
self.ids += [self.args.deblur_path + 'input/' + id_ for id_ in name_list]
elif self.task_idx == 3:
self.ids = []
name_list = os.listdir(self.args.low_light_path + 'input/')
self.ids += [self.args.low_light_path + 'input/' + id_ for id_ in name_list]
elif self.task_idx == 4:
self.ids = []
name_list = os.listdir(self.args.udc_T_path + 'input/')
self.ids += [self.args.udc_T_path + 'input/' + id_ for id_ in name_list]
elif self.task_idx == 5:
self.ids = []
name_list = os.listdir(self.args.udc_P_path + 'input/')
self.ids += [self.args.udc_P_path + 'input/' + id_ for id_ in name_list]
self.length = len(self.ids)
def _get_gt_path(self, degraded_name):
if self.task_idx == 0:
gt_name = degraded_name.replace("input", "target")
elif self.task_idx == 1:
dir_name = degraded_name.split("input")[0] + 'target/'
name = degraded_name.split('/')[-1].split('_')[0] + '.png'
gt_name = dir_name + name
elif self.task_idx == 2:
gt_name = degraded_name.replace("input", "target")
elif self.task_idx == 3:
gt_name = degraded_name.replace("input", "target")
elif self.task_idx == 4:
gt_name = degraded_name.replace("input", "target")
elif self.task_idx == 5:
gt_name = degraded_name.replace("input", "target")
return gt_name
def set_dataset(self, task):
self.task_idx = self.task_dict[task]
self._init_input_ids()
def _edgeComputation(self,x):
x_diffx = np.abs(x[:,1:,:] - x[:,:-1,:])
x_diffy = np.abs(x[1:,:,:] - x[:-1,:,:])
y = np.zeros_like(x)
y[:,1:,:] += x_diffx
y[:,:-1,:] += x_diffx
y[1:,:,:] += x_diffy
y[:-1,:,:] += x_diffy
y = np.sum(y,2)/3
y /= 4
return y[:,:,None].astype(np.float32)
| setup_logger('base', opt.log_path, level=logging.INFO, phase='test', screen=True, tofile=False) | 4 | 2023-12-07 10:58:34+00:00 | 12k |
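Both test datasets in the row above call `crop_HWC_img(..., base=32)` before converting images to tensors, so that height and width divide evenly by 32. The size arithmetic reduces to rounding each dimension down to the nearest multiple of the base; a sketch follows (the helper name `crop_to_base` is mine):

```python
def crop_to_base(h: int, w: int, base: int = 32) -> tuple:
    """Largest height/width not exceeding (h, w) that are exact
    multiples of `base` -- the sizes a base-32 crop produces."""
    return h - h % base, w - w % base


print(crop_to_base(481, 321))  # -> (480, 320)
print(crop_to_base(64, 100))   # -> (64, 96)
```

Cropping to a multiple of 32 matters here because the restoration network downsamples the input several times, and each spatial dimension must stay an integer through every stage.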
TACJu/Compositor | Compositor_Mask2Former/mask2former_video/data_video/ytvis_eval.py | [
{
"identifier": "YTVOS",
"path": "Compositor_Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvos.py",
"snippet": "class YTVOS:\n def __init__(self, annotation_file=None):\n \"\"\"\n Constructor of Microsoft COCO helper class for reading and visualizing annotations.\n ... | import contextlib
import copy
import io
import itertools
import json
import logging
import numpy as np
import os
import pycocotools.mask as mask_util
import torch
import detectron2.utils.comm as comm
from collections import OrderedDict
from .datasets.ytvis_api.ytvos import YTVOS
from .datasets.ytvis_api.ytvoseval import YTVOSeval
from tabulate import tabulate
from detectron2.config import CfgNode
from detectron2.data import MetadataCatalog
from detectron2.evaluation import DatasetEvaluator
from detectron2.utils.file_io import PathManager
from detectron2.utils.logger import create_small_table | 10,684 | # Copyright (c) Facebook, Inc. and its affiliates.
# Modified by Bowen Cheng from https://github.com/sukjunhwang/IFC
class YTVISEvaluator(DatasetEvaluator):
"""
Evaluate AR for object proposals, AP for instance detection/segmentation, AP
for keypoint detection outputs using COCO's metrics.
See http://cocodataset.org/#detection-eval and
http://cocodataset.org/#keypoints-eval to understand its metrics.
In addition to COCO, this evaluator is able to support any bounding box detection,
instance segmentation, or keypoint detection dataset.
"""
def __init__(
self,
dataset_name,
tasks=None,
distributed=True,
output_dir=None,
*,
use_fast_impl=True,
):
"""
Args:
dataset_name (str): name of the dataset to be evaluated.
It must have either the following corresponding metadata:
"json_file": the path to the COCO format annotation
Or it must be in detectron2's standard dataset format
so it can be converted to COCO format automatically.
tasks (tuple[str]): tasks that can be evaluated under the given
configuration. A task is one of "bbox", "segm", "keypoints".
By default, will infer this automatically from predictions.
distributed (True): if True, will collect results from all ranks and run evaluation
in the main process.
Otherwise, will only evaluate the results in the current process.
output_dir (str): optional, an output directory to dump all
results predicted on the dataset. The dump contains two files:
1. "instances_predictions.pth" a file in torch serialization
format that contains all the raw original predictions.
2. "coco_instances_results.json" a json file in COCO's result
format.
use_fast_impl (bool): use a fast but **unofficial** implementation to compute AP.
Although the results should be very close to the official implementation in COCO
API, it is still recommended to compute results with the official API for use in
papers. The faster implementation also uses more RAM.
"""
self._logger = logging.getLogger(__name__)
self._distributed = distributed
self._output_dir = output_dir
self._use_fast_impl = use_fast_impl
if tasks is not None and isinstance(tasks, CfgNode):
self._logger.warning(
"COCO Evaluator instantiated using config, this is deprecated behavior."
" Please pass in explicit arguments instead."
)
self._tasks = None  # Inferring it from predictions should be better
else:
self._tasks = tasks
self._cpu_device = torch.device("cpu")
self._metadata = MetadataCatalog.get(dataset_name)
json_file = PathManager.get_local_path(self._metadata.json_file)
with contextlib.redirect_stdout(io.StringIO()):
| self._ytvis_api = YTVOS(json_file) | 0 | 2023-12-12 11:49:28+00:00 | 12k |
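Each row in this dump ends with four pipe-separated metadata columns: `next_line`, `gold_snippet_index`, `created_at`, and `level`. Splitting such a tail can be sketched as below; `parse_row_tail` is a hypothetical helper, and it assumes `next_line` itself contains no `|` character:

```python
def parse_row_tail(tail: str):
    """Split the trailing '| next_line | gold_idx | created_at | level |'
    portion of a dataset row into typed fields."""
    parts = [p.strip() for p in tail.strip().strip("|").split("|")]
    next_line, gold_idx, created_at, level = parts
    return next_line, int(gold_idx), created_at, level


tail = "| self._ytvis_api = YTVOS(json_file) | 0 | 2023-12-12 11:49:28+00:00 | 12k |"
print(parse_row_tail(tail))
```

Real code lines frequently do contain `|` (bitwise-or, type unions), so a robust parser would need a stronger delimiter scheme; this sketch only covers tails like the ones shown above.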
# neu-spiral/multi-label-emg: multi_label_emg/train.py
import sys
import argparse

import numpy as np
import plotly.graph_objects as go
from loguru import logger
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity, KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVC
from multi_label_emg.data import load_data_dict
from multi_label_emg.models import AvgPairs, ElementwiseMaxPairs, ParallelA, ParallelB
from multi_label_emg.utils import (
NO_DIR_IDX,
NO_MOD_IDX,
RESULTS_DIR,
canonical_coords,
confusion_matrix,
str2bool,
)
def get_name(
subject: str,
seed: int,
parallel_model_type: str,
clf_name: str,
doubles_method: str,
fraction_doubles_per_class: float,
singles_method: str,
rel_fraction_singles_per_class: float,
include_doubles_in_train: bool,
feature_combine_type: str,
):
return "__".join(
[
f"subj={subject}",
f"seed={seed}",
f"par={parallel_model_type}",
f"clf={clf_name}",
f"doubles={doubles_method}",
f"frac_doubles={fraction_doubles_per_class}",
f"singles={singles_method}",
f"frac_singles={rel_fraction_singles_per_class}",
f"incl_doubles={include_doubles_in_train}",
f"feat_type={feature_combine_type}",
]
)
def plot_confusion_matrix(data: np.ndarray):
def make_text(cm):
text = []
for v in cm.flatten():
text.append(f"{round(v, 2)}")
return np.array(text).reshape(cm.shape)
coords, coords_str = canonical_coords()
text = make_text(data)
fig = go.Figure()
fig.update_layout(
# margin=margin,
xaxis=dict(
title="Predicted",
tickangle=-45,
tickmode="array",
ticktext=coords_str,
tickvals=list(range(len(coords_str))),
constrain="domain",
),
yaxis=dict(
title="Actual",
tickmode="array",
ticktext=coords_str,
tickvals=list(range(len(coords_str))),
autorange="reversed",
scaleanchor="x",
scaleratio=1,
constrain="domain",
),
)
fig.add_trace(
go.Heatmap(z=data, text=text, texttemplate="%{text}", zmin=0, zmax=1, colorscale="Blues", showscale=False)
)
return fig
def subset_doubles_uniform(
n_per_class: int, features_aug: np.ndarray, dir_labels_aug: np.ndarray, mod_labels_aug: np.ndarray
):
"""For each class, take n_per_class items uniformly at random"""
res_x, res_y_dir, res_y_mod = [], [], []
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
for d, m in np.unique(labels_2d, axis=0):
idx = np.where((labels_2d == (d, m)).all(-1))[0]
subset_idx = np.random.choice(idx, size=n_per_class, replace=False)
res_x.append(features_aug[subset_idx])
res_y_dir.append(dir_labels_aug[subset_idx])
res_y_mod.append(mod_labels_aug[subset_idx])
features_aug = np.concatenate(res_x)
dir_labels_aug = np.concatenate(res_y_dir)
mod_labels_aug = np.concatenate(res_y_mod)
return features_aug, dir_labels_aug, mod_labels_aug
def subset_doubles_near_mean(
n_per_class: int, features_aug: np.ndarray, dir_labels_aug: np.ndarray, mod_labels_aug: np.ndarray
):
"""For each class, take n_per_class items closest to the mean of these synthetic items"""
# Find class means
class_means = {}
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
for d, m in np.unique(labels_2d, axis=0):
idx = np.where((labels_2d == (d, m)).all(-1))[0]
class_means[(d, m)] = np.mean(features_aug[idx], axis=0)
# Subset each class by taking items closest to mean
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
class_mean = class_means[(d, m)]
idx = np.where((labels_2d == (d, m)).all(-1))[0]
dists = np.linalg.norm(features_aug[idx] - class_mean, axis=-1)
        # np.argsort also handles n_per_class == len(dists), where argpartition's kth would be out of bounds
        k_smallest_idx = np.argsort(dists)[:n_per_class]
subset_idx = idx[k_smallest_idx]
res_x.append(features_aug[subset_idx])
res_y_dir.append(dir_labels_aug[subset_idx])
res_y_mod.append(mod_labels_aug[subset_idx])
features_aug = np.concatenate(res_x)
dir_labels_aug = np.concatenate(res_y_dir)
mod_labels_aug = np.concatenate(res_y_mod)
return features_aug, dir_labels_aug, mod_labels_aug
def subset_doubles_spaced_quantiles(
n_per_class: int, features_aug: np.ndarray, dir_labels_aug: np.ndarray, mod_labels_aug: np.ndarray
):
"""For each class, rank items by their distance to the class mean,
and take items with ranks 1, K+1, 2K+1.
The spacing K will be approx (class_size / n_per_class)
"""
# Find class means
class_means = {}
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
for d, m in np.unique(labels_2d, axis=0):
idx = np.where((labels_2d == (d, m)).all(-1))[0]
class_means[(d, m)] = np.mean(features_aug[idx], axis=0)
# Subset each class by taking items closest to mean
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
class_mean = class_means[(d, m)]
idx = np.where((labels_2d == (d, m)).all(-1))[0]
dists = np.linalg.norm(features_aug[idx] - class_mean, axis=-1)
ranked_distances = np.argsort(dists)
spacing = int(np.floor(len(idx) / n_per_class))
# Since we use floor, we step slightly too little.
# In case this gives us extra items, we also truncate.
subset_idx = idx[ranked_distances[::spacing][:n_per_class]]
n_subset = len(subset_idx)
assert abs(n_subset - n_per_class) <= 1
res_x.append(features_aug[subset_idx])
res_y_dir.append(dir_labels_aug[subset_idx])
res_y_mod.append(mod_labels_aug[subset_idx])
features_aug = np.concatenate(res_x)
dir_labels_aug = np.concatenate(res_y_dir)
mod_labels_aug = np.concatenate(res_y_mod)
return features_aug, dir_labels_aug, mod_labels_aug
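# The spaced-quantile selection above (rank items by distance to the class mean,
# then take every K-th rank) can be illustrated on a toy 1-D class. This is a
# standalone sketch; the array below is made up and not part of this module:

```python
import numpy as np

x = np.arange(10, dtype=float)           # one toy class with 10 items
dists = np.abs(x - x.mean())             # distance of each item to the class mean
ranked_distances = np.argsort(dists)     # indices ordered closest -> farthest

n_per_class = 5
spacing = int(np.floor(len(x) / n_per_class))   # K = 2 for this class size
subset_idx = ranked_distances[::spacing][:n_per_class]

# We end up with exactly n_per_class distinct indices, spread across the distance ranks.
print(spacing, len(subset_idx))  # 2 5
```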
def subset_dir_mod(
method: str, fraction_doubles_per_class: float, features: np.ndarray, dir_labels: np.ndarray, mod_labels: np.ndarray
):
# Should have 1-hot vector labels
assert dir_labels.ndim == 2
assert mod_labels.ndim == 2
# check these are all singles
items_with_dir = dir_labels.argmax(-1) != NO_DIR_IDX
items_with_mod = mod_labels.argmax(-1) != NO_MOD_IDX
items_with_both = np.logical_and(items_with_dir, items_with_mod)
assert np.sum(items_with_both) == 0
labels_2d = np.stack([dir_labels.argmax(-1), mod_labels.argmax(-1)], axis=-1)
# Figure out how many items we have per class
# Then use fraction_doubles_per_class to figure out how many doubles we want
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
n_per_class = int(np.round(fraction_doubles_per_class * np.mean(class_sizes)))
n_per_class = min(n_per_class, np.min(class_sizes))
logger.info(f"Initial class sizes: {class_sizes}, n_per_class: {n_per_class}")
# For each class, fit a multivariate gaussian and sample the requested number of points
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
idx = np.where((labels_2d == (d, m)).all(-1))[0]
class_mean = np.mean(features[idx], axis=0)
if method == "subsetInput_uniform":
subset_idx = np.random.choice(idx, n_per_class, replace=False)
elif method == "subsetInput_near_mean":
dists = np.linalg.norm(features[idx] - class_mean, axis=-1)
ranked_distances = np.argsort(dists)
subset_idx = idx[ranked_distances[:n_per_class]]
elif method == "subsetInput_spaced_quantiles":
dists = np.linalg.norm(features[idx] - class_mean, axis=-1)
ranked_distances = np.argsort(dists)
spacing = int(np.floor(len(idx) / n_per_class))
# Since we use floor, we step slightly too little.
# In case this gives us extra items, we also truncate.
subset_idx = idx[ranked_distances[::spacing][:n_per_class]]
n_subset = len(subset_idx)
assert abs(n_subset - n_per_class) <= 1
res_x.append(features[subset_idx])
res_y_dir.append(dir_labels[subset_idx])
res_y_mod.append(mod_labels[subset_idx])
res_x = np.concatenate(res_x)
res_y_dir = np.concatenate(res_y_dir)
res_y_mod = np.concatenate(res_y_mod)
labels_2d = np.stack([res_y_dir.argmax(-1), res_y_mod.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Class sizes after subset: {class_sizes}")
return res_x, res_y_dir, res_y_mod
def get_augmented_doubles(
method: str,
feature_combine_type: str,
fraction_doubles_per_class: float,
features: np.ndarray,
dir_labels: np.ndarray,
mod_labels: np.ndarray,
):
if feature_combine_type == "avg":
aug = AvgPairs(-1)
elif feature_combine_type == "max":
aug = ElementwiseMaxPairs(-1)
else:
raise ValueError(f"Unknown feature_combine_type: {feature_combine_type}")
if method == "none":
logger.info("No synthetic doubles")
# We create nothing and return early
features_aug = np.empty((0, *features.shape[1:]))
dir_labels_aug = np.empty((0, *dir_labels.shape[1:]))
mod_labels_aug = np.empty((0, *mod_labels.shape[1:]))
return features_aug, dir_labels_aug, mod_labels_aug
if method.startswith("subsetInput"):
# NOTE - here, n_per_class means how many items in each INPUT class
# Do the subsetting before making combinations
logger.info("Subset before creating doubles")
features_subset, dir_labels_subset, mod_labels_subset = subset_dir_mod(
method, fraction_doubles_per_class, features, dir_labels, mod_labels
)
features_aug, dir_labels_aug, mod_labels_aug = aug(features_subset, dir_labels_subset, mod_labels_subset)
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Final synthetic double class sizes: {class_sizes}")
return features_aug, dir_labels_aug, mod_labels_aug
# Other methods create all combinations and THEN subset
# First, create all augmented items
logger.info("Subset after creating doubles")
features_aug, dir_labels_aug, mod_labels_aug = aug(features, dir_labels, mod_labels)
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Initial synthetic double class sizes: {class_sizes}")
# check these are all doubles
items_with_dir = dir_labels_aug.argmax(-1) != NO_DIR_IDX
items_with_mod = mod_labels_aug.argmax(-1) != NO_MOD_IDX
items_with_both = np.logical_and(items_with_dir, items_with_mod)
assert np.sum(items_with_both) == len(features_aug)
# Figure out how many items we want per class
n_per_class = int(np.round(fraction_doubles_per_class * np.mean(class_sizes)))
n_per_class = min(n_per_class, np.min(class_sizes))
# Then, subset as requested
if method == "all":
pass
elif method == "subset_uniform":
features_aug, dir_labels_aug, mod_labels_aug = subset_doubles_uniform(
n_per_class, features_aug, dir_labels_aug, mod_labels_aug
)
elif method == "subset_near_mean":
features_aug, dir_labels_aug, mod_labels_aug = subset_doubles_near_mean(
n_per_class, features_aug, dir_labels_aug, mod_labels_aug
)
elif method == "subset_spaced_quantiles":
features_aug, dir_labels_aug, mod_labels_aug = subset_doubles_spaced_quantiles(
n_per_class, features_aug, dir_labels_aug, mod_labels_aug
)
else:
raise ValueError(f"Unknown augmentation method: {method}")
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Final synthetic double class sizes: {class_sizes}")
return features_aug, dir_labels_aug, mod_labels_aug
def get_noise_simple(x, relative_std):
"""Add noise to x, where the noise standard deviation is relative_std * x.std()"""
return np.random.randn(*x.shape) * relative_std * x.std(0)
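# A quick numerical check of the helper above (illustrative only; the synthetic
# data and seed are made up): the returned noise has a per-feature standard
# deviation of roughly relative_std times the data's per-feature standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((100_000, 3)) * np.array([1.0, 2.0, 4.0])  # feature stds ~1, 2, 4

relative_std = 0.1
noise = np.random.randn(*x.shape) * relative_std * x.std(0)  # same formula as get_noise_simple

# Each feature's noise std lands near 0.1 * that feature's std.
print(np.round(noise.std(0) / x.std(0), 2))
```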
def balanced_sample_singles(features, dir_labels, mod_labels, n_per_class):
# Should have 1-hot vector labels
assert dir_labels.ndim == 2
assert mod_labels.ndim == 2
# check these are all singles
items_with_dir = dir_labels.argmax(-1) != NO_DIR_IDX
items_with_mod = mod_labels.argmax(-1) != NO_MOD_IDX
items_with_both = np.logical_and(items_with_dir, items_with_mod)
assert np.sum(items_with_both) == 0
labels_2d = np.stack([dir_labels.argmax(-1), mod_labels.argmax(-1)], axis=-1)
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
idx = np.where((labels_2d == (d, m)).all(-1))[0]
n_needed = n_per_class
selected_idx = []
while True:
if n_needed >= len(idx):
# Take all items in this class 1 more time
selected_idx.append(idx)
n_needed -= len(idx)
else:
# Take the remaining items randomly
selected_idx.append(np.random.choice(idx, n_needed, replace=False))
break
selected_idx = np.concatenate(selected_idx)
res_x.append(features[selected_idx])
res_y_dir.append(dir_labels[selected_idx])
res_y_mod.append(mod_labels[selected_idx])
return np.concatenate(res_x), np.concatenate(res_y_dir), np.concatenate(res_y_mod)
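# The while-loop above implements oversampling with wraparound: take the full
# class repeatedly while at least one more full pass is needed, then top up the
# remainder without replacement. A toy run (class size and target count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
idx = np.arange(4)       # a small class with 4 items
n_needed = 10            # but we want 10 samples from it

selected_idx = []
while True:
    if n_needed >= len(idx):
        selected_idx.append(idx)                                       # one full pass
        n_needed -= len(idx)
    else:
        selected_idx.append(rng.choice(idx, n_needed, replace=False))  # top up
        break
selected_idx = np.concatenate(selected_idx)

counts = np.bincount(selected_idx, minlength=len(idx))
print(len(selected_idx), counts.min(), counts.max())  # 10 2 3
```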
def sample_singles_gmm(features, dir_labels, mod_labels, n_per_class, n_components):
"""Fit a GMM to each class, then sample as requested"""
assert dir_labels.ndim == 2
assert mod_labels.ndim == 2
# check these are all singles
items_with_dir = dir_labels.argmax(-1) != NO_DIR_IDX
items_with_mod = mod_labels.argmax(-1) != NO_MOD_IDX
items_with_both = np.logical_and(items_with_dir, items_with_mod)
assert np.sum(items_with_both) == 0
labels_2d = np.stack([dir_labels.argmax(-1), mod_labels.argmax(-1)], axis=-1)
# For each class, fit a multivariate gaussian and sample the requested number of points
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
# NOTE - d and m are now integer values. We need to convert them to 1-hot vectors for the output
d_onehot = np.zeros(dir_labels.shape[1])
d_onehot[d] = 1
m_onehot = np.zeros(mod_labels.shape[1])
m_onehot[m] = 1
idx = np.where((labels_2d == (d, m)).all(-1))[0]
gmm = GaussianMixture(n_components=n_components)
gmm.fit(features[idx])
res_x.append(gmm.sample(n_per_class)[0])
res_y_dir.append(np.tile(d_onehot, (n_per_class, 1)))
res_y_mod.append(np.tile(m_onehot, (n_per_class, 1)))
return np.concatenate(res_x), np.concatenate(res_y_dir), np.concatenate(res_y_mod)
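# In isolation, the per-class recipe above is: fit a GaussianMixture on one
# class's features, sample synthetic points, and attach that class's one-hot
# label to every sample. A self-contained sketch on fabricated 2-D data (the
# class location and label index are made up):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
class_features = rng.normal(loc=[5.0, -3.0], scale=0.5, size=(200, 2))  # one class

gmm = GaussianMixture(n_components=1, random_state=0).fit(class_features)
synthetic, _ = gmm.sample(50)            # 50 synthetic feature vectors

onehot = np.zeros(4)
onehot[2] = 1.0                          # pretend this class is direction index 2
dir_labels = np.tile(onehot, (len(synthetic), 1))

print(synthetic.shape, dir_labels.shape)  # (50, 2) (50, 4)
```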
def sample_singles_kde(features, dir_labels, mod_labels, n_per_class, bandwidth):
"""Fit a GMM to each class, then sample as requested"""
assert dir_labels.ndim == 2
assert mod_labels.ndim == 2
# check these are all singles
items_with_dir = dir_labels.argmax(-1) != NO_DIR_IDX
items_with_mod = mod_labels.argmax(-1) != NO_MOD_IDX
items_with_both = np.logical_and(items_with_dir, items_with_mod)
assert np.sum(items_with_both) == 0
labels_2d = np.stack([dir_labels.argmax(-1), mod_labels.argmax(-1)], axis=-1)
# For each class, fit a multivariate gaussian and sample the requested number of points
res_x, res_y_dir, res_y_mod = [], [], []
for d, m in np.unique(labels_2d, axis=0):
# NOTE - d and m are now integer values. We need to convert them to 1-hot vectors for the output
d_onehot = np.zeros(dir_labels.shape[1])
d_onehot[d] = 1
m_onehot = np.zeros(mod_labels.shape[1])
m_onehot[m] = 1
idx = np.where((labels_2d == (d, m)).all(-1))[0]
kde = KernelDensity(bandwidth=bandwidth)
kde.fit(features[idx])
res_x.append(kde.sample(n_per_class))
res_y_dir.append(np.tile(d_onehot, (n_per_class, 1)))
res_y_mod.append(np.tile(m_onehot, (n_per_class, 1)))
return np.concatenate(res_x), np.concatenate(res_y_dir), np.concatenate(res_y_mod)
def get_augmented_singles(
method: str, n_per_class: int, features: np.ndarray, dir_labels: np.ndarray, mod_labels: np.ndarray
):
if method == "none":
logger.info("No augmented singles")
# Return empties so we can just concatenate and not worry about it
features_aug = np.empty((0, *features.shape[1:]))
dir_labels_aug = np.empty((0, *dir_labels.shape[1:]))
mod_labels_aug = np.empty((0, *mod_labels.shape[1:]))
return features_aug, dir_labels_aug, mod_labels_aug
logger.info(f"Augmenting singles with method {method}")
if method.startswith("add-gaussian"):
# First, choose a subset of items according to n_per_class
features, dir_labels_aug, mod_labels_aug = balanced_sample_singles(
features, dir_labels, mod_labels, n_per_class
)
if method == "add-gaussian-0.05":
factor = 0.05
elif method == "add-gaussian-0.1":
factor = 0.1
elif method == "add-gaussian-0.2":
factor = 0.2
elif method == "add-gaussian-0.3":
factor = 0.3
elif method == "add-gaussian-0.4":
factor = 0.4
elif method == "add-gaussian-0.5":
factor = 0.5
elif method == "add-gaussian-0.6":
factor = 0.6
else:
raise ValueError(f"Unknown gaussian factor: {method}")
features_aug = features + get_noise_simple(features, factor)
elif method.startswith("fit-gmm"):
if method == "fit-gmm-1":
nc = 1
elif method == "fit-gmm-3":
nc = 3
elif method == "fit-gmm-5":
nc = 5
elif method == "fit-gmm-10":
nc = 10
features_aug, dir_labels_aug, mod_labels_aug = sample_singles_gmm(
features, dir_labels, mod_labels, n_per_class, n_components=nc
)
elif method.startswith("fit-kde"):
if method == "fit-kde-gaussian-scott":
bandwidth = "scott"
if method == "fit-kde-gaussian-silverman":
bandwidth = "silverman"
if method == "fit-kde-gaussian-0.01":
bandwidth = 0.01
if method == "fit-kde-gaussian-0.1":
bandwidth = 0.1
if method == "fit-kde-gaussian-1.0":
bandwidth = 1.0
if method == "fit-kde-gaussian-10.0":
bandwidth = 10.0
features_aug, dir_labels_aug, mod_labels_aug = sample_singles_kde(
features, dir_labels, mod_labels, n_per_class, bandwidth=bandwidth
)
else:
raise NotImplementedError()
labels_2d = np.stack([dir_labels_aug.argmax(-1), mod_labels_aug.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Augmented singles class sizes: {class_sizes}")
return features_aug, dir_labels_aug, mod_labels_aug
def get_clf(name: str, num_classes: int):
if name == "mlp":
return make_pipeline(
RobustScaler(), MLPClassifier(hidden_layer_sizes=[100, 100, 100], early_stopping=True, max_iter=200)
)
elif name == "logr":
return make_pipeline(RobustScaler(), LogisticRegression(class_weight="balanced", max_iter=2000, n_jobs=-1))
elif name == "svc":
return make_pipeline(RobustScaler(), SVC(class_weight="balanced", probability=True))
elif name == "rf":
return make_pipeline(RobustScaler(), RandomForestClassifier(class_weight="balanced", n_jobs=-1))
elif name == "knn":
return make_pipeline(RobustScaler(), KNeighborsClassifier())
elif name == "lda":
return make_pipeline(RobustScaler(), LinearDiscriminantAnalysis())
elif name == "gbc":
return make_pipeline(RobustScaler(), GradientBoostingClassifier())
else:
raise ValueError(f"Unknown model name: {name}")
def balance_classes(train_features, train_dir_labels, train_mod_labels):
# Subsample the "Rest" class since it will be overrepresented
assert train_dir_labels.ndim == 2
assert train_mod_labels.ndim == 2
labels_2d = np.stack([train_dir_labels.argmax(-1), train_mod_labels.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Before pruning 'Rest' items, class sizes: {class_sizes}")
rest_idx = np.where((labels_2d == [NO_DIR_IDX, NO_MOD_IDX]).all(-1))[0]
active_idx = np.where((labels_2d != [NO_DIR_IDX, NO_MOD_IDX]).any(-1))[0]
active_counts = np.unique(labels_2d[active_idx], axis=0, return_counts=True)[-1]
avg_n_active = int(np.mean(active_counts))
subset_rest_idx = np.random.choice(rest_idx, avg_n_active, replace=False)
res_x = np.concatenate((train_features[active_idx], train_features[subset_rest_idx]))
res_y_dir = np.concatenate((train_dir_labels[active_idx], train_dir_labels[subset_rest_idx]))
res_y_mod = np.concatenate((train_mod_labels[active_idx], train_mod_labels[subset_rest_idx]))
res_labels_2d = np.stack([res_y_dir.argmax(-1), res_y_mod.argmax(-1)], axis=-1)
res_class_sizes = np.unique(res_labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"After pruning 'Rest' items, class sizes: {res_class_sizes}")
return res_x, res_y_dir, res_y_mod
def remove_double_gestures(train_features, train_dir_labels, train_mod_labels):
labels_2d = np.stack([train_dir_labels.argmax(-1), train_mod_labels.argmax(-1)], axis=-1)
class_sizes = np.unique(labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"Before removing double gestures, class sizes: {class_sizes}")
items_with_dir = train_dir_labels.argmax(-1) != NO_DIR_IDX
items_with_mod = train_mod_labels.argmax(-1) != NO_MOD_IDX
# Remove items with both direction and modifier
singles_idx = ~np.logical_and(items_with_dir, items_with_mod)
res_features = train_features[singles_idx]
res_dir_labels = train_dir_labels[singles_idx]
res_mod_labels = train_mod_labels[singles_idx]
res_labels_2d = np.stack([res_dir_labels.argmax(-1), res_mod_labels.argmax(-1)], axis=-1)
res_class_sizes = np.unique(res_labels_2d, axis=0, return_counts=True)[-1]
logger.info(f"After removing double gestures, class sizes: {res_class_sizes}")
return res_features, res_dir_labels, res_mod_labels
@logger.catch(onerror=lambda _: sys.exit(1))
def run_training(
subject: str,
parallel_model_type: str,
clf_name: str,
doubles_method: str,
fraction_doubles_per_class: float,
singles_method: str,
rel_fraction_singles_per_class: float,
include_doubles_in_train: bool,
feature_combine_type: str,
):
# We don't want to modify code in the gest module itself.
# Thus, we'll do augmentation manually here, and tell the model not to do
# any further augmentation.
# Load train data
data_dict = load_data_dict()
try:
data = data_dict[subject]
except KeyError:
raise ValueError(f"Unknown subject: {subject}")
train_features = data["Calibration_features"]
train_dir_labels = data["Calibration_dir_labels"]
train_mod_labels = data["Calibration_mod_labels"]
# First, reduce amount of "Rest" items in train set
train_features, train_dir_labels, train_mod_labels = balance_classes(
train_features, train_dir_labels, train_mod_labels
)
    # Remove any double gestures that occurred due to bad participant behavior
train_features, train_dir_labels, train_mod_labels = remove_double_gestures(
train_features, train_dir_labels, train_mod_labels
)
# NOTE - we use HoldPulse1_NoFeedback and SimultaneousPulse1_NoFeedback for train set in the "upper bound"
# otherwise, these blocks are not used
# Load test data
if include_doubles_in_train:
# We use blocks 1 and 2 of the "NoFeedBack" portion of experiment
# Double check that we're not using augmentation
assert doubles_method == "none"
assert singles_method == "none"
# Add real combos to train set
train_features = np.concatenate(
[
train_features,
data["HoldPulse1_NoFeedBack_features"],
data["SimultaneousPulse1_NoFeedBack_features"],
data["HoldPulse2_NoFeedBack_features"],
data["SimultaneousPulse2_NoFeedBack_features"],
]
)
train_dir_labels = np.concatenate(
[
train_dir_labels,
data["HoldPulse1_NoFeedBack_dir_labels"],
data["SimultaneousPulse1_NoFeedBack_dir_labels"],
data["HoldPulse2_NoFeedBack_dir_labels"],
data["SimultaneousPulse2_NoFeedBack_dir_labels"],
]
)
train_mod_labels = np.concatenate(
[
train_mod_labels,
data["HoldPulse1_NoFeedBack_mod_labels"],
data["SimultaneousPulse1_NoFeedBack_mod_labels"],
data["HoldPulse2_NoFeedBack_mod_labels"],
data["SimultaneousPulse2_NoFeedBack_mod_labels"],
]
)
logger.info(f"Initial train set: {train_features.shape=}, {train_dir_labels.shape=}, {train_mod_labels.shape=}")
# Don't use "Feedback" blocks for this analysis
test_blocks = ["HoldPulse3_NoFeedBack", "SimultaneousPulse3_NoFeedBack"]
test_features = np.concatenate([data[f"{block}_features"] for block in test_blocks])
test_dir_labels = np.concatenate([data[f"{block}_dir_labels"] for block in test_blocks])
test_mod_labels = np.concatenate([data[f"{block}_mod_labels"] for block in test_blocks])
logger.info(f"test set: {test_features.shape=}, {test_dir_labels.shape=}, {test_mod_labels.shape=}")
# Vary strategy for augmented doubles
double_features_aug, double_dir_labels_aug, double_mod_labels_aug = get_augmented_doubles(
doubles_method,
feature_combine_type,
fraction_doubles_per_class,
train_features,
train_dir_labels,
train_mod_labels,
)
# Make augmented singles
# Figure out how many doubles per class. Take avg and then apply rel_fraction_singles_per_class to
# get the number of singles per class
n_singles_per_class = 0
if singles_method != "none":
doubles_labels_2d = np.stack((double_dir_labels_aug.argmax(-1), double_mod_labels_aug.argmax(-1)), axis=-1)
class_sizes = np.unique(doubles_labels_2d, axis=0, return_counts=True)[-1]
n_singles_per_class = int(np.round(np.mean(class_sizes) * rel_fraction_singles_per_class))
single_features_aug, single_dir_labels_aug, single_mod_labels_aug = get_augmented_singles(
singles_method, n_singles_per_class, train_features, train_dir_labels, train_mod_labels
)
# Merge all train data
train_features = np.concatenate([train_features, double_features_aug, single_features_aug])
train_dir_labels = np.concatenate([train_dir_labels, double_dir_labels_aug, single_dir_labels_aug])
train_mod_labels = np.concatenate([train_mod_labels, double_mod_labels_aug, single_mod_labels_aug])
logger.info(f"Augmented train set: {train_features.shape=}, {train_dir_labels.shape=}, {train_mod_labels.shape=}")
# Create model
if parallel_model_type == "ParallelA":
model = ParallelA(
get_clf(clf_name, num_classes=5),
get_clf(clf_name, num_classes=3),
use_augmentation=False,
include_rest_data_for_clf=True,
)
elif parallel_model_type == "ParallelB":
model = ParallelB(
dir_clf=get_clf(clf_name, num_classes=4),
mod_clf=get_clf(clf_name, num_classes=2),
has_dir_clf=get_clf(clf_name, num_classes=2),
has_mod_clf=get_clf(clf_name, num_classes=2),
use_augmentation=False,
# include_rest_data_for_clf=True, # NOTE - always using true, flag is not in model
)
elif parallel_model_type == "SerialControl":
model = get_clf(clf_name, num_classes=15)
else:
raise ValueError(f"Unknown parallel model type: {parallel_model_type}")
# Train
logger.info("Train...")
if parallel_model_type == "SerialControl":
# Convert labels to integer by making 2-digit numbers,
# where the 10s place is the dir label and the 1s place is the mod label
train_labels = train_dir_labels.argmax(-1) * 10 + train_mod_labels.argmax(-1)
model.fit(train_features, train_labels)
else:
model.fit(train_features, train_dir_labels, train_mod_labels)
# Evaluate
logger.info("Evaluate")
if parallel_model_type == "SerialControl":
combined_preds = model.predict(test_features)
dir_preds = combined_preds // 10
mod_preds = combined_preds % 10
else:
dir_preds, mod_preds = model.predict(test_features)
preds_2d = np.stack([dir_preds, mod_preds], axis=-1)
    true_labels_2d = np.stack([test_dir_labels.argmax(-1), test_mod_labels.argmax(-1)], axis=-1)
    return confusion_matrix(true_labels_2d, preds_2d)
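# The SerialControl branch in run_training packs each (direction, modifier) pair
# into one two-digit integer (tens digit = direction index, ones digit = modifier
# index) and unpacks with // 10 and % 10. The round trip is exact whenever the
# modifier index stays below 10; a standalone check with made-up indices:

```python
import numpy as np

dir_idx = np.array([0, 1, 4, 2])   # example direction indices
mod_idx = np.array([2, 0, 1, 2])   # example modifier indices (< 10)

combined = dir_idx * 10 + mod_idx  # e.g. (4, 1) -> 41
decoded_dir = combined // 10
decoded_mod = combined % 10

print(np.array_equal(decoded_dir, dir_idx), np.array_equal(decoded_mod, mod_idx))  # True True
```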
# ebb-earl-co/tidal-wave: tidal_wave/playlist.py
from dataclasses import dataclass
from pathlib import Path
from types import SimpleNamespace
from typing import Dict, List, Optional, Set, Tuple, Union
from requests import HTTPError, Session
from .media import AudioFormat
from .models import (
PlaylistsEndpointResponseJSON,
TracksEndpointResponseJSON,
VideosEndpointResponseJSON,
)
from .requesting import request_playlists
from .track import Track
from .utils import download_cover_image, temporary_file, TIDAL_API_URL
from .video import Video
import json
import logging
import math
import shutil
import sys
import ffmpeg
import mutagen | 9,893 |
logger = logging.getLogger(__name__)
@dataclass
class Playlist:
playlist_id: str # UUID4
def __post_init__(self):
self.playlist_dir: Optional[Path] = None
self.playlist_cover_saved: bool = False
def get_metadata(self, session: Session):
"""Request from TIDAL API /playlists endpoint"""
self.metadata: Optional[PlaylistsEndpointResponseJSON] = request_playlists(
session=session, identifier=self.playlist_id
)
if self.metadata is None:
return
self.name = (
self.metadata.title.replace("/", "_")
.replace("|", "_")
.replace(":", " -")
.replace('"', "")
.replace("..", "")
)
def set_items(self, session: Session):
"""Uses data from TIDAL API /playlists/items endpoint to
populate self.items"""
playlist_items: Optional[PlaylistsItemsResponseJSON] = get_playlist(
session=session, playlist_id=self.playlist_id
)
if playlist_items is None:
self.items = tuple()
else:
self.items: Tuple[Optional[PlaylistItem]] = tuple(playlist_items.items)
def set_dir(self, out_dir: Path):
"""Populates self.playlist_dir based on self.name, self.playlist_id"""
playlist_substring: str = f"{self.name} [{self.playlist_id}]"
self.playlist_dir: Path = out_dir / "Playlists" / playlist_substring
self.playlist_dir.mkdir(parents=True, exist_ok=True)
def save_cover_image(self, session: Session, out_dir: Path):
"""Requests self.metadata.image and attempts to write it to disk"""
if self.playlist_dir is None:
self.set_dir(out_dir=out_dir)
self.cover_path: Path = self.playlist_dir / "cover.jpg"
if not self.cover_path.exists():
download_cover_image(
session=session,
cover_uuid=self.metadata.square_image,
output_dir=self.playlist_dir,
dimension=1080,
)
else:
self.playlist_cover_saved = True
def save_description(self):
"""Requests self.metadata.description and attempts to write it to disk"""
description_path: Path = self.playlist_dir / "PlaylistDescription.txt"
if self.metadata.description is not None and len(self.metadata.description) > 0:
if not description_path.exists():
description_path.write_text(f"{self.metadata.description}\n")
def get_items(self, session: Session, audio_format: AudioFormat):
"""Using either Track.get() or Video.get(), attempt to request
the data for each track or video in self.items"""
if len(self.items) == 0:
return
tracks_videos: list = [None] * len(self.items)
for i, item in enumerate(self.items):
if item is None:
tracks_videos[i] = None
continue
elif isinstance(item, TracksEndpointResponseJSON):
track: Track = Track(track_id=item.id)
track.get(
session=session,
audio_format=audio_format,
out_dir=self.playlist_dir,
metadata=item,
)
tracks_videos[i] = track
|
| elif isinstance(item, VideosEndpointResponseJSON): | 3 | 2023-12-12 21:50:25+00:00 | 12k |
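`Playlist.get_metadata` above derives a filesystem-safe directory name from the playlist title through a chain of `str.replace` calls. The same chain, pulled out into a standalone helper for illustration (`sanitize` is a hypothetical name, not part of tidal-wave):

```python
def sanitize(title: str) -> str:
    """Mirror the replace chain in Playlist.get_metadata: strip characters
    that are unsafe in file and directory names on common filesystems."""
    return (
        title.replace("/", "_")
        .replace("|", "_")
        .replace(":", " -")
        .replace('"', "")
        .replace("..", "")
    )

name = sanitize("My: Mix/01")       # 'My - Mix_01'
other = sanitize("A|B..")           # 'A_B'
```

Note the replacements run in order, so `..` removal happens last and cannot reintroduce a slash or pipe.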
lbcb-sci/GNNome | train.py | [
{
"identifier": "AssemblyGraphDataset",
"path": "graph_dataset.py",
"snippet": "class AssemblyGraphDataset(DGLDataset):\n def __init__(self, root, assembler, threads=32, generate=False):\n self.root = os.path.abspath(root)\n self.assembler = assembler\n self.threads = threads\n ... | import argparse
import copy
import os
import pickle
import random
import re
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import dgl
import wandb
import evaluate
import models
import utils
from datetime import datetime
from tqdm import tqdm
from torch.nn.functional import kl_div
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.profiler import profile, record_function, ProfilerActivity
from dgl.dataloading import GraphDataLoader
from graph_dataset import AssemblyGraphDataset
from hyperparameters import get_hyperparameters
from config import get_config
from inference import inference | 7,355 | edge_labels = g.edata['y'][sub_g.edata['_ID']].to(device)
loss = criterion(edge_predictions, edge_labels)
TP, TN, FP, FN = utils.calculate_tfpn(edge_predictions, edge_labels)
acc, precision, recall, f1 = utils.calculate_metrics(TP, TN, FP, FN)
acc_inv, precision_inv, recall_inv, f1_inv = utils.calculate_metrics_inverse(TP, TN, FP, FN)
try:
fp_rate = FP / (FP + TN)
except ZeroDivisionError:
fp_rate = 0.0
try:
fn_rate = FN / (FN + TP)
except ZeroDivisionError:
fn_rate = 0.0
# Append results of a single mini-batch / METIS partition
# These are used for epoch mean = mean over partitions over graphs - mostly DEPRECATED
running_loss.append(loss.item())
running_fp_rate.append(fp_rate)
running_fn_rate.append(fn_rate)
running_acc.append(acc)
running_precision.append(precision)
running_recall.append(recall)
running_f1.append(f1)
# These are used for epoch mean = mean over all the partitions in all the graphs
valid_loss_epoch.append(loss.item())
valid_fp_rate_epoch.append(fp_rate)
valid_fn_rate_epoch.append(fn_rate)
valid_acc_epoch.append(acc)
valid_precision_epoch.append(precision)
valid_recall_epoch.append(recall)
valid_f1_epoch.append(f1)
                    # Inverse metrics, since F1 and related metrics are uninformative on datasets with mostly positive labels
valid_acc_inv_epoch.append(acc_inv)
valid_precision_inv_epoch.append(precision_inv)
valid_recall_inv_epoch.append(recall_inv)
valid_f1_inv_epoch.append(f1_inv)
# Average over all mini-batches (partitions) in a single graph - mostly DEPRECATED
val_loss = np.mean(running_loss)
val_fp_rate = np.mean(running_fp_rate)
val_fn_rate = np.mean(running_fn_rate)
val_acc = np.mean(running_acc)
val_precision = np.mean(running_precision)
val_recall = np.mean(running_recall)
val_f1 = np.mean(running_f1)
# elapsed = utils.timedelta_to_str(datetime.now() - time_start_eval)
# print(f'\nVALIDATION (one validation graph): Epoch = {epoch}, Graph = {idx}')
# print(f'Loss: {val_loss:.4f}, fp_rate(GT=0): {val_fp_rate:.4f}, fn_rate(GT=1): {val_fn_rate:.4f}')
# print(f'elapsed time: {elapsed}\n\n')
# Record after each graph in the dataset - mostly DEPRECATED
val_loss_all_graphs.append(val_loss)
val_fp_rate_all_graphs.append(val_fp_rate)
val_fn_rate_all_graphs.append(val_fn_rate)
val_acc_all_graphs.append(val_acc)
val_precision_all_graphs.append(val_precision)
val_recall_all_graphs.append(val_recall)
val_f1_all_graphs.append(val_f1)
# Average over all the training graphs in one epoch - mostly DEPRECATED
val_loss_all_graphs = np.mean(val_loss_all_graphs)
val_fp_rate_all_graphs = np.mean(val_fp_rate_all_graphs)
val_fn_rate_all_graphs = np.mean(val_fn_rate_all_graphs)
val_acc_all_graphs = np.mean(val_acc_all_graphs)
val_precision_all_graphs = np.mean(val_precision_all_graphs)
val_recall_all_graphs = np.mean(val_recall_all_graphs)
val_f1_all_graphs = np.mean(val_f1_all_graphs)
# Average over all the partitions in one epoch
valid_loss_epoch = np.mean(valid_loss_epoch)
valid_fp_rate_epoch = np.mean(valid_fp_rate_epoch)
valid_fn_rate_epoch = np.mean(valid_fn_rate_epoch)
valid_acc_epoch = np.mean(valid_acc_epoch)
valid_precision_epoch = np.mean(valid_precision_epoch)
valid_recall_epoch = np.mean(valid_recall_epoch)
valid_f1_epoch = np.mean(valid_f1_epoch)
valid_acc_inv_epoch = np.mean(valid_acc_inv_epoch)
valid_precision_inv_epoch = np.mean(valid_precision_inv_epoch)
valid_recall_inv_epoch = np.mean(valid_recall_inv_epoch)
valid_f1_inv_epoch = np.mean(valid_f1_inv_epoch)
loss_per_epoch_valid.append(valid_loss_epoch)
f1_inv_per_epoch_valid.append(valid_f1_inv_epoch)
elapsed = utils.timedelta_to_str(datetime.now() - time_start)
print(f'\n==> VALIDATION (all validation graphs): Epoch = {epoch}')
print(f'Loss: {valid_loss_epoch:.4f}, fp_rate(GT=0): {valid_fp_rate_epoch:.4f}, fn_rate(GT=1): {valid_fn_rate_epoch:.4f}')
print(f'Elapsed time total: {elapsed}\n\n')
if not overfit:
# Choose the model with minimal loss on validation set
if len(loss_per_epoch_valid) == 1 or len(loss_per_epoch_valid) > 1 and loss_per_epoch_valid[-1] < min(loss_per_epoch_valid[:-1]):
torch.save(model.state_dict(), model_min_loss_path)
print(f'Epoch {epoch:3}: Model MIN-LOSS saved! -> Val Loss = {valid_loss_epoch:.6f}\tVal F1 = {valid_f1_epoch:.4f}\tVal inv-F1 = {valid_f1_inv_epoch:.4f}' \
f'\tVal FPR = {valid_fp_rate_epoch:.4f}\tVal FNR = {valid_fn_rate_epoch:.4f}\t')
save_checkpoint(epoch, model, optimizer, min(loss_per_epoch_train), min(loss_per_epoch_valid), out, ckpt_path) # Save the checkpoint every epoch
scheduler.step(valid_loss_epoch)
        # Code that evaluates NGA50 during training -- only for overfitting
plot_nga50_during_training = hyperparameters['plot_nga50_during_training']
i = hyperparameters['chr_overfit']
eval_frequency = hyperparameters['eval_frequency']
if overfit and plot_nga50_during_training and (epoch+1) % eval_frequency == 0:
# call inference
refs_path = hyperparameters['refs_path']
save_dir = os.path.join(train_path, assembler)
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
if not os.path.isdir(os.path.join(save_dir, f'assembly')):
os.mkdir(os.path.join(save_dir, f'assembly'))
if not os.path.isdir(os.path.join(save_dir, f'inference')):
os.mkdir(os.path.join(save_dir, f'inference'))
if not os.path.isdir(os.path.join(save_dir, f'reports')):
os.mkdir(os.path.join(save_dir, f'reports'))
|
def save_checkpoint(epoch, model, optimizer, loss_train, loss_valid, out, ckpt_path):
checkpoint = {
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optim_state_dict': optimizer.state_dict(),
'loss_train': loss_train,
'loss_valid': loss_valid,
}
torch.save(checkpoint, ckpt_path)
def load_checkpoint(out, model, optimizer):
ckpt_path = f'checkpoints/{out}.pt'
checkpoint = torch.load(ckpt_path)
epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optim_state_dict'])
loss_train = checkpoint['loss_train']
loss_valid = checkpoint['loss_valid']
return epoch, model, optimizer, loss_train, loss_valid
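`save_checkpoint`/`load_checkpoint` above round-trip a dict of training state through `torch.save`/`torch.load`. The same pattern, sketched with `pickle` on plain values so it runs without torch (helper names are illustrative, not the repo's API):

```python
import os
import pickle
import tempfile

def save_ckpt(path, epoch, loss_train, loss_valid):
    """Serialize a minimal training-state dict, as save_checkpoint does."""
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch, "loss_train": loss_train,
                     "loss_valid": loss_valid}, f)

def load_ckpt(path):
    """Deserialize the training-state dict, as load_checkpoint does."""
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
save_ckpt(path, epoch=3, loss_train=0.42, loss_valid=0.57)
ckpt = load_ckpt(path)
```

In the real code the dict additionally carries `model_state_dict` and `optim_state_dict`, which is why torch's serializer is required there.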
def view_model_param(model):
total_param = 0
for param in model.parameters():
total_param += np.prod(list(param.data.size()))
return total_param
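`view_model_param` above counts parameters as the sum over tensors of the product of their dimensions. The same count, sketched over plain shape tuples for a hypothetical two-layer MLP (no torch required):

```python
from math import prod

def count_params(shapes):
    """Total parameter count = sum over tensors of the product of their dims,
    mirroring np.prod(param.data.size()) in view_model_param."""
    return sum(prod(shape) for shape in shapes)

# Toy MLP: Linear(16 -> 32) weight + bias, Linear(32 -> 1) weight + bias.
shapes = [(32, 16), (32,), (1, 32), (1,)]
n = count_params(shapes)   # 32*16 + 32 + 32 + 1 = 577
```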
def mask_graph(g, fraction, device):
keep_node_idx = torch.rand(g.num_nodes(), device=device) < fraction
sub_g = dgl.node_subgraph(g, keep_node_idx, store_ids=True)
return sub_g
def mask_graph_strandwise(g, fraction, device):
keep_node_idx_half = torch.rand(g.num_nodes() // 2, device=device) < fraction
keep_node_idx = torch.empty(keep_node_idx_half.size(0) * 2, dtype=keep_node_idx_half.dtype)
keep_node_idx[0::2] = keep_node_idx_half
keep_node_idx[1::2] = keep_node_idx_half
sub_g = dgl.node_subgraph(g, keep_node_idx, store_ids=True)
print(f'Masking fraction: {fraction}')
print(f'Original graph: N={g.num_nodes()}, E={g.num_edges()}')
print(f'Subsampled graph: N={sub_g.num_nodes()}, E={sub_g.num_edges()}')
return sub_g
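`mask_graph_strandwise` above draws one keep/drop decision per read and copies it onto both strand nodes via the interleaved `[0::2]`/`[1::2]` assignment, so nodes 2i and 2i+1 are always kept or dropped together. A torch-free sketch of that expansion (helper name is illustrative):

```python
def expand_strandwise(keep_half):
    """Duplicate each per-read decision onto its two strand nodes
    (node 2i = one strand, node 2i+1 = its complement)."""
    keep = [False] * (2 * len(keep_half))
    keep[0::2] = keep_half   # even-indexed strand nodes
    keep[1::2] = keep_half   # odd-indexed strand nodes
    return keep

mask = expand_strandwise([True, False, True])
```

Masking both strands together preserves the symmetry of the assembly graph, which plain `mask_graph` does not guarantee.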
def symmetry_loss(org_scores, rev_scores, labels, pos_weight=1.0, alpha=1.0):
BCE = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight, reduction='none')
BCE_org = BCE(org_scores, labels)
BCE_rev = BCE(rev_scores, labels)
abs_diff = torch.abs(org_scores - rev_scores)
loss = (BCE_org + BCE_rev + alpha * abs_diff)
loss = loss.mean()
return loss
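`symmetry_loss` above adds BCE-with-logits on both graph orientations to an `alpha`-weighted `|org - rev|` consistency penalty. A scalar, torch-free sketch using the softplus form of BCE-with-logits (function names are illustrative):

```python
from math import exp, log

def softplus(x):
    """log(1 + e^x); fine for the small scores used here."""
    return log(1.0 + exp(x))

def bce_with_logits(x, y, pos_weight=1.0):
    """-(pos_weight*y*log(sigmoid(x)) + (1-y)*log(1-sigmoid(x))),
    rewritten as pos_weight*y*softplus(-x) + (1-y)*softplus(x)."""
    return pos_weight * y * softplus(-x) + (1 - y) * softplus(x)

def symmetry_loss_scalar(org, rev, label, pos_weight=1.0, alpha=1.0):
    """Scalar analogue of symmetry_loss: BCE on both orientations plus
    an alpha-weighted |org - rev| consistency penalty."""
    return (bce_with_logits(org, label, pos_weight)
            + bce_with_logits(rev, label, pos_weight)
            + alpha * abs(org - rev))

# The loss is invariant to swapping the two orientations, by construction.
a = symmetry_loss_scalar(2.0, -1.0, 1.0)
b = symmetry_loss_scalar(-1.0, 2.0, 1.0)
```

The `abs` term is what pushes the model to score an edge the same way whether the graph is traversed forward or reversed.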
def train(train_path, valid_path, out, assembler, overfit=False, dropout=None, seed=None, resume=False):
hyperparameters = get_hyperparameters()
if seed is None:
seed = hyperparameters['seed']
num_epochs = hyperparameters['num_epochs']
num_gnn_layers = hyperparameters['num_gnn_layers']
hidden_features = hyperparameters['dim_latent']
nb_pos_enc = hyperparameters['nb_pos_enc']
patience = hyperparameters['patience']
lr = hyperparameters['lr']
device = hyperparameters['device']
batch_norm = hyperparameters['batch_norm']
node_features = hyperparameters['node_features']
edge_features = hyperparameters['edge_features']
hidden_edge_features = hyperparameters['hidden_edge_features']
hidden_edge_scores = hyperparameters['hidden_edge_scores']
decay = hyperparameters['decay']
wandb_mode = hyperparameters['wandb_mode']
wandb_project = hyperparameters['wandb_project']
num_nodes_per_cluster = hyperparameters['num_nodes_per_cluster']
npc_lower_bound = hyperparameters['npc_lower_bound']
npc_upper_bound = hyperparameters['npc_upper_bound']
k_extra_hops = hyperparameters['k_extra_hops']
masking = hyperparameters['masking']
mask_frac_low = hyperparameters['mask_frac_low']
mask_frac_high = hyperparameters['mask_frac_high']
use_symmetry_loss = hyperparameters['use_symmetry_loss']
alpha = hyperparameters['alpha']
config = get_config()
checkpoints_path = os.path.abspath(config['checkpoints_path'])
models_path = os.path.abspath(config['models_path'])
print(f'----- TRAIN -----')
print(f'\nSaving checkpoints: {checkpoints_path}')
print(f'Saving models: {models_path}\n')
print(f'USING SEED: {seed}')
if torch.cuda.is_available():
torch.cuda.set_device(device)
utils.set_seed(seed)
time_start = datetime.now()
timestamp = time_start.strftime('%Y-%b-%d-%H-%M-%S')
if out is None:
out = timestamp
assert train_path is not None, "train_path not specified!"
assert valid_path is not None, "valid_path not specified!"
if not overfit:
ds_train = AssemblyGraphDataset(train_path, assembler=assembler)
ds_valid = AssemblyGraphDataset(valid_path, assembler=assembler)
else:
ds_train = ds_valid = AssemblyGraphDataset(train_path, assembler=assembler)
pos_to_neg_ratio = sum([((g.edata['y']==1).sum() / (g.edata['y']==0).sum()).item() for idx, g in ds_train]) / len(ds_train)
model = models.SymGatedGCNModel(node_features, edge_features, hidden_features, hidden_edge_features, num_gnn_layers, hidden_edge_scores, batch_norm, nb_pos_enc, dropout=dropout)
model.to(device)
if not os.path.exists(models_path):
print(models_path)
os.makedirs(models_path)
out = out + f'_seed{seed}'
model_path = os.path.join(models_path, f'model_{out}.pt') # TODO: Delete this?
model_min_loss_path = os.path.join(models_path, f'model_min-loss_{out}.pt')
print(f'MODEL PATH: {model_path}')
ckpt_path = f'{checkpoints_path}/ckpt_{out}.pt'
print(f'CHECKPOINT PATH: {ckpt_path}')
print(f'\nNumber of network parameters: {view_model_param(model)}\n')
print(f'Normalization type : Batch Normalization\n') if batch_norm else print(f'Normalization type : Layer Normalization\n')
pos_weight = torch.tensor([1 / pos_to_neg_ratio], device=device)
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=decay, patience=patience, verbose=True)
start_epoch = 0
loss_per_epoch_train, loss_per_epoch_valid = [], []
f1_inv_per_epoch_valid = []
if not os.path.exists(checkpoints_path):
os.makedirs(checkpoints_path)
if resume:
# ckpt_path = f'{checkpoints_path}/ckpt_{out}.pt' # This should be the checkpoint of the old run
checkpoint = torch.load(ckpt_path)
        print('Loading the checkpoint from:', ckpt_path, sep='\t')
model_path = os.path.join(models_path, f'model_{out}_resumed-{num_epochs}.pt')
ckpt_path = os.path.join(checkpoints_path, f'ckpt_{out}_resumed-{num_epochs}.pt')
print('Saving the resumed model to:', model_path, sep='\t')
print('Saving the new checkpoint to:', ckpt_path, sep='\t')
start_epoch = checkpoint['epoch'] + 1
print(f'Resuming from epoch: {start_epoch}')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optim_state_dict'])
min_loss_train = checkpoint['loss_train']
min_loss_valid = checkpoint['loss_valid']
loss_per_epoch_train.append(min_loss_train)
loss_per_epoch_valid.append(min_loss_valid)
elapsed = utils.timedelta_to_str(datetime.now() - time_start)
print(f'Loading data done. Elapsed time: {elapsed}')
try:
with wandb.init(project=wandb_project, config=hyperparameters, mode=wandb_mode, name=out):
wandb.watch(model, criterion, log='all', log_freq=1000)
for epoch in range(start_epoch, num_epochs):
train_loss_all_graphs, train_fp_rate_all_graphs, train_fn_rate_all_graphs = [], [], []
train_acc_all_graphs, train_precision_all_graphs, train_recall_all_graphs, train_f1_all_graphs = [], [], [], []
train_loss_epoch, train_fp_rate_epoch, train_fn_rate_epoch = [], [], []
train_acc_epoch, train_precision_epoch, train_recall_epoch, train_f1_epoch = [], [], [], []
train_acc_inv_epoch, train_precision_inv_epoch, train_recall_inv_epoch, train_f1_inv_epoch = [], [], [], []
train_aps_epoch, train_aps_inv_epoch = [], []
print('\n===> TRAINING\n')
random.shuffle(ds_train.graph_list)
for data in ds_train:
model.train()
idx, g = data
print(f'\n(TRAIN: Epoch = {epoch:3}) NEW GRAPH: index = {idx}')
if masking:
fraction = random.randint(mask_frac_low, mask_frac_high) / 100 # Fraction of nodes to be left in the graph (.85 -> ~30x, 1.0 -> 60x)
g = mask_graph_strandwise(g, fraction, device)
                # Number of clusters dependent on graph size!
num_nodes_per_cluster_min = int(num_nodes_per_cluster * npc_lower_bound)
num_nodes_per_cluster_max = int(num_nodes_per_cluster * npc_upper_bound) + 1
num_nodes_for_g = torch.LongTensor(1).random_(num_nodes_per_cluster_min, num_nodes_per_cluster_max).item()
num_clusters = g.num_nodes() // num_nodes_for_g + 1
if num_nodes_for_g >= g.num_nodes(): # train with full graph
print(f'\nUse METIS: False')
print(f'Use full graph')
g = g.to(device)
if use_symmetry_loss:
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
# pe = g.ndata['pe'].to(device)
# pe = (pe - pe.mean()) / pe.std()
pe_in = g.ndata['in_deg'].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
# pe = torch.cat((pe_in, pe_out, pe), dim=1)
pe = torch.cat((pe_in, pe_out), dim=1)
org_scores = model(g, x, e, pe).squeeze(-1)
edge_predictions = org_scores
edge_labels = g.edata['y'].to(device)
g = dgl.reverse(g, True, True)
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
# pe = g.ndata['pe'].to(device)
# pe = (pe - pe.mean()) / pe.std()
pe_out = g.ndata['in_deg'].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe_in = g.ndata['out_deg'].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
# pe = torch.cat((pe_in, pe_out, pe), dim=1)
pe = torch.cat((pe_in, pe_out), dim=1)
rev_scores = model(g, x, e, pe).squeeze(-1)
loss = symmetry_loss(org_scores, rev_scores, edge_labels, pos_weight, alpha=alpha)
else:
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
# pe = g.ndata['pe'].to(device)
# pe = (pe - pe.mean()) / pe.std()
pe_in = g.ndata['in_deg'].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
# pe = torch.cat((pe_in, pe_out, pe), dim=1)
pe = torch.cat((pe_in, pe_out), dim=1)
edge_predictions = model(g, x, e, pe)
edge_predictions = edge_predictions.squeeze(-1)
edge_labels = g.edata['y'].to(device)
loss = criterion(edge_predictions, edge_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss = loss.item()
TP, TN, FP, FN = utils.calculate_tfpn(edge_predictions, edge_labels)
acc, precision, recall, f1 = utils.calculate_metrics(TP, TN, FP, FN)
try:
fp_rate = FP / (FP + TN)
except ZeroDivisionError:
fp_rate = 0.0
try:
fn_rate = FN / (FN + TP)
except ZeroDivisionError:
fn_rate = 0.0
train_fp_rate = fp_rate
train_fn_rate = fn_rate
train_acc = acc
train_precision = precision
train_recall = recall
train_f1 = f1
train_loss_epoch.append(loss.item())
train_fp_rate_epoch.append(fp_rate)
train_fn_rate_epoch.append(fn_rate)
# elapsed = utils.timedelta_to_str(datetime.now() - time_start)
# print(f'\nTRAINING (one training graph): Epoch = {epoch}, Graph = {idx}')
# print(f'Loss: {train_loss:.4f}, fp_rate(GT=0): {train_fp_rate:.4f}, fn_rate(GT=1): {train_fn_rate:.4f}')
# print(f'elapsed time: {elapsed}\n\n')
else: # train with mini-batch
print(f'\nUse METIS: True')
print(f'Number of clusters:', num_clusters)
g = g.long()
d = dgl.metis_partition(g, num_clusters, extra_cached_hops=k_extra_hops)
sub_gs = list(d.values())
random.shuffle(sub_gs)
                    # Loop over all mini-batches in the graph
running_loss, running_fp_rate, running_fn_rate = [], [], []
running_acc, running_precision, running_recall, running_f1 = [], [], [], []
for sub_g in sub_gs:
if use_symmetry_loss:
sub_g = sub_g.to(device)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_in = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
org_scores = model(sub_g, x, e, pe).squeeze(-1)
labels = g.edata['y'][sub_g.edata['_ID']].to(device)
sub_g = dgl.reverse(sub_g, True, True)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_out = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe_in = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe = torch.cat((pe_in, pe_out), dim=1)
rev_scores = model(sub_g, x, e, pe).squeeze(-1)
loss = symmetry_loss(org_scores, rev_scores, labels, pos_weight, alpha=alpha)
edge_predictions = org_scores
edge_labels = labels
else:
sub_g = sub_g.to(device)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_in = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
edge_predictions = model(sub_g, x, e, pe)
edge_predictions = edge_predictions.squeeze(-1)
edge_labels = g.edata['y'][sub_g.edata['_ID']].to(device)
loss = criterion(edge_predictions, edge_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
TP, TN, FP, FN = utils.calculate_tfpn(edge_predictions, edge_labels)
acc, precision, recall, f1 = utils.calculate_metrics(TP, TN, FP, FN)
acc_inv, precision_inv, recall_inv, f1_inv = utils.calculate_metrics_inverse(TP, TN, FP, FN)
try:
fp_rate = FP / (FP + TN)
except ZeroDivisionError:
fp_rate = 0.0
try:
fn_rate = FN / (FN + TP)
except ZeroDivisionError:
fn_rate = 0.0
# Append results of a single mini-batch / METIS partition
# These are used for epoch mean = mean over partitions over graphs - mostly DEPRECATED
running_loss.append(loss.item())
running_fp_rate.append(fp_rate)
running_fn_rate.append(fn_rate)
running_acc.append(acc)
running_precision.append(precision)
running_recall.append(recall)
running_f1.append(f1)
# These are used for epoch mean = mean over all the partitions in all the graphs
train_loss_epoch.append(loss.item())
train_fp_rate_epoch.append(fp_rate)
train_fn_rate_epoch.append(fn_rate)
train_acc_epoch.append(acc)
train_precision_epoch.append(precision)
train_recall_epoch.append(recall)
train_f1_epoch.append(f1)
                            # Inverse metrics, since F1 and related metrics are uninformative on datasets with mostly positive labels
train_acc_inv_epoch.append(acc_inv)
train_precision_inv_epoch.append(precision_inv)
train_recall_inv_epoch.append(recall_inv)
train_f1_inv_epoch.append(f1_inv)
# Average over all mini-batches (partitions) in a single graph - mostly DEPRECATED
train_loss = np.mean(running_loss)
train_fp_rate = np.mean(running_fp_rate)
train_fn_rate = np.mean(running_fn_rate)
train_acc = np.mean(running_acc)
train_precision = np.mean(running_precision)
train_recall = np.mean(running_recall)
train_f1 = np.mean(running_f1)
# elapsed = utils.timedelta_to_str(datetime.now() - time_start)
# print(f'\nTRAINING (one training graph): Epoch = {epoch}, Graph = {idx}')
# print(f'Loss: {train_loss:.4f}, fp_rate(GT=0): {train_fp_rate:.4f}, fn_rate(GT=1): {train_fn_rate:.4f}')
# print(f'elapsed time: {elapsed}\n\n')
# Record after each graph in the dataset - mostly DEPRECATED
train_loss_all_graphs.append(train_loss)
train_fp_rate_all_graphs.append(train_fp_rate)
train_fn_rate_all_graphs.append(train_fn_rate)
train_acc_all_graphs.append(train_acc)
train_precision_all_graphs.append(train_precision)
train_recall_all_graphs.append(train_recall)
train_f1_all_graphs.append(train_f1)
# Average over all the training graphs in one epoch - mostly DEPRECATED
train_loss_all_graphs = np.mean(train_loss_all_graphs)
train_fp_rate_all_graphs = np.mean(train_fp_rate_all_graphs)
train_fn_rate_all_graphs = np.mean(train_fn_rate_all_graphs)
train_acc_all_graphs = np.mean(train_acc_all_graphs)
train_precision_all_graphs = np.mean(train_precision_all_graphs)
train_recall_all_graphs = np.mean(train_recall_all_graphs)
train_f1_all_graphs = np.mean(train_f1_all_graphs)
# Average over all the partitions in one epoch
train_loss_epoch = np.mean(train_loss_epoch)
train_fp_rate_epoch = np.mean(train_fp_rate_epoch)
train_fn_rate_epoch = np.mean(train_fn_rate_epoch)
train_acc_epoch = np.mean(train_acc_epoch)
train_precision_epoch = np.mean(train_precision_epoch)
train_recall_epoch = np.mean(train_recall_epoch)
train_f1_epoch = np.mean(train_f1_epoch)
train_acc_inv_epoch = np.mean(train_acc_inv_epoch)
train_precision_inv_epoch = np.mean(train_precision_inv_epoch)
train_recall_inv_epoch = np.mean(train_recall_inv_epoch)
train_f1_inv_epoch = np.mean(train_f1_inv_epoch)
loss_per_epoch_train.append(train_loss_epoch)
lr_value = optimizer.param_groups[0]['lr']
elapsed = utils.timedelta_to_str(datetime.now() - time_start)
print(f'\n==> TRAINING (all training graphs): Epoch = {epoch}')
print(f'Loss: {train_loss_epoch:.4f}, fp_rate(GT=0): {train_fp_rate_epoch:.4f}, fn_rate(GT=1): {train_fn_rate_epoch:.4f}')
print(f'Elapsed time: {elapsed}\n\n')
if overfit:
                if len(loss_per_epoch_train) == 1 or len(loss_per_epoch_train) > 1 and loss_per_epoch_train[-1] < min(loss_per_epoch_train[:-1]):
torch.save(model.state_dict(), model_path)
print(f'Epoch {epoch}: Model saved!')
save_checkpoint(epoch, model, optimizer, loss_per_epoch_train[-1], 0.0, out, ckpt_path)
scheduler.step(train_loss_all_graphs)
wandb.log({'train_loss': train_loss_all_graphs, 'train_accuracy': train_acc_all_graphs, \
'train_precision': train_precision_all_graphs, 'lr_value': lr_value, \
'train_recall': train_recall_all_graphs, 'train_f1': train_f1_all_graphs, \
'train_fp-rate': train_fp_rate_all_graphs, 'train_fn-rate': train_fn_rate_all_graphs})
continue # This will entirely skip the validation
val_loss_all_graphs, val_fp_rate_all_graphs, val_fn_rate_all_graphs = [], [], []
val_acc_all_graphs, val_precision_all_graphs, val_recall_all_graphs, val_f1_all_graphs = [], [], [], []
valid_loss_epoch, valid_fp_rate_epoch, valid_fn_rate_epoch = [], [], []
valid_acc_epoch, valid_precision_epoch, valid_recall_epoch, valid_f1_epoch = [], [], [], []
valid_acc_inv_epoch, valid_precision_inv_epoch, valid_recall_inv_epoch, valid_f1_inv_epoch = [], [], [], []
valid_aps_epoch, valid_aps_inv_epoch = [], []
with torch.no_grad():
print('\n===> VALIDATION\n')
time_start_eval = datetime.now()
model.eval()
for data in ds_valid:
idx, g = data
print(f'\n(VALID Epoch = {epoch:3}) NEW GRAPH: index = {idx}')
if masking:
fraction = random.randint(mask_frac_low, mask_frac_high) / 100 # Fraction of nodes to be left in the graph (.85 -> ~30x, 1.0 -> 60x)
g = mask_graph_strandwise(g, fraction, device)
                    # Number of clusters dependent on graph size!
num_nodes_per_cluster_min = int(num_nodes_per_cluster * npc_lower_bound)
num_nodes_per_cluster_max = int(num_nodes_per_cluster * npc_upper_bound) + 1
num_nodes_for_g = torch.LongTensor(1).random_(num_nodes_per_cluster_min, num_nodes_per_cluster_max).item() # DEBUG!!!
num_clusters = g.num_nodes() // num_nodes_for_g + 1
if num_nodes_for_g >= g.num_nodes(): # full graph
print(f'\nUse METIS: False')
print(f'Use full graph')
g = g.to(device)
if use_symmetry_loss:
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
pe_in = g.ndata['in_deg'].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
org_scores = model(g, x, e, pe).squeeze(-1)
edge_predictions = org_scores
edge_labels = g.edata['y'].to(device)
g = dgl.reverse(g, True, True)
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
pe_out = g.ndata['in_deg'].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe_in = g.ndata['out_deg'].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe = torch.cat((pe_in, pe_out), dim=1)
rev_scores = model(g, x, e, pe).squeeze(-1)
loss = symmetry_loss(org_scores, rev_scores, edge_labels, pos_weight, alpha=alpha)
else:
x = g.ndata['x'].to(device)
e = g.edata['e'].to(device)
pe_in = g.ndata['in_deg'].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
edge_predictions = model(g, x, e, pe)
edge_predictions = edge_predictions.squeeze(-1)
edge_labels = g.edata['y'].to(device)
loss = criterion(edge_predictions, edge_labels)
val_loss = loss.item()
TP, TN, FP, FN = utils.calculate_tfpn(edge_predictions, edge_labels)
acc, precision, recall, f1 = utils.calculate_metrics(TP, TN, FP, FN)
try:
fp_rate = FP / (FP + TN)
except ZeroDivisionError:
fp_rate = 0.0
try:
fn_rate = FN / (FN + TP)
except ZeroDivisionError:
fn_rate = 0.0
val_fp_rate = fp_rate
val_fn_rate = fn_rate
val_acc = acc
val_precision = precision
val_recall = recall
val_f1 = f1
valid_loss_epoch.append(loss.item())
valid_fp_rate_epoch.append(fp_rate)
valid_fn_rate_epoch.append(fn_rate)
# elapsed = utils.timedelta_to_str(datetime.now() - time_start_eval)
# print(f'\nVALIDATION (one validation graph): Epoch = {epoch}, Graph = {idx}')
# print(f'Loss: {val_loss:.4f}, fp_rate(GT=0): {val_fp_rate:.4f}, fn_rate(GT=1): {val_fn_rate:.4f}')
# print(f'elapsed time: {elapsed}\n\n')
else: # mini-batch
print(f'\nNum clusters:', num_clusters)
g = g.long()
d = dgl.metis_partition(g, num_clusters, extra_cached_hops=k_extra_hops)
sub_gs = list(d.values())
# g = g.to(device)
# For loop over all mini-batch in the graph
running_loss, running_fp_rate, running_fn_rate = [], [], []
running_acc, running_precision, running_recall, running_f1 = [], [], [], []
for sub_g in sub_gs:
if use_symmetry_loss:
sub_g = sub_g.to(device)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_in = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
org_scores = model(sub_g, x, e, pe).squeeze(-1)
labels = g.edata['y'][sub_g.edata['_ID']].to(device)
sub_g = dgl.reverse(sub_g, True, True)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_out = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe_in = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device) # Reversed edges, in/out-deg also reversed
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe = torch.cat((pe_in, pe_out), dim=1)
rev_scores = model(sub_g, x, e, pe).squeeze(-1)
loss = symmetry_loss(org_scores, rev_scores, labels, pos_weight, alpha=alpha)
edge_predictions = org_scores
edge_labels = labels
else:
sub_g = sub_g.to(device)
x = g.ndata['x'][sub_g.ndata['_ID']].to(device)
e = g.edata['e'][sub_g.edata['_ID']].to(device)
pe_in = g.ndata['in_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_in = (pe_in - pe_in.mean()) / pe_in.std()
pe_out = g.ndata['out_deg'][sub_g.ndata['_ID']].unsqueeze(1).to(device)
pe_out = (pe_out - pe_out.mean()) / pe_out.std()
pe = torch.cat((pe_in, pe_out), dim=1)
edge_predictions = model(sub_g, x, e, pe)
edge_predictions = edge_predictions.squeeze(-1)
edge_labels = g.edata['y'][sub_g.edata['_ID']].to(device)
loss = criterion(edge_predictions, edge_labels)
TP, TN, FP, FN = utils.calculate_tfpn(edge_predictions, edge_labels)
acc, precision, recall, f1 = utils.calculate_metrics(TP, TN, FP, FN)
acc_inv, precision_inv, recall_inv, f1_inv = utils.calculate_metrics_inverse(TP, TN, FP, FN)
try:
fp_rate = FP / (FP + TN)
except ZeroDivisionError:
fp_rate = 0.0
try:
fn_rate = FN / (FN + TP)
except ZeroDivisionError:
fn_rate = 0.0
# Append results of a single mini-batch / METIS partition
# These are used for epoch mean = mean over partitions over graphs - mostly DEPRECATED
running_loss.append(loss.item())
running_fp_rate.append(fp_rate)
running_fn_rate.append(fn_rate)
running_acc.append(acc)
running_precision.append(precision)
running_recall.append(recall)
running_f1.append(f1)
# These are used for epoch mean = mean over all the partitions in all the graphs
valid_loss_epoch.append(loss.item())
valid_fp_rate_epoch.append(fp_rate)
valid_fn_rate_epoch.append(fn_rate)
valid_acc_epoch.append(acc)
valid_precision_epoch.append(precision)
valid_recall_epoch.append(recall)
valid_f1_epoch.append(f1)
                        # Inverse metrics, because F1 and related metrics are not informative on datasets with mostly positive labels
valid_acc_inv_epoch.append(acc_inv)
valid_precision_inv_epoch.append(precision_inv)
valid_recall_inv_epoch.append(recall_inv)
valid_f1_inv_epoch.append(f1_inv)
# Average over all mini-batches (partitions) in a single graph - mostly DEPRECATED
val_loss = np.mean(running_loss)
val_fp_rate = np.mean(running_fp_rate)
val_fn_rate = np.mean(running_fn_rate)
val_acc = np.mean(running_acc)
val_precision = np.mean(running_precision)
val_recall = np.mean(running_recall)
val_f1 = np.mean(running_f1)
# elapsed = utils.timedelta_to_str(datetime.now() - time_start_eval)
# print(f'\nVALIDATION (one validation graph): Epoch = {epoch}, Graph = {idx}')
# print(f'Loss: {val_loss:.4f}, fp_rate(GT=0): {val_fp_rate:.4f}, fn_rate(GT=1): {val_fn_rate:.4f}')
# print(f'elapsed time: {elapsed}\n\n')
# Record after each graph in the dataset - mostly DEPRECATED
val_loss_all_graphs.append(val_loss)
val_fp_rate_all_graphs.append(val_fp_rate)
val_fn_rate_all_graphs.append(val_fn_rate)
val_acc_all_graphs.append(val_acc)
val_precision_all_graphs.append(val_precision)
val_recall_all_graphs.append(val_recall)
val_f1_all_graphs.append(val_f1)
# Average over all the training graphs in one epoch - mostly DEPRECATED
val_loss_all_graphs = np.mean(val_loss_all_graphs)
val_fp_rate_all_graphs = np.mean(val_fp_rate_all_graphs)
val_fn_rate_all_graphs = np.mean(val_fn_rate_all_graphs)
val_acc_all_graphs = np.mean(val_acc_all_graphs)
val_precision_all_graphs = np.mean(val_precision_all_graphs)
val_recall_all_graphs = np.mean(val_recall_all_graphs)
val_f1_all_graphs = np.mean(val_f1_all_graphs)
# Average over all the partitions in one epoch
valid_loss_epoch = np.mean(valid_loss_epoch)
valid_fp_rate_epoch = np.mean(valid_fp_rate_epoch)
valid_fn_rate_epoch = np.mean(valid_fn_rate_epoch)
valid_acc_epoch = np.mean(valid_acc_epoch)
valid_precision_epoch = np.mean(valid_precision_epoch)
valid_recall_epoch = np.mean(valid_recall_epoch)
valid_f1_epoch = np.mean(valid_f1_epoch)
valid_acc_inv_epoch = np.mean(valid_acc_inv_epoch)
valid_precision_inv_epoch = np.mean(valid_precision_inv_epoch)
valid_recall_inv_epoch = np.mean(valid_recall_inv_epoch)
valid_f1_inv_epoch = np.mean(valid_f1_inv_epoch)
loss_per_epoch_valid.append(valid_loss_epoch)
f1_inv_per_epoch_valid.append(valid_f1_inv_epoch)
elapsed = utils.timedelta_to_str(datetime.now() - time_start)
print(f'\n==> VALIDATION (all validation graphs): Epoch = {epoch}')
print(f'Loss: {valid_loss_epoch:.4f}, fp_rate(GT=0): {valid_fp_rate_epoch:.4f}, fn_rate(GT=1): {valid_fn_rate_epoch:.4f}')
print(f'Elapsed time total: {elapsed}\n\n')
if not overfit:
# Choose the model with minimal loss on validation set
if len(loss_per_epoch_valid) == 1 or len(loss_per_epoch_valid) > 1 and loss_per_epoch_valid[-1] < min(loss_per_epoch_valid[:-1]):
torch.save(model.state_dict(), model_min_loss_path)
print(f'Epoch {epoch:3}: Model MIN-LOSS saved! -> Val Loss = {valid_loss_epoch:.6f}\tVal F1 = {valid_f1_epoch:.4f}\tVal inv-F1 = {valid_f1_inv_epoch:.4f}' \
f'\tVal FPR = {valid_fp_rate_epoch:.4f}\tVal FNR = {valid_fn_rate_epoch:.4f}\t')
save_checkpoint(epoch, model, optimizer, min(loss_per_epoch_train), min(loss_per_epoch_valid), out, ckpt_path) # Save the checkpoint every epoch
scheduler.step(valid_loss_epoch)
    # Code that evaluates NGA50 during training -- only for overfitting
plot_nga50_during_training = hyperparameters['plot_nga50_during_training']
i = hyperparameters['chr_overfit']
eval_frequency = hyperparameters['eval_frequency']
if overfit and plot_nga50_during_training and (epoch+1) % eval_frequency == 0:
# call inference
refs_path = hyperparameters['refs_path']
save_dir = os.path.join(train_path, assembler)
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
if not os.path.isdir(os.path.join(save_dir, f'assembly')):
os.mkdir(os.path.join(save_dir, f'assembly'))
if not os.path.isdir(os.path.join(save_dir, f'inference')):
os.mkdir(os.path.join(save_dir, f'inference'))
if not os.path.isdir(os.path.join(save_dir, f'reports')):
os.mkdir(os.path.join(save_dir, f'reports')) | inference(train_path, model_path, assembler, save_dir) | 3 | 2023-12-08 04:45:45+00:00 | 12k |
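Throughout the validation loop above, `fp_rate` and `fn_rate` are derived from TP/TN/FP/FN counts with `ZeroDivisionError` guards for degenerate batches. A stand-alone sketch of that pattern follows; the counting function is a hypothetical stand-in for `utils.calculate_tfpn`, whose implementation is not shown in this excerpt:

```python
def tfpn(predictions, labels, threshold=0.5):
    # Count true/false positives/negatives for binary labels
    # (illustrative stand-in for utils.calculate_tfpn).
    TP = TN = FP = FN = 0
    for p, y in zip(predictions, labels):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            TP += 1
        elif pred == 0 and y == 0:
            TN += 1
        elif pred == 1 and y == 0:
            FP += 1
        else:
            FN += 1
    return TP, TN, FP, FN


def rates(TP, TN, FP, FN):
    # Same zero-division guards as in the loop above.
    try:
        fp_rate = FP / (FP + TN)
    except ZeroDivisionError:
        fp_rate = 0.0
    try:
        fn_rate = FN / (FN + TP)
    except ZeroDivisionError:
        fn_rate = 0.0
    return fp_rate, fn_rate


TP, TN, FP, FN = tfpn([0.9, 0.2, 0.8, 0.4], [1, 0, 0, 1])
print(rates(TP, TN, FP, FN))  # (0.5, 0.5)
```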
Deltares/imod-python | imod/mf6/disv.py | [
{
"identifier": "Package",
"path": "imod/mf6/package.py",
"snippet": "class Package(PackageBase, abc.ABC):\n \"\"\"\n Package is used to share methods for specific packages with no time\n component.\n\n It is not meant to be used directly, only to inherit from, to implement new\n packages... | import numpy as np
import pandas as pd
from imod.mf6.package import Package
from imod.mf6.regridding_utils import RegridderType
from imod.mf6.validation import DisBottomSchema
from imod.mf6.write_context import WriteContext
from imod.schemata import (
AllValueSchema,
AnyValueSchema,
DimsSchema,
DTypeSchema,
IdentityNoDataSchema,
IndexesSchema,
) | 9,518 |
class VerticesDiscretization(Package):
"""
Discretization by Vertices (DISV).
Parameters
----------
top: array of floats (xu.UgridDataArray)
bottom: array of floats (xu.UgridDataArray)
idomain: array of integers (xu.UgridDataArray)
validate: {True, False}
Flag to indicate whether the package should be validated upon
initialization. This raises a ValidationError if package input is
provided in the wrong manner. Defaults to True.
"""
_pkg_id = "disv"
_init_schemata = {
"top": [
|
 | DTypeSchema(np.floating), | 7 | 2023-12-08 13:57:59+00:00 | 12k |
Dong142857/Live3DPortrait | models/eg3d/superresolution.py | [
{
"identifier": "Conv2dLayer",
"path": "models/eg3d/networks_stylegan2.py",
"snippet": "class Conv2dLayer(torch.nn.Module):\n def __init__(self,\n in_channels, # Number of input channels.\n out_channels, # Number of output channels.\n kernel_s... | import torch
import numpy as np
from models.eg3d.networks_stylegan2 import Conv2dLayer, SynthesisLayer, ToRGBLayer
from torch_utils.ops import upfirdn2d
from torch_utils import persistence
from torch_utils import misc
from models.eg3d.networks_stylegan2 import SynthesisBlock
from models.eg3d.networks_stylegan3 import SynthesisLayer as AFSynthesisLayer | 8,988 | x, rgb = self.block1(x, rgb, ws, **block_kwargs)
return rgb
#----------------------------------------------------------------------------
# for 128 x 128 generation
@persistence.persistent_class
class SuperresolutionHybrid2X(torch.nn.Module):
def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias,
num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE
**block_kwargs):
super().__init__()
assert img_resolution == 128
use_fp16 = sr_num_fp16_res > 0
self.input_resolution = 64
self.sr_antialias = sr_antialias
self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, resolution=64,
img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=128,
img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1]))
def forward(self, rgb, x, ws, **block_kwargs):
ws = ws[:, -1:, :].repeat(1, 3, 1)
if x.shape[-1] != self.input_resolution:
x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
x, rgb = self.block0(x, rgb, ws, **block_kwargs)
x, rgb = self.block1(x, rgb, ws, **block_kwargs)
return rgb
#----------------------------------------------------------------------------
# TODO: Delete (here for backwards compatibility with old 256x256 models)
@persistence.persistent_class
class SuperresolutionHybridDeepfp32(torch.nn.Module):
def __init__(self, channels, img_resolution, sr_num_fp16_res,
num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE
**block_kwargs):
super().__init__()
assert img_resolution == 256
use_fp16 = sr_num_fp16_res > 0
self.input_resolution = 128
self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, resolution=128,
img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=256,
img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1]))
def forward(self, rgb, x, ws, **block_kwargs):
ws = ws[:, -1:, :].repeat(1, 3, 1)
if x.shape[-1] < self.input_resolution:
x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False)
rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False)
x, rgb = self.block0(x, rgb, ws, **block_kwargs)
x, rgb = self.block1(x, rgb, ws, **block_kwargs)
return rgb
#----------------------------------------------------------------------------
@persistence.persistent_class
class SynthesisBlockNoUp(torch.nn.Module):
def __init__(self,
in_channels, # Number of input channels, 0 = first block.
out_channels, # Number of output channels.
w_dim, # Intermediate latent (W) dimensionality.
resolution, # Resolution of this block.
img_channels, # Number of output color channels.
is_last, # Is this the last block?
architecture = 'skip', # Architecture: 'orig', 'skip', 'resnet'.
resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations.
conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping.
use_fp16 = False, # Use FP16 for this block?
fp16_channels_last = False, # Use channels-last memory format with FP16?
fused_modconv_default = True, # Default value of fused_modconv. 'inference_only' = True for inference, False for training.
**layer_kwargs, # Arguments for SynthesisLayer.
):
assert architecture in ['orig', 'skip', 'resnet']
super().__init__()
self.in_channels = in_channels
self.w_dim = w_dim
self.resolution = resolution
self.img_channels = img_channels
self.is_last = is_last
self.architecture = architecture
self.use_fp16 = use_fp16
self.channels_last = (use_fp16 and fp16_channels_last)
self.fused_modconv_default = fused_modconv_default
self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter))
self.num_conv = 0
self.num_torgb = 0
if in_channels == 0:
self.const = torch.nn.Parameter(torch.randn([out_channels, resolution, resolution]))
if in_channels != 0:
self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution,
conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs)
self.num_conv += 1
self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution,
conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs)
self.num_conv += 1
if is_last or architecture == 'skip':
self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim,
conv_clamp=conv_clamp, channels_last=self.channels_last)
self.num_torgb += 1
if in_channels != 0 and architecture == 'resnet':
| # SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
"""Superresolution network architectures from the paper
"Efficient Geometry-aware 3D Generative Adversarial Networks"."""
#----------------------------------------------------------------------------
# for 512x512 generation
@persistence.persistent_class
class SuperresolutionHybrid8X(torch.nn.Module):
def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias,
num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE
**block_kwargs):
super().__init__()
assert img_resolution == 512
use_fp16 = sr_num_fp16_res > 0
self.input_resolution = 128
self.sr_antialias = sr_antialias
self.block0 = SynthesisBlock(channels, 128, w_dim=512, resolution=256,
img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=512,
img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1]))
def forward(self, rgb, x, ws, **block_kwargs):
ws = ws[:, -1:, :].repeat(1, 3, 1)
if x.shape[-1] != self.input_resolution:
x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
x, rgb = self.block0(x, rgb, ws, **block_kwargs)
x, rgb = self.block1(x, rgb, ws, **block_kwargs)
return rgb
#----------------------------------------------------------------------------
# for 256x256 generation
@persistence.persistent_class
class SuperresolutionHybrid4X(torch.nn.Module):
def __init__(self, channels, img_resolution, sr_num_fp16_res, sr_antialias,
num_fp16_res=4, conv_clamp=None, channel_base=None, channel_max=None,# IGNORE
**block_kwargs):
super().__init__()
assert img_resolution == 256
use_fp16 = sr_num_fp16_res > 0
self.sr_antialias = sr_antialias
self.input_resolution = 128
self.block0 = SynthesisBlockNoUp(channels, 128, w_dim=512, resolution=128,
img_channels=3, is_last=False, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.block1 = SynthesisBlock(128, 64, w_dim=512, resolution=256,
img_channels=3, is_last=True, use_fp16=use_fp16, conv_clamp=(256 if use_fp16 else None), **block_kwargs)
self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1]))
def forward(self, rgb, x, ws, **block_kwargs):
ws = ws[:, -1:, :].repeat(1, 3, 1)
if x.shape[-1] < self.input_resolution:
x = torch.nn.functional.interpolate(x, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
rgb = torch.nn.functional.interpolate(rgb, size=(self.input_resolution, self.input_resolution),
mode='bilinear', align_corners=False, antialias=self.sr_antialias)
x, rgb = self.block0(x, rgb, ws, **block_kwargs)
x, rgb = self.block1(x, rgb, ws, **block_kwargs)
return rgb
 | self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, | 0 | 2023-12-09 15:18:53+00:00 | 12k |
blaise-tk/RVC_CLI | rvc/infer/infer.py | [
{
"identifier": "load_audio",
"path": "rvc/lib/utils.py",
"snippet": "def load_audio(file, sampling_rate):\n try:\n file = file.strip(\" \").strip('\"').strip(\"\\n\").strip('\"').strip(\" \")\n out, _ = (\n ffmpeg.input(file, threads=0)\n .output(\"-\", format=\"f... | import os
import sys
import torch
import numpy as np
import soundfile as sf
from vc_infer_pipeline import VC
from rvc.lib.utils import load_audio
from fairseq import checkpoint_utils
from rvc.lib.infer_pack.models import (
SynthesizerTrnMs256NSFsid,
SynthesizerTrnMs256NSFsid_nono,
SynthesizerTrnMs768NSFsid,
SynthesizerTrnMs768NSFsid_nono,
)
from rvc.configs.config import Config | 7,334 |
config = Config()
torch.manual_seed(114514)
hubert_model = None
def load_hubert():
global hubert_model
models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
["hubert_base.pt"],
suffix="",
)
hubert_model = models[0]
hubert_model = hubert_model.to(config.device)
if config.is_half:
hubert_model = hubert_model.half()
else:
hubert_model = hubert_model.float()
hubert_model.eval()
def vc_single(
sid=0,
input_audio_path=None,
f0_up_key=None,
f0_file=None,
f0_method=None,
file_index=None,
index_rate=None,
resample_sr=0,
rms_mix_rate=1,
protect=0.33,
hop_length=None,
output_path=None,
):
global tgt_sr, net_g, vc, hubert_model, version
if input_audio_path is None:
return "Please, load an audio!", None
f0_up_key = int(f0_up_key)
try:
|
 | audio = load_audio(input_audio_path, 16000) | 0 | 2023-12-10 21:09:41+00:00 | 12k |
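In `vc_single` above, `f0_up_key` is an integer pitch offset in semitones applied to the extracted F0 track. Assuming the standard equal-temperament convention (the actual F0 handling happens inside `VC`, not in this excerpt), a shift of n semitones multiplies frequency by 2**(n/12):

```python
def transpose_f0(f0_hz, f0_up_key):
    # Shift a fundamental frequency by f0_up_key semitones
    # (equal temperament: 12 semitones = one octave = a factor of 2).
    return f0_hz * 2.0 ** (f0_up_key / 12.0)


print(transpose_f0(220.0, 12))   # 440.0 -> one octave up
print(transpose_f0(440.0, -12))  # 220.0 -> one octave down
```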
Opt-Mucca/PySCIPOpt-ML | src/pyscipopt_ml/modelling/gradient_boosting/aggregate_tree_model.py | [
{
"identifier": "add_decision_tree_classifier_constr",
"path": "src/pyscipopt_ml/sklearn/decision_tree.py",
"snippet": "def add_decision_tree_classifier_constr(\n scip_model,\n decision_tree_classifier,\n input_vars,\n output_vars=None,\n unique_naming_prefix=\"\",\n epsilon=0.0,\n ... | import numpy as np
from ...sklearn.decision_tree import (
add_decision_tree_classifier_constr,
add_decision_tree_regressor_constr,
)
from ..base_predictor_constraint import AbstractPredictorConstr
from ..classification.argmax_model import argmax_bound_formulation
from ..decision_tree import leaf_formulation
from ..var_utils import create_vars | 9,971 | The output dimension of each decision tree
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
epsilon : float
The epsilon that is used for each decision tree model. See #TODO: Decision tree modelling path
classification : bool
Whether the individual decision trees (i.e. estimators) are classification trees
Returns
-------
estimators : list
A list of :py:class`pyscipopt_ml.modelling.aggregate_tree_model.TreeEstimator`
"""
estimators = []
for i in range(n_estimators):
for j in range(outdim):
unique_prefix = unique_naming_prefix + f"{i}_{j}"
estimators.append(
TreeEstimator(
scip_model,
trees[i][j],
_input,
tree_vars[:, i, j].reshape((-1, 1)),
unique_prefix,
epsilon,
classification,
**kwargs,
)
)
return estimators
def create_sklearn_tree_estimators(
scip_model,
predictor,
_input,
n_samples,
outdim,
unique_naming_prefix,
classification,
gbdt_or_rf="gbdt",
**kwargs,
):
"""
Create individual estimators for each decision tree for decision tree based ensemble predictors from SKLearn.
Parameters
----------
scip_model : PySCIPOpt Model
The SCIP Model where the predictor should be inserted.
predictor : GradientBoostingClassifier | GradientBoostingRegressor | RandomForestClassifier | RandomForestRegressor
The Sklearn predictor that we are modelling
_input : np.ndarray
The input variables into each decision tree (i.e. estimator)
n_samples : int
The number of samples as input
outdim : int
The number of outputs of each estimator
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
classification : bool
Whether the individual decision trees (i.e. estimators) are classification trees
gbdt_or_rf : "gbdt" | "rf"
Whether the predictor is for gradient boosting decision trees or random forests.
Returns
-------
estimators : list
A list of :py:class`pyscipopt_ml.modelling.aggregate_tree_model.TreeEstimator`
tree_vars : np.ndarray
A np.ndarray of created PySCIPopt vars
"""
# Create variables to represent the output of each decision tree (i.e. estimator)
shape = (n_samples, predictor.n_estimators, outdim)
tree_vars = create_vars(
scip_model, shape=shape, vtype="C", lb=None, name_prefix=unique_naming_prefix + "tree_var"
)
# Create each estimator. In the case of GBDT, there are (n_estimators, outdim) many estimators, while for RF
# there are (outdim,) many estimators. In the case of GBDT for classification each individual DT is regression.
estimators = []
if gbdt_or_rf == "gbdt":
for i in range(predictor.n_estimators_):
for j in range(outdim):
unique_prefix = unique_naming_prefix + f"{i}_{j}"
tree = predictor.estimators_[i][j]
estimators.append(
add_decision_tree_regressor_constr(
scip_model,
tree,
_input,
tree_vars[:, i, j].reshape((-1, 1)),
unique_prefix,
**kwargs,
)
)
elif gbdt_or_rf == "rf":
for i in range(predictor.n_estimators):
tree = predictor.estimators_[i]
unique_prefix = unique_naming_prefix + f"{i}"
if classification:
estimators.append(
add_decision_tree_classifier_constr(
scip_model, tree, _input, tree_vars[:, i, :], unique_prefix, **kwargs
)
)
else:
estimators.append(
add_decision_tree_regressor_constr(
scip_model, tree, _input, tree_vars[:, i, :], unique_prefix, **kwargs
)
)
return estimators, tree_vars
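The estimators built above each expose one output variable per tree, and the surrounding module then aggregates them: gradient boosting ("sum") adds a constant shift to the learning-rate-scaled sum of tree outputs, while random forests ("avg") take the plain mean. A plain-number sketch of that arithmetic (the real `aggregate_estimator_outputs` operates on PySCIPOpt variables, not floats):

```python
def aggregate_outputs(tree_outputs, lr=1.0, constant=0.0, aggr="sum"):
    # "sum": gradient boosting -> constant + lr * sum of tree outputs
    # "avg": random forest     -> plain mean of tree outputs
    if aggr == "sum":
        return constant + lr * sum(tree_outputs)
    if aggr == "avg":
        return sum(tree_outputs) / len(tree_outputs)
    raise ValueError(f"unknown aggregation: {aggr}")


print(aggregate_outputs([1.0, 2.0, 3.0], lr=0.5, constant=1.0, aggr="sum"))  # 4.0
print(aggregate_outputs([1.0, 2.0, 3.0], aggr="avg"))                        # 2.0
```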
| """ Utilities for modelling gradient boosting decision trees and random forest constraints """
def aggregated_estimator_formulation(
scip_model,
_input,
output,
tree_vars,
trees,
constant,
lr,
n_estimators,
unique_naming_prefix,
epsilon,
aggr,
classification,
**kwargs,
):
"""
Creates the model that represents the aggregation of estimators into a single output.
This function is used exclusively for the case where the estimators are decision trees, and the larger
predictor is either a gradient boosting decision tree or random forest.
Parameters
----------
scip_model : PySCIPOpt Model
The SCIP Model where the predictor should be inserted.
_input : np.ndarray
The input variables that are passed to each decision tree
output : np.ndarray
The output variables of the predictor
tree_vars : np.ndarray
The PySCIPOpt variables that have been created to represent the output of each decision tree (i.e. estimator)
trees : list
A list of lists containing dictionary information that completely describe each decision tree (i.e. estimator)
constant : np.ndarray
An array of constant shift values that are added to the output values of each decision tree (i.e. estimator)
lr : float or int
The learning rate used while training. For GBDT / RF this scales the output of each tree
n_estimators : int
The number of decision trees (i.e. estimators)
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
epsilon : float
The epsilon that is used for each decision tree model. See
:py:func:`pyscipopt_ml.modelling.decision_tree.leaf_formulation`.
aggr : str, "sum" or "avg"
        The aggregation method used in the formulation. The estimator outputs are either averaged or summed.
classification : bool
Whether the aggregated output of each decision tree (i.e. estimator) should be used for classification.
Returns
-------
estimators : list
        A list of :py:class:`pyscipopt_ml.modelling.aggregate_tree_model.TreeEstimator`
created_vars : list
A list containing all created PySCIPOpt vars
created_cons : list
A list containing all created PySCIPOpt cons
"""
# Get the number of samples and output dimension
n_samples = _input.shape[0]
outdim = output.shape[-1]
# Create the individual tree estimators
estimators = create_tree_estimators(
scip_model,
_input,
tree_vars,
trees,
n_estimators,
outdim,
unique_naming_prefix,
epsilon,
False,
**kwargs,
)
# Aggregate the trees over the output dimension
aggregate_tree_output = aggregate_estimator_outputs(tree_vars, lr, constant, aggr=aggr)
# Formulate the appropriate constraints
created_vars, created_cons = create_aggregation_constraints(
scip_model,
aggregate_tree_output,
output,
n_samples,
outdim,
unique_naming_prefix,
classification,
)
return estimators, created_vars, created_cons
def create_aggregation_constraints(
scip_model,
aggregate_tree_output,
output,
n_samples,
outdim,
unique_naming_prefix,
classification,
):
"""
Creates the variables and constraints that link the output of the predictor itself and the aggregation of each
estimator.
Parameters
----------
scip_model : PySCIPOpt Model
The SCIP Model where the predictor should be inserted.
aggregate_tree_output : np.ndarray
The aggregated output variables of each decision tree
output : np.ndarray
The output variables of the predictor
n_samples : int
The number of samples
outdim : int
The number of outputs of each decision tree (i.e. estimator)
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
classification : bool
Whether the aggregated output of each decision tree (i.e. estimator) should be used for classification.
Returns
-------
created_vars : list
A list containing all created PySCIPOpt vars
created_cons : list
A list containing all created PySCIPOpt cons
"""
# Formulate the appropriate constraints
created_cons = []
created_vars = []
if not classification:
sum_tree_cons = np.zeros((n_samples, outdim), dtype=object)
for i in range(n_samples):
for j in range(outdim):
name = unique_naming_prefix + f"tree_sum_{i}_{j}"
sum_tree_cons[i][j] = scip_model.addCons(
output[i][j] == aggregate_tree_output[i][j], name=name
)
created_cons.append(sum_tree_cons)
else:
new_vars, new_cons = argmax_bound_formulation(
scip_model, aggregate_tree_output, output, unique_naming_prefix
)
for added_var in new_vars:
created_vars.append(added_var)
for added_cons in new_cons:
created_cons.append(added_cons)
return created_vars, created_cons
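A minimal pure-Python sketch (hypothetical, not part of the library) of the relationship these constraints encode: for regression the predictor output is fixed equal to the aggregated score, while for classification the output behaves like a one-hot argmax over the aggregated scores. Plain floats stand in for PySCIPOpt variables and constraints here.

```python
# Hypothetical numeric analogue of create_aggregation_constraints.

def link_output(aggregated_scores, classification):
    if not classification:
        # Regression: output[j] == aggregate_tree_output[j]
        return list(aggregated_scores)
    # Classification: output is the one-hot argmax of the aggregated scores
    best = max(range(len(aggregated_scores)), key=aggregated_scores.__getitem__)
    return [1.0 if j == best else 0.0 for j in range(len(aggregated_scores))]

print(link_output([0.2, 1.4, -0.3], classification=False))  # [0.2, 1.4, -0.3]
print(link_output([0.2, 1.4, -0.3], classification=True))   # [0.0, 1.0, 0.0]
```

In the actual model the argmax case is formulated via `argmax_bound_formulation` with auxiliary binary variables rather than computed directly.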
def aggregate_estimator_outputs(_output, lr, constant, aggr="sum"):
"""
Aggregate the output of individual estimators into a single expression for each output dimension.
This function is needed for models with multiple estimators, e.g. gradient boosting decision trees and
random forests.
The output after aggregation can then be used as input for argmax classification.
Parameters
----------
_output : np.ndarray
The output variables from each individual estimator (e.g. decision tree)
lr : float
The learning rate used for training and which is used to scale the output
constant : np.ndarray
The constant term that is added to the aggregation
aggr : "sum" or "avg"
        Aggregation type ("sum" or "avg"): "sum" for gradient boosting decision trees, "avg" for random forests.
Returns
-------
aggregated_output : np.ndarray
        The aggregated output per dimension over all estimators, typically a sum over the estimator dimension.
"""
assert aggr in [
"sum",
"avg",
    ], f"Aggregation type {aggr} is neither sum nor avg. No model exists."
assert (
_output.ndim == 3
), f"Aggregating estimator outputs of invalid dimension. {_output.ndim} != 3"
n_samples = _output.shape[0]
outdim = _output.shape[-1]
n_estimators = _output.shape[1]
aggregated_output = np.zeros((n_samples, outdim), dtype=object)
for i in range(n_samples):
for j in range(outdim):
sum_expr = constant[j]
for k in range(n_estimators):
scale = 1 if aggr == "sum" else n_estimators
sum_expr += lr * _output[i][k][j] / scale
aggregated_output[i][j] = sum_expr
return aggregated_output
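As a numeric illustration (a hypothetical sketch, with plain Python floats in place of PySCIPOpt linear expressions), the aggregation above reduces to a learning-rate-scaled sum for GBDT and a plain average for RF, each shifted by the constant term:

```python
# Hypothetical numeric analogue of aggregate_estimator_outputs for one sample.

def aggregate_outputs(tree_outputs, lr, constant, aggr="sum"):
    """tree_outputs: list over estimators of per-dimension output values."""
    assert aggr in ("sum", "avg")
    n_estimators = len(tree_outputs)
    outdim = len(tree_outputs[0])
    scale = 1 if aggr == "sum" else n_estimators
    aggregated = []
    for j in range(outdim):
        expr = constant[j]
        for k in range(n_estimators):
            expr += lr * tree_outputs[k][j] / scale
        aggregated.append(expr)
    return aggregated

# GBDT-style sum: 0.5 + 0.1 * (1.0 + 3.0) ≈ 0.9
print(aggregate_outputs([[1.0], [3.0]], lr=0.1, constant=[0.5]))
# RF-style average: 0.0 + (1.0 + 3.0) / 2 = 2.0
print(aggregate_outputs([[1.0], [3.0]], lr=1.0, constant=[0.0], aggr="avg"))
```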
def create_tree_estimators(
scip_model,
_input,
tree_vars,
trees,
n_estimators,
outdim,
unique_naming_prefix,
epsilon,
classification,
**kwargs,
):
"""
Creates individual tree estimator models for each decision tree.
Parameters
----------
scip_model : PySCIPOpt Model
The SCIP Model where the predictor should be inserted.
_input : np.ndarray
The input variables that are passed to each decision tree
tree_vars : np.ndarray
The PySCIPOpt variables that have been created to represent the output of each decision tree (i.e. estimator)
trees : list
A list of lists containing dictionary information that completely describe each decision tree (i.e. estimator)
n_estimators : int
The number of decision trees (i.e. estimators)
outdim : int
The output dimension of each decision tree
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
epsilon : float
The epsilon that is used for each decision tree model. See #TODO: Decision tree modelling path
classification : bool
Whether the individual decision trees (i.e. estimators) are classification trees
Returns
-------
estimators : list
        A list of :py:class:`pyscipopt_ml.modelling.aggregate_tree_model.TreeEstimator`
"""
estimators = []
for i in range(n_estimators):
for j in range(outdim):
unique_prefix = unique_naming_prefix + f"{i}_{j}"
estimators.append(
TreeEstimator(
scip_model,
trees[i][j],
_input,
tree_vars[:, i, j].reshape((-1, 1)),
unique_prefix,
epsilon,
classification,
**kwargs,
)
)
return estimators
def create_sklearn_tree_estimators(
scip_model,
predictor,
_input,
n_samples,
outdim,
unique_naming_prefix,
classification,
gbdt_or_rf="gbdt",
**kwargs,
):
"""
Create individual estimators for each decision tree for decision tree based ensemble predictors from SKLearn.
Parameters
----------
scip_model : PySCIPOpt Model
The SCIP Model where the predictor should be inserted.
predictor : GradientBoostingClassifier | GradientBoostingRegressor | RandomForestClassifier | RandomForestRegressor
The Sklearn predictor that we are modelling
_input : np.ndarray
The input variables into each decision tree (i.e. estimator)
n_samples : int
The number of samples as input
outdim : int
The number of outputs of each estimator
unique_naming_prefix : str
The unique naming prefix string that goes before all variables and constraints that are constructed by SCIP
classification : bool
Whether the individual decision trees (i.e. estimators) are classification trees
gbdt_or_rf : "gbdt" | "rf"
Whether the predictor is for gradient boosting decision trees or random forests.
Returns
-------
estimators : list
        A list of :py:class:`pyscipopt_ml.modelling.aggregate_tree_model.TreeEstimator`
    tree_vars : np.ndarray
        An np.ndarray of created PySCIPOpt vars
"""
# Create variables to represent the output of each decision tree (i.e. estimator)
shape = (n_samples, predictor.n_estimators, outdim)
tree_vars = create_vars(
scip_model, shape=shape, vtype="C", lb=None, name_prefix=unique_naming_prefix + "tree_var"
)
    # Create each estimator. In the case of GBDT, there are (n_estimators, outdim) many estimators, while for RF
    # there are (n_estimators,) many estimators. In the case of GBDT for classification each individual DT is a regression tree.
estimators = []
if gbdt_or_rf == "gbdt":
for i in range(predictor.n_estimators_):
for j in range(outdim):
unique_prefix = unique_naming_prefix + f"{i}_{j}"
tree = predictor.estimators_[i][j]
estimators.append(
add_decision_tree_regressor_constr(
scip_model,
tree,
_input,
tree_vars[:, i, j].reshape((-1, 1)),
unique_prefix,
**kwargs,
)
)
elif gbdt_or_rf == "rf":
for i in range(predictor.n_estimators):
tree = predictor.estimators_[i]
unique_prefix = unique_naming_prefix + f"{i}"
if classification:
estimators.append(
add_decision_tree_classifier_constr(
scip_model, tree, _input, tree_vars[:, i, :], unique_prefix, **kwargs
)
)
else:
estimators.append(
add_decision_tree_regressor_constr(
scip_model, tree, _input, tree_vars[:, i, :], unique_prefix, **kwargs
)
)
return estimators, tree_vars
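The estimator layout described in the comment above can be sketched in isolation (a hypothetical helper, not part of the library; the real code also prepends `unique_naming_prefix`): GBDT enumerates one regression tree per (estimator, output-dimension) pair, while RF keeps one multi-output tree per estimator.

```python
# Hypothetical sketch of the naming-prefix enumeration used when building
# per-tree sub-models for GBDT vs. RF predictors.

def estimator_prefixes(n_estimators, outdim, gbdt_or_rf):
    if gbdt_or_rf == "gbdt":
        # One estimator per (tree index, output dimension) pair
        return [f"{i}_{j}" for i in range(n_estimators) for j in range(outdim)]
    # RF: one multi-output estimator per tree index
    return [f"{i}" for i in range(n_estimators)]

print(estimator_prefixes(2, 3, "gbdt"))  # ['0_0', '0_1', '0_2', '1_0', '1_1', '1_2']
print(estimator_prefixes(2, 3, "rf"))    # ['0', '1']
```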
| class TreeEstimator(AbstractPredictorConstr): | 2 | 2023-12-10 20:28:22+00:00 | 12k |
camenduru/MotionDirector-hf | models/unet_3d_condition.py | [
{
"identifier": "CrossAttnDownBlock3D",
"path": "models/unet_3d_blocks.py",
"snippet": "class CrossAttnDownBlock3D(nn.Module):\n def __init__(\n self,\n in_channels: int,\n out_channels: int,\n temb_channels: int,\n dropout: float = 0.0,\n num_layers: int = 1... | from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.utils import BaseOutput, logging
from diffusers.models.embeddings import TimestepEmbedding, Timesteps
from diffusers.models.modeling_utils import ModelMixin
from diffusers.models.transformer_temporal import TransformerTemporalModel
from .unet_3d_blocks import (
CrossAttnDownBlock3D,
CrossAttnUpBlock3D,
DownBlock3D,
UNetMidBlock3DCrossAttn,
UpBlock3D,
get_down_block,
get_up_block,
transformer_g_c
)
import torch
import torch.nn as nn
import torch.utils.checkpoint | 7,573 | """
sample: torch.FloatTensor
class UNet3DConditionModel(ModelMixin, ConfigMixin):
r"""
UNet3DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep
and returns sample shaped output.
This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
implements for all the models (such as downloading or saving, etc.)
Parameters:
sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`):
Height and width of input/output sample.
in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample.
out_channels (`int`, *optional*, defaults to 4): The number of channels in the output.
down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`):
The tuple of downsample blocks to use.
up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)`):
The tuple of upsample blocks to use.
block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`):
The tuple of output channels for each block.
layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution.
mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block.
act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization.
If `None`, it will skip the normalization and activation layers in post-processing
norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization.
cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features.
attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads.
"""
_supports_gradient_checkpointing = True
@register_to_config
def __init__(
self,
sample_size: Optional[int] = None,
in_channels: int = 4,
out_channels: int = 4,
down_block_types: Tuple[str] = (
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types: Tuple[str] = ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D"),
block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
layers_per_block: int = 2,
downsample_padding: int = 1,
mid_block_scale_factor: float = 1,
act_fn: str = "silu",
norm_num_groups: Optional[int] = 32,
norm_eps: float = 1e-5,
cross_attention_dim: int = 1024,
attention_head_dim: Union[int, Tuple[int]] = 64,
):
super().__init__()
self.sample_size = sample_size
self.gradient_checkpointing = False
# Check inputs
if len(down_block_types) != len(up_block_types):
raise ValueError(
f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}."
)
if len(block_out_channels) != len(down_block_types):
raise ValueError(
f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}."
)
if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types):
raise ValueError(
f"Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`: {attention_head_dim}. `down_block_types`: {down_block_types}."
)
# input
conv_in_kernel = 3
conv_out_kernel = 3
conv_in_padding = (conv_in_kernel - 1) // 2
self.conv_in = nn.Conv2d(
in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding
)
# time
time_embed_dim = block_out_channels[0] * 4
self.time_proj = Timesteps(block_out_channels[0], True, 0)
timestep_input_dim = block_out_channels[0]
self.time_embedding = TimestepEmbedding(
timestep_input_dim,
time_embed_dim,
act_fn=act_fn,
)
self.transformer_in = TransformerTemporalModel(
num_attention_heads=8,
attention_head_dim=attention_head_dim,
in_channels=block_out_channels[0],
num_layers=1,
)
# class embedding
self.down_blocks = nn.ModuleList([])
self.up_blocks = nn.ModuleList([])
if isinstance(attention_head_dim, int):
attention_head_dim = (attention_head_dim,) * len(down_block_types)
# down
output_channel = block_out_channels[0]
for i, down_block_type in enumerate(down_block_types):
input_channel = output_channel
output_channel = block_out_channels[i]
is_final_block = i == len(block_out_channels) - 1
| # Copyright 2023 Alibaba DAMO-VILAB and The HuggingFace Team. All rights reserved.
# Copyright 2023 The ModelScope Team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
@dataclass
class UNet3DConditionOutput(BaseOutput):
"""
Args:
sample (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, height, width)`):
Hidden states conditioned on `encoder_hidden_states` input. Output of last layer of model.
"""
sample: torch.FloatTensor
class UNet3DConditionModel(ModelMixin, ConfigMixin):
r"""
UNet3DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep
and returns sample shaped output.
This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
implements for all the models (such as downloading or saving, etc.)
Parameters:
sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`):
Height and width of input/output sample.
in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample.
out_channels (`int`, *optional*, defaults to 4): The number of channels in the output.
down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`):
The tuple of downsample blocks to use.
up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)`):
The tuple of upsample blocks to use.
block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`):
The tuple of output channels for each block.
layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution.
mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block.
act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization.
If `None`, it will skip the normalization and activation layers in post-processing
norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization.
cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features.
attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads.
"""
_supports_gradient_checkpointing = True
@register_to_config
def __init__(
self,
sample_size: Optional[int] = None,
in_channels: int = 4,
out_channels: int = 4,
down_block_types: Tuple[str] = (
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types: Tuple[str] = ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D"),
block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
layers_per_block: int = 2,
downsample_padding: int = 1,
mid_block_scale_factor: float = 1,
act_fn: str = "silu",
norm_num_groups: Optional[int] = 32,
norm_eps: float = 1e-5,
cross_attention_dim: int = 1024,
attention_head_dim: Union[int, Tuple[int]] = 64,
):
super().__init__()
self.sample_size = sample_size
self.gradient_checkpointing = False
# Check inputs
if len(down_block_types) != len(up_block_types):
raise ValueError(
f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}."
)
if len(block_out_channels) != len(down_block_types):
raise ValueError(
f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}."
)
if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types):
raise ValueError(
f"Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`: {attention_head_dim}. `down_block_types`: {down_block_types}."
)
# input
conv_in_kernel = 3
conv_out_kernel = 3
conv_in_padding = (conv_in_kernel - 1) // 2
self.conv_in = nn.Conv2d(
in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding
)
# time
time_embed_dim = block_out_channels[0] * 4
self.time_proj = Timesteps(block_out_channels[0], True, 0)
timestep_input_dim = block_out_channels[0]
self.time_embedding = TimestepEmbedding(
timestep_input_dim,
time_embed_dim,
act_fn=act_fn,
)
self.transformer_in = TransformerTemporalModel(
num_attention_heads=8,
attention_head_dim=attention_head_dim,
in_channels=block_out_channels[0],
num_layers=1,
)
# class embedding
self.down_blocks = nn.ModuleList([])
self.up_blocks = nn.ModuleList([])
if isinstance(attention_head_dim, int):
attention_head_dim = (attention_head_dim,) * len(down_block_types)
# down
output_channel = block_out_channels[0]
for i, down_block_type in enumerate(down_block_types):
input_channel = output_channel
output_channel = block_out_channels[i]
is_final_block = i == len(block_out_channels) - 1
| down_block = get_down_block( | 5 | 2023-12-11 04:51:39+00:00 | 12k |
Yingyue-L/Mamba-LLaVA | llava/model/language_model/mpt/modeling_mpt.py | [
{
"identifier": "attn_bias_shape",
"path": "llava/model/language_model/mpt/attention.py",
"snippet": "def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id):\n if attn_impl == 'flash':\n return None\n elif attn_impl in ['torch', 'triton']:\n if al... | import math
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Optional, Tuple, Union
from transformers import PreTrainedModel, PreTrainedTokenizer, PreTrainedTokenizerFast
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
from .attention import attn_bias_shape, build_attn_bias
from .blocks import MPTBlock
from .custom_embedding import SharedEmbedding
from .norm import NORM_CLASS_REGISTRY
from .configuration_mpt import MPTConfig
from .adapt_tokenizer import AutoTokenizerForMOD, adapt_tokenizer_for_denoising
from .hf_prefixlm_converter import add_bidirectional_mask_if_missing, convert_hf_causal_lm_to_prefix_lm
from .meta_init_context import init_empty_weights
from .param_init_fns import MODEL_INIT_REGISTRY, generic_param_init_fn_
from .flash_attn_triton import flash_attn_func | 9,446 | assert isinstance(attn_bias, torch.Tensor)
attn_bias = self._apply_sequence_id(attn_bias, sequence_id)
if attention_mask is not None:
s_k = attention_mask.shape[-1]
if attn_bias is None:
attn_bias = torch.zeros((1, 1, 1, s_k), device=device, dtype=dtype)
else:
_s_k = max(0, attn_bias.size(-1) - s_k)
attn_bias = attn_bias[:, :, :, _s_k:]
if prefix_mask is not None and attention_mask.shape != prefix_mask.shape:
raise ValueError(f'attention_mask shape={attention_mask.shape} ' + f'and prefix_mask shape={prefix_mask.shape} are not equal.')
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(~attention_mask.view(-1, 1, 1, s_k), min_val)
return (attn_bias, None)
def _apply_prefix_mask(self, attn_bias: torch.Tensor, prefix_mask: torch.Tensor):
(s_k, s_q) = attn_bias.shape[-2:]
if s_k != self.config.max_seq_len or s_q != self.config.max_seq_len:
            raise ValueError('attn_bias does not match the expected shape. ' + f'The last two dimensions should both be {self.config.max_seq_len} ' + f'but are {s_k} and {s_q}.')
seq_len = prefix_mask.shape[-1]
if seq_len > self.config.max_seq_len:
raise ValueError(f'prefix_mask sequence length cannot exceed max_seq_len={self.config.max_seq_len}')
attn_bias = attn_bias[..., :seq_len, :seq_len]
causal = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool, device=prefix_mask.device)).view(1, 1, seq_len, seq_len)
prefix = prefix_mask.view(-1, 1, 1, seq_len)
cannot_attend = ~torch.logical_or(causal, prefix.bool())
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(cannot_attend, min_val)
return attn_bias
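The attention pattern this bias produces can be sketched in pure Python (a hypothetical boolean analogue, with `True` standing for "no `-inf` bias"): query position q may attend to key position k when k <= q (causal) or when k lies inside the bidirectional prefix.

```python
# Hypothetical boolean analogue of the prefix-LM mask built by
# _apply_prefix_mask (the real code fills disallowed entries with min_val).

def prefix_lm_allowed(seq_len, prefix_mask):
    allowed = [[False] * seq_len for _ in range(seq_len)]
    for q in range(seq_len):
        for k in range(seq_len):
            allowed[q][k] = (k <= q) or prefix_mask[k]
    return allowed

# 4 tokens, first 2 form the bidirectional prefix
for row in prefix_lm_allowed(4, [True, True, False, False]):
    print(row)
# [True, True, False, False]
# [True, True, False, False]
# [True, True, True, False]
# [True, True, True, True]
```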
def _apply_sequence_id(self, attn_bias: torch.Tensor, sequence_id: torch.LongTensor):
seq_len = sequence_id.shape[-1]
if seq_len > self.config.max_seq_len:
raise ValueError(f'sequence_id sequence length cannot exceed max_seq_len={self.config.max_seq_len}')
attn_bias = attn_bias[..., :seq_len, :seq_len]
cannot_attend = torch.logical_not(torch.eq(sequence_id.view(-1, seq_len, 1), sequence_id.view(-1, 1, seq_len))).unsqueeze(1)
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(cannot_attend, min_val)
return attn_bias
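A pure-Python rendering of the same idea (hypothetical, for illustration): in a packed batch row, token q may only attend to token k when both carry the same sequence id; cross-sequence pairs receive the `-inf` bias.

```python
# Hypothetical boolean analogue of _apply_sequence_id: True means the
# pair is attendable, False would be masked with min_val in attn_bias.

def same_sequence_allowed(sequence_id):
    n = len(sequence_id)
    return [[sequence_id[q] == sequence_id[k] for k in range(n)] for q in range(n)]

print(same_sequence_allowed([0, 0, 1, 1]))
# [[True, True, False, False], [True, True, False, False],
#  [False, False, True, True], [False, False, True, True]]
```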
def forward(self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]]=None, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None, return_dict: Optional[bool]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, use_cache: Optional[bool]=None, inputs_embeds: Optional[torch.Tensor]=None):
return_dict = return_dict if return_dict is not None else self.config.return_dict
use_cache = use_cache if use_cache is not None else self.config.use_cache
if attention_mask is not None:
attention_mask = attention_mask.bool()
if prefix_mask is not None:
prefix_mask = prefix_mask.bool()
if not return_dict:
raise NotImplementedError('return_dict False is not implemented yet for MPT')
if output_attentions:
if self.attn_impl != 'torch':
raise NotImplementedError('output_attentions is not implemented for MPT when using attn_impl `flash` or `triton`.')
if attention_mask is not None and attention_mask[:, 0].sum() != attention_mask.shape[0] and self.training:
raise NotImplementedError('MPT does not support training with left padding.')
if self.prefix_lm and prefix_mask is None:
raise ValueError('prefix_mask is a required argument when MPT is configured with prefix_lm=True.')
if self.training:
if self.attn_uses_sequence_id and sequence_id is None:
raise ValueError('sequence_id is a required argument when MPT is configured with attn_uses_sequence_id=True ' + 'and the model is in train mode.')
elif self.attn_uses_sequence_id is False and sequence_id is not None:
warnings.warn('MPT received non-None input for `sequence_id` but is configured with attn_uses_sequence_id=False. ' + 'This input will be ignored. If you want the model to use `sequence_id`, set attn_uses_sequence_id to True.')
if input_ids is not None:
S = input_ids.size(1)
assert S <= self.config.max_seq_len, f'Cannot forward input with seq_len={S}, this model only supports seq_len<={self.config.max_seq_len}'
tok_emb = self.wte(input_ids)
else:
assert inputs_embeds is not None
assert self.alibi, 'inputs_embeds is not implemented for MPT unless for alibi.'
S = inputs_embeds.size(1)
tok_emb = inputs_embeds
if self.alibi:
x = tok_emb
else:
past_position = 0
if past_key_values is not None:
if len(past_key_values) != self.config.n_layers:
raise ValueError(f'past_key_values must provide a past_key_value for each attention ' + f'layer in the network (len(past_key_values)={len(past_key_values)!r}; self.config.n_layers={self.config.n_layers!r}).')
past_position = past_key_values[0][0].size(1)
if self.attn_impl == 'torch':
past_position = past_key_values[0][0].size(3)
if S + past_position > self.config.max_seq_len:
                raise ValueError(f'Cannot forward input with past sequence length {past_position} and current sequence length {S}, this model only supports total sequence length <= {self.config.max_seq_len}.')
pos = torch.arange(past_position, S + past_position, dtype=torch.long, device=input_ids.device).unsqueeze(0)
if attention_mask is not None:
pos = torch.clamp(pos - torch.cumsum((~attention_mask).to(torch.int32), dim=1)[:, past_position:], min=0)
pos_emb = self.wpe(pos)
x = tok_emb + pos_emb
if self.embedding_fraction == 1:
x = self.emb_drop(x)
else:
x_shrunk = x * self.embedding_fraction + x.detach() * (1 - self.embedding_fraction)
assert isinstance(self.emb_drop, nn.Module)
x = self.emb_drop(x_shrunk)
(attn_bias, attention_mask) = self._attn_bias(device=x.device, dtype=torch.float32, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id)
if use_cache and past_key_values is None:
past_key_values = [() for _ in range(self.config.n_layers)]
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
for (b_idx, block) in enumerate(self.blocks):
if output_hidden_states:
assert all_hidden_states is not None
all_hidden_states = all_hidden_states + (x,)
past_key_value = past_key_values[b_idx] if past_key_values is not None else None
if self.gradient_checkpointing and self.training:
(x, attn_weights, past_key_value) = torch.utils.checkpoint.checkpoint(block, x, past_key_value, attn_bias, attention_mask, self.is_causal)
else:
(x, attn_weights, past_key_value) = block(x, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=self.is_causal)
if past_key_values is not None:
past_key_values[b_idx] = past_key_value
if output_attentions:
assert all_self_attns is not None
all_self_attns = all_self_attns + (attn_weights,)
x = self.norm_f(x)
if output_hidden_states:
assert all_hidden_states is not None
all_hidden_states = all_hidden_states + (x,)
return BaseModelOutputWithPast(last_hidden_state=x, past_key_values=past_key_values, hidden_states=all_hidden_states, attentions=all_self_attns)
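The position computation inside forward() (`pos = clamp(arange - cumsum(~attention_mask))`) can be sketched in pure Python. This is a hypothetical, simplified version that ignores the `past_key_values` offset: positions count only non-padded tokens, so left padding does not shift the learned position embeddings.

```python
# Hypothetical pure-Python analogue of the padding-aware position ids
# computed in MPTModel.forward (no past_key_values offset).

def positions_with_left_padding(attention_mask):
    pos, pads = [], 0
    for t, attended in enumerate(attention_mask):
        if not attended:
            pads += 1  # cumulative count of padded slots, like cumsum(~mask)
        pos.append(max(t - pads, 0))  # clamp(min=0)
    return pos

print(positions_with_left_padding([False, False, True, True, True]))  # [0, 0, 0, 1, 2]
```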
def param_init_fn(self, module):
init_fn_name = self.config.init_config['name']
| """A simple, flexible implementation of a GPT model.
Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py
"""
try:
    from .flash_attn_triton import flash_attn_func
except:
    pass
Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
class MPTPreTrainedModel(PreTrainedModel):
config_class = MPTConfig
base_model_prefix = 'model'
_no_split_modules = ['MPTBlock']
class MPTModel(MPTPreTrainedModel):
def __init__(self, config: MPTConfig):
config._validate_config()
super().__init__(config)
self.attn_impl = config.attn_config['attn_impl']
self.prefix_lm = config.attn_config['prefix_lm']
self.attn_uses_sequence_id = config.attn_config['attn_uses_sequence_id']
self.alibi = config.attn_config['alibi']
self.alibi_bias_max = config.attn_config['alibi_bias_max']
if config.init_device == 'mixed':
if dist.get_local_rank() == 0:
config.init_device = 'cpu'
else:
config.init_device = 'meta'
if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys():
norm_options = ' | '.join(NORM_CLASS_REGISTRY.keys())
raise NotImplementedError(f'Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options}).')
norm_class = NORM_CLASS_REGISTRY[config.norm_type.lower()]
self.embedding_fraction = config.embedding_fraction
self.wte = SharedEmbedding(config.vocab_size, config.d_model, device=config.init_device)
if not self.alibi:
self.wpe = torch.nn.Embedding(config.max_seq_len, config.d_model, device=config.init_device)
self.emb_drop = nn.Dropout(config.emb_pdrop)
self.blocks = nn.ModuleList([MPTBlock(device=config.init_device, **config.to_dict()) for _ in range(config.n_layers)])
self.norm_f = norm_class(config.d_model, device=config.init_device)
if config.init_device != 'meta':
print(f'You are using config.init_device={config.init_device!r}, but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.')
self.apply(self.param_init_fn)
self.is_causal = not self.prefix_lm
self._attn_bias_initialized = False
self.attn_bias = None
self.attn_bias_shape = attn_bias_shape(self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id)
if config.no_bias:
for module in self.modules():
if hasattr(module, 'bias') and isinstance(module.bias, nn.Parameter):
if config.verbose:
warnings.warn(f'Removing bias ({module.bias}) from {module}.')
module.register_parameter('bias', None)
if config.verbose and config.verbose > 2:
print(self)
if 'verbose' not in self.config.init_config:
self.config.init_config['verbose'] = self.config.verbose
if self.config.init_config['verbose'] > 1:
init_fn_name = self.config.init_config['name']
warnings.warn(f'Using {init_fn_name} initialization.')
self.gradient_checkpointing = False
def get_input_embeddings(self):
return self.wte
def set_input_embeddings(self, value):
self.wte = value
@torch.no_grad()
def _attn_bias(self, device, dtype, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None):
if not self._attn_bias_initialized:
if self.attn_bias_shape:
self.attn_bias = torch.zeros(self.attn_bias_shape, device=device, dtype=dtype)
self.attn_bias = build_attn_bias(self.attn_impl, self.attn_bias, self.config.n_heads, self.config.max_seq_len, causal=self.is_causal, alibi=self.alibi, alibi_bias_max=self.alibi_bias_max)
self._attn_bias_initialized = True
if self.attn_impl == 'flash':
return (self.attn_bias, attention_mask)
if self.attn_bias is not None:
self.attn_bias = self.attn_bias.to(dtype=dtype, device=device)
attn_bias = self.attn_bias
if self.prefix_lm:
assert isinstance(attn_bias, torch.Tensor)
assert isinstance(prefix_mask, torch.Tensor)
attn_bias = self._apply_prefix_mask(attn_bias, prefix_mask)
if self.attn_uses_sequence_id and sequence_id is not None:
assert isinstance(attn_bias, torch.Tensor)
attn_bias = self._apply_sequence_id(attn_bias, sequence_id)
if attention_mask is not None:
s_k = attention_mask.shape[-1]
if attn_bias is None:
attn_bias = torch.zeros((1, 1, 1, s_k), device=device, dtype=dtype)
else:
_s_k = max(0, attn_bias.size(-1) - s_k)
attn_bias = attn_bias[:, :, :, _s_k:]
if prefix_mask is not None and attention_mask.shape != prefix_mask.shape:
raise ValueError(f'attention_mask shape={attention_mask.shape} ' + f'and prefix_mask shape={prefix_mask.shape} are not equal.')
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(~attention_mask.view(-1, 1, 1, s_k), min_val)
return (attn_bias, None)
def _apply_prefix_mask(self, attn_bias: torch.Tensor, prefix_mask: torch.Tensor):
(s_q, s_k) = attn_bias.shape[-2:]
if s_q != self.config.max_seq_len or s_k != self.config.max_seq_len:
raise ValueError('attn_bias does not match the expected shape. ' + f'The last two dimensions should both be {self.config.max_seq_len} ' + f'but are {s_q} and {s_k}.')
seq_len = prefix_mask.shape[-1]
if seq_len > self.config.max_seq_len:
raise ValueError(f'prefix_mask sequence length cannot exceed max_seq_len={self.config.max_seq_len}')
attn_bias = attn_bias[..., :seq_len, :seq_len]
causal = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool, device=prefix_mask.device)).view(1, 1, seq_len, seq_len)
prefix = prefix_mask.view(-1, 1, 1, seq_len)
cannot_attend = ~torch.logical_or(causal, prefix.bool())
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(cannot_attend, min_val)
return attn_bias
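The prefix-LM rule implemented above lets every position attend to the whole prefix, while positions after the prefix stay causal (`~logical_or(causal, prefix)`). A torch-free sketch of the same attendability rule (hypothetical helper, plain lists instead of tensors):

```python
def prefix_lm_cannot_attend(prefix_len, seq_len):
    # mask[q][k] is True where query q must NOT attend to key k:
    # attention is allowed when k lies inside the prefix OR k <= q (causal)
    return [
        [not (k < prefix_len or k <= q) for k in range(seq_len)]
        for q in range(seq_len)
    ]

mask = prefix_lm_cannot_attend(2, 4)
```

Blocked positions are then filled with `torch.finfo(dtype).min`, just like `masked_fill(cannot_attend, min_val)` above.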
def _apply_sequence_id(self, attn_bias: torch.Tensor, sequence_id: torch.LongTensor):
seq_len = sequence_id.shape[-1]
if seq_len > self.config.max_seq_len:
raise ValueError(f'sequence_id sequence length cannot exceed max_seq_len={self.config.max_seq_len}')
attn_bias = attn_bias[..., :seq_len, :seq_len]
cannot_attend = torch.logical_not(torch.eq(sequence_id.view(-1, seq_len, 1), sequence_id.view(-1, 1, seq_len))).unsqueeze(1)
min_val = torch.finfo(attn_bias.dtype).min
attn_bias = attn_bias.masked_fill(cannot_attend, min_val)
return attn_bias
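`_apply_sequence_id` above prevents attention across packed sequences: a query may only attend to keys carrying the same sequence id (causality is already baked into the bias). The `logical_not(eq(...))` comparison reduces to this torch-free sketch (hypothetical helper, lists instead of tensors):

```python
def cross_sequence_blocked(sequence_id):
    # mask[q][k] is True where query q must NOT attend to key k,
    # i.e. wherever the two positions belong to different packed sequences
    n = len(sequence_id)
    return [
        [sequence_id[q] != sequence_id[k] for k in range(n)]
        for q in range(n)
    ]

# tokens 0-1 form document 0, tokens 2-4 form document 1
mask = cross_sequence_blocked([0, 0, 1, 1, 1])
```

Blocked entries are then filled with `torch.finfo(dtype).min` before the softmax, as `masked_fill` does above.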
def forward(self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]]=None, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None, return_dict: Optional[bool]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, use_cache: Optional[bool]=None, inputs_embeds: Optional[torch.Tensor]=None):
return_dict = return_dict if return_dict is not None else self.config.return_dict
use_cache = use_cache if use_cache is not None else self.config.use_cache
if attention_mask is not None:
attention_mask = attention_mask.bool()
if prefix_mask is not None:
prefix_mask = prefix_mask.bool()
if not return_dict:
raise NotImplementedError('return_dict False is not implemented yet for MPT')
if output_attentions:
if self.attn_impl != 'torch':
raise NotImplementedError('output_attentions is not implemented for MPT when using attn_impl `flash` or `triton`.')
if attention_mask is not None and attention_mask[:, 0].sum() != attention_mask.shape[0] and self.training:
raise NotImplementedError('MPT does not support training with left padding.')
if self.prefix_lm and prefix_mask is None:
raise ValueError('prefix_mask is a required argument when MPT is configured with prefix_lm=True.')
if self.training:
if self.attn_uses_sequence_id and sequence_id is None:
raise ValueError('sequence_id is a required argument when MPT is configured with attn_uses_sequence_id=True ' + 'and the model is in train mode.')
elif self.attn_uses_sequence_id is False and sequence_id is not None:
warnings.warn('MPT received non-None input for `sequence_id` but is configured with attn_uses_sequence_id=False. ' + 'This input will be ignored. If you want the model to use `sequence_id`, set attn_uses_sequence_id to True.')
if input_ids is not None:
S = input_ids.size(1)
assert S <= self.config.max_seq_len, f'Cannot forward input with seq_len={S}, this model only supports seq_len<={self.config.max_seq_len}'
tok_emb = self.wte(input_ids)
else:
assert inputs_embeds is not None
assert self.alibi, 'inputs_embeds is only implemented for MPT when alibi is used.'
S = inputs_embeds.size(1)
tok_emb = inputs_embeds
if self.alibi:
x = tok_emb
else:
past_position = 0
if past_key_values is not None:
if len(past_key_values) != self.config.n_layers:
raise ValueError(f'past_key_values must provide a past_key_value for each attention ' + f'layer in the network (len(past_key_values)={len(past_key_values)!r}; self.config.n_layers={self.config.n_layers!r}).')
past_position = past_key_values[0][0].size(1)
if self.attn_impl == 'torch':
past_position = past_key_values[0][0].size(3)
if S + past_position > self.config.max_seq_len:
raise ValueError(f'Cannot forward input with past sequence length {past_position} and current sequence length {S}, this model only supports total sequence length <= {self.config.max_seq_len}.')
pos = torch.arange(past_position, S + past_position, dtype=torch.long, device=input_ids.device).unsqueeze(0)
if attention_mask is not None:
pos = torch.clamp(pos - torch.cumsum((~attention_mask).to(torch.int32), dim=1)[:, past_position:], min=0)
pos_emb = self.wpe(pos)
x = tok_emb + pos_emb
if self.embedding_fraction == 1:
x = self.emb_drop(x)
else:
# keep the forward value of x unchanged while letting only embedding_fraction of the gradient flow back into the embedding weights
x_shrunk = x * self.embedding_fraction + x.detach() * (1 - self.embedding_fraction)
assert isinstance(self.emb_drop, nn.Module)
x = self.emb_drop(x_shrunk)
(attn_bias, attention_mask) = self._attn_bias(device=x.device, dtype=torch.float32, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id)
if use_cache and past_key_values is None:
past_key_values = [() for _ in range(self.config.n_layers)]
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
for (b_idx, block) in enumerate(self.blocks):
if output_hidden_states:
assert all_hidden_states is not None
all_hidden_states = all_hidden_states + (x,)
past_key_value = past_key_values[b_idx] if past_key_values is not None else None
if self.gradient_checkpointing and self.training:
(x, attn_weights, past_key_value) = torch.utils.checkpoint.checkpoint(block, x, past_key_value, attn_bias, attention_mask, self.is_causal)
else:
(x, attn_weights, past_key_value) = block(x, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=self.is_causal)
if past_key_values is not None:
past_key_values[b_idx] = past_key_value
if output_attentions:
assert all_self_attns is not None
all_self_attns = all_self_attns + (attn_weights,)
x = self.norm_f(x)
if output_hidden_states:
assert all_hidden_states is not None
all_hidden_states = all_hidden_states + (x,)
return BaseModelOutputWithPast(last_hidden_state=x, past_key_values=past_key_values, hidden_states=all_hidden_states, attentions=all_self_attns)
def param_init_fn(self, module):
init_fn_name = self.config.init_config['name'] | MODEL_INIT_REGISTRY[init_fn_name](module=module, n_layers=self.config.n_layers, d_model=self.config.d_model, **self.config.init_config) | 11 | 2023-12-09 09:39:13+00:00 | 12k |
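Before adding learned position embeddings, `forward` above shifts the position ids so left-padding does not consume positions: `pos - cumsum(~attention_mask)`, clamped at zero. The same computation in plain Python (hypothetical helper name; single sequence, no past-key offset):

```python
def position_ids(attention_mask):
    # attention_mask: 1 marks a real token, 0 marks left padding;
    # padded slots are clamped to position 0 and real tokens count from 0
    pos, seen_pad = [], 0
    for i, bit in enumerate(attention_mask):
        if bit == 0:
            seen_pad += 1
        pos.append(max(i - seen_pad, 0))
    return pos
```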
Theia-4869/MoSA | train.py | [
{
"identifier": "get_cfg",
"path": "src/configs/config.py",
"snippet": "def get_cfg():\n \"\"\"\n Get a copy of the default config.\n \"\"\"\n return _C.clone()"
},
{
"identifier": "loader",
"path": "src/data/loader.py",
"snippet": "_TORCH_BASIC_DS = {\n \"cifar100\": CIFA... | import os
import torch
import warnings
import numpy as np
import random
import wandb
import src.utils.logging as logging
from random import randint
from time import sleep
from src.configs.config import get_cfg
from src.data import loader as data_loader
from src.engine.evaluator import Evaluator
from src.engine.trainer import Trainer
from src.models.build_model import build_model
from src.utils.build_pruner import build_pruner, log_pruned_model_info
from src.utils.file_io import PathManager
from launch import default_argument_parser, logging_train_setup | 8,655 | if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
elif cfg.MODEL.TRANSFER_TYPE == "lora" or cfg.MODEL.TRANSFER_TYPE == "mosl":
if cfg.MODEL.LORA.MOE:
for blk in model.enc.transformer.encoder.layer:
if cfg.MODEL.LORA.SHARE != "down":
if "q" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_q[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_q[i].weight.mask = m
if "k" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_k[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_k[i].weight.mask = m
if "v" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_v[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_v[i].weight.mask = m
if "o" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_o[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_o[i].weight.mask = m
if cfg.MODEL.LORA.SHARE != "up":
if "q" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_q[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_q[i].weight.mask = m
if "k" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_k[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_k[i].weight.mask = m
if "v" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_v[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_v[i].weight.mask = m
if "o" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_o[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_o[i].weight.mask = m
else:
for k, p in model.named_parameters():
if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
elif cfg.MODEL.TYPE == "swin":
if cfg.MODEL.ADAPTER.MOE:
for layer in model.enc.layers:
for blk in layer.blocks:
if cfg.MODEL.ADAPTER.SHARE != "down":
score = pruner.score(blk.mlp.adapter_down[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_down[i].weight.mask = m
score = pruner.score(blk.mlp.adapter_down[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_down[i].bias.mask = m
if cfg.MODEL.ADAPTER.SHARE != "up":
score = pruner.score(blk.mlp.adapter_up[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_up[i].weight.mask = m
score = pruner.score(blk.mlp.adapter_up[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_up[i].bias.mask = m
else:
for k, p in model.named_parameters():
if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
log_pruned_model_info(model, verbose=cfg.DBG)
# for k, p in model.named_parameters():
# if p.requires_grad:
# print(k, p.shape)
# raise ValueError("stop here")
logger.info("Setting up Evaluator...")
evaluator = Evaluator()
logger.info("Setting up Trainer...")
trainer = Trainer(cfg, args, model, evaluator, cur_device)
if train_loader:
trainer.train_classifier(train_loader, val_loader, test_loader)
else:
print("No train loader presented. Exit")
if cfg.SOLVER.TOTAL_EPOCH == 0:
trainer.eval_classifier(test_loader, "test", 0)
def main(args):
"""main function to call from workflow"""
# set up cfg and args
cfg = setup(args)
# Perform training.
train(cfg, args)
if __name__ == '__main__':
| #!/usr/bin/env python3
"""
major actions here: fine-tune the features and evaluate different settings
"""
warnings.filterwarnings("ignore")
def setup(args):
"""
Create configs and perform basic setups.
"""
cfg = get_cfg()
cfg.merge_from_file(args.config_file)
cfg.merge_from_list(args.opts)
# setup output dir
# output_dir / data_name / feature_name / lr{lr}_bn{bn} / run{count}
output_dir = cfg.OUTPUT_DIR
lr = cfg.SOLVER.BASE_LR
wd = cfg.SOLVER.WEIGHT_DECAY
bn = cfg.MODEL.ADAPTER.BOTTLENECK_SIZE
output_folder = os.path.join(
cfg.DATA.NAME, cfg.DATA.FEATURE, f"lr{lr}_bn{bn}")
# train cfg.RUN_N_TIMES times
count = 1
while count <= cfg.RUN_N_TIMES:
output_path = os.path.join(output_dir, output_folder, f"run{count}")
# pause for a random time, so concurrent processes with the same setting won't interfere with each other. # noqa
# sleep(randint(3, 30))
if not PathManager.exists(output_path):
PathManager.mkdirs(output_path)
cfg.OUTPUT_DIR = output_path
break
else:
count += 1
if count > cfg.RUN_N_TIMES:
raise ValueError(
f"Already run {cfg.RUN_N_TIMES} times for {output_folder}, no need to run more")
cfg.freeze()
return cfg
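The `while` loop in `setup()` above simply claims the first `run{count}` directory that does not exist yet, giving up after `RUN_N_TIMES` attempts. A stdlib-only sketch of the same selection logic (hypothetical helper; plain `os` instead of `PathManager`):

```python
import os
import tempfile

def next_run_dir(base, max_runs):
    # return the first run{i} path that is still free; error out after max_runs
    for count in range(1, max_runs + 1):
        path = os.path.join(base, f"run{count}")
        if not os.path.exists(path):
            os.makedirs(path)
            return path
    raise ValueError(f"Already run {max_runs} times for {base}, no need to run more")

base = tempfile.mkdtemp()
first = next_run_dir(base, 5)   # claims run1
second = next_run_dir(base, 5)  # run1 now exists, so run2 is claimed
```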
def get_loaders(cfg, logger):
logger.info("Loading training data (final training data for vtab)...")
if cfg.DATA.NAME.startswith("vtab-"):
train_loader = data_loader.construct_trainval_loader(cfg)
else:
train_loader = data_loader.construct_train_loader(cfg)
logger.info("Loading validation data...")
# not really needed for vtab
val_loader = data_loader.construct_val_loader(cfg)
logger.info("Loading test data...")
if cfg.DATA.NO_TEST:
logger.info("...no test data is constructed")
test_loader = None
else:
test_loader = data_loader.construct_test_loader(cfg)
return train_loader, val_loader, test_loader
def train(cfg, args):
# clear up residual cache from previous runs
if torch.cuda.is_available():
torch.cuda.empty_cache()
# main training / eval actions here
# fix the seed for reproducibility
if cfg.SEED is not None:
torch.manual_seed(cfg.SEED)
torch.cuda.manual_seed(cfg.SEED)
torch.cuda.manual_seed_all(cfg.SEED)
np.random.seed(cfg.SEED)
random.seed(cfg.SEED)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
# setup training env including loggers
logging_train_setup(args, cfg)
logger = logging.get_logger("MOSA")
if args.use_wandb:
wandb.init(
project='MOSA',
name='{}_{}_{}'.format(cfg.DATA.NAME, cfg.MODEL.TRANSFER_TYPE, cfg.MODEL.HYPER.HYPER),
config=cfg
)
train_loader, val_loader, test_loader = get_loaders(cfg, logger)
logger.info("Constructing models...")
model, cur_device = build_model(cfg)
if args.sparse_train:
logger.info("Constructing pruner...")
pruner = build_pruner(cfg)
# for k, p in model.named_parameters():
# if p.requires_grad:
# print(k, p.shape)
# raise ValueError("stop here")
if args.sparse_train:
logger.info("Pruning model...")
if cfg.MODEL.TYPE == "vit":
if cfg.MODEL.TRANSFER_TYPE == "adapter" or cfg.MODEL.TRANSFER_TYPE == "mosa":
if cfg.MODEL.ADAPTER.MOE:
for blk in model.enc.transformer.encoder.layer:
if cfg.MODEL.ADAPTER.SHARE != "down":
if cfg.MODEL.ADAPTER.STYLE == "AdaptFormer" or cfg.MODEL.ADAPTER.STYLE == "Pfeiffer":
score = pruner.score(blk.adapter_down[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down[i].weight.mask = m
score = pruner.score(blk.adapter_down[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down[i].bias.mask = m
elif cfg.MODEL.ADAPTER.STYLE == "Houlsby":
score = pruner.score(blk.adapter_down_attn[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down_attn[i].weight.mask = m
score = pruner.score(blk.adapter_down_attn[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down_attn[i].bias.mask = m
score = pruner.score(blk.adapter_down_ffn[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down_ffn[i].weight.mask = m
score = pruner.score(blk.adapter_down_ffn[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_down_ffn[i].bias.mask = m
if cfg.MODEL.ADAPTER.SHARE != "up":
if cfg.MODEL.ADAPTER.STYLE == "AdaptFormer" or cfg.MODEL.ADAPTER.STYLE == "Pfeiffer":
score = pruner.score(blk.adapter_up[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up[i].weight.mask = m
score = pruner.score(blk.adapter_up[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up[i].bias.mask = m
elif cfg.MODEL.ADAPTER.STYLE == "Houlsby":
score = pruner.score(blk.adapter_up_attn[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up_attn[i].weight.mask = m
score = pruner.score(blk.adapter_up_attn[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up_attn[i].bias.mask = m
score = pruner.score(blk.adapter_up_ffn[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up_ffn[i].weight.mask = m
score = pruner.score(blk.adapter_up_ffn[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.adapter_up_ffn[i].bias.mask = m
else:
for k, p in model.named_parameters():
if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
elif cfg.MODEL.TRANSFER_TYPE == "lora" or cfg.MODEL.TRANSFER_TYPE == "mosl":
if cfg.MODEL.LORA.MOE:
for blk in model.enc.transformer.encoder.layer:
if cfg.MODEL.LORA.SHARE != "down":
if "q" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_q[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_q[i].weight.mask = m
if "k" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_k[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_k[i].weight.mask = m
if "v" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_v[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_v[i].weight.mask = m
if "o" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_A_o[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_A_o[i].weight.mask = m
if cfg.MODEL.LORA.SHARE != "up":
if "q" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_q[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_q[i].weight.mask = m
if "k" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_k[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_k[i].weight.mask = m
if "v" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_v[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_v[i].weight.mask = m
if "o" in cfg.MODEL.LORA.MODE:
score = pruner.score(blk.attn.lora_B_o[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.attn.lora_B_o[i].weight.mask = m
else:
for k, p in model.named_parameters():
if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
elif cfg.MODEL.TYPE == "swin":
if cfg.MODEL.ADAPTER.MOE:
for layer in model.enc.layers:
for blk in layer.blocks:
if cfg.MODEL.ADAPTER.SHARE != "down":
score = pruner.score(blk.mlp.adapter_down[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_down[i].weight.mask = m
score = pruner.score(blk.mlp.adapter_down[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_down[i].bias.mask = m
if cfg.MODEL.ADAPTER.SHARE != "up":
score = pruner.score(blk.mlp.adapter_up[0].weight)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_up[i].weight.mask = m
score = pruner.score(blk.mlp.adapter_up[0].bias)
masks = pruner.divide(score)
for i, m in enumerate(masks):
blk.mlp.adapter_up[i].bias.mask = m
else:
for k, p in model.named_parameters():
if p.requires_grad and "head" not in k:
score = pruner.score(p)
mask = pruner.prune(score)
p.mask = mask
log_pruned_model_info(model, verbose=cfg.DBG)
# for k, p in model.named_parameters():
# if p.requires_grad:
# print(k, p.shape)
# raise ValueError("stop here")
logger.info("Setting up Evaluator...")
evaluator = Evaluator()
logger.info("Setting up Trainer...")
trainer = Trainer(cfg, args, model, evaluator, cur_device)
if train_loader:
trainer.train_classifier(train_loader, val_loader, test_loader)
else:
print("No train loader presented. Exit")
if cfg.SOLVER.TOTAL_EPOCH == 0:
trainer.eval_classifier(test_loader, "test", 0)
def main(args):
"""main function to call from workflow"""
# set up cfg and args
cfg = setup(args)
# Perform training.
train(cfg, args)
if __name__ == '__main__': | args = default_argument_parser().parse_args() | 8 | 2023-12-06 07:50:16+00:00 | 12k |
khwong-c/syn-magia | tests/core/test_signal.py | [
{
"identifier": "Input",
"path": "magia/core.py",
"snippet": "class Input(Signal):\n \"\"\"\n Representing an input signal.\n It has no driver, but it is driving other signals.\n It is used by both the module declaration and the module instance.\n \"\"\"\n\n def __init__(\n ... | import random
import cocotb
import cocotb.clock
import tests.helper as helper
from pathlib import Path
from cocotb_test.simulator import run as sim_run
from magia import Elaborator, Input, Module, Output, Signal | 8,332 |
@cocotb.test()
async def adder_test(dut):
for _ in range(50):
a = random.randint(0, 0xF)
b = random.randint(0, 0xF)
dut.a.value = a
dut.b.value = b
await cocotb.clock.Timer(1, units="ns")
assert dut.q.value == (a + b)
class TestSignalManipulate:
TOP = "TopLevel"
def test_naming(self):
"""
Specifying a name for a signal should be reflected in the code generated
"""
|
| class Top(Module): | 4 | 2023-12-12 22:50:43+00:00 | 12k |
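The cocotb test in the record above drives two 4-bit operands and checks the DUT's sum combinationally. The pure-software reference it compares against can be stated in a few lines (hypothetical model; an actual check needs the simulator and the `Top` module):

```python
import random

def adder_ref(a, b, out_width=5):
    # two 4-bit inputs need a 5-bit output to hold the carry
    return (a + b) & ((1 << out_width) - 1)

random.seed(0)
for _ in range(50):
    a, b = random.randint(0, 0xF), random.randint(0, 0xF)
    assert adder_ref(a, b) == a + b  # never overflows 5 bits for 4-bit inputs
```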
batmanlab/DrasCLR | train.py | [
{
"identifier": "Encoder",
"path": "models/cnn3d.py",
"snippet": "class Encoder(nn.Module):\n\n def __init__(self, rep_dim, moco_dim, num_experts, num_coordinates):\n super(Encoder, self).__init__()\n self.rep_dim = rep_dim\n self.moco_dim = moco_dim\n self.num_experts = n... | import os
import argparse
import builtins
import math
import random
import shutil
import time
import warnings
import json
import numpy as np
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.multiprocessing as mp
import torch.utils.data
import torch.utils.data.distributed
import models.loader as DrasCLR_Loader
from tensorboard_logger import configure, log_value
from models.cnn3d import Encoder
from models.builder import DrasCLR
from data.copd_patch import COPD_dataset
from monai.transforms import Compose, RandGaussianNoise, RandAffine, Rand3DElastic, RandAdjustContrast | 7,395 | # define and create the experiment directory
exp_dir = os.path.join('./ssl_exp', args.exp_name)
if not os.path.isdir(exp_dir):
os.makedirs(exp_dir, exist_ok=True)
# save configurations to a dictionary
with open(os.path.join(exp_dir, 'configs.json'), 'w') as f:
json.dump(vars(args), f, indent=2)
f.close()
if args.seed is not None:
random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.benchmark = True
if args.gpu is not None:
warnings.warn('You have chosen a specific GPU. This will completely '
'disable data parallelism.')
if args.dist_url == "env://" and args.world_size == -1:
args.world_size = int(os.environ["WORLD_SIZE"])
args.distributed = args.world_size > 1 or args.multiprocessing_distributed
print("Distributed:", args.distributed)
#ngpus_per_node = torch.cuda.device_count()
ngpus_per_node = args.npgus_per_node
if args.multiprocessing_distributed:
# Since we have ngpus_per_node processes per node, the total world_size
# needs to be adjusted accordingly
args.world_size = ngpus_per_node * args.world_size
# Use torch.multiprocessing.spawn to launch distributed processes: the
# main_worker process function
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
else:
# Simply call main_worker function
main_worker(args.gpu, ngpus_per_node, args)
def main_worker(gpu, ngpus_per_node, args):
args.gpu = gpu
# suppress printing if not master
if args.multiprocessing_distributed and args.gpu != 0:
def print_pass(*args):
pass
builtins.print = print_pass
if args.gpu is not None:
print("Use GPU: {} for training".format(args.gpu))
if args.distributed:
if args.dist_url == "env://" and args.rank == -1:
args.rank = int(os.environ["RANK"])
if args.multiprocessing_distributed:
# For multiprocessing distributed training, rank needs to be the
# global rank among all the processes
args.rank = args.rank * ngpus_per_node + gpu
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
world_size=args.world_size, rank=args.rank)
if args.rank == 0:
configure(os.path.join('./ssl_exp', args.exp_name))
# create patch-level encoder
model = DrasCLR(
Encoder,
args.num_patch, args.rep_dim, args.moco_dim, args.num_experts, \
args.num_coordinates, args.moco_k, args.moco_m, args.moco_t, args.mlp)
if args.distributed:
# For multiprocessing distributed, DistributedDataParallel constructor
# should always set the single device scope, otherwise,
# DistributedDataParallel will use all available devices.
if args.gpu is not None:
torch.cuda.set_device(args.gpu)
model.cuda(args.gpu)
# When using a single GPU per process and per
# DistributedDataParallel, we need to divide the batch size
# ourselves based on the total number of GPUs we have
args.batch_size = int(args.batch_size / ngpus_per_node)
args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.gpu])
else:
raise NotImplementedError("GPU number is unknown.")
else:
# this code only supports DistributedDataParallel.
raise NotImplementedError("Only DistributedDataParallel is supported.")
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(args.gpu)
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
# optionally resume from a checkpoint
if args.resume:
checkpoint = os.path.join('./ssl_exp', args.exp_name, args.resume)
if os.path.isfile(checkpoint):
print("=> loading checkpoint '{}'".format(checkpoint))
if args.gpu is None:
checkpoint = torch.load(checkpoint)
else:
# Map model to be loaded to specified single gpu.
loc = 'cuda:{}'.format(args.gpu)
checkpoint = torch.load(checkpoint, map_location=loc)
args.start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(checkpoint))
exit()
# define augmentation
train_transform = define_augmentation(args, use_cuda=False)
|
parser = argparse.ArgumentParser(description='3D CT Images Self-Supervised Training Patch-level')
parser.add_argument('--arch', metavar='ARCH', default='custom')
parser.add_argument('--workers', default=0, type=int, metavar='N',
help='patch-level number of data loading workers (default: 0)')
parser.add_argument('--epochs', default=20, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--batch-size', default=64, type=int,
metavar='N',
help='patch-level mini-batch size (default: 64), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel or Distributed Data Parallel')
parser.add_argument('--lr', '--learning-rate', default=0.01, type=float,
metavar='LR', help='initial learning rate', dest='lr')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('--schedule', default=[120, 160], nargs='*', type=int,
help='learning rate schedule (when to drop lr by 10x)')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum of SGD solver')
parser.add_argument('--weight-decay', default=1e-4, type=float,
metavar='W', help='weight decay (default: 1e-4)',
dest='weight_decay')
parser.add_argument('--print-freq', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest patch-level checkpoint (default: None)')
parser.add_argument('--world-size', default=1, type=int,
help='number of nodes for distributed training')
parser.add_argument('--rank', default=0, type=int,
help='node rank for distributed training')
parser.add_argument('--dist-url', default='tcp://localhost:10000', type=str,
help='url used to set up distributed training')
parser.add_argument('--dist-backend', default='nccl', type=str,
help='distributed backend')
parser.add_argument('--seed', default=0, type=int,
help='seed for initializing training. ')
parser.add_argument('--gpu', default=None, type=int,
help='GPU id to use.')
parser.add_argument('--multiprocessing-distributed', action='store_false',
help='use multi-processing distributed training to launch '
'N processes per node, which has N GPUs. This is the '
'fastest way to use PyTorch for either single node or '
'multi node data parallel training')
parser.add_argument('--npgus-per-node', default=2, type=int,
help='number of gpus per node.')
# image data configs:
parser.add_argument('--stage', default='training', type=str,
help='stage: training or testing')
parser.add_argument('--num-patch', default=581, type=int,
help='total number of patches in the atlas image.')
parser.add_argument('--root-dir', default='/ocean/projects/asc170022p/lisun/copd/gnn_shared/data/patch_data_32_6_reg_mask/',
help='root directory of registered images in COPD dataset')
parser.add_argument('--label-name', default=["FEV1pp_utah", "FEV1_FVC_utah", "finalGold"], nargs='+',
help='phenotype label names')
parser.add_argument('--label-name-set2', default=["Exacerbation_Frequency", "MMRCDyspneaScor"], nargs='+',
help='phenotype label names')
parser.add_argument('--visual-score', default=["Emph_Severity", "Emph_Paraseptal"], nargs='+',
help='phenotype label names')
parser.add_argument('--P2-Pheno', default=["Exacerbation_Frequency_P2"], nargs='+',
help='phenotype label names')
parser.add_argument('--nhw-only', action='store_true',
help='only include white people')
parser.add_argument('--fold', default=0, type=int,
help='fold index of cross validation')
# MoCo specific configs:
parser.add_argument('--rep-dim', default=128, type=int,
help='feature dimension (default: 128)')
parser.add_argument('--moco-dim', default=128, type=int,
help='feature dimension (default: 128)')
parser.add_argument('--moco-k', default=4096, type=int,
help='queue size; number of negative keys (default: 4096)')
parser.add_argument('--moco-m', default=0.999, type=float,
help='moco momentum of updating key encoder (default: 0.999)')
parser.add_argument('--moco-t', default=0.2, type=float,
help='softmax temperature (default: 0.2)')
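The `--moco-m` momentum above drives the usual exponential-moving-average update of the key encoder, `k = m*k + (1-m)*q`. A stdlib sketch on plain floats (hypothetical helper, mirroring the momentum update MoCo-style builders perform per parameter):

```python
def ema_update(key_params, query_params, m=0.999):
    # in-place exponential moving average of the key encoder's parameters
    for i, (k, q) in enumerate(zip(key_params, query_params)):
        key_params[i] = k * m + q * (1.0 - m)
    return key_params

updated = ema_update([0.0], [1.0], m=0.999)
```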
# options for moco v2
parser.add_argument('--mlp', action='store_false',
help='use mlp head')
parser.add_argument('--cos', action='store_false',
help='use cosine lr schedule')
# experiment configs
parser.add_argument('--adj-thres', default=0.18, type=float,
help='patch adjacent threshold (default: 0.18)')
parser.add_argument('--k-neighbors', default=2, type=int,
help='top k nearest neighbors of the anchor patch in the atlas image.')
parser.add_argument('--beta', default=1.0, type=float,
help='scaling factor of neighbor InfoNCE loss. (default: 1.0)')
parser.add_argument('--warm-up', default=0, type=int,
help='number of warm-up epochs before training neighbor contrastive loss.')
parser.add_argument('--num-experts', default=8, type=int,
help='number of experts in CondConv layer.')
parser.add_argument('--num-coordinates', default=1, type=int,
help='number of input coordinates.')
parser.add_argument('--augmentation', default='agc',
help='initials of augmentation including: (f)lip, (a)ffine, (e)lastic, (g)uassian, (c)ontrast.')
parser.add_argument('--exp-name', default='debug_patch', type=str,
help='experiment name')
def main():
# read configurations
args = parser.parse_args()
# define and create the experiment directory
exp_dir = os.path.join('./ssl_exp', args.exp_name)
if not os.path.isdir(exp_dir):
os.makedirs(exp_dir, exist_ok=True)
# save configurations to a dictionary
with open(os.path.join(exp_dir, 'configs.json'), 'w') as f:
json.dump(vars(args), f, indent=2)
f.close()
if args.seed is not None:
random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.benchmark = True
if args.gpu is not None:
warnings.warn('You have chosen a specific GPU. This will completely '
'disable data parallelism.')
if args.dist_url == "env://" and args.world_size == -1:
args.world_size = int(os.environ["WORLD_SIZE"])
args.distributed = args.world_size > 1 or args.multiprocessing_distributed
print("Distributed:", args.distributed)
#ngpus_per_node = torch.cuda.device_count()
ngpus_per_node = args.npgus_per_node
if args.multiprocessing_distributed:
# Since we have ngpus_per_node processes per node, the total world_size
# needs to be adjusted accordingly
args.world_size = ngpus_per_node * args.world_size
# Use torch.multiprocessing.spawn to launch distributed processes: the
# main_worker process function
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
else:
# Simply call main_worker function
main_worker(args.gpu, ngpus_per_node, args)
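The distributed launch above spawns one process per GPU, and `main_worker` below derives each process's global rank from its node rank and local GPU index. A tiny standalone sketch of that arithmetic (toy values, independent of the real launcher):

```python
# Global rank = node_rank * ngpus_per_node + local_gpu, as used when
# initializing the process group in main_worker.
def global_rank(node_rank: int, ngpus_per_node: int, gpu: int) -> int:
    return node_rank * ngpus_per_node + gpu

# e.g. the 3rd GPU (index 2) on the 2nd node (rank 1) of a 4-GPU-per-node job:
print(global_rank(1, 4, 2))  # -> 6
```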
def main_worker(gpu, ngpus_per_node, args):
args.gpu = gpu
# suppress printing if not master
if args.multiprocessing_distributed and args.gpu != 0:
def print_pass(*args):
pass
builtins.print = print_pass
if args.gpu is not None:
print("Use GPU: {} for training".format(args.gpu))
if args.distributed:
if args.dist_url == "env://" and args.rank == -1:
args.rank = int(os.environ["RANK"])
if args.multiprocessing_distributed:
# For multiprocessing distributed training, rank needs to be the
# global rank among all the processes
args.rank = args.rank * ngpus_per_node + gpu
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
world_size=args.world_size, rank=args.rank)
if args.rank == 0:
configure(os.path.join('./ssl_exp', args.exp_name))
# create patch-level encoder
model = DrasCLR(
Encoder,
args.num_patch, args.rep_dim, args.moco_dim, args.num_experts, \
args.num_coordinates, args.moco_k, args.moco_m, args.moco_t, args.mlp)
if args.distributed:
# For multiprocessing distributed, DistributedDataParallel constructor
# should always set the single device scope, otherwise,
# DistributedDataParallel will use all available devices.
if args.gpu is not None:
torch.cuda.set_device(args.gpu)
model.cuda(args.gpu)
# When using a single GPU per process and per
# DistributedDataParallel, we need to divide the batch size
# ourselves based on the total number of GPUs we have
args.batch_size = int(args.batch_size / ngpus_per_node)
args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.gpu])
else:
raise NotImplementedError("GPU number is unknown.")
else:
# this code only supports DistributedDataParallel.
raise NotImplementedError("Only DistributedDataParallel is supported.")
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(args.gpu)
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
# optionally resume from a checkpoint
if args.resume:
checkpoint = os.path.join('./ssl_exp', args.exp_name, args.resume)
if os.path.isfile(checkpoint):
print("=> loading checkpoint '{}'".format(checkpoint))
if args.gpu is None:
checkpoint = torch.load(checkpoint)
else:
# Map model to be loaded to specified single gpu.
loc = 'cuda:{}'.format(args.gpu)
checkpoint = torch.load(checkpoint, map_location=loc)
args.start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(checkpoint))
exit()
# define augmentation
train_transform = define_augmentation(args, use_cuda=False)
| train_dataset = COPD_dataset('training', args, DrasCLR_Loader.TwoCropsTransform(train_transform), train_transform) | 2 | 2023-12-09 02:33:53+00:00 | 12k |
CHDers/Traffic-Flow-Prediction-with-Graph-Neural-Networks | traffic_prediction.py | [
{
"identifier": "LoadData",
"path": "traffic_dataset.py",
"snippet": "class LoadData(Dataset): # 这个就是把读入的数据处理成模型需要的训练数据和测试数据,一个一个样本能读取出来\n def __init__(self, data_path, num_nodes, divide_days, time_interval, history_length, train_mode):\n \"\"\"\n :param data_path: list, [\"graph file ... | import os
import time
import h5py
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import warnings
from torch.utils.data import DataLoader
from traffic_dataset import LoadData
from utils import Evaluation  # the three evaluation metrics and the visualization class
from utils import visualize_result
from gcnnet import GCN
from chebnet import ChebNet
from gat import GATNet
from rich import print
from tqdm import tqdm | 7,301 |     # Step 3: define the loss function and the optimizer
    criterion = nn.MSELoss()  # mean squared error loss
    # no learning rate passed, so the optimizer default is used, i.e. lr=1e-3
    optimizer = optim.Adam(params=my_net.parameters())
    # Step 4: training + testing
    # Train model
    Epoch = 20  # number of training epochs
    my_net.train()  # switch to training mode
    for epoch in tqdm(range(Epoch), colour="green", desc="Train"):
        epoch_loss = 0.0
        count = 0
        start_time = time.time()
        # ["graph": [B, N, N] , "flow_x": [B, N, H, D], "flow_y": [B, N, 1, D]], fetch one batch of training data at a time
        for data in train_loader:
            my_net.zero_grad()  # zero the gradients
            count += 1
            # [B, N, 1, D]; the label flow_y lives on the CPU, so the final prediction is moved back to the CPU
            predict_value = my_net(data, device).to(torch.device("cpu"))
            # compute the loss; keep in mind this loss is not a scalar
            loss = criterion(predict_value, data["flow_y"])
            epoch_loss += loss.item()  # accumulate the loss over the whole epoch; dividing by the training-set length at the end gives the average loss
            loss.backward()  # backpropagation
            optimizer.step()  # update the parameters
        end_time = time.time()
        print("Epoch: {:04d}, Loss: {:02.4f}, Time: {:02.2f} mins".format(epoch, 1000 * epoch_loss / len(train_data),
                                                                          (end_time - start_time) / 60))
    # Test Model
    # For testing:
    # 1. besides computing the loss, the predictions are also visualized (qualitative analysis)
    # 2. the predictions are evaluated with the three metrics MAE, MAPE, and RMSE (quantitative analysis)
    my_net.eval()  # switch to evaluation mode
    with torch.no_grad():  # disable gradient tracking
        MAE, MAPE, RMSE = [], [], []  # lists for the three metrics
        Target = np.zeros([307, 1, 1])  # [N, T, D], T=1  # shape of the target data, zero-filled
        Predict = np.zeros_like(Target)  # [N, T, D], T=1  # shape of the predicted data
        total_loss = 0.0
        for data in test_loader:  # fetch one batch of test data at a time
            # the predictions obtained below are still normalized; the problem is that the three metrics and the visualization need de-normalized data
            # [B, N, 1, D]: B is the batch_size, N the number of nodes, 1 the time step T=1, D the node flow features
            predict_value = my_net(data, device).to(torch.device("cpu"))
            loss = criterion(predict_value, data["flow_y"])  # compute the loss with MSE
            total_loss += loss.item()  # accumulate the loss over all batches
            # move the batch dimension of the predictions and targets to the second (time) dimension; the test
            # samples are not shuffled, so each batch arrives in temporal order and treating it as time is valid.
            predict_value = predict_value.transpose(0, 2).squeeze(
                0)  # [1, N, B(T), D] -> [N, B(T), D] -> [N, T, D]
            target_value = data["flow_y"].transpose(0, 2).squeeze(
                0)  # [1, N, B(T), D] -> [N, B(T), D] -> [N, T, D]
            performance, data_to_save = compute_performance(
                predict_value, target_value, test_loader)  # evaluate the model; returns the metrics and the recovered data
            # concatenate the data taken from each batch along the batch dimension to recover the full time axis, i.e.
            # [N, T, D] = [N, T1+T2+..., D]
            Predict = np.concatenate([Predict, data_to_save[0]], axis=1)
            Target = np.concatenate([Target, data_to_save[1]], axis=1)
            MAE.append(performance[0])
            MAPE.append(performance[1])
            RMSE.append(performance[2])
        print("Test Loss: {:02.4f}".format(1000 * total_loss / len(test_data)))
        # average the three metrics
        print("Performance: MAE {:2.2f} {:2.2f}% {:2.2f}".format(np.mean(MAE), np.mean(MAPE * 100), np.mean(RMSE)))
        # delete index 0 along the time axis: the arrays were initialized with zeros, but time starts at 1
        Predict = np.delete(Predict, 0, axis=1)
        Target = np.delete(Target, 0, axis=1)
        result_file = "GAT_result.h5"
        file_obj = h5py.File(result_file, "w")  # save the predictions and targets to a file, since the results are visualized several times
        file_obj["predict"] = Predict  # [N, T, D]
        file_obj["target"] = Target  # [N, T, D]
def compute_performance(prediction, target, data):  # evaluate model performance
    # the try/except below handles the following: during training + testing the data always comes through a DataLoader,
    # so its underlying dataset can be assigned directly; if instead a saved model is tested later, the incoming data
    # may still be of DataLoader type and must be converted to a Dataset.
    try:
        dataset = data.dataset  # data is a DataLoader; its .dataset attribute yields the Dataset
    except:
        dataset = data  # data is already a Dataset; assign it directly
    # de-normalize the predictions and targets; the recover_data() function comes from the data processing in the previous section
    # flow_norm is the normalization basis: flow_norm[0] is the maximum, flow_norm[1] the minimum
    # prediction.numpy() and target.numpy() are the data to de-normalize; they are converted to numpy because recover_data() works on numpy arrays
    prediction = LoadData.recover_data(
        dataset.flow_norm[0], dataset.flow_norm[1], prediction.numpy())
    target = LoadData.recover_data(
        dataset.flow_norm[0], dataset.flow_norm[1], target.numpy())
    # the three evaluation metrics are wrapped in a class that lives in another file, shown later
    mae, mape, rmse = Evaluation.total(
        target.reshape(-1), prediction.reshape(-1))  # flatten to plain vectors before computing the three metrics
    performance = [mae, mape, rmse]
    recovered_data = [prediction, target]
    return performance, recovered_data  # return the metrics and the recovered data (prepared for visualization)
if __name__ == '__main__':
    main()
    # Visualization happens in the Evaluation() class below; here the results of the GAT run are visualized
    # to visualize GCN or ChebNet instead, just change which algorithm is commented in/out at line 45
# @Time : 2020/8/25
# @Author : LeronQ
# @github : https://github.com/LeronQ
# PyTorch traffic-flow prediction with GCN/GAT/ChebNet graph neural networks (with code): https://blog.csdn.net/yilulvxing/article/details/110306999
# traffic_prediction.py
# LoadData is the data-handling class written in the previous section, packaged in the traffic_dataset.py file
warnings.filterwarnings('ignore')
def main():
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # select GPU 0 (there may be several GPUs)
    # Step 1: prepare the data (already prepared in the previous section and only called here; link at the very top)
    train_data = LoadData(data_path=["PeMS_04/PeMS04.csv", "PeMS_04/PeMS04.npz"], num_nodes=307, divide_days=[45, 14],
                          time_interval=5, history_length=6,
                          train_mode="train")
    # num_workers is the number of worker threads used to load the data (batches)
    train_loader = DataLoader(
        train_data, batch_size=32, shuffle=True, num_workers=4)
    test_data = LoadData(data_path=["PeMS_04/PeMS04.csv", "PeMS_04/PeMS04.npz"], num_nodes=307, divide_days=[45, 14],
                         time_interval=5, history_length=6,
                         train_mode="test")
    test_loader = DataLoader(test_data, batch_size=32,
                             shuffle=False, num_workers=4)
    print("🚀🚀🚀 [italic bold green]Data loading finished!!!")
    # SECTION: Step 2: define the model (only loaded here; the model definitions are written separately below, assume they are ready)
    my_net = GCN(in_c=6, hid_c=6, out_c=1)  # load the GCN model
    # my_net = ChebNet(in_c=6, hid_c=6, out_c=1, K=2)  # load the ChebNet model
    # my_net = GATNet(in_c=6 * 1, hid_c=6, out_c=1, n_heads=2)  # load the GAT model
    print(my_net)
    device = torch.device(
        "cuda" if torch.cuda.is_available() else "cpu")  # select the device
    my_net = my_net.to(device)  # move the model to the device
    # Step 3: define the loss function and the optimizer
    criterion = nn.MSELoss()  # mean squared error loss
    # no learning rate passed, so the optimizer default is used, i.e. lr=1e-3
    optimizer = optim.Adam(params=my_net.parameters())
    # Step 4: training + testing
    # Train model
    Epoch = 20  # number of training epochs
    my_net.train()  # switch to training mode
    for epoch in tqdm(range(Epoch), colour="green", desc="Train"):
        epoch_loss = 0.0
        count = 0
        start_time = time.time()
        # ["graph": [B, N, N] , "flow_x": [B, N, H, D], "flow_y": [B, N, 1, D]], fetch one batch of training data at a time
        for data in train_loader:
            my_net.zero_grad()  # zero the gradients
            count += 1
            # [B, N, 1, D]; the label flow_y lives on the CPU, so the final prediction is moved back to the CPU
            predict_value = my_net(data, device).to(torch.device("cpu"))
            # compute the loss; keep in mind this loss is not a scalar
            loss = criterion(predict_value, data["flow_y"])
            epoch_loss += loss.item()  # accumulate the loss over the whole epoch; dividing by the training-set length at the end gives the average loss
            loss.backward()  # backpropagation
            optimizer.step()  # update the parameters
        end_time = time.time()
        print("Epoch: {:04d}, Loss: {:02.4f}, Time: {:02.2f} mins".format(epoch, 1000 * epoch_loss / len(train_data),
                                                                          (end_time - start_time) / 60))
    # Test Model
    # For testing:
    # 1. besides computing the loss, the predictions are also visualized (qualitative analysis)
    # 2. the predictions are evaluated with the three metrics MAE, MAPE, and RMSE (quantitative analysis)
    my_net.eval()  # switch to evaluation mode
    with torch.no_grad():  # disable gradient tracking
        MAE, MAPE, RMSE = [], [], []  # lists for the three metrics
        Target = np.zeros([307, 1, 1])  # [N, T, D], T=1  # shape of the target data, zero-filled
        Predict = np.zeros_like(Target)  # [N, T, D], T=1  # shape of the predicted data
        total_loss = 0.0
        for data in test_loader:  # fetch one batch of test data at a time
            # the predictions obtained below are still normalized; the problem is that the three metrics and the visualization need de-normalized data
            # [B, N, 1, D]: B is the batch_size, N the number of nodes, 1 the time step T=1, D the node flow features
            predict_value = my_net(data, device).to(torch.device("cpu"))
            loss = criterion(predict_value, data["flow_y"])  # compute the loss with MSE
            total_loss += loss.item()  # accumulate the loss over all batches
            # move the batch dimension of the predictions and targets to the second (time) dimension; the test
            # samples are not shuffled, so each batch arrives in temporal order and treating it as time is valid.
            predict_value = predict_value.transpose(0, 2).squeeze(
                0)  # [1, N, B(T), D] -> [N, B(T), D] -> [N, T, D]
            target_value = data["flow_y"].transpose(0, 2).squeeze(
                0)  # [1, N, B(T), D] -> [N, B(T), D] -> [N, T, D]
            performance, data_to_save = compute_performance(
                predict_value, target_value, test_loader)  # evaluate the model; returns the metrics and the recovered data
            # concatenate the data taken from each batch along the batch dimension to recover the full time axis, i.e.
            # [N, T, D] = [N, T1+T2+..., D]
            Predict = np.concatenate([Predict, data_to_save[0]], axis=1)
            Target = np.concatenate([Target, data_to_save[1]], axis=1)
            MAE.append(performance[0])
            MAPE.append(performance[1])
            RMSE.append(performance[2])
        print("Test Loss: {:02.4f}".format(1000 * total_loss / len(test_data)))
        # average the three metrics
        print("Performance: MAE {:2.2f} {:2.2f}% {:2.2f}".format(np.mean(MAE), np.mean(MAPE * 100), np.mean(RMSE)))
        # delete index 0 along the time axis: the arrays were initialized with zeros, but time starts at 1
        Predict = np.delete(Predict, 0, axis=1)
        Target = np.delete(Target, 0, axis=1)
        result_file = "GAT_result.h5"
        file_obj = h5py.File(result_file, "w")  # save the predictions and targets to a file, since the results are visualized several times
        file_obj["predict"] = Predict  # [N, T, D]
        file_obj["target"] = Target  # [N, T, D]
def compute_performance(prediction, target, data):  # evaluate model performance
    # the try/except below handles the following: during training + testing the data always comes through a DataLoader,
    # so its underlying dataset can be assigned directly; if instead a saved model is tested later, the incoming data
    # may still be of DataLoader type and must be converted to a Dataset.
    try:
        dataset = data.dataset  # data is a DataLoader; its .dataset attribute yields the Dataset
    except:
        dataset = data  # data is already a Dataset; assign it directly
    # de-normalize the predictions and targets; the recover_data() function comes from the data processing in the previous section
    # flow_norm is the normalization basis: flow_norm[0] is the maximum, flow_norm[1] the minimum
    # prediction.numpy() and target.numpy() are the data to de-normalize; they are converted to numpy because recover_data() works on numpy arrays
    prediction = LoadData.recover_data(
        dataset.flow_norm[0], dataset.flow_norm[1], prediction.numpy())
    target = LoadData.recover_data(
        dataset.flow_norm[0], dataset.flow_norm[1], target.numpy())
    # the three evaluation metrics are wrapped in a class that lives in another file, shown later
    mae, mape, rmse = Evaluation.total(
        target.reshape(-1), prediction.reshape(-1))  # flatten to plain vectors before computing the three metrics
    performance = [mae, mape, rmse]
    recovered_data = [prediction, target]
    return performance, recovered_data  # return the metrics and the recovered data (prepared for visualization)
if __name__ == '__main__':
    main()
    # Visualization happens in the Evaluation() class below; here the results of the GAT run are visualized
    # to visualize GCN or ChebNet instead, just change which algorithm is commented in/out at line 45 | visualize_result(h5_file="GAT_result.h5", | 2 | 2023-12-05 07:25:35+00:00 | 12k |
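`compute_performance` above de-normalizes with `LoadData.recover_data`, whose comments state that `flow_norm[0]` is the maximum and `flow_norm[1]` the minimum. Assuming plain min-max scaling, the inverse transform can be sketched as follows (`recover_data_sketch` is an illustrative stand-in, not the real function from `traffic_dataset.py`):

```python
# Illustrative stand-in for LoadData.recover_data under min-max scaling:
#   forward:  x_norm = (x - x_min) / (x_max - x_min)
#   inverse:  x = x_norm * (x_max - x_min) + x_min
def recover_data_sketch(max_data, min_data, data):
    return [x * (max_data - min_data) + min_data for x in data]

print(recover_data_sketch(10.0, 0.0, [0.0, 0.5, 1.0]))  # -> [0.0, 5.0, 10.0]
```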
nickruggeri/hypergraph-message-passing | test/model/conftest.py | [
{
"identifier": "hye_list_to_binary_incidence",
"path": "src/data/conversion.py",
"snippet": "def hye_list_to_binary_incidence(\n hye_list: list[tuple[int]], shape: tuple[int] | None = None\n) -> sparse.coo_array:\n \"\"\"Convert a list of hyperedges into a scipy sparse COO array.\n The hypered... | import itertools
import os
import numpy as np
import pytest
from pathlib import Path
from typing import Dict, Tuple
from dotenv import load_dotenv
from src.data.conversion import hye_list_to_binary_incidence
from src.data.representation.binary_hypergraph import BinaryHypergraph
from src.data.representation.incidence_hypergraph import IncidenceHypergraph
from src.model.hypergraph_block_model import HypergraphBlockModel | 10,249 |
load_dotenv()
TEST_DATA_DIR = Path(os.environ["TEST_DATA_DIR"])
########################################################################################
# Some blockmodels.
p_vals = [
np.array([[0.1, 0.2, 0.0], [0.2, 0.0, 0.9], [0.0, 0.9, 0.0]]),
np.array(
[
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
]
),
np.array(
[
[1.0, 0.0],
[0.0, 1.0],
]
),
np.array(
[
[0.9, 0.1, 0.0],
[0.1, 1.0, 0.0],
[0.0, 0.0, 0.23],
]
),
]
N_vals = [2, 5, 10, 100]
def _all_models():
for p, N in itertools.product(p_vals, N_vals):
n = np.ones(len(p)) / len(p)
n[-1] += 1 - n.sum() # sum to 1, avoid numerical errors.
assert n.sum() == 1
|
load_dotenv()
TEST_DATA_DIR = Path(os.environ["TEST_DATA_DIR"])
########################################################################################
# Some blockmodels.
p_vals = [
np.array([[0.1, 0.2, 0.0], [0.2, 0.0, 0.9], [0.0, 0.9, 0.0]]),
np.array(
[
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
]
),
np.array(
[
[1.0, 0.0],
[0.0, 1.0],
]
),
np.array(
[
[0.9, 0.1, 0.0],
[0.1, 1.0, 0.0],
[0.0, 0.0, 0.23],
]
),
]
N_vals = [2, 5, 10, 100]
def _all_models():
for p, N in itertools.product(p_vals, N_vals):
n = np.ones(len(p)) / len(p)
n[-1] += 1 - n.sum() # sum to 1, avoid numerical errors.
assert n.sum() == 1 | yield HypergraphBlockModel(n, p, N, len(p), max_hye_size=None) | 3 | 2023-12-06 22:01:38+00:00 | 12k |
kramerlab/PeerLearning | run_peer.py | [
{
"identifier": "DQNPeer",
"path": "dqn_peer.py",
"snippet": "class DQNPeer(make_peer_class(DQN)):\n \"\"\"\n A DQN version to be used with peer learning. Therefore, it features\n a critic function\n \"\"\"\n def critic(self, observations, actions):\n q_values = self.q_net(observat... | import argparse
import datetime
import gym
import wandb
import predefined_agents # noqa: F401
import env as local_envs # noqa: F401
from pathlib import Path
from stable_baselines3 import SAC, TD3
from stable_baselines3.common.utils import set_random_seed, \
update_learning_rate
from wandb.integration.sb3 import WandbCallback
from dqn_peer import DQNPeer
from peer import PeerGroup, make_peer_class
from callbacks import PeerEvalCallback
from utils import str2bool, add_default_values_to_parser, \
log_reward_avg_in_wandb, add_default_values_to_train_parser, \
new_random_seed, make_env, ControllerArguments | 8,484 | peer_args = []
for i in range(args.agent_count):
algo_args.append(
dict(policy="MlpPolicy",
verbose=1,
policy_kwargs=dict(
net_arch=CA.argument_for_every_agent(args.net_arch, i)
),
buffer_size=args.buffer_size,
batch_size=args.batch_size,
gamma=args.gamma,
tau=args.tau,
train_freq=args.train_freq,
target_update_interval=args.target_update_interval,
gradient_steps=args.gradient_steps,
learning_starts=args.buffer_start_size,
learning_rate=CA.argument_for_every_agent(args.learning_rate,
i),
tensorboard_log=None,
device=args.device))
peer_args.append(
dict(temperature=CA.argument_for_every_agent(args.T, i),
temp_decay=CA.argument_for_every_agent(args.T_decay, i),
algo_args=algo_args[i],
env=args.env,
env_args=args.env_args,
use_trust=args.use_trust,
use_critic=args.use_critic,
buffer_size=args.trust_buffer_size,
follow_steps=args.follow_steps,
use_trust_buffer=args.use_trust_buffer,
solo_training=not args.peer_learning,
peers_sample_with_noise=args.peers_sample_with_noise,
sample_random_actions=args.sample_random_actions,
init_trust_values=args.init_trust_values,
sample_from_suggestions=args.sample_from_suggestions,
epsilon=args.epsilon,
only_follow_peers=args.only_follow_peers))
# create Peer classes
SACPeer = make_peer_class(SAC)
TD3Peer = make_peer_class(TD3)
# create peers and peer group
peers = []
callbacks = []
eval_envs = []
for i in range(args.agent_count):
args_for_agent = peer_args[i]
agent_algo = CA.argument_for_every_agent(args.mix_agents, i)
if agent_algo == 'SAC':
args_for_agent["algo_args"]["ent_coef"] = "auto"
args_for_agent["algo_args"]["use_sde"] = True
args_for_agent["algo_args"]["policy_kwargs"]["log_std_init"] = -3
peer = SACPeer(**args_for_agent, seed=new_random_seed())
elif agent_algo == 'TD3':
peer = TD3Peer(**args_for_agent, seed=new_random_seed())
elif agent_algo == 'DQN':
args_for_agent["algo_args"]["exploration_fraction"] = \
args.exploration_fraction
args_for_agent["algo_args"]["exploration_final_eps"] = \
args.exploration_final_eps
peer = DQNPeer(**args_for_agent, seed=new_random_seed())
elif agent_algo in ['Adversarial', 'Expert']:
class_str = f"predefined_agents." \
f"{args.env.split('-')[0]}{agent_algo}"
peer = eval(class_str)(**args_for_agent, seed=new_random_seed())
else:
raise NotImplementedError(
f"The Agent {agent_algo}"
f" is not implemented")
peers.append(peer)
eval_env = make_env(args.env, args.n_eval_episodes, **args.env_args)
# every agent gets its own callbacks
callbacks.append([WandbCallback(verbose=2)])
eval_envs.append(eval_env)
peer_group = PeerGroup(peers, use_agent_values=args.use_agent_value,
lr=args.trust_lr, switch_ratio=args.switch_ratio,
init_agent_values=args.init_agent_values,
use_advantage=args.use_advantage,
max_peer_epochs=args.max_peer_epochs)
# create callbacks
for i in range(args.agent_count):
peer_callback = PeerEvalCallback(eval_env=eval_envs[i],
eval_envs=eval_envs,
peer_group=peer_group,
best_model_save_path=str_folder,
log_path=str_folder,
eval_freq=args.eval_interval,
n_eval_episodes=args.n_eval_episodes)
callbacks[i].append(peer_callback) # type: ignore
# calculate number of epochs based on episode length
max_episode_steps = max(args.min_epoch_length,
gym.spec(args.env).max_episode_steps)
n_epochs = args.steps // max_episode_steps
# load pretrained model
for i, path in enumerate(args.load_paths):
load_path = Path.cwd().joinpath("Experiments", path)
peer = peer_group.peers[i].set_parameters(load_path_or_dict=load_path)
peers[i].learning_rate = 0
peers[i].lr_schedule = lambda _: 0.0
update_learning_rate(peers[i].ent_coef_optimizer, 0)
peers[i].replay_buffer.reset()
peers[i].buffer.buffer.clear()
# train the peer group
peer_group.learn(n_epochs, callbacks=callbacks,
eval_log_path=str_folder,
max_epoch_len=max_episode_steps)
|
def add_args():
# create arg parser
parser = argparse.ArgumentParser(description="Peer learning.")
# General
parser.add_argument("--save-name", type=str, default="delete_me")
parser = add_default_values_to_parser(parser)
# Training
training = parser.add_argument_group("Training")
add_default_values_to_train_parser(training)
# Peer Learning
peer_learning = parser.add_argument_group("Peer Learning")
peer_learning.add_argument("--follow-steps", type=int, default=10)
peer_learning.add_argument("--switch-ratio", type=float, default=1,
help="How many times peer training compared to "
"solo training Ratio of peer learning "
"episodes to solo episodes; 0 -> only "
"peer learning episodes."
"ratio 0 {'solo': 0, 'peer': 100}"
"ratio 0.2 {'solo': 83, 'peer': 17}"
"ratio 0.25 {'solo': 80, 'peer': 20}"
"ratio 0.333333 {'solo': 75, 'peer': 25}"
"ratio 0.5 {'solo': 67, 'peer': 33}"
"ratio 1 {'solo': 50, 'peer': 50}"
"ratio 2 {'solo': 33, 'peer': 67}"
"ratio 3 {'solo': 25, 'peer': 75}"
"ratio 4 {'solo': 20, 'peer': 80}"
"ratio 5 {'solo': 17, 'peer': 83}")
peer_learning.add_argument("--peer-learning", type=str2bool, nargs="?",
const=True, default=True)
peer_learning.add_argument("--peers-sample-with-noise", type=str2bool,
nargs="?",
const=True, default=True)
peer_learning.add_argument("--use-agent-value", type=str2bool, nargs="?",
const=True, default=True)
peer_learning.add_argument("--use-trust", type=str2bool, nargs="?",
const=True, default=True)
peer_learning.add_argument("--use-trust-buffer", type=str2bool, nargs="?",
const=True, default=True)
peer_learning.add_argument("--trust-buffer-size", type=int, default=1000)
peer_learning.add_argument("--use-critic", type=str2bool, nargs="?",
const=True, default=True)
peer_learning.add_argument("--sample_random_actions", type=str2bool,
nargs="?", const=True, default=False)
peer_learning.add_argument("--trust-lr", type=float, default=0.001)
peer_learning.add_argument("--T", type=float, nargs='*', default=[1])
peer_learning.add_argument("--T-decay", type=float, nargs='*', default=[0])
peer_learning.add_argument("--init-trust-values", type=float, default=200)
peer_learning.add_argument("--init-agent-values", type=float, default=200)
peer_learning.add_argument("--use-advantage", type=str2bool, nargs="?",
const=False, default=False)
peer_learning.add_argument("--sample-from-suggestions", type=str2bool,
nargs="?", const=False, default=False)
peer_learning.add_argument("--epsilon", type=float, default=0.0)
peer_learning.add_argument("--max-peer-epochs", type=int,
default=1_000_000_000)
peer_learning.add_argument("--only-follow-peers", type=str2bool,
nargs="?", const=False, default=False)
return parser
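The ratio table in the `--switch-ratio` help text above corresponds to a peer share of `ratio / (1 + ratio)`, with 0 special-cased to peer-only. A hypothetical helper (not part of the repo) that reproduces the table:

```python
def split_percentages(switch_ratio: float) -> dict:
    # 0 is special-cased in the help text: only peer-learning episodes.
    if switch_ratio == 0:
        return {"solo": 0, "peer": 100}
    peer = switch_ratio / (1 + switch_ratio)
    return {"solo": round(100 * (1 - peer)), "peer": round(100 * peer)}

print(split_percentages(1))     # -> {'solo': 50, 'peer': 50}
print(split_percentages(0.25))  # -> {'solo': 80, 'peer': 20}
```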
if __name__ == '__main__':
# parse args
arg_parser = add_args()
args = arg_parser.parse_args()
CA = ControllerArguments(args.agent_count)
# assert if any peer learning strategy is chosen peer learning must be True
option_on = (args.use_trust or args.use_critic or args.use_agent_value)
assert (option_on and args.peer_learning) or not option_on
# create results/experiments folder
time_string = datetime.datetime.now().strftime("%Y-%m-%d_%H.%M.%S")
unique_dir = f"{time_string}__{args.job_id}"
experiment_folder = args.save_dir.joinpath(args.save_name, unique_dir)
experiment_folder.mkdir(exist_ok=True, parents=True)
str_folder = str(experiment_folder)
print("Experiment folder is", str_folder)
# suppress gym warnings
gym.logger.set_level(level=gym.logger.DISABLED)
# seed everything
set_random_seed(args.seed)
# init wandb
wandb.tensorboard.patch(root_logdir=str_folder)
run = wandb.init(entity="jgu-wandb", config=args.__dict__,
project="peer-learning",
monitor_gym=True, sync_tensorboard=False,
name=f"{args.save_name}__{args.job_id}",
notes=f"Peer Learning with {args.agent_count} agents on "
f"the {args.env.split('-')[0]} environment.",
dir=str_folder, mode=args.wandb)
# initialize peer group
algo_args = []
peer_args = []
for i in range(args.agent_count):
algo_args.append(
dict(policy="MlpPolicy",
verbose=1,
policy_kwargs=dict(
net_arch=CA.argument_for_every_agent(args.net_arch, i)
),
buffer_size=args.buffer_size,
batch_size=args.batch_size,
gamma=args.gamma,
tau=args.tau,
train_freq=args.train_freq,
target_update_interval=args.target_update_interval,
gradient_steps=args.gradient_steps,
learning_starts=args.buffer_start_size,
learning_rate=CA.argument_for_every_agent(args.learning_rate,
i),
tensorboard_log=None,
device=args.device))
peer_args.append(
dict(temperature=CA.argument_for_every_agent(args.T, i),
temp_decay=CA.argument_for_every_agent(args.T_decay, i),
algo_args=algo_args[i],
env=args.env,
env_args=args.env_args,
use_trust=args.use_trust,
use_critic=args.use_critic,
buffer_size=args.trust_buffer_size,
follow_steps=args.follow_steps,
use_trust_buffer=args.use_trust_buffer,
solo_training=not args.peer_learning,
peers_sample_with_noise=args.peers_sample_with_noise,
sample_random_actions=args.sample_random_actions,
init_trust_values=args.init_trust_values,
sample_from_suggestions=args.sample_from_suggestions,
epsilon=args.epsilon,
only_follow_peers=args.only_follow_peers))
# create Peer classes
SACPeer = make_peer_class(SAC)
TD3Peer = make_peer_class(TD3)
# create peers and peer group
peers = []
callbacks = []
eval_envs = []
for i in range(args.agent_count):
args_for_agent = peer_args[i]
agent_algo = CA.argument_for_every_agent(args.mix_agents, i)
if agent_algo == 'SAC':
args_for_agent["algo_args"]["ent_coef"] = "auto"
args_for_agent["algo_args"]["use_sde"] = True
args_for_agent["algo_args"]["policy_kwargs"]["log_std_init"] = -3
peer = SACPeer(**args_for_agent, seed=new_random_seed())
elif agent_algo == 'TD3':
peer = TD3Peer(**args_for_agent, seed=new_random_seed())
elif agent_algo == 'DQN':
args_for_agent["algo_args"]["exploration_fraction"] = \
args.exploration_fraction
args_for_agent["algo_args"]["exploration_final_eps"] = \
args.exploration_final_eps
peer = DQNPeer(**args_for_agent, seed=new_random_seed())
elif agent_algo in ['Adversarial', 'Expert']:
class_str = f"predefined_agents." \
f"{args.env.split('-')[0]}{agent_algo}"
peer = eval(class_str)(**args_for_agent, seed=new_random_seed())
else:
raise NotImplementedError(
f"The Agent {agent_algo}"
f" is not implemented")
peers.append(peer)
eval_env = make_env(args.env, args.n_eval_episodes, **args.env_args)
# every agent gets its own callbacks
callbacks.append([WandbCallback(verbose=2)])
eval_envs.append(eval_env)
peer_group = PeerGroup(peers, use_agent_values=args.use_agent_value,
lr=args.trust_lr, switch_ratio=args.switch_ratio,
init_agent_values=args.init_agent_values,
use_advantage=args.use_advantage,
max_peer_epochs=args.max_peer_epochs)
# create callbacks
for i in range(args.agent_count):
peer_callback = PeerEvalCallback(eval_env=eval_envs[i],
eval_envs=eval_envs,
peer_group=peer_group,
best_model_save_path=str_folder,
log_path=str_folder,
eval_freq=args.eval_interval,
n_eval_episodes=args.n_eval_episodes)
callbacks[i].append(peer_callback) # type: ignore
# calculate number of epochs based on episode length
max_episode_steps = max(args.min_epoch_length,
gym.spec(args.env).max_episode_steps)
n_epochs = args.steps // max_episode_steps
# load pretrained model
for i, path in enumerate(args.load_paths):
load_path = Path.cwd().joinpath("Experiments", path)
peer = peer_group.peers[i].set_parameters(load_path_or_dict=load_path)
peers[i].learning_rate = 0
peers[i].lr_schedule = lambda _: 0.0
update_learning_rate(peers[i].ent_coef_optimizer, 0)
peers[i].replay_buffer.reset()
peers[i].buffer.buffer.clear()
# train the peer group
peer_group.learn(n_epochs, callbacks=callbacks,
eval_log_path=str_folder,
max_epoch_len=max_episode_steps)
| log_reward_avg_in_wandb(callbacks) | 6 | 2023-12-13 10:40:55+00:00 | 12k |
ZS-YANG/FemtoDet-v3 | projects/Detic_new/detic/detic.py | [
{
"identifier": "LVISV1Dataset",
"path": "mmdet/datasets/lvis.py",
"snippet": "class LVISV1Dataset(LVISDataset):\n \"\"\"LVIS v1 dataset for detection.\"\"\"\n\n METAINFO = {\n 'classes':\n ('aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock',\n 'alcohol', 'alligator'... | import copy
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip
from typing import List, Union
from mmengine.logging import print_log
from torch import Tensor
from mmdet.datasets import LVISV1Dataset
from mmdet.models.detectors.cascade_rcnn import CascadeRCNN
from mmdet.registry import MODELS
from mmdet.structures import SampleList
from clip.simple_tokenizer import SimpleTokenizer
from mmdet.datasets import CocoDataset
from mmdet.datasets import CityscapesDataset
from mmdet.datasets import VOCDataset
from mmdet.datasets import OpenImagesDataset
from mmdet.datasets import LVISV1Dataset | 8,255 | # Copyright (c) OpenMMLab. All rights reserved.
class CLIPTextEncoder(nn.Module):
def __init__(self, model_name='ViT-B/32'):
super().__init__()
self.tokenizer = SimpleTokenizer()
pretrained_model, _ = clip.load(model_name, device='cpu')
self.clip = pretrained_model
@property
def device(self):
return self.clip.device
@property
def dtype(self):
return self.clip.dtype
def tokenize(self,
texts: Union[str, List[str]],
context_length: int = 77) -> torch.LongTensor:
if isinstance(texts, str):
texts = [texts]
sot_token = self.tokenizer.encoder['<|startoftext|>']
eot_token = self.tokenizer.encoder['<|endoftext|>']
all_tokens = [[sot_token] + self.tokenizer.encode(text) + [eot_token]
for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
st = torch.randint(len(tokens) - context_length + 1,
(1, ))[0].item()
tokens = tokens[st:st + context_length]
result[i, :len(tokens)] = torch.tensor(tokens)
return result
def forward(self, text):
text = self.tokenize(text)
text_features = self.clip.encode_text(text)
return text_features
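`tokenize` above lays each caption out as `[SOT] tokens [EOT]` padded with zeros to `context_length`, random-cropping sequences that are too long. A toy sketch of that fixed-length layout, with stand-in ids (`sot=1`, `eot=2`, pad `0`) instead of the real CLIP vocabulary, and simple truncation in place of the random crop:

```python
# Fixed-length token layout: [sot] + tokens + [eot], truncated/zero-padded.
def layout(token_ids, context_length=8, sot=1, eot=2):
    seq = [sot] + list(token_ids) + [eot]
    seq = seq[:context_length]  # the real method random-crops instead
    return seq + [0] * (context_length - len(seq))

print(layout([7, 7, 7]))  # -> [1, 7, 7, 7, 2, 0, 0, 0]
```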
def get_class_weight(original_caption, prompt_prefix='a '):
if isinstance(original_caption, str):
if original_caption == 'coco':
class_names = CocoDataset.METAINFO['classes']
elif original_caption == 'cityscapes':
class_names = CityscapesDataset.METAINFO['classes']
elif original_caption == 'voc':
class_names = VOCDataset.METAINFO['classes']
elif original_caption == 'openimages':
class_names = OpenImagesDataset.METAINFO['classes']
elif original_caption == 'lvis':
class_names = LVISV1Dataset.METAINFO['classes']
else:
if not original_caption.endswith('.'):
original_caption = original_caption + ' . '
original_caption = original_caption.split(' . ')
class_names = list(filter(lambda x: len(x) > 0, original_caption))
# for test.py
else:
class_names = list(original_caption)
text_encoder = CLIPTextEncoder()
text_encoder.eval()
texts = [prompt_prefix + x for x in class_names]
print_log(f'Computing text embeddings for {len(class_names)} classes.')
embeddings = text_encoder(texts).detach().permute(1, 0).contiguous().cpu()
return class_names, embeddings
def reset_cls_layer_weight(roi_head, weight):
if type(weight) == str:
print_log(f'Resetting cls_layer_weight from file: {weight}')
zs_weight = torch.tensor(
np.load(weight),
dtype=torch.float32).permute(1, 0).contiguous() # D x C
else:
zs_weight = weight
zs_weight = torch.cat(
[zs_weight, zs_weight.new_zeros(
(zs_weight.shape[0], 1))], dim=1) # D x (C + 1)
zs_weight = F.normalize(zs_weight, p=2, dim=0)
zs_weight = zs_weight.to('cuda')
num_classes = zs_weight.shape[-1]
for bbox_head in roi_head.bbox_head:
bbox_head.num_classes = num_classes
del bbox_head.fc_cls.zs_weight
bbox_head.fc_cls.zs_weight = zs_weight
@MODELS.register_module()
| # Copyright (c) OpenMMLab. All rights reserved.
class CLIPTextEncoder(nn.Module):
def __init__(self, model_name='ViT-B/32'):
super().__init__()
self.tokenizer = SimpleTokenizer()
pretrained_model, _ = clip.load(model_name, device='cpu')
self.clip = pretrained_model
@property
def device(self):
return self.clip.device
@property
def dtype(self):
return self.clip.dtype
def tokenize(self,
texts: Union[str, List[str]],
context_length: int = 77) -> torch.LongTensor:
if isinstance(texts, str):
texts = [texts]
sot_token = self.tokenizer.encoder['<|startoftext|>']
eot_token = self.tokenizer.encoder['<|endoftext|>']
all_tokens = [[sot_token] + self.tokenizer.encode(text) + [eot_token]
for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
st = torch.randint(len(tokens) - context_length + 1,
(1, ))[0].item()
tokens = tokens[st:st + context_length]
result[i, :len(tokens)] = torch.tensor(tokens)
return result
def forward(self, text):
text = self.tokenize(text)
text_features = self.clip.encode_text(text)
return text_features
def get_class_weight(original_caption, prompt_prefix='a '):
if isinstance(original_caption, str):
if original_caption == 'coco':
class_names = CocoDataset.METAINFO['classes']
elif original_caption == 'cityscapes':
class_names = CityscapesDataset.METAINFO['classes']
elif original_caption == 'voc':
class_names = VOCDataset.METAINFO['classes']
elif original_caption == 'openimages':
class_names = OpenImagesDataset.METAINFO['classes']
elif original_caption == 'lvis':
class_names = LVISV1Dataset.METAINFO['classes']
else:
if not original_caption.endswith('.'):
original_caption = original_caption + ' . '
original_caption = original_caption.split(' . ')
class_names = list(filter(lambda x: len(x) > 0, original_caption))
# for test.py
else:
class_names = list(original_caption)
text_encoder = CLIPTextEncoder()
text_encoder.eval()
texts = [prompt_prefix + x for x in class_names]
print_log(f'Computing text embeddings for {len(class_names)} classes.')
embeddings = text_encoder(texts).detach().permute(1, 0).contiguous().cpu()
return class_names, embeddings
def reset_cls_layer_weight(roi_head, weight):
if type(weight) == str:
print_log(f'Resetting cls_layer_weight from file: {weight}')
zs_weight = torch.tensor(
np.load(weight),
dtype=torch.float32).permute(1, 0).contiguous() # D x C
else:
zs_weight = weight
zs_weight = torch.cat(
[zs_weight, zs_weight.new_zeros(
(zs_weight.shape[0], 1))], dim=1) # D x (C + 1)
zs_weight = F.normalize(zs_weight, p=2, dim=0)
zs_weight = zs_weight.to('cuda')
num_classes = zs_weight.shape[-1]
for bbox_head in roi_head.bbox_head:
bbox_head.num_classes = num_classes
del bbox_head.fc_cls.zs_weight
bbox_head.fc_cls.zs_weight = zs_weight
@MODELS.register_module() | class Detic(CascadeRCNN): | 1 | 2023-12-11 15:23:03+00:00 | 12k |
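The `tokenize` method in the record above pads short token sequences with zeros and, when a sequence exceeds `context_length`, keeps a random contiguous window of it (via `torch.randint`). Below is a minimal pure-Python sketch of that pad-or-random-crop rule; the function name `fit_to_context` and the `random.Random` seeding are illustrative, not part of the repository.

```python
import random

def fit_to_context(tokens, context_length=77, seed=None):
    """Pad a token list with zeros up to context_length, or, when it is
    too long, keep a random contiguous window of context_length tokens
    (mirroring the torch.randint-based truncation in tokenize)."""
    rng = random.Random(seed)
    if len(tokens) > context_length:
        start = rng.randrange(len(tokens) - context_length + 1)
        tokens = tokens[start:start + context_length]
    return tokens + [0] * (context_length - len(tokens))

short = fit_to_context([1, 2, 3], context_length=5)
long_seq = fit_to_context(list(range(10)), context_length=5, seed=0)
print(short)          # [1, 2, 3, 0, 0]
print(len(long_seq))  # 5
```

In the real method the start-of-text and end-of-text tokens are appended before this step, so a random crop can drop them; the sketch only covers the length-fitting logic.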
merlresearch/PixPNet | pixpnet/protonets/prp/prp.py | [
{
"identifier": "AdaptiveAvgPool2DWrapperFct",
"path": "pixpnet/protonets/prp/lrp_general6.py",
"snippet": "class AdaptiveAvgPool2DWrapperFct(torch.autograd.Function):\n \"\"\"\n We can implement our own custom autograd Functions by subclassing\n torch.autograd.Function and implementing the for... | import copy
import torch
from collections import OrderedDict
from torch import nn
from torchvision import datasets
from pixpnet.protonets.prp.lrp_general6 import (
AdaptiveAvgPool2DWrapperFct,
Conv2DBeta0WrapperFct,
CosineDistLRPClass,
EltwiseSumStacked2EpsWrapperFct,
L2LRPClass,
LinearLayerEpsWrapperFct,
MaxPool2DWrapperFct,
ReluWrapperFct,
SigmoidWrapperFct,
SumStacked2,
bnafterconv_overwrite_intoconv,
get_lrpwrapperformodule,
resetbn,
)
from pixpnet.protonets.prp.resnet_features import BasicBlock, Bottleneck, ResNetFeatures | 10,000 | """
Copyright (c) 2022-2023 Mitsubishi Electric Research Laboratories (MERL)
Copyright (c) 2022 Srishti Gautam, Marina Hohne, Robert Jenssen, Michael Kampffmeyer
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-License-Identifier: MIT
"""
def imshow_im(hm, q=100):
hm = hm.squeeze().sum(dim=0).detach()
return hm
# partial replacement of BN, use own classes, no pretrained loading
class TorchModuleNotFoundError(Exception):
pass
class BasicBlockFused(BasicBlock):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlockFused, self).__init__(inplanes, planes, stride, downsample)
# own
| """
Copyright (c) 2022-2023 Mitsubishi Electric Research Laboratories (MERL)
Copyright (c) 2022 Srishti Gautam, Marina Hohne, Robert Jenssen, Michael Kampffmeyer
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-License-Identifier: MIT
"""
def imshow_im(hm, q=100):
hm = hm.squeeze().sum(dim=0).detach()
return hm
# partial replacement of BN, use own classes, no pretrained loading
class TorchModuleNotFoundError(Exception):
pass
class BasicBlockFused(BasicBlock):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlockFused, self).__init__(inplanes, planes, stride, downsample)
# own | self.elt = SumStacked2() # eltwisesum2() | 9 | 2023-12-06 23:49:31+00:00 | 12k |
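In `BasicBlockFused` above, the usual in-place residual addition is replaced by a dedicated `SumStacked2` module (`self.elt`), so the elementwise sum exists as a named layer that relevance-propagation wrappers can attach to. A dependency-free stand-in of that idea (no torch; the `hooks` list API here is illustrative, not the real `nn.Module` hook machinery):

```python
class SumStacked2:
    """Stand-in for the SumStacked2 module above: sums a 'stack' of two
    operands so the addition is a named layer that hooks can observe."""
    def __init__(self):
        self.hooks = []

    def __call__(self, stacked):
        a, b = stacked
        out = [x + y for x, y in zip(a, b)]
        for hook in self.hooks:
            hook(self, stacked, out)
        return out

calls = []
elt = SumStacked2()
elt.hooks.append(lambda mod, inp, out: calls.append(out))
print(elt([[1, 2], [10, 20]]))  # [11, 22]
print(len(calls))               # 1
```

In the real block, `out += identity` becomes a call on `self.elt`, which is what the LRP wrappers in `lrp_general6` then overwrite.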
dvmazur/mixtral-offloading | src/build_model.py | [
{
"identifier": "ExpertCache",
"path": "src/expert_cache.py",
"snippet": "class ExpertCache:\n def __init__(self, make_module: callable, main_size: int, offload_size: int, buffer_size: int):\n \"\"\"Dynamically loads an array of modules with identical hyperparameters\"\"\"\n self.module... | import os
import json
import typing as tp
import torch
from functools import cache
from dataclasses import dataclass
from torch import nn
from transformers import AutoConfig
from transformers.models.mixtral import MixtralForCausalLM, MixtralConfig
from safetensors.torch import load_file
from tqdm.auto import trange
from hqq.core.quantize import BaseQuantizeConfig
from .expert_cache import ExpertCache
from .expert_wrapper import MixtralExpertWrapper
from .custom_layers import (
HQQLinearTritonSavable,
MixtralBLockSparseTop2MLP_HQQ,
SparseMoeWrapper,
)
from .utils import with_default_dtype | 7,276 |
hidden_size = config.hidden_size
num_heads = config.num_attention_heads
head_dim = hidden_size // num_heads
num_key_value_heads = config.num_key_value_heads
shapes = [
(hidden_size, num_heads * head_dim),
(hidden_size, num_key_value_heads * head_dim),
(hidden_size, num_key_value_heads * head_dim),
(num_heads * head_dim, hidden_size),
]
shape_to_meta = {
shape: HQQLinearTritonSavable.get_hqq_meta(shape, attn_quant_config)
for shape in shapes
}
def patch_fct_hqq(shape, quant_config):
meta = shape_to_meta[shape]
layer = HQQLinearTritonSavable(None, quant_config, meta=meta)
return layer
for layer in model.model.layers:
layer.block_sparse_moe.gate = nn.Linear(
config.hidden_size,
config.num_local_experts,
dtype=torch.float16,
device=device,
bias=False,
)
layer.self_attn.q_proj = patch_fct_hqq(
(hidden_size, num_heads * head_dim), attn_quant_config
)
layer.self_attn.k_proj = patch_fct_hqq(
(hidden_size, num_key_value_heads * head_dim), attn_quant_config
)
layer.self_attn.v_proj = patch_fct_hqq(
(hidden_size, num_key_value_heads * head_dim), attn_quant_config
)
layer.self_attn.o_proj = patch_fct_hqq(
(hidden_size, num_heads * head_dim), attn_quant_config
)
@cache
def get_default_ffn_quant_config(ffn_dim: int = 14336, hidden_dim: int = 4096):
quant_config = BaseQuantizeConfig(
nbits=2,
group_size=16,
quant_zero=True,
quant_scale=True,
)
meta1 = HQQLinearTritonSavable.get_hqq_meta((hidden_dim, ffn_dim), quant_config)
meta2 = HQQLinearTritonSavable.get_hqq_meta((ffn_dim, hidden_dim), quant_config)
return quant_config, meta1, meta2
def make_empty_expert(
model_config: MixtralConfig, quant_config: QuantConfig
) -> MixtralBLockSparseTop2MLP_HQQ:
meta1, meta2 = quant_config.get_ffn_metas(
model_config.hidden_size, model_config.intermediate_size
)
return MixtralBLockSparseTop2MLP_HQQ(
model_config,
quant_config.ffn_config,
meta1,
meta2,
)
def make_and_load_expert_wrapper(
config: MixtralConfig,
quant_config: QuantConfig,
states_dir: str,
expert_uid: tuple[int, int],
device: torch.device,
) -> MixtralExpertWrapper:
layer_idx, expert_idx = expert_uid
index_path = os.path.join(states_dir, "model.safetensors.index.json")
with open(index_path) as f:
module_idx = f"model.layers.{layer_idx}.block_sparse_moe.experts.{expert_idx}"
state_fpath = json.load(f)["weight_map"][f"{module_idx}.w1.W_q"]
state_dict = load_file(os.path.join(states_dir, state_fpath), device=str(device))
expert = make_empty_expert(config, quant_config)
expert.load_state_dict(state_dict, strict=True)
return MixtralExpertWrapper(expert, device)
def load_00_expert_state_dict(states_dir: str, device: torch.device):
index_path = os.path.join(states_dir, "model.safetensors.index.json")
with open(index_path) as f:
module_idx = f"model.layers.0.block_sparse_moe.experts.0"
state_fpath = json.load(f)["weight_map"][f"{module_idx}.w1.W_q"]
return load_file(os.path.join(states_dir, state_fpath), device=str(device))
def build_model(
device: torch.device,
quant_config: QuantConfig,
offload_config: OffloadConfig,
state_path: str,
):
model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"
state_dict_00 = load_00_expert_state_dict(state_path, device)
def _make_module():
config = AutoConfig.from_pretrained(model_name)
expert = make_empty_expert(config, quant_config)
expert.load_state_dict(state_dict_00)
return MixtralExpertWrapper(expert, device=device)
|
@dataclass(frozen=True)
class OffloadConfig:
main_size: int
offload_size: int
buffer_size: int
offload_per_layer: int
class QuantConfig:
def __init__(
self,
ffn_config: BaseQuantizeConfig,
attn_config: BaseQuantizeConfig,
):
self.ffn_config = ffn_config
self.attn_config = attn_config
@cache
def get_ffn_metas(self, hidden_dim: int, ffn_dim: int) -> tuple[tp.Any, tp.Any]:
return (
HQQLinearTritonSavable.get_hqq_meta((hidden_dim, ffn_dim), self.ffn_config),
HQQLinearTritonSavable.get_hqq_meta((ffn_dim, hidden_dim), self.ffn_config),
)
def replace_attn_layers(
model: MixtralForCausalLM,
config: MixtralConfig,
quant_config: QuantConfig,
device: torch.device,
) -> None:
attn_quant_config = quant_config.attn_config
hidden_size = config.hidden_size
num_heads = config.num_attention_heads
head_dim = hidden_size // num_heads
num_key_value_heads = config.num_key_value_heads
shapes = [
(hidden_size, num_heads * head_dim),
(hidden_size, num_key_value_heads * head_dim),
(hidden_size, num_key_value_heads * head_dim),
(num_heads * head_dim, hidden_size),
]
shape_to_meta = {
shape: HQQLinearTritonSavable.get_hqq_meta(shape, attn_quant_config)
for shape in shapes
}
def patch_fct_hqq(shape, quant_config):
meta = shape_to_meta[shape]
layer = HQQLinearTritonSavable(None, quant_config, meta=meta)
return layer
for layer in model.model.layers:
layer.block_sparse_moe.gate = nn.Linear(
config.hidden_size,
config.num_local_experts,
dtype=torch.float16,
device=device,
bias=False,
)
layer.self_attn.q_proj = patch_fct_hqq(
(hidden_size, num_heads * head_dim), attn_quant_config
)
layer.self_attn.k_proj = patch_fct_hqq(
(hidden_size, num_key_value_heads * head_dim), attn_quant_config
)
layer.self_attn.v_proj = patch_fct_hqq(
(hidden_size, num_key_value_heads * head_dim), attn_quant_config
)
layer.self_attn.o_proj = patch_fct_hqq(
(hidden_size, num_heads * head_dim), attn_quant_config
)
@cache
def get_default_ffn_quant_config(ffn_dim: int = 14336, hidden_dim: int = 4096):
quant_config = BaseQuantizeConfig(
nbits=2,
group_size=16,
quant_zero=True,
quant_scale=True,
)
meta1 = HQQLinearTritonSavable.get_hqq_meta((hidden_dim, ffn_dim), quant_config)
meta2 = HQQLinearTritonSavable.get_hqq_meta((ffn_dim, hidden_dim), quant_config)
return quant_config, meta1, meta2
def make_empty_expert(
model_config: MixtralConfig, quant_config: QuantConfig
) -> MixtralBLockSparseTop2MLP_HQQ:
meta1, meta2 = quant_config.get_ffn_metas(
model_config.hidden_size, model_config.intermediate_size
)
return MixtralBLockSparseTop2MLP_HQQ(
model_config,
quant_config.ffn_config,
meta1,
meta2,
)
def make_and_load_expert_wrapper(
config: MixtralConfig,
quant_config: QuantConfig,
states_dir: str,
expert_uid: tuple[int, int],
device: torch.device,
) -> MixtralExpertWrapper:
layer_idx, expert_idx = expert_uid
index_path = os.path.join(states_dir, "model.safetensors.index.json")
with open(index_path) as f:
module_idx = f"model.layers.{layer_idx}.block_sparse_moe.experts.{expert_idx}"
state_fpath = json.load(f)["weight_map"][f"{module_idx}.w1.W_q"]
state_dict = load_file(os.path.join(states_dir, state_fpath), device=str(device))
expert = make_empty_expert(config, quant_config)
expert.load_state_dict(state_dict, strict=True)
return MixtralExpertWrapper(expert, device)
def load_00_expert_state_dict(states_dir: str, device: torch.device):
index_path = os.path.join(states_dir, "model.safetensors.index.json")
with open(index_path) as f:
module_idx = f"model.layers.0.block_sparse_moe.experts.0"
state_fpath = json.load(f)["weight_map"][f"{module_idx}.w1.W_q"]
return load_file(os.path.join(states_dir, state_fpath), device=str(device))
def build_model(
device: torch.device,
quant_config: QuantConfig,
offload_config: OffloadConfig,
state_path: str,
):
model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"
state_dict_00 = load_00_expert_state_dict(state_path, device)
def _make_module():
config = AutoConfig.from_pretrained(model_name)
expert = make_empty_expert(config, quant_config)
expert.load_state_dict(state_dict_00)
return MixtralExpertWrapper(expert, device=device)
| with device, with_default_dtype(torch.float16): | 5 | 2023-12-15 03:32:35+00:00 | 12k |
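`replace_attn_layers` above derives one `(in_features, out_features)` shape per attention projection from the Mixtral config, with k/v sharing a smaller shape because of grouped-query attention. The arithmetic can be checked in isolation (pure Python; the default numbers in the example are Mixtral-8x7B's published config values):

```python
def attn_proj_shapes(hidden_size, num_heads, num_key_value_heads):
    """Reproduce the `shapes` list built in replace_attn_layers: the
    (in, out) dimensions of the q/k/v/o projections for grouped-query
    attention (pure arithmetic, no model required)."""
    head_dim = hidden_size // num_heads
    return [
        (hidden_size, num_heads * head_dim),            # q_proj
        (hidden_size, num_key_value_heads * head_dim),  # k_proj
        (hidden_size, num_key_value_heads * head_dim),  # v_proj
        (num_heads * head_dim, hidden_size),            # o_proj
    ]

# Mixtral-8x7B: hidden 4096, 32 heads, 8 KV heads
print(attn_proj_shapes(4096, 32, 8))
# [(4096, 4096), (4096, 1024), (4096, 1024), (4096, 4096)]
```

Because only four distinct shapes occur, the code above builds `shape_to_meta` once and reuses each HQQ meta across all layers.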
open-mmlab/PIA | predict.py | [
{
"identifier": "I2VPipeline",
"path": "animatediff/pipelines/i2v_pipeline.py",
"snippet": "class I2VPipeline(DiffusionPipeline, IPAdapterMixin, TextualInversionLoaderMixin):\n _optional_components = []\n\n def __init__(\n self,\n vae: AutoencoderKL,\n text_encoder: CLIPTextMo... | import os
import os.path as osp
import numpy as np
import torch
from glob import glob
from omegaconf import OmegaConf
from PIL import Image
from cog import BasePredictor, Input, Path
from animatediff.pipelines import I2VPipeline
from animatediff.utils.util import save_videos_grid | 8,363 | # Prediction interface for Cog ⚙️
# https://github.com/replicate/cog/blob/main/docs/python.md
N_PROMPT = (
"wrong white balance, dark, sketches,worst quality,low quality, "
"deformed, distorted, disfigured, bad eyes, wrong lips, "
"weird mouth, bad teeth, mutated hands and fingers, bad anatomy,"
"wrong anatomy, amputation, extra limb, missing limb, "
"floating,limbs, disconnected limbs, mutation, ugly, disgusting, "
"bad_pictures, negative_hand-neg"
)
BASE_CONFIG = "example/config/base.yaml"
STYLE_CONFIG_LIST = {
"realistic": "example/replicate/1-realistic.yaml",
"3d_cartoon": "example/replicate/3-3d.yaml",
}
PIA_PATH = "models/PIA"
VAE_PATH = "models/VAE"
DreamBooth_LoRA_PATH = "models/DreamBooth_LoRA"
STABLE_DIFFUSION_PATH = "models/StableDiffusion"
class Predictor(BasePredictor):
def setup(self) -> None:
"""Load the model into memory to make running multiple predictions efficient"""
self.ip_adapter_dir = (
"models/IP_Adapter/h94/IP-Adapter/models" # cached h94/IP-Adapter
)
self.inference_config = OmegaConf.load("example/config/base.yaml")
self.stable_diffusion_dir = self.inference_config.pretrained_model_path
self.pia_path = self.inference_config.generate.model_path
self.style_configs = {
k: OmegaConf.load(v) for k, v in STYLE_CONFIG_LIST.items()
}
self.pipeline_dict = self.load_model_list()
def load_model_list(self):
pipeline_dict = dict()
for style, cfg in self.style_configs.items():
print(f"Loading {style}")
dreambooth_path = cfg.get("dreambooth", "none")
if dreambooth_path and dreambooth_path.upper() != "NONE":
dreambooth_path = osp.join(DreamBooth_LoRA_PATH, dreambooth_path)
lora_path = cfg.get("lora", None)
if lora_path is not None:
lora_path = osp.join(DreamBooth_LoRA_PATH, lora_path)
lora_alpha = cfg.get("lora_alpha", 0.0)
vae_path = cfg.get("vae", None)
if vae_path is not None:
vae_path = osp.join(VAE_PATH, vae_path)
| # Prediction interface for Cog ⚙️
# https://github.com/replicate/cog/blob/main/docs/python.md
N_PROMPT = (
"wrong white balance, dark, sketches,worst quality,low quality, "
"deformed, distorted, disfigured, bad eyes, wrong lips, "
"weird mouth, bad teeth, mutated hands and fingers, bad anatomy,"
"wrong anatomy, amputation, extra limb, missing limb, "
"floating,limbs, disconnected limbs, mutation, ugly, disgusting, "
"bad_pictures, negative_hand-neg"
)
BASE_CONFIG = "example/config/base.yaml"
STYLE_CONFIG_LIST = {
"realistic": "example/replicate/1-realistic.yaml",
"3d_cartoon": "example/replicate/3-3d.yaml",
}
PIA_PATH = "models/PIA"
VAE_PATH = "models/VAE"
DreamBooth_LoRA_PATH = "models/DreamBooth_LoRA"
STABLE_DIFFUSION_PATH = "models/StableDiffusion"
class Predictor(BasePredictor):
def setup(self) -> None:
"""Load the model into memory to make running multiple predictions efficient"""
self.ip_adapter_dir = (
"models/IP_Adapter/h94/IP-Adapter/models" # cached h94/IP-Adapter
)
self.inference_config = OmegaConf.load("example/config/base.yaml")
self.stable_diffusion_dir = self.inference_config.pretrained_model_path
self.pia_path = self.inference_config.generate.model_path
self.style_configs = {
k: OmegaConf.load(v) for k, v in STYLE_CONFIG_LIST.items()
}
self.pipeline_dict = self.load_model_list()
def load_model_list(self):
pipeline_dict = dict()
for style, cfg in self.style_configs.items():
print(f"Loading {style}")
dreambooth_path = cfg.get("dreambooth", "none")
if dreambooth_path and dreambooth_path.upper() != "NONE":
dreambooth_path = osp.join(DreamBooth_LoRA_PATH, dreambooth_path)
lora_path = cfg.get("lora", None)
if lora_path is not None:
lora_path = osp.join(DreamBooth_LoRA_PATH, lora_path)
lora_alpha = cfg.get("lora_alpha", 0.0)
vae_path = cfg.get("vae", None)
if vae_path is not None:
vae_path = osp.join(VAE_PATH, vae_path)
| pipeline_dict[style] = I2VPipeline.build_pipeline( | 0 | 2023-12-21 03:29:34+00:00 | 12k |
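`load_model_list` above treats several per-style config entries as optional: a `dreambooth` value of `'none'` (any case) is skipped, and bare filenames are joined onto fixed checkpoint roots. A small sketch of that resolution step (the root path and the filenames in the example are illustrative):

```python
import posixpath

def resolve_style_paths(cfg, lora_root="models/DreamBooth_LoRA"):
    """Mimic the optional-path handling in load_model_list: 'none' or
    missing dreambooth entries become None, and relative names are
    joined onto the checkpoint root."""
    dreambooth = cfg.get("dreambooth", "none")
    if not dreambooth or dreambooth.upper() == "NONE":
        dreambooth = None
    else:
        dreambooth = posixpath.join(lora_root, dreambooth)
    lora = cfg.get("lora")
    if lora is not None:
        lora = posixpath.join(lora_root, lora)
    return {"dreambooth": dreambooth, "lora": lora,
            "lora_alpha": cfg.get("lora_alpha", 0.0)}

print(resolve_style_paths({"dreambooth": "style.safetensors"}))
print(resolve_style_paths({"dreambooth": "NONE", "lora": "motion.ckpt", "lora_alpha": 0.8}))
```

The real method does the same with `osp.join` and additionally resolves an optional `vae` entry against `VAE_PATH`.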
xinghaochen/TinySAM | tinysam/hierarchical_mask_generator.py | [
{
"identifier": "Sam",
"path": "tinysam/modeling/sam.py",
"snippet": "class Sam(nn.Module):\n mask_threshold: float = 0.0\n image_format: str = \"RGB\"\n\n def __init__(\n self,\n image_encoder: Union[ImageEncoderViT, TinyViT],\n prompt_encoder: PromptEncoder,\n mask... | import numpy as np
import torch
import cv2 # type: ignore # noqa: F401
from torchvision.ops.boxes import batched_nms, box_area # type: ignore
from typing import Any, Dict, List, Optional, Tuple
from .modeling import Sam
from .predictor import SamPredictor
from .utils.amg import (
MaskData,
area_from_rle,
batch_iterator,
batched_mask_to_box,
box_xyxy_to_xywh,
build_all_layer_point_grids,
calculate_stability_score,
coco_encode_rle,
generate_crop_boxes,
is_box_near_crop_edge,
mask_to_rle_pytorch,
remove_small_regions,
rle_to_mask,
uncrop_boxes_xyxy,
uncrop_masks,
uncrop_points,
)
from pycocotools import mask as mask_utils # type: ignore # noqa: F401 | 9,084 | # Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
class SamHierarchicalMaskGenerator:
def __init__(
self,
model: Sam,
points_per_side: Optional[int] = 32,
points_per_batch: int = 64,
pred_iou_thresh: float = 0.88,
high_score_thresh: float = 8.5,
stability_score_thresh: float = 0.95,
stability_score_offset: float = 1.0,
box_nms_thresh: float = 0.7,
crop_n_layers: int = 0,
crop_nms_thresh: float = 0.7,
crop_overlap_ratio: float = 512 / 1500,
crop_n_points_downscale_factor: int = 1,
point_grids: Optional[List[np.ndarray]] = None,
min_mask_region_area: int = 0,
output_mode: str = "binary_mask",
) -> None:
"""
Using a SAM model, generates masks for the entire image.
Generates a grid of point prompts over the image, then filters
low quality and duplicate masks. The default settings are chosen
for SAM with a ViT-H backbone.
Arguments:
model (Sam): The SAM model to use for mask prediction.
points_per_side (int or None): The number of points to be sampled
along one side of the image. The total number of points is
points_per_side**2. If None, 'point_grids' must provide explicit
point sampling.
points_per_batch (int): Sets the number of points run simultaneously
by the model. Higher numbers may be faster but use more GPU memory.
pred_iou_thresh (float): A filtering threshold in [0,1], using the
model's predicted mask quality.
high_score_thresh (float): A filtering threshold in [-inf,inf], used to
find the unmasked area for the next round of generation.
stability_score_thresh (float): A filtering threshold in [0,1], using
the stability of the mask under changes to the cutoff used to binarize
the model's mask predictions.
stability_score_offset (float): The amount to shift the cutoff when
calculating the stability score.
box_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks.
crop_n_layers (int): If >0, mask prediction will be run again on
crops of the image. Sets the number of layers to run, where each
layer has 2**i_layer number of image crops.
crop_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks between different crops.
crop_overlap_ratio (float): Sets the degree to which crops overlap.
In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
crop_n_points_downscale_factor (int): The number of points-per-side
sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
point_grids (list(np.ndarray) or None): A list over explicit grids
of points used for sampling, normalized to [0,1]. The nth grid in the
list is used in the nth crop layer. Exclusive with points_per_side.
min_mask_region_area (int): If >0, postprocessing will be applied
to remove disconnected regions and holes in masks with area smaller
than min_mask_region_area. Requires opencv.
output_mode (str): The form masks are returned in. Can be 'binary_mask',
'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools.
For large resolutions, 'binary_mask' may consume large amounts of
memory.
"""
assert (points_per_side is None) != (
point_grids is None
), "Exactly one of points_per_side or point_grid must be provided."
if points_per_side is not None:
| # Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
class SamHierarchicalMaskGenerator:
def __init__(
self,
model: Sam,
points_per_side: Optional[int] = 32,
points_per_batch: int = 64,
pred_iou_thresh: float = 0.88,
high_score_thresh: float = 8.5,
stability_score_thresh: float = 0.95,
stability_score_offset: float = 1.0,
box_nms_thresh: float = 0.7,
crop_n_layers: int = 0,
crop_nms_thresh: float = 0.7,
crop_overlap_ratio: float = 512 / 1500,
crop_n_points_downscale_factor: int = 1,
point_grids: Optional[List[np.ndarray]] = None,
min_mask_region_area: int = 0,
output_mode: str = "binary_mask",
) -> None:
"""
Using a SAM model, generates masks for the entire image.
Generates a grid of point prompts over the image, then filters
low quality and duplicate masks. The default settings are chosen
for SAM with a ViT-H backbone.
Arguments:
model (Sam): The SAM model to use for mask prediction.
points_per_side (int or None): The number of points to be sampled
along one side of the image. The total number of points is
points_per_side**2. If None, 'point_grids' must provide explicit
point sampling.
points_per_batch (int): Sets the number of points run simultaneously
by the model. Higher numbers may be faster but use more GPU memory.
pred_iou_thresh (float): A filtering threshold in [0,1], using the
model's predicted mask quality.
high_score_thresh (float): A filtering threshold in [-inf,inf], used to
find the unmasked area for the next round of generation.
stability_score_thresh (float): A filtering threshold in [0,1], using
the stability of the mask under changes to the cutoff used to binarize
the model's mask predictions.
stability_score_offset (float): The amount to shift the cutoff when
calculating the stability score.
box_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks.
crop_n_layers (int): If >0, mask prediction will be run again on
crops of the image. Sets the number of layers to run, where each
layer has 2**i_layer number of image crops.
crop_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks between different crops.
crop_overlap_ratio (float): Sets the degree to which crops overlap.
In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
crop_n_points_downscale_factor (int): The number of points-per-side
sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
point_grids (list(np.ndarray) or None): A list over explicit grids
of points used for sampling, normalized to [0,1]. The nth grid in the
list is used in the nth crop layer. Exclusive with points_per_side.
min_mask_region_area (int): If >0, postprocessing will be applied
to remove disconnected regions and holes in masks with area smaller
than min_mask_region_area. Requires opencv.
output_mode (str): The form masks are returned in. Can be 'binary_mask',
'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools.
For large resolutions, 'binary_mask' may consume large amounts of
memory.
"""
assert (points_per_side is None) != (
point_grids is None
), "Exactly one of points_per_side or point_grid must be provided."
if points_per_side is not None: | self.point_grids = build_all_layer_point_grids( | 7 | 2023-12-19 11:25:54+00:00 | 12k |
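When `points_per_side` is given, the generator builds a uniform grid of `points_per_side**2` prompt points over the unit square via `build_all_layer_point_grids`. A list-based sketch of a single layer's grid, with the half-cell offset that centers points in their cells (numpy-free; it mirrors SAM's layout but is not the library function itself):

```python
def build_point_grid(n_per_side):
    """Evenly spaced n_per_side**2 points in [0, 1] x [0, 1], offset by
    half a cell so points sit at cell centers rather than on the edges."""
    offset = 1 / (2 * n_per_side)
    coords = [offset + i / n_per_side for i in range(n_per_side)]
    return [(x, y) for y in coords for x in coords]

grid = build_point_grid(4)
print(len(grid))  # 16
print(grid[0])    # (0.125, 0.125)
```

For the hierarchical generator, later crop layers scale the side count down by `crop_n_points_downscale_factor**n`, so each layer gets its own, sparser grid.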
dcharatan/pixelsplat | src/model/encoder/visualization/encoder_visualizer_epipolar.py | [
{
"identifier": "BatchedViews",
"path": "src/dataset/types.py",
"snippet": "class BatchedViews(TypedDict, total=False):\n extrinsics: Float[Tensor, \"batch _ 4 4\"] # batch view 4 4\n intrinsics: Float[Tensor, \"batch _ 3 3\"] # batch view 3 3\n image: Float[Tensor, \"batch _ _ _ _\"] # batc... | from pathlib import Path
from random import randrange
from typing import Optional
from einops import rearrange, reduce, repeat
from jaxtyping import Bool, Float
from torch import Tensor
from ....dataset.types import BatchedViews
from ....misc.heterogeneous_pairings import generate_heterogeneous_index
from ....visualization.annotation import add_label
from ....visualization.color_map import apply_color_map, apply_color_map_to_image
from ....visualization.colors import get_distinct_color
from ....visualization.drawing.lines import draw_lines
from ....visualization.drawing.points import draw_points
from ....visualization.layout import add_border, hcat, vcat
from ...ply_export import export_ply
from ..encoder_epipolar import EncoderEpipolar
from ..epipolar.epipolar_sampler import EpipolarSampling
from .encoder_visualizer import EncoderVisualizer
from .encoder_visualizer_epipolar_cfg import EncoderVisualizerEpipolarCfg
import numpy as np
import torch
import wandb | 7,992 | attention, "l (b v r) hd () s -> l b v r hd s", b=b, v=v, r=r
)
attention = attention[:, rb, rv, rr, :, :]
num_layers, _, hd, _ = attention.shape
vis = []
for il in range(num_layers):
vis_layer = []
for ihd in range(hd):
# Create colors according to attention.
color = [get_distinct_color(i) for i, _ in enumerate(rr)]
color = torch.tensor(color, device=attention.device)
color = rearrange(color, "r c -> r () c")
attn = rearrange(attention[il, :, ihd], "r s -> r s ()")
color = rearrange(attn * color, "r s c -> (r s) c")
# Draw the alternating bucket lines.
vis_layer_head = draw_lines(
context_images[rb, self.encoder.sampler.index_v[rv, rov]],
rearrange(
sampling.xy_sample_near[rb, rv, rov, rr], "r s xy -> (r s) xy"
),
rearrange(
sampling.xy_sample_far[rb, rv, rov, rr], "r s xy -> (r s) xy"
),
color,
3,
cap="butt",
x_range=(0, 1),
y_range=(0, 1),
)
vis_layer.append(vis_layer_head)
vis.append(add_label(vcat(*vis_layer), f"Layer {il}"))
vis = add_label(add_border(add_border(hcat(*vis)), 1, 0), "Keys & Values")
vis = add_border(hcat(add_label(ray_view), vis, align="top"))
return vis
def visualize_depth(
self,
context: BatchedViews,
multi_depth: Float[Tensor, "batch view height width surface spp"],
) -> Float[Tensor, "3 vis_width vis_height"]:
multi_vis = []
*_, srf, _ = multi_depth.shape
for i in range(srf):
depth = multi_depth[..., i, :]
depth = depth.mean(dim=-1)
# Compute relative depth and disparity.
near = rearrange(context["near"], "b v -> b v () ()")
far = rearrange(context["far"], "b v -> b v () ()")
relative_depth = (depth - near) / (far - near)
relative_disparity = 1 - (1 / depth - 1 / far) / (1 / near - 1 / far)
relative_depth = apply_color_map_to_image(relative_depth, "turbo")
relative_depth = vcat(*[hcat(*x) for x in relative_depth])
relative_depth = add_label(relative_depth, "Depth")
relative_disparity = apply_color_map_to_image(relative_disparity, "turbo")
relative_disparity = vcat(*[hcat(*x) for x in relative_disparity])
relative_disparity = add_label(relative_disparity, "Disparity")
multi_vis.append(add_border(hcat(relative_depth, relative_disparity)))
return add_border(vcat(*multi_vis))
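`visualize_depth` above normalizes depth linearly between the near and far planes and applies the matching normalization in disparity (inverse-depth) space, flipped so both maps run from 0 at the near plane to 1 at the far plane. The scalar form of those two formulas (a sketch of the tensor math, not the method itself):

```python
def relative_depth_and_disparity(depth, near, far):
    """Depth mapped linearly between near/far, and disparity (1/depth)
    normalized and flipped so it shares depth's orientation:
    both return 0 at the near plane and 1 at the far plane."""
    rel_depth = (depth - near) / (far - near)
    rel_disparity = 1 - (1 / depth - 1 / far) / (1 / near - 1 / far)
    return rel_depth, rel_disparity

print(relative_depth_and_disparity(1.0, 1.0, 100.0))    # (0.0, 0.0) at near
print(relative_depth_and_disparity(100.0, 1.0, 100.0))  # (1.0, 1.0) at far
```

The two maps differ in between: relative disparity allocates more of the color range to nearby geometry, which is why both are rendered side by side.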
def visualize_overlaps(
self,
context_images: Float[Tensor, "batch view 3 height width"],
sampling: EpipolarSampling,
is_monocular: Optional[Bool[Tensor, "batch view height width"]] = None,
) -> Float[Tensor, "3 vis_width vis_height"]:
device = context_images.device
b, v, _, h, w = context_images.shape
green = torch.tensor([0.235, 0.706, 0.294], device=device)[..., None, None]
rb = randrange(b)
valid = sampling.valid[rb].float()
ds = self.encoder.cfg.epipolar_transformer.downscale
valid = repeat(
valid,
"v ov (h w) -> v ov c (h rh) (w rw)",
c=3,
h=h // ds,
w=w // ds,
rh=ds,
rw=ds,
)
if is_monocular is not None:
is_monocular = is_monocular[rb].float()
is_monocular = repeat(is_monocular, "v h w -> v c h w", c=3, h=h, w=w)
# Select context images in grid.
context_images = context_images[rb]
index, _ = generate_heterogeneous_index(v)
valid = valid * (green + context_images[index]) / 2
vis = vcat(*(hcat(im, hcat(*v)) for im, v in zip(context_images, valid)))
vis = add_label(vis, "Context Overlaps")
if is_monocular is not None:
vis = hcat(vis, add_label(vcat(*is_monocular), "Monocular?"))
return add_border(vis)
def visualize_gaussians(
self,
context_images: Float[Tensor, "batch view 3 height width"],
opacities: Float[Tensor, "batch vrspp"],
covariances: Float[Tensor, "batch vrspp 3 3"],
colors: Float[Tensor, "batch vrspp 3"],
) -> Float[Tensor, "3 vis_height vis_width"]:
b, v, _, h, w = context_images.shape
rb = randrange(b)
context_images = context_images[rb]
opacities = repeat(
opacities[rb], "(v h w spp) -> spp v c h w", v=v, c=3, h=h, w=w
)
colors = rearrange(colors[rb], "(v h w spp) c -> spp v c h w", v=v, h=h, w=w)
        # Color-map Gaussian covariances.
det = covariances[rb].det()
|
def box(
image: Float[Tensor, "3 height width"],
) -> Float[Tensor, "3 new_height new_width"]:
return add_border(add_border(image), 1, 0)
class EncoderVisualizerEpipolar(
EncoderVisualizer[EncoderVisualizerEpipolarCfg, EncoderEpipolar]
):
def visualize(
self,
context: BatchedViews,
global_step: int,
) -> dict[str, Float[Tensor, "3 _ _"]]:
# Short-circuit execution when ablating the epipolar transformer.
if self.encoder.epipolar_transformer is None:
return {}
visualization_dump = {}
softmax_weights = []
def hook(module, input, output):
softmax_weights.append(output)
# Register hooks to grab attention.
handles = [
layer[0].fn.attend.register_forward_hook(hook)
for layer in self.encoder.epipolar_transformer.transformer.layers
]
result = self.encoder.forward(
context,
global_step,
visualization_dump=visualization_dump,
deterministic=True,
)
# De-register hooks.
for handle in handles:
handle.remove()
softmax_weights = torch.stack(softmax_weights)
# Generate high-resolution context images that can be drawn on.
context_images = context["image"]
_, _, _, h, w = context_images.shape
length = min(h, w)
min_resolution = self.cfg.min_resolution
scale_multiplier = (min_resolution + length - 1) // length
if scale_multiplier > 1:
context_images = repeat(
context_images,
"b v c h w -> b v c (h rh) (w rw)",
rh=scale_multiplier,
rw=scale_multiplier,
)
# This is kind of hacky for now, since we're using it for short experiments.
if self.cfg.export_ply and wandb.run is not None:
name = wandb.run._name.split(" ")[0]
ply_path = Path(f"outputs/gaussians/{name}/{global_step:0>6}.ply")
export_ply(
context["extrinsics"][0, 0],
result.means[0],
visualization_dump["scales"][0],
visualization_dump["rotations"][0],
result.harmonics[0],
result.opacities[0],
ply_path,
)
return {
# "attention": self.visualize_attention(
# context_images,
# visualization_dump["sampling"],
# softmax_weights,
# ),
"epipolar_samples": self.visualize_epipolar_samples(
context_images,
visualization_dump["sampling"],
),
"epipolar_color_samples": self.visualize_epipolar_color_samples(
context_images,
context,
),
"gaussians": self.visualize_gaussians(
context["image"],
result.opacities,
result.covariances,
result.harmonics[..., 0], # Just visualize DC component.
),
"overlaps": self.visualize_overlaps(
context["image"],
visualization_dump["sampling"],
visualization_dump.get("is_monocular", None),
),
"depth": self.visualize_depth(
context,
visualization_dump["depth"],
),
}
def visualize_attention(
self,
context_images: Float[Tensor, "batch view 3 height width"],
sampling: EpipolarSampling,
attention: Float[Tensor, "layer bvr head 1 sample"],
) -> Float[Tensor, "3 vis_height vis_width"]:
device = context_images.device
# Pick a random batch element, view, and other view.
b, v, ov, r, s, _ = sampling.xy_sample.shape
rb = randrange(b)
rv = randrange(v)
rov = randrange(ov)
num_samples = self.cfg.num_samples
rr = np.random.choice(r, num_samples, replace=False)
rr = torch.tensor(rr, dtype=torch.int64, device=device)
# Visualize the rays in the ray view.
ray_view = draw_points(
context_images[rb, rv],
sampling.xy_ray[rb, rv, rr],
0,
radius=4,
x_range=(0, 1),
y_range=(0, 1),
)
ray_view = draw_points(
ray_view,
sampling.xy_ray[rb, rv, rr],
[get_distinct_color(i) for i, _ in enumerate(rr)],
radius=3,
x_range=(0, 1),
y_range=(0, 1),
)
# Visualize attention in the sample view.
attention = rearrange(
attention, "l (b v r) hd () s -> l b v r hd s", b=b, v=v, r=r
)
attention = attention[:, rb, rv, rr, :, :]
num_layers, _, hd, _ = attention.shape
vis = []
for il in range(num_layers):
vis_layer = []
for ihd in range(hd):
# Create colors according to attention.
color = [get_distinct_color(i) for i, _ in enumerate(rr)]
color = torch.tensor(color, device=attention.device)
color = rearrange(color, "r c -> r () c")
attn = rearrange(attention[il, :, ihd], "r s -> r s ()")
                color = rearrange(attn * color, "r s c -> (r s) c")
# Draw the alternating bucket lines.
vis_layer_head = draw_lines(
context_images[rb, self.encoder.sampler.index_v[rv, rov]],
rearrange(
sampling.xy_sample_near[rb, rv, rov, rr], "r s xy -> (r s) xy"
),
rearrange(
sampling.xy_sample_far[rb, rv, rov, rr], "r s xy -> (r s) xy"
),
color,
3,
cap="butt",
x_range=(0, 1),
y_range=(0, 1),
)
vis_layer.append(vis_layer_head)
vis.append(add_label(vcat(*vis_layer), f"Layer {il}"))
vis = add_label(add_border(add_border(hcat(*vis)), 1, 0), "Keys & Values")
vis = add_border(hcat(add_label(ray_view), vis, align="top"))
return vis
def visualize_depth(
self,
context: BatchedViews,
multi_depth: Float[Tensor, "batch view height width surface spp"],
) -> Float[Tensor, "3 vis_width vis_height"]:
multi_vis = []
*_, srf, _ = multi_depth.shape
for i in range(srf):
depth = multi_depth[..., i, :]
depth = depth.mean(dim=-1)
# Compute relative depth and disparity.
near = rearrange(context["near"], "b v -> b v () ()")
far = rearrange(context["far"], "b v -> b v () ()")
relative_depth = (depth - near) / (far - near)
relative_disparity = 1 - (1 / depth - 1 / far) / (1 / near - 1 / far)
relative_depth = apply_color_map_to_image(relative_depth, "turbo")
relative_depth = vcat(*[hcat(*x) for x in relative_depth])
relative_depth = add_label(relative_depth, "Depth")
relative_disparity = apply_color_map_to_image(relative_disparity, "turbo")
relative_disparity = vcat(*[hcat(*x) for x in relative_disparity])
relative_disparity = add_label(relative_disparity, "Disparity")
multi_vis.append(add_border(hcat(relative_depth, relative_disparity)))
return add_border(vcat(*multi_vis))
def visualize_overlaps(
self,
context_images: Float[Tensor, "batch view 3 height width"],
sampling: EpipolarSampling,
is_monocular: Optional[Bool[Tensor, "batch view height width"]] = None,
) -> Float[Tensor, "3 vis_width vis_height"]:
device = context_images.device
b, v, _, h, w = context_images.shape
green = torch.tensor([0.235, 0.706, 0.294], device=device)[..., None, None]
rb = randrange(b)
valid = sampling.valid[rb].float()
ds = self.encoder.cfg.epipolar_transformer.downscale
valid = repeat(
valid,
"v ov (h w) -> v ov c (h rh) (w rw)",
c=3,
h=h // ds,
w=w // ds,
rh=ds,
rw=ds,
)
if is_monocular is not None:
is_monocular = is_monocular[rb].float()
is_monocular = repeat(is_monocular, "v h w -> v c h w", c=3, h=h, w=w)
# Select context images in grid.
context_images = context_images[rb]
index, _ = generate_heterogeneous_index(v)
valid = valid * (green + context_images[index]) / 2
vis = vcat(*(hcat(im, hcat(*v)) for im, v in zip(context_images, valid)))
vis = add_label(vis, "Context Overlaps")
if is_monocular is not None:
vis = hcat(vis, add_label(vcat(*is_monocular), "Monocular?"))
return add_border(vis)
def visualize_gaussians(
self,
context_images: Float[Tensor, "batch view 3 height width"],
opacities: Float[Tensor, "batch vrspp"],
covariances: Float[Tensor, "batch vrspp 3 3"],
colors: Float[Tensor, "batch vrspp 3"],
) -> Float[Tensor, "3 vis_height vis_width"]:
b, v, _, h, w = context_images.shape
rb = randrange(b)
context_images = context_images[rb]
opacities = repeat(
opacities[rb], "(v h w spp) -> spp v c h w", v=v, c=3, h=h, w=w
)
colors = rearrange(colors[rb], "(v h w spp) c -> spp v c h w", v=v, h=h, w=w)
        # Color-map Gaussian covariances.
det = covariances[rb].det() | det = apply_color_map(det / det.max(), "inferno") | 3 | 2023-12-20 19:45:59+00:00 | 12k |
hutaiHang/Faster-Diffusion | if_demo.py | [
{
"identifier": "register_if1",
"path": "utils_if.py",
"snippet": "def register_if1(pipe):\r\n def new_call(self):\r\n @torch.no_grad()\r\n def call(\r\n prompt: Union[str, List[str]] = None,\r\n num_inference_steps: int = 100,\r\n timesteps: List[int] =... | from diffusers import DiffusionPipeline , IFPipeline, IFSuperResolutionPipeline, StableDiffusionUpscalePipeline
from diffusers.utils import pt_to_pil
from diffusers import DPMSolverMultistepScheduler
from utils_if import register_if1, register_if2,register_if3, register_faster_forward, seed_everything
import torch
| 9,934 |
seed_everything(2023)
prompt = "a lone sailboat drifting on calm waters"
stage_1 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0",
variant="fp16",
torch_dtype=torch.float16,
).to('cuda')
stage_2 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-II-L-v1.0",
text_encoder=None,
variant="fp16",
torch_dtype=torch.float16,
).to('cuda')
# stage 3
safety_modules = {
"feature_extractor": stage_1.feature_extractor,
"safety_checker": None,
"watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-x4-upscaler",
**safety_modules,
torch_dtype=torch.float16
).to('cuda')
|
seed_everything(2023)
prompt = "a lone sailboat drifting on calm waters"
stage_1 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0",
variant="fp16",
torch_dtype=torch.float16,
).to('cuda')
stage_2 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-II-L-v1.0",
text_encoder=None,
variant="fp16",
torch_dtype=torch.float16,
).to('cuda')
# stage 3
safety_modules = {
"feature_extractor": stage_1.feature_extractor,
"safety_checker": None,
"watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-x4-upscaler",
**safety_modules,
torch_dtype=torch.float16
).to('cuda')
| register_faster_forward(stage_1.unet, mod = '100ls')
| 3 | 2023-12-15 05:03:37+00:00 | 12k |
FoundationVision/GLEE | app/GLEE/glee/models/transformer_decoder/maskdino_decoder.py | [
{
"identifier": "TransformerDecoder",
"path": "app/GLEE/glee/models/transformer_decoder/dino_decoder.py",
"snippet": "class TransformerDecoder(nn.Module):\r\n\r\n def __init__(self, decoder_layer, num_layers, norm=None,\r\n return_intermediate=False,\r\n d_model=256, q... | import logging
import fvcore.nn.weight_init as weight_init
import torch
from torch import nn
from torch.nn import functional as F
from detectron2.config import configurable
from detectron2.layers import Conv2d
from detectron2.utils.registry import Registry
from detectron2.structures import BitMasks
from timm.models.layers import trunc_normal_
from .dino_decoder import TransformerDecoder, DeformableTransformerDecoderLayer
from ...utils.utils import MLP, gen_encoder_output_proposals, inverse_sigmoid
from ...utils import box_ops | 8,471 | 'map_known_indice': torch.as_tensor(map_known_indice).long(),
'known_lbs_bboxes': (known_labels, known_bboxs),
'know_idx': know_idx,
'pad_size': pad_size,
'scalar': scalar,
}
else:
if not refpoint_emb is None:
input_query_label = tgt.repeat(batch_size, 1, 1)
input_query_bbox = refpoint_emb.repeat(batch_size, 1, 1)
else:
input_query_label=None
input_query_bbox=None
attn_mask = None
mask_dict=None
# 100*batch*256
if not input_query_bbox is None:
input_query_label = input_query_label
input_query_bbox = input_query_bbox
return input_query_label,input_query_bbox,attn_mask,mask_dict
def dn_post_process(self,outputs_class,outputs_score,outputs_coord,mask_dict,outputs_mask):
"""
post process of dn after output from the transformer
put the dn part in the mask_dict
"""
assert mask_dict['pad_size'] > 0
output_known_class = outputs_class[:, :, :mask_dict['pad_size'], :]
outputs_class = outputs_class[:, :, mask_dict['pad_size']:, :]
output_known_score = outputs_score[:, :, :mask_dict['pad_size'], :]
outputs_score = outputs_score[:, :, mask_dict['pad_size']:, :]
output_known_coord = outputs_coord[:, :, :mask_dict['pad_size'], :]
outputs_coord = outputs_coord[:, :, mask_dict['pad_size']:, :]
if outputs_mask is not None:
output_known_mask = outputs_mask[:, :, :mask_dict['pad_size'], :]
outputs_mask = outputs_mask[:, :, mask_dict['pad_size']:, :]
out = {'pred_logits': output_known_class[-1], 'pred_scores':output_known_score[-1],'pred_boxes': output_known_coord[-1],'pred_masks': output_known_mask[-1]}
out['aux_outputs'] = self._set_aux_loss(output_known_class, output_known_score, output_known_mask, output_known_coord)
mask_dict['output_known_lbs_bboxes']=out
return outputs_class, outputs_score, outputs_coord, outputs_mask
def get_valid_ratio(self, mask):
_, H, W = mask.shape
valid_H = torch.sum(~mask[:, :, 0], 1)
valid_W = torch.sum(~mask[:, 0, :], 1)
valid_ratio_h = valid_H.float() / H
valid_ratio_w = valid_W.float() / W
valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
return valid_ratio
def pred_box(self, reference, hs, ref0=None):
"""
:param reference: reference box coordinates from each decoder layer
:param hs: content
        :param ref0: whether there are predictions from the first layer
"""
device = reference[0].device
if ref0 is None:
outputs_coord_list = []
else:
outputs_coord_list = [ref0.to(device)]
for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate(zip(reference[:-1], self.bbox_embed, hs)):
layer_delta_unsig = layer_bbox_embed(layer_hs).to(device)
layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig).to(device)
layer_outputs_unsig = layer_outputs_unsig.sigmoid()
outputs_coord_list.append(layer_outputs_unsig)
outputs_coord_list = torch.stack(outputs_coord_list)
return outputs_coord_list
def forward(self, x, mask_features, extra, task, masks, targets=None):
"""
:param x: input, a list of multi-scale feature
:param mask_features: is the per-pixel embeddings with resolution 1/4 of the original image,
obtained by fusing backbone encoder encoded features. This is used to produce binary masks.
:param masks: mask in the original image
:param targets: used for denoising training
"""
if 'spatial_query_pos_mask' in extra:
visual_P = True
else:
visual_P = False
assert len(x) == self.num_feature_levels
device = x[0].device
size_list = []
# disable mask, it does not affect performance
enable_mask = 0
if masks is not None:
for src in x:
if src.size(2) % 32 or src.size(3) % 32:
enable_mask = 1
if enable_mask == 0:
masks = [torch.zeros((src.size(0), src.size(2), src.size(3)), device=src.device, dtype=torch.bool) for src in x]
src_flatten = []
mask_flatten = []
spatial_shapes = []
for i in range(self.num_feature_levels):
idx=self.num_feature_levels-1-i
bs, c , h, w=x[idx].shape
size_list.append(x[i].shape[-2:])
spatial_shapes.append(x[idx].shape[-2:])
src_flatten.append(self.input_proj[idx](x[idx]).flatten(2).transpose(1, 2))
mask_flatten.append(masks[i].flatten(1))
src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c
mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw}
spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
level_start_index = torch.cat((spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1]))
valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
predictions_federate = []
predictions_score = []
predictions_class = []
predictions_mask = []
if self.two_stage:
| # ------------------------------------------------------------------------
# DINO
# Copyright (c) 2022 IDEA. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
# Modified from Mask2Former https://github.com/facebookresearch/Mask2Former by Feng Li and Hao Zhang.
TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE")
TRANSFORMER_DECODER_REGISTRY.__doc__ = """
Registry for transformer module in MaskDINO.
"""
def build_transformer_decoder(cfg, in_channels, lang_encoder, mask_classification=True):
"""
Build a instance embedding branch from `cfg.MODEL.INS_EMBED_HEAD.NAME`.
"""
name = cfg.MODEL.MaskDINO.TRANSFORMER_DECODER_NAME
return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, lang_encoder, mask_classification)
@TRANSFORMER_DECODER_REGISTRY.register()
class MaskDINODecoder(nn.Module):
@configurable
def __init__(
self,
in_channels,
lang_encoder,
mask_classification=True,
*,
num_classes: int,
hidden_dim: int,
num_queries: int,
nheads: int,
dim_feedforward: int,
dec_layers: int,
mask_dim: int,
dim_projection: int,
enforce_input_project: bool,
two_stage: bool,
dn: str,
noise_scale:float,
dn_num:int,
initialize_box_type:bool,
initial_pred:bool,
learn_tgt: bool,
total_num_feature_levels: int = 4,
dropout: float = 0.0,
activation: str = 'relu',
nhead: int = 8,
dec_n_points: int = 4,
return_intermediate_dec: bool = True,
query_dim: int = 4,
dec_layer_share: bool = False,
semantic_ce_loss: bool = False,
cross_track_layer: bool = False,
):
"""
NOTE: this interface is experimental.
Args:
in_channels: channels of the input features
mask_classification: whether to add mask classifier or not
num_classes: number of classes
hidden_dim: Transformer feature dimension
num_queries: number of queries
nheads: number of heads
dim_feedforward: feature dimension in feedforward network
enc_layers: number of Transformer encoder layers
dec_layers: number of Transformer decoder layers
pre_norm: whether to use pre-LayerNorm or not
mask_dim: mask feature dimension
enforce_input_project: add input project 1x1 conv even if input
channels and hidden dim is identical
d_model: transformer dimension
dropout: dropout rate
activation: activation function
nhead: num heads in multi-head attention
dec_n_points: number of sampling points in decoder
return_intermediate_dec: return the intermediate results of decoder
query_dim: 4 -> (x, y, w, h)
dec_layer_share: whether to share each decoder layer
semantic_ce_loss: use ce loss for semantic segmentation
"""
super().__init__()
assert mask_classification, "Only support mask classification model"
self.mask_classification = mask_classification
self.num_feature_levels = total_num_feature_levels
self.initial_pred = initial_pred
self.lang_encoder = lang_encoder
# define Transformer decoder here
self.dn=dn
self.learn_tgt = learn_tgt
self.noise_scale=noise_scale
self.dn_num=dn_num
self.num_heads = nheads
self.num_layers = dec_layers
self.two_stage=two_stage
self.initialize_box_type = initialize_box_type
self.total_num_feature_levels = total_num_feature_levels
self.num_queries = num_queries
self.semantic_ce_loss = semantic_ce_loss
# learnable query features
if not two_stage or self.learn_tgt:
self.query_feat = nn.Embedding(num_queries, hidden_dim)
if not two_stage and initialize_box_type == 'no':
self.query_embed = nn.Embedding(num_queries, 4)
if two_stage:
self.enc_output = nn.Linear(hidden_dim, hidden_dim)
self.enc_output_norm = nn.LayerNorm(hidden_dim)
self.input_proj = nn.ModuleList()
for _ in range(self.num_feature_levels):
if in_channels != hidden_dim or enforce_input_project:
self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1))
weight_init.c2_xavier_fill(self.input_proj[-1])
else:
self.input_proj.append(nn.Sequential())
self.num_classes = {
'obj365':100,
'obj365_clip':100,
'lvis':100,
'openimage':100,
'lvis_clip':100,
'openimage_clip':100,
'grit':100,
'vg':200,
'coco':80,
'coco_clip':80,
'grounding':1,
'rvos':1,
'sa1b':1,
'sa1b_clip':1,
'bdd_det':10,
'bdd_inst':8,
'ytvis19':40,
'image_yt19':40,
'image_yt21':40,
'bdd_track_seg':8,
'bdd_track_box':8,
'ovis':25,
'image_o':25,
'ytvis21':40,
'uvo_video': 81,
'ytbvos':1,
}
# output FFNs
assert self.mask_classification, "why not class embedding?"
self.confidence_score = MLP(hidden_dim, hidden_dim, 1, 2)
self.category_embed = nn.Parameter(torch.rand(hidden_dim, dim_projection))
# trunc_normal_(self.category_embed, std=.02)
# self.track_embed = MLP(hidden_dim, hidden_dim, hidden_dim, 3)
self.coco_label_enc = nn.Embedding(80,hidden_dim)
self.obj365_label_enc = nn.Embedding(100, hidden_dim)
self.vg_label_enc = nn.Embedding(200, hidden_dim)
self.grounding_label_enc = nn.Embedding(1,hidden_dim)
self.ytvis19_label_enc = nn.Embedding(40,hidden_dim)
self.ytvis21_label_enc = nn.Embedding(40,hidden_dim)
self.ovis_label_enc = nn.Embedding(25,hidden_dim)
self.uvo_label_enc = nn.Embedding(81,hidden_dim)
self.bdd_det = nn.Embedding(10,hidden_dim)
self.bdd_inst = nn.Embedding(8,hidden_dim)
self.label_enc = {
'coco': self.coco_label_enc,
'coco_clip': self.coco_label_enc,
'coconomask': self.coco_label_enc,
'obj365': self.obj365_label_enc,
'lvis': self.obj365_label_enc,
'openimage': self.obj365_label_enc,
'grit': self.obj365_label_enc,
'vg': self.vg_label_enc,
'obj365_clip': self.obj365_label_enc,
'lvis_clip': self.obj365_label_enc,
'openimage_clip': self.obj365_label_enc,
'bdd_det':self.bdd_det,
'bdd_inst':self.bdd_inst,
'bdd_track_seg':self.bdd_inst,
'bdd_track_box':self.bdd_inst,
'sa1b': self.grounding_label_enc,
'sa1b_clip': self.grounding_label_enc,
'grounding': self.grounding_label_enc,
'rvos': self.grounding_label_enc,
'uvo_video':self.uvo_label_enc,
'ytvis19':self.ytvis19_label_enc,
'image_yt19': self.ytvis19_label_enc,
'ytvis21':self.ytvis21_label_enc,
'image_yt21':self.ytvis21_label_enc,
'ovis':self.ovis_label_enc,
'image_o': self.ovis_label_enc,
'burst':self.grounding_label_enc,
'ytbvos':self.grounding_label_enc,
}
self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
# init decoder
self.decoder_norm = decoder_norm = nn.LayerNorm(hidden_dim)
decoder_layer = DeformableTransformerDecoderLayer(hidden_dim, dim_feedforward,
dropout, activation,
self.num_feature_levels, nhead, dec_n_points)
self.decoder = TransformerDecoder(decoder_layer, self.num_layers, decoder_norm,
return_intermediate=return_intermediate_dec,
d_model=hidden_dim, query_dim=query_dim,
num_feature_levels=self.num_feature_levels,
dec_layer_share=dec_layer_share,
cross_track_layer = cross_track_layer,
n_levels=self.num_feature_levels, n_heads=nhead, n_points=dec_n_points
)
self.cross_track_layer = cross_track_layer
self.hidden_dim = hidden_dim
self._bbox_embed = _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0)
nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0)
box_embed_layerlist = [_bbox_embed for i in range(self.num_layers)] # share box prediction each layer
self.bbox_embed = nn.ModuleList(box_embed_layerlist)
self.decoder.bbox_embed = self.bbox_embed
@classmethod
def from_config(cls, cfg, in_channels, lang_encoder, mask_classification):
ret = {}
ret["in_channels"] = in_channels
ret["lang_encoder"] = lang_encoder
ret["mask_classification"] = mask_classification
ret["dim_projection"] = cfg.MODEL.DIM_PROJ
ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
ret["hidden_dim"] = cfg.MODEL.MaskDINO.HIDDEN_DIM
ret["num_queries"] = cfg.MODEL.MaskDINO.NUM_OBJECT_QUERIES
# Transformer parameters:
ret["nheads"] = cfg.MODEL.MaskDINO.NHEADS
ret["dim_feedforward"] = cfg.MODEL.MaskDINO.DIM_FEEDFORWARD
ret["dec_layers"] = cfg.MODEL.MaskDINO.DEC_LAYERS
ret["enforce_input_project"] = cfg.MODEL.MaskDINO.ENFORCE_INPUT_PROJ
ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
ret["two_stage"] =cfg.MODEL.MaskDINO.TWO_STAGE
ret["initialize_box_type"] = cfg.MODEL.MaskDINO.INITIALIZE_BOX_TYPE # ['no', 'bitmask', 'mask2box']
ret["dn"]=cfg.MODEL.MaskDINO.DN
ret["noise_scale"] =cfg.MODEL.MaskDINO.DN_NOISE_SCALE
ret["dn_num"] =cfg.MODEL.MaskDINO.DN_NUM
ret["initial_pred"] =cfg.MODEL.MaskDINO.INITIAL_PRED
ret["learn_tgt"] = cfg.MODEL.MaskDINO.LEARN_TGT
ret["total_num_feature_levels"] = cfg.MODEL.SEM_SEG_HEAD.TOTAL_NUM_FEATURE_LEVELS
ret["semantic_ce_loss"] = cfg.MODEL.MaskDINO.TEST.SEMANTIC_ON and cfg.MODEL.MaskDINO.SEMANTIC_CE_LOSS and ~cfg.MODEL.MaskDINO.TEST.PANOPTIC_ON
ret["cross_track_layer"] = cfg.MODEL.CROSS_TRACK
return ret
def prepare_for_dn(self, targets, tgt, refpoint_emb, batch_size,task):
"""
modified from dn-detr. You can refer to dn-detr
https://github.com/IDEA-Research/DN-DETR/blob/main/models/dn_dab_deformable_detr/dn_components.py
for more details
:param dn_args: scalar, noise_scale
:param tgt: original tgt (content) in the matching part
:param refpoint_emb: positional anchor queries in the matching part
:param batch_size: bs
"""
if self.training:
scalar, noise_scale = self.dn_num,self.noise_scale
known = [(torch.ones_like(t['labels'])).cuda() for t in targets]
know_idx = [torch.nonzero(t) for t in known]
known_num = [sum(k) for k in known]
# use fix number of dn queries
if max(known_num)>0:
scalar = scalar//(int(max(known_num)))
else:
scalar = 0
if scalar == 0:
input_query_label = None
input_query_bbox = None
attn_mask = None
mask_dict = None
return input_query_label, input_query_bbox, attn_mask, mask_dict
            # can be modified to selectively denoise some labels or boxes; also known label prediction
unmask_bbox = unmask_label = torch.cat(known)
labels = torch.cat([t['labels'] for t in targets])
boxes = torch.cat([t['boxes'] for t in targets])
batch_idx = torch.cat([torch.full_like(t['labels'].long(), i) for i, t in enumerate(targets)])
# known
known_indice = torch.nonzero(unmask_label + unmask_bbox)
known_indice = known_indice.view(-1)
# noise
known_indice = known_indice.repeat(scalar, 1).view(-1)
known_labels = labels.repeat(scalar, 1).view(-1)
known_bid = batch_idx.repeat(scalar, 1).view(-1)
known_bboxs = boxes.repeat(scalar, 1)
known_labels_expaned = known_labels.clone()
known_bbox_expand = known_bboxs.clone()
# noise on the label
if noise_scale > 0:
p = torch.rand_like(known_labels_expaned.float())
chosen_indice = torch.nonzero(p < (noise_scale * 0.5)).view(-1) # half of bbox prob
new_label = torch.randint_like(chosen_indice, 0, self.num_classes[task]) # randomly put a new one here
known_labels_expaned.scatter_(0, chosen_indice, new_label)
if noise_scale > 0:
diff = torch.zeros_like(known_bbox_expand)
diff[:, :2] = known_bbox_expand[:, 2:] / 2
diff[:, 2:] = known_bbox_expand[:, 2:]
known_bbox_expand += torch.mul((torch.rand_like(known_bbox_expand) * 2 - 1.0),
diff).cuda() * noise_scale
known_bbox_expand = known_bbox_expand.clamp(min=0.0, max=1.0)
m = known_labels_expaned.long().to('cuda')
input_label_embed = self.label_enc[task](m)
input_bbox_embed = inverse_sigmoid(known_bbox_expand)
single_pad = int(max(known_num))
pad_size = int(single_pad * scalar)
padding_label = torch.zeros(pad_size, self.hidden_dim).cuda()
padding_bbox = torch.zeros(pad_size, 4).cuda()
if not refpoint_emb is None:
input_query_label = torch.cat([padding_label, tgt], dim=0).repeat(batch_size, 1, 1)
input_query_bbox = torch.cat([padding_bbox, refpoint_emb], dim=0).repeat(batch_size, 1, 1)
else:
input_query_label=padding_label.repeat(batch_size, 1, 1)
input_query_bbox = padding_bbox.repeat(batch_size, 1, 1)
# map
map_known_indice = torch.tensor([]).to('cuda')
if len(known_num):
map_known_indice = torch.cat([torch.tensor(range(num)) for num in known_num]) # [1,2, 1,2,3]
map_known_indice = torch.cat([map_known_indice + single_pad * i for i in range(scalar)]).long()
if len(known_bid):
input_query_label[(known_bid.long(), map_known_indice)] = input_label_embed
input_query_bbox[(known_bid.long(), map_known_indice)] = input_bbox_embed
tgt_size = pad_size + self.num_queries
attn_mask = torch.ones(tgt_size, tgt_size).to('cuda') < 0
# match query cannot see the reconstruct
attn_mask[pad_size:, :pad_size] = True
# reconstruct cannot see each other
for i in range(scalar):
if i == 0:
attn_mask[single_pad * i:single_pad * (i + 1), single_pad * (i + 1):pad_size] = True
if i == scalar - 1:
attn_mask[single_pad * i:single_pad * (i + 1), :single_pad * i] = True
else:
attn_mask[single_pad * i:single_pad * (i + 1), single_pad * (i + 1):pad_size] = True
attn_mask[single_pad * i:single_pad * (i + 1), :single_pad * i] = True
mask_dict = {
'known_indice': torch.as_tensor(known_indice).long(),
'batch_idx': torch.as_tensor(batch_idx).long(),
'map_known_indice': torch.as_tensor(map_known_indice).long(),
'known_lbs_bboxes': (known_labels, known_bboxs),
'know_idx': know_idx,
'pad_size': pad_size,
'scalar': scalar,
}
else:
if not refpoint_emb is None:
input_query_label = tgt.repeat(batch_size, 1, 1)
input_query_bbox = refpoint_emb.repeat(batch_size, 1, 1)
else:
input_query_label=None
input_query_bbox=None
attn_mask = None
mask_dict=None
# 100*batch*256
if not input_query_bbox is None:
input_query_label = input_query_label
input_query_bbox = input_query_bbox
return input_query_label,input_query_bbox,attn_mask,mask_dict
def dn_post_process(self,outputs_class,outputs_score,outputs_coord,mask_dict,outputs_mask):
"""
post process of dn after output from the transformer
put the dn part in the mask_dict
"""
assert mask_dict['pad_size'] > 0
output_known_class = outputs_class[:, :, :mask_dict['pad_size'], :]
outputs_class = outputs_class[:, :, mask_dict['pad_size']:, :]
output_known_score = outputs_score[:, :, :mask_dict['pad_size'], :]
outputs_score = outputs_score[:, :, mask_dict['pad_size']:, :]
output_known_coord = outputs_coord[:, :, :mask_dict['pad_size'], :]
outputs_coord = outputs_coord[:, :, mask_dict['pad_size']:, :]
if outputs_mask is not None:
output_known_mask = outputs_mask[:, :, :mask_dict['pad_size'], :]
outputs_mask = outputs_mask[:, :, mask_dict['pad_size']:, :]
out = {'pred_logits': output_known_class[-1], 'pred_scores':output_known_score[-1],'pred_boxes': output_known_coord[-1],'pred_masks': output_known_mask[-1]}
out['aux_outputs'] = self._set_aux_loss(output_known_class, output_known_score, output_known_mask, output_known_coord)
mask_dict['output_known_lbs_bboxes']=out
return outputs_class, outputs_score, outputs_coord, outputs_mask
def get_valid_ratio(self, mask):
_, H, W = mask.shape
valid_H = torch.sum(~mask[:, :, 0], 1)
valid_W = torch.sum(~mask[:, 0, :], 1)
valid_ratio_h = valid_H.float() / H
valid_ratio_w = valid_W.float() / W
valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
return valid_ratio
def pred_box(self, reference, hs, ref0=None):
"""
:param reference: reference box coordinates from each decoder layer
:param hs: content
        :param ref0: whether there are predictions from the first layer
"""
device = reference[0].device
if ref0 is None:
outputs_coord_list = []
else:
outputs_coord_list = [ref0.to(device)]
for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate(zip(reference[:-1], self.bbox_embed, hs)):
layer_delta_unsig = layer_bbox_embed(layer_hs).to(device)
layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig).to(device)
layer_outputs_unsig = layer_outputs_unsig.sigmoid()
outputs_coord_list.append(layer_outputs_unsig)
outputs_coord_list = torch.stack(outputs_coord_list)
return outputs_coord_list
def forward(self, x, mask_features, extra, task, masks, targets=None):
"""
:param x: input, a list of multi-scale feature
:param mask_features: is the per-pixel embeddings with resolution 1/4 of the original image,
obtained by fusing backbone encoder encoded features. This is used to produce binary masks.
:param masks: mask in the original image
:param targets: used for denoising training
"""
if 'spatial_query_pos_mask' in extra:
visual_P = True
else:
visual_P = False
assert len(x) == self.num_feature_levels
device = x[0].device
size_list = []
# disable mask, it does not affect performance
enable_mask = 0
if masks is not None:
for src in x:
if src.size(2) % 32 or src.size(3) % 32:
enable_mask = 1
if enable_mask == 0:
masks = [torch.zeros((src.size(0), src.size(2), src.size(3)), device=src.device, dtype=torch.bool) for src in x]
src_flatten = []
mask_flatten = []
spatial_shapes = []
for i in range(self.num_feature_levels):
            idx = self.num_feature_levels - 1 - i
            bs, c, h, w = x[idx].shape
size_list.append(x[i].shape[-2:])
spatial_shapes.append(x[idx].shape[-2:])
src_flatten.append(self.input_proj[idx](x[idx]).flatten(2).transpose(1, 2))
mask_flatten.append(masks[i].flatten(1))
src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c
mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw}
spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
level_start_index = torch.cat((spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1]))
valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
predictions_federate = []
predictions_score = []
predictions_class = []
predictions_mask = []
if self.two_stage: | output_memory, output_proposals = gen_encoder_output_proposals(src_flatten, mask_flatten, spatial_shapes) | 3 | 2023-12-15 01:12:36+00:00 | 12k |
SHI-Labs/VCoder | vcoder_llava/train/vcoder_it.py | [
{
"identifier": "IGNORE_INDEX",
"path": "vcoder_llava/constants.py",
"snippet": "IGNORE_INDEX = -100"
},
{
"identifier": "DEFAULT_IMAGE_TOKEN",
"path": "vcoder_llava/constants.py",
"snippet": "DEFAULT_IMAGE_TOKEN = \"<image>\""
},
{
"identifier": "DEFAULT_SEG_TOKEN",
"path": ... | import os
import copy
import pathlib
import numpy as np
import random
import torch
import transformers
import json
import re
from dataclasses import dataclass, field
from typing import Dict, Optional, Sequence
from vcoder_llava.constants import IGNORE_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_SEG_TOKEN
from torch.utils.data import Dataset
from vcoder_llava.train.llava_trainer import LLaVATrainer
from vcoder_llava import vcoder_conversation as conversation_lib
from vcoder_llava.model import *
from vcoder_llava.mm_utils import tokenizer_image_token, tokenizer_seg_token
from .train import (
get_peft_state_maybe_zero_3,
get_peft_state_non_lora_maybe_zero_3,
find_all_linear_names,
)
from vcoder_llava.questions import SEMANTIC_QUESTIONS, INSTANCE_QUESTIONS, PANOPTIC_QUESTIONS
from PIL import Image
from transformers import BitsAndBytesConfig
from peft import prepare_model_for_kbit_training
from peft import LoraConfig, get_peft_model
from peft.tuners.lora import LoraLayer | 7,214 | word_found = True
else:
# Remove any preceding punctuation if it's just before this word
if i > 0 and tokens[i-1] in {',', '.'}:
result_tokens.pop()
else:
result_tokens.append(token)
# Join tokens and clean up spaces before punctuation
result_text = ' '.join(result_tokens)
result_text = re.sub(r'\s([,.](?:\s|$))', r'\1', result_text)
return result_text
with open(file_path) as f:
lines = f.readlines()
seg_labels = {}
for line in lines:
key = line.split("<IMG>")[1].strip("\n")
label = line.split("<IMG>")[2].strip("\n")
label = _remove_specific_word(label, "wall")
label = _remove_specific_word(label, "window")
seg_labels[key] = label
return seg_labels
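The dedup helper above keeps only the first occurrence of a frequent word ("wall", "window") and swallows the comma or period immediately before each later occurrence. A standalone re-implementation of the same logic, for illustration:

```python
import re

def remove_repeats(text, word):
    # mirrors _remove_specific_word: keep the first occurrence of `word`,
    # drop later ones together with a comma/period immediately before them
    tokens = re.findall(r'\b\w+\b|[,.]', text)
    out, seen = [], False
    for i, tok in enumerate(tokens):
        if tok == word:
            if not seen:
                out.append(tok)
                seen = True
            elif i > 0 and tokens[i - 1] in {',', '.'}:
                out.pop()  # also drop the punctuation that preceded the repeat
        else:
            out.append(tok)
    text = ' '.join(out)
    # clean up the space that join() put before punctuation
    return re.sub(r'\s([,.](?:\s|$))', r'\1', text)

print(remove_repeats("wall, car, wall", "wall"))  # wall, car
```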
def obtain_seg_data_splits(data_args):
def _get_labels(folder):
return _obtain_seg_texts(os.path.join(data_args.seg_image_folder, folder, "panoptic.txt"))
list_data_dict = []
data_dict = json.load(open(data_args.data_path, "r"))
for l in data_dict:
if "image" in l.keys():
if os.path.exists(os.path.join(data_args.image_folder, l["image"])):
prob_add_seg = np.random.uniform(0,1.)
if prob_add_seg > 0.5:
l["seg"] = l["image"].split("/")[-1]
if "coco" in l["image"]:
l["seg_folder"] = "coco_segm_text/train/panoptic_inference"
elif "gqa" in l["image"]:
l["seg_folder"] = "gqa/seg_images/panoptic_inference"
elif "VG_100K_2" in l["image"]:
l["seg_folder"] = "vg/vg/SEG_VG_100K_2/panoptic_inference"
elif "VG_100K" in l["image"]:
l["seg_folder"] = "vg/vg/SEG_VG_100K/panoptic_inference"
elif "ocr_vqa" in l["image"]:
l["seg_folder"] = "ocr_vqa/seg_images/panoptic_inference"
if "textvqa" in l["image"]:
l["seg_folder"] = "textvqa/seg_images/panoptic_inference"
conversations = []
for c in l["conversations"]:
if "<image>" in c["value"]:
c["value"] = c["value"].replace("<image>", "<image>\n<seg>")
conversations.append(c)
l["conversations"] = conversations
list_data_dict.append(l)
else:
list_data_dict.append(l)
labels_dict = {
"coco_segm_text/train": _get_labels("coco_segm_text/train/"),
"gqa/seg_images": _get_labels("gqa/seg_images/"),
"vg/vg/SEG_VG_100K": _get_labels("vg/vg/SEG_VG_100K/"),
"vg/vg/SEG_VG_100K_2": _get_labels("vg/vg/SEG_VG_100K_2/"),
"ocr_vqa/seg_images": _get_labels("ocr_vqa/seg_images"),
"textvqa/seg_images": _get_labels("textvqa/seg_images/"),
}
final_list_data_dict = []
for l in list_data_dict:
if "seg" in l.keys():
prob_add = np.random.uniform(0,1.)
if prob_add > 0.7:
labels = labels_dict[l["seg_folder"].split("/panoptic_inference")[0]]
conversations = l["conversations"]
even_indices = list(range(2, len(conversations) + 1, 2))
random_even_index = random.choice(even_indices)
question_prob = np.random.uniform(0,1.)
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else:
question = random.choice(PANOPTIC_QUESTIONS)
conv = [{
"from": "human",
"value": question
},
{
"from": "gpt",
"value": labels[l["seg"]]
}]
final_conversations = conversations[:random_even_index] + conv + conversations[random_even_index:]
l["conversations"] = final_conversations
final_list_data_dict.append(l)
return final_list_data_dict
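The splice in `obtain_seg_data_splits` inserts the extra question/answer pair only at an even index, which keeps the strict human/gpt alternation of the conversation intact. A small sketch with illustrative values:

```python
import random

# An existing conversation with alternating human/gpt turns:
conversations = [
    {"from": "human", "value": "q1"}, {"from": "gpt", "value": "a1"},
    {"from": "human", "value": "q2"}, {"from": "gpt", "value": "a2"},
]
new_turn = [
    {"from": "human", "value": "What objects can be seen in the image?"},
    {"from": "gpt", "value": "2 people, a dog"},  # hypothetical seg-label answer
]
# Only even split points are candidates, so turns stay aligned after splicing:
idx = random.choice(range(2, len(conversations) + 1, 2))  # 2 or 4
spliced = conversations[:idx] + new_turn + conversations[idx:]
assert all(t["from"] == "human" for t in spliced[::2])
assert all(t["from"] == "gpt" for t in spliced[1::2])
```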
def get_object_data_split(data_args):
list_data_dict = []
for bucket in ["train", "unlabeled", "test"]:
panoptic_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "panoptic.txt"))
semantic_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "semantic.txt"))
instance_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "instance.txt"))
for key in panoptic_labels.keys():
assert key in semantic_labels.keys() and key in instance_labels.keys(), "Instance, semantic, and panoptic labels should have the same keys."
prob_task = np.random.uniform(0,1.)
question_prob = np.random.uniform(0,1.)
if prob_task < 0.33:
answer = semantic_labels[key]
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else:
question = random.choice(SEMANTIC_QUESTIONS)
seg_folder = "semantic_inference"
elif prob_task < 0.66:
answer = instance_labels[key]
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else:
| # Adopted from https://github.com/lm-sys/FastChat. Below is the original copyright:
# Adopted from tatsu-lab@stanford_alpaca. Below is the original copyright:
# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
local_rank = None
def rank0_print(*args):
if local_rank == 0:
print(*args)
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
version: Optional[str] = field(default="v0")
freeze_backbone: bool = field(default=False)
tune_mm_mlp_adapter: bool = field(default=False)
vision_tower: Optional[str] = field(default=None)
mm_vision_select_layer: Optional[int] = field(default=-2) # default to the last layer
mm_projector_type: Optional[str] = field(default='linear')
pretrain_mm_mlp_adapter: Optional[str] = field(default=None)
use_mm2_proj: bool = field(default=False)
pretrain_mm2_mlp_adapter: Optional[str] = field(default=None)
seg_tune_adapter: bool = field(default=False)
mm_seg_select_layer: Optional[int] = field(default=-2) # default to the last layer
seg_mm_projector_type: Optional[str] = field(default='linear')
mm_vision_select_feature: Optional[str] = field(default="patch")
mm_seg_select_feature: Optional[str] = field(default="patch")
@dataclass
class DataArguments:
data_path: str = field(default=None,
metadata={"help": "Path to the training data."})
seg_data_path: str = field(default=None,
metadata={"help": "Path to the seg training data."})
lazy_preprocess: bool = False
is_multimodal: bool = False
image_folder: Optional[str] = field(default=None)
seg_image_folder: Optional[str] = field(default=None)
image_aspect_ratio: str = 'square'
image_grid_pinpoints: Optional[str] = field(default=None)
@dataclass
class TrainingArguments(transformers.TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
remove_unused_columns: bool = field(default=False)
freeze_mm_mlp_adapter: bool = field(default=False)
freeze_seg_mm_mlp_adapter: bool = field(default=False)
mpt_attn_impl: Optional[str] = field(default="triton")
model_max_length: int = field(
default=512,
metadata={
"help":
"Maximum sequence length. Sequences will be right padded (and possibly truncated)."
},
)
double_quant: bool = field(
default=True,
metadata={"help": "Compress the quantization statistics through double quantization."}
)
quant_type: str = field(
default="nf4",
metadata={"help": "Quantization data type to use. Should be one of `fp4` or `nf4`."}
)
bits: int = field(
default=16,
metadata={"help": "How many bits to use."}
)
lora_enable: bool = False
lora_r: int = 64
lora_alpha: int = 16
lora_dropout: float = 0.05
lora_weight_path: str = ""
lora_bias: str = "none"
mm_projector_lr: Optional[float] = None
group_by_modality_length: bool = field(default=False)
def safe_save_model_for_hf_trainer(trainer: transformers.Trainer,
output_dir: str):
"""Collects the state dict and dump to disk."""
if trainer.deepspeed:
torch.cuda.synchronize()
trainer.save_model(output_dir)
return
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = {
key: value.cpu()
for key, value in state_dict.items()
}
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
def vcoder_preprocess_v1(
sources,
tokenizer: transformers.PreTrainedTokenizer,
has_image: bool = False,
has_seg: bool = False
) -> Dict:
conv = conversation_lib.default_conversation.copy()
roles = {"human": conv.roles[0], "gpt": conv.roles[1]}
# Apply prompt templates
conversations = []
for i, source in enumerate(sources):
if roles[source[0]["from"]] != conv.roles[0]:
# Skip the first one if it is not from human
source = source[1:]
conv.messages = []
for j, sentence in enumerate(source):
role = roles[sentence["from"]]
assert role == conv.roles[j % 2], f"{i}"
conv.append_message(role, sentence["value"])
conversations.append(conv.get_prompt())
# Tokenize conversations
if has_image and has_seg:
input_ids = torch.stack([tokenizer_seg_token(prompt, tokenizer, return_tensors='pt') for prompt in conversations], dim=0)
elif has_image:
input_ids = torch.stack([tokenizer_image_token(prompt, tokenizer, return_tensors='pt') for prompt in conversations], dim=0)
else:
input_ids = tokenizer(
conversations,
return_tensors="pt",
padding="longest",
max_length=tokenizer.model_max_length,
truncation=True,
).input_ids
targets = input_ids.clone()
assert conv.sep_style == conversation_lib.SeparatorStyle.TWO
# Mask targets
sep = conv.sep + conv.roles[1] + ": "
for conversation, target in zip(conversations, targets):
total_len = int(target.ne(tokenizer.pad_token_id).sum())
rounds = conversation.split(conv.sep2)
cur_len = 1
target[:cur_len] = IGNORE_INDEX
for i, rou in enumerate(rounds):
if rou == "":
break
parts = rou.split(sep)
if len(parts) != 2:
break
parts[0] += sep
if has_image and has_seg:
round_len = len(tokenizer_seg_token(rou, tokenizer))
instruction_len = len(tokenizer_seg_token(parts[0], tokenizer)) - 2
elif has_image:
round_len = len(tokenizer_image_token(rou, tokenizer))
instruction_len = len(tokenizer_image_token(parts[0], tokenizer)) - 2
else:
round_len = len(tokenizer(rou).input_ids)
instruction_len = len(tokenizer(parts[0]).input_ids) - 2
target[cur_len : cur_len + instruction_len] = IGNORE_INDEX
cur_len += round_len
target[cur_len:] = IGNORE_INDEX
if cur_len < tokenizer.model_max_length:
if cur_len != total_len:
target[:] = IGNORE_INDEX
print(
f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}."
f" (ignored)"
)
return dict(
input_ids=input_ids,
labels=targets,
)
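The masking loop in `vcoder_preprocess_v1` hides everything except the assistant replies from the loss. Stripped of the tokenizer details, the per-round bookkeeping reduces to the sketch below (the lengths passed in are hypothetical stand-ins for the tokenizer's counts):

```python
IGNORE_INDEX = -100  # matches vcoder_llava.constants

def mask_targets(input_ids, instruction_lens, round_lens):
    # Within each round, mask the instruction tokens so the loss only
    # covers the assistant's reply; mask the leading BOS and any tail.
    targets = list(input_ids)
    cur = 1
    targets[:cur] = [IGNORE_INDEX] * cur  # BOS token
    for ins_len, rnd_len in zip(instruction_lens, round_lens):
        targets[cur:cur + ins_len] = [IGNORE_INDEX] * ins_len
        cur += rnd_len
    targets[cur:] = [IGNORE_INDEX] * len(targets[cur:])
    return targets

# One round of length 7 whose first 3 tokens are the instruction:
print(mask_targets(list(range(10)), [3], [7]))
# [-100, -100, -100, -100, 4, 5, 6, 7, -100, -100]
```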
def vcoder_preprocess_multimodal(
sources: Sequence[str],
data_args: DataArguments
) -> Dict:
is_multimodal = data_args.is_multimodal
if not is_multimodal:
return sources
for source in sources:
for sentence in source:
if DEFAULT_IMAGE_TOKEN in sentence['value']:
sentence['value'] = sentence['value'].replace(DEFAULT_IMAGE_TOKEN, '').strip()
sentence['value'] = DEFAULT_IMAGE_TOKEN + '\n' + sentence['value']
sentence['value'] = sentence['value'].strip()
replace_token = DEFAULT_IMAGE_TOKEN
sentence["value"] = sentence["value"].replace(DEFAULT_IMAGE_TOKEN, replace_token)
if DEFAULT_SEG_TOKEN in sentence['value']:
sentence['value'] = sentence['value'].replace(DEFAULT_SEG_TOKEN, '').strip()
sentence['value'] = DEFAULT_SEG_TOKEN + '\n' + sentence['value']
sentence['value'] = sentence['value'].strip()
replace_token = DEFAULT_SEG_TOKEN
sentence["value"] = sentence["value"].replace(DEFAULT_SEG_TOKEN, replace_token)
return sources
def preprocess(
sources: Sequence[str],
tokenizer: transformers.PreTrainedTokenizer,
has_image: bool = False,
has_seg: bool = False
) -> Dict:
"""
Given a list of sources, each is a conversation list. This transform:
    1. Add signal '### ' at the beginning of each sentence, with end signal '\n';
2. Concatenate conversations together;
3. Tokenize the concatenated conversation;
4. Make a deepcopy as the target. Mask human words with IGNORE_INDEX.
"""
if conversation_lib.default_conversation.version.startswith("v1"):
return vcoder_preprocess_v1(sources, tokenizer, has_image=has_image, has_seg=has_seg)
raise ValueError(f"Unknown conversation version: {conversation_lib.default_conversation.version}")
def _obtain_seg_texts(file_path):
def _remove_specific_word(text, word_to_remove):
tokens = re.findall(r'\b\w+\b|[,.]', text)
result_tokens = []
word_found = False
for i, token in enumerate(tokens):
if token == word_to_remove:
if not word_found:
# Keep the first occurrence and mark it as found
result_tokens.append(token)
word_found = True
else:
# Remove any preceding punctuation if it's just before this word
if i > 0 and tokens[i-1] in {',', '.'}:
result_tokens.pop()
else:
result_tokens.append(token)
# Join tokens and clean up spaces before punctuation
result_text = ' '.join(result_tokens)
result_text = re.sub(r'\s([,.](?:\s|$))', r'\1', result_text)
return result_text
with open(file_path) as f:
lines = f.readlines()
seg_labels = {}
for line in lines:
key = line.split("<IMG>")[1].strip("\n")
label = line.split("<IMG>")[2].strip("\n")
label = _remove_specific_word(label, "wall")
label = _remove_specific_word(label, "window")
seg_labels[key] = label
return seg_labels
def obtain_seg_data_splits(data_args):
def _get_labels(folder):
return _obtain_seg_texts(os.path.join(data_args.seg_image_folder, folder, "panoptic.txt"))
list_data_dict = []
data_dict = json.load(open(data_args.data_path, "r"))
for l in data_dict:
if "image" in l.keys():
if os.path.exists(os.path.join(data_args.image_folder, l["image"])):
prob_add_seg = np.random.uniform(0,1.)
if prob_add_seg > 0.5:
l["seg"] = l["image"].split("/")[-1]
if "coco" in l["image"]:
l["seg_folder"] = "coco_segm_text/train/panoptic_inference"
elif "gqa" in l["image"]:
l["seg_folder"] = "gqa/seg_images/panoptic_inference"
elif "VG_100K_2" in l["image"]:
l["seg_folder"] = "vg/vg/SEG_VG_100K_2/panoptic_inference"
elif "VG_100K" in l["image"]:
l["seg_folder"] = "vg/vg/SEG_VG_100K/panoptic_inference"
elif "ocr_vqa" in l["image"]:
l["seg_folder"] = "ocr_vqa/seg_images/panoptic_inference"
if "textvqa" in l["image"]:
l["seg_folder"] = "textvqa/seg_images/panoptic_inference"
conversations = []
for c in l["conversations"]:
if "<image>" in c["value"]:
c["value"] = c["value"].replace("<image>", "<image>\n<seg>")
conversations.append(c)
l["conversations"] = conversations
list_data_dict.append(l)
else:
list_data_dict.append(l)
labels_dict = {
"coco_segm_text/train": _get_labels("coco_segm_text/train/"),
"gqa/seg_images": _get_labels("gqa/seg_images/"),
"vg/vg/SEG_VG_100K": _get_labels("vg/vg/SEG_VG_100K/"),
"vg/vg/SEG_VG_100K_2": _get_labels("vg/vg/SEG_VG_100K_2/"),
"ocr_vqa/seg_images": _get_labels("ocr_vqa/seg_images"),
"textvqa/seg_images": _get_labels("textvqa/seg_images/"),
}
final_list_data_dict = []
for l in list_data_dict:
if "seg" in l.keys():
prob_add = np.random.uniform(0,1.)
if prob_add > 0.7:
labels = labels_dict[l["seg_folder"].split("/panoptic_inference")[0]]
conversations = l["conversations"]
even_indices = list(range(2, len(conversations) + 1, 2))
random_even_index = random.choice(even_indices)
question_prob = np.random.uniform(0,1.)
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else:
question = random.choice(PANOPTIC_QUESTIONS)
conv = [{
"from": "human",
"value": question
},
{
"from": "gpt",
"value": labels[l["seg"]]
}]
final_conversations = conversations[:random_even_index] + conv + conversations[random_even_index:]
l["conversations"] = final_conversations
final_list_data_dict.append(l)
return final_list_data_dict
def get_object_data_split(data_args):
list_data_dict = []
for bucket in ["train", "unlabeled", "test"]:
panoptic_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "panoptic.txt"))
semantic_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "semantic.txt"))
instance_labels = _obtain_seg_texts(os.path.join(data_args.seg_image_folder, "coco_segm_text", bucket, "instance.txt"))
for key in panoptic_labels.keys():
assert key in semantic_labels.keys() and key in instance_labels.keys(), "Instance, semantic, and panoptic labels should have the same keys."
prob_task = np.random.uniform(0,1.)
question_prob = np.random.uniform(0,1.)
if prob_task < 0.33:
answer = semantic_labels[key]
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else:
question = random.choice(SEMANTIC_QUESTIONS)
seg_folder = "semantic_inference"
elif prob_task < 0.66:
answer = instance_labels[key]
if question_prob > 0.90:
question = "What objects can be seen in the image?"
else: | question = random.choice(INSTANCE_QUESTIONS) | 11 | 2023-12-17 07:46:27+00:00 | 12k |
DeepWok/mase | machop/chop/models/manual/opt_lora/modeling_opt_lora.py | [
{
"identifier": "LoraLayer",
"path": "machop/chop/models/manual/lora_modules.py",
"snippet": "class LoraLayer:\n def __init__(self, in_features: int, out_features: int, **kwargs):\n self.r = {}\n self.lora_alpha = {}\n self.scaling = {}\n self.lora_dropout = nn.ModuleDict(... | import random
import torch
import torch.utils.checkpoint
from typing import Optional, Tuple, Union
from torch import nn
from torch.nn import CrossEntropyLoss
from transformers.activations import ACT2FN
from transformers.modeling_outputs import (
BaseModelOutputWithPast,
CausalLMOutputWithPast,
)
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import logging, replace_return_docstrings
from ..lora_modules import LoraLayer, LinearLora
from .configuration_opt_lora import OPTLoraConfig
from .utils_opt import (
OPTAttention_attention_get_dtype_min,
OPTAttention_attention_mask_shape_check,
OPTAttention_attn_output_shape_check,
OPTAttention_attn_weight_dtype_check,
OPTAttention_attn_weights_shape_check,
OPTAttention_layer_head_mask_shape_check,
OPTAttention_reshape_qkv_back_for_bmm,
OPTAttention_self_shape,
OPTDecoder_check_head_mask,
OPTDecoder_self_prepare_decoder_attention,
OPTForCasualLM_compute_loss,
) | 7,203 | self.project_out = nn.Linear(
config.hidden_size, config.word_embed_proj_dim, bias=False
)
else:
self.project_out = None
if config.word_embed_proj_dim != config.hidden_size:
self.project_in = nn.Linear(
config.word_embed_proj_dim, config.hidden_size, bias=False
)
else:
self.project_in = None
# Note that the only purpose of `config._remove_final_layer_norm` is to keep backward compatibility
# with checkpoints that have been fine-tuned before transformers v4.20.1
# see https://github.com/facebookresearch/metaseq/pull/164
if config.do_layer_norm_before and not config._remove_final_layer_norm:
self.final_layer_norm = nn.LayerNorm(
config.hidden_size,
elementwise_affine=config.layer_norm_elementwise_affine,
)
else:
self.final_layer_norm = None
self.layers = nn.ModuleList(
[OPTDecoderLayer(config) for _ in range(config.num_hidden_layers)]
)
self.gradient_checkpointing = False
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.embed_tokens
def set_input_embeddings(self, value):
self.embed_tokens = value
def forward(
self,
input_ids: torch.LongTensor,
attention_mask: torch.Tensor = None,
head_mask: Optional[torch.Tensor] = None,
# inputs_embeds: Optional[torch.FloatTensor] = None,
return_dict: Optional[bool] = True,
output_attentions: Optional[bool] = False,
output_hidden_states: Optional[bool] = False,
) -> Union[Tuple, BaseModelOutputWithPast]:
r"""
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- 1 for tokens that are **not masked**,
- 0 for tokens that are **masked**.
[What are attention masks?](../glossary#attention-mask)
head_mask (`torch.Tensor` of shape `(num_hidden_layers, num_attention_heads)`, *optional*):
Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
- 1 indicates the head is **not masked**,
- 0 indicates the head is **masked**.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
for more detail.
"""
return_dict = self.config.return_dict if return_dict is None else return_dict
output_attentions = (
self.config.output_attentions
if output_attentions is None
else output_attentions
)
output_hidden_states = (
self.config.output_hidden_states
if output_hidden_states is None
else output_hidden_states
)
input_shape = input_ids.shape
input_ids = input_ids.view(-1, input_shape[-1])
# input_ids = OPTDecoder_view_input_ids(
# input_ids=input_ids, input_shape=input_shape
# )
past_key_values_length = 0
inputs_embeds = self.embed_tokens(input_ids)
# embed positions
# TODO: check this?
if attention_mask is None:
attention_mask = torch.ones(
inputs_embeds.shape[:2], dtype=torch.bool, device=inputs_embeds.device
)
pos_embeds = self.embed_positions(attention_mask, past_key_values_length)
attention_mask = OPTDecoder_self_prepare_decoder_attention(
attention_mask, input_shape, inputs_embeds, past_key_values_length
)
if self.project_in is not None:
inputs_embeds = self.project_in(inputs_embeds)
hidden_states = inputs_embeds + pos_embeds
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
# check if head_mask has a correct number of layers specified if desired
| # coding=utf-8
# ----------------------------------------------
# This is a traceable version of OPTModel and OPTForCausalLanguageModeling
# modified code based on HuggingFace's opt
# ----------------------------------------------
# Copyright 2022 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch OPT model."""
logger = logging.get_logger(__name__)
_CHECKPOINT_FOR_DOC = "facebook/opt-350m"
_CONFIG_FOR_DOC = "OPTLoraConfig"
# Base model docstring
_EXPECTED_OUTPUT_SHAPE = [1, 8, 1024]
OPT_PRETRAINED_MODEL_ARCHIVE_LIST = [
"facebook/opt-125m",
"facebook/opt-350m",
"facebook/opt-1.3b",
"facebook/opt-2.7b",
"facebook/opt-6.7b",
"facebook/opt-13b",
"facebook/opt-30b",
# See all OPT models at https://huggingface.co/models?filter=opt
]
class OPTLearnedPositionalEmbedding(nn.Embedding):
"""
This module learns positional embeddings up to a fixed maximum size.
"""
def __init__(self, num_embeddings: int, embedding_dim: int):
# OPT is set up so that if padding_idx is specified then offset the embedding ids by 2
# and adjust num_embeddings appropriately. Other models don't have this hack
self.offset = 2
super().__init__(num_embeddings + self.offset, embedding_dim)
def forward(
self, attention_mask: torch.LongTensor, past_key_values_length: int = 0
):
"""`input_ids_shape` is expected to be [bsz x seqlen]."""
attention_mask = attention_mask.long()
# create positions depending on attention_mask
positions = (
torch.cumsum(attention_mask, dim=1).type_as(attention_mask) * attention_mask
).long() - 1
# cut positions if `past_key_values_length` is > 0
positions = positions[:, past_key_values_length:]
return super().forward(positions + self.offset)
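The position computation above can be reproduced in plain Python: positions are `cumsum(attention_mask) * attention_mask - 1`, then OPT's offset of 2 is added, which sends every padding position to embedding index 1. A torch-free sketch:

```python
def opt_positions(attention_mask, offset=2):
    # cumsum(mask) * mask - 1, then shift by OPT's embedding offset
    pos, running = [], 0
    for m in attention_mask:
        running += m
        pos.append(running * m - 1 + offset)
    return pos

print(opt_positions([1, 1, 1, 0, 0]))  # [2, 3, 4, 1, 1]
```

Real tokens get consecutive indices starting at `offset`, while all padding slots collapse onto the same (unused) index 1, matching the learned table that was allocated with `num_embeddings + offset` rows.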
class OPTAttention(nn.Module):
"""
- FX-traceable Multi-headed attention from 'Attention Is All You Need' paper
- This module includes multi-head (k, q, v linear, attention), concat, and attention output linear
- To make this module traceable, `mode` must be one of integer 0, 1, 2, or 3.
- The default mode `None` (un-traceable mode) can be used for training (testing), but not for modify-sw.
"""
custom_node_leaf_patch = [
("embeddings", "BertEmbeddingsPatched", OPTLearnedPositionalEmbedding)
]
def __init__(
self,
config: OPTLoraConfig,
embed_dim: int,
num_heads: int,
layer_id: int = 0,
dropout: float = 0.0,
is_decoder: bool = False,
bias: bool = False,
):
super().__init__()
self.config = config
self.embed_dim = embed_dim
self.num_heads = num_heads
self.dropout = dropout
self.head_dim = embed_dim // num_heads
if (self.head_dim * num_heads) != self.embed_dim:
raise ValueError(
f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
f" and `num_heads`: {num_heads})."
)
self.scaling = self.head_dim**-0.5
self.is_decoder = is_decoder
lora_config = config.lora_config[f"model_layer_{layer_id}"]["self_attn"]
self.k_proj = LinearLora(
in_features=embed_dim,
out_features=embed_dim,
bias=bias,
config=lora_config["k_proj"],
)
self.v_proj = LinearLora(
in_features=embed_dim,
out_features=embed_dim,
bias=bias,
config=lora_config["v_proj"],
)
self.q_proj = LinearLora(
in_features=embed_dim,
out_features=embed_dim,
bias=bias,
config=lora_config["q_proj"],
)
self.o_proj = LinearLora(
in_features=embed_dim,
out_features=embed_dim,
bias=bias,
config=lora_config["o_proj"],
)
self.lora_config = lora_config
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
# key_value_states: Optional[torch.Tensor] = None,
# past_key_value: Optional[Tuple[torch.Tensor]] = None,
layer_head_mask: Optional[torch.Tensor] = None,
output_attentions: bool = False,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
"""Input shape: Batch x Time x Channel"""
bsz, tgt_len, _ = hidden_states.shape
# get query proj
query_states = self.q_proj(hidden_states) * self.scaling
# self_attention
# key_value_states is None, past_key_value is None
key_states = OPTAttention_self_shape(
self.k_proj(hidden_states),
seq_len=-1,
bsz=bsz,
num_heads=self.num_heads,
head_dim=self.head_dim,
)
value_states = OPTAttention_self_shape(
self.v_proj(hidden_states),
seq_len=-1,
bsz=bsz,
num_heads=self.num_heads,
head_dim=self.head_dim,
)
# proj_shape = OPTAttention_construct_proj_shape(
# bsz, self.num_heads, self.head_dim
# )
proj_shape = (bsz * self.num_heads, -1, self.head_dim)
query_states, key_states, value_states = OPTAttention_reshape_qkv_back_for_bmm(
query_states,
key_states,
value_states,
proj_shape=proj_shape,
tgt_len=tgt_len,
bsz=bsz,
num_heads=self.num_heads,
head_dim=self.head_dim,
)
src_len = key_states.shape[1]
attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
OPTAttention_attn_weights_shape_check(
attn_weights, bsz, self.num_heads, tgt_len, src_len
)
if attention_mask is not None:
OPTAttention_attention_mask_shape_check(
attention_mask, bsz, tgt_len, src_len
)
attn_weights = (
attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
+ attention_mask
)
attn_weights = torch.max(
attn_weights, OPTAttention_attention_get_dtype_min(attn_weights)
)
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
# Patched OPTAttention does not support FP16
# upcast to fp32 if the weights are in fp16. Please see https://github.com/huggingface/transformers/pull/17437
OPTAttention_attn_weight_dtype_check(attn_weights)
# *: Currently this model does not support torch.float16
# if attn_weights.dtype == torch.float16:
# attn_weights = nn.functional.softmax(
# attn_weights, dim=-1, dtype=torch.float32
# ).to(torch.float16)
# else:
# attn_weights = nn.functional.softmax(attn_weights, dim=-1)
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
if layer_head_mask is not None:
OPTAttention_layer_head_mask_shape_check(layer_head_mask, self.num_heads)
attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(
bsz, self.num_heads, tgt_len, src_len
)
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
if output_attentions:
# this operation is a bit awkward, but it's required to
# make sure that attn_weights keeps its gradient.
# In order to do so, attn_weights have to be reshaped
# twice and have to be reused in the following
attn_weights_reshaped = attn_weights.view(
bsz, self.num_heads, tgt_len, src_len
)
attn_weights = attn_weights_reshaped.view(
bsz * self.num_heads, tgt_len, src_len
)
else:
attn_weights_reshaped = None
attn_probs = nn.functional.dropout(
attn_weights, p=self.dropout, training=self.training
)
attn_output = torch.bmm(attn_probs, value_states)
OPTAttention_attn_output_shape_check(
attn_output, bsz, self.num_heads, tgt_len, self.head_dim
)
attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
attn_output = attn_output.transpose(1, 2)
        # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
        # partitioned across GPUs when using tensor-parallelism.
attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
attn_output = self.o_proj(attn_output)
return attn_output, attn_weights_reshaped
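Shape bookkeeping aside, the per-head computation above is standard scaled dot-product attention; note that the `head_dim ** -0.5` scaling is folded into the query projection's output. A single-query, torch-free sketch of one head:

```python
import math

def attend(q, k, v, head_dim):
    # scores = (q * scaling) . k_i, softmax over keys, weighted sum of values
    scaling = head_dim ** -0.5
    scores = [sum(qi * ki for qi, ki in zip(q, key)) * scaling for key in k]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * val[d] for w, val in zip(weights, v)) for d in range(head_dim)]

out = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]], 2)
assert abs(sum(out) - 1.0) < 1e-6  # a convex combination of the value rows
assert out[0] > out[1]             # the matching key gets the larger weight
```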
class OPTDecoderLayer(nn.Module):
def __init__(self, config: OPTLoraConfig):
super().__init__()
self.embed_dim = config.hidden_size
self.self_attn = OPTAttention(
config=config,
embed_dim=self.embed_dim,
num_heads=config.num_attention_heads,
dropout=config.attention_dropout,
is_decoder=True,
bias=config.enable_bias,
)
self.do_layer_norm_before = config.do_layer_norm_before
self.dropout = config.dropout
self.activation_fn = ACT2FN[config.activation_function]
self.self_attn_layer_norm = nn.LayerNorm(
self.embed_dim, elementwise_affine=config.layer_norm_elementwise_affine
)
self.fc1 = nn.Linear(self.embed_dim, config.ffn_dim, bias=config.enable_bias)
self.fc2 = nn.Linear(config.ffn_dim, self.embed_dim, bias=config.enable_bias)
self.final_layer_norm = nn.LayerNorm(
self.embed_dim, elementwise_affine=config.layer_norm_elementwise_affine
)
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
layer_head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False,
) -> Tuple[
torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
layer_head_mask (`torch.FloatTensor`, *optional*): mask for attention heads in a given layer of size
`(encoder_attention_heads,)`.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
"""
residual = hidden_states
# 125m, 1.7B, ..., 175B applies layer norm BEFORE attention
if self.do_layer_norm_before:
hidden_states = self.self_attn_layer_norm(hidden_states)
# Self Attention
# *: key_value_states is always None
hidden_states, self_attn_weights = self.self_attn(
hidden_states=hidden_states,
# past_key_value=None,
attention_mask=attention_mask,
layer_head_mask=layer_head_mask,
output_attentions=output_attentions,
# key_value_states=None,
)
hidden_states = nn.functional.dropout(
hidden_states, p=self.dropout, training=self.training
)
hidden_states = residual + hidden_states
# 350m applies layer norm AFTER attention
if not self.do_layer_norm_before:
hidden_states = self.self_attn_layer_norm(hidden_states)
# Fully Connected
hidden_states_shape = hidden_states.shape
hidden_states = hidden_states.reshape(-1, hidden_states.shape[-1])
residual = hidden_states
# 125m, 1.7B, ..., 175B applies layer norm BEFORE attention
if self.do_layer_norm_before:
hidden_states = self.final_layer_norm(hidden_states)
hidden_states = self.fc1(hidden_states)
hidden_states = self.activation_fn(hidden_states)
hidden_states = self.fc2(hidden_states)
hidden_states = nn.functional.dropout(
hidden_states, p=self.dropout, training=self.training
)
hidden_states = (residual + hidden_states).view(hidden_states_shape)
# 350m applies layer norm AFTER attention
if not self.do_layer_norm_before:
hidden_states = self.final_layer_norm(hidden_states)
outputs = (hidden_states,)
if output_attentions:
outputs += (self_attn_weights,)
return outputs
class OPTPreTrainedModel(PreTrainedModel):
config_class = OPTLoraConfig
base_model_prefix = "model"
supports_gradient_checkpointing = True
_no_split_modules = ["OPTDecoderLayer"]
_keys_to_ignore_on_load_unexpected = [r"decoder\.version"]
def _init_weights(self, module):
std = self.config.init_std
if isinstance(module, nn.Linear):
module.weight.data.normal_(mean=0.0, std=std)
if module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.Embedding):
module.weight.data.normal_(mean=0.0, std=std)
if module.padding_idx is not None:
module.weight.data[module.padding_idx].zero_()
def _set_gradient_checkpointing(self, module, value=False):
if isinstance(module, OPTDecoder):
module.gradient_checkpointing = value
class OPTDecoder(OPTPreTrainedModel):
"""
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`OPTDecoderLayer`]
Args:
config: OPTConfig
"""
custom_node_leaf_patch = [
(
"embed_positions",
"OPTLearnedPositionalEmbedding",
OPTLearnedPositionalEmbedding,
)
]
def __init__(self, config: OPTLoraConfig):
super().__init__(config)
self.dropout = config.dropout
self.layerdrop = config.layerdrop
self.padding_idx = config.pad_token_id
self.max_target_positions = config.max_position_embeddings
self.vocab_size = config.vocab_size
self.embed_tokens = nn.Embedding(
config.vocab_size, config.word_embed_proj_dim, self.padding_idx
)
self.embed_positions = OPTLearnedPositionalEmbedding(
config.max_position_embeddings, config.hidden_size
)
if config.word_embed_proj_dim != config.hidden_size:
self.project_out = nn.Linear(
config.hidden_size, config.word_embed_proj_dim, bias=False
)
else:
self.project_out = None
if config.word_embed_proj_dim != config.hidden_size:
self.project_in = nn.Linear(
config.word_embed_proj_dim, config.hidden_size, bias=False
)
else:
self.project_in = None
# Note that the only purpose of `config._remove_final_layer_norm` is to keep backward compatibility
# with checkpoints that have been fine-tuned before transformers v4.20.1
# see https://github.com/facebookresearch/metaseq/pull/164
if config.do_layer_norm_before and not config._remove_final_layer_norm:
self.final_layer_norm = nn.LayerNorm(
config.hidden_size,
elementwise_affine=config.layer_norm_elementwise_affine,
)
else:
self.final_layer_norm = None
self.layers = nn.ModuleList(
[OPTDecoderLayer(config) for _ in range(config.num_hidden_layers)]
)
self.gradient_checkpointing = False
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.embed_tokens
def set_input_embeddings(self, value):
self.embed_tokens = value
def forward(
self,
input_ids: torch.LongTensor,
attention_mask: torch.Tensor = None,
head_mask: Optional[torch.Tensor] = None,
# inputs_embeds: Optional[torch.FloatTensor] = None,
return_dict: Optional[bool] = True,
output_attentions: Optional[bool] = False,
output_hidden_states: Optional[bool] = False,
) -> Union[Tuple, BaseModelOutputWithPast]:
r"""
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- 1 for tokens that are **not masked**,
- 0 for tokens that are **masked**.
[What are attention masks?](../glossary#attention-mask)
head_mask (`torch.Tensor` of shape `(num_hidden_layers, num_attention_heads)`, *optional*):
Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
- 1 indicates the head is **not masked**,
- 0 indicates the head is **masked**.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
for more detail.
"""
return_dict = self.config.return_dict if return_dict is None else return_dict
output_attentions = (
self.config.output_attentions
if output_attentions is None
else output_attentions
)
output_hidden_states = (
self.config.output_hidden_states
if output_hidden_states is None
else output_hidden_states
)
input_shape = input_ids.shape
input_ids = input_ids.view(-1, input_shape[-1])
# input_ids = OPTDecoder_view_input_ids(
# input_ids=input_ids, input_shape=input_shape
# )
past_key_values_length = 0
inputs_embeds = self.embed_tokens(input_ids)
# embed positions
# TODO: check this?
if attention_mask is None:
attention_mask = torch.ones(
inputs_embeds.shape[:2], dtype=torch.bool, device=inputs_embeds.device
)
pos_embeds = self.embed_positions(attention_mask, past_key_values_length)
attention_mask = OPTDecoder_self_prepare_decoder_attention(
attention_mask, input_shape, inputs_embeds, past_key_values_length
)
if self.project_in is not None:
inputs_embeds = self.project_in(inputs_embeds)
hidden_states = inputs_embeds + pos_embeds
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
# check if head_mask has a correct number of layers specified if desired | OPTDecoder_check_head_mask(head_mask, self.layers) | 11 | 2023-12-18 12:50:53+00:00 | 12k |
byeongjun-park/HarmonyView | ldm/models/diffusion/sync_dreamer.py | [
{
"identifier": "read_pickle",
"path": "ldm/base_utils.py",
"snippet": "def read_pickle(pkl_path):\n with open(pkl_path, 'rb') as f:\n return pickle.load(f)"
},
{
"identifier": "concat_images_list",
"path": "ldm/base_utils.py",
"snippet": "def concat_images_list(*args,vert=Fals... | from pathlib import Path
from skimage.io import imsave
from torch.optim.lr_scheduler import LambdaLR
from tqdm import tqdm
from ldm.base_utils import read_pickle, concat_images_list
from ldm.models.diffusion.sync_dreamer_utils import get_warp_coordinates, create_target_volume
from ldm.models.diffusion.sync_dreamer_network import NoisyTargetViewEncoder, SpatialTime3DNet, FrustumTV3DNet
from ldm.modules.diffusionmodules.util import make_ddim_timesteps, timestep_embedding
from ldm.modules.encoders.modules import FrozenCLIPImageEmbedder
from ldm.util import instantiate_from_config
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np | 7,478 | image_target = batch['target_image'].permute(0, 1, 4, 2, 3) # b,n,3,h,w
N = image_target.shape[1]
x = [self.encode_first_stage(image_target[:,ni], True) for ni in range(N)]
x = torch.stack(x, 1) # b,n,4,h//8,w//8
else:
x = None
image_input = batch['input_image'].permute(0, 3, 1, 2)
elevation_input = batch['input_elevation'][:, 0] # b
x_input = self.encode_first_stage(image_input)
input_info = {'image': image_input, 'elevation': elevation_input, 'x': x_input}
with torch.no_grad():
clip_embed = self.clip_image_encoder.encode(image_input)
return x, clip_embed, input_info
def embed_time(self, t):
t_embed = timestep_embedding(t, self.time_embed_dim, repeat_only=False) # B,TED
t_embed = self.time_embed(t_embed) # B,TED
return t_embed
def get_target_view_feats(self, x_input, spatial_volume, clip_embed, t_embed, v_embed, target_index):
"""
@param x_input: B,4,H,W
@param spatial_volume: B,C,V,V,V
@param clip_embed: B,1,768
@param t_embed: B,t_dim
@param v_embed: B,N,v_dim
@param target_index: B,TN
@return:
tensors of size B*TN,*
"""
B, _, H, W = x_input.shape
frustum_volume_feats, frustum_volume_depth = self.spatial_volume.construct_view_frustum_volume(spatial_volume, t_embed, v_embed, self.poses, self.Ks, target_index)
# clip
TN = target_index.shape[1]
v_embed_ = v_embed[torch.arange(B)[:,None], target_index].view(B*TN, self.viewpoint_dim) # B*TN,v_dim
clip_embed_ = clip_embed.unsqueeze(1).repeat(1,TN,1,1).view(B*TN,1,768)
clip_embed_ = self.cc_projection(torch.cat([clip_embed_, v_embed_.unsqueeze(1)], -1)) # B*TN,1,768
x_input_ = x_input.unsqueeze(1).repeat(1, TN, 1, 1, 1).view(B * TN, 4, H, W)
x_concat = x_input_
return clip_embed_, frustum_volume_feats, x_concat
def training_step(self, batch):
B = batch['target_image'].shape[0]
time_steps = torch.randint(0, self.num_timesteps, (B,), device=self.device).long()
x, clip_embed, input_info = self.prepare(batch)
x_noisy, noise = self.add_noise(x, time_steps) # B,N,4,H,W
N = self.view_num
target_index = torch.randint(0, N, (B, 1), device=self.device).long() # B, 1
v_embed = self.get_viewpoint_embedding(B, input_info['elevation']) # N,v_dim
t_embed = self.embed_time(time_steps)
spatial_volume = self.spatial_volume.construct_spatial_volume(x_noisy, t_embed, v_embed, self.poses, self.Ks)
clip_embed, volume_feats, x_concat = self.get_target_view_feats(input_info['x'], spatial_volume, clip_embed, t_embed, v_embed, target_index)
x_noisy_ = x_noisy[torch.arange(B)[:,None],target_index][:,0] # B,4,H,W
noise_predict = self.model(x_noisy_, time_steps, clip_embed, volume_feats, x_concat, is_train=True) # B,4,H,W
noise_target = noise[torch.arange(B)[:,None],target_index][:,0] # B,4,H,W
# loss simple for diffusion
loss_simple = torch.nn.functional.mse_loss(noise_target, noise_predict, reduction='none')
loss = loss_simple.mean()
self.log('sim', loss_simple.mean(), prog_bar=True, logger=True, on_step=True, on_epoch=True, rank_zero_only=True)
# log others
lr = self.optimizers().param_groups[0]['lr']
self.log('lr', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False, rank_zero_only=True)
self.log("step", self.global_step, prog_bar=True, logger=True, on_step=True, on_epoch=False, rank_zero_only=True)
return loss
def add_noise(self, x_start, t):
"""
@param x_start: B,*
@param t: B,
@return:
"""
B = x_start.shape[0]
noise = torch.randn_like(x_start) # B,*
sqrt_alphas_cumprod_ = self.sqrt_alphas_cumprod[t] # B,
sqrt_one_minus_alphas_cumprod_ = self.sqrt_one_minus_alphas_cumprod[t] # B
sqrt_alphas_cumprod_ = sqrt_alphas_cumprod_.view(B, *[1 for _ in range(len(x_start.shape)-1)])
sqrt_one_minus_alphas_cumprod_ = sqrt_one_minus_alphas_cumprod_.view(B, *[1 for _ in range(len(x_start.shape)-1)])
x_noisy = sqrt_alphas_cumprod_ * x_start + sqrt_one_minus_alphas_cumprod_ * noise
return x_noisy, noise
def sample(self, sampler, batch, cfg_scale, return_inter_results=False, inter_interval=50, inter_view_interval=2):
_, clip_embed, input_info = self.prepare(batch)
x_sample, inter = sampler.sample(input_info, clip_embed, unconditional_scale=cfg_scale, log_every_t=inter_interval)
N = x_sample.shape[1]
x_sample = torch.stack([self.decode_first_stage(x_sample[:, ni]) for ni in range(N)], 1)
if return_inter_results:
torch.cuda.synchronize()
torch.cuda.empty_cache()
inter = torch.stack(inter['x_inter'], 2) # # B,N,T,C,H,W
B,N,T,C,H,W = inter.shape
inter_results = []
for ni in tqdm(range(0, N, inter_view_interval)):
inter_results_ = []
for ti in range(T):
inter_results_.append(self.decode_first_stage(inter[:, ni, ti]))
inter_results.append(torch.stack(inter_results_, 1)) # B,T,3,H,W
inter_results = torch.stack(inter_results,1) # B,N,T,3,H,W
return x_sample, inter_results
else:
return x_sample
def log_image(self, x_sample, batch, step, output_dir):
process = lambda x: ((torch.clip(x, min=-1, max=1).cpu().numpy() * 0.5 + 0.5) * 255).astype(np.uint8)
B = x_sample.shape[0]
N = x_sample.shape[1]
image_cond = []
for bi in range(B):
|
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def disable_training_module(module: nn.Module):
module = module.eval()
module.train = disabled_train
for para in module.parameters():
para.requires_grad = False
return module
def repeat_to_batch(tensor, B, VN):
t_shape = tensor.shape
ones = [1 for _ in range(len(t_shape)-1)]
tensor_new = tensor.view(B,1,*t_shape[1:]).repeat(1,VN,*ones).view(B*VN,*t_shape[1:])
return tensor_new
class UNetWrapper(nn.Module):
def __init__(self, diff_model_config, drop_conditions=False, drop_scheme='default', use_zero_123=True):
super().__init__()
self.diffusion_model = instantiate_from_config(diff_model_config)
self.drop_conditions = drop_conditions
self.drop_scheme=drop_scheme
self.use_zero_123 = use_zero_123
def drop(self, cond, mask):
shape = cond.shape
B = shape[0]
cond = mask.view(B,*[1 for _ in range(len(shape)-1)]) * cond
return cond
def get_trainable_parameters(self):
return self.diffusion_model.get_trainable_parameters()
def get_drop_scheme(self, B, device):
if self.drop_scheme=='default':
random = torch.rand(B, dtype=torch.float32, device=device)
drop_clip = (random > 0.15) & (random <= 0.2)
drop_volume = (random > 0.1) & (random <= 0.15)
drop_concat = (random > 0.05) & (random <= 0.1)
drop_all = random <= 0.05
else:
raise NotImplementedError
return drop_clip, drop_volume, drop_concat, drop_all
def forward(self, x, t, clip_embed, volume_feats, x_concat, is_train=False):
"""
@param x: B,4,H,W
@param t: B,
@param clip_embed: B,M,768
@param volume_feats: B,C,D,H,W
@param x_concat: B,C,H,W
@param is_train:
@return:
"""
if self.drop_conditions and is_train:
B = x.shape[0]
drop_clip, drop_volume, drop_concat, drop_all = self.get_drop_scheme(B, x.device)
clip_mask = 1.0 - (drop_clip | drop_all).float()
clip_embed = self.drop(clip_embed, clip_mask)
volume_mask = 1.0 - (drop_volume | drop_all).float()
for k, v in volume_feats.items():
volume_feats[k] = self.drop(v, mask=volume_mask)
concat_mask = 1.0 - (drop_concat | drop_all).float()
x_concat = self.drop(x_concat, concat_mask)
if self.use_zero_123:
# zero123 does not multiply this when encoding, maybe a bug for zero123
first_stage_scale_factor = 0.18215
x_concat_ = x_concat * 1.0
x_concat_[:, :4] = x_concat_[:, :4] / first_stage_scale_factor
else:
x_concat_ = x_concat
x = torch.cat([x, x_concat_], 1)
pred = self.diffusion_model(x, t, clip_embed, source_dict=volume_feats)
return pred
def predict_with_unconditional_scale(self, x, t, clip_embed, volume_feats, x_concat, unconditional_scale):
x_ = torch.cat([x] * 2, 0)
t_ = torch.cat([t] * 2, 0)
clip_embed_ = torch.cat([clip_embed, torch.zeros_like(clip_embed)], 0)
v_ = {}
for k, v in volume_feats.items():
v_[k] = torch.cat([v, torch.zeros_like(v)], 0)
x_concat_ = torch.cat([x_concat, torch.zeros_like(x_concat)], 0)
if self.use_zero_123:
# zero123 does not multiply this when encoding, maybe a bug for zero123
first_stage_scale_factor = 0.18215
x_concat_[:, :4] = x_concat_[:, :4] / first_stage_scale_factor
x_ = torch.cat([x_, x_concat_], 1)
s, s_uc = self.diffusion_model(x_, t_, clip_embed_, source_dict=v_).chunk(2)
s = s_uc + unconditional_scale * (s - s_uc)
return s
def predict_with_decomposed_unconditional_scales(self, x, t, clip_embed, volume_feats, x_concat, unconditional_scales):
x_ = torch.cat([x] * 3, 0)
t_ = torch.cat([t] * 3, 0)
clip_embed_ = torch.cat([clip_embed, torch.zeros_like(clip_embed), clip_embed], 0)
x_concat_ = torch.cat([x_concat, torch.zeros_like(x_concat), x_concat*4], 0)
v_ = {}
for k, v in volume_feats.items():
v_[k] = torch.cat([v, v, torch.zeros_like(v)], 0)
if self.use_zero_123:
# zero123 does not multiply this when encoding, maybe a bug for zero123
first_stage_scale_factor = 0.18215
x_concat_[:, :4] = x_concat_[:, :4] / first_stage_scale_factor
x_ = torch.cat([x_, x_concat_], 1)
s, s_uc1, s_uc2 = self.diffusion_model(x_, t_, clip_embed_, source_dict=v_).chunk(3)
s = s + unconditional_scales[0] * (s - s_uc1) + unconditional_scales[1] * (s - s_uc2)
return s
class SpatialVolumeNet(nn.Module):
def __init__(self, time_dim, view_dim, view_num,
input_image_size=256, frustum_volume_depth=48,
spatial_volume_size=32, spatial_volume_length=0.5,
frustum_volume_length=0.86603 # sqrt(3)/2
):
super().__init__()
self.target_encoder = NoisyTargetViewEncoder(time_dim, view_dim, output_dim=16)
self.spatial_volume_feats = SpatialTime3DNet(input_dim=16 * view_num, time_dim=time_dim, dims=(64, 128, 256, 512))
self.frustum_volume_feats = FrustumTV3DNet(64, time_dim, view_dim, dims=(64, 128, 256, 512))
self.frustum_volume_length = frustum_volume_length
self.input_image_size = input_image_size
self.spatial_volume_size = spatial_volume_size
self.spatial_volume_length = spatial_volume_length
self.frustum_volume_size = self.input_image_size // 8
self.frustum_volume_depth = frustum_volume_depth
self.time_dim = time_dim
self.view_dim = view_dim
self.default_origin_depth = 1.5 # our rendered images are 1.5 away from the origin; we assume the camera is 1.5 away from the origin
def construct_spatial_volume(self, x, t_embed, v_embed, target_poses, target_Ks):
"""
@param x: B,N,4,H,W
@param t_embed: B,t_dim
@param v_embed: B,N,v_dim
@param target_poses: N,3,4
@param target_Ks: N,3,3
@return:
"""
B, N, _, H, W = x.shape
V = self.spatial_volume_size
device = x.device
spatial_volume_verts = torch.linspace(-self.spatial_volume_length, self.spatial_volume_length, V, dtype=torch.float32, device=device)
spatial_volume_verts = torch.stack(torch.meshgrid(spatial_volume_verts, spatial_volume_verts, spatial_volume_verts, indexing='ij'), -1)
spatial_volume_verts = spatial_volume_verts.reshape(1, V ** 3, 3)[:, :, (2, 1, 0)]
spatial_volume_verts = spatial_volume_verts.view(1, V, V, V, 3).permute(0, 4, 1, 2, 3).repeat(B, 1, 1, 1, 1)
# encode source features
t_embed_ = t_embed.view(B, 1, self.time_dim).repeat(1, N, 1).view(B, N, self.time_dim)
v_embed_ = v_embed
target_Ks = target_Ks.unsqueeze(0).repeat(B, 1, 1, 1)
target_poses = target_poses.unsqueeze(0).repeat(B, 1, 1, 1)
# extract 2D image features
spatial_volume_feats = []
# project source features
for ni in range(0, N):
pose_source_ = target_poses[:, ni]
K_source_ = target_Ks[:, ni]
x_ = self.target_encoder(x[:, ni], t_embed_[:, ni], v_embed_[:, ni])
C = x_.shape[1]
coords_source = get_warp_coordinates(spatial_volume_verts, x_.shape[-1], self.input_image_size, K_source_, pose_source_).view(B, V, V * V, 2)
unproj_feats_ = F.grid_sample(x_, coords_source, mode='bilinear', padding_mode='zeros', align_corners=True)
unproj_feats_ = unproj_feats_.view(B, C, V, V, V)
spatial_volume_feats.append(unproj_feats_)
spatial_volume_feats = torch.stack(spatial_volume_feats, 1) # B,N,C,V,V,V
N = spatial_volume_feats.shape[1]
spatial_volume_feats = spatial_volume_feats.view(B, N*C, V, V, V)
spatial_volume_feats = self.spatial_volume_feats(spatial_volume_feats, t_embed) # b,64,32,32,32
return spatial_volume_feats
def construct_view_frustum_volume(self, spatial_volume, t_embed, v_embed, poses, Ks, target_indices):
"""
@param spatial_volume: B,C,V,V,V
@param t_embed: B,t_dim
@param v_embed: B,N,v_dim
@param poses: N,3,4
@param Ks: N,3,3
@param target_indices: B,TN
@return: B*TN,C,H,W
"""
B, TN = target_indices.shape
H, W = self.frustum_volume_size, self.frustum_volume_size
D = self.frustum_volume_depth
V = self.spatial_volume_size
near = torch.ones(B * TN, 1, H, W, dtype=spatial_volume.dtype, device=spatial_volume.device) * self.default_origin_depth - self.frustum_volume_length
far = torch.ones(B * TN, 1, H, W, dtype=spatial_volume.dtype, device=spatial_volume.device) * self.default_origin_depth + self.frustum_volume_length
target_indices = target_indices.view(B*TN) # B*TN
poses_ = poses[target_indices] # B*TN,3,4
Ks_ = Ks[target_indices] # B*TN,3,3
volume_xyz, volume_depth = create_target_volume(D, self.frustum_volume_size, self.input_image_size, poses_, Ks_, near, far) # B*TN,3 or 1,D,H,W
volume_xyz_ = volume_xyz / self.spatial_volume_length # since the spatial volume is constructed in [-spatial_volume_length,spatial_volume_length]
volume_xyz_ = volume_xyz_.permute(0, 2, 3, 4, 1) # B*TN,D,H,W,3
spatial_volume_ = spatial_volume.unsqueeze(1).repeat(1, TN, 1, 1, 1, 1).view(B * TN, -1, V, V, V)
volume_feats = F.grid_sample(spatial_volume_, volume_xyz_, mode='bilinear', padding_mode='zeros', align_corners=True) # B*TN,C,D,H,W
v_embed_ = v_embed[torch.arange(B)[:,None], target_indices.view(B,TN)].view(B*TN, -1) # B*TN
t_embed_ = t_embed.unsqueeze(1).repeat(1,TN,1).view(B*TN,-1)
volume_feats_dict = self.frustum_volume_feats(volume_feats, t_embed_, v_embed_)
return volume_feats_dict, volume_depth
class SyncMultiviewDiffusion(pl.LightningModule):
def __init__(self, unet_config, scheduler_config,
finetune_unet=False, finetune_projection=True,
view_num=16, image_size=256,
cfg_scale=3.0, output_num=8, batch_view_num=4,
drop_conditions=False, drop_scheme='default',
clip_image_encoder_path="/apdcephfs/private_rondyliu/projects/clip/ViT-L-14.pt",
sample_type='ddim', sample_steps=200):
super().__init__()
self.finetune_unet = finetune_unet
self.finetune_projection = finetune_projection
self.view_num = view_num
self.viewpoint_dim = 4
self.output_num = output_num
self.image_size = image_size
self.batch_view_num = batch_view_num
self.cfg_scale = cfg_scale
self.clip_image_encoder_path = clip_image_encoder_path
self._init_time_step_embedding()
self._init_first_stage()
self._init_schedule()
self._init_multiview()
self._init_clip_image_encoder()
self._init_clip_projection()
self.spatial_volume = SpatialVolumeNet(self.time_embed_dim, self.viewpoint_dim, self.view_num)
self.model = UNetWrapper(unet_config, drop_conditions=drop_conditions, drop_scheme=drop_scheme)
self.scheduler_config = scheduler_config
latent_size = image_size//8
if sample_type=='ddim':
self.sampler = SyncDDIMSampler(self, sample_steps , "uniform", 1.0, latent_size=latent_size)
else:
raise NotImplementedError
def _init_clip_projection(self):
self.cc_projection = nn.Linear(772, 768)
nn.init.eye_(list(self.cc_projection.parameters())[0][:768, :768])
nn.init.zeros_(list(self.cc_projection.parameters())[1])
self.cc_projection.requires_grad_(True)
if not self.finetune_projection:
disable_training_module(self.cc_projection)
def _init_multiview(self):
K, azs, _, _, poses = read_pickle(f'meta_info/camera-{self.view_num}.pkl')
default_image_size = 256
ratio = self.image_size/default_image_size
K = np.diag([ratio,ratio,1]) @ K
K = torch.from_numpy(K.astype(np.float32)) # [3,3]
K = K.unsqueeze(0).repeat(self.view_num,1,1) # N,3,3
poses = torch.from_numpy(poses.astype(np.float32)) # N,3,4
self.register_buffer('poses', poses)
self.register_buffer('Ks', K)
azs = (azs + np.pi) % (np.pi * 2) - np.pi # scale to [-pi,pi] and the index=0 has az=0
self.register_buffer('azimuth', torch.from_numpy(azs.astype(np.float32)))
def get_viewpoint_embedding(self, batch_size, elevation_ref):
"""
@param batch_size:
@param elevation_ref: B
@return:
"""
azimuth_input = self.azimuth[0].unsqueeze(0) # 1
azimuth_target = self.azimuth # N
elevation_input = -elevation_ref # note that zero123 use a negative elevation here!!!
elevation_target = -np.deg2rad(30)
d_e = elevation_target - elevation_input # B
N = self.azimuth.shape[0]
B = batch_size
d_e = d_e.unsqueeze(1).repeat(1, N)
d_a = azimuth_target - azimuth_input # N
d_a = d_a.unsqueeze(0).repeat(B, 1)
d_z = torch.zeros_like(d_a)
embedding = torch.stack([d_e, torch.sin(d_a), torch.cos(d_a), d_z], -1) # B,N,4
return embedding
def _init_first_stage(self):
first_stage_config={
"target": "ldm.models.autoencoder.AutoencoderKL",
"params": {
"embed_dim": 4,
"monitor": "val/rec_loss",
"ddconfig":{
"double_z": True,
"z_channels": 4,
"resolution": self.image_size,
"in_channels": 3,
"out_ch": 3,
"ch": 128,
"ch_mult": [1,2,4,4],
"num_res_blocks": 2,
"attn_resolutions": [],
"dropout": 0.0
},
"lossconfig": {"target": "torch.nn.Identity"},
}
}
self.first_stage_scale_factor = 0.18215
self.first_stage_model = instantiate_from_config(first_stage_config)
self.first_stage_model = disable_training_module(self.first_stage_model)
def _init_clip_image_encoder(self):
self.clip_image_encoder = FrozenCLIPImageEmbedder(model=self.clip_image_encoder_path)
self.clip_image_encoder = disable_training_module(self.clip_image_encoder)
def _init_schedule(self):
self.num_timesteps = 1000
linear_start = 0.00085
linear_end = 0.0120
num_timesteps = 1000
betas = torch.linspace(linear_start ** 0.5, linear_end ** 0.5, num_timesteps, dtype=torch.float32) ** 2 # T
assert betas.shape[0] == self.num_timesteps
# all in float64 first
alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, dim=0) # T
alphas_cumprod_prev = torch.cat([torch.ones(1, dtype=torch.float64), alphas_cumprod[:-1]], 0)
posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) # T
posterior_log_variance_clipped = torch.log(torch.clamp(posterior_variance, min=1e-20))
posterior_log_variance_clipped = torch.clamp(posterior_log_variance_clipped, min=-10)
self.register_buffer("betas", betas.float())
self.register_buffer("alphas", alphas.float())
self.register_buffer("alphas_cumprod", alphas_cumprod.float())
self.register_buffer("sqrt_alphas_cumprod", torch.sqrt(alphas_cumprod).float())
self.register_buffer("sqrt_one_minus_alphas_cumprod", torch.sqrt(1 - alphas_cumprod).float())
self.register_buffer("posterior_variance", posterior_variance.float())
self.register_buffer('posterior_log_variance_clipped', posterior_log_variance_clipped.float())
def _init_time_step_embedding(self):
self.time_embed_dim = 256
self.time_embed = nn.Sequential(
nn.Linear(self.time_embed_dim, self.time_embed_dim),
nn.SiLU(True),
nn.Linear(self.time_embed_dim, self.time_embed_dim),
)
def encode_first_stage(self, x, sample=True):
with torch.no_grad():
posterior = self.first_stage_model.encode(x) # b,4,h//8,w//8
if sample:
return posterior.sample().detach() * self.first_stage_scale_factor
else:
return posterior.mode().detach() * self.first_stage_scale_factor
def decode_first_stage(self, z):
with torch.no_grad():
z = 1. / self.first_stage_scale_factor * z
return self.first_stage_model.decode(z)
def prepare(self, batch):
# encode target
if 'target_image' in batch:
image_target = batch['target_image'].permute(0, 1, 4, 2, 3) # b,n,3,h,w
N = image_target.shape[1]
x = [self.encode_first_stage(image_target[:,ni], True) for ni in range(N)]
x = torch.stack(x, 1) # b,n,4,h//8,w//8
else:
x = None
image_input = batch['input_image'].permute(0, 3, 1, 2)
elevation_input = batch['input_elevation'][:, 0] # b
x_input = self.encode_first_stage(image_input)
input_info = {'image': image_input, 'elevation': elevation_input, 'x': x_input}
with torch.no_grad():
clip_embed = self.clip_image_encoder.encode(image_input)
return x, clip_embed, input_info
def embed_time(self, t):
t_embed = timestep_embedding(t, self.time_embed_dim, repeat_only=False) # B,TED
t_embed = self.time_embed(t_embed) # B,TED
return t_embed
def get_target_view_feats(self, x_input, spatial_volume, clip_embed, t_embed, v_embed, target_index):
"""
@param x_input: B,4,H,W
@param spatial_volume: B,C,V,V,V
@param clip_embed: B,1,768
@param t_embed: B,t_dim
@param v_embed: B,N,v_dim
@param target_index: B,TN
@return:
tensors of size B*TN,*
"""
B, _, H, W = x_input.shape
frustum_volume_feats, frustum_volume_depth = self.spatial_volume.construct_view_frustum_volume(spatial_volume, t_embed, v_embed, self.poses, self.Ks, target_index)
# clip
TN = target_index.shape[1]
v_embed_ = v_embed[torch.arange(B)[:,None], target_index].view(B*TN, self.viewpoint_dim) # B*TN,v_dim
clip_embed_ = clip_embed.unsqueeze(1).repeat(1,TN,1,1).view(B*TN,1,768)
clip_embed_ = self.cc_projection(torch.cat([clip_embed_, v_embed_.unsqueeze(1)], -1)) # B*TN,1,768
x_input_ = x_input.unsqueeze(1).repeat(1, TN, 1, 1, 1).view(B * TN, 4, H, W)
x_concat = x_input_
return clip_embed_, frustum_volume_feats, x_concat
def training_step(self, batch):
B = batch['target_image'].shape[0]
time_steps = torch.randint(0, self.num_timesteps, (B,), device=self.device).long()
x, clip_embed, input_info = self.prepare(batch)
x_noisy, noise = self.add_noise(x, time_steps) # B,N,4,H,W
N = self.view_num
target_index = torch.randint(0, N, (B, 1), device=self.device).long() # B, 1
v_embed = self.get_viewpoint_embedding(B, input_info['elevation']) # N,v_dim
t_embed = self.embed_time(time_steps)
spatial_volume = self.spatial_volume.construct_spatial_volume(x_noisy, t_embed, v_embed, self.poses, self.Ks)
clip_embed, volume_feats, x_concat = self.get_target_view_feats(input_info['x'], spatial_volume, clip_embed, t_embed, v_embed, target_index)
x_noisy_ = x_noisy[torch.arange(B)[:,None],target_index][:,0] # B,4,H,W
noise_predict = self.model(x_noisy_, time_steps, clip_embed, volume_feats, x_concat, is_train=True) # B,4,H,W
noise_target = noise[torch.arange(B)[:,None],target_index][:,0] # B,4,H,W
# loss simple for diffusion
loss_simple = torch.nn.functional.mse_loss(noise_target, noise_predict, reduction='none')
loss = loss_simple.mean()
self.log('sim', loss_simple.mean(), prog_bar=True, logger=True, on_step=True, on_epoch=True, rank_zero_only=True)
# log others
lr = self.optimizers().param_groups[0]['lr']
self.log('lr', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False, rank_zero_only=True)
self.log("step", self.global_step, prog_bar=True, logger=True, on_step=True, on_epoch=False, rank_zero_only=True)
return loss
def add_noise(self, x_start, t):
"""
@param x_start: B,*
@param t: B,
@return:
"""
B = x_start.shape[0]
noise = torch.randn_like(x_start) # B,*
sqrt_alphas_cumprod_ = self.sqrt_alphas_cumprod[t] # B,
sqrt_one_minus_alphas_cumprod_ = self.sqrt_one_minus_alphas_cumprod[t] # B
sqrt_alphas_cumprod_ = sqrt_alphas_cumprod_.view(B, *[1 for _ in range(len(x_start.shape)-1)])
sqrt_one_minus_alphas_cumprod_ = sqrt_one_minus_alphas_cumprod_.view(B, *[1 for _ in range(len(x_start.shape)-1)])
x_noisy = sqrt_alphas_cumprod_ * x_start + sqrt_one_minus_alphas_cumprod_ * noise
return x_noisy, noise
def sample(self, sampler, batch, cfg_scale, return_inter_results=False, inter_interval=50, inter_view_interval=2):
_, clip_embed, input_info = self.prepare(batch)
x_sample, inter = sampler.sample(input_info, clip_embed, unconditional_scale=cfg_scale, log_every_t=inter_interval)
N = x_sample.shape[1]
x_sample = torch.stack([self.decode_first_stage(x_sample[:, ni]) for ni in range(N)], 1)
if return_inter_results:
torch.cuda.synchronize()
torch.cuda.empty_cache()
            inter = torch.stack(inter['x_inter'], 2)  # B,N,T,C,H,W
B,N,T,C,H,W = inter.shape
inter_results = []
for ni in tqdm(range(0, N, inter_view_interval)):
inter_results_ = []
for ti in range(T):
inter_results_.append(self.decode_first_stage(inter[:, ni, ti]))
inter_results.append(torch.stack(inter_results_, 1)) # B,T,3,H,W
inter_results = torch.stack(inter_results,1) # B,N,T,3,H,W
return x_sample, inter_results
else:
return x_sample
def log_image(self, x_sample, batch, step, output_dir):
process = lambda x: ((torch.clip(x, min=-1, max=1).cpu().numpy() * 0.5 + 0.5) * 255).astype(np.uint8)
B = x_sample.shape[0]
N = x_sample.shape[1]
image_cond = []
for bi in range(B): | img_pr_ = concat_images_list(process(batch['input_image'][bi]),*[process(x_sample[bi, ni].permute(1, 2, 0)) for ni in range(N)]) | 1 | 2023-12-21 04:44:00+00:00 | 12k |
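The `add_noise` routine above is the standard DDPM forward (q-sample) step, `x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps`. A self-contained NumPy sketch of the same broadcasting logic — the linear schedule in `make_schedule` and its defaults are illustrative assumptions, not values taken from this repo:

```python
import numpy as np

def make_schedule(num_timesteps: int = 1000, beta_start: float = 1e-4, beta_end: float = 2e-2):
    # Linear beta schedule; alphas_cumprod is the running product of (1 - beta_t).
    betas = np.linspace(beta_start, beta_end, num_timesteps)
    alphas_cumprod = np.cumprod(1.0 - betas)
    return np.sqrt(alphas_cumprod), np.sqrt(1.0 - alphas_cumprod)

def add_noise(x_start: np.ndarray, t: np.ndarray, sqrt_ac: np.ndarray, sqrt_omac: np.ndarray, rng=None):
    # x_start: (B, ...), t: (B,) integer timesteps.
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.standard_normal(x_start.shape)
    # Broadcast the per-sample scalars over the trailing dims, as the
    # .view(B, 1, 1, ...) calls do in the code above.
    shape = (x_start.shape[0],) + (1,) * (x_start.ndim - 1)
    x_noisy = sqrt_ac[t].reshape(shape) * x_start + sqrt_omac[t].reshape(shape) * noise
    return x_noisy, noise
```

Note that `sqrt_ac[t]**2 + sqrt_omac[t]**2 == 1` for every `t`, which is what makes the noised sample variance-preserving.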
OPPOMKLab/u-LLaVA | models/segment_anything/automatic_mask_generator.py | [
{
"identifier": "Sam",
"path": "models/segment_anything/modeling/sam.py",
"snippet": "class Sam(nn.Module):\n mask_threshold: float = 0.0\n image_format: str = \"RGB\"\n\n def __init__(\n self,\n image_encoder: ImageEncoderViT,\n prompt_encoder: PromptEncoder,\n mask... | from typing import Any, Dict, List, Optional, Tuple
from torchvision.ops.boxes import batched_nms, box_area # type: ignore
from .modeling import Sam
from .predictor import SamPredictor
from .utils.amg import (MaskData, area_from_rle, batch_iterator,
batched_mask_to_box, box_xyxy_to_xywh,
build_all_layer_point_grids, calculate_stability_score,
coco_encode_rle, generate_crop_boxes,
is_box_near_crop_edge, mask_to_rle_pytorch,
remove_small_regions, rle_to_mask, uncrop_boxes_xyxy,
uncrop_masks, uncrop_points)
from pycocotools import \
mask as mask_utils # type: ignore # noqa: F401
import numpy as np
import torch
import cv2 # type: ignore # noqa: F401 | 10,538 | self.stability_score_thresh = stability_score_thresh
self.stability_score_offset = stability_score_offset
self.box_nms_thresh = box_nms_thresh
self.crop_n_layers = crop_n_layers
self.crop_nms_thresh = crop_nms_thresh
self.crop_overlap_ratio = crop_overlap_ratio
self.crop_n_points_downscale_factor = crop_n_points_downscale_factor
self.min_mask_region_area = min_mask_region_area
self.output_mode = output_mode
@torch.no_grad()
def generate(self, image: np.ndarray) -> List[Dict[str, Any]]:
"""
Generates masks for the given image.
Arguments:
image (np.ndarray): The image to generate masks for, in HWC uint8 format.
Returns:
list(dict(str, any)): A list over records for masks. Each record is
a dict containing the following keys:
segmentation (dict(str, any) or np.ndarray): The mask. If
output_mode='binary_mask', is an array of shape HW. Otherwise,
is a dictionary containing the RLE.
bbox (list(float)): The box around the mask, in XYWH format.
area (int): The area in pixels of the mask.
predicted_iou (float): The model's own prediction of the mask's
quality. This is filtered by the pred_iou_thresh parameter.
point_coords (list(list(float))): The point coordinates input
to the model to generate this mask.
stability_score (float): A measure of the mask's quality. This
is filtered on using the stability_score_thresh parameter.
crop_box (list(float)): The crop of the image used to generate
the mask, given in XYWH format.
"""
# Generate masks
mask_data = self._generate_masks(image)
# Filter small disconnected regions and holes in masks
if self.min_mask_region_area > 0:
mask_data = self.postprocess_small_regions(
mask_data,
self.min_mask_region_area,
max(self.box_nms_thresh, self.crop_nms_thresh),
)
# Encode masks
if self.output_mode == "coco_rle":
mask_data["segmentations"] = [
coco_encode_rle(rle) for rle in mask_data["rles"]
]
elif self.output_mode == "binary_mask":
mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]]
else:
mask_data["segmentations"] = mask_data["rles"]
# Write mask records
curr_anns = []
for idx in range(len(mask_data["segmentations"])):
ann = {
"segmentation": mask_data["segmentations"][idx],
"area": area_from_rle(mask_data["rles"][idx]),
"bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(),
"predicted_iou": mask_data["iou_preds"][idx].item(),
"point_coords": [mask_data["points"][idx].tolist()],
"stability_score": mask_data["stability_score"][idx].item(),
"crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(),
}
curr_anns.append(ann)
return curr_anns
def _generate_masks(self, image: np.ndarray) -> MaskData:
orig_size = image.shape[:2]
crop_boxes, layer_idxs = generate_crop_boxes(
orig_size, self.crop_n_layers, self.crop_overlap_ratio
)
# Iterate over image crops
data = MaskData()
for crop_box, layer_idx in zip(crop_boxes, layer_idxs):
crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
data.cat(crop_data)
# Remove duplicate masks between crops
if len(crop_boxes) > 1:
# Prefer masks from smaller crops
scores = 1 / box_area(data["crop_boxes"])
scores = scores.to(data["boxes"].device)
keep_by_nms = batched_nms(
data["boxes"].float(),
scores,
torch.zeros_like(data["boxes"][:, 0]), # categories
iou_threshold=self.crop_nms_thresh,
)
data.filter(keep_by_nms)
data.to_numpy()
return data
def _process_crop(
self,
image: np.ndarray,
crop_box: List[int],
crop_layer_idx: int,
orig_size: Tuple[int, ...],
) -> MaskData:
# Crop the image and calculate embeddings
x0, y0, x1, y1 = crop_box
cropped_im = image[y0:y1, x0:x1, :]
cropped_im_size = cropped_im.shape[:2]
self.predictor.set_image(cropped_im)
# Get points for this crop
points_scale = np.array(cropped_im_size)[None, ::-1]
points_for_image = self.point_grids[crop_layer_idx] * points_scale
# Generate masks for this crop in batches
data = MaskData()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
class SamAutomaticMaskGenerator:
def __init__(
self,
model: Sam,
points_per_side: Optional[int] = 32,
points_per_batch: int = 64,
pred_iou_thresh: float = 0.88,
stability_score_thresh: float = 0.95,
stability_score_offset: float = 1.0,
box_nms_thresh: float = 0.7,
crop_n_layers: int = 0,
crop_nms_thresh: float = 0.7,
crop_overlap_ratio: float = 512 / 1500,
crop_n_points_downscale_factor: int = 1,
point_grids: Optional[List[np.ndarray]] = None,
min_mask_region_area: int = 0,
output_mode: str = "binary_mask",
) -> None:
"""
Using a SAM model, generates masks for the entire image.
Generates a grid of point prompts over the image, then filters
low quality and duplicate masks. The default settings are chosen
for SAM with a ViT-H backbone.
Arguments:
model (Sam): The SAM model to use for mask prediction.
points_per_side (int or None): The number of points to be sampled
along one side of the image. The total number of points is
points_per_side**2. If None, 'point_grids' must provide explicit
point sampling.
points_per_batch (int): Sets the number of points run simultaneously
by the model. Higher numbers may be faster but use more GPU memory.
pred_iou_thresh (float): A filtering threshold in [0,1], using the
model's predicted mask quality.
stability_score_thresh (float): A filtering threshold in [0,1], using
the stability of the mask under changes to the cutoff used to binarize
the model's mask predictions.
stability_score_offset (float): The amount to shift the cutoff when
            calculating the stability score.
box_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks.
crop_n_layers (int): If >0, mask prediction will be run again on
crops of the image. Sets the number of layers to run, where each
layer has 2**i_layer number of image crops.
crop_nms_thresh (float): The box IoU cutoff used by non-maximal
suppression to filter duplicate masks between different crops.
crop_overlap_ratio (float): Sets the degree to which crops overlap.
In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
crop_n_points_downscale_factor (int): The number of points-per-side
sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
point_grids (list(np.ndarray) or None): A list over explicit grids
of points used for sampling, normalized to [0,1]. The nth grid in the
list is used in the nth crop layer. Exclusive with points_per_side.
min_mask_region_area (int): If >0, postprocessing will be applied
to remove disconnected regions and holes in masks with area smaller
than min_mask_region_area. Requires opencv.
output_mode (str): The form masks are returned in. Can be 'binary_mask',
'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools.
For large resolutions, 'binary_mask' may consume large amounts of
memory.
"""
assert (points_per_side is None) != (
point_grids is None
), "Exactly one of points_per_side or point_grid must be provided."
if points_per_side is not None:
self.point_grids = build_all_layer_point_grids(
points_per_side,
crop_n_layers,
crop_n_points_downscale_factor,
)
elif point_grids is not None:
self.point_grids = point_grids
else:
raise ValueError("Can't have both points_per_side and point_grid be None.")
assert output_mode in [
"binary_mask",
"uncompressed_rle",
"coco_rle",
], f"Unknown output_mode {output_mode}."
        if output_mode == "coco_rle":
            try:
                from pycocotools import mask as mask_utils  # type: ignore # noqa: F401
            except ImportError as e:
                print("Please install pycocotools")
                raise e
        if min_mask_region_area > 0:
            import cv2  # type: ignore # noqa: F401
        self.predictor = SamPredictor(model)
self.points_per_batch = points_per_batch
self.pred_iou_thresh = pred_iou_thresh
self.stability_score_thresh = stability_score_thresh
self.stability_score_offset = stability_score_offset
self.box_nms_thresh = box_nms_thresh
self.crop_n_layers = crop_n_layers
self.crop_nms_thresh = crop_nms_thresh
self.crop_overlap_ratio = crop_overlap_ratio
self.crop_n_points_downscale_factor = crop_n_points_downscale_factor
self.min_mask_region_area = min_mask_region_area
self.output_mode = output_mode
@torch.no_grad()
def generate(self, image: np.ndarray) -> List[Dict[str, Any]]:
"""
Generates masks for the given image.
Arguments:
image (np.ndarray): The image to generate masks for, in HWC uint8 format.
Returns:
list(dict(str, any)): A list over records for masks. Each record is
a dict containing the following keys:
segmentation (dict(str, any) or np.ndarray): The mask. If
output_mode='binary_mask', is an array of shape HW. Otherwise,
is a dictionary containing the RLE.
bbox (list(float)): The box around the mask, in XYWH format.
area (int): The area in pixels of the mask.
predicted_iou (float): The model's own prediction of the mask's
quality. This is filtered by the pred_iou_thresh parameter.
point_coords (list(list(float))): The point coordinates input
to the model to generate this mask.
stability_score (float): A measure of the mask's quality. This
is filtered on using the stability_score_thresh parameter.
crop_box (list(float)): The crop of the image used to generate
the mask, given in XYWH format.
"""
# Generate masks
mask_data = self._generate_masks(image)
# Filter small disconnected regions and holes in masks
if self.min_mask_region_area > 0:
mask_data = self.postprocess_small_regions(
mask_data,
self.min_mask_region_area,
max(self.box_nms_thresh, self.crop_nms_thresh),
)
# Encode masks
if self.output_mode == "coco_rle":
mask_data["segmentations"] = [
coco_encode_rle(rle) for rle in mask_data["rles"]
]
elif self.output_mode == "binary_mask":
mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]]
else:
mask_data["segmentations"] = mask_data["rles"]
# Write mask records
curr_anns = []
for idx in range(len(mask_data["segmentations"])):
ann = {
"segmentation": mask_data["segmentations"][idx],
"area": area_from_rle(mask_data["rles"][idx]),
"bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(),
"predicted_iou": mask_data["iou_preds"][idx].item(),
"point_coords": [mask_data["points"][idx].tolist()],
"stability_score": mask_data["stability_score"][idx].item(),
"crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(),
}
curr_anns.append(ann)
return curr_anns
def _generate_masks(self, image: np.ndarray) -> MaskData:
orig_size = image.shape[:2]
crop_boxes, layer_idxs = generate_crop_boxes(
orig_size, self.crop_n_layers, self.crop_overlap_ratio
)
# Iterate over image crops
data = MaskData()
for crop_box, layer_idx in zip(crop_boxes, layer_idxs):
crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
data.cat(crop_data)
# Remove duplicate masks between crops
if len(crop_boxes) > 1:
# Prefer masks from smaller crops
scores = 1 / box_area(data["crop_boxes"])
scores = scores.to(data["boxes"].device)
keep_by_nms = batched_nms(
data["boxes"].float(),
scores,
torch.zeros_like(data["boxes"][:, 0]), # categories
iou_threshold=self.crop_nms_thresh,
)
data.filter(keep_by_nms)
data.to_numpy()
return data
def _process_crop(
self,
image: np.ndarray,
crop_box: List[int],
crop_layer_idx: int,
orig_size: Tuple[int, ...],
) -> MaskData:
# Crop the image and calculate embeddings
x0, y0, x1, y1 = crop_box
cropped_im = image[y0:y1, x0:x1, :]
cropped_im_size = cropped_im.shape[:2]
self.predictor.set_image(cropped_im)
# Get points for this crop
points_scale = np.array(cropped_im_size)[None, ::-1]
points_for_image = self.point_grids[crop_layer_idx] * points_scale
# Generate masks for this crop in batches
data = MaskData() | for (points,) in batch_iterator(self.points_per_batch, points_for_image): | 4 | 2023-12-21 08:10:23+00:00 | 12k |
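The row's target line iterates `batch_iterator(self.points_per_batch, points_for_image)`. A sketch of that helper's contract — chunked, aligned slices of each argument — consistent with the `utils.amg` import above, though the upstream implementation may differ in detail:

```python
from typing import Iterator, List

def batch_iterator(batch_size: int, *args) -> Iterator[List]:
    # Yield aligned slices of every input sequence, batch_size items at a time,
    # so point prompts can be fed to the predictor in fixed-size chunks.
    assert len(args) > 0 and all(len(a) == len(args[0]) for a in args), \
        "Batched iteration requires all inputs to have the same length."
    n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0)
    for b in range(n_batches):
        yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args]
```

The `for (points,) in batch_iterator(...)` unpacking in the target line then receives one chunk of `points_for_image` per iteration.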
chinhsuanwu/ifusion | ldm/models/diffusion/ddpm.py | [
{
"identifier": "log_txt_as_img",
"path": "ldm/util.py",
"snippet": "def log_txt_as_img(wh, xc, size=10):\n # wh a tuple of (width, height)\n # xc a list of captions to plot\n b = len(xc)\n txts = list()\n for bi in range(b):\n txt = Image.new(\"RGB\", wh, color=\"white\")\n ... | import torch
import torch.nn as nn
import numpy as np
import pytorch_lightning as pl
import itertools
from torch.optim.lr_scheduler import LambdaLR
from einops import rearrange, repeat
from contextlib import contextmanager, nullcontext
from functools import partial
from tqdm import tqdm
from torchvision.utils import make_grid
from pytorch_lightning.utilities import rank_zero_only
from omegaconf import ListConfig
from ldm.util import (
log_txt_as_img,
exists,
default,
ismap,
isimage,
mean_flat,
count_params,
instantiate_from_config,
)
from ldm.modules.ema import LitEma
from ldm.modules.distributions.distributions import (
normal_kl,
DiagonalGaussianDistribution,
)
from ldm.models.autoencoder import (
VQModelInterface,
IdentityFirstStage,
AutoencoderKL,
)
from ldm.modules.diffusionmodules.util import (
make_beta_schedule,
extract_into_tensor,
noise_like,
)
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.modules.attention import CrossAttention | 10,369 | """
wild mixture of
https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
https://github.com/CompVis/taming-transformers
-- merci
"""
__conditioning_keys__ = {"concat": "c_concat", "crossattn": "c_crossattn", "adm": "y"}
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def uniform_on_device(r1, r2, shape, device):
return (r1 - r2) * torch.rand(*shape, device=device) + r2
class DDPM(pl.LightningModule):
# classic DDPM with Gaussian diffusion, in image space
def __init__(
self,
unet_config,
timesteps=1000,
beta_schedule="linear",
loss_type="l2",
ckpt_path=None,
ignore_keys=[],
load_only_unet=False,
monitor="val/loss",
use_ema=True,
first_stage_key="image_target",
image_size=256,
channels=3,
log_every_t=100,
clip_denoised=True,
linear_start=1e-4,
linear_end=2e-2,
cosine_s=8e-3,
given_betas=None,
original_elbo_weight=0.0,
v_posterior=0.0, # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
l_simple_weight=1.0,
conditioning_key=None,
parameterization="eps", # all assuming fixed variance schedules
scheduler_config=None,
use_positional_encodings=False,
learn_logvar=False,
logvar_init=0.0,
make_it_fit=False,
ucg_training=None,
):
super().__init__()
assert parameterization in [
"eps",
"x0",
], 'currently only supporting "eps" and "x0"'
self.parameterization = parameterization
print(
f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode"
)
self.cond_stage_model = None
self.clip_denoised = clip_denoised
self.log_every_t = log_every_t
self.first_stage_key = first_stage_key
self.image_size = image_size # try conv?
self.channels = channels
self.use_positional_encodings = use_positional_encodings
self.model = DiffusionWrapper(unet_config, conditioning_key)
count_params(self.model, verbose=True)
self.use_ema = use_ema
if self.use_ema:
| """
wild mixture of
https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
https://github.com/CompVis/taming-transformers
-- merci
"""
__conditioning_keys__ = {"concat": "c_concat", "crossattn": "c_crossattn", "adm": "y"}
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def uniform_on_device(r1, r2, shape, device):
return (r1 - r2) * torch.rand(*shape, device=device) + r2
class DDPM(pl.LightningModule):
# classic DDPM with Gaussian diffusion, in image space
def __init__(
self,
unet_config,
timesteps=1000,
beta_schedule="linear",
loss_type="l2",
ckpt_path=None,
ignore_keys=[],
load_only_unet=False,
monitor="val/loss",
use_ema=True,
first_stage_key="image_target",
image_size=256,
channels=3,
log_every_t=100,
clip_denoised=True,
linear_start=1e-4,
linear_end=2e-2,
cosine_s=8e-3,
given_betas=None,
original_elbo_weight=0.0,
v_posterior=0.0, # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
l_simple_weight=1.0,
conditioning_key=None,
parameterization="eps", # all assuming fixed variance schedules
scheduler_config=None,
use_positional_encodings=False,
learn_logvar=False,
logvar_init=0.0,
make_it_fit=False,
ucg_training=None,
):
super().__init__()
assert parameterization in [
"eps",
"x0",
], 'currently only supporting "eps" and "x0"'
self.parameterization = parameterization
print(
f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode"
)
self.cond_stage_model = None
self.clip_denoised = clip_denoised
self.log_every_t = log_every_t
self.first_stage_key = first_stage_key
self.image_size = image_size # try conv?
self.channels = channels
self.use_positional_encodings = use_positional_encodings
self.model = DiffusionWrapper(unet_config, conditioning_key)
count_params(self.model, verbose=True)
self.use_ema = use_ema
if self.use_ema: | self.model_ema = LitEma(self.model) | 8 | 2023-12-17 12:45:38+00:00 | 12k |
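`use_ema` gates `self.model_ema = LitEma(self.model)` on the row's target line. Per parameter tensor, `LitEma` maintains the update `shadow <- decay * shadow + (1 - decay) * param`; a minimal stand-in over plain floats (`SimpleEma` is illustrative, not the real `LitEma` API):

```python
class SimpleEma:
    """Exponential moving average over a dict of scalar 'parameters'."""

    def __init__(self, params: dict, decay: float = 0.9999):
        self.decay = decay
        self.shadow = dict(params)  # smoothed copy, updated out-of-band

    def update(self, params: dict) -> None:
        d = self.decay
        for name, value in params.items():
            # shadow <- decay * shadow + (1 - decay) * value
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * value
```

At sampling time the shadow weights are swapped in (compare the `ema_scope` context-manager pattern used by `LitEma`-based models) and restored afterwards.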
wangzhecheng/SkyScript | src/training/main.py | [
{
"identifier": "create_model_and_transforms",
"path": "src/open_clip/factory.py",
"snippet": "def create_model_and_transforms(\n model_name: str,\n pretrained: Optional[str] = None,\n precision: str = 'fp32',\n device: Union[str, torch.device] = 'cpu',\n jit: bool = F... | import glob
import logging
import os
import re
import subprocess
import sys
import random
import numpy as np
import torch
import wandb
import torch.utils.tensorboard as tensorboard
import horovod.torch as hvd
import bitsandbytes as bnb
from datetime import datetime
from torch import optim
from torch.cuda.amp import GradScaler
from src.open_clip.factory import create_model_and_transforms, get_tokenizer, create_loss
from src.open_clip.model import trace_model
from src.training.data import get_data
from src.training.distributed import is_master, init_distributed_device, broadcast_object
from src.training.logger import setup_logging
from src.training.params import parse_args
from src.training.scheduler import cosine_lr, const_lr, const_lr_cooldown
from src.training.train import train_one_epoch, evaluate
from src.training.file_utils import pt_load, check_exists, start_sync_process, remote_sync
from open_clip.utils import replace_linear
from open_clip.utils import convert_int8_model_to_inference_mode
from shutil import copytree, ignore_patterns | 10,346 | """
Adapted from https://github.com/mlfoundations/open_clip. Copyright (c) 2012-2021 Gabriel Ilharco, Mitchell Wortsman, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt
"""
try:
    import wandb
except ImportError:
    wandb = None
try:
    import torch.utils.tensorboard as tensorboard
except ImportError:
    tensorboard = None
try:
    import horovod.torch as hvd
except ImportError:
    hvd = None
LATEST_CHECKPOINT_NAME = "epoch_latest.pt"
def random_seed(seed=42, rank=0):
torch.manual_seed(seed + rank)
np.random.seed(seed + rank)
random.seed(seed + rank)
def natural_key(string_):
"""See http://www.codinghorror.com/blog/archives/001018.html"""
return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())]
def get_latest_checkpoint(path: str, remote : bool):
    # as written, this glob recurses, so can pick up checkpoints across multiple sub-folders
if remote:
result = subprocess.run(["aws", "s3", "ls", path + "/"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result)
if result.returncode == 1:
return None
checkpoints = [os.path.join(path, x.split(' ')[-1]) for x in result.stdout.decode().split('\n')[:-1]]
else:
checkpoints = glob.glob(path + '**/*.pt', recursive=True)
if checkpoints:
checkpoints = sorted(checkpoints, key=natural_key)
return checkpoints[-1]
return None
def main(args):
args = parse_args(args)
if torch.cuda.is_available():
# This enables tf32 on Ampere GPUs which is only 8% slower than
# float16 and almost as accurate as float32
# This was a default in pytorch until 1.12
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
# fully initialize distributed device environment
device = init_distributed_device(args)
# get the name of the experiments
if args.name is None:
# sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule?
model_name_safe = args.model.replace('/', '-')
date_str = datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
if args.distributed:
# sync date_str from master to all ranks
date_str = broadcast_object(args, date_str)
args.name = '-'.join([
date_str,
f"model_{model_name_safe}",
f"lr_{args.lr}",
f"b_{args.batch_size}",
f"j_{args.workers}",
f"p_{args.precision}",
])
resume_latest = args.resume == 'latest'
log_base_path = os.path.join(args.logs, args.name)
args.log_path = None
if is_master(args, local=args.log_local):
os.makedirs(log_base_path, exist_ok=True)
log_filename = f'out-{args.rank}' if args.log_local else 'out.log'
args.log_path = os.path.join(log_base_path, log_filename)
if os.path.exists(args.log_path) and not resume_latest:
print(
                "Error. Experiment already exists. Use --name to specify a new experiment."
)
return -1
# Setup text logger
args.log_level = logging.DEBUG if args.debug else logging.INFO
setup_logging(args.log_path, args.log_level)
# Setup wandb, tensorboard, checkpoint logging
args.wandb = 'wandb' in args.report_to or 'all' in args.report_to
args.tensorboard = 'tensorboard' in args.report_to or 'all' in args.report_to
args.checkpoint_path = os.path.join(log_base_path, "checkpoints")
if is_master(args):
args.tensorboard_path = os.path.join(log_base_path, "tensorboard") if args.tensorboard else ''
for dirname in [args.tensorboard_path, args.checkpoint_path]:
if dirname:
os.makedirs(dirname, exist_ok=True)
else:
args.tensorboard_path = ''
if resume_latest:
resume_from = None
checkpoint_path = args.checkpoint_path
# If using remote_sync, need to check the remote instead of the local checkpoints folder.
| """
Adapted from https://github.com/mlfoundations/open_clip. Copyright (c) 2012-2021 Gabriel Ilharco, Mitchell Wortsman, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt
"""
try:
    import wandb
except ImportError:
    wandb = None
try:
    import torch.utils.tensorboard as tensorboard
except ImportError:
    tensorboard = None
try:
    import horovod.torch as hvd
except ImportError:
    hvd = None
LATEST_CHECKPOINT_NAME = "epoch_latest.pt"
def random_seed(seed=42, rank=0):
torch.manual_seed(seed + rank)
np.random.seed(seed + rank)
random.seed(seed + rank)
def natural_key(string_):
"""See http://www.codinghorror.com/blog/archives/001018.html"""
return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())]
def get_latest_checkpoint(path: str, remote : bool):
    # as written, this glob recurses, so can pick up checkpoints across multiple sub-folders
if remote:
result = subprocess.run(["aws", "s3", "ls", path + "/"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result)
if result.returncode == 1:
return None
checkpoints = [os.path.join(path, x.split(' ')[-1]) for x in result.stdout.decode().split('\n')[:-1]]
else:
checkpoints = glob.glob(path + '**/*.pt', recursive=True)
if checkpoints:
checkpoints = sorted(checkpoints, key=natural_key)
return checkpoints[-1]
return None
def main(args):
args = parse_args(args)
if torch.cuda.is_available():
# This enables tf32 on Ampere GPUs which is only 8% slower than
# float16 and almost as accurate as float32
# This was a default in pytorch until 1.12
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
# fully initialize distributed device environment
device = init_distributed_device(args)
# get the name of the experiments
if args.name is None:
# sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule?
model_name_safe = args.model.replace('/', '-')
date_str = datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
if args.distributed:
# sync date_str from master to all ranks
date_str = broadcast_object(args, date_str)
args.name = '-'.join([
date_str,
f"model_{model_name_safe}",
f"lr_{args.lr}",
f"b_{args.batch_size}",
f"j_{args.workers}",
f"p_{args.precision}",
])
resume_latest = args.resume == 'latest'
log_base_path = os.path.join(args.logs, args.name)
args.log_path = None
if is_master(args, local=args.log_local):
os.makedirs(log_base_path, exist_ok=True)
log_filename = f'out-{args.rank}' if args.log_local else 'out.log'
args.log_path = os.path.join(log_base_path, log_filename)
if os.path.exists(args.log_path) and not resume_latest:
print(
                "Error. Experiment already exists. Use --name to specify a new experiment."
)
return -1
# Setup text logger
args.log_level = logging.DEBUG if args.debug else logging.INFO
setup_logging(args.log_path, args.log_level)
# Setup wandb, tensorboard, checkpoint logging
args.wandb = 'wandb' in args.report_to or 'all' in args.report_to
args.tensorboard = 'tensorboard' in args.report_to or 'all' in args.report_to
args.checkpoint_path = os.path.join(log_base_path, "checkpoints")
if is_master(args):
args.tensorboard_path = os.path.join(log_base_path, "tensorboard") if args.tensorboard else ''
for dirname in [args.tensorboard_path, args.checkpoint_path]:
if dirname:
os.makedirs(dirname, exist_ok=True)
else:
args.tensorboard_path = ''
if resume_latest:
resume_from = None
checkpoint_path = args.checkpoint_path
# If using remote_sync, need to check the remote instead of the local checkpoints folder. | if args.remote_sync is not None: | 18 | 2023-12-19 11:50:56+00:00 | 12k |
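The `resume == 'latest'` path above works only because `get_latest_checkpoint` sorts with `natural_key`, so numeric suffixes compare as integers rather than strings; a quick standalone check of that behavior:

```python
import re

def natural_key(string_: str):
    # Digit runs become ints, so "epoch_10.pt" sorts after "epoch_9.pt".
    return [int(s) if s.isdigit() else s for s in re.split(r"(\d+)", string_.lower())]

checkpoints = ["epoch_10.pt", "epoch_2.pt", "epoch_9.pt"]
checkpoints = sorted(checkpoints, key=natural_key)
latest = checkpoints[-1]
```

A plain lexicographic sort would return `epoch_9.pt` here and silently resume from the wrong file.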
Lavreniuk/EVP | depth/models_depth/model.py | [
{
"identifier": "UNetWrapper",
"path": "evp/models.py",
"snippet": "class UNetWrapper(nn.Module):\n def __init__(self, unet, use_attn=True, base_size=512, max_attn_size=None, attn_selector='up_cross+down_cross') -> None:\n super().__init__()\n self.unet = unet\n self.attention_st... | import torch
import torch.nn as nn
import torch.nn.functional as F
import os
from timm.models.layers import trunc_normal_, DropPath
from mmcv.cnn import (build_conv_layer, build_norm_layer, build_upsample_layer,
constant_init, normal_init)
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config
from evp.models import UNetWrapper, TextAdapterRefer, FrozenCLIPEmbedder
from .miniViT import mViT
from .attractor import AttractorLayer, AttractorLayerUnnormed
from .dist_layers import ConditionalLogBinomial
from .localbins_layers import (Projector, SeedBinRegressor, SeedBinRegressorUnnormed) | 7,438 | x = x * channel_attention
# Apply convolutional layers
x = self.conv1(x)
x = self.group_norm(x)
x = self.relu(x)
x = self.conv2(x)
x = self.group_norm(x)
x = self.relu(x)
# Upsample
x = self.upscale(x)
return x
class ConvLayer(nn.Module):
def __init__(self, in_channels, out_channels):
super(ConvLayer, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 1),
nn.GroupNorm(20, out_channels),
nn.ReLU(),
)
def forward(self, x):
x = self.conv1(x)
return x
class InverseMultiAttentiveFeatureRefinement(nn.Module):
def __init__(self, in_channels_list):
super(InverseMultiAttentiveFeatureRefinement, self).__init__()
self.layer1 = AttentionModule(in_channels_list[0], in_channels_list[0])
self.layer2 = AttentionDownsamplingModule(in_channels_list[0], in_channels_list[0]//2, scale_factor = 2)
| # ------------------------------------------------------------------------------
# Copyright (c) Microsoft
# Licensed under the MIT License.
# The deconvolution code is based on Simple Baseline.
# (https://github.com/microsoft/human-pose-estimation.pytorch/blob/master/lib/models/pose_resnet.py)
# Modified by Zigang Geng (zigang@mail.ustc.edu.cn).
# ------------------------------------------------------------------------------
def icnr(x, scale=2, init=nn.init.kaiming_normal_):
"""
Checkerboard artifact free sub-pixel convolution
https://arxiv.org/abs/1707.02937
"""
ni,nf,h,w = x.shape
ni2 = int(ni/(scale**2))
k = init(torch.zeros([ni2,nf,h,w])).transpose(0, 1)
k = k.contiguous().view(ni2, nf, -1)
k = k.repeat(1, 1, scale**2)
k = k.contiguous().view([nf,ni,h,w]).transpose(0, 1)
x.data.copy_(k)
class PixelShuffle(nn.Module):
"""
Real-Time Single Image and Video Super-Resolution
https://arxiv.org/abs/1609.05158
"""
def __init__(self, n_channels, scale):
super(PixelShuffle, self).__init__()
self.conv = nn.Conv2d(n_channels, n_channels*(scale**2), kernel_size=1)
icnr(self.conv.weight)
self.shuf = nn.PixelShuffle(scale)
self.relu = nn.ReLU()
def forward(self,x):
x = self.shuf(self.relu(self.conv(x)))
return x
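The ICNR trick above can be illustrated without any tensors: each base kernel is duplicated for all `scale**2` sub-pixel output channels, so a freshly initialised `PixelShuffle` behaves like nearest-neighbour upsampling and produces no checkerboard pattern. Below is a minimal pure-Python sketch of that duplication idea (illustrative only; it is not the exact transpose/reshape plumbing used by `icnr`):

```python
def icnr_groups(base_kernels, scale=2):
    # Duplicate every base kernel scale**2 times, one copy per sub-pixel
    # output channel; all siblings therefore start with identical weights,
    # which is what suppresses the checkerboard artifact at initialisation.
    weights = []
    for kernel in base_kernels:
        weights.extend([list(kernel) for _ in range(scale ** 2)])
    return weights
```

After training starts, the sibling kernels diverge; ICNR only constrains the initial state.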
class AttentionModule(nn.Module):
def __init__(self, in_channels, out_channels):
super(AttentionModule, self).__init__()
# Convolutional Layers
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
# Group Normalization
self.group_norm = nn.GroupNorm(20, out_channels)
# ReLU Activation
self.relu = nn.ReLU()
# Spatial Attention
self.spatial_attention = nn.Sequential(
nn.Conv2d(in_channels, 1, kernel_size=1),
nn.Sigmoid()
)
def forward(self, x):
# Apply spatial attention
spatial_attention = self.spatial_attention(x)
x = x * spatial_attention
# Apply convolutional layer
x = self.conv1(x)
x = self.group_norm(x)
x = self.relu(x)
return x
class AttentionDownsamplingModule(nn.Module):
def __init__(self, in_channels, out_channels, scale_factor=2):
super(AttentionDownsamplingModule, self).__init__()
# Spatial Attention
self.spatial_attention = nn.Sequential(
nn.Conv2d(in_channels, 1, kernel_size=1),
nn.Sigmoid()
)
# Channel Attention
self.channel_attention = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
nn.Conv2d(in_channels, in_channels // 8, kernel_size=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels // 8, in_channels, kernel_size=1),
nn.Sigmoid()
)
# Convolutional Layers
        if scale_factor == 2:
            self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
        elif scale_factor == 4:
            self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1)
        # conv2 must exist for both scale factors: forward() always calls it,
        # and it supplies the stride-2 downsampling step.
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1)
# Group Normalization
self.group_norm = nn.GroupNorm(20, out_channels)
# ReLU Activation
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
# Apply spatial attention
spatial_attention = self.spatial_attention(x)
x = x * spatial_attention
# Apply channel attention
channel_attention = self.channel_attention(x)
x = x * channel_attention
# Apply convolutional layers
x = self.conv1(x)
x = self.group_norm(x)
x = self.relu(x)
x = self.conv2(x)
x = self.group_norm(x)
x = self.relu(x)
return x
class AttentionUpsamplingModule(nn.Module):
def __init__(self, in_channels, out_channels):
super(AttentionUpsamplingModule, self).__init__()
# Spatial Attention for outs[2]
self.spatial_attention = nn.Sequential(
nn.Conv2d(in_channels, 1, kernel_size=1),
nn.Sigmoid()
)
# Channel Attention for outs[2]
self.channel_attention = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
nn.Conv2d(in_channels, in_channels // 8, kernel_size=1),
nn.ReLU(),
nn.Conv2d(in_channels // 8, in_channels, kernel_size=1),
nn.Sigmoid()
)
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
# Group Normalization
self.group_norm = nn.GroupNorm(20, out_channels)
# ReLU Activation
self.relu = nn.ReLU()
self.upscale = PixelShuffle(in_channels, 2)
def forward(self, x):
# Apply spatial attention
spatial_attention = self.spatial_attention(x)
x = x * spatial_attention
# Apply channel attention
channel_attention = self.channel_attention(x)
x = x * channel_attention
# Apply convolutional layers
x = self.conv1(x)
x = self.group_norm(x)
x = self.relu(x)
x = self.conv2(x)
x = self.group_norm(x)
x = self.relu(x)
# Upsample
x = self.upscale(x)
return x
class ConvLayer(nn.Module):
def __init__(self, in_channels, out_channels):
super(ConvLayer, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 1),
nn.GroupNorm(20, out_channels),
nn.ReLU(),
)
def forward(self, x):
x = self.conv1(x)
return x
class InverseMultiAttentiveFeatureRefinement(nn.Module):
def __init__(self, in_channels_list):
super(InverseMultiAttentiveFeatureRefinement, self).__init__()
self.layer1 = AttentionModule(in_channels_list[0], in_channels_list[0])
self.layer2 = AttentionDownsamplingModule(in_channels_list[0], in_channels_list[0]//2, scale_factor = 2)
self.layer3 = ConvLayer(in_channels_list[0]//2 + in_channels_list[1], in_channels_list[1])
self.layer4 = AttentionDownsamplingModule(in_channels_list[1], in_channels_list[1]//2, scale_factor = 2)
self.layer5 = ConvLayer(in_channels_list[1]//2 + in_channels_list[2], in_channels_list[2])
self.layer6 = AttentionDownsamplingModule(in_channels_list[2], in_channels_list[2]//2, scale_factor = 2)
self.layer7 = ConvLayer(in_channels_list[2]//2 + in_channels_list[3], in_channels_list[3])
'''
self.layer8 = AttentionUpsamplingModule(in_channels_list[3], in_channels_list[3])
self.layer9 = ConvLayer(in_channels_list[2] + in_channels_list[3], in_channels_list[2])
self.layer10 = AttentionUpsamplingModule(in_channels_list[2], in_channels_list[2])
self.layer11 = ConvLayer(in_channels_list[1] + in_channels_list[2], in_channels_list[1])
self.layer12 = AttentionUpsamplingModule(in_channels_list[1], in_channels_list[1])
self.layer13 = ConvLayer(in_channels_list[0] + in_channels_list[1], in_channels_list[0])
'''
def forward(self, inputs):
x_c4, x_c3, x_c2, x_c1 = inputs
x_c4 = self.layer1(x_c4)
x_c4_3 = self.layer2(x_c4)
x_c3 = torch.cat([x_c4_3, x_c3], dim=1)
x_c3 = self.layer3(x_c3)
x_c3_2 = self.layer4(x_c3)
x_c2 = torch.cat([x_c3_2, x_c2], dim=1)
x_c2 = self.layer5(x_c2)
x_c2_1 = self.layer6(x_c2)
x_c1 = torch.cat([x_c2_1, x_c1], dim=1)
x_c1 = self.layer7(x_c1)
'''
x_c1_2 = self.layer8(x_c1)
x_c2 = torch.cat([x_c1_2, x_c2], dim=1)
x_c2 = self.layer9(x_c2)
x_c2_3 = self.layer10(x_c2)
x_c3 = torch.cat([x_c2_3, x_c3], dim=1)
x_c3 = self.layer11(x_c3)
x_c3_4 = self.layer12(x_c3)
x_c4 = torch.cat([x_c3_4, x_c4], dim=1)
x_c4 = self.layer13(x_c4)
'''
return [x_c4, x_c3, x_c2, x_c1]
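The channel bookkeeping in the downward path above is easy to mis-read. With the default `in_channels_list = [320, 680, 1320, 1280]` used by `EVPDepthEncoder`, the concatenation widths feeding `layer3`, `layer5`, and `layer7` can be checked with simple arithmetic (a sketch mirroring the `ConvLayer` input widths in `__init__`):

```python
def refinement_cat_channels(in_channels_list):
    # Each AttentionDownsamplingModule halves the channel count before its
    # output is concatenated with the next finer feature map.
    c = in_channels_list
    cat3 = c[0] // 2 + c[1]   # input width of layer3
    cat2 = c[1] // 2 + c[2]   # input width of layer5
    cat1 = c[2] // 2 + c[3]   # input width of layer7
    return cat3, cat2, cat1
```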
class EVPDepthEncoder(nn.Module):
def __init__(self, out_dim=1024, ldm_prior=[320, 680, 1320+1280], sd_path=None, text_dim=768,
dataset='nyu', caption_aggregation=False
):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(ldm_prior[0], ldm_prior[0], 3, stride=2, padding=1),
nn.GroupNorm(16, ldm_prior[0]),
nn.ReLU(),
nn.Conv2d(ldm_prior[0], ldm_prior[0], 3, stride=2, padding=1),
)
self.layer2 = nn.Sequential(
nn.Conv2d(ldm_prior[1], ldm_prior[1], 3, stride=2, padding=1),
)
self.out_layer = nn.Sequential(
nn.Conv2d(sum(ldm_prior), out_dim, 1),
nn.GroupNorm(16, out_dim),
nn.ReLU(),
)
self.aggregation = InverseMultiAttentiveFeatureRefinement([320, 680, 1320, 1280])
self.apply(self._init_weights)
### stable diffusion layers
config = OmegaConf.load('./v1-inference.yaml')
if sd_path is None:
if os.path.exists('../checkpoints/v1-5-pruned-emaonly.ckpt'):
config.model.params.ckpt_path = '../checkpoints/v1-5-pruned-emaonly.ckpt'
else:
config.model.params.ckpt_path = None
else:
config.model.params.ckpt_path = f'../{sd_path}'
sd_model = instantiate_from_config(config.model)
self.encoder_vq = sd_model.first_stage_model
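The 1x1 `out_layer` above fuses all three prior feature groups; with the default `ldm_prior = [320, 680, 1320 + 1280]` its input width is simply `sum(ldm_prior)`, which the concat in the (unseen) forward pass must match:

```python
def out_layer_in_channels(ldm_prior=(320, 680, 1320 + 1280)):
    # The feature groups are concatenated channel-wise before the 1x1 conv.
    return sum(ldm_prior)
```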
| self.unet = UNetWrapper(sd_model.model, use_attn=True) | 0 | 2023-12-15 14:13:59+00:00 | 12k |
penghao-wu/vstar | LLaVA/llava/model/language_model/mpt/modeling_mpt.py | [
{
"identifier": "attn_bias_shape",
"path": "LLaVA/llava/model/language_model/mpt/attention.py",
"snippet": "def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id):\n if attn_impl == 'flash':\n return None\n elif attn_impl in ['torch', 'triton']:\n ... | import math
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Optional, Tuple, Union
from transformers import PreTrainedModel, PreTrainedTokenizer, PreTrainedTokenizerFast
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
from .attention import attn_bias_shape, build_attn_bias
from .blocks import MPTBlock
from .custom_embedding import SharedEmbedding
from .norm import NORM_CLASS_REGISTRY
from .configuration_mpt import MPTConfig
from .adapt_tokenizer import AutoTokenizerForMOD, adapt_tokenizer_for_denoising
from .hf_prefixlm_converter import add_bidirectional_mask_if_missing, convert_hf_causal_lm_to_prefix_lm
from .meta_init_context import init_empty_weights
from .param_init_fns import MODEL_INIT_REGISTRY, generic_param_init_fn_
from .flash_attn_triton import flash_attn_func | 7,399 | """A simple, flexible implementation of a GPT model.
Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py
"""
try:
    from .flash_attn_triton import flash_attn_func
except:
    pass
Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
class MPTPreTrainedModel(PreTrainedModel):
config_class = MPTConfig
base_model_prefix = 'model'
_no_split_modules = ['MPTBlock']
class MPTModel(MPTPreTrainedModel):
def __init__(self, config: MPTConfig):
config._validate_config()
super().__init__(config)
self.attn_impl = config.attn_config['attn_impl']
self.prefix_lm = config.attn_config['prefix_lm']
self.attn_uses_sequence_id = config.attn_config['attn_uses_sequence_id']
self.alibi = config.attn_config['alibi']
self.alibi_bias_max = config.attn_config['alibi_bias_max']
if config.init_device == 'mixed':
if dist.get_local_rank() == 0:
config.init_device = 'cpu'
else:
config.init_device = 'meta'
if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys():
norm_options = ' | '.join(NORM_CLASS_REGISTRY.keys())
raise NotImplementedError(f'Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options}).')
norm_class = NORM_CLASS_REGISTRY[config.norm_type.lower()]
self.embedding_fraction = config.embedding_fraction
self.wte = SharedEmbedding(config.vocab_size, config.d_model, device=config.init_device)
if not self.alibi:
self.wpe = torch.nn.Embedding(config.max_seq_len, config.d_model, device=config.init_device)
self.emb_drop = nn.Dropout(config.emb_pdrop)
self.blocks = nn.ModuleList([MPTBlock(device=config.init_device, **config.to_dict()) for _ in range(config.n_layers)])
self.norm_f = norm_class(config.d_model, device=config.init_device)
if config.init_device != 'meta':
print(f'You are using config.init_device={config.init_device!r}, but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.')
self.apply(self.param_init_fn)
self.is_causal = not self.prefix_lm
self._attn_bias_initialized = False
self.attn_bias = None
self.attn_bias_shape = attn_bias_shape(self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id)
if config.no_bias:
for module in self.modules():
if hasattr(module, 'bias') and isinstance(module.bias, nn.Parameter):
if config.verbose:
warnings.warn(f'Removing bias ({module.bias}) from {module}.')
module.register_parameter('bias', None)
if config.verbose and config.verbose > 2:
print(self)
if 'verbose' not in self.config.init_config:
self.config.init_config['verbose'] = self.config.verbose
if self.config.init_config['verbose'] > 1:
init_fn_name = self.config.init_config['name']
warnings.warn(f'Using {init_fn_name} initialization.')
self.gradient_checkpointing = False
def get_input_embeddings(self):
return self.wte
def set_input_embeddings(self, value):
self.wte = value
@torch.no_grad()
def _attn_bias(self, device, dtype, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None):
if not self._attn_bias_initialized:
if self.attn_bias_shape:
self.attn_bias = torch.zeros(self.attn_bias_shape, device=device, dtype=dtype)
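The `attn_bias_shape` helper imported above decides whether a bias tensor needs to be materialised at all. A sketch of that dispatch, reconstructed from the snippet shown in the context header (treat the exact shape tuples as an assumption unless you are reading the bundled `attention.py`):

```python
def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id):
    # 'flash' applies causal masking inside the kernel, so no bias is needed.
    if attn_impl == 'flash':
        return None
    elif attn_impl in ['torch', 'triton']:
        if alibi:
            # Prefix-LM, non-causal, or sequence-id masking needs a full
            # seq_len x seq_len bias; plain causal ALiBi needs only one row.
            if (prefix_lm or not causal) or use_sequence_id:
                return (1, n_heads, seq_len, seq_len)
            return (1, n_heads, 1, seq_len)
        elif prefix_lm or use_sequence_id:
            return (1, 1, seq_len, seq_len)
        return None
    else:
        raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.')
```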
|
worm128/AI-YinMei | text-generation-webui/extensions/Training_PRO/script.py | [
{
"identifier": "FPSchedulerTrainer",
"path": "text-generation-webui/extensions/Training_PRO/custom_scheduler.py",
"snippet": "class FPSchedulerTrainer(transformers.Trainer):\n def __init__(self,neftune_noise_alpha:float = 0.0, model = None, *args, **kwargs):\n self.neftune_noise_alpha = neftu... | import os
import json
import math
import random
import shutil
import sys
import threading
import time
import traceback
import gradio as gr
import pandas as pd
import torch
import transformers
import inspect
from datetime import datetime
from pathlib import Path
from functools import partial
from .custom_scheduler import FPSchedulerTrainer, FPNEFtuneTrainer
from .matplotgraph import create_graph
from .train_utils import get_available_loras_local, precise_cut, sliding_block_cut, download_file_from_url
from datasets import Dataset, load_dataset
from peft import (
LoraConfig,
get_peft_model,
prepare_model_for_kbit_training,
set_peft_model_state_dict
)
from peft.utils.other import \
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as model_to_lora_modules
from transformers.models.auto.modeling_auto import (
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
)
from modules import shared, utils
from modules.ui import create_refresh_button
from modules.evaluate import (
calculate_perplexity,
generate_markdown_table,
save_past_evaluations
)
from modules.logging_colors import logger
from modules.models import reload_model
from modules.utils import natural_keys
from typing import Callable, Optional, Tuple, ContextManager
from alpaca_lora_4bit.monkeypatch.peft_tuners_lora_monkey_patch import (
replace_peft_model_with_int4_lora_model
)
from alpaca_lora_4bit.autograd_4bit import Autograd4bitQuantLinear
from alpaca_lora_4bit.models import Linear4bitLt | 8,744 |
def on_log(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, logs, **kwargs):
train_log.update(logs)
current_steps_offset = tracked.current_steps + non_serialized_params['checkpoint_offset']
current_epoch_offset = train_log.get('epoch', 0.0) + non_serialized_params['epoch_offset']
train_log.update({"current_steps": tracked.current_steps})
train_log.update({"current_steps_adjusted": current_steps_offset})
train_log.update({"epoch_adjusted": current_epoch_offset})
if WANT_INTERRUPT:
print("\033[1;31;1mInterrupted by user\033[0;37;0m")
if non_serialized_params['checkpoint_offset']>0:
print(f"\033[1;30;40mStep: {tracked.current_steps:6} [+{non_serialized_params['checkpoint_offset']}] \033[0;37;0m", end='')
else:
print(f"\033[1;30;40mStep: {tracked.current_steps:6} \033[0;37;0m", end='')
graphentry = {
'current_steps': int(train_log.get('current_steps_adjusted',0)),
'loss': float(train_log.get('loss', 0.0)),
'learning_rate': float(train_log.get('learning_rate', 0.0)),
'epoch': float(train_log.get('epoch_adjusted', 0.0))
}
cur_loss = float(train_log.get('loss', 0.0))
cur_lr = float(train_log.get('learning_rate', 0.0))
cur_epoch = float(train_log.get('epoch', 0.0))
if len(statistics['loss']) == 1:
first_epoch = statistics['loss'][0]['epoch']
first_value = statistics['loss'][0]['value']
if first_value ==0:
statistics['loss'] = []
statistics['loss'].append({'epoch': cur_epoch, 'value': cur_loss})
statistics['lr'].append({'epoch': cur_epoch, 'value': cur_lr})
# Add the entry to the continuous log
train_log_graph.append(graphentry)
# Save the graph log for now, we can later generate full graph
with open(f"{lora_file_path}/training_graph.json", 'w') as file:
json.dump(train_log_graph, file, indent=4)
if 'loss' in logs:
loss = float(logs['loss'])
if loss <= stop_at_loss:
control.should_epoch_stop = True
control.should_training_stop = True
print(f"{RED}Stop Loss {stop_at_loss} reached.{RESET}")
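The stop condition above fires as soon as a single logged loss reaches the target. Isolated as a predicate (a hypothetical helper; the real callback also lets `stop_at_loss` be changed mid-training via the UI):

```python
def should_stop_at_loss(current_loss, stop_at_loss):
    # A non-positive target effectively disables the check, since training
    # loss stays positive in practice.
    return stop_at_loss > 0 and current_loss <= stop_at_loss
```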
# FPHAM SAMPLE REQ Transformers error handling
gradient_accumulation_max = int(train_data.num_rows)//micro_batch_size
if gradient_accumulation_max < gradient_accumulation_steps:
print(f"{RED}WARNING:{RESET} Current gradient accumulation is {RED}too high{RESET} for the amount of training data.")
print(f"Gradient accumulation: {gradient_accumulation_steps} should be less than: {gradient_accumulation_max}. {RED}This could crash Accelerate/Transformers{RESET}")
#min_batchSize = sample_req*micro_batch_size
            print(f"Preferred fix: {RED}Increase the size of the dataset{RESET}")
            print(f"... or decrease Gradient Accumulation {RED}{gradient_accumulation_steps}{RESET} to below {GREEN}{gradient_accumulation_max}{RESET}")
gradient_accumulation_steps = max(1,gradient_accumulation_max-1)
print(f"Last resort fix for this run: Lowering Gradient accumulation to {GREEN}{gradient_accumulation_steps}{RESET} [Good luck]")
else:
print(f"Data Size Check: Gradient accumulation: {YELLOW}{gradient_accumulation_steps}{RESET} <= Blocks/Batch {gradient_accumulation_max} ... {GREEN}[OK]{RESET}")
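The sanity check above can be isolated into a small helper; a sketch of the same clamping rule (the function name is hypothetical, the arithmetic mirrors the code):

```python
def clamp_grad_accumulation(num_rows, micro_batch_size, grad_accum_steps):
    # Gradient accumulation must stay below the number of batches the
    # dataset can supply, otherwise Accelerate/Transformers may crash.
    grad_accum_max = num_rows // micro_batch_size
    if grad_accum_max < grad_accum_steps:
        return max(1, grad_accum_max - 1)
    return grad_accum_steps
```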
#END OF FPHAM SAMPLE REQ
# FPHAM Custom Scheduler ==
custom_scheduller = False
lr_scheduler_type_arg = lr_scheduler_type
if lr_scheduler_type == 'FP_low_epoch_annealing':
custom_scheduller = True
lr_scheduler_type_arg = 'cosine'
elif lr_scheduler_type == 'FP_half_time_annealing':
custom_scheduller = True
lr_scheduler_type_arg = 'constant'
elif lr_scheduler_type =='FP_raise_fall_creative':
custom_scheduller = True
lr_scheduler_type_arg = 'constant_with_warmup'
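The three `FP_*` schedulers are implemented in `custom_scheduler.py`; here they are only mapped onto the base scheduler types that `transformers.TrainingArguments` understands. That mapping can be sketched as a lookup (hypothetical helper):

```python
CUSTOM_SCHEDULER_BASE = {
    'FP_low_epoch_annealing': 'cosine',
    'FP_half_time_annealing': 'constant',
    'FP_raise_fall_creative': 'constant_with_warmup',
}

def resolve_scheduler(name):
    # Returns (is_custom, base_type_passed_to_TrainingArguments).
    if name in CUSTOM_SCHEDULER_BASE:
        return True, CUSTOM_SCHEDULER_BASE[name]
    return False, name
```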
#gradient_checkpointing=True
args=transformers.TrainingArguments(
report_to=report_to if report_to != "None" else None,
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
warmup_ratio = warmup_ratio,
num_train_epochs=epochs,
learning_rate=actual_lr,
fp16=False if shared.args.cpu else True,
optim=optimizer,
logging_steps=1,
evaluation_strategy="steps" if eval_data is not None else "no",
eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
save_strategy="steps" if eval_data is not None else "no",
output_dir=lora_file_path,
lr_scheduler_type=lr_scheduler_type_arg,
load_best_model_at_end=eval_data is not None,
# TODO: Enable multi-device support
ddp_find_unused_parameters=None,
no_cuda=shared.args.cpu,
)
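One subtlety in the arguments above: `warmup_steps` is divided by `gradient_accumulation_steps` because `TrainingArguments` counts warmup in optimizer update steps, while the UI value is given in raw micro-batch steps. A sketch of the conversion:

```python
import math

def warmup_in_optimizer_steps(warmup_steps, grad_accum_steps):
    # Raw warmup steps are micro-batch steps; the trainer wants optimizer steps.
    return math.ceil(warmup_steps / grad_accum_steps)
```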
if custom_scheduller:
trainer = FPSchedulerTrainer(
neftune_noise_alpha=neft_noise_alpha,
model=lora_model,
train_dataset=train_data,
eval_dataset=eval_data,
args=args,
data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
callbacks=list([Callbacks()])
)
elif neft_noise_alpha > 0:
|
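The `neft_noise_alpha > 0` branch above hands training off to the bundled `FPNEFtuneTrainer`. NEFTune perturbs the input embeddings with uniform noise whose magnitude is `alpha / sqrt(seq_len * hidden_dim)`; a sketch of that scaling factor (this is the formula from the NEFTune paper, assumed to match the bundled trainer):

```python
import math

def neftune_noise_scale(alpha, seq_len, hidden_dim):
    # Uniform(-1, 1) noise added to the embeddings is scaled by this factor,
    # so larger sequences and hidden sizes receive proportionally less noise.
    return alpha / math.sqrt(seq_len * hidden_dim)
```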
os.environ["WANDB_MODE"] = "offline"
# os.environ["WANDB_DISABLED"] = "true"
## just temporary to avoid warning
if hasattr(torch.utils.checkpoint, 'noop_context_fn'):
def my_checkpoint(
function,
*args,
use_reentrant: Optional[bool] = None,
context_fn: Callable[[], Tuple[ContextManager, ContextManager]] = torch.utils.checkpoint.noop_context_fn,
determinism_check: str = torch.utils.checkpoint._DEFAULT_DETERMINISM_MODE,
debug: bool = False,
**kwargs
):
if use_reentrant is None:
#print ("reentran = NONE")
use_reentrant = True
# Hack to mix *args with **kwargs in a python 2.7-compliant way
preserve = kwargs.pop("preserve_rng_state", True)
if kwargs and use_reentrant:
raise ValueError(
"Unexpected keyword arguments: " + ",".join(arg for arg in kwargs)
)
if use_reentrant:
if context_fn is not torch.utils.checkpoint.noop_context_fn or debug is not False:
raise ValueError(
"Passing `context_fn` or `debug` is only supported when "
"use_reentrant=False."
)
return torch.utils.checkpoint.CheckpointFunction.apply(function, preserve, *args)
else:
            print("reentrant = False")
gen = torch.utils.checkpoint._checkpoint_without_reentrant_generator(
function, preserve, context_fn, determinism_check, debug, *args, **kwargs
)
# Runs pre-forward logic
next(gen)
ret = function(*args, **kwargs)
# Runs post-forward logic
            try:
                next(gen)
            except StopIteration:
                return ret

    # Install the patched function so the silent use_reentrant default applies
    torch.utils.checkpoint.checkpoint = my_checkpoint
params = {
"display_name": "Training PRO",
"is_tab": True
}
non_serialized_params = {
"debug_slicer": False,
"Lora_sortedByTime": False,
"stop_at_loss": 0,
"save_steps_under_loss": 0.0,
"save_checkpoint_now": False,
"training_loop": False,
"current_stability": 0,
"save_epochs": 0,
"checkpoint_offset": 0,
"epoch_offset":0,
}
MODEL_CLASSES = {v[1]: v[0] for v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()}
PARAMETERS = ["lora_name", "always_override", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "higher_rank_limit", "warmup_steps", "optimizer", "hard_cut_string", "train_only_after", "stop_at_loss", "add_eos_token", "min_chars", "report_to", "precize_slicing_overlap", "add_eos_token_type", "save_steps_under_loss", "add_bos_token", "training_projection","sliding_window","warmup_ratio","grad_accumulation","neft_noise_alpha"]
WANT_INTERRUPT = False
train_log = {}
train_template = {}
train_log_graph = []
train_choices = ["all","q-k-v-o","q-k-v","k-v-down","q-v"]
statistics = {
'loss': [],
'lr': [],
}
RED = "\033[91m"
YELLOW = "\033[93m"
GREEN = "\033[92m"
RESET = "\033[0m"
def ui():
with gr.Tab('Train LoRA', elem_id='lora-train-tab'):
tmp = gr.State('')
with gr.Row():
with gr.Column():
# YY.MM.DD
        gr.Markdown("`Ver: 23.10.20` This is an enhanced version of QLoRA Training. [Maintained by FP](https://github.com/FartyPants/Training_PRO/tree/main)")
with gr.Row():
with gr.Column(scale=5):
with gr.Row():
copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=get_available_loras_local(non_serialized_params['Lora_sortedByTime']), elem_classes=['slim-dropdown'])
create_refresh_button(copy_from, lambda: None, lambda: {'choices': get_available_loras_local(non_serialized_params['Lora_sortedByTime'])}, 'refresh-button')
with gr.Column():
sort_byTime = gr.Checkbox(label='Sort list by Date', value=False, info='Sorts Loras by date created.', elem_classes=['no-background'])
with gr.Row():
with gr.Column(scale=5):
lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file')
with gr.Column():
always_override = gr.Checkbox(label='Override Existing Files', value=False, info='If the name is the same, checking will replace the existing file, and unchecking will load and continue from it (the rank must be the same).', elem_classes=['no-background'])
with gr.Row():
with gr.Column():
lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='Also called dimension count. Higher values = larger file, more content control. Smaller values = smaller file, less control. Use 4 or 8 for style, 128 or 256 to teach, 1024+ for fine-detail on big data. More VRAM is needed for higher ranks.')
lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.')
batch_size = gr.Slider(visible= False, label='Batch Size', value=0, minimum=0, maximum=1024, step=4, info='Now Replaced with Gradient accumulation. Keeping it for sake of old saved data')
                        micro_batch_size = gr.Slider(label='True Batch Size', value=4, minimum=1, maximum=128, step=1, info='Specifies how many text blocks per step will be trained. The higher the value, the better the training captures concepts, but it requires more GPU memory and reduces speed.')
grad_accumulation = gr.Slider(label='Gradient Accumulation Steps', value=1, minimum=1, maximum=256, step=1, info="Virtually multiplies the Batch Size by averaging the learning over more than one step. VRAM friendly. Evens out loss fluctuations but can also degrade training fidelity.")
with gr.Column():
stop_at_loss = gr.Slider(label='Stop at loss (Can be changed during training)', minimum=0.0, maximum=3.0, step=0.1, value=0.00, info='The process will automatically stop once the desired loss value is reached.')
gr.Markdown(" ")
epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.')
learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='In scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.')
lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt', 'FP_low_epoch_annealing', 'FP_half_time_annealing','FP_raise_fall_creative'], info='Learning rate scheduler - defines how the learning rate changes over time. Custom schedulers: FP_low_epoch_annealing, FP_half_time_annealing, FP_raise_fall_creative (see README)', elem_classes=['slim-dropdown'])
with gr.Accordion(label='Checkpoints', open=True):
with gr.Row():
with gr.Column():
save_steps = gr.Number(label='Save every n steps', value=0, info='A checkpoint will be saved every n steps and at each Epoch boundary. (0 = OFF)')
with gr.Column():
                            save_steps_under_loss = gr.Slider(label='Save at 10% Loss change', value=1.8, minimum=0.0, maximum=3.0, step=0.1, info="Saves checkpoints at (or below) this loss, and then each time the loss falls by at least 10%. This works independently from 'Save every n steps'")
with gr.Row():
save_chackpoint_now = gr.Button('Queue Checkpoint Now')
with gr.Accordion(label='Advanced Options', open=True):
with gr.Row():
with gr.Column():
                            warmup_steps = gr.Number(label='Warmup Steps', value=100, info='Maximum number of steps used for a linear warmup. Reduces early over-fitting from the first training blocks. Takes precedence over Warmup Ratio. Aligned to the closest multiple of gradient accumulation.')
                            warmup_ratio = gr.Slider(label='Warmup Ratio', minimum=0.0, maximum=0.2, step=0.025, value=0.0, info='Ratio of total training steps that will be used for a linear warmup. Applies only if Warmup Steps is 0.')
neft_noise_alpha = gr.Slider(label='NEFtune noise scale', minimum=0.0, maximum=15, step=1, value=0.0, info='Add noise to the training to improve generalization. [0 - OFF, Starting value to experiment: 5]')
training_projection = gr.Radio(value = train_choices[4], label='LLaMA Target Projections', info='Change the targets (LORA is typically q-v)', choices=train_choices)
lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.')
optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.', elem_classes=['slim-dropdown'])
with gr.Column():
train_only_after = gr.Textbox(label='Train Only After', value='', info='Only consider text *after* this string in any given chunk for training. For Alpaca datasets, use "### Response:" to only train the response and ignore the input.')
add_bos_token = gr.Checkbox(label='Add BOS token', value=True, info="Adds BOS token for each dataset item")
add_eos_token = gr.Checkbox(label='Add EOS token', value=False, info="Adds EOS token for each dataset item")
add_eos_token_type = gr.Dropdown(label='EOS placement (Text file)', choices=['Every Block', 'Hard Cut Blocks Only'], value='Every Block', info='', allow_custom_value = False)
higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. This will not work without a datacenter-class GPU.')
report_to = gr.Radio(label="Save detailed logs with", value="None", choices=["None", "wandb", "tensorboard"], interactive=True)
# for future
#with gr.Accordion(label='Dynamic Scheduler', open = False):
# ds_min_epochs = gr.Number(label='Minimum Epochs', value='1', info='Minimum epochs that will be always performed before ramp down can be triggered')
# ds_max_epochs = gr.Number(label='Maximum Epochs (fallback)', value='50', info='Maximum Epochs before the training will bail out completely (should be a large number)')
# ds_loss_trigger = gr.Slider(label='Trigger Loss', minimum=0.0, maximum=2.8, step=0.1, value=1.6, info='Loss at which the ramp down schedule will be triggered')
# ds_loss_rolling_window = gr.Number(label='Loss rolling average', value='4', info='Calculate loss by averaging last x numbers to avoid jumps and noise')
# ds_epochs_to_ramp = gr.Slider(label='Ramp down ratio', minimum=0.0, maximum=2.0, step=0.1, value=1.00, info='How long the ramp down will last relative to elapsed steps (before trigger)')
# gr.Markdown('These are settings for the FP_dynamic_loss_trigger scheduler. The scheduler will warm up, then hold constant until the loss falls under Trigger Loss, then commence a linear ramp down schedule and stop. The length of the ramp down is set by Ramp down ratio, where (ramp down steps) = ratio * (elapsed steps). (The time to completion shown will be very high until ramp down is triggered.)')
with gr.Column():
with gr.Tab(label='Formatted Dataset'):
with gr.Row():
with gr.Column():
with gr.Row():
dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.', elem_classes=['slim-dropdown'])
create_refresh_button(dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
with gr.Row():
eval_dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.', elem_classes=['slim-dropdown'])
create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
with gr.Column():
with gr.Row():
format = gr.Dropdown(choices=get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.', elem_classes=['slim-dropdown'])
create_refresh_button(format, lambda: None, lambda: {'choices': get_datasets('training/formats', 'json')}, 'refresh-button')
with gr.Row():
eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.')
with gr.Tab(label="Text file"):
with gr.Row():
raw_text_file = gr.Dropdown(choices=get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The text file to use for training.', elem_classes=['slim-dropdown'])
create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'txt')}, 'refresh-button')
with gr.Row():
with gr.Column():
precize_slicing_overlap = gr.Checkbox(label='Add Overlapping blocks', value = True)
sliding_window = gr.Checkbox(label='DEMENTOR Long-form Learning by FP (Highly Experimental, use low epochs)', value = False, info='Deep Memorization Enforcement Through Overlapping and Repetition. (I named it, so shush). Special process for learning long-form text using low amount of epochs.')
#debug_slicer = gr.Checkbox(label='Dump sentencelist.json to logs', value = non_serialized_params['debug_slicer'], info='Debug Slicer')
with gr.Column():
hard_cut_string = gr.Textbox(label='Hard Cut String', value='\\n\\n\\n', info='String that indicates a cut between logical blocks of text (ex. Ideas or Chapters). Helps prevent unwanted overlap between unrelated ideas.')
min_chars = gr.Number(label='Ignore small blocks', value=0, info='Ignore Text blocks that have less or equal characters than this number.')
with gr.Tab(label="URL"):
with gr.Row():
with gr.Column():
download_file_url = gr.Textbox(label='Download JSON or txt file to datasets (or formats) folder', value='', info='The URL of a file to download. If on GitHub, make sure you get the URL of the raw file (https://raw.githubusercontent.com/...). If on Hugging Face, make sure the URL contains /resolve/, not /blob/.')
with gr.Row():
download_check_overwrite = gr.Checkbox(label='Overwrite', value=False, info='Overwrite if the file exists')
download_folder = gr.Radio(label="Destination", value='training/datasets', choices=['training/datasets', 'training/formats'], interactive=True)
download_button = gr.Button('Download')
download_status = gr.Textbox(label='Download Status', value='', interactive=False)
with gr.Row():
with gr.Column():
with gr.Row():
cutoff_len = gr.Slider(label='Chunk Length (Cutoff Length)', minimum=32, maximum=2048, value=256, step=32, info='The maximum length of a chunk (in tokens). Applies to both JSON dataset and text files. Higher values require much more VRAM.')
with gr.Row():
with gr.Column():
check_dataset_btn = gr.Button('Verify Dataset/Text File and suggest data entries')
check_dataset_txt = gr.Textbox(label='Dataset info', value='')
with gr.Row():
start_button = gr.Button("Start LoRA Training", variant='primary')
stop_button = gr.Button("Interrupt")
with gr.Accordion(label="Graph", open=True):
with gr.Row():
# show_actions_button = False - we use old gradio
plot_graph = gr.LinePlot(x="epoch", y="value", title="Loss Metrics", overlay_point=True, tooltip=["epoch", "value"], x_lim=[0, 1], y_lim=[0, 3.5], width=500, height=250)
output = gr.Markdown(value="Ready")
with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'):
with gr.Row():
with gr.Column():
models = gr.Dropdown(utils.get_available_models(), label='Models', multiselect=True)
evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.')
with gr.Row():
with gr.Column():
stride_length = gr.Slider(label='Stride', minimum=1, maximum=2048, value=512, step=1, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.')
with gr.Column():
max_length = gr.Slider(label='max_length', minimum=0, maximum=8096, value=0, step=1, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.')
with gr.Row():
start_current_evaluation = gr.Button("Evaluate loaded model")
start_evaluation = gr.Button("Evaluate selected models")
stop_evaluation = gr.Button("Interrupt")
with gr.Column():
evaluation_log = gr.Markdown(value='')
evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True)
with gr.Row():
save_comments = gr.Button('Save comments', elem_classes="small-button")
refresh_table = gr.Button('Refresh the table', elem_classes="small-button")
# Training events
all_params = [lora_name, always_override, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, higher_rank_limit, warmup_steps, optimizer, hard_cut_string, train_only_after, stop_at_loss, add_eos_token, min_chars, report_to, precize_slicing_overlap, add_eos_token_type, save_steps_under_loss, add_bos_token, training_projection,sliding_window,warmup_ratio,grad_accumulation, neft_noise_alpha]
def fix_old_version(batch_size_val,micro_batch_size_val, grad_accumulation_val):
if batch_size_val>0:
gradient_acc = batch_size_val // micro_batch_size_val
print(f"Using Old version of Batch Size ({batch_size_val}) to set Gradient Accumulation: {gradient_acc}")
return gradient_acc
return grad_accumulation_val
copy_from.change(partial(do_copy_params, all_params= all_params), copy_from, all_params).then(fix_old_version,[batch_size,micro_batch_size, grad_accumulation],grad_accumulation)
start_button.click(do_train, all_params, [output,plot_graph])
stop_button.click(do_interrupt, None, None, queue=False)
higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha])
def trigger_stop_at_loss(stop_at_loss_value):
non_serialized_params.update({"stop_at_loss": stop_at_loss_value})
if non_serialized_params['training_loop']:
print(f"Queue: [Stop at loss Change] to {stop_at_loss_value}")
stop_at_loss.change(trigger_stop_at_loss, stop_at_loss, None)
def trigger_save_checkpoint():
non_serialized_params.update({"save_checkpoint_now": True})
if non_serialized_params['training_loop']:
print("Queue: [Save checkpoint] Checkpoint will be saved after the current step is finished.")
else:
print("Use this during training to save a checkpoint at any time.")
def update_button():
return gr.Button.update('[Checkpoint in Queue]', variant='stop', interactive=True)
def update_button2():
time.sleep(1.0)
return gr.Button.update('Queue Checkpoint Now', variant='secondary',interactive = True)
save_chackpoint_now.click(trigger_save_checkpoint, None, None).then(update_button, None,save_chackpoint_now).then(update_button2, None,save_chackpoint_now)
dataset_calc_params = [save_steps,micro_batch_size, epochs, cutoff_len, dataset, format, raw_text_file, warmup_steps, hard_cut_string, min_chars, precize_slicing_overlap,sliding_window,warmup_ratio,grad_accumulation]
def check_dataset(save_steps:int, micro_batch_size: int, epochs: int, cutoff_len: int, dataset:str, format:str, raw_text_file:str, warmup_steps:int, hard_cut_string:str, min_chars:int, precize_slicing_overlap:bool,sliding_window:bool,warmup_ratio:float,grad_accumulation:int):
result = "Specify a JSON dataset or a text file"
total_blocks = 0
if shared.tokenizer is None:
yield "Tokenizer is not available. Please Load some Model first."
return
if raw_text_file not in ['None', '']:
logger.info("Loading Text file...")
fullpath = clean_path('training/datasets', f'{raw_text_file}')
fullpath = Path(fullpath)
if fullpath.is_dir():
logger.info('Training path directory {}'.format(raw_text_file))
raw_text = ""
file_paths = sorted(fullpath.glob('*.txt'), key=lambda path: natural_keys(path.name))
for file_path in file_paths:
if file_path.is_file():
with file_path.open('r', encoding='utf-8') as file:
raw_text += file.read().replace('\r', '')
logger.info(f"Loaded training file: {file_path.name}")
else:
try:
with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
raw_text = file.read().replace('\r', '')
except FileNotFoundError:
yield f"{raw_text_file}.txt doesn't seem to exist anymore... check your training/datasets folder"
return
if min_chars<0:
min_chars = 0
# == New more precise slicing on sentence boundary ==
if sliding_window:
text_chunks = sliding_block_cut(raw_text, min_chars, False, cutoff_len, hard_cut_string,non_serialized_params['debug_slicer'])
else:
text_chunks = precise_cut(raw_text, precize_slicing_overlap, min_chars, False, cutoff_len, hard_cut_string,non_serialized_params['debug_slicer'])
total_blocks = len(text_chunks)
result = f"Text: ({raw_text_file}.txt) has {total_blocks} blocks (Block Size {cutoff_len} tokens)"
del text_chunks
else:
if dataset in ['None', '']:
yield "Select dataset or text file."
return
if format in ['None', '']:
yield "Select format choice for dataset."
return
with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8-sig') as formatFile:
format_data: dict[str, str] = json.load(formatFile)
def generate_prompt(data_point: dict[str, str]):
for options, data in format_data.items():
if set(options.split(',')) == set(x[0] for x in data_point.items() if (type(x[1]) is str and len(x[1].strip()) > 0)):
for key, val in data_point.items():
if type(val) is str:
data = data.replace(f'%{key}%', val)
return data
raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
def tokenize_dummy(prompt):
input_ids = shared.tokenizer.encode(prompt, truncation=True, max_length=cutoff_len)
labels = [1] * len(input_ids)
input_ids = torch.tensor(input_ids)
return {
"input_ids": input_ids,
"labels": labels,
"attention_mask": input_ids.ne(shared.tokenizer.pad_token_id),
}
def generate_and_tokenize_prompt(data_point):
prompt = generate_prompt(data_point)
return tokenize_dummy(prompt)
logger.info("Loading JSON datasets...")
data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
data_keys = []
if data:
if 'train' in data: # Check if the 'train' split exists in the dataset
data_keys = list(data['train'][0].keys())
print("Data Keys:", data_keys)
else:
print("The dataset is empty.")
train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
total_blocks = train_data.num_rows
result = f"Dataset: ({dataset}.json) has {total_blocks} blocks @ length = {cutoff_len} tokens\n(Keys: {data_keys} - Format: {format}.json): "
#for options, data in format_data.items():
# format_keys = options.split(',')
# result += f"{format_keys}, "
#result = result.rstrip()
#result = result.rstrip(',')
if total_blocks>0:
number_ofSteps = int(math.ceil(total_blocks / micro_batch_size) * epochs)
num_stepsPer_epoch = int(math.ceil(number_ofSteps/epochs))
min_warm = math.ceil(100 / grad_accumulation)
warmup_steps_suggest = min(int(min_warm*grad_accumulation), int(math.ceil(number_ofSteps * 0.1)))
warmup_steps_suggest = min(warmup_steps_suggest,num_stepsPer_epoch)
save_each_n_min = int(math.ceil(number_ofSteps/10))
save_each_n_max = int(math.ceil(number_ofSteps/5))
gradient_accumulation_max = int(total_blocks)//micro_batch_size
result += f"\n[Batch Size: {micro_batch_size}, Epochs: {epochs}, Gradient Accumulation: {grad_accumulation}]\n"
result += f"Total number of steps: {number_ofSteps}\n"
result += f"Steps per each Epoch: {num_stepsPer_epoch}\n"
result += f"Suggestions:\n"
result += f"Checkpoints: Save every {save_each_n_min} - {save_each_n_max} steps (Current: {int(save_steps)})\n"
result += f"Warmup steps: {warmup_steps_suggest} (Current: {int(warmup_steps)})"
if gradient_accumulation_max < grad_accumulation:
result += f"\n\nWARNING: Gradient Accumulation {grad_accumulation} is too high: It should be below {gradient_accumulation_max}"
yield result
return
check_dataset_btn.click(check_dataset, dataset_calc_params ,check_dataset_txt)
# Evaluation events. For some reason, the interrupt event
# doesn't work with the .then() syntax, so I write them one
# by one in this ugly but functional way.
ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
start_current_evaluation.click(lambda: ['current model'], None, tmp)
ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False)
refresh_table.click(generate_markdown_table, None, evaluation_table, show_progress=True)
save_comments.click(
save_past_evaluations, evaluation_table, None).then(
lambda: "Comments saved.", None, evaluation_log, show_progress=False)
def reload_lora():
return gr.Dropdown.update(choices=get_available_loras_local(non_serialized_params['Lora_sortedByTime']))
# nonserialized items
sort_byTime.change(lambda x: non_serialized_params.update({"Lora_sortedByTime": x}), sort_byTime, None).then(reload_lora,None,copy_from)
#debug_slicer.change(lambda x: non_serialized_params.update({"debug_slicer": x}), debug_slicer, None)
def update_dataset():
return gr.update(choices=get_datasets('training/datasets', 'json')), gr.update(choices=get_datasets('training/datasets', 'txt'))
download_button.click(download_file_from_url, [download_file_url,download_check_overwrite,download_folder] , download_status).then(update_dataset,None,[dataset , raw_text_file])
def get_datasets(path: str, ext: str):
# include subdirectories for raw txt files to allow training from a subdirectory of txt files
#if ext == "txt":
# return ['None'] + sorted(set([k.stem for k in list(Path(path).glob('txt')) + list(Path(path).glob('*/')) if k.stem != 'put-trainer-datasets-here']), key=natural_keys)
return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=natural_keys)
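# Note on ordering: get_datasets sorts with natural_keys, so numbered files
# come out in human order ('file2' before 'file10'). A self-contained sketch
# of such a key function (hypothetical helper, not used by this extension):
import re
def _natural_sort_key_demo(name: str):
    # split into digit and non-digit runs; digit runs compare numerically
    return [int(part) if part.isdigit() else part.lower() for part in re.split(r'(\d+)', name)]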
def do_interrupt():
global WANT_INTERRUPT
WANT_INTERRUPT = True
def do_copy_params(lora_name: str, all_params):
if lora_name:
f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json"
if Path(f_name).is_file():
with open(f_name, 'r', encoding='utf-8') as format_file:
params: dict[str, str] = json.load(format_file)
else:
params = {}
else:
params = {}
result = list()
for i in range(0, len(PARAMETERS)):
key = PARAMETERS[i]
if key in params:
result.append(params[key])
else:
result.append(all_params[i])
return result
def change_rank_limit(use_higher_ranks: bool):
mult = 2 if use_higher_ranks else 1
return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"}
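# The doubling rule above, restated as a tiny pure function so the resulting
# slider maxima are explicit (illustrative only, not wired to the UI):
def _rank_maxima_demo(use_higher_ranks: bool):
    # returns (lora_rank maximum, lora_alpha maximum)
    mult = 2 if use_higher_ranks else 1
    return 1024 * mult, 2048 * mult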
def clean_path(base_path: str, path: str):
"""Strips unusual symbols and forcibly builds a path as relative to the intended directory."""
path = path.replace('\\', '/').replace('..', '_')
if base_path is None:
return path
return f'{Path(base_path).absolute()}/{path}'
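# Illustrative check of the sanitization rule above (hypothetical helper,
# unused): backslashes become '/', then every '..' becomes '_', so a
# user-supplied name can never traverse out of the intended base directory.
def _clean_path_rule_demo(path: str) -> str:
    # same two substitutions clean_path performs before joining to base_path
    return path.replace('\\', '/').replace('..', '_')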
def backup_adapter(input_folder):
# Get the creation date of the file adapter_model.bin
try:
adapter_file = Path(f"{input_folder}/adapter_model.bin")
if adapter_file.is_file():
logger.info("Backing up existing LoRA adapter...")
creation_date = datetime.fromtimestamp(adapter_file.stat().st_ctime)
creation_date_str = creation_date.strftime("Backup-%Y-%m-%d")
# Create the new subfolder
subfolder_path = Path(f"{input_folder}/{creation_date_str}")
subfolder_path.mkdir(parents=True, exist_ok=True)
# Check if the file already exists in the subfolder
backup_adapter_file = Path(f"{input_folder}/{creation_date_str}/adapter_model.bin")
if backup_adapter_file.is_file():
print(" - Backup already exists. Skipping backup process.")
return
# Copy existing files to the new subfolder
existing_files = Path(input_folder).iterdir()
for file in existing_files:
if file.is_file():
shutil.copy2(file, subfolder_path)
except Exception as e:
print("An error occurred in backup_adapter:", str(e))
def calc_trainable_parameters(model):
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
num_params = param.numel()
# if using DS Zero 3 and the weights are initialized empty
if num_params == 0 and hasattr(param, "ds_numel"):
num_params = param.ds_numel
all_param += num_params
if param.requires_grad:
trainable_params += num_params
return trainable_params, all_param
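# Framework-free sketch of the counting rule in calc_trainable_parameters
# (hypothetical helper, not used): bucket element counts by requires_grad.
def _count_params_demo(param_specs):
    # param_specs: iterable of (numel, requires_grad) pairs
    trainable = sum(n for n, grad in param_specs if grad)
    total = sum(n for n, _ in param_specs)
    return trainable, total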
def do_train(lora_name: str, always_override: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, higher_rank_limit: bool, warmup_steps: int, optimizer: str, hard_cut_string: str, train_only_after: str, stop_at_loss: float, add_eos_token: bool, min_chars: int, report_to: str, precize_slicing_overlap: bool, add_eos_token_type: str, save_steps_under_loss: float, add_bos_token: bool, training_projection: str,sliding_window:bool,warmup_ratio:float, grad_accumulation: int,neft_noise_alpha:float):
if shared.args.monkey_patch:
replace_peft_model_with_int4_lora_model()
global train_log_graph
global WANT_INTERRUPT
WANT_INTERRUPT = False
statistics['loss'] = []
statistics['loss'].append({'epoch': 0, 'value': 0})
zero_pd = pd.DataFrame(statistics['loss'])
# == Input validation / processing ==
yield "Preparing the input...", zero_pd
lora_file_path = clean_path(None, lora_name)
if lora_file_path.strip() == '':
yield "Missing or invalid LoRA file name input.", zero_pd
return
lora_file_path = f"{Path(shared.args.lora_dir)}/{lora_file_path}"
actual_lr = float(learning_rate)
model_type = type(shared.model).__name__
if model_type in MODEL_CLASSES:
model_id = MODEL_CLASSES[model_type]
else:
model_id = "llama"
if model_type == "PeftModelForCausalLM":
if len(shared.lora_names) > 0:
yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*", zero_pd
logger.warning("Training LoRA over top of another LoRA. May have unexpected effects.")
else:
yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*", zero_pd
logger.warning("Model ID not matched due to LoRA loading. Consider reloading base model.")
else:
yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*", zero_pd
logger.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})")
time.sleep(5)
if shared.args.loader == 'GPTQ-for-LLaMa' and not shared.args.monkey_patch:
yield "LoRA training with GPTQ-for-LLaMa requires loading with `--monkey-patch`", zero_pd
return
if cutoff_len <= 0 or micro_batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0:
yield "Cannot input zeroes.", zero_pd
return
#in new version we dumped this in favor of grad_accumulation
#set it to zero for new save
batch_size = 0
gradient_accumulation_steps = grad_accumulation #batch_size // micro_batch_size
shared.tokenizer.pad_token_id = 0
shared.tokenizer.padding_side = "left"
def encode(text, prepend_bos_token):
result = shared.tokenizer.encode(text, truncation=True, max_length=cutoff_len)
# Check if the first two tokens are BOS
if len(result) >= 2 and result[:2] == [shared.tokenizer.bos_token_id, shared.tokenizer.bos_token_id]:
result = result[1:]
if not prepend_bos_token and result[0] == shared.tokenizer.bos_token_id:
result = result[1:]
return result
def tokenize(prompt, append_eos_token=False, prepend_bos_token = False):
if train_only_after == '' or train_only_after not in prompt:
input_ids = encode(prompt, prepend_bos_token)
if append_eos_token and input_ids[-1] != shared.tokenizer.eos_token_id and len(input_ids) < cutoff_len:
input_ids.append(shared.tokenizer.eos_token_id)
input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
labels = [1] * len(input_ids)
else:
ind = prompt.index(train_only_after) + len(train_only_after)
before_tokens = encode(prompt[:ind], prepend_bos_token)
after_tokens = encode(prompt[ind:], False)
if append_eos_token and after_tokens[-1] != shared.tokenizer.eos_token_id:
after_tokens.append(shared.tokenizer.eos_token_id)
full_length = len(after_tokens) + len(before_tokens)
if full_length > cutoff_len:
after_tokens = after_tokens[:cutoff_len - len(before_tokens)]
else:
before_tokens = [shared.tokenizer.pad_token_id] * (cutoff_len - full_length) + before_tokens
input_ids = before_tokens + after_tokens
labels = [-100] * len(before_tokens) + [1] * len(after_tokens)
input_ids = torch.tensor(input_ids)
return {
"input_ids": input_ids,
"labels": labels,
"attention_mask": input_ids.ne(shared.tokenizer.pad_token_id),
}
train_template.clear()
#reset stuff
print(f"*** LoRA: {lora_name} ***")
non_serialized_params.update({"stop_at_loss": stop_at_loss})
non_serialized_params.update({"save_steps_under_loss": save_steps_under_loss+0.01})
non_serialized_params.update({"save_checkpoint_now": False})
non_serialized_params.update({"training_loop": False})
non_serialized_params.update({"current_stability": 0})
non_serialized_params.update({"save_epochs": 0})
non_serialized_params.update({"checkpoint_offset": 0})
non_serialized_params.update({"epoch_offset": 0})
train_log_graph.clear()
# === once fixed, this can be removed ==============================
if hasattr(torch.utils.checkpoint, 'noop_context_fn'):
print("Testing Pytorch...")
old_checkpoint_signature = inspect.signature(torch.utils.checkpoint.checkpoint)
# Get the signature of your new checkpoint function
my_checkpoint_signature = inspect.signature(my_checkpoint)
# Check if the signatures match
if old_checkpoint_signature.parameters == my_checkpoint_signature.parameters:
print(F"{RED}Overriding Torch checkpoint function to avoid repeated 'use_reentrant not explicitly set' warnings{RESET}")
#print(" - Note: Transformers need to pass use_reentrant in llama.modeling_llama in def forward, layer_outputs = torch.utils.checkpoint.checkpoint")
#print(" Once they do, this function can be removed")
torch.utils.checkpoint.checkpoint = my_checkpoint
# END OF FPHAM SENTENCE SPLIT functions ===================
# == Prep the dataset, format, etc ==
if raw_text_file not in ['None', '']:
train_template["template_type"] = "raw_text"
logger.info("Loading text file...")
fullpath = clean_path('training/datasets', f'{raw_text_file}')
fullpath = Path(fullpath)
if fullpath.is_dir():
logger.info('Training path directory {}'.format(raw_text_file))
raw_text = ""
file_paths = sorted(fullpath.glob('*.txt'), key=lambda path: natural_keys(path.name))
for file_path in file_paths:
if file_path.is_file():
with file_path.open('r', encoding='utf-8') as file:
raw_text += file.read().replace('\r', '')
logger.info(f"Loaded training file: {file_path.name}")
else:
with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
raw_text = file.read().replace('\r', '')
# FPHAM PRECISE SLICING
if min_chars<0:
min_chars = 0
add_EOS_to_all = add_eos_token and add_eos_token_type == 'Every Block'
add_EOS_to_HC = add_eos_token and add_eos_token_type != 'Every Block'
#print (f"add_eos_token {add_eos_token}, add_EOS_to_all {add_EOS_to_all}, add_EOS_to_HC {add_EOS_to_HC}")
# == New more precise slicing on sentence boundary ==
if sliding_window:
text_chunks = sliding_block_cut(raw_text, min_chars, add_EOS_to_HC, cutoff_len, hard_cut_string,non_serialized_params['debug_slicer'])
else:
text_chunks = precise_cut(raw_text, precize_slicing_overlap, min_chars, add_EOS_to_HC, cutoff_len, hard_cut_string,non_serialized_params['debug_slicer'])
train_data = Dataset.from_list([tokenize(x, add_EOS_to_all, add_bos_token) for x in text_chunks])
if add_EOS_to_all:
print(f"Added EOS to {len(text_chunks)} blocks")
print(f"All Data Blocks: {len(text_chunks)}")
del text_chunks
eval_data = None
else:
if dataset in ['None', '']:
yield "Missing dataset choice input, cannot continue.", zero_pd
return
if format in ['None', '']:
yield "Missing format choice input, cannot continue.", zero_pd
return
train_template["template_type"] = "dataset"
with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8-sig') as formatFile:
format_data: dict[str, str] = json.load(formatFile)
# == store training prompt ==
for _, value in format_data.items():
prompt_key = f"template_{len(train_template)}"
train_template[prompt_key] = value
def generate_prompt(data_point: dict[str, str]):
for options, data in format_data.items():
if set(options.split(',')) == set(x[0] for x in data_point.items() if (type(x[1]) is str and len(x[1].strip()) > 0)):
for key, val in data_point.items():
if type(val) is str:
data = data.replace(f'%{key}%', val)
return data
raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
def generate_and_tokenize_prompt(data_point):
prompt = generate_prompt(data_point)
return tokenize(prompt, add_eos_token, add_bos_token)
logger.info("Loading JSON datasets...")
data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
print(f"BOS: {add_bos_token} EOS: {add_eos_token}")
print(f"Data Blocks: {train_data.num_rows}")
if eval_dataset == 'None':
eval_data = None
else:
eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json'))
eval_data = eval_data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
# == We MUST reload model if it went through any previous training, even failed one ==
if shared.model_dirty_from_training:
selected_model = shared.model_name
if selected_model:
print("\033[1;31;1m(Model has been modified by previous training, it needs to be reloaded...)\033[0;37;0m")
try:
yield f"Reloading {selected_model}...", zero_pd
reload_model()
shared.tokenizer.pad_token_id = 0
shared.tokenizer.padding_side = "left"
if shared.model is not None:
print("Model reloaded OK, continue with training.")
else:
return f"Failed to load {selected_model}."
except:
exc = traceback.format_exc()
logger.error('Failed to reload the model.')
print(exc)
return exc.replace('\n', '\n\n')
# == Start prepping the model itself ==
if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'):
logger.info("Getting model ready...")
# here we can disable gradient checkpoint, by default = true, use_gradient_checkpointing=True
prepare_model_for_kbit_training(shared.model)
# base model is now frozen and should not be reused for any other LoRA training than this one
shared.model_dirty_from_training = True
print(f"Transformers Model Type: {YELLOW}{model_type}{RESET}")
if training_projection==train_choices[0]:
model_to_lora_modules[model_id] = ["gate_proj","down_proj","up_proj","q_proj","k_proj","v_proj","o_proj"]
elif training_projection==train_choices[1]:
model_to_lora_modules[model_id] = ["q_proj","k_proj", "v_proj", "o_proj"]
elif training_projection==train_choices[2]:
model_to_lora_modules[model_id] = ["q_proj","k_proj", "v_proj"]
elif training_projection==train_choices[3]:
model_to_lora_modules[model_id] = ["k_proj", "v_proj", "down_proj"]
else:
model_to_lora_modules[model_id] = ["q_proj", "v_proj"]
logger.info("Preparing for training...")
config = LoraConfig(
r=lora_rank,
lora_alpha=lora_alpha,
target_modules=model_to_lora_modules[model_id],
lora_dropout=lora_dropout,
bias="none",
task_type="CAUSAL_LM"
)
# == Backup the existing adapter ==
if not always_override:
backup_adapter(lora_file_path)
# == get model trainable params
model_trainable_params, model_all_params = calc_trainable_parameters(shared.model)
try:
logger.info("Creating LoRA model...")
lora_model = get_peft_model(shared.model, config)
if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file():
logger.info("Loading existing LoRA data...")
state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin")
set_peft_model_state_dict(lora_model, state_dict_peft)
print(f" + Continue Training on {RED}{lora_file_path}/adapter_model.bin{RESET}")
#load training_log.json if exist
if Path(f"{lora_file_path}/training_log.json").is_file():
with open(f"{lora_file_path}/training_log.json", 'r') as json_file:
json_ilog = json.load(json_file)
for key, value in json_ilog.items():
if key=='current_steps':
non_serialized_params.update({"checkpoint_offset": int(value+1)})
print(f" + Checkpoints will be saved with offset: {RED}{non_serialized_params['checkpoint_offset']}{RESET}")
if key=='epoch':
non_serialized_params.update({"epoch_offset": value})
print(f" + Epoch offset: {RED}{non_serialized_params['epoch_offset']}{RESET}")
if Path(f"{lora_file_path}/training_graph.json").is_file():
try:
with open(f"{lora_file_path}/training_graph.json", 'r') as json_file:
train_log_graph = json.load(json_file)
print(" + Training Graph loaded")
except Exception:
print("Can't read training_graph")
except Exception:
yield traceback.format_exc().replace('\n', '\n\n'), zero_pd
return
if shared.args.monkey_patch:
for _, m in lora_model.named_modules():
if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt):
if m.is_v1_model:
m.zeros = m.zeros.half()
m.scales = m.scales.half()
class Tracked():
def __init__(self):
self.current_steps = 0
self.max_steps = 0
self.did_save = False
tracked = Tracked()
actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps)
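The `actual_save_steps` computation above rescales the user-facing save interval into optimizer steps, since one optimizer step consumes `gradient_accumulation_steps` micro-steps. A quick sketch of the rounding behavior:

```python
import math

def scaled_save_steps(save_steps: int, gradient_accumulation_steps: int) -> int:
    # ceil keeps the interval at least 1 whenever save_steps > 0,
    # so saving never silently turns off after rescaling.
    return math.ceil(save_steps / gradient_accumulation_steps)
```

For example, a 500-step interval with accumulation 3 becomes 167 optimizer steps.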
class Callbacks(transformers.TrainerCallback):
def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
tracked.current_steps = state.global_step * gradient_accumulation_steps
tracked.max_steps = state.max_steps * gradient_accumulation_steps
ssteps10 = int(max(2,(state.max_steps/epochs)*0.1))
if WANT_INTERRUPT:
control.should_epoch_stop = True
control.should_training_stop = True
else:
current_loss = float(train_log.get('loss', 0.0))
current_epoch_int = int(float(train_log.get('epoch', 0.0)))
force_save = False
current_steps_offset = tracked.current_steps + non_serialized_params['checkpoint_offset']
folder_save = f"checkpoint-{current_steps_offset}"
# save if triggered by user
if non_serialized_params['save_checkpoint_now']:
force_save = True
non_serialized_params.update({"save_checkpoint_now": False})
print("\033[1;31;1mSave Checkpoint manually triggered.\033[0;37;0m")
folder_save = f"checkpoint-{current_steps_offset}-user"
patience = 3 # Set the number of consecutive steps for tracking stability
if gradient_accumulation_steps==1:
patience = 4
min_steps = ssteps10
# Save each time the loss is below the threshold
if current_loss < non_serialized_params['save_steps_under_loss'] and current_loss > 0 and state.global_step > min_steps:
current_stability = non_serialized_params['current_stability']
current_stability += 1
non_serialized_params.update({"current_stability": current_stability})
if current_stability >= patience:
current_stability = 0
non_serialized_params.update({"current_stability": current_stability})
current_loss_dec = round(current_loss, 2)
loss_str = f"{current_loss_dec:.2f}"
loss_str = loss_str.replace('.', '_')
new_save = (current_loss_dec-0.1) + 0.01
non_serialized_params.update({"save_steps_under_loss": new_save})
folder_save = f"checkpoint-{current_steps_offset}-loss-{loss_str}"
force_save = True
else:
# Reset stability if the loss goes above the threshold
non_serialized_params.update({"current_stability": 0})
# Save full epochs
if actual_save_steps>0 and current_epoch_int > non_serialized_params['save_epochs'] and state.global_step > min_steps:
current_epoch_offset = current_epoch_int
if non_serialized_params['epoch_offset'] > 0:
current_epoch_offset = current_epoch_int + round(non_serialized_params['epoch_offset'], 2)
ep_off_str = f"{current_epoch_offset}"
ep_off_str = ep_off_str.replace('.', '_')
folder_save = f"checkpoint-{current_steps_offset}-epoch-{ep_off_str}"
non_serialized_params.update({"save_epochs": current_epoch_int})
force_save = True
# save each actual_save_steps
if state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0:
folder_save = f"checkpoint-{current_steps_offset}"
force_save = True
if force_save:
lora_model.save_pretrained(f"{lora_file_path}/{folder_save}/")
print(f"\033[1;30;40mStep: {tracked.current_steps:6} \033[0;37;0m Saved: [{folder_save}]")
# Save log
with open(f"{lora_file_path}/{folder_save}/training_log.json", 'w', encoding='utf-8') as file:
json.dump(train_log, file, indent=2)
# == Save training prompt ==
with open(f"{lora_file_path}/{folder_save}/training_prompt.json", 'w', encoding='utf-8') as file:
json.dump(train_template, file, indent=2)
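The loss-gated saving in `on_step_begin` relies on a small stability counter: a checkpoint fires only after the loss stays under the threshold for `patience` consecutive steps, and the threshold then ratchets down so the next save requires a better loss. A stripped-down sketch of that pattern, separated from the Trainer machinery:

```python
class LossGate:
    """Fire once loss stays below `threshold` for `patience` consecutive
    updates, then lower the threshold for the next save."""

    def __init__(self, threshold: float, patience: int = 3):
        self.threshold = threshold
        self.patience = patience
        self.streak = 0

    def update(self, loss: float) -> bool:
        if 0 < loss < self.threshold:
            self.streak += 1
            if self.streak >= self.patience:
                self.streak = 0
                # Ratchet down, mirroring (round(loss, 2) - 0.1) + 0.01 above.
                self.threshold = (round(loss, 2) - 0.1) + 0.01
                return True
        else:
            # Any excursion above the threshold resets the streak.
            self.streak = 0
        return False
```

Keeping the gate as its own object makes the consecutive-step logic testable without running a trainer.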
def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
tracked.current_steps += 1
if WANT_INTERRUPT:
control.should_epoch_stop = True
control.should_training_stop = True
def on_log(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, logs, **kwargs):
train_log.update(logs)
current_steps_offset = tracked.current_steps + non_serialized_params['checkpoint_offset']
current_epoch_offset = train_log.get('epoch', 0.0) + non_serialized_params['epoch_offset']
train_log.update({"current_steps": tracked.current_steps})
train_log.update({"current_steps_adjusted": current_steps_offset})
train_log.update({"epoch_adjusted": current_epoch_offset})
if WANT_INTERRUPT:
print("\033[1;31;1mInterrupted by user\033[0;37;0m")
if non_serialized_params['checkpoint_offset']>0:
print(f"\033[1;30;40mStep: {tracked.current_steps:6} [+{non_serialized_params['checkpoint_offset']}] \033[0;37;0m", end='')
else:
print(f"\033[1;30;40mStep: {tracked.current_steps:6} \033[0;37;0m", end='')
graphentry = {
'current_steps': int(train_log.get('current_steps_adjusted',0)),
'loss': float(train_log.get('loss', 0.0)),
'learning_rate': float(train_log.get('learning_rate', 0.0)),
'epoch': float(train_log.get('epoch_adjusted', 0.0))
}
cur_loss = float(train_log.get('loss', 0.0))
cur_lr = float(train_log.get('learning_rate', 0.0))
cur_epoch = float(train_log.get('epoch', 0.0))
if len(statistics['loss']) == 1:
first_epoch = statistics['loss'][0]['epoch']
first_value = statistics['loss'][0]['value']
if first_value == 0:
statistics['loss'] = []
statistics['loss'].append({'epoch': cur_epoch, 'value': cur_loss})
statistics['lr'].append({'epoch': cur_epoch, 'value': cur_lr})
# Add the entry to the continuous log
train_log_graph.append(graphentry)
# Save the graph log for now, we can later generate full graph
with open(f"{lora_file_path}/training_graph.json", 'w') as file:
json.dump(train_log_graph, file, indent=4)
if 'loss' in logs:
loss = float(logs['loss'])
if loss <= stop_at_loss:
control.should_epoch_stop = True
control.should_training_stop = True
print(f"{RED}Stop Loss {stop_at_loss} reached.{RESET}")
# FPHAM SAMPLE REQ Transformers error handling
gradient_accumulation_max = int(train_data.num_rows)//micro_batch_size
if gradient_accumulation_max < gradient_accumulation_steps:
print(f"{RED}WARNING:{RESET} Current gradient accumulation is {RED}too high{RESET} for the amount of training data.")
print(f"Gradient accumulation: {gradient_accumulation_steps} should be less than: {gradient_accumulation_max}. {RED}This could crash Accelerate/Transformers{RESET}")
#min_batchSize = sample_req*micro_batch_size
print(f"Preferable fix: {RED}Increase the size of dataset{RESET}")
print(f"... or Decrease Gradient Accumulation {RED}{gradient_accumulation_steps}{RESET} to below {GREEN}{gradient_accumulation_max}{RESET}")
gradient_accumulation_steps = max(1,gradient_accumulation_max-1)
print(f"Last resort fix for this run: Lowering Gradient accumulation to {GREEN}{gradient_accumulation_steps}{RESET} [Good luck]")
else:
print(f"Data Size Check: Gradient accumulation: {YELLOW}{gradient_accumulation_steps}{RESET} <= Blocks/Batch {gradient_accumulation_max} ... {GREEN}[OK]{RESET}")
#END OF FPHAM SAMPLE REQ
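The clamp above can be factored into a pure function, which makes the boundary condition easy to test. A sketch of the same last-resort fix:

```python
def clamp_gradient_accumulation(grad_accum: int, num_rows: int, micro_batch_size: int) -> int:
    """Keep gradient accumulation below the number of available batches.

    With num_rows training blocks and micro_batch_size blocks per
    micro-batch, there are only num_rows // micro_batch_size batches;
    accumulating over more than that can crash the Trainer.
    """
    max_accum = num_rows // micro_batch_size
    if max_accum < grad_accum:
        # Same fix as above: drop just below the limit, floored at 1.
        return max(1, max_accum - 1)
    return grad_accum
```

A healthy configuration passes through unchanged; only oversized accumulation is lowered.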
# FPHAM Custom Scheduler ==
custom_scheduller = False
lr_scheduler_type_arg = lr_scheduler_type
if lr_scheduler_type == 'FP_low_epoch_annealing':
custom_scheduller = True
lr_scheduler_type_arg = 'cosine'
elif lr_scheduler_type == 'FP_half_time_annealing':
custom_scheduller = True
lr_scheduler_type_arg = 'constant'
elif lr_scheduler_type =='FP_raise_fall_creative':
custom_scheduller = True
lr_scheduler_type_arg = 'constant_with_warmup'
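Each custom FP schedule above rides on top of a stock scheduler type passed to `TrainingArguments`; the branching can be kept in one table. A sketch mirroring the three branches:

```python
# Custom FP schedules and the stock scheduler each one is built on.
CUSTOM_SCHEDULER_BASE = {
    "FP_low_epoch_annealing": "cosine",
    "FP_half_time_annealing": "constant",
    "FP_raise_fall_creative": "constant_with_warmup",
}

def resolve_scheduler(name: str) -> tuple[bool, str]:
    """Return (is_custom, scheduler_type_arg) for TrainingArguments."""
    if name in CUSTOM_SCHEDULER_BASE:
        return True, CUSTOM_SCHEDULER_BASE[name]
    return False, name
```

Stock scheduler names pass through untouched, so the function is a drop-in for the if/elif chain.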
#gradient_checkpointing=True
args=transformers.TrainingArguments(
report_to=report_to if report_to != "None" else None,
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
warmup_ratio = warmup_ratio,
num_train_epochs=epochs,
learning_rate=actual_lr,
fp16=False if shared.args.cpu else True,
optim=optimizer,
logging_steps=1,
evaluation_strategy="steps" if eval_data is not None else "no",
eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
save_strategy="steps" if eval_data is not None else "no",
output_dir=lora_file_path,
lr_scheduler_type=lr_scheduler_type_arg,
load_best_model_at_end=eval_data is not None,
# TODO: Enable multi-device support
ddp_find_unused_parameters=None,
no_cuda=shared.args.cpu,
)
if custom_scheduller:
trainer = FPSchedulerTrainer(
neftune_noise_alpha=neft_noise_alpha,
model=lora_model,
train_dataset=train_data,
eval_dataset=eval_data,
args=args,
data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
callbacks=list([Callbacks()])
)
elif neft_noise_alpha > 0: | trainer = FPNEFtuneTrainer( | 1 | 2023-12-20 14:13:38+00:00 | 12k |
foocker/Bert-VITS2-Faster | text/chinese.py | [
{
"identifier": "punctuation",
"path": "text/symbols.py",
"snippet": ""
},
{
"identifier": "ToneSandhi",
"path": "text/tone_sandhi.py",
"snippet": "class ToneSandhi:\n def __init__(self):\n self.must_neural_tone_words = {\n \"麻烦\",\n \"麻利\",\n \... | import os
import re
import cn2an
import sys
import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style
from text.symbols import punctuation
from text.tone_sandhi import ToneSandhi
from text import chinese_bert
from text.chinese_bert import get_bert_feature | 7,688 |
sys.path.insert(0,"/data/stable-diffusion-tritonserver/Bert-VITS2")
current_file_path = os.path.dirname(__file__)
pinyin_to_symbol_map = {
line.split("\t")[0]: line.strip().split("\t")[1]
for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines()
}
rep_map = {
":": ",",
";": ",",
",": ",",
"。": ".",
"!": "!",
"?": "?",
"\n": ".",
"·": ",",
"、": ",",
"...": "…",
"$": ".",
"“": "'",
"”": "'",
"‘": "'",
"’": "'",
"(": "'",
")": "'",
"(": "'",
")": "'",
"《": "'",
"》": "'",
"【": "'",
"】": "'",
"[": "'",
"]": "'",
"—": "-",
"~": "-",
"~": "-",
"「": "'",
"」": "'",
}
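The `rep_map` table drives a single compiled regex: every key is escaped, joined into one alternation, and each match is replaced via a dict lookup in one pass. A self-contained sketch of that technique with a tiny illustrative map:

```python
import re

def make_replacer(mapping: dict[str, str]):
    # Escape each key so regex metacharacters ("$", "(", "[", ...) match
    # literally, then replace every hit through a dict lookup in one pass.
    pattern = re.compile("|".join(re.escape(key) for key in mapping))
    return lambda text: pattern.sub(lambda m: mapping[m.group()], text)

# Tiny subset of rep_map for illustration.
normalize = make_replacer({"。": ".", "!": "!", "$": "."})
```

One caveat: when keys share a prefix (e.g. `"..."` vs a hypothetical `"."` key), the longer key must come first, since the alternation tries branches left to right.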
tone_modifier = ToneSandhi()
def replace_punctuation(text):
text = text.replace("嗯", "恩").replace("呣", "母")
pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys()))
replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
replaced_text = re.sub(
|
sinoyou/nelf-pro | nerfstudio/cameras/cameras.py | [
{
"identifier": "camera_utils",
"path": "nerfstudio/cameras/camera_utils.py",
"snippet": "_EPS = np.finfo(float).eps * 4.0\n M = np.array(matrix, dtype=np.float64, copy=False)[:4, :4]\n K = np.array(\n [\n [m00 - m11 - m22, 0.0, 0.0, 0.0],\n [m01 + m10,... | import base64
import math
import cv2
import torch
import torchvision
import numpy as np
import nerfstudio.utils.poses as pose_utils
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List, Optional, Tuple, Union
from rich.console import Console
from torch.nn.functional import normalize
from torchtyping import TensorType
from nerfstudio.cameras import camera_utils
from nerfstudio.cameras.probes import Probes
from nerfstudio.cameras.rays import RayBundle
from nerfstudio.utils.tensor_dataclass import TensorDataclass
from nerfstudio.utils.plotly_utils_nelfpro import plot_camera_components, plotly_camera_scale | 9,979 | camera_type
), f"camera_type tensor must be of type int, not: {camera_type.dtype}"
camera_type = camera_type.to(self.device)
if camera_type.ndim == 0 or camera_type.shape[-1] != 1:
camera_type = camera_type.unsqueeze(-1)
# assert torch.all(
# camera_type.view(-1)[0] == camera_type
# ), "Batched cameras of different camera_types will be allowed in the future."
else:
raise ValueError(
'Invalid camera_type. Must be CameraType, List[CameraType], int, or torch.Tensor["num_cameras"]. \
Received: '
+ str(type(camera_type))
)
return camera_type
def _init_get_height_width(
self,
h_w: Union[TensorType["batch_hws":..., 1], TensorType["batch_hws":...], int, None],
c_x_y: TensorType["batch_cxys":...],
) -> TensorType["num_cameras":..., 1]:
"""
Parses the __init__() argument for height or width
Height/Width Calculation:
If int, first go to tensor and then broadcast to all cameras
If tensor, broadcast to all cameras
If none, use cx or cy * 2
Else raise error
Args:
h_w: height or width argument from __init__()
c_x_y: cx or cy for when h_w == None
"""
if isinstance(h_w, int):
h_w = torch.Tensor([h_w]).to(torch.int64).to(self.device)
elif isinstance(h_w, torch.Tensor):
assert not torch.is_floating_point(h_w), f"height and width tensor must be of type int, not: {h_w.dtype}"
h_w = h_w.to(torch.int64).to(self.device)
if h_w.ndim == 0 or h_w.shape[-1] != 1:
h_w = h_w.unsqueeze(-1)
# assert torch.all(h_w == h_w.view(-1)[0]), "Batched cameras of different h, w will be allowed in the future."
elif h_w is None:
h_w = torch.Tensor((c_x_y * 2).to(torch.int64).to(self.device))
else:
raise ValueError("Height must be an int, tensor, or None, received: " + str(type(h_w)))
return h_w
def _init_get_times(self, times):
if times is None:
times = None
elif isinstance(times, torch.Tensor):
if times.ndim == 0 or times.shape[-1] != 1:
times = times.unsqueeze(-1).to(self.device)
else:
raise ValueError(f"times must be None or a tensor, got {type(times)}")
return times
@property
def device(self):
"""Returns the device that the camera is on."""
return self.camera_to_worlds.device
@property
def image_height(self) -> TensorType["num_cameras":..., 1]:
"""Returns the height of the images."""
return self.height
@property
def image_width(self) -> TensorType["num_cameras":..., 1]:
"""Returns the width of the images."""
return self.width
@property
def is_jagged(self):
"""
Returns whether or not the cameras are "jagged" (i.e. the height and widths are different, meaning that
you cannot concatenate the image coordinate maps together)
"""
h_jagged = not torch.all(self.height == self.height.view(-1)[0])
w_jagged = not torch.all(self.width == self.width.view(-1)[0])
return h_jagged or w_jagged
def get_image_coords(
self, pixel_offset: float = 0.5, index: Optional[Tuple] = None
) -> TensorType["height", "width", 2]:
"""This gets the image coordinates of one of the cameras in this object.
If no index is specified, it will return the maximum possible sized height / width image coordinate map,
by looking at the maximum height and width of all the cameras in this object.
Args:
pixel_offset: Offset for each pixel. Defaults to center of pixel (0.5)
index: Tuple of indices into the batch dimensions of the camera. Defaults to None, which returns the 0th
flattened camera
Returns:
Grid of image coordinates.
"""
if index is None:
image_height = torch.max(self.image_height.view(-1))
image_width = torch.max(self.image_width.view(-1))
image_coords = torch.meshgrid(torch.arange(image_height), torch.arange(image_width), indexing="ij")
image_coords = torch.stack(image_coords, dim=-1) + pixel_offset # stored as (y, x) coordinates
else:
image_height = self.image_height[index].item()
image_width = self.image_width[index].item()
image_coords = torch.meshgrid(torch.arange(image_height), torch.arange(image_width), indexing="ij")
image_coords = torch.stack(image_coords, dim=-1) + pixel_offset # stored as (y, x) coordinates
return image_coords
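`get_image_coords` builds an (H, W, 2) grid of (y, x) pixel centers: `indexing="ij"` meshgrid plus a half-pixel offset. A dependency-free sketch of the same construction for reference (not the class method itself):

```python
def pixel_centers(height: int, width: int, pixel_offset: float = 0.5):
    # Row-major ("ij") grid of (y, x) coordinates, offset to pixel centers.
    return [
        [(row + pixel_offset, col + pixel_offset) for col in range(width)]
        for row in range(height)
    ]
```

The 0.5 offset means rays are cast through the middle of each pixel rather than its top-left corner.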
def generate_rays( # pylint: disable=too-many-statements
self,
camera_indices: Union[TensorType["num_rays":..., "num_cameras_batch_dims"], int],
coords: Optional[TensorType["num_rays":..., 2]] = None,
camera_opt_to_camera: Optional[TensorType["num_rays":..., 3, 4]] = None,
distortion_params_delta: Optional[TensorType["num_rays":..., 6]] = None,
keep_shape: Optional[bool] = None,
disable_distortion: bool = False,
| # Copyright 2022 The Nerfstudio Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Camera Models
"""
CONSOLE = Console(width=120)
class CameraType(Enum):
"""Supported camera types."""
PERSPECTIVE = auto()
FISHEYE = auto()
EQUIRECTANGULAR = auto()
CAMERA_MODEL_TO_TYPE = {
"SIMPLE_PINHOLE": CameraType.PERSPECTIVE,
"PINHOLE": CameraType.PERSPECTIVE,
"SIMPLE_RADIAL": CameraType.PERSPECTIVE,
"RADIAL": CameraType.PERSPECTIVE,
"OPENCV": CameraType.PERSPECTIVE,
"OPENCV_FISHEYE": CameraType.FISHEYE,
"EQUIRECTANGULAR": CameraType.EQUIRECTANGULAR,
}
@dataclass(init=False)
class Cameras(TensorDataclass):
"""Dataparser outputs for the image dataset and the ray generator.
Note: currently only supports cameras with the same principal points and types. The reason we type
the focal lengths, principal points, and image sizes as tensors is to allow for batched cameras
down the line in cases where your batches of camera data don't come from the same cameras.
If a single value is provided, it is broadcasted to all cameras.
Args:
camera_to_worlds: Camera to world matrices. Tensor of per-image c2w matrices, in [R | t] format
fx: Focal length x
fy: Focal length y
cx: Principal point x
cy: Principal point y
width: Image width
height: Image height
distortion_params: OpenCV 6 radial distortion coefficients
camera_type: Type of camera model. This will be an int corresponding to the CameraType enum.
times: Timestamps for each camera
probe_config: dict config containing the generated probe information (core and basis)
"""
camera_to_worlds: TensorType["num_cameras":..., 3, 4]
fx: TensorType["num_cameras":..., 1]
fy: TensorType["num_cameras":..., 1]
cx: TensorType["num_cameras":..., 1]
cy: TensorType["num_cameras":..., 1]
width: TensorType["num_cameras":..., 1]
height: TensorType["num_cameras":..., 1]
distortion_params: Optional[TensorType["num_cameras":..., 6]]
camera_type: TensorType["num_cameras":..., 1]
times: Optional[TensorType["num_cameras":..., 1]]
image_filenames: Optional[List[str]]
probe_config: Optional[list]
def __init__(
self,
camera_to_worlds: TensorType["batch_c2ws":..., 3, 4],
fx: Union[TensorType["batch_fxs":..., 1], float],
fy: Union[TensorType["batch_fys":..., 1], float],
cx: Union[TensorType["batch_cxs":..., 1], float],
cy: Union[TensorType["batch_cys":..., 1], float],
width: Optional[Union[TensorType["batch_ws":..., 1], int]] = None,
height: Optional[Union[TensorType["batch_hs":..., 1], int]] = None,
distortion_params: Optional[TensorType["batch_dist_params":..., 6]] = None,
camera_type: Optional[
Union[
TensorType["batch_cam_types":..., 1],
int,
List[CameraType],
CameraType,
]
] = CameraType.PERSPECTIVE,
times: Optional[TensorType["num_cameras"]] = None,
image_filenames: Optional[List[str]] = None,
probe_config: Optional[list] = None
):
"""Initializes the Cameras object.
Note on Input Tensor Dimensions: All of these tensors have items of dimensions TensorType[3, 4]
(in the case of the c2w matrices), TensorType[6] (in the case of distortion params), or
TensorType[1] (in the case of the rest of the elements). The dimensions before that are
considered the batch dimension of that tensor (batch_c2ws, batch_fxs, etc.). We will broadcast
all the tensors to be the same batch dimension. This means you can use any combination of the
input types in the function signature and it won't break. Your batch size for all tensors
must be broadcastable to the same size, and the resulting number of batch dimensions will be
the batch dimension with the largest number of dimensions.
"""
# This will notify the tensordataclass that we have a field with more than 1 dimension
self._field_custom_dimensions = {"camera_to_worlds": 2}
self.camera_to_worlds = camera_to_worlds
# fx fy calculation
self.fx = self._init_get_fc_xy(fx, "fx") # @dataclass's post_init will take care of broadcasting
self.fy = self._init_get_fc_xy(fy, "fy") # @dataclass's post_init will take care of broadcasting
# cx cy calculation
self.cx = self._init_get_fc_xy(cx, "cx") # @dataclass's post_init will take care of broadcasting
self.cy = self._init_get_fc_xy(cy, "cy") # @dataclass's post_init will take care of broadcasting
# Distortion Params Calculation:
self.distortion_params = distortion_params # @dataclass's post_init will take care of broadcasting
# @dataclass's post_init will take care of broadcasting
self.height = self._init_get_height_width(height, self.cy)
self.width = self._init_get_height_width(width, self.cx)
self.camera_type = self._init_get_camera_type(camera_type)
self.times = self._init_get_times(times)
self.image_filenames = image_filenames
self.probe_config = probe_config
if self.probe_config is not None:
self.probe = Probes(self.camera_to_worlds, self.probe_config)
else:
self.probe = None
self.__post_init__() # This will do the dataclass post_init and broadcast all the tensors
def _init_get_fc_xy(self, fc_xy, name):
"""
Parses the input focal length / principle point x or y and returns a tensor of the correct shape
Only needs to make sure that we a 1 in the last dimension if it is a tensor. If it is a float, we
just need to make it into a tensor and it will be broadcasted later in the __post_init__ function.
Args:
fc_xy: The focal length / principle point x or y
name: The name of the variable. Used for error messages
"""
if isinstance(fc_xy, float):
fc_xy = torch.Tensor([fc_xy], device=self.device)
elif isinstance(fc_xy, torch.Tensor):
if fc_xy.ndim == 0 or fc_xy.shape[-1] != 1:
fc_xy = fc_xy.unsqueeze(-1)
fc_xy = fc_xy.to(self.device)
else:
raise ValueError(f"{name} must be a float or tensor, got {type(fc_xy)}")
return fc_xy
def _init_get_camera_type(
self,
camera_type: Union[
TensorType["batch_cam_types":..., 1], TensorType["batch_cam_types":...], int, List[CameraType], CameraType
],
) -> TensorType["num_cameras":..., 1]:
"""
Parses the __init__() argument camera_type
Camera Type Calculation:
If CameraType, convert to int and then to tensor, then broadcast to all cameras
If List of CameraTypes, convert to ints and then to tensor, then broadcast to all cameras
If int, first go to tensor and then broadcast to all cameras
If tensor, broadcast to all cameras
Args:
camera_type: camera_type argument from __init__()
"""
if isinstance(camera_type, CameraType):
camera_type = torch.tensor([camera_type.value], device=self.device)
elif isinstance(camera_type, List) and isinstance(camera_type[0], CameraType):
camera_type = torch.tensor([[c.value] for c in camera_type], device=self.device)
elif isinstance(camera_type, int):
camera_type = torch.tensor([camera_type], device=self.device)
elif isinstance(camera_type, torch.Tensor):
assert not torch.is_floating_point(
camera_type
), f"camera_type tensor must be of type int, not: {camera_type.dtype}"
camera_type = camera_type.to(self.device)
if camera_type.ndim == 0 or camera_type.shape[-1] != 1:
camera_type = camera_type.unsqueeze(-1)
# assert torch.all(
# camera_type.view(-1)[0] == camera_type
# ), "Batched cameras of different camera_types will be allowed in the future."
else:
raise ValueError(
'Invalid camera_type. Must be CameraType, List[CameraType], int, or torch.Tensor["num_cameras"]. \
Received: '
+ str(type(camera_type))
)
return camera_type
def _init_get_height_width(
self,
h_w: Union[TensorType["batch_hws":..., 1], TensorType["batch_hws":...], int, None],
c_x_y: TensorType["batch_cxys":...],
) -> TensorType["num_cameras":..., 1]:
"""
Parses the __init__() argument for height or width
Height/Width Calculation:
If int, first go to tensor and then broadcast to all cameras
If tensor, broadcast to all cameras
If none, use cx or cy * 2
Else raise error
Args:
h_w: height or width argument from __init__()
c_x_y: cx or cy for when h_w == None
"""
if isinstance(h_w, int):
h_w = torch.Tensor([h_w]).to(torch.int64).to(self.device)
elif isinstance(h_w, torch.Tensor):
assert not torch.is_floating_point(h_w), f"height and width tensor must be of type int, not: {h_w.dtype}"
h_w = h_w.to(torch.int64).to(self.device)
if h_w.ndim == 0 or h_w.shape[-1] != 1:
h_w = h_w.unsqueeze(-1)
# assert torch.all(h_w == h_w.view(-1)[0]), "Batched cameras of different h, w will be allowed in the future."
elif h_w is None:
h_w = torch.Tensor((c_x_y * 2).to(torch.int64).to(self.device))
else:
raise ValueError("Height must be an int, tensor, or None, received: " + str(type(h_w)))
return h_w
def _init_get_times(self, times):
if times is None:
times = None
elif isinstance(times, torch.Tensor):
if times.ndim == 0 or times.shape[-1] != 1:
times = times.unsqueeze(-1).to(self.device)
else:
raise ValueError(f"times must be None or a tensor, got {type(times)}")
return times
@property
def device(self):
"""Returns the device that the camera is on."""
return self.camera_to_worlds.device
@property
def image_height(self) -> TensorType["num_cameras":..., 1]:
"""Returns the height of the images."""
return self.height
@property
def image_width(self) -> TensorType["num_cameras":..., 1]:
"""Returns the width of the images."""
return self.width
@property
def is_jagged(self):
"""
Returns whether or not the cameras are "jagged" (i.e. the height and widths are different, meaning that
you cannot concatenate the image coordinate maps together)
"""
h_jagged = not torch.all(self.height == self.height.view(-1)[0])
w_jagged = not torch.all(self.width == self.width.view(-1)[0])
return h_jagged or w_jagged
def get_image_coords(
self, pixel_offset: float = 0.5, index: Optional[Tuple] = None
) -> TensorType["height", "width", 2]:
"""This gets the image coordinates of one of the cameras in this object.
If no index is specified, it will return the maximum possible sized height / width image coordinate map,
by looking at the maximum height and width of all the cameras in this object.
Args:
pixel_offset: Offset for each pixel. Defaults to center of pixel (0.5)
index: Tuple of indices into the batch dimensions of the camera. Defaults to None, which returns the 0th
flattened camera
Returns:
Grid of image coordinates.
"""
if index is None:
image_height = torch.max(self.image_height.view(-1))
image_width = torch.max(self.image_width.view(-1))
image_coords = torch.meshgrid(torch.arange(image_height), torch.arange(image_width), indexing="ij")
image_coords = torch.stack(image_coords, dim=-1) + pixel_offset # stored as (y, x) coordinates
else:
image_height = self.image_height[index].item()
image_width = self.image_width[index].item()
image_coords = torch.meshgrid(torch.arange(image_height), torch.arange(image_width), indexing="ij")
image_coords = torch.stack(image_coords, dim=-1) + pixel_offset # stored as (y, x) coordinates
return image_coords
def generate_rays( # pylint: disable=too-many-statements
self,
camera_indices: Union[TensorType["num_rays":..., "num_cameras_batch_dims"], int],
coords: Optional[TensorType["num_rays":..., 2]] = None,
camera_opt_to_camera: Optional[TensorType["num_rays":..., 3, 4]] = None,
distortion_params_delta: Optional[TensorType["num_rays":..., 6]] = None,
keep_shape: Optional[bool] = None,
disable_distortion: bool = False, | ) -> RayBundle: | 2 | 2023-12-15 20:07:22+00:00 | 12k |
Infleqtion/qLDPC | qldpc/codes.py | [
{
"identifier": "abstract",
"path": "qldpc/abstract.py",
"snippet": "DEFAULT_FIELD_ORDER = 2\nclass GroupMember(comb.Permutation):\nclass Group:\nclass Element:\nclass Protograph:\nclass TrivialGroup(Group):\nclass CyclicGroup(Group):\nclass DihedralGroup(Group):\nclass QuaternionGroup(Group):\n def ... | import abc
import functools
import itertools
import cachetools
import galois
import ldpc.mod2
import networkx as nx
import numpy as np
import numpy.typing as npt
import qldpc
from collections.abc import Collection, Iterable, Sequence
from typing import TYPE_CHECKING, Literal
from qldpc import abstract
from qldpc.objects import CayleyComplex, Node, Pauli, QuditOperator
from typing_extensions import Self | 7,713 | Here:
- n is the number of data bits
- k is the number of encoded ("logical") bits
- d is the code distance
"""
return self.num_bits, self.dimension, self.get_distance()
@classmethod
def random(cls, bits: int, checks: int, field: int | None = None) -> ClassicalCode:
"""Construct a random classical code with the given number of bits and nontrivial checks."""
if field is None:
field = DEFAULT_FIELD_ORDER
code_field = galois.GF(field)
rows, cols = checks, bits
matrix = code_field.Random((rows, cols))
for row in range(matrix.shape[0]):
if not matrix[row, :].any():
matrix[row, np.random.randint(cols)] = code_field.Random(low=1) # pragma: no cover
for col in range(matrix.shape[1]):
if not matrix[:, col].any():
matrix[np.random.randint(rows), col] = code_field.Random(low=1) # pragma: no cover
return ClassicalCode(matrix, field)
@classmethod
def repetition(cls, num_bits: int, field: int | None = None) -> ClassicalCode:
"""Construct a repetition code on the given number of bits."""
minus_one = galois.GF(field or DEFAULT_FIELD_ORDER).characteristic - 1
matrix = np.zeros((num_bits - 1, num_bits), dtype=int)
for row in range(num_bits - 1):
matrix[row, row] = 1
matrix[row, row + 1] = minus_one
return ClassicalCode(matrix, field)
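Over GF(2), `minus_one` is 1, so each check row of the repetition code is simply a pair of adjacent ones enforcing bit i == bit i+1. A small sketch of the binary case:

```python
def repetition_parity_checks(num_bits: int):
    # GF(2) repetition code: check i enforces bit i == bit i+1.
    matrix = [[0] * num_bits for _ in range(num_bits - 1)]
    for row in range(num_bits - 1):
        matrix[row][row] = 1
        matrix[row][row + 1] = 1  # -1 == 1 mod 2
    return matrix
```

For four bits this yields three checks chaining the bits together, so the only codewords are 0000 and 1111.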
@classmethod
def ring(cls, num_bits: int, field: int | None = None) -> ClassicalCode:
"""Construct a repetition code with periodic boundary conditions."""
minus_one = galois.GF(field or DEFAULT_FIELD_ORDER).characteristic - 1
matrix = np.zeros((num_bits, num_bits), dtype=int)
for row in range(num_bits):
matrix[row, row] = 1
matrix[row, (row + 1) % num_bits] = minus_one
return ClassicalCode(matrix, field)
@classmethod
def hamming(cls, rank: int, field: int | None = None) -> ClassicalCode:
"""Construct a hamming code of a given rank."""
field = field or DEFAULT_FIELD_ORDER
if field == 2:
# parity check matrix: columns = all nonzero bitstrings
bitstrings = list(itertools.product([0, 1], repeat=rank))
return ClassicalCode(np.array(bitstrings[1:]).T)
# More generally, columns = maximal set of nonzero, linearly independent strings.
# This is achieved by collecting together all strings whose first nonzero element is a 1.
strings = [
(0,) * top_row + (1,) + rest
for top_row in range(rank - 1, -1, -1)
for rest in itertools.product(range(field), repeat=rank - top_row - 1)
]
return ClassicalCode(np.array(strings).T, field=field)
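For GF(2) the construction above is exactly "columns = all nonzero length-`rank` bitstrings", giving a [2^r - 1, 2^r - 1 - r] Hamming code. A sketch of the column set for rank 3:

```python
import itertools

def hamming_parity_columns(rank: int):
    # All nonzero bitstrings of the given rank, one per parity-check column.
    return [bits for bits in itertools.product([0, 1], repeat=rank) if any(bits)]

columns = hamming_parity_columns(3)  # 7 columns for the [7, 4] Hamming code
```

Each column being a distinct nonzero syndrome is what lets the code locate any single bit error.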
# TODO: add more codes, particularly from code families that are useful for good quantum codes
# see https://mhostetter.github.io/galois/latest/api/#forward-error-correction
# TODO:
# - add method to convert a parity check matrix into standard form
# - see https://arxiv.org/abs/1101.1519
# - one method to compute "blocks" of standard form, one to return the matrix itself
# - add is_CSS method to figure out whether this is a CSS Code
# - see https://quantumcomputing.stackexchange.com/questions/15432/
# - also compute and store sub-codes, if CSS
# - also add QuditCode.to_CSS() -> CSSCode
class QuditCode(AbstractCode):
"""Quantum stabilizer code for Galois qudits, with dimension q = p^m for prime p and integer m.
The parity check matrix of a QuditCode has dimensions (num_checks, 2 * num_qudits), and can be
written as a block matrix in the form H = [H_x|H_z]. Each block has num_qudits columns.
The entries H_x[c, d] = r_x and H_z[c, d] = r_z iff check c addresses qudit d with the operator
X(r_x) * Z(r_z), where r_x, r_z range over the base field, and X(r), Z(r) are generalized Pauli
operators. Specifically:
- X(r) = sum_{j=0}^{q-1} |j+r><j| is a shift operator, and
- Z(r) = sum_{j=0}^{q-1} w^{j r} |j><j| is a phase operator, with w = exp(2 pi i / q).
Warning: here j, r, s, etc. not integers, but elements of the Galois field GF(q), which has
different rules for addition and multiplication when q is not a prime number.
Helpful lecture by Gottesman: https://www.youtube.com/watch?v=JWg4zrNAF-g
"""
@property
def num_checks(self) -> int:
"""Number of parity checks (stabilizers) in this code."""
return self.matrix.shape[0]
@property
def num_qudits(self) -> int:
"""Number of data qudits in this code."""
return self.matrix.shape[1] // 2
@property
def num_qubits(self) -> int:
"""Number of data qubits in this code."""
self._assert_qubit_code()
return self.num_qudits
def _assert_qubit_code(self) -> None:
if self._field_order != 2:
raise ValueError("Attempted to call a qubit-only method with a non-qubit code.")
@classmethod
def matrix_to_graph(cls, matrix: npt.NDArray[np.int_] | Sequence[Sequence[int]]) -> nx.DiGraph:
"""Convert a parity check matrix into a Tanner graph."""
graph = nx.DiGraph()
matrix = np.reshape(matrix, (len(matrix), 2, -1))
for row, col_xz, col in zip(*np.nonzero(matrix)):
node_check = Node(index=int(row), is_data=False)
node_qudit = Node(index=int(col), is_data=True)
graph.add_edge(node_check, node_qudit)
| """Error correction code constructions
Copyright 2023 The qLDPC Authors and Infleqtion Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import annotations
if TYPE_CHECKING:
DEFAULT_FIELD_ORDER = abstract.DEFAULT_FIELD_ORDER
################################################################################
# template error correction code classes
class AbstractCode(abc.ABC):
"""Template class for error-correcting codes."""
_field_order: int
def __init__(
self,
matrix: Self | npt.NDArray[np.int_] | Sequence[Sequence[int]],
field: int | None = None,
) -> None:
"""Construct a code from a parity check matrix over a finite field.
The base field is taken to be F_2 by default.
"""
self._matrix: galois.FieldArray
if isinstance(matrix, type(self)):
self._field_order = matrix.field.order
if not (field is None or field == self._field_order):
raise ValueError(
f"Field argument {field} is inconsistent with the given code, which is defined"
f" over F_{self._field_order}"
)
self._matrix = matrix.matrix
elif isinstance(matrix, galois.FieldArray):
self._field_order = type(matrix).order
self._matrix = matrix
else:
self._field_order = field or DEFAULT_FIELD_ORDER
self._matrix = self.field(np.array(matrix))
@property
def field(self) -> type[galois.FieldArray]:
"""Base field over which this code is defined."""
return galois.GF(self._field_order)
@property
def matrix(self) -> galois.FieldArray:
"""Parity check matrix of this code."""
return self._matrix
@functools.cached_property
def graph(self) -> nx.DiGraph:
"""Tanner graph of this code."""
return self.matrix_to_graph(self.matrix)
@classmethod
@abc.abstractmethod
def matrix_to_graph(cls, matrix: npt.NDArray[np.int_] | Sequence[Sequence[int]]) -> nx.DiGraph:
"""Convert a parity check matrix into a Tanner graph."""
@classmethod
@abc.abstractmethod
def graph_to_matrix(cls, graph: nx.DiGraph) -> galois.FieldArray:
"""Convert a Tanner graph into a parity check matrix."""
class ClassicalCode(AbstractCode):
"""Classical linear error-correcting code over a finite field F_q.
A classical code C = {x} is a set of vectors x (with entries in F_q) called code words.
We consider only linear codes, for which any linear combination of code words is also a code word.
Operationally, we define a classical code by a parity check matrix H with dimensions
(num_checks, num_bits). Each row of H represents a linear constraint (a "check") that code
words must satisfy. A vector x is a code word iff H @ x = 0.
"""
def __contains__(self, word: npt.NDArray[np.int_] | Sequence[int]) -> bool:
return not np.any(self.matrix @ self.field(word))
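The membership test above reduces to checking H @ x = 0 over the base field. A minimal sketch over GF(2) using plain NumPy (not the galois field arrays used by ClassicalCode):

```python
import numpy as np

# Parity check matrix of the 3-bit repetition code over GF(2):
# both checks enforce equality of adjacent bits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def is_codeword(H, x):
    """A word x is a code word iff every check is satisfied mod 2."""
    return not np.any((H @ x) % 2)

assert is_codeword(H, np.array([0, 0, 0]))
assert is_codeword(H, np.array([1, 1, 1]))
assert not is_codeword(H, np.array([1, 0, 1]))
```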
@classmethod
def matrix_to_graph(cls, matrix: npt.NDArray[np.int_] | Sequence[Sequence[int]]) -> nx.DiGraph:
"""Convert a parity check matrix H into a Tanner graph.
The Tanner graph is a bipartite graph with (num_checks, num_bits) vertices, respectively
identified with the checks and bits of the code. The check vertex c and the bit vertex b
share an edge iff c addresses b; that is, edge (c, b) is in the graph iff H[c, b] != 0.
"""
graph = nx.DiGraph()
for row, col in zip(*np.nonzero(matrix)):
node_c = Node(index=int(row), is_data=False)
node_d = Node(index=int(col), is_data=True)
graph.add_edge(node_c, node_d, val=matrix[row][col])
if isinstance(matrix, galois.FieldArray):
graph.order = type(matrix).order
return graph
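The edge rule above (edge (c, b) is in the graph iff H[c, b] != 0) can be sketched without networkx, as a plain set of (check, bit) index pairs:

```python
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])

# check c is connected to bit b iff H[c, b] != 0
edges = {(int(c), int(b)) for c, b in zip(*np.nonzero(H))}

assert edges == {(0, 0), (0, 1), (1, 1), (1, 2)}
```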
@classmethod
def graph_to_matrix(cls, graph: nx.DiGraph) -> galois.FieldArray:
"""Convert a Tanner graph into a parity check matrix."""
num_bits = sum(1 for node in graph.nodes() if node.is_data)
num_checks = len(graph.nodes()) - num_bits
field = graph.order if hasattr(graph, "order") else DEFAULT_FIELD_ORDER
matrix = galois.GF(field).Zeros((num_checks, num_bits))
for node_c, node_b, data in graph.edges(data=True):
matrix[node_c.index, node_b.index] = data.get("val", 1)
return matrix
@functools.cached_property
def generator(self) -> galois.FieldArray:
"""Generator of this code: a matrix whose rows for a basis for code words."""
return self.matrix.null_space()
def words(self) -> galois.FieldArray:
"""Code words of this code."""
vectors = itertools.product(self.field.elements, repeat=self.generator.shape[0])
return self.field(list(vectors)) @ self.generator
def get_random_word(self) -> galois.FieldArray:
"""Random code word: a sum all generators with random field coefficients."""
return self.field.Random(self.generator.shape[0]) @ self.generator
def dual(self) -> ClassicalCode:
"""Dual to this code.
The dual code ~C is the set of vectors orthogonal to C:
~C = { x : x @ y = 0 for all y in C }.
The parity check matrix of ~C is equal to the generator of C.
"""
return ClassicalCode(self.generator, self._field_order)
def __invert__(self) -> ClassicalCode:
return self.dual()
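A small brute-force illustration of duality over GF(2) (pure Python, with the 3-bit repetition code standing in for C):

```python
from itertools import product

C = [(0, 0, 0), (1, 1, 1)]                      # 3-bit repetition code
dual = [y for y in product((0, 1), repeat=3)
        if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in C)]

# The dual is the even-weight code on 3 bits.
assert dual == [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```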
@classmethod
def tensor_product(cls, code_a: ClassicalCode, code_b: ClassicalCode) -> ClassicalCode:
"""Tensor product C_a ⊗ C_b of two codes C_a and C_b.
Let G_a and G_b respectively denote the generators of C_a and C_b.
Definition: C_a ⊗ C_b is the code whose generators are G_a ⊗ G_b.
Observation: G_a ⊗ G_b is the check matrix of ~(C_a ⊗ C_b).
We therefore construct ~(C_a ⊗ C_b) and return its dual ~~(C_a ⊗ C_b) = C_a ⊗ C_b.
"""
if not code_a._field_order == code_b._field_order:
raise ValueError("Cannot take tensor product of codes over different fields")
gen_a: npt.NDArray[np.int_] = code_a.generator
gen_b: npt.NDArray[np.int_] = code_b.generator
return ~ClassicalCode(np.kron(gen_a, gen_b))
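The Kronecker-product construction can be checked on tiny repetition codes (a sketch over GF(2)):

```python
import numpy as np

G_a = np.array([[1, 1, 1]])        # generator of the 3-bit repetition code
G_b = np.array([[1, 1]])           # generator of the 2-bit repetition code
G = np.kron(G_a, G_b) % 2          # generator of the tensor product code

assert G.shape == (1, 6)
assert G.tolist() == [[1, 1, 1, 1, 1, 1]]
```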
@property
def num_checks(self) -> int:
"""Number of check bits in this code."""
return self._matrix.shape[0]
@property
def num_bits(self) -> int:
"""Number of data bits in this code."""
return self._matrix.shape[1]
@functools.cached_property
def rank(self) -> int:
"""Rank of this code's parity check matrix.
Equivalently, the number of linearly independent parity checks in this code.
"""
if self._field_order == 2:
return ldpc.mod2.rank(self._matrix)
return np.linalg.matrix_rank(self._matrix)
@property
def dimension(self) -> int:
"""The number of logical bits encoded by this code."""
return self.num_bits - self.rank
@functools.cache
def get_distance(self) -> int:
"""The distance of this code, or equivalently the minimal weight of a nonzero code word."""
words = self.words().view(np.ndarray)
return np.min(np.count_nonzero(words[1:], axis=1))
def get_code_params(self) -> tuple[int, int, int]:
"""Compute the parameters of this code: [n,k,d].
Here:
- n is the number of data bits
- k is the number of encoded ("logical") bits
- d is the code distance
"""
return self.num_bits, self.dimension, self.get_distance()
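For the 3-bit repetition code, the [n,k,d] parameters can be computed by brute force, matching the definitions above (a self-contained sketch):

```python
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])                 # 3-bit repetition code over GF(2)
n = H.shape[1]                            # number of data bits
words = [w for w in np.ndindex(2, 2, 2) if not np.any((H @ np.array(w)) % 2)]
k_dim = int(np.log2(len(words)))          # number of logical bits
d = min(sum(w) for w in words if any(w))  # minimal nonzero weight

assert (n, k_dim, d) == (3, 1, 3)
```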
@classmethod
def random(cls, bits: int, checks: int, field: int | None = None) -> ClassicalCode:
"""Construct a random classical code with the given number of bits and nontrivial checks."""
if field is None:
field = DEFAULT_FIELD_ORDER
code_field = galois.GF(field)
rows, cols = checks, bits
matrix = code_field.Random((rows, cols))
for row in range(matrix.shape[0]):
if not matrix[row, :].any():
matrix[row, np.random.randint(cols)] = code_field.Random(low=1) # pragma: no cover
for col in range(matrix.shape[1]):
if not matrix[:, col].any():
matrix[np.random.randint(rows), col] = code_field.Random(low=1) # pragma: no cover
return ClassicalCode(matrix, field)
@classmethod
def repetition(cls, num_bits: int, field: int | None = None) -> ClassicalCode:
"""Construct a repetition code on the given number of bits."""
minus_one = galois.GF(field or DEFAULT_FIELD_ORDER).characteristic - 1
matrix = np.zeros((num_bits - 1, num_bits), dtype=int)
for row in range(num_bits - 1):
matrix[row, row] = 1
matrix[row, row + 1] = minus_one
return ClassicalCode(matrix, field)
@classmethod
def ring(cls, num_bits: int, field: int | None = None) -> ClassicalCode:
"""Construct a repetition code with periodic boundary conditions."""
minus_one = galois.GF(field or DEFAULT_FIELD_ORDER).characteristic - 1
matrix = np.zeros((num_bits, num_bits), dtype=int)
for row in range(num_bits):
matrix[row, row] = 1
matrix[row, (row + 1) % num_bits] = minus_one
return ClassicalCode(matrix, field)
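Over GF(2) the "minus one" entry in the repetition and ring matrices above is simply 1, and the all-ones vector satisfies every check — a quick sketch of the ring construction:

```python
import numpy as np

num_bits = 4
ring = np.zeros((num_bits, num_bits), dtype=int)
for row in range(num_bits):
    ring[row, row] = 1
    ring[row, (row + 1) % num_bits] = 1   # "minus one" is 1 over GF(2)

ones = np.ones(num_bits, dtype=int)
assert not np.any((ring @ ones) % 2)      # all-ones word passes every check
```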
@classmethod
def hamming(cls, rank: int, field: int | None = None) -> ClassicalCode:
"""Construct a hamming code of a given rank."""
field = field or DEFAULT_FIELD_ORDER
if field == 2:
# parity check matrix: columns = all nonzero bitstrings
bitstrings = list(itertools.product([0, 1], repeat=rank))
return ClassicalCode(np.array(bitstrings[1:]).T)
# More generally, columns = maximal set of nonzero, linearly independent strings.
# This is achieved by collecting together all strings whose first nonzero element is a 1.
strings = [
(0,) * top_row + (1,) + rest
for top_row in range(rank - 1, -1, -1)
for rest in itertools.product(range(field), repeat=rank - top_row - 1)
]
return ClassicalCode(np.array(strings).T, field=field)
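For the binary case above, rank 3 gives the classic [7, 4, 3] Hamming code, whose columns double as distinct single-bit-error syndromes (sketch):

```python
import itertools
import numpy as np

rank = 3
bitstrings = list(itertools.product([0, 1], repeat=rank))
H = np.array(bitstrings[1:]).T      # columns are all nonzero 3-bit strings

assert H.shape == (3, 7)            # the [7, 4, 3] Hamming code
# Every single-bit error has a distinct, nonzero syndrome:
syndromes = {tuple(H[:, i] % 2) for i in range(7)}
assert len(syndromes) == 7
```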
# TODO: add more codes, particularly from code families that are useful for good quantum codes
# see https://mhostetter.github.io/galois/latest/api/#forward-error-correction
# TODO:
# - add method to convert a parity check matrix into standard form
# - see https://arxiv.org/abs/1101.1519
# - one method to compute "blocks" of standard form, one to return the matrix itself
# - add is_CSS method to figure out whether this is a CSS Code
# - see https://quantumcomputing.stackexchange.com/questions/15432/
# - also compute and store sub-codes, if CSS
# - also add QuditCode.to_CSS() -> CSSCode
class QuditCode(AbstractCode):
"""Quantum stabilizer code for Galois qudits, with dimension q = p^m for prime p and integer m.
The parity check matrix of a QuditCode has dimensions (num_checks, 2 * num_qudits), and can be
written as a block matrix in the form H = [H_x|H_z]. Each block has num_qudits columns.
The entries H_x[c, d] = r_x and H_z[c, d] = r_z iff check c addresses qudit d with the operator
X(r_x) * Z(r_z), where r_x, r_z range over the base field, and X(r), Z(r) are generalized Pauli
operators. Specifically:
- X(r) = sum_{j=0}^{q-1} |j+r><j| is a shift operator, and
- Z(r) = sum_{j=0}^{q-1} w^{j r} |j><j| is a phase operator, with w = exp(2 pi i / q).
Warning: here j, r, s, etc. are not integers, but elements of the Galois field GF(q), which has
different rules for addition and multiplication when q is not a prime number.
Helpful lecture by Gottesman: https://www.youtube.com/watch?v=JWg4zrNAF-g
"""
@property
def num_checks(self) -> int:
"""Number of parity checks (stabilizers) in this code."""
return self.matrix.shape[0]
@property
def num_qudits(self) -> int:
"""Number of data qudits in this code."""
return self.matrix.shape[1] // 2
@property
def num_qubits(self) -> int:
"""Number of data qubits in this code."""
self._assert_qubit_code()
return self.num_qudits
def _assert_qubit_code(self) -> None:
if self._field_order != 2:
raise ValueError("Attempted to call a qubit-only method with a non-qubit code.")
@classmethod
def matrix_to_graph(cls, matrix: npt.NDArray[np.int_] | Sequence[Sequence[int]]) -> nx.DiGraph:
"""Convert a parity check matrix into a Tanner graph."""
graph = nx.DiGraph()
matrix = np.reshape(matrix, (len(matrix), 2, -1))
for row, col_xz, col in zip(*np.nonzero(matrix)):
node_check = Node(index=int(row), is_data=False)
node_qudit = Node(index=int(col), is_data=True)
graph.add_edge(node_check, node_qudit)
| qudit_op = graph[node_check][node_qudit].get(QuditOperator, QuditOperator()) | 4 | 2023-12-19 22:29:42+00:00 | 12k |
amazon-science/c2f-seg | src/image_model.py | [
{
"identifier": "VQModel",
"path": "taming_src/taming_models.py",
"snippet": "class VQModel(nn.Module):\n def __init__(self, config):\n super(VQModel, self).__init__()\n self.config = config\n self.iteration = 0\n self.name = config.model_type\n self.m_path = os.pat... | import os
import math
import random
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist
from torchvision import transforms
from taming_src.taming_models import VQModel
from src.image_component import MaskedTransformer, Resnet_Encoder, Refine_Module
from src.loss import VGG19, PerceptualLoss
from utils.pytorch_optimization import AdamW, get_linear_schedule_with_warmup
from utils.utils import torch_show_all_params, torch_init_model
from utils.utils import Config
from utils.evaluation import evaluation_image
from utils.loss import CrossEntropyLoss
from tqdm import tqdm | 7,680 |
class C2F_Seg(nn.Module):
def __init__(self, config, g_path, mode, logger=None, save_eval_dict={}):
super(C2F_Seg, self).__init__()
self.config = config
self.iteration = 0
self.sample_iter = 0
self.name = config.model_type
# load g model for mask
self.g_config = Config(os.path.join(g_path, 'vqgan_{}.yml'.format(config.dataset)))
self.g_path = os.path.join(g_path, self.g_config.model_type)
self.root_path = config.path
self.transformer_path = os.path.join(config.path, self.name)
self.mode = mode
self.save_eval_dict = save_eval_dict
self.eps = 1e-6
self.train_sample_iters = config.train_sample_iters
self.g_model = VQModel(self.g_config).to(config.device)
self.img_encoder = Resnet_Encoder().to(config.device) | self.refine_module = Refine_Module().to(config.device) | 3 | 2023-12-21 04:25:47+00:00 | 12k |
huahuahuage/Bert-VITS2-Speech | onnx_infer/text/chinese.py | [
{
"identifier": "punctuation",
"path": "onnx_infer/text/symbols.py",
"snippet": ""
},
{
"identifier": "ToneSandhi",
"path": "onnx_infer/text/chinese_tone_sandhi.py",
"snippet": "class ToneSandhi:\r\n def __init__(self):\r\n self.must_neural_tone_words = {\r\n \"麻烦\",... | import os
import re
import cn2an
import jieba.posseg as psg
from typing import List, Dict
from pypinyin import lazy_pinyin, Style
from .symbols import punctuation
from .chinese_tone_sandhi import ToneSandhi
from log import log_instance
| 9,001 |
REP_MAP = {
":": ",",
";": ",",
",": ",",
"。": ".",
"!": "!",
"?": "?",
"\n": ".",
"·": ",",
"、": ",",
"...": "…",
"$": ".",
"“": "'",
"”": "'",
'"': "'",
"‘": "'",
"’": "'",
"(": "'",
")": "'",
"(": "'",
")": "'",
"《": "'",
"》": "'",
"【": "'",
"】": "'",
"[": "'",
"]": "'",
"—": "-",
"~": "-",
"~": "-",
"「": "'",
"」": "'",
}
class ChineseG2P:
def __init__(self) -> None:
self.tone_modifier = ToneSandhi()
self.pinyin_to_symbol_map: Dict[str, str] = {}
self.__read_opencpop_symbol_map()
def __read_opencpop_symbol_map(self):
"""
Read the opencpop pinyin-to-symbol mapping data.
"""
f = open("onnx/Text/opencpop-strict.txt", "r")
for line in f.readlines():
self.pinyin_to_symbol_map[line.split("\t")[0]] = line.strip().split("\t")[1]
f.close()
@staticmethod
def __get_initials_finals(word):
initials = []
finals = []
orig_initials = lazy_pinyin(
word, neutral_tone_with_five=True, style=Style.INITIALS
)
orig_finals = lazy_pinyin(
word, neutral_tone_with_five=True, style=Style.FINALS_TONE3
)
for c, v in zip(orig_initials, orig_finals):
initials.append(c)
finals.append(v)
return initials, finals
def g2p(self, segments_list: List[str]):
phones_list = []
tones_list = []
word2ph = []
for seg in segments_list:
seg_cut = psg.lcut(seg)
initials = []
finals = []
seg_cut = self.tone_modifier.pre_merge_for_modify(seg_cut)
for word, pos in seg_cut:
if pos == "eng":
continue
sub_initials, sub_finals = self.__get_initials_finals(word)
sub_finals = self.tone_modifier.modified_tone(word, pos, sub_finals)
initials.append(sub_initials)
finals.append(sub_finals)
# assert len(sub_initials) == len(sub_finals) == len(word)
initials = sum(initials, [])
finals = sum(finals, [])
#
for c, v in zip(initials, finals):
raw_pinyin = c + v
# NOTE: post process for pypinyin outputs
# we discriminate i, ii and iii
if c == v:
assert c in punctuation
phone = [c]
tone = "0"
word2ph.append(1)
else:
v_without_tone = v[:-1]
tone = v[-1]
pinyin = c + v_without_tone
assert tone in "12345"
if c:
# syllable with an initial
v_rep_map = {
"uei": "ui",
"iou": "iu",
"uen": "un",
}
if v_without_tone in v_rep_map.keys():
pinyin = c + v_rep_map[v_without_tone]
else:
# syllable without an initial
pinyin_rep_map = {
"ing": "ying",
"i": "yi",
"in": "yin",
"u": "wu",
}
if pinyin in pinyin_rep_map.keys():
pinyin = pinyin_rep_map[pinyin]
else:
single_rep_map = {
"v": "yu",
"e": "e",
"i": "y",
"u": "w",
}
if pinyin[0] in single_rep_map.keys():
pinyin = single_rep_map[pinyin[0]] + pinyin[1:]
assert pinyin in self.pinyin_to_symbol_map.keys(), (
pinyin,
seg,
raw_pinyin,
)
phone = self.pinyin_to_symbol_map[pinyin].split(" ")
word2ph.append(len(phone))
phones_list += phone
tones_list += [int(tone)] * len(phone)
return phones_list, tones_list, word2ph
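The per-syllable split above treats the trailing digit of a FINALS_TONE3 string as the tone and the rest as the toneless final; a standalone sketch:

```python
def split_final(v):
    """'ao3' -> ('ao', '3'); assumes pypinyin's FINALS_TONE3 style output."""
    return v[:-1], v[-1]

assert split_final("ao3") == ("ao", "3")
assert split_final("uen2") == ("uen", "2")
```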
chinese_g2p_instance = ChineseG2P()
def g2p(text: str):
"""
Convert text into phoneme, tone, and word2ph sequences.
"""
# Split the text into a list of segments at punctuation marks
pattern = r"(?<=[{0}])\s*".format("".join(punctuation))
sentences = [i for i in re.split(pattern, text) if i.strip() != ""]
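The lookbehind pattern above splits after each punctuation mark while keeping the mark attached to its sentence. A self-contained sketch with an illustrative punctuation set (the project's real `punctuation` list lives in onnx_infer/text/symbols.py):

```python
import re

punct = ["!", "?", "…", ",", ".", "'", "-"]    # illustrative, not the project's list
pattern = r"(?<=[{0}])\s*".format("".join(punct))

text = "你好,世界.真好!"
sentences = [s for s in re.split(pattern, text) if s.strip() != ""]
assert sentences == ["你好,", "世界.", "真好!"]
```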
# For each segment, return the pronunciation sequences for the text:
# phone: pinyin initials and finals
# tone: tone number 1 2 3 4 5
# word2ph: 1 if the syllable is a bare final, 2 if it has an initial and a final
phones_list, tones_list, word2ph_list = chinese_g2p_instance.g2p(sentences)
if sum(word2ph_list) != len(phones_list):
raise ValueError("中文转拼音失败:音节总数(sum(word2ph_list))与音节的个数(len(phones_list))不匹配。")
if len(word2ph_list) != len(text):  # Sometimes it will crash, you can add a try-catch.
raise ValueError("中文转拼音失败:拼音结果个数(len(word2ph_list))与文本长度(len(text))不匹配。")
phones_list = ["_"] + phones_list + ["_"]
| log_instance.debug(f"phones {str(phones_list)}")
| 2 | 2023-12-21 13:50:50+00:00 | 12k |
lipku/metahuman-stream | main.py | [
{
"identifier": "NeRFDataset",
"path": "nerf_triplane/provider.py",
"snippet": "class NeRFDataset:\n def __init__(self, opt, device, type='train', downscale=1):\n super().__init__()\n \n self.opt = opt\n self.device = device\n self.type = type # train, val, test\n ... | import torch
import argparse
from nerf_triplane.provider import NeRFDataset
from nerf_triplane.utils import *
from nerf_triplane.network import NeRFNetwork
from nerf_triplane.gui import NeRFGUI | 10,710 |
|
# torch.autograd.set_detect_anomaly(True)
# Disable TF32 features, which cause low numerical accuracy on RTX 30xx GPUs.
try:
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
except AttributeError as e:
print('Info: this PyTorch version does not support tf32.')
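The try/except above probes for TF32 switches that older PyTorch builds lack. The pattern can be sketched with stand-in namespaces shaped like `torch.backends` (an assumption, so the sketch runs without torch):

```python
from types import SimpleNamespace

def disable_tf32(backends):
    """Best-effort TF32 switch-off, mirroring the try/except above."""
    try:
        backends.cuda.matmul.allow_tf32 = False
        backends.cudnn.allow_tf32 = False
        return True
    except AttributeError:
        return False

# Stand-ins: a namespace shaped like torch.backends on a recent build,
# and an empty one standing in for an old build without the attributes.
modern = SimpleNamespace(cuda=SimpleNamespace(matmul=SimpleNamespace(allow_tf32=True)),
                         cudnn=SimpleNamespace(allow_tf32=True))
legacy = SimpleNamespace()

assert disable_tf32(modern) is True
assert disable_tf32(legacy) is False
assert modern.cuda.matmul.allow_tf32 is False
```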
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('path', type=str)
parser.add_argument('-O', action='store_true', help="equals --fp16 --cuda_ray --exp_eye")
parser.add_argument('--test', action='store_true', help="test mode (load model and test dataset)")
parser.add_argument('--test_train', action='store_true', help="test mode (load model and train dataset)")
parser.add_argument('--data_range', type=int, nargs='*', default=[0, -1], help="data range to use")
parser.add_argument('--workspace', type=str, default='workspace')
parser.add_argument('--seed', type=int, default=0)
### training options
parser.add_argument('--iters', type=int, default=200000, help="training iters")
parser.add_argument('--lr', type=float, default=1e-2, help="initial learning rate")
parser.add_argument('--lr_net', type=float, default=1e-3, help="initial learning rate")
parser.add_argument('--ckpt', type=str, default='latest')
parser.add_argument('--num_rays', type=int, default=4096 * 16, help="num rays sampled per image for each training step")
parser.add_argument('--cuda_ray', action='store_true', help="use CUDA raymarching instead of pytorch")
parser.add_argument('--max_steps', type=int, default=16, help="max num steps sampled per ray (only valid when using --cuda_ray)")
parser.add_argument('--num_steps', type=int, default=16, help="num steps sampled per ray (only valid when NOT using --cuda_ray)")
parser.add_argument('--upsample_steps', type=int, default=0, help="num steps up-sampled per ray (only valid when NOT using --cuda_ray)")
parser.add_argument('--update_extra_interval', type=int, default=16, help="iter interval to update extra status (only valid when using --cuda_ray)")
parser.add_argument('--max_ray_batch', type=int, default=4096, help="batch size of rays at inference to avoid OOM (only valid when NOT using --cuda_ray)")
### loss set
parser.add_argument('--warmup_step', type=int, default=10000, help="warm up steps")
parser.add_argument('--amb_aud_loss', type=int, default=1, help="use ambient aud loss")
parser.add_argument('--amb_eye_loss', type=int, default=1, help="use ambient eye loss")
parser.add_argument('--unc_loss', type=int, default=1, help="use uncertainty loss")
parser.add_argument('--lambda_amb', type=float, default=1e-4, help="lambda for ambient loss")
### network backbone options
parser.add_argument('--fp16', action='store_true', help="use amp mixed precision training")
parser.add_argument('--bg_img', type=str, default='', help="background image")
parser.add_argument('--fbg', action='store_true', help="frame-wise bg")
parser.add_argument('--exp_eye', action='store_true', help="explicitly control the eyes")
parser.add_argument('--fix_eye', type=float, default=-1, help="fixed eye area, negative to disable, set to 0-0.3 for a reasonable eye")
parser.add_argument('--smooth_eye', action='store_true', help="smooth the eye area sequence")
parser.add_argument('--torso_shrink', type=float, default=0.8, help="shrink bg coords to allow more flexibility in deform")
### dataset options
parser.add_argument('--color_space', type=str, default='srgb', help="Color space, supports (linear, srgb)")
parser.add_argument('--preload', type=int, default=0, help="0 means load data from disk on-the-fly, 1 means preload to CPU, 2 means GPU.")
# (the default value is for the fox dataset)
parser.add_argument('--bound', type=float, default=1, help="assume the scene is bounded in box[-bound, bound]^3, if > 1, will invoke adaptive ray marching.")
parser.add_argument('--scale', type=float, default=4, help="scale camera location into box[-bound, bound]^3")
parser.add_argument('--offset', type=float, nargs='*', default=[0, 0, 0], help="offset of camera location")
parser.add_argument('--dt_gamma', type=float, default=1/256, help="dt_gamma (>=0) for adaptive ray marching. set to 0 to disable, >0 to accelerate rendering (but usually with worse quality)")
parser.add_argument('--min_near', type=float, default=0.05, help="minimum near distance for camera")
parser.add_argument('--density_thresh', type=float, default=10, help="threshold for density grid to be occupied (sigma)")
parser.add_argument('--density_thresh_torso', type=float, default=0.01, help="threshold for density grid to be occupied (alpha)")
parser.add_argument('--patch_size', type=int, default=1, help="[experimental] render patches in training, so as to apply LPIPS loss. 1 means disabled, use [64, 32, 16] to enable")
parser.add_argument('--init_lips', action='store_true', help="init lips region")
parser.add_argument('--finetune_lips', action='store_true', help="use LPIPS and landmarks to fine tune lips region")
parser.add_argument('--smooth_lips', action='store_true', help="smooth the enc_a in an exponential decay way...")
parser.add_argument('--torso', action='store_true', help="fix head and train torso")
parser.add_argument('--head_ckpt', type=str, default='', help="head model")
### GUI options
parser.add_argument('--gui', action='store_true', help="start a GUI")
parser.add_argument('--W', type=int, default=450, help="GUI width")
parser.add_argument('--H', type=int, default=450, help="GUI height")
parser.add_argument('--radius', type=float, default=3.35, help="default GUI camera radius from center")
parser.add_argument('--fovy', type=float, default=21.24, help="default GUI camera fovy")
parser.add_argument('--max_spp', type=int, default=1, help="GUI rendering max sample per pixel")
### else
parser.add_argument('--att', type=int, default=2, help="audio attention mode (0 = turn off, 1 = left-direction, 2 = bi-direction)")
parser.add_argument('--aud', type=str, default='', help="audio source (empty will load the default, else should be a path to a npy file)")
parser.add_argument('--emb', action='store_true', help="use audio class + embedding instead of logits")
parser.add_argument('--ind_dim', type=int, default=4, help="individual code dim, 0 to turn off")
parser.add_argument('--ind_num', type=int, default=10000, help="number of individual codes, should be larger than training dataset size")
parser.add_argument('--ind_dim_torso', type=int, default=8, help="individual code dim, 0 to turn off")
parser.add_argument('--amb_dim', type=int, default=2, help="ambient dimension")
parser.add_argument('--part', action='store_true', help="use partial training data (1/10)")
parser.add_argument('--part2', action='store_true', help="use partial training data (first 15s)")
parser.add_argument('--train_camera', action='store_true', help="optimize camera pose")
parser.add_argument('--smooth_path', action='store_true', help="brute-force smooth camera pose trajectory with a window size")
parser.add_argument('--smooth_path_window', type=int, default=7, help="smoothing window size")
# asr
parser.add_argument('--asr', action='store_true', help="load asr for real-time app")
parser.add_argument('--asr_wav', type=str, default='', help="load the wav and use as input")
parser.add_argument('--asr_play', action='store_true', help="play out the audio")
parser.add_argument('--asr_model', type=str, default='deepspeech')
# parser.add_argument('--asr_model', type=str, default='cpierse/wav2vec2-large-xlsr-53-esperanto')
# parser.add_argument('--asr_model', type=str, default='facebook/wav2vec2-large-960h-lv60-self')
parser.add_argument('--asr_save_feats', action='store_true')
# audio FPS
parser.add_argument('--fps', type=int, default=50)
# sliding window left-middle-right length (unit: 20ms)
parser.add_argument('-l', type=int, default=10)
parser.add_argument('-m', type=int, default=50)
parser.add_argument('-r', type=int, default=10)
opt = parser.parse_args()
if opt.O:
opt.fp16 = True
opt.exp_eye = True
if opt.test and False:
opt.smooth_path = True
opt.smooth_eye = True
opt.smooth_lips = True
opt.cuda_ray = True
# assert opt.cuda_ray, "Only support CUDA ray mode."
if opt.patch_size > 1:
# assert opt.patch_size > 16, "patch_size should > 16 to run LPIPS loss."
assert opt.num_rays % (opt.patch_size ** 2) == 0, "num_rays should be divisible by patch_size ** 2."
# if opt.finetune_lips:
# # do not update density grid in finetune stage
# opt.update_extra_interval = 1e9
print(opt)
seed_everything(opt.seed)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = NeRFNetwork(opt)
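The `-l`/`-m`/`-r` options above define the ASR sliding window in 20 ms hops (left, middle, right context). A quick, illustrative sanity check of the resulting audio-context length (the helper name `window_ms` is made up here, not part of the repo):

```python
def window_ms(l, m, r, hop_ms=20):
    """Total audio context covered by the left/middle/right window, in ms."""
    return (l + m + r) * hop_ms

# with the defaults -l 10 -m 50 -r 10 this is 1400 ms of audio context
print(window_ms(10, 50, 10))
```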
MingtaoGuo/AnimateAnyone_unofficial | ldm/models/diffusion/ddpm.py | [
{
"identifier": "log_txt_as_img",
"path": "ldm/util.py",
"snippet": "def log_txt_as_img(wh, xc, size=10):\n # wh a tuple of (width, height)\n # xc a list of captions to plot\n b = len(xc)\n txts = list()\n for bi in range(b):\n txt = Image.new(\"RGB\", wh, color=\"white\")\n ... | import torch
import torch.nn as nn
import numpy as np
import pytorch_lightning as pl
import itertools
from torch.optim.lr_scheduler import LambdaLR
from einops import rearrange, repeat
from contextlib import contextmanager, nullcontext
from functools import partial
from tqdm import tqdm
from torchvision.utils import make_grid
from pytorch_lightning.utilities.distributed import rank_zero_only
from omegaconf import ListConfig
from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
from ldm.modules.ema import LitEma
from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
from ldm.models.diffusion.ddim import DDIMSampler

"""
wild mixture of
https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
https://github.com/CompVis/taming-transformers
-- merci
"""
__conditioning_keys__ = {'concat': 'c_concat',
'crossattn': 'c_crossattn',
'adm': 'y'}
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def uniform_on_device(r1, r2, shape, device):
return (r1 - r2) * torch.rand(*shape, device=device) + r2
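`uniform_on_device` is the standard affine remap of `U[0, 1)` samples into `[r2, r1)`. The same map in plain Python, as a torch-free sketch for illustration only:

```python
import random

def uniform_between(r1, r2):
    # (r1 - r2) * u + r2 maps u ~ U[0, 1) into [r2, r1) when r1 > r2
    return (r1 - r2) * random.random() + r2
```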
class DDPM(pl.LightningModule):
# classic DDPM with Gaussian diffusion, in image space
def __init__(self,
unet_config,
timesteps=1000,
beta_schedule="linear",
loss_type="l2",
ckpt_path=None,
ignore_keys=[],
load_only_unet=False,
monitor="val/loss",
use_ema=True,
first_stage_key="image",
image_size=256,
channels=3,
log_every_t=100,
clip_denoised=True,
linear_start=1e-4,
linear_end=2e-2,
cosine_s=8e-3,
given_betas=None,
original_elbo_weight=0.,
v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
l_simple_weight=1.,
conditioning_key=None,
parameterization="eps", # all assuming fixed variance schedules
scheduler_config=None,
use_positional_encodings=False,
learn_logvar=False,
logvar_init=0.,
make_it_fit=False,
ucg_training=None,
reset_ema=False,
reset_num_ema_updates=False,
):
super().__init__()
assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
self.parameterization = parameterization
print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
self.cond_stage_model = None
self.clip_denoised = clip_denoised
self.log_every_t = log_every_t
self.first_stage_key = first_stage_key
self.image_size = image_size # try conv?
self.channels = channels
self.use_positional_encodings = use_positional_encodings
self.model = DiffusionWrapper(unet_config, conditioning_key)
count_params(self.model, verbose=True)
self.use_ema = use_ema
if self.use_ema:
| """
wild mixture of
https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
https://github.com/CompVis/taming-transformers
-- merci
"""
__conditioning_keys__ = {'concat': 'c_concat',
'crossattn': 'c_crossattn',
'adm': 'y'}
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def uniform_on_device(r1, r2, shape, device):
return (r1 - r2) * torch.rand(*shape, device=device) + r2
class DDPM(pl.LightningModule):
# classic DDPM with Gaussian diffusion, in image space
def __init__(self,
unet_config,
timesteps=1000,
beta_schedule="linear",
loss_type="l2",
ckpt_path=None,
ignore_keys=[],
load_only_unet=False,
monitor="val/loss",
use_ema=True,
first_stage_key="image",
image_size=256,
channels=3,
log_every_t=100,
clip_denoised=True,
linear_start=1e-4,
linear_end=2e-2,
cosine_s=8e-3,
given_betas=None,
original_elbo_weight=0.,
v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
l_simple_weight=1.,
conditioning_key=None,
parameterization="eps", # all assuming fixed variance schedules
scheduler_config=None,
use_positional_encodings=False,
learn_logvar=False,
logvar_init=0.,
make_it_fit=False,
ucg_training=None,
reset_ema=False,
reset_num_ema_updates=False,
):
super().__init__()
assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
self.parameterization = parameterization
print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
self.cond_stage_model = None
self.clip_denoised = clip_denoised
self.log_every_t = log_every_t
self.first_stage_key = first_stage_key
self.image_size = image_size # try conv?
self.channels = channels
self.use_positional_encodings = use_positional_encodings
self.model = DiffusionWrapper(unet_config, conditioning_key)
count_params(self.model, verbose=True)
self.use_ema = use_ema
if self.use_ema: | self.model_ema = LitEma(self.model) | 8 | 2023-12-16 03:31:33+00:00 | 12k |
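For reference, the three `parameterization` targets accepted by `DDPM.__init__` ("eps", "x0", "v") correspond, in standard DDPM notation (this summary is supplied here for context, not taken from the file), to predicting:

```latex
x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\, \epsilon,
\qquad
\text{``eps''}: \epsilon,
\quad
\text{``x0''}: x_0,
\quad
\text{``v''}: v_t = \sqrt{\bar\alpha_t}\, \epsilon - \sqrt{1-\bar\alpha_t}\, x_0 .
```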
yasserben/CLOUDS | clouds/clouds.py | [
{
"identifier": "SetCriterion",
"path": "clouds/modeling/criterion.py",
"snippet": "class SetCriterion(nn.Module):\n \"\"\"This class computes the loss for DETR.\n The process happens in two steps:\n 1) we compute hungarian assignment between ground truth boxes and the outputs of the model\... | from typing import Tuple
from copy import deepcopy
from torch import nn
from torch.nn import functional as F
from detectron2.config import configurable
from detectron2.data import MetadataCatalog
from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head
from detectron2.modeling.backbone import Backbone
from detectron2.modeling.postprocessing import sem_seg_postprocess
from detectron2.structures import Boxes, ImageList, Instances, BitMasks
from detectron2.utils.memory import retry_if_cuda_oom
from torch.nn.parallel import DistributedDataParallel
from .modeling.criterion import SetCriterion
from .modeling.matcher import HungarianMatcher
from my_utils import *
from scipy.ndimage import label, center_of_mass
from scipy.ndimage import binary_erosion
from scipy.ndimage import label, sum as ndi_sum
from .sam import SAM
from .modeling.transformer_decoder.clouds_transformer_decoder import (
MaskPooling,
get_classification_logits,
)
from torch.nn.modules.dropout import _DropoutNd
from timm.models.layers import DropPath
import numpy as np
import matplotlib.pyplot as plt
import torch
import os
import copy
import cv2

seg_maps_target = self.predict_inference(
outputs_target,
features_clean["clip_vis_dense"],
text_classifier,
num_templates,
images_norm_clean,
batched_inputs_target,
)
targets_target = process_segmentation_maps(seg_maps_target)
if self.sam_enabled:
separate_dict = separate_shapes_list(
targets_target, size_threshold=self.sam_size_threshold
)
coordinate_dict = get_fixed_points(
separate_dict,
apply_erosion=self.sam_erosion,
num_points=self.sam_num_points,
erosion_size=self.sam_erosion_size,
selection_mode=self.sam_selection_mode,
)
last_targets_target = []
for i, dico in enumerate(batched_inputs_target):
image_i = dico["image"]
image_perm = image_i.permute(1, 2, 0).cpu().numpy()
image_perm = self.sam.apply_image(image_perm)
self.sam.set_torch_image(
torch.tensor(image_perm.transpose(2, 0, 1))
.unsqueeze(0)
.to(self.device),
(768, 768),
)
points_coords, count_per_key = dict_to_tensor(
coordinate_dict[i]
)
points_coords = self.sam.apply_coords(
points_coords.cpu().numpy(), (768, 768)
)
if points_coords.shape[0]:
(masks, logits, masks_input,) = self.sam.predict_torch(
point_coords=torch.tensor(points_coords).to(
self.device
),
point_labels=create_ones_tensor(points_coords).to(
self.device
),
multimask_output=True,
)
if self.sam_refinement:
masks_input = select_best_masks(masks_input, logits)
masks, logits, _, = self.sam.predict_torch(
point_coords=torch.tensor(points_coords).to(
self.device
),
point_labels=create_ones_tensor(
points_coords
).to(self.device),
mask_input=masks_input.unsqueeze(1),
multimask_output=True,
)
masks = select_best_masks(masks, logits)
if self.sam_rm_intersection:
masks = remove_intersecting_pixels(masks)
reconstructed_dict = reconstruct_dict(
masks, count_per_key
)
new_targets_target = transform_masks(reconstructed_dict)
last_targets_target.append(new_targets_target)
viz_targets_target = union_of_masks(reconstructed_dict)
visualize_semantic_map_maxed(viz_targets_target)
save_semantic_map_maxed(viz_targets_target, after=True)
else:
last_targets_target.append(targets_target[i])
targets_target = last_targets_target
for i, index in enumerate(order_target):
targets[index] = targets_target[i]
losses = self.criterion(outputs, targets)
for k in list(losses.keys()):
if k in self.criterion.weight_dict:
losses[k] *= self.criterion.weight_dict[k]
else:
# remove this loss if not specified in `weight_dict`
losses.pop(k)
self.local_iter += 1
return losses
else:
mask_cls_results = outputs["pred_logits"]
mask_pred_results = outputs["pred_masks"]
if self.geometric_ensemble:
# We ensemble the pred logits of in-vocab and out-vocab
clip_feature = features["clip_vis_dense"]
mask_for_pooling = F.interpolate(
mask_pred_results,
size=clip_feature.shape[-2:],
mode="bilinear",
align_corners=False,
)
if "convnext" in self.backbone.model_name.lower():
pooled_clip_feature = self.mask_pooling(
clip_feature, mask_for_pooling
)
pooled_clip_feature = self.backbone.visual_prediction_forward(
pooled_clip_feature
)
elif "rn" in self.backbone.model_name.lower():
pooled_clip_feature = self.backbone.visual_prediction_forward(
clip_feature, mask_for_pooling
)
else:
raise NotImplementedError
| """
# ---------------------------------------------------------------
# Copyright 2023 Telecom Paris, Yasser BENIGMIM. All rights reserved.
# Licensed under the Apache License, Version 2.0
Reference: https://github.com/facebookresearch/Mask2Former/blob/main/train_net.py
https://github.com/bytedance/fc-clip/blob/main/fcclip/fcclip.py
# ---------------------------------------------------------------
"""
def is_element_in_string(my_list, my_string):
for element in my_list:
if element in my_string:
return True
return False
def show_anns(anns, val=0.35):
if len(anns) == 0:
return
sorted_anns = sorted(anns, key=(lambda x: x["area"]), reverse=True)
ax = plt.gca()
ax.set_autoscale_on(False)
for ann in sorted_anns:
m = ann["segmentation"]
img = np.ones((m.shape[0], m.shape[1], 3))
color_mask = np.random.random((1, 3)).tolist()[0]
for i in range(3):
img[:, :, i] = color_mask[i]
ax.imshow(np.dstack((img, m * val)))
def write_masks_to_png(
masks,
image,
filename,
path="segmented",
val=0.35,
) -> None:
plt.figure(figsize=(30, 30))
plt.imshow(image)
show_anns(masks, val)
plt.axis("off")
# plt.show()
# filename = f"masks.png"
plt.savefig(os.path.join(path, filename))
return
#
# pred = processed_results[0]["sem_seg"].unsqueeze(dim=0)
# pred = torch.argmax(pred, dim=1)
# pred_1 = torch.squeeze(pred)
# pred_1 = np.asarray(pred_1.cpu().data, dtype=np.uint8)
# pred_1_map = colorize_mask(pred_1, None)
VILD_PROMPT = [
"a photo of a {}.",
"This is a photo of a {}",
"There is a {} in the scene",
"There is the {} in the scene",
"a photo of a {} in the scene",
"a photo of a small {}.",
"a photo of a medium {}.",
"a photo of a large {}.",
"This is a photo of a small {}.",
"This is a photo of a medium {}.",
"This is a photo of a large {}.",
"There is a small {} in the scene.",
"There is a medium {} in the scene.",
"There is a large {} in the scene.",
]
def _params_equal(ema_model, model):
for ema_param, param in zip(ema_model.named_parameters(), model.named_parameters()):
if not torch.equal(ema_param[1].data, param[1].data):
# print("Difference in", ema_param[0])
return False
return True
@META_ARCH_REGISTRY.register()
class CLOUDS(nn.Module):
"""
Main class for mask classification semantic segmentation architectures.
"""
@configurable
def __init__(
self,
*,
backbone: Backbone,
sem_seg_head: nn.Module,
criterion: nn.Module,
num_queries: int,
object_mask_threshold: float,
overlap_threshold: float,
train_metadata,
test_metadata,
size_divisibility: int,
sem_seg_postprocess_before_inference: bool,
pixel_mean: Tuple[float],
pixel_std: Tuple[float],
# inference
semantic_on: bool,
panoptic_on: bool,
instance_on: bool,
test_topk_per_image: int,
# CLOUDS
geometric_ensemble_alpha: float,
geometric_ensemble_beta: float,
ensemble_on_valid_mask: bool,
geometric_ensemble: bool,
geometric_ensemble_ema: bool,
sam_enabled: bool,
sam_mobile: bool,
sam_minibatch: bool,
sam_size_threshold: int,
sam_erosion: bool,
sam_erosion_size: int,
sam_num_points: int,
sam_selection_mode: str,
sam_rm_intersection: bool,
sam_refinement: bool,
alpha_ema: float,
overwriting: bool,
iteration_update: int,
):
"""
Args:
backbone: a backbone module, must follow detectron2's backbone interface
sem_seg_head: a module that predicts semantic segmentation from backbone features
criterion: a module that defines the loss
num_queries: int, number of queries
object_mask_threshold: float, threshold to filter query based on classification score
for panoptic segmentation inference
overlap_threshold: overlap threshold used in general inference for panoptic segmentation
metadata: dataset meta, get `thing` and `stuff` category names for panoptic
segmentation inference
size_divisibility: Some backbones require the input height and width to be divisible by a
specific integer. We can use this to override such requirement.
sem_seg_postprocess_before_inference: whether to resize the prediction back
to original input size before semantic segmentation inference or after.
For high-resolution dataset like Mapillary, resizing predictions before
inference will cause OOM error.
pixel_mean, pixel_std: list or tuple with #channels element, representing
the per-channel mean and std to be used to normalize the input image
semantic_on: bool, whether to output semantic segmentation prediction
instance_on: bool, whether to output instance segmentation prediction
panoptic_on: bool, whether to output panoptic segmentation prediction
test_topk_per_image: int, instance segmentation parameter, keep topk instances per image
"""
super().__init__()
self.backbone = backbone
self.sem_seg_head = sem_seg_head
self.sam_minibatch = sam_minibatch
self.overwriting = overwriting
if self.sam_minibatch:
self.sem_seg_head_ema = deepcopy(self.sem_seg_head)
self.local_iter = 0
self.criterion = criterion
self.num_queries = num_queries
self.iteration_update = iteration_update
self.overlap_threshold = overlap_threshold
self.object_mask_threshold = object_mask_threshold
self.train_metadata = train_metadata
self.test_metadata = test_metadata
if size_divisibility < 0:
# use backbone size_divisibility if not set
size_divisibility = self.backbone.size_divisibility
self.size_divisibility = size_divisibility
self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference
self.register_buffer(
"pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False
)
self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False)
# additional args
self.semantic_on = semantic_on
self.instance_on = instance_on
self.panoptic_on = panoptic_on
self.test_topk_per_image = test_topk_per_image
if not self.semantic_on:
assert self.sem_seg_postprocess_before_inference
# CLOUDS args
self.mask_pooling = MaskPooling()
self.geometric_ensemble_alpha = geometric_ensemble_alpha
self.geometric_ensemble_beta = geometric_ensemble_beta
self.ensemble_on_valid_mask = ensemble_on_valid_mask
self.train_text_classifier = None
self.test_text_classifier = None
self.void_embedding = nn.Embedding(1, backbone.dim_latent) # use this for void
self.geometric_ensemble = geometric_ensemble
self.geometric_ensemble_ema = geometric_ensemble_ema
(
_,
self.train_num_templates,
self.train_class_names,
) = self.prepare_class_names_from_metadata(train_metadata, train_metadata)
(
self.category_overlapping_mask,
self.test_num_templates,
self.test_class_names,
) = self.prepare_class_names_from_metadata(test_metadata, train_metadata)
self.sam_enabled = sam_enabled
if self.sam_enabled:
self.sam = SAM(
mobile=sam_mobile,
size_threshold=sam_size_threshold,
erosion=sam_erosion,
erosion_size=sam_erosion_size,
num_points=sam_num_points,
selection_mode=sam_selection_mode,
rm_intersection=sam_rm_intersection,
refinement=sam_refinement,
)
self.sam_size_threshold = sam_size_threshold
self.sam_erosion = sam_erosion
self.sam_erosion_size = sam_erosion_size
self.sam_num_points = sam_num_points
self.sam_selection_mode = sam_selection_mode
self.sam_rm_intersection = sam_rm_intersection
self.sam_refinement = sam_refinement
self.alpha_ema = alpha_ema
def get_module(self, module):
"""Get `nn.ModuleDict` to fit the `MMDistributedDataParallel` interface.
Args:
module (MMDistributedDataParallel | nn.ModuleDict): The input
module that needs processing.
Returns:
nn.ModuleDict: The ModuleDict of multiple networks.
"""
if isinstance(module, DistributedDataParallel):
return module.module
return module
def get_ema_model(self):
return self.get_module(self.sem_seg_head_ema)
def get_model(self):
return self.get_module(self.sem_seg_head)
def init_ema_weights(self):
for param in self.get_ema_model().parameters():
param.detach_()
mp = list(self.get_model().parameters())
mcp = list(self.get_ema_model().parameters())
for i in range(0, len(mp)):
if not mcp[i].data.shape: # scalar tensor
mcp[i].data = mp[i].data.clone()
else:
mcp[i].data[:] = mp[i].data[:].clone()
def update_ema_weights(self, iter):
# alpha_teacher = min(1 - 1 / (iter + 1), self.alpha_ema)
alpha_teacher = self.alpha_ema
for ema_param, param in zip(
self.get_ema_model().parameters(), self.get_model().parameters()
):
if not param.data.shape: # scalar tensor
ema_param.data = (
alpha_teacher * ema_param.data + (1 - alpha_teacher) * param.data
)
else:
ema_param.data[:] = (
alpha_teacher * ema_param[:].data[:]
+ (1 - alpha_teacher) * param[:].data[:]
)
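`update_ema_weights` applies the usual exponential moving average, `ema <- alpha * ema + (1 - alpha) * param`, parameter-wise. A minimal torch-free sketch of the same rule (the function name `ema_update` is illustrative):

```python
def ema_update(ema_params, params, alpha=0.999):
    """One EMA step over flat parameter lists: ema <- alpha*ema + (1-alpha)*param."""
    return [alpha * e + (1.0 - alpha) * p for e, p in zip(ema_params, params)]

# repeated steps pull the EMA toward the raw parameter, geometrically
ema = [0.0]
for _ in range(10):
    ema = ema_update(ema, [1.0], alpha=0.5)
```

With a constant target of 1.0 and alpha = 0.5, after n steps the EMA equals 1 - 0.5**n.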
def prepare_class_names_from_metadata(self, metadata, train_metadata):
def split_labels(x):
res = []
for x_ in x:
x_ = x_.replace(", ", ",")
x_ = x_.split(",") # there can be multiple synonyms for single class
res.append(x_)
return res
# get text classifier
try:
class_names = split_labels(
metadata.stuff_classes
) # it includes both thing and stuff
train_class_names = split_labels(train_metadata.stuff_classes)
except:
# this could be for insseg, where only thing_classes are available
class_names = split_labels(metadata.thing_classes)
train_class_names = split_labels(train_metadata.thing_classes)
train_class_names = {l for label in train_class_names for l in label}
category_overlapping_list = []
for test_class_names in class_names:
is_overlapping = not set(train_class_names).isdisjoint(
set(test_class_names)
)
category_overlapping_list.append(is_overlapping)
category_overlapping_mask = torch.tensor(
category_overlapping_list, dtype=torch.long
)
def fill_all_templates_ensemble(x_=""):
res = []
for x in x_:
for template in VILD_PROMPT:
res.append(template.format(x))
return res, len(res) // len(VILD_PROMPT)
num_templates = []
templated_class_names = []
for x in class_names:
templated_classes, templated_classes_num = fill_all_templates_ensemble(x)
templated_class_names += templated_classes
num_templates.append(
templated_classes_num
) # how many templates for current classes
class_names = templated_class_names
# print("text for classification:", class_names)
return category_overlapping_mask, num_templates, class_names
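`prepare_class_names_from_metadata` expands every synonym of every class with every `VILD_PROMPT` template and records how many prompt groups each class produced. The mechanics, with an abridged two-template list (standalone sketch, not the class method itself):

```python
TEMPLATES = ["a photo of a {}.", "There is a {} in the scene"]  # abridged VILD_PROMPT

def fill_all_templates_ensemble(synonyms):
    # every synonym of a class is expanded with every template (synonym-major order)
    res = [template.format(name) for name in synonyms for template in TEMPLATES]
    # len(res) // len(TEMPLATES) recovers the number of synonyms for this class
    return res, len(res) // len(TEMPLATES)

prompts, num = fill_all_templates_ensemble(["car", "automobile"])
```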
def set_metadata(self, metadata):
self.test_metadata = metadata
(
self.category_overlapping_mask,
self.test_num_templates,
self.test_class_names,
) = self.prepare_class_names_from_metadata(metadata, self.train_metadata)
self.test_text_classifier = None
return
def get_text_classifier(self):
if self.training:
if self.train_text_classifier is None:
text_classifier = []
# this is needed to avoid oom, which may happen when num of class is large
bs = 128
for idx in range(0, len(self.train_class_names), bs):
text_classifier.append(
self.backbone.get_text_classifier(
self.train_class_names[idx : idx + bs], self.device
).detach()
)
text_classifier = torch.cat(text_classifier, dim=0)
# average across templates and normalization.
text_classifier /= text_classifier.norm(dim=-1, keepdim=True)
text_classifier = text_classifier.reshape(
text_classifier.shape[0] // len(VILD_PROMPT),
len(VILD_PROMPT),
text_classifier.shape[-1],
).mean(1)
text_classifier /= text_classifier.norm(dim=-1, keepdim=True)
self.train_text_classifier = text_classifier
return self.train_text_classifier, self.train_num_templates
else:
if self.test_text_classifier is None:
text_classifier = []
# this is needed to avoid oom, which may happen when num of class is large
bs = 128
for idx in range(0, len(self.test_class_names), bs):
text_classifier.append(
self.backbone.get_text_classifier(
self.test_class_names[idx : idx + bs], self.device
).detach()
)
text_classifier = torch.cat(text_classifier, dim=0)
# average across templates and normalization.
text_classifier /= text_classifier.norm(dim=-1, keepdim=True)
text_classifier = text_classifier.reshape(
text_classifier.shape[0] // len(VILD_PROMPT),
len(VILD_PROMPT),
text_classifier.shape[-1],
).mean(1)
text_classifier /= text_classifier.norm(dim=-1, keepdim=True)
self.test_text_classifier = text_classifier
return self.test_text_classifier, self.test_num_templates
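`get_text_classifier` embeds one prompt at a time, normalizes, averages across the `len(VILD_PROMPT)` templates of each class, and renormalizes. The same reshape-and-mean pattern in NumPy (shapes and values here are illustrative, mirroring the torch logic above):

```python
import numpy as np

def average_over_templates(per_prompt, num_templates):
    # per_prompt: (num_classes * num_templates, dim) per-prompt text embeddings
    e = per_prompt / np.linalg.norm(per_prompt, axis=-1, keepdims=True)
    # group templates per class, then average them away
    e = e.reshape(-1, num_templates, e.shape[-1]).mean(axis=1)
    # renormalize so each class embedding is unit length
    return e / np.linalg.norm(e, axis=-1, keepdims=True)

classifier = average_over_templates(np.ones((6, 4)), num_templates=3)
```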
@classmethod
def from_config(cls, cfg):
backbone = build_backbone(cfg)
sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape())
# Loss parameters:
deep_supervision = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION
no_object_weight = cfg.MODEL.MASK_FORMER.NO_OBJECT_WEIGHT
# loss weights
class_weight = cfg.MODEL.MASK_FORMER.CLASS_WEIGHT
dice_weight = cfg.MODEL.MASK_FORMER.DICE_WEIGHT
mask_weight = cfg.MODEL.MASK_FORMER.MASK_WEIGHT
# building criterion
matcher = HungarianMatcher(
cost_class=class_weight,
cost_mask=mask_weight,
cost_dice=dice_weight,
num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS,
)
weight_dict = {
"loss_ce": class_weight,
"loss_mask": mask_weight,
"loss_dice": dice_weight,
}
if deep_supervision:
dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS
aux_weight_dict = {}
for i in range(dec_layers - 1):
aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
weight_dict.update(aux_weight_dict)
losses = ["labels", "masks"]
criterion = SetCriterion(
sem_seg_head.num_classes,
matcher=matcher,
weight_dict=weight_dict,
eos_coef=no_object_weight,
losses=losses,
num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS,
oversample_ratio=cfg.MODEL.MASK_FORMER.OVERSAMPLE_RATIO,
importance_sample_ratio=cfg.MODEL.MASK_FORMER.IMPORTANCE_SAMPLE_RATIO,
)
return {
"backbone": backbone,
"sem_seg_head": sem_seg_head,
"criterion": criterion,
"num_queries": cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES,
"object_mask_threshold": cfg.MODEL.MASK_FORMER.TEST.OBJECT_MASK_THRESHOLD,
"overlap_threshold": cfg.MODEL.MASK_FORMER.TEST.OVERLAP_THRESHOLD,
"train_metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]),
"test_metadata": MetadataCatalog.get(cfg.DATASETS.TEST[0]),
"size_divisibility": cfg.MODEL.MASK_FORMER.SIZE_DIVISIBILITY,
"sem_seg_postprocess_before_inference": (
cfg.MODEL.MASK_FORMER.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE
or cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON
or cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON
),
"pixel_mean": cfg.MODEL.PIXEL_MEAN,
"pixel_std": cfg.MODEL.PIXEL_STD,
# inference
"semantic_on": cfg.MODEL.MASK_FORMER.TEST.SEMANTIC_ON,
"instance_on": cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON,
"panoptic_on": cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON,
"test_topk_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
"geometric_ensemble_alpha": cfg.MODEL.CLOUDS.GEOMETRIC_ENSEMBLE_ALPHA,
"geometric_ensemble_beta": cfg.MODEL.CLOUDS.GEOMETRIC_ENSEMBLE_BETA,
"ensemble_on_valid_mask": cfg.MODEL.CLOUDS.ENSEMBLE_ON_VALID_MASK,
"geometric_ensemble": cfg.MODEL.CLOUDS.GEOMETRIC_ENSEMBLE,
"geometric_ensemble_ema": cfg.MODEL.CLOUDS.GEOMETRIC_ENSEMBLE_EMA,
"sam_enabled": cfg.MODEL.CLOUDS.SAM.ENABLED,
"sam_mobile": cfg.MODEL.CLOUDS.SAM.MOBILE,
"sam_minibatch": cfg.MODEL.CLOUDS.SAM.MINIBATCH,
"sam_size_threshold": cfg.MODEL.CLOUDS.SAM.SIZE_THRESHOLD,
"sam_erosion": cfg.MODEL.CLOUDS.SAM.EROSION,
"sam_erosion_size": cfg.MODEL.CLOUDS.SAM.EROSION_SIZE,
"sam_num_points": cfg.MODEL.CLOUDS.SAM.NUM_POINTS,
"sam_selection_mode": cfg.MODEL.CLOUDS.SAM.SELECTION_MODE,
"sam_rm_intersection": cfg.MODEL.CLOUDS.SAM.RM_INTERSECTION,
"sam_refinement": cfg.MODEL.CLOUDS.SAM.REFINEMENT,
"alpha_ema": cfg.MODEL.CLOUDS.SAM.ALPHA_EMA,
"overwriting": cfg.MODEL.CLOUDS.OVERWRITING,
"iteration_update": cfg.MODEL.CLOUDS.ITERATION_UPDATE,
}
@property
def device(self):
return self.pixel_mean.device
def forward(self, batched_inputs):
"""
Args:
batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
Each item in the list contains the inputs for one image.
For now, each item in the list is a dict that contains:
* "image": Tensor, image in (C, H, W) format.
* "instances": per-region ground truth
* Other information that's included in the original dicts, such as:
"height", "width" (int): the output resolution of the model (may be different
from input resolution), used in inference.
Returns:
list[dict]:
each dict has the results for one image. The dict contains the following keys:
* "sem_seg":
A Tensor that represents the
                    per-pixel segmentation predicted by the head.
The prediction has shape KxHxW that represents the logits of
each class for each pixel.
* "panoptic_seg":
                    A tuple that represents the panoptic output
panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment.
segments_info (list[dict]): Describe each segment in `panoptic_seg`.
Each dict contains keys "id", "category_id", "isthing".
"""
if self.training:
if self.sam_minibatch:
# Init/update ema model
if self.local_iter == 0:
self.init_ema_weights()
# assert _params_equal(self.get_ema_model(), self.get_model())
if not self.local_iter % self.iteration_update:
self.update_ema_weights(self.local_iter)
# assert not _params_equal(self.get_ema_model(), self.get_model())
# assert self.get_ema_model().training
            # We select the source images and the augmented versions of the generated ones
images = [
x["image_aug"].to(self.device)
if "image_aug" in x
else x["image"].to(self.device)
for x in batched_inputs
]
images_norm_list = [(x - self.pixel_mean) / self.pixel_std for x in images]
images_norm = ImageList.from_tensors(images_norm_list, self.size_divisibility)
# We select the clean version of the generated ones
images_clean = [
x["image"].to(self.device) for x in batched_inputs if "image_aug" in x
]
if images_clean:
images_norm_list_clean = [
(x - self.pixel_mean) / self.pixel_std for x in images_clean
]
images_norm_clean = ImageList.from_tensors(
images_norm_list_clean, self.size_divisibility
)
with torch.no_grad():
features_clean = self.backbone(images_norm_clean.tensor)
features = self.backbone(images_norm.tensor)
text_classifier, num_templates = self.get_text_classifier()
# Append void class weight
text_classifier = torch.cat(
[text_classifier, F.normalize(self.void_embedding.weight, dim=-1)], dim=0
)
features["text_classifier"] = text_classifier
features["num_templates"] = num_templates
if images_clean:
features_clean["text_classifier"] = text_classifier
features_clean["num_templates"] = num_templates
outputs = self.sem_seg_head(features)
if self.training:
gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
targets = self.prepare_targets(gt_instances, images_norm)
if images_clean:
(
batched_inputs_target,
order_target,
) = separate_dicts_by_filename(batched_inputs)
for m in self.get_ema_model().modules():
if isinstance(m, _DropoutNd):
m.training = False
if isinstance(m, DropPath):
m.training = False
with torch.no_grad():
outputs_target = self.get_ema_model()(features_clean)
seg_maps_target = self.predict_inference(
outputs_target,
features_clean["clip_vis_dense"],
text_classifier,
num_templates,
images_norm_clean,
batched_inputs_target,
)
targets_target = process_segmentation_maps(seg_maps_target)
if self.sam_enabled:
separate_dict = separate_shapes_list(
targets_target, size_threshold=self.sam_size_threshold
)
coordinate_dict = get_fixed_points(
separate_dict,
apply_erosion=self.sam_erosion,
num_points=self.sam_num_points,
erosion_size=self.sam_erosion_size,
selection_mode=self.sam_selection_mode,
)
last_targets_target = []
for i, dico in enumerate(batched_inputs_target):
image_i = dico["image"]
image_perm = image_i.permute(1, 2, 0).cpu().numpy()
image_perm = self.sam.apply_image(image_perm)
self.sam.set_torch_image(
torch.tensor(image_perm.transpose(2, 0, 1))
.unsqueeze(0)
.to(self.device),
(768, 768),
)
points_coords, count_per_key = dict_to_tensor(
coordinate_dict[i]
)
points_coords = self.sam.apply_coords(
points_coords.cpu().numpy(), (768, 768)
)
if points_coords.shape[0]:
(masks, logits, masks_input,) = self.sam.predict_torch(
point_coords=torch.tensor(points_coords).to(
self.device
),
point_labels=create_ones_tensor(points_coords).to(
self.device
),
multimask_output=True,
)
if self.sam_refinement:
masks_input = select_best_masks(masks_input, logits)
masks, logits, _, = self.sam.predict_torch(
point_coords=torch.tensor(points_coords).to(
self.device
),
point_labels=create_ones_tensor(
points_coords
).to(self.device),
mask_input=masks_input.unsqueeze(1),
multimask_output=True,
)
masks = select_best_masks(masks, logits)
if self.sam_rm_intersection:
masks = remove_intersecting_pixels(masks)
reconstructed_dict = reconstruct_dict(
masks, count_per_key
)
new_targets_target = transform_masks(reconstructed_dict)
last_targets_target.append(new_targets_target)
viz_targets_target = union_of_masks(reconstructed_dict)
visualize_semantic_map_maxed(viz_targets_target)
save_semantic_map_maxed(viz_targets_target, after=True)
else:
last_targets_target.append(targets_target[i])
targets_target = last_targets_target
for i, index in enumerate(order_target):
targets[index] = targets_target[i]
losses = self.criterion(outputs, targets)
for k in list(losses.keys()):
if k in self.criterion.weight_dict:
losses[k] *= self.criterion.weight_dict[k]
else:
# remove this loss if not specified in `weight_dict`
losses.pop(k)
self.local_iter += 1
return losses
else:
mask_cls_results = outputs["pred_logits"]
mask_pred_results = outputs["pred_masks"]
if self.geometric_ensemble:
# We ensemble the pred logits of in-vocab and out-vocab
clip_feature = features["clip_vis_dense"]
mask_for_pooling = F.interpolate(
mask_pred_results,
size=clip_feature.shape[-2:],
mode="bilinear",
align_corners=False,
)
if "convnext" in self.backbone.model_name.lower():
pooled_clip_feature = self.mask_pooling(
clip_feature, mask_for_pooling
)
pooled_clip_feature = self.backbone.visual_prediction_forward(
pooled_clip_feature
)
elif "rn" in self.backbone.model_name.lower():
pooled_clip_feature = self.backbone.visual_prediction_forward(
clip_feature, mask_for_pooling
)
else:
raise NotImplementedError
| out_vocab_cls_results = get_classification_logits( | 4 | 2023-12-15 15:40:58+00:00 | 12k |
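The `from_config` excerpt in the row above enables deep supervision by duplicating every loss weight once per auxiliary decoder layer, suffixing `_{i}`. A minimal standalone sketch of that expansion (the weight values and layer count below are hypothetical, not taken from any config):

```python
def expand_aux_weights(weight_dict, dec_layers):
    """Replicate each loss weight for the dec_layers - 1 auxiliary decoder
    outputs, mirroring the deep-supervision block in the excerpt above."""
    expanded = dict(weight_dict)
    aux_weight_dict = {}
    for i in range(dec_layers - 1):
        aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
    expanded.update(aux_weight_dict)
    return expanded

# Hypothetical per-loss weights, one entry per loss as in the excerpt.
base = {"loss_ce": 2.0, "loss_mask": 5.0, "loss_dice": 5.0}
expanded = expand_aux_weights(base, dec_layers=3)
print(len(expanded))  # 9: the 3 base keys plus 3 keys for each of 2 aux layers
```

The criterion can then look up `loss_ce_0`, `loss_ce_1`, … for intermediate decoder outputs with the same relative weighting as the final layer.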
modelscope/scepter | scepter/modules/solver/train_val_solver.py | [
{
"identifier": "BaseSolver",
"path": "scepter/modules/solver/base_solver.py",
"snippet": "class BaseSolver(object, metaclass=ABCMeta):\n \"\"\" Base Solver.\n To initialize the solver.\n We have to initialize the data, model, optimizer and schedule.\n To process the common proce... | import os.path as osp
import torch
from collections import OrderedDict, defaultdict
from scepter.modules.solver.base_solver import BaseSolver
from scepter.modules.solver.registry import SOLVERS
from scepter.modules.utils.config import dict_to_yaml
from scepter.modules.utils.data import (transfer_data_to_cpu,
transfer_data_to_cuda)
from scepter.modules.utils.distribute import gather_data, we
from scepter.modules.utils.file_system import FS | 9,402 | # -*- coding: utf-8 -*-
# Copyright (c) Alibaba, Inc. and its affiliates.
def _get_value(data: dict, key: str):
""" Recursively get value from data by a multi-level key.
Args:
data (dict):
key (str): 'data', 'meta.path', 'a.b.c'
Returns:
Value.
"""
if not isinstance(data, dict):
return None
if key in data:
return data[key]
elif '.' in key:
par_key = key.split('.')[0]
sub_key = '.'.join(key.split('.')[1:])
if par_key in data:
return _get_value(data[par_key], sub_key)
return None
@SOLVERS.register_class()
| # -*- coding: utf-8 -*-
# Copyright (c) Alibaba, Inc. and its affiliates.
def _get_value(data: dict, key: str):
""" Recursively get value from data by a multi-level key.
Args:
data (dict):
key (str): 'data', 'meta.path', 'a.b.c'
Returns:
Value.
"""
if not isinstance(data, dict):
return None
if key in data:
return data[key]
elif '.' in key:
par_key = key.split('.')[0]
sub_key = '.'.join(key.split('.')[1:])
if par_key in data:
return _get_value(data[par_key], sub_key)
return None
@SOLVERS.register_class() | class TrainValSolver(BaseSolver): | 0 | 2023-12-21 02:01:48+00:00 | 12k |
pigeonai-org/ViDove | src/task.py | [
{
"identifier": "SrtScript",
"path": "src/srt_util/srt.py",
"snippet": "class SrtScript(object):\n def __init__(self, src_lang, tgt_lang, segments, domain=\"General\") -> None:\n self.domain = domain\n self.src_lang = src_lang\n self.tgt_lang = tgt_lang\n self.segments = [... | import threading
import time
import openai
import logging
import subprocess
import torch
import stable_whisper
import shutil
from pytube import YouTube
from os import getenv, getcwd
from pathlib import Path
from enum import Enum, auto
from src.srt_util.srt import SrtScript
from src.srt_util.srt2ass import srt2ass
from time import time, strftime, gmtime, sleep
from src.translators.translation import get_translation, prompt_selector
from datetime import datetime | 8,492 | self.result = None
self.s_t = None
self.t_e = None
self.t_s = time()
# logging setting
logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s"
logging.basicConfig(level=logging.INFO, format=logfmt, handlers=[
logging.FileHandler(
"{}/{}_{}.log".format(task_local_dir, f"task_{task_id}", datetime.now().strftime("%m%d%Y_%H%M%S")),
'w', encoding='utf-8')])
print(f"Task ID: {self.task_id}")
logging.info(f"Task ID: {self.task_id}")
logging.info(f"{self.source_lang} -> {self.target_lang} task in {self.field}")
logging.info(f"Translation Model: {self.translation_model}")
logging.info(f"subtitle_type: {self.output_type['subtitle']}")
        logging.info(f"video_output: {self.output_type['video']}")
        logging.info(f"bilingual_output: {self.output_type['bilingual']}")
logging.info("Pre-process setting:")
for key in self.pre_setting:
logging.info(f"{key}: {self.pre_setting[key]}")
logging.info("Post-process setting:")
for key in self.post_setting:
logging.info(f"{key}: {self.post_setting[key]}")
@staticmethod
def fromYoutubeLink(youtube_url, task_id, task_dir, task_cfg):
"""
Creates a YoutubeTask instance from a YouTube URL.
"""
return YoutubeTask(task_id, task_dir, task_cfg, youtube_url)
@staticmethod
def fromAudioFile(audio_path, task_id, task_dir, task_cfg):
"""
Creates an AudioTask instance from an audio file path.
"""
return AudioTask(task_id, task_dir, task_cfg, audio_path)
@staticmethod
def fromVideoFile(video_path, task_id, task_dir, task_cfg):
"""
Creates a VideoTask instance from a video file path.
"""
return VideoTask(task_id, task_dir, task_cfg, video_path)
@staticmethod
def fromSRTFile(srt_path, task_id, task_dir, task_cfg):
"""
Creates a SRTTask instance from a srt file path.
"""
return SRTTask(task_id, task_dir, task_cfg, srt_path)
# Module 1 ASR: audio --> SRT_script
def get_srt_class(self):
"""
Handles the ASR module to convert audio to SRT script format.
"""
# Instead of using the script_en variable directly, we'll use script_input
# TODO: setup ASR module like translator
self.status = TaskStatus.INITIALIZING_ASR
if self.SRT_Script != None:
logging.info("SRT input mode, skip ASR Module")
return
method = self.ASR_setting["whisper_config"]["method"]
whisper_model = self.ASR_setting["whisper_config"]["whisper_model"]
src_srt_path = self.task_local_dir.joinpath(f"task_{self.task_id}_{self.source_lang}.srt")
if not Path.exists(src_srt_path):
# extract script from audio
logging.info("extract script from audio")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
logging.info(f"Module 1: ASR inference method: {method}")
init_prompt = "Hello, welcome to my lecture." if self.source_lang == "EN" else ""
if method == "api":
with open(self.audio_path, 'rb') as audio_file:
transcript = openai.Audio.transcribe(model="whisper-1", file=audio_file, response_format="srt", language=self.source_lang.lower(), prompt=init_prompt)
elif method == "stable":
model = stable_whisper.load_model(whisper_model, device)
transcript = model.transcribe(str(self.audio_path), regroup=False,
initial_prompt=init_prompt)
(
transcript
.split_by_punctuation(['.', '。', '?'])
.merge_by_gap(.15, max_words=3)
.merge_by_punctuation([' '])
.split_by_punctuation(['.', '。', '?'])
)
transcript = transcript.to_dict()
transcript = transcript['segments']
# after get the transcript, release the gpu resource
torch.cuda.empty_cache()
else:
                raise RuntimeError(f"unavailable ASR inference method: {method}")
if isinstance(transcript, str):
self.SRT_Script = SrtScript.parse_from_srt_file(self.source_lang, self.target_lang, domain = self.field, srt_str = transcript.rstrip())
else:
self.SRT_Script = SrtScript(self.source_lang, self.target_lang, transcript, self.field)
# save the srt script to local
self.SRT_Script.write_srt_file_src(src_srt_path)
# Module 2: SRT preprocess: perform preprocess steps
def preprocess(self):
"""
Performs preprocessing steps on the SRT script.
"""
self.status = TaskStatus.PRE_PROCESSING
logging.info("--------------------Start Preprocessing SRT class--------------------")
if self.pre_setting["sentence_form"]:
self.SRT_Script.form_whole_sentence()
if self.pre_setting["spell_check"]:
self.SRT_Script.spell_check_term()
if self.pre_setting["term_correct"]:
self.SRT_Script.correct_with_force_term()
processed_srt_path_src = str(Path(self.task_local_dir) / f'{self.task_id}_processed.srt')
self.SRT_Script.write_srt_file_src(processed_srt_path_src)
if self.output_type["subtitle"] == "ass":
logging.info("write English .srt file to .ass")
|
class TaskStatus(str, Enum):
"""
An enumeration class representing the different statuses a task can have in the translation pipeline.
TODO: add translation progress indicator (%).
"""
CREATED = 'CREATED'
INITIALIZING_ASR = 'INITIALIZING_ASR'
PRE_PROCESSING = 'PRE_PROCESSING'
TRANSLATING = 'TRANSLATING'
POST_PROCESSING = 'POST_PROCESSING'
OUTPUT_MODULE = 'OUTPUT_MODULE'
class Task:
"""
A class representing a task in the translation pipeline. It includes methods for handling different stages of the task.
    If one wants to add a new entry type (e.g. add support for different video formats),
one should extend this class and override the `run` method.
"""
@property
def status(self):
with self.__status_lock:
return self.__status
@status.setter
def status(self, new_status):
"""
Sets the new status of the task, ensuring thread safety with a lock.
"""
with self.__status_lock:
self.__status = new_status
def __init__(self, task_id, task_local_dir, task_cfg):
"""
Constructor for initializing a task with its ID, local directory, and configuration settings.
"""
self.__status_lock = threading.Lock()
self.__status = TaskStatus.CREATED
self.gpu_status = 0
openai.api_key = getenv("OPENAI_API_KEY")
self.task_id = task_id
self.task_local_dir = task_local_dir
self.ASR_setting = task_cfg["ASR"]
self.translation_setting = task_cfg["translation"]
self.translation_model = self.translation_setting["model"]
self.output_type = task_cfg["output_type"]
self.target_lang = task_cfg["target_lang"]
self.source_lang = task_cfg["source_lang"]
self.field = task_cfg["field"]
self.pre_setting = task_cfg["pre_process"]
self.post_setting = task_cfg["post_process"]
self.audio_path = None
self.SRT_Script = None
self.result = None
self.s_t = None
self.t_e = None
self.t_s = time()
# logging setting
logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s"
logging.basicConfig(level=logging.INFO, format=logfmt, handlers=[
logging.FileHandler(
"{}/{}_{}.log".format(task_local_dir, f"task_{task_id}", datetime.now().strftime("%m%d%Y_%H%M%S")),
'w', encoding='utf-8')])
print(f"Task ID: {self.task_id}")
logging.info(f"Task ID: {self.task_id}")
logging.info(f"{self.source_lang} -> {self.target_lang} task in {self.field}")
logging.info(f"Translation Model: {self.translation_model}")
logging.info(f"subtitle_type: {self.output_type['subtitle']}")
        logging.info(f"video_output: {self.output_type['video']}")
        logging.info(f"bilingual_output: {self.output_type['bilingual']}")
logging.info("Pre-process setting:")
for key in self.pre_setting:
logging.info(f"{key}: {self.pre_setting[key]}")
logging.info("Post-process setting:")
for key in self.post_setting:
logging.info(f"{key}: {self.post_setting[key]}")
@staticmethod
def fromYoutubeLink(youtube_url, task_id, task_dir, task_cfg):
"""
Creates a YoutubeTask instance from a YouTube URL.
"""
return YoutubeTask(task_id, task_dir, task_cfg, youtube_url)
@staticmethod
def fromAudioFile(audio_path, task_id, task_dir, task_cfg):
"""
Creates an AudioTask instance from an audio file path.
"""
return AudioTask(task_id, task_dir, task_cfg, audio_path)
@staticmethod
def fromVideoFile(video_path, task_id, task_dir, task_cfg):
"""
Creates a VideoTask instance from a video file path.
"""
return VideoTask(task_id, task_dir, task_cfg, video_path)
@staticmethod
def fromSRTFile(srt_path, task_id, task_dir, task_cfg):
"""
Creates a SRTTask instance from a srt file path.
"""
return SRTTask(task_id, task_dir, task_cfg, srt_path)
# Module 1 ASR: audio --> SRT_script
def get_srt_class(self):
"""
Handles the ASR module to convert audio to SRT script format.
"""
# Instead of using the script_en variable directly, we'll use script_input
# TODO: setup ASR module like translator
self.status = TaskStatus.INITIALIZING_ASR
if self.SRT_Script != None:
logging.info("SRT input mode, skip ASR Module")
return
method = self.ASR_setting["whisper_config"]["method"]
whisper_model = self.ASR_setting["whisper_config"]["whisper_model"]
src_srt_path = self.task_local_dir.joinpath(f"task_{self.task_id}_{self.source_lang}.srt")
if not Path.exists(src_srt_path):
# extract script from audio
logging.info("extract script from audio")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
logging.info(f"Module 1: ASR inference method: {method}")
init_prompt = "Hello, welcome to my lecture." if self.source_lang == "EN" else ""
if method == "api":
with open(self.audio_path, 'rb') as audio_file:
transcript = openai.Audio.transcribe(model="whisper-1", file=audio_file, response_format="srt", language=self.source_lang.lower(), prompt=init_prompt)
elif method == "stable":
model = stable_whisper.load_model(whisper_model, device)
transcript = model.transcribe(str(self.audio_path), regroup=False,
initial_prompt=init_prompt)
(
transcript
.split_by_punctuation(['.', '。', '?'])
.merge_by_gap(.15, max_words=3)
.merge_by_punctuation([' '])
.split_by_punctuation(['.', '。', '?'])
)
transcript = transcript.to_dict()
transcript = transcript['segments']
# after get the transcript, release the gpu resource
torch.cuda.empty_cache()
else:
                raise RuntimeError(f"unavailable ASR inference method: {method}")
if isinstance(transcript, str):
self.SRT_Script = SrtScript.parse_from_srt_file(self.source_lang, self.target_lang, domain = self.field, srt_str = transcript.rstrip())
else:
self.SRT_Script = SrtScript(self.source_lang, self.target_lang, transcript, self.field)
# save the srt script to local
self.SRT_Script.write_srt_file_src(src_srt_path)
# Module 2: SRT preprocess: perform preprocess steps
def preprocess(self):
"""
Performs preprocessing steps on the SRT script.
"""
self.status = TaskStatus.PRE_PROCESSING
logging.info("--------------------Start Preprocessing SRT class--------------------")
if self.pre_setting["sentence_form"]:
self.SRT_Script.form_whole_sentence()
if self.pre_setting["spell_check"]:
self.SRT_Script.spell_check_term()
if self.pre_setting["term_correct"]:
self.SRT_Script.correct_with_force_term()
processed_srt_path_src = str(Path(self.task_local_dir) / f'{self.task_id}_processed.srt')
self.SRT_Script.write_srt_file_src(processed_srt_path_src)
if self.output_type["subtitle"] == "ass":
logging.info("write English .srt file to .ass") | assSub_src = srt2ass(processed_srt_path_src, "default", "No", "Modest") | 1 | 2023-12-20 01:46:47+00:00 | 12k |
YyzHarry/shortcut-ood-fairness | train.py | [
{
"identifier": "datasets",
"path": "dataset/datasets.py",
"snippet": "DATASETS = [\n 'MIMIC',\n 'CheXpert',\n 'NIH',\n 'PadChest',\n 'VinDr',\n 'SIIM',\n 'ISIC',\n 'ODIR'\n]\nCXR_DATASETS = [\n 'MIMIC',\n 'CheXpert',\n 'NIH',\n 'PadChest',\n 'VinDr',\n 'SIIM'\n... | import argparse
import collections
import json
import os
import random
import sys
import time
import numpy as np
import pandas as pd
import PIL
import torch
import torchvision
import torch.utils.data
import pickle
import hparams_registry
import wandb
import hashlib
from tensorboard_logger import Logger
from pathlib import Path
from torch.utils.data import DataLoader
from dataset import datasets
from learning import algorithms, early_stopping, swad_utils
from utils import misc, eval_helper
from dataset.fast_dataloader import InfiniteDataLoader
from collections import OrderedDict | 7,521 | for k, v in sorted(vars(args).items()):
print('\t{}: {}'.format(k, v))
if args.hparams_seed == 0:
hparams = hparams_registry.default_hparams(args.algorithm, args.dataset)
else:
hparams = hparams_registry.random_hparams(args.algorithm, args.dataset, misc.seed_hash(args.hparams_seed))
if args.hparams:
hparams.update(json.loads(args.hparams))
hparams.update({
'image_arch': args.image_arch,
'data_augmentation': args.aug,
'task': args.task,
'attr': args.attr,
'group_def': args.group_def
})
if args.log_online:
wandb.init(project='subpop_fairness', config={**vars(args), **hparams},
name=f"train_{args.dataset}_{args.task}_{args.algorithm}_{args.attr}_"
f"{hashlib.md5(str({**vars(args), **hparams}).encode('utf-8')).hexdigest()[:8]}_"
f"{os.environ['SLURM_JOB_ID'] if 'SLURM_JOB_ID' in os.environ else ''}")
print('HParams:')
for k, v in sorted(hparams.items()):
print('\t{}: {}'.format(k, v))
with open(os.path.join(output_dir, 'args.json'), 'w') as f:
json.dump(vars(args), f, indent=4)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ["TOKENIZERS_PARALLELISM"] = "false"
torch.multiprocessing.set_sharing_strategy('file_system')
device = "cuda" if torch.cuda.is_available() else "cpu"
def make_combined_dataset(names, sset, group_def, override_attr=None):
ind_datasets = []
for ds in names:
ind_datasets.append(vars(datasets)[ds](args.data_dir, sset, hparams, group_def=group_def, override_attr=override_attr))
return datasets.ConcatImageDataset(ind_datasets)
if len(args.dataset) == 1:
if args.dataset[0] in vars(datasets):
train_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'tr', hparams, group_def=args.group_def)
val_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'va', hparams, group_def='group')
test_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'te', hparams, group_def='group')
else:
raise NotImplementedError
else:
train_dataset = make_combined_dataset(args.dataset, 'tr', args.group_def)
val_dataset = make_combined_dataset(args.dataset, 'va', 'group')
test_dataset = make_combined_dataset(args.dataset, 'te', 'group')
if args.algorithm == 'DFR':
train_datasets = []
for ds in args.dataset:
train_datasets.append(vars(datasets)[ds](
args.data_dir, 'va', hparams, group_def=args.group_def, subsample_type='group'))
train_dataset = datasets.ConcatImageDataset(train_datasets)
elif args.algorithm == 'StratifiedERM':
assert args.stratified_erm_subset is not None
train_dataset = datasets.SubsetImageDataset(
train_dataset, idxs=np.argwhere(np.array(train_dataset.a) == args.stratified_erm_subset).squeeze())
val_dataset = datasets.SubsetImageDataset(
val_dataset, idxs=np.argwhere(np.array(val_dataset.a) == args.stratified_erm_subset).squeeze())
test_dataset = datasets.SubsetImageDataset(
test_dataset, idxs=np.argwhere(np.array(test_dataset.a) == args.stratified_erm_subset).squeeze())
num_workers = train_dataset.N_WORKERS
input_shape = train_dataset.INPUT_SHAPE
num_labels = train_dataset.num_labels
num_attributes = train_dataset.num_attributes
data_type = train_dataset.data_type
n_steps = args.steps or train_dataset.N_STEPS
checkpoint_freq = args.checkpoint_freq or train_dataset.CHECKPOINT_FREQ
hparams.update({
"steps": n_steps
})
print(f"Dataset:\n\t[train]\t{len(train_dataset)}"
f"\n\t[val]\t{len(val_dataset)}")
if hparams['group_balanced']:
# if attribute not available, groups degenerate to classes
train_weights = np.asarray(train_dataset.weights_g)
train_weights /= np.sum(train_weights)
elif hparams['attr_balanced']:
train_weights = np.asarray(train_dataset.weights_a)
train_weights /= np.sum(train_weights)
else:
train_weights = None
train_loader = InfiniteDataLoader(
dataset=train_dataset,
weights=train_weights,
batch_size=min(len(train_dataset), hparams['batch_size']),
num_workers=num_workers
)
split_names = ['va', 'te']
eval_loaders = [DataLoader(
dataset=dset,
batch_size=max(128, hparams['batch_size'] * 2),
num_workers=num_workers,
shuffle=False)
for dset in [val_dataset, test_dataset]
]
algorithm_class = algorithms.get_algorithm_class(args.algorithm)
algorithm = algorithm_class(data_type, input_shape, num_labels, num_attributes, len(train_dataset), hparams,
grp_sizes=train_dataset.group_sizes, attr_sizes=train_dataset.attr_sizes)
es_group = args.es_metric.split(':')[0]
es_metric = args.es_metric.split(':')[1]
|
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Shortcut Learning in Chest X-rays')
# training
parser.add_argument('--store_name', type=str, default='debug')
parser.add_argument('--dataset', type=str, default=["MIMIC"], nargs='+')
parser.add_argument('--task', type=str, default="No Finding", choices=datasets.TASKS + datasets.ATTRS)
parser.add_argument('--attr', type=str, default="sex", choices=datasets.ATTRS)
parser.add_argument('--group_def', type=str, default="group", choices=['group', 'label'])
parser.add_argument('--algorithm', type=str, default="ERM", choices=algorithms.ALGORITHMS)
# others
parser.add_argument('--output_dir', type=str, default='output')
parser.add_argument('--data_dir', type=str, default='data')
parser.add_argument('--hparams', type=str, help='JSON-serialized hparams dict')
parser.add_argument('--hparams_seed', type=int, default=0, help='Seed for random hparams (0 for "default hparams")')
parser.add_argument('--seed', type=int, default=0, help='Seed for everything else')
parser.add_argument('--steps', type=int, default=None)
parser.add_argument('--log_online', help='Log online using wandb', action='store_true')
parser.add_argument('--skip_ood_eval', help='skip evals on OOD datasets', action='store_true')
parser.add_argument('--log_all', help='Log all val metrics at each step to tb and wandb', action='store_true')
parser.add_argument('--stratified_erm_subset', type=int, default=None)
# two-stage related
parser.add_argument('--stage1_folder', type=str)
# early stopping
parser.add_argument('--use_es', action='store_true')
parser.add_argument('--es_strategy', choices=['metric'], default='metric')
parser.add_argument('--es_metric', type=str, default='min_group:accuracy')
parser.add_argument('--es_patience', type=int, default=5, help='Stop after this many checkpoints w/ no improvement')
# checkpoints
parser.add_argument('--resume', '-r', type=str, default='')
parser.add_argument('--checkpoint_freq', type=int, default=None, help='Checkpoint every N steps')
parser.add_argument('--skip_model_save', action='store_true')
parser.add_argument('--debug', action='store_true')
# architectures and pre-training sources
parser.add_argument('--image_arch', default='densenet_sup_in1k',
choices=['densenet_sup_in1k', 'resnet_sup_in1k', 'resnet_sup_in21k', 'resnet_simclr_in1k',
'resnet_barlow_in1k', 'vit_sup_in1k', 'vit_sup_in21k', 'vit_sup_swag', 'vit_clip_oai',
'vit_clip_laion', 'vit_dino_in1k', 'resnet_dino_in1k'])
# data augmentations
parser.add_argument('--aug', default='basic2',
choices=['none', 'basic', 'basic2', 'auto_aug', 'rand_aug', 'trivial_aug', 'augmix'])
args = parser.parse_args()
start_step = 0
misc.prepare_folders(args)
output_dir = os.path.join(args.output_dir, args.store_name)
if not args.debug:
sys.stdout = misc.Tee(os.path.join(output_dir, 'out.txt'))
sys.stderr = misc.Tee(os.path.join(output_dir, 'err.txt'))
tb_logger = Logger(logdir=output_dir, flush_secs=2)
print("Environment:")
print("\tPython: {}".format(sys.version.split(" ")[0]))
print("\tPyTorch: {}".format(torch.__version__))
print("\tTorchvision: {}".format(torchvision.__version__))
print("\tCUDA: {}".format(torch.version.cuda))
print("\tCUDNN: {}".format(torch.backends.cudnn.version()))
print("\tNumPy: {}".format(np.__version__))
print("\tPIL: {}".format(PIL.__version__))
print('Args:')
for k, v in sorted(vars(args).items()):
print('\t{}: {}'.format(k, v))
if args.hparams_seed == 0:
hparams = hparams_registry.default_hparams(args.algorithm, args.dataset)
else:
hparams = hparams_registry.random_hparams(args.algorithm, args.dataset, misc.seed_hash(args.hparams_seed))
if args.hparams:
hparams.update(json.loads(args.hparams))
hparams.update({
'image_arch': args.image_arch,
'data_augmentation': args.aug,
'task': args.task,
'attr': args.attr,
'group_def': args.group_def
})
if args.log_online:
wandb.init(project='subpop_fairness', config={**vars(args), **hparams},
name=f"train_{args.dataset}_{args.task}_{args.algorithm}_{args.attr}_"
f"{hashlib.md5(str({**vars(args), **hparams}).encode('utf-8')).hexdigest()[:8]}_"
f"{os.environ['SLURM_JOB_ID'] if 'SLURM_JOB_ID' in os.environ else ''}")
print('HParams:')
for k, v in sorted(hparams.items()):
print('\t{}: {}'.format(k, v))
with open(os.path.join(output_dir, 'args.json'), 'w') as f:
json.dump(vars(args), f, indent=4)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ["TOKENIZERS_PARALLELISM"] = "false"
torch.multiprocessing.set_sharing_strategy('file_system')
device = "cuda" if torch.cuda.is_available() else "cpu"
def make_combined_dataset(names, sset, group_def, override_attr=None):
ind_datasets = []
for ds in names:
ind_datasets.append(vars(datasets)[ds](args.data_dir, sset, hparams, group_def=group_def, override_attr=override_attr))
return datasets.ConcatImageDataset(ind_datasets)
if len(args.dataset) == 1:
if args.dataset[0] in vars(datasets):
train_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'tr', hparams, group_def=args.group_def)
val_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'va', hparams, group_def='group')
test_dataset = vars(datasets)[args.dataset[0]](args.data_dir, 'te', hparams, group_def='group')
else:
raise NotImplementedError
else:
train_dataset = make_combined_dataset(args.dataset, 'tr', args.group_def)
val_dataset = make_combined_dataset(args.dataset, 'va', 'group')
test_dataset = make_combined_dataset(args.dataset, 'te', 'group')
if args.algorithm == 'DFR':
train_datasets = []
for ds in args.dataset:
train_datasets.append(vars(datasets)[ds](
args.data_dir, 'va', hparams, group_def=args.group_def, subsample_type='group'))
train_dataset = datasets.ConcatImageDataset(train_datasets)
elif args.algorithm == 'StratifiedERM':
assert args.stratified_erm_subset is not None
train_dataset = datasets.SubsetImageDataset(
train_dataset, idxs=np.argwhere(np.array(train_dataset.a) == args.stratified_erm_subset).squeeze())
val_dataset = datasets.SubsetImageDataset(
val_dataset, idxs=np.argwhere(np.array(val_dataset.a) == args.stratified_erm_subset).squeeze())
test_dataset = datasets.SubsetImageDataset(
test_dataset, idxs=np.argwhere(np.array(test_dataset.a) == args.stratified_erm_subset).squeeze())
num_workers = train_dataset.N_WORKERS
input_shape = train_dataset.INPUT_SHAPE
num_labels = train_dataset.num_labels
num_attributes = train_dataset.num_attributes
data_type = train_dataset.data_type
n_steps = args.steps or train_dataset.N_STEPS
checkpoint_freq = args.checkpoint_freq or train_dataset.CHECKPOINT_FREQ
hparams.update({
"steps": n_steps
})
print(f"Dataset:\n\t[train]\t{len(train_dataset)}"
f"\n\t[val]\t{len(val_dataset)}")
if hparams['group_balanced']:
# if attribute not available, groups degenerate to classes
train_weights = np.asarray(train_dataset.weights_g)
train_weights /= np.sum(train_weights)
elif hparams['attr_balanced']:
train_weights = np.asarray(train_dataset.weights_a)
train_weights /= np.sum(train_weights)
else:
train_weights = None
train_loader = InfiniteDataLoader(
dataset=train_dataset,
weights=train_weights,
batch_size=min(len(train_dataset), hparams['batch_size']),
num_workers=num_workers
)
split_names = ['va', 'te']
eval_loaders = [DataLoader(
dataset=dset,
batch_size=max(128, hparams['batch_size'] * 2),
num_workers=num_workers,
shuffle=False)
for dset in [val_dataset, test_dataset]
]
algorithm_class = algorithms.get_algorithm_class(args.algorithm)
algorithm = algorithm_class(data_type, input_shape, num_labels, num_attributes, len(train_dataset), hparams,
grp_sizes=train_dataset.group_sizes, attr_sizes=train_dataset.attr_sizes)
es_group = args.es_metric.split(':')[0]
es_metric = args.es_metric.split(':')[1] | es = early_stopping.EarlyStopping( | 2 | 2023-12-15 04:10:31+00:00 | 12k |
RomGai/BrainVis | cascade_diffusion.py | [
{
"identifier": "PLMSSampler",
"path": "dc_ldm/models/diffusion/plms.py",
"snippet": "class PLMSSampler(object):\n def __init__(self, model, schedule=\"linear\", **kwargs):\n super().__init__()\n self.model = model\n self.ddpm_num_timesteps = model.num_timesteps\n self.sch... | import torch
import os
import numpy as np
import torchvision.transforms as transforms
import argparse
from omegaconf import OmegaConf
from dc_ldm.models.diffusion.plms import PLMSSampler
from einops import rearrange, repeat
from dc_ldm.util import instantiate_from_config
from torch.utils.data import Dataset, DataLoader
from dataset import Dataset as selfdataset
from model.BrainVisModels import TimeEncoder, AlignNet,TimeFreqEncoder,FreqEncoder
from args import args, Test_data, Train_data_all, Train_data, Train_data_all_with_image_name, Train_data_with_image_name, Test_data_with_image_name
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
| 7,533 | '[32]': 'n07753592',
'[19]': 'n07873807',
'[9]': 'n11939491',
'[33]': 'n13054560'
}
parser = argparse.ArgumentParser(description="Template")
parser.add_argument('-mp','--model_params', default='', nargs='*', help='list of key=value pairs of model options')
opt = parser.parse_args()
# Path
datapath='data/EEG_Feature_Label/'
img_file_type='.JPEG'
device = "cuda"
test_img_names_file=datapath+'test_image_names.pth'
test_seq_file=datapath+'test_seqs.pth'
dff_model_path = "pretrained_model/v1-5-pruned-emaonly.ckpt"
dff_yaml_path = "pretrained_model/config15.yaml"
test_pred_file=datapath+'test_pred.pth'
output_path="picture"
logger=None
ddim_steps=40
global_pool=True
use_time_cond=False
clip_tune=False
cls_tune=False
def normalize(img):
if img.shape[-1] == 3:
img = rearrange(img, 'h w c -> c h w')
img = torch.tensor(img)
img = img * 2.0 - 1.0 # to -1 ~ 1
return img
def channel_last(img):
if img.shape[-1] == 3:
return img
return rearrange(img, 'c h w -> h w c')
class Dataset(Dataset):
def __init__(self, img_names_file,seq_file,labels_file):
self.image_names = torch.load(img_names_file)
self.seqs = torch.load(seq_file)
self.labels = torch.load(labels_file)
def __len__(self):
return len(self.seqs)
def __getitem__(self, idx):
input_vec=torch.tensor(self.seqs[idx]).to("cuda")
img_label = self.image_names[idx].split("_")[0]
img_path = "data/image/" + img_label + "/" + self.image_names[idx] + img_file_type
image = Image.open(img_path).convert('RGB')
img_transform_test = transforms.Compose([
normalize,
transforms.Resize((512, 512)),
channel_last
])
gt_image = np.array(image) / 255.0
gt_image = img_transform_test(gt_image)
prompt = propmt_dict[lable_number_dict[str(self.labels[idx])]]
return input_vec, gt_image,self.image_names[idx],prompt
#Load data
batch_size = 1
test_dataset = Dataset(test_img_names_file,test_seq_file, test_pred_file)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
train_dataset = selfdataset(device=args.device, mode='pretrain', data=Train_data_all, wave_len=args.wave_length)
args.data_shape = train_dataset.shape()
#Load AlignNet
time_model=TimeEncoder(args)
time_model=time_model.to("cuda")
freq_model_options = {key: int(value) if value.isdigit() else (float(value) if value[0].isdigit() else value) for
(key, value) in [x.split("=") for x in opt.model_params]}
freq_model = FreqEncoder(**freq_model_options)
timefreq_model = TimeFreqEncoder(time_model, freq_model, args)
timefreq_model=timefreq_model.to("cuda")
time_size=128
freq_size=128
clip_size=int(77*768)
model_eegtoclip=AlignNet(time_size,freq_size,clip_size,timefreq_model)
eegtoclip_state_dict = torch.load('exp/epilepsy/test/clipfinetune_model.pkl', map_location="cuda")#device)
model_eegtoclip.load_state_dict(eegtoclip_state_dict)
model_eegtoclip.to("cuda")
model_eegtoclip.eval()
#Load stable diffusion
ckp_path = os.path.join(dff_model_path)
config_path = os.path.join(dff_yaml_path)
config = OmegaConf.load(config_path)
config.model.params.unet_config.params.use_time_cond = use_time_cond
config.model.params.unet_config.params.global_pool = global_pool
cond_dim = config.model.params.unet_config.params.context_dim
model = instantiate_from_config(config.model)
pl_sd = torch.load(ckp_path, map_location=device)['state_dict']
m, u = model.load_state_dict(pl_sd, strict=False)
model.cond_stage_trainable = False
model.ddim_steps = ddim_steps
model.re_init_ema()
model.p_channels = config.model.params.channels
model.p_image_size = config.model.params.image_size
model.ch_mult = config.model.params.first_stage_config.params.ddconfig.ch_mult
model.clip_tune = clip_tune
model.cls_tune = cls_tune
model = model.to(device)
|
propmt_dict = {'n02106662': 'german shepherd dog',
'n02124075': 'cat ',
'n02281787': 'lycaenid butterfly',
'n02389026': 'sorrel horse',
'n02492035': 'Cebus capucinus',
'n02504458': 'African elephant',
'n02510455': 'panda',
'n02607072': 'anemone fish',
'n02690373': 'airliner',
'n02906734': 'broom',
'n02951358': 'canoe or kayak',
'n02992529': 'cellular telephone',
'n03063599': 'coffee mug',
'n03100240': 'old convertible',
'n03180011': 'desktop computer',
'n03197337': 'digital watch',
'n03272010': 'electric guitar',
'n03272562': 'electric locomotive',
'n03297495': 'espresso maker',
'n03376595': 'folding chair',
'n03445777': 'golf ball',
'n03452741': 'grand piano',
'n03584829': 'smoothing iron',
'n03590841': 'Orange jack-o’-lantern',
'n03709823': 'mailbag',
'n03773504': 'missile',
'n03775071': 'mitten,glove',
'n03792782': 'mountain bike, all-terrain bike',
'n03792972': 'mountain tent',
'n03877472': 'pajama',
'n03888257': 'parachute',
'n03982430': 'pool table, billiard table, snooker table ',
'n04044716': 'radio telescope',
'n04069434': 'reflex camera',
'n04086273': 'revolver, six-shooter',
'n04120489': 'running shoe',
'n07753592': 'banana',
'n07873807': 'pizza',
'n11939491': 'daisy',
'n13054560': 'bolete'
}
lable_number_dict={
'[12]': 'n02106662',
'[39]': 'n02124075',
'[11]': 'n02281787',
'[0]': 'n02389026',
'[21]': 'n02492035',
'[35]': 'n02504458',
'[8]': 'n02510455',
'[3]': 'n02607072',
'[36]': 'n02690373',
'[18]': 'n02906734',
'[10]': 'n02951358',
'[15]': 'n02992529',
'[5]': 'n03063599',
'[24]': 'n03100240',
'[17]': 'n03180011',
'[34]': 'n03197337',
'[28]': 'n03272010',
'[37]': 'n03272562',
'[4]': 'n03297495',
'[25]': 'n03376595',
'[16]': 'n03445777',
'[30]': 'n03452741',
'[2]': 'n03584829',
'[14]': 'n03590841',
'[23]': 'n03709823',
'[20]': 'n03773504',
'[27]': 'n03775071',
'[6]': 'n03792782',
'[31]': 'n03792972',
'[26]': 'n03877472',
'[1]': 'n03888257',
'[22]': 'n03982430',
'[38]': 'n04044716',
'[29]': 'n04069434',
'[7]': 'n04086273',
'[13]': 'n04120489',
'[32]': 'n07753592',
'[19]': 'n07873807',
'[9]': 'n11939491',
'[33]': 'n13054560'
}
parser = argparse.ArgumentParser(description="Template")
parser.add_argument('-mp','--model_params', default='', nargs='*', help='list of key=value pairs of model options')
opt = parser.parse_args()
# Path
datapath='data/EEG_Feature_Label/'
img_file_type='.JPEG'
device = "cuda"
test_img_names_file=datapath+'test_image_names.pth'
test_seq_file=datapath+'test_seqs.pth'
dff_model_path = "pretrained_model/v1-5-pruned-emaonly.ckpt"
dff_yaml_path = "pretrained_model/config15.yaml"
test_pred_file=datapath+'test_pred.pth'
output_path="picture"
logger=None
ddim_steps=40
global_pool=True
use_time_cond=False
clip_tune=False
cls_tune=False
def normalize(img):
if img.shape[-1] == 3:
img = rearrange(img, 'h w c -> c h w')
img = torch.tensor(img)
img = img * 2.0 - 1.0 # to -1 ~ 1
return img
def channel_last(img):
if img.shape[-1] == 3:
return img
return rearrange(img, 'c h w -> h w c')
class Dataset(Dataset):
def __init__(self, img_names_file,seq_file,labels_file):
self.image_names = torch.load(img_names_file)
self.seqs = torch.load(seq_file)
self.labels = torch.load(labels_file)
def __len__(self):
return len(self.seqs)
def __getitem__(self, idx):
input_vec=torch.tensor(self.seqs[idx]).to("cuda")
img_label = self.image_names[idx].split("_")[0]
img_path = "data/image/" + img_label + "/" + self.image_names[idx] + img_file_type
image = Image.open(img_path).convert('RGB')
img_transform_test = transforms.Compose([
normalize,
transforms.Resize((512, 512)),
channel_last
])
gt_image = np.array(image) / 255.0
gt_image = img_transform_test(gt_image)
prompt = propmt_dict[lable_number_dict[str(self.labels[idx])]]
return input_vec, gt_image,self.image_names[idx],prompt
#Load data
batch_size = 1
test_dataset = Dataset(test_img_names_file,test_seq_file, test_pred_file)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
train_dataset = selfdataset(device=args.device, mode='pretrain', data=Train_data_all, wave_len=args.wave_length)
args.data_shape = train_dataset.shape()
#Load AlignNet
time_model=TimeEncoder(args)
time_model=time_model.to("cuda")
freq_model_options = {key: int(value) if value.isdigit() else (float(value) if value[0].isdigit() else value) for
(key, value) in [x.split("=") for x in opt.model_params]}
freq_model = FreqEncoder(**freq_model_options)
timefreq_model = TimeFreqEncoder(time_model, freq_model, args)
timefreq_model=timefreq_model.to("cuda")
time_size=128
freq_size=128
clip_size=int(77*768)
model_eegtoclip=AlignNet(time_size,freq_size,clip_size,timefreq_model)
eegtoclip_state_dict = torch.load('exp/epilepsy/test/clipfinetune_model.pkl', map_location="cuda")#device)
model_eegtoclip.load_state_dict(eegtoclip_state_dict)
model_eegtoclip.to("cuda")
model_eegtoclip.eval()
#Load stable diffusion
ckp_path = os.path.join(dff_model_path)
config_path = os.path.join(dff_yaml_path)
config = OmegaConf.load(config_path)
config.model.params.unet_config.params.use_time_cond = use_time_cond
config.model.params.unet_config.params.global_pool = global_pool
cond_dim = config.model.params.unet_config.params.context_dim
model = instantiate_from_config(config.model)
pl_sd = torch.load(ckp_path, map_location=device)['state_dict']
m, u = model.load_state_dict(pl_sd, strict=False)
model.cond_stage_trainable = False
model.ddim_steps = ddim_steps
model.re_init_ema()
model.p_channels = config.model.params.channels
model.p_image_size = config.model.params.image_size
model.ch_mult = config.model.params.first_stage_config.params.ddconfig.ch_mult
model.clip_tune = clip_tune
model.cls_tune = cls_tune
model = model.to(device)
| sampler = PLMSSampler(model)
| 0 | 2023-12-16 12:52:14+00:00 | 12k |
tonnetonne814/PL-Bert-VITS2 | train_ms.py | [
{
"identifier": "DistributedBucketSampler",
"path": "data_utils.py",
"snippet": "class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):\n \"\"\"\n Maintain similar input lengths in a batch.\n Length groups are specified by boundaries.\n Ex) boundaries = [b1, b2, b3]... | import argparse
import itertools
import json
import math
import os
import logging
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import tqdm
import commons
import models
import utils
from torch import nn, optim
from torch.cuda.amp import GradScaler, autocast
from torch.nn import functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from data_utils import (DistributedBucketSampler, TextAudioSpeakerCollate,
TextAudioSpeakerLoader)
from losses import discriminator_loss, feature_loss, generator_loss, kl_loss
from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
from models import (AVAILABLE_DURATION_DISCRIMINATOR_TYPES,
AVAILABLE_FLOW_TYPES,
DurationDiscriminatorV1, DurationDiscriminatorV2,
MultiPeriodDiscriminator, SynthesizerTrn)
from PL_BERT_ja.text.symbols import symbols | 10,111 |
numba_logger = logging.getLogger('numba')
numba_logger.setLevel(logging.WARNING)
# from tensorboardX import SummaryWriter
torch.backends.cudnn.benchmark = True
global_step = 0
def main():
"""Assume Single Node Multi GPUs Training Only"""
assert torch.cuda.is_available(), "CPU training is not allowed."
n_gpus = torch.cuda.device_count()
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "6060"
hps = utils.get_hparams()
mp.spawn(
run,
nprocs=n_gpus,
args=(
n_gpus,
hps,
),
)
def run(rank, n_gpus, hps):
net_dur_disc = None
global global_step
if rank == 0:
logger = utils.get_logger(hps.model_dir)
logger.info(hps)
utils.check_git_hash(hps.model_dir)
writer = SummaryWriter(log_dir=hps.model_dir)
writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
dist.init_process_group(
backend="nccl", init_method="env://", world_size=n_gpus, rank=rank
)
torch.manual_seed(hps.train.seed)
torch.cuda.set_device(rank)
if (
"use_mel_posterior_encoder" in hps.model.keys()
and hps.model.use_mel_posterior_encoder == True
):
print("Using mel posterior encoder for VITS2")
posterior_channels = 128 # vits2
hps.data.use_mel_posterior_encoder = True
else:
print("Using lin posterior encoder for VITS1")
posterior_channels = hps.data.filter_length // 2 + 1
hps.data.use_mel_posterior_encoder = False
train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
train_sampler = DistributedBucketSampler(
train_dataset,
hps.train.batch_size,
[32, 300, 500, 700, 900, 1100, 1300, 1500, 3000],
num_replicas=n_gpus,
rank=rank,
shuffle=True,
)
|
numba_logger = logging.getLogger('numba')
numba_logger.setLevel(logging.WARNING)
# from tensorboardX import SummaryWriter
torch.backends.cudnn.benchmark = True
global_step = 0
def main():
"""Assume Single Node Multi GPUs Training Only"""
assert torch.cuda.is_available(), "CPU training is not allowed."
n_gpus = torch.cuda.device_count()
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "6060"
hps = utils.get_hparams()
mp.spawn(
run,
nprocs=n_gpus,
args=(
n_gpus,
hps,
),
)
def run(rank, n_gpus, hps):
net_dur_disc = None
global global_step
if rank == 0:
logger = utils.get_logger(hps.model_dir)
logger.info(hps)
utils.check_git_hash(hps.model_dir)
writer = SummaryWriter(log_dir=hps.model_dir)
writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
dist.init_process_group(
backend="nccl", init_method="env://", world_size=n_gpus, rank=rank
)
torch.manual_seed(hps.train.seed)
torch.cuda.set_device(rank)
if (
"use_mel_posterior_encoder" in hps.model.keys()
and hps.model.use_mel_posterior_encoder == True
):
print("Using mel posterior encoder for VITS2")
posterior_channels = 128 # vits2
hps.data.use_mel_posterior_encoder = True
else:
print("Using lin posterior encoder for VITS1")
posterior_channels = hps.data.filter_length // 2 + 1
hps.data.use_mel_posterior_encoder = False
train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
train_sampler = DistributedBucketSampler(
train_dataset,
hps.train.batch_size,
[32, 300, 500, 700, 900, 1100, 1300, 1500, 3000],
num_replicas=n_gpus,
rank=rank,
shuffle=True,
) | collate_fn = TextAudioSpeakerCollate() | 1 | 2023-12-16 05:34:02+00:00 | 12k |
camenduru/FreeInit-hf | animatediff/models/unet.py | [
{
"identifier": "CrossAttnDownBlock3D",
"path": "animatediff/models/unet_blocks.py",
"snippet": "class CrossAttnDownBlock3D(nn.Module):\n def __init__(\n self,\n in_channels: int,\n out_channels: int,\n temb_channels: int,\n dropout: float = 0.0,\n num_layers... | from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.modeling_utils import ModelMixin
from diffusers.utils import BaseOutput, logging
from diffusers.models.embeddings import TimestepEmbedding, Timesteps
from .unet_blocks import (
CrossAttnDownBlock3D,
CrossAttnUpBlock3D,
DownBlock3D,
UNetMidBlock3DCrossAttn,
UpBlock3D,
get_down_block,
get_up_block,
)
from .resnet import InflatedConv3d, InflatedGroupNorm
from diffusers.utils import WEIGHTS_NAME
import os
import json
import pdb
import torch
import torch.nn as nn
import torch.utils.checkpoint | 9,416 | output_channel = reversed_block_out_channels[0]
for i, up_block_type in enumerate(up_block_types):
res = 2 ** (3 - i)
is_final_block = i == len(block_out_channels) - 1
prev_output_channel = output_channel
output_channel = reversed_block_out_channels[i]
input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
# add upsample block for all BUT final layer
if not is_final_block:
add_upsample = True
self.num_upsamplers += 1
else:
add_upsample = False
up_block = get_up_block(
up_block_type,
num_layers=layers_per_block + 1,
in_channels=input_channel,
out_channels=output_channel,
prev_output_channel=prev_output_channel,
temb_channels=time_embed_dim,
add_upsample=add_upsample,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=reversed_attention_head_dim[i],
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_inflated_groupnorm=use_inflated_groupnorm,
use_motion_module=use_motion_module and (res in motion_module_resolutions),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.up_blocks.append(up_block)
prev_output_channel = output_channel
# out
if use_inflated_groupnorm:
self.conv_norm_out = InflatedGroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
else:
self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
self.conv_act = nn.SiLU()
self.conv_out = InflatedConv3d(block_out_channels[0], out_channels, kernel_size=3, padding=1)
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
`"max"`, maxium amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
must be a multiple of `slice_size`.
"""
sliceable_head_dims = []
def fn_recursive_retrieve_slicable_dims(module: torch.nn.Module):
if hasattr(module, "set_attention_slice"):
sliceable_head_dims.append(module.sliceable_head_dim)
for child in module.children():
fn_recursive_retrieve_slicable_dims(child)
# retrieve number of attention layers
for module in self.children():
fn_recursive_retrieve_slicable_dims(module)
num_slicable_layers = len(sliceable_head_dims)
if slice_size == "auto":
# half the attention head size is usually a good trade-off between
# speed and memory
slice_size = [dim // 2 for dim in sliceable_head_dims]
elif slice_size == "max":
# make smallest slice possible
slice_size = num_slicable_layers * [1]
slice_size = num_slicable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
if len(slice_size) != len(sliceable_head_dims):
raise ValueError(
f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
)
for i in range(len(slice_size)):
size = slice_size[i]
dim = sliceable_head_dims[i]
if size is not None and size > dim:
raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
# Recursively walk through all the children.
# Any child that exposes the set_attention_slice method
# gets the message
def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
if hasattr(module, "set_attention_slice"):
module.set_attention_slice(slice_size.pop())
for child in module.children():
fn_recursive_set_attention_slice(child, slice_size)
reversed_slice_size = list(reversed(slice_size))
for module in self.children():
fn_recursive_set_attention_slice(module, reversed_slice_size)
def _set_gradient_checkpointing(self, module, value=False):
| # Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
@dataclass
class UNet3DConditionOutput(BaseOutput):
sample: torch.FloatTensor
class UNet3DConditionModel(ModelMixin, ConfigMixin):
_supports_gradient_checkpointing = True
@register_to_config
def __init__(
self,
sample_size: Optional[int] = None,
in_channels: int = 4,
out_channels: int = 4,
center_input_sample: bool = False,
flip_sin_to_cos: bool = True,
freq_shift: int = 0,
down_block_types: Tuple[str] = (
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
mid_block_type: str = "UNetMidBlock3DCrossAttn",
up_block_types: Tuple[str] = (
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D"
),
only_cross_attention: Union[bool, Tuple[bool]] = False,
block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
layers_per_block: int = 2,
downsample_padding: int = 1,
mid_block_scale_factor: float = 1,
act_fn: str = "silu",
norm_num_groups: int = 32,
norm_eps: float = 1e-5,
cross_attention_dim: int = 1280,
attention_head_dim: Union[int, Tuple[int]] = 8,
dual_cross_attention: bool = False,
use_linear_projection: bool = False,
class_embed_type: Optional[str] = None,
num_class_embeds: Optional[int] = None,
upcast_attention: bool = False,
resnet_time_scale_shift: str = "default",
use_inflated_groupnorm=False,
# Additional
use_motion_module = False,
motion_module_resolutions = ( 1,2,4,8 ),
motion_module_mid_block = False,
motion_module_decoder_only = False,
motion_module_type = None,
motion_module_kwargs = {},
unet_use_cross_frame_attention = None,
unet_use_temporal_attention = None,
):
super().__init__()
self.sample_size = sample_size
time_embed_dim = block_out_channels[0] * 4
# input
self.conv_in = InflatedConv3d(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1))
# time
self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift)
timestep_input_dim = block_out_channels[0]
self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim)
# class embedding
if class_embed_type is None and num_class_embeds is not None:
self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim)
elif class_embed_type == "timestep":
self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim)
elif class_embed_type == "identity":
self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim)
else:
self.class_embedding = None
self.down_blocks = nn.ModuleList([])
self.mid_block = None
self.up_blocks = nn.ModuleList([])
if isinstance(only_cross_attention, bool):
only_cross_attention = [only_cross_attention] * len(down_block_types)
if isinstance(attention_head_dim, int):
attention_head_dim = (attention_head_dim,) * len(down_block_types)
# down
output_channel = block_out_channels[0]
for i, down_block_type in enumerate(down_block_types):
res = 2 ** i
input_channel = output_channel
output_channel = block_out_channels[i]
is_final_block = i == len(block_out_channels) - 1
down_block = get_down_block(
down_block_type,
num_layers=layers_per_block,
in_channels=input_channel,
out_channels=output_channel,
temb_channels=time_embed_dim,
add_downsample=not is_final_block,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=attention_head_dim[i],
downsample_padding=downsample_padding,
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_inflated_groupnorm=use_inflated_groupnorm,
use_motion_module=use_motion_module and (res in motion_module_resolutions) and (not motion_module_decoder_only),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.down_blocks.append(down_block)
# mid
if mid_block_type == "UNetMidBlock3DCrossAttn":
self.mid_block = UNetMidBlock3DCrossAttn(
in_channels=block_out_channels[-1],
temb_channels=time_embed_dim,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
output_scale_factor=mid_block_scale_factor,
resnet_time_scale_shift=resnet_time_scale_shift,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=attention_head_dim[-1],
resnet_groups=norm_num_groups,
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
upcast_attention=upcast_attention,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_inflated_groupnorm=use_inflated_groupnorm,
use_motion_module=use_motion_module and motion_module_mid_block,
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
else:
raise ValueError(f"unknown mid_block_type : {mid_block_type}")
# count how many layers upsample the videos
self.num_upsamplers = 0
# up
reversed_block_out_channels = list(reversed(block_out_channels))
reversed_attention_head_dim = list(reversed(attention_head_dim))
only_cross_attention = list(reversed(only_cross_attention))
output_channel = reversed_block_out_channels[0]
for i, up_block_type in enumerate(up_block_types):
res = 2 ** (3 - i)
is_final_block = i == len(block_out_channels) - 1
prev_output_channel = output_channel
output_channel = reversed_block_out_channels[i]
input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
# add upsample block for all BUT final layer
if not is_final_block:
add_upsample = True
self.num_upsamplers += 1
else:
add_upsample = False
up_block = get_up_block(
up_block_type,
num_layers=layers_per_block + 1,
in_channels=input_channel,
out_channels=output_channel,
prev_output_channel=prev_output_channel,
temb_channels=time_embed_dim,
add_upsample=add_upsample,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=reversed_attention_head_dim[i],
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_inflated_groupnorm=use_inflated_groupnorm,
use_motion_module=use_motion_module and (res in motion_module_resolutions),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.up_blocks.append(up_block)
prev_output_channel = output_channel
# out
if use_inflated_groupnorm:
self.conv_norm_out = InflatedGroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
else:
self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
self.conv_act = nn.SiLU()
self.conv_out = InflatedConv3d(block_out_channels[0], out_channels, kernel_size=3, padding=1)
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
`"max"`, maxium amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
must be a multiple of `slice_size`.
"""
sliceable_head_dims = []
def fn_recursive_retrieve_slicable_dims(module: torch.nn.Module):
if hasattr(module, "set_attention_slice"):
sliceable_head_dims.append(module.sliceable_head_dim)
for child in module.children():
fn_recursive_retrieve_slicable_dims(child)
# retrieve number of attention layers
for module in self.children():
fn_recursive_retrieve_slicable_dims(module)
num_slicable_layers = len(sliceable_head_dims)
if slice_size == "auto":
# half the attention head size is usually a good trade-off between
# speed and memory
slice_size = [dim // 2 for dim in sliceable_head_dims]
elif slice_size == "max":
# make smallest slice possible
slice_size = num_slicable_layers * [1]
slice_size = num_slicable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
if len(slice_size) != len(sliceable_head_dims):
raise ValueError(
f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
)
for i in range(len(slice_size)):
size = slice_size[i]
dim = sliceable_head_dims[i]
if size is not None and size > dim:
raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
# Recursively walk through all the children.
# Any child that exposes the set_attention_slice method
# gets the message
def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
if hasattr(module, "set_attention_slice"):
module.set_attention_slice(slice_size.pop())
for child in module.children():
fn_recursive_set_attention_slice(child, slice_size)
reversed_slice_size = list(reversed(slice_size))
for module in self.children():
fn_recursive_set_attention_slice(module, reversed_slice_size)
def _set_gradient_checkpointing(self, module, value=False): | if isinstance(module, (CrossAttnDownBlock3D, DownBlock3D, CrossAttnUpBlock3D, UpBlock3D)): | 2 | 2023-12-19 21:06:32+00:00 | 12k |
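The `set_attention_slice` code above combines a recursive module walk with slice-size resolution. A standalone sketch of the same idea, using a hypothetical `Block` class in place of `torch.nn.Module` (this is an illustration of the technique, not the diffusers implementation):

```python
# Hypothetical sketch: `Block` stands in for torch.nn.Module and
# `head_dim` for `sliceable_head_dim` in the code above.

class Block:
    def __init__(self, head_dim=None, children=()):
        self.head_dim = head_dim          # sliceable head dim, or None
        self.child_blocks = list(children)

    def sliceable(self):
        return self.head_dim is not None

def collect_sliceable_dims(block, dims):
    # mirrors fn_recursive_retrieve_slicable_dims: depth-first collection
    if block.sliceable():
        dims.append(block.head_dim)
    for child in block.child_blocks:
        collect_sliceable_dims(child, dims)

def resolve_slice_size(slice_size, dims):
    # mirrors the "auto" / "max" / int / list handling
    if slice_size == "auto":
        return [d // 2 for d in dims]      # half the head dim: speed/memory trade-off
    if slice_size == "max":
        return [1] * len(dims)             # smallest possible slices
    if not isinstance(slice_size, list):
        slice_size = [slice_size] * len(dims)
    if len(slice_size) != len(dims):
        raise ValueError(f"expected {len(dims)} entries, got {len(slice_size)}")
    for size, dim in zip(slice_size, dims):
        if size is not None and size > dim:
            raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
    return slice_size

root = Block(children=[Block(head_dim=8), Block(children=[Block(head_dim=4)])])
dims = []
collect_sliceable_dims(root, dims)
print(dims)                               # [8, 4]
print(resolve_slice_size("auto", dims))   # [4, 2]
```

The real method then pushes the resolved list back down the tree in reverse, popping one size per sliceable module.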
zyrant/SPGroup3D | tools/data_converter/indoor_converter.py | [
{
"identifier": "S3DISData",
"path": "tools/data_converter/s3dis_data_utils.py",
"snippet": "class S3DISData(object):\n \"\"\"S3DIS data.\n\n Generate s3dis infos for s3dis_converter.\n\n Args:\n root_path (str): Root path of the raw data.\n split (str, optional): Set split type o... | import os
import mmcv
import numpy as np
from tools.data_converter.s3dis_data_utils import S3DISData, S3DISSegData
from tools.data_converter.scannet_data_utils import ScanNetData, ScanNetSegData
from tools.data_converter.scannet_md40_data_utils import ScanNetData_md40, ScanNetSegData_md40
from tools.data_converter.sunrgbd_data_utils import SUNRGBDData | 8,711 | # Copyright (c) OpenMMLab. All rights reserved.
def create_indoor_info_file(data_path,
pkl_prefix='sunrgbd',
save_path=None,
use_v1=False,
workers=4):
"""Create indoor information file.
Get information of the raw data and save it to the pkl file.
Args:
data_path (str): Path of the data.
pkl_prefix (str, optional): Prefix of the pkl to be saved.
Default: 'sunrgbd'.
save_path (str, optional): Path of the pkl to be saved. Default: None.
use_v1 (bool, optional): Whether to use v1. Default: False.
workers (int, optional): Number of threads to be used. Default: 4.
"""
assert os.path.exists(data_path)
assert pkl_prefix in ['sunrgbd', 'scannet', 's3dis', 'scannet_md40'], \
f'unsupported indoor dataset {pkl_prefix}'
save_path = data_path if save_path is None else save_path
assert os.path.exists(save_path)
# generate infos for both detection and segmentation task
if pkl_prefix in ['sunrgbd', 'scannet', 'scannet_md40']:
train_filename = os.path.join(save_path,
f'{pkl_prefix}_infos_train.pkl')
val_filename = os.path.join(save_path, f'{pkl_prefix}_infos_val.pkl')
if pkl_prefix == 'sunrgbd':
# SUN RGB-D has a train-val split
train_dataset = SUNRGBDData(
root_path=data_path, split='train', use_v1=use_v1)
val_dataset = SUNRGBDData(
root_path=data_path, split='val', use_v1=use_v1)
elif pkl_prefix == 'scannet':
# ScanNet has a train-val-test split
train_dataset = ScanNetData(root_path=data_path, split='train')
val_dataset = ScanNetData(root_path=data_path, split='val')
test_dataset = ScanNetData(root_path=data_path, split='test')
test_filename = os.path.join(save_path,
f'{pkl_prefix}_infos_test.pkl')
else:
# ScanNet has a train-val-test split
train_dataset = ScanNetData_md40(root_path=data_path, split='train')
val_dataset = ScanNetData_md40(root_path=data_path, split='val')
test_dataset = ScanNetData_md40(root_path=data_path, split='test')
test_filename = os.path.join(save_path,
f'{pkl_prefix}_infos_test.pkl')
infos_train = train_dataset.get_infos(
num_workers=workers, has_label=True)
mmcv.dump(infos_train, train_filename, 'pkl')
print(f'{pkl_prefix} info train file is saved to {train_filename}')
infos_val = val_dataset.get_infos(num_workers=workers, has_label=True)
mmcv.dump(infos_val, val_filename, 'pkl')
print(f'{pkl_prefix} info val file is saved to {val_filename}')
if pkl_prefix == 'scannet_md40':
infos_test = test_dataset.get_infos(
num_workers=workers, has_label=False)
mmcv.dump(infos_test, test_filename, 'pkl')
print(f'{pkl_prefix} info test file is saved to {test_filename}')
if pkl_prefix == 'scannet':
infos_test = test_dataset.get_infos(
num_workers=workers, has_label=False)
mmcv.dump(infos_test, test_filename, 'pkl')
print(f'{pkl_prefix} info test file is saved to {test_filename}')
# generate infos for the semantic segmentation task
# e.g. re-sampled scene indexes and label weights
# scene indexes are used to re-sample rooms with different number of points
# label weights are used to balance classes with different number of points
if pkl_prefix == 'scannet':
# label weight computation function is adopted from
# https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py#L24
 | train_dataset = ScanNetSegData( | 3 | 2023-12-21 12:50:35+00:00 | 12k |
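The converter above builds every output path as `{pkl_prefix}_infos_{split}.pkl`, giving SUN RGB-D a train/val split while the ScanNet variants also get a test split. A small sketch of that naming scheme (the dispatch table here is illustrative, not taken from the repo):

```python
import os

def info_filename(save_path, pkl_prefix, split):
    # mirrors os.path.join(save_path, f'{pkl_prefix}_infos_{split}.pkl') above
    return os.path.join(save_path, f"{pkl_prefix}_infos_{split}.pkl")

# sunrgbd has train/val only; the scannet variants also produce a test file
SPLITS = {
    "sunrgbd": ("train", "val"),
    "scannet": ("train", "val", "test"),
    "scannet_md40": ("train", "val", "test"),
}

def planned_outputs(save_path, pkl_prefix):
    return [info_filename(save_path, pkl_prefix, s) for s in SPLITS[pkl_prefix]]

print(planned_outputs("/data", "sunrgbd"))
# ['/data/sunrgbd_infos_train.pkl', '/data/sunrgbd_infos_val.pkl'] on POSIX
```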
jdejaegh/irm-kmi-ha | custom_components/irm_kmi/coordinator.py | [
{
"identifier": "IrmKmiApiClient",
"path": "custom_components/irm_kmi/api.py",
"snippet": "class IrmKmiApiClient:\n \"\"\"API client for IRM KMI weather data\"\"\"\n COORD_DECIMALS = 6\n\n def __init__(self, session: aiohttp.ClientSession) -> None:\n self._session = session\n self... | import asyncio
import logging
import async_timeout
import pytz
from datetime import datetime, timedelta
from typing import Any, List, Tuple
from homeassistant.components.weather import Forecast
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import ATTR_LATITUDE, ATTR_LONGITUDE, CONF_ZONE
from homeassistant.core import HomeAssistant
from homeassistant.helpers import issue_registry
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.update_coordinator import (DataUpdateCoordinator,
UpdateFailed)
from .api import IrmKmiApiClient, IrmKmiApiError
from .const import CONF_DARK_MODE, CONF_STYLE, DOMAIN
from .const import IRM_KMI_TO_HA_CONDITION_MAP as CDT_MAP
from .const import LANGS
from .const import MAP_WARNING_ID_TO_SLUG as SLUG_MAP
from .const import OPTION_STYLE_SATELLITE, OUT_OF_BENELUX, STYLE_TO_PARAM_MAP
from .data import (AnimationFrameData, CurrentWeatherData, IrmKmiForecast,
ProcessedCoordinatorData, RadarAnimationData, WarningData)
from .rain_graph import RainGraph
from .utils import disable_from_config, get_config_value | 9,065 | wind_gust_speed = float(now_hourly.get('windPeakSpeedKm', None)) if now_hourly is not None else None
| """DataUpdateCoordinator for the IRM KMI integration."""
_LOGGER = logging.getLogger(__name__)
class IrmKmiCoordinator(DataUpdateCoordinator):
"""Coordinator to update data from IRM KMI"""
def __init__(self, hass: HomeAssistant, entry: ConfigEntry):
"""Initialize the coordinator."""
super().__init__(
hass,
_LOGGER,
# Name of the data. For logging purposes.
name="IRM KMI weather",
# Polling interval. Will only be polled if there are subscribers.
update_interval=timedelta(minutes=7),
)
self._api_client = IrmKmiApiClient(session=async_get_clientsession(hass))
self._zone = get_config_value(entry, CONF_ZONE)
self._dark_mode = get_config_value(entry, CONF_DARK_MODE)
self._style = get_config_value(entry, CONF_STYLE)
self._config_entry = entry
async def _async_update_data(self) -> ProcessedCoordinatorData:
"""Fetch data from API endpoint.
This is the place to pre-process the data to lookup tables
so entities can quickly look up their data.
"""
if (zone := self.hass.states.get(self._zone)) is None:
raise UpdateFailed(f"Zone '{self._zone}' not found")
try:
# Note: asyncio.TimeoutError and aiohttp.ClientError are already
# handled by the data update coordinator.
async with async_timeout.timeout(10):
api_data = await self._api_client.get_forecasts_coord(
{'lat': zone.attributes[ATTR_LATITUDE],
'long': zone.attributes[ATTR_LONGITUDE]}
)
_LOGGER.debug(f"Observation for {api_data.get('cityName', '')}: {api_data.get('obs', '{}')}")
except IrmKmiApiError as err:
raise UpdateFailed(f"Error communicating with API: {err}")
if api_data.get('cityName', None) in OUT_OF_BENELUX:
# TODO create a repair when this triggers
_LOGGER.info(f"Config state: {self._config_entry.state}")
            _LOGGER.error(f"The zone {self._zone} is now out of Benelux and forecast is only available in Benelux. "
                          f"Associated device is now disabled. Move the zone back in Benelux and re-enable to fix "
                          f"this.")
disable_from_config(self.hass, self._config_entry)
issue_registry.async_create_issue(
self.hass,
DOMAIN,
"zone_moved",
is_fixable=True,
severity=issue_registry.IssueSeverity.ERROR,
translation_key='zone_moved',
data={'config_entry_id': self._config_entry.entry_id, 'zone': self._zone},
translation_placeholders={'zone': self._zone}
)
return ProcessedCoordinatorData()
return await self.process_api_data(api_data)
async def async_refresh(self) -> None:
"""Refresh data and log errors."""
await self._async_refresh(log_failures=True, raise_on_entry_error=True)
async def _async_animation_data(self, api_data: dict) -> RadarAnimationData:
"""From the API data passed in, call the API to get all the images and create the radar animation data object.
Frames from the API are merged with the background map and the location marker to create each frame."""
animation_data = api_data.get('animation', {}).get('sequence')
localisation_layer_url = api_data.get('animation', {}).get('localisationLayer')
country = api_data.get('country', '')
if animation_data is None or localisation_layer_url is None or not isinstance(animation_data, list):
return RadarAnimationData()
try:
images_from_api = await self.download_images_from_api(animation_data, country, localisation_layer_url)
except IrmKmiApiError:
_LOGGER.warning(f"Could not get images for weather radar")
return RadarAnimationData()
localisation = images_from_api[0]
images_from_api = images_from_api[1:]
lang = self.hass.config.language if self.hass.config.language in LANGS else 'en'
radar_animation = RadarAnimationData(
hint=api_data.get('animation', {}).get('sequenceHint', {}).get(lang),
unit=api_data.get('animation', {}).get('unit', {}).get(lang),
location=localisation
)
rain_graph = self.create_rain_graph(radar_animation, animation_data, country, images_from_api)
radar_animation['svg_animated'] = rain_graph.get_svg_string()
radar_animation['svg_still'] = rain_graph.get_svg_string(still_image=True)
return radar_animation
async def process_api_data(self, api_data: dict) -> ProcessedCoordinatorData:
"""From the API data, create the object that will be used in the entities"""
return ProcessedCoordinatorData(
current_weather=IrmKmiCoordinator.current_weather_from_data(api_data),
daily_forecast=IrmKmiCoordinator.daily_list_to_forecast(api_data.get('for', {}).get('daily')),
hourly_forecast=IrmKmiCoordinator.hourly_list_to_forecast(api_data.get('for', {}).get('hourly')),
animation=await self._async_animation_data(api_data=api_data),
warnings=self.warnings_from_data(api_data.get('for', {}).get('warning'))
)
async def download_images_from_api(self,
animation_data: list,
country: str,
localisation_layer_url: str) -> tuple[Any]:
"""Download a batch of images to create the radar frames."""
coroutines = list()
coroutines.append(
self._api_client.get_image(localisation_layer_url,
params={'th': 'd' if country == 'NL' or not self._dark_mode else 'n'}))
for frame in animation_data:
if frame.get('uri', None) is not None:
coroutines.append(
self._api_client.get_image(frame.get('uri'), params={'rs': STYLE_TO_PARAM_MAP[self._style]}))
async with async_timeout.timeout(20):
images_from_api = await asyncio.gather(*coroutines)
_LOGGER.debug(f"Just downloaded {len(images_from_api)} images")
return images_from_api
@staticmethod
def current_weather_from_data(api_data: dict) -> CurrentWeatherData:
"""Parse the API data to build a CurrentWeatherData."""
# Process data to get current hour forecast
now_hourly = None
hourly_forecast_data = api_data.get('for', {}).get('hourly')
if not (hourly_forecast_data is None
or not isinstance(hourly_forecast_data, list)
or len(hourly_forecast_data) == 0):
for current in hourly_forecast_data[:2]:
if datetime.now().strftime('%H') == current['hour']:
now_hourly = current
break
# Get UV index
module_data = api_data.get('module', None)
uv_index = None
if not (module_data is None or not isinstance(module_data, list)):
for module in module_data:
if module.get('type', None) == 'uv':
uv_index = module.get('data', {}).get('levelValue')
try:
pressure = float(now_hourly.get('pressure', None)) if now_hourly is not None else None
except TypeError:
pressure = None
try:
wind_speed = float(now_hourly.get('windSpeedKm', None)) if now_hourly is not None else None
except TypeError:
wind_speed = None
try:
wind_gust_speed = float(now_hourly.get('windPeakSpeedKm', None)) if now_hourly is not None else None
except TypeError:
wind_gust_speed = None
try:
temperature = float(api_data.get('obs', {}).get('temp'))
except TypeError:
temperature = None
current_weather = CurrentWeatherData(
condition=CDT_MAP.get((api_data.get('obs', {}).get('ww'), api_data.get('obs', {}).get('dayNight')), None),
temperature=temperature,
wind_speed=wind_speed,
wind_gust_speed=wind_gust_speed,
wind_bearing=now_hourly.get('windDirectionText', {}).get('en') if now_hourly is not None else None,
pressure=pressure,
uv_index=uv_index
)
if api_data.get('country', '') == 'NL':
current_weather['wind_speed'] = api_data.get('obs', {}).get('windSpeedKm')
current_weather['wind_bearing'] = api_data.get('obs', {}).get('windDirectionText', {}).get('en')
return current_weather
@staticmethod
def hourly_list_to_forecast(data: List[dict] | None) -> List[Forecast] | None:
"""Parse data from the API to create a list of hourly forecasts"""
if data is None or not isinstance(data, list) or len(data) == 0:
return None
forecasts = list()
day = datetime.now()
for f in data:
if 'dateShow' in f:
day = day + timedelta(days=1)
hour = f.get('hour', None)
if hour is None:
continue
precipitation_probability = None
if f.get('precipChance', None) is not None:
precipitation_probability = int(f.get('precipChance'))
ww = None
if f.get('ww', None) is not None:
ww = int(f.get('ww'))
forecast = Forecast(
datetime=day.strftime(f'%Y-%m-%dT{hour}:00:00'),
condition=CDT_MAP.get((ww, f.get('dayNight', None)), None),
native_precipitation=f.get('precipQuantity', None),
native_temperature=f.get('temp', None),
native_templow=None,
native_wind_gust_speed=f.get('windPeakSpeedKm', None),
native_wind_speed=f.get('windSpeedKm', None),
precipitation_probability=precipitation_probability,
wind_bearing=f.get('windDirectionText', {}).get('en'),
native_pressure=f.get('pressure', None),
is_daytime=f.get('dayNight', None) == 'd'
)
forecasts.append(forecast)
return forecasts
@staticmethod
def daily_list_to_forecast(data: List[dict] | None) -> List[Forecast] | None:
"""Parse data from the API to create a list of daily forecasts"""
if data is None or not isinstance(data, list) or len(data) == 0:
return None
forecasts = list()
n_days = 0
for (idx, f) in enumerate(data):
precipitation = None
if f.get('precipQuantity', None) is not None:
try:
precipitation = float(f.get('precipQuantity'))
except TypeError:
pass
native_wind_gust_speed = None
if f.get('wind', {}).get('peakSpeed') is not None:
try:
native_wind_gust_speed = int(f.get('wind', {}).get('peakSpeed'))
except TypeError:
pass
is_daytime = f.get('dayNight', None) == 'd'
forecast = IrmKmiForecast(
datetime=(datetime.now() + timedelta(days=n_days)).strftime('%Y-%m-%d')
if is_daytime else datetime.now().strftime('%Y-%m-%d'),
condition=CDT_MAP.get((f.get('ww1', None), f.get('dayNight', None)), None),
native_precipitation=precipitation,
native_temperature=f.get('tempMax', None),
native_templow=f.get('tempMin', None),
native_wind_gust_speed=native_wind_gust_speed,
native_wind_speed=f.get('wind', {}).get('speed'),
precipitation_probability=f.get('precipChance', None),
wind_bearing=f.get('wind', {}).get('dirText', {}).get('en'),
is_daytime=is_daytime,
text_fr=f.get('text', {}).get('fr'),
text_nl=f.get('text', {}).get('nl')
)
forecasts.append(forecast)
if is_daytime or idx == 0:
n_days += 1
return forecasts
def create_rain_graph(self,
radar_animation: RadarAnimationData,
api_animation_data: List[dict],
country: str,
images_from_api: Tuple[bytes], | ) -> RainGraph: | 17 | 2023-12-17 16:35:01+00:00 | 12k |
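In `hourly_list_to_forecast` above, a `dateShow` key marks the first entry of a new day, so the running date is advanced before the timestamp is formatted. That rollover can be isolated into a small sketch:

```python
from datetime import datetime, timedelta

def hourly_datetimes(entries, start):
    # `dateShow` marks the first forecast entry of a new day
    day = start
    out = []
    for f in entries:
        if "dateShow" in f:
            day = day + timedelta(days=1)
        hour = f.get("hour")
        if hour is None:
            continue  # skip malformed entries, as the coordinator does
        out.append(day.strftime(f"%Y-%m-%dT{hour}:00:00"))
    return out

entries = [
    {"hour": "22"},
    {"hour": "23"},
    {"dateShow": "01/01", "hour": "00"},  # first hour of the next day
]
print(hourly_datetimes(entries, datetime(2024, 1, 1)))
# ['2024-01-01T22:00:00', '2024-01-01T23:00:00', '2024-01-02T00:00:00']
```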
v3ucn/Bert-vits2-V2.2 | oldVersion/V101/text/chinese.py | [
{
"identifier": "punctuation",
"path": "oldVersion/V101/text/symbols.py",
"snippet": ""
},
{
"identifier": "ToneSandhi",
"path": "oldVersion/V101/text/tone_sandhi.py",
"snippet": "class ToneSandhi:\n def __init__(self):\n self.must_neural_tone_words = {\n \"麻烦\",\n ... | import os
import re
import cn2an
import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style
from .symbols import punctuation
from .tone_sandhi import ToneSandhi
from text import chinese_bert
from text.chinese_bert import get_bert_feature | 7,596 |
current_file_path = os.path.dirname(__file__)
pinyin_to_symbol_map = {
line.split("\t")[0]: line.strip().split("\t")[1]
for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines()
}
rep_map = {
":": ",",
";": ",",
",": ",",
"。": ".",
"!": "!",
"?": "?",
"\n": ".",
"·": ",",
"、": ",",
"...": "…",
"$": ".",
"“": "'",
"”": "'",
"‘": "'",
"’": "'",
"(": "'",
")": "'",
"(": "'",
")": "'",
"《": "'",
"》": "'",
"【": "'",
"】": "'",
"[": "'",
"]": "'",
"—": "-",
"~": "-",
"~": "-",
"「": "'",
"」": "'",
}
| tone_modifier = ToneSandhi() | 1 | 2023-12-18 04:54:46+00:00 | 12k |
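A `rep_map` like the one above is usually applied in a single regex pass over the text. A minimal sketch with a reduced copy of the map (the module's own replacement helper is not shown in this excerpt, so its exact shape is an assumption):

```python
import re

# Reduced, illustrative copy of the rep_map above.
sample_map = {":": ",", ";": ",", "。": ".", "!": "!", "?": "?"}

def replace_punctuation(text, mapping):
    # Build one alternation pattern from the map keys and substitute in one pass.
    pattern = re.compile("|".join(re.escape(p) for p in mapping))
    return pattern.sub(lambda m: mapping[m.group()], text)

print(replace_punctuation("你好。早安!", sample_map))  # 你好.早安!
```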
d-krupke/CP-SAT-Log-Analyzer | app.py | [
{
"identifier": "LogParser",
"path": "cpsat_log_parser/parser.py",
"snippet": "class LogParser:\n def __init__(self, log: typing.Union[str, typing.List[str]]) -> None:\n self.comments, log_without_comments = self._extract_comments(log)\n self.blocks = self.parse_blocks(log_without_comme... | import streamlit as st
from cpsat_log_parser import LogParser
from cpsat_log_parser.blocks import (
SearchProgressBlock,
SearchStatsBlock,
SolutionsBlock,
TableBlock,
SolverBlock,
ResponseBlock,
PresolveLogBlock,
TaskTimingBlock,
PresolvedModelBlock,
)
from _app import print_header, input_log, show_overview | 9,026 | """
This file is the main entry point for the Streamlit app.
Further parts of the app are in the `_app` folder.
The logic for parsing the log is in the `cpsat_log_parser` folder.
"""
print_header()
data = input_log()
if data:
st.header("Log Analysis")
st.warning(
"This is just a prototype and may crash or show wrong results. Please report any issues [here](https://github.com/d-krupke/CP-SAT-Log-Analyzer). I welcome any feedback and complex logs to test this on."
)
parser = LogParser(data)
 | show_overview(parser) | 12 | 2023-12-18 09:18:19+00:00 | 12k |
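`LogParser` splits the raw log text into blocks before the imported block classes (`SearchProgressBlock`, `ResponseBlock`, etc.) interpret them. The real parser lives in `cpsat_log_parser`; a minimal sketch of the likely core step, splitting on blank lines (an assumption about the approach, not the actual implementation):

```python
def parse_blocks(log: str):
    # Group consecutive non-blank lines; blank lines separate blocks.
    blocks, current = [], []
    for line in log.splitlines():
        if line.strip():
            current.append(line)
        elif current:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

log = "Starting CP-SAT solver\nParameters: ...\n\nCpSolverResponse summary:\nstatus: OPTIMAL\n"
for block in parse_blocks(log):
    print(block[0])  # first line identifies the block type
```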
MMC-K/multimodal_generation_downstream_tasks | training_veldt5_accelerate.py | [
{
"identifier": "DatasetForVLAlign",
"path": "data_utils.py",
"snippet": "class DatasetForVLAlign(Dataset):\n def __init__(\n self,\n file_path: str,\n image_tokenizer: ViTFeatureExtractor,\n text_tokenizer: AutoTokenizer,\n ... | import argparse
import json
import logging
import math
import os
import random
import numpy as np
import torch
import transformers
import datasets
from curses import raw
from datetime import timedelta
from itertools import chain
from torch import nn
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from torch.nn import CrossEntropyLoss
from accelerate import Accelerator
from accelerate.logging import get_logger
from accelerate.utils import set_seed, InitProcessGroupKwargs, DistributedDataParallelKwargs
from torch.optim import AdamW
from transformers import (
AutoTokenizer,
ViTFeatureExtractor,
SchedulerType,
get_scheduler,
default_data_collator,
)
from datasets import load_dataset
from data_utils import DatasetForVLAlign
from modeling_veldt5 import VELDT5Model | 7,931 | default=None,
help="Total number of validation steps to perform.",
)
parser.add_argument(
"--max_train_steps_per_epoch",
type=int,
default=None,
help="The number of training steps to perform on a epoch. (for debugging)",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument(
"--warmup_portion", type=float, default=0, help="Portion of total training steps for the warmup in the lr scheduler."
)
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
# logging
parser.add_argument(
"--logging_steps", type=int, default=0, help="Number of steps for logging (stdout)."
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to enable experiment trackers for logging.",
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument(
"--report_to",
type=str,
default="all",
help=(
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
' `"wandb"` and `"comet_ml"`. Use `"all"` (default) to report to all integrations.'
"Only applicable when `--with_tracking` is passed."
),
)
parser.add_argument(
"--finetune",
action="store_true",
help="disable language dataset training in finetuning.",
)
parser.add_argument(
"--from_veld_pretrained",
type=str,
default=None,
help="pretrained veld model to use for finetuning.",
)
args = parser.parse_args()
return args
def main():
args = parse_args()
accelerator_log_kwargs = {}
if args.with_tracking:
accelerator_log_kwargs["log_with"] = args.report_to
# accelerator_log_kwargs["logging_dir"] = args.output_dir
accelerator_log_kwargs["project_dir"] = args.output_dir
kwargs_handlers = [
InitProcessGroupKwargs(timeout=timedelta(days=10)),
DistributedDataParallelKwargs(find_unused_parameters=True)
]
accelerator = Accelerator(
gradient_accumulation_steps=args.gradient_accumulation_steps,
kwargs_handlers=kwargs_handlers , **accelerator_log_kwargs)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
random.seed(args.seed)
if accelerator.is_main_process and args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
# Load model and tokenizer
if args.from_veld_pretrained is None:
| #!/usr/bin/env python
# coding=utf-8
# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2022 san kim
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
logger = get_logger(__name__)
# epochs=1
# learning_rate=0.001
# scheduler_type=linear
# accelerate launch training_veldt5_accelerate.py \
# --vision_model 'google/vit-base-patch16-384' \
# --language_model 'KETI-AIR/ke-t5-base' \
# --gradient_accumulation_steps 32 \
# --per_device_train_batch_size 16 \
# --per_device_eval_batch_size 16 \
# --warmup_portion 0.02 \
# --logging_steps 20 \
# --checkpointing_steps 10000 \
# --num_train_epochs $epochs \
# --lr_scheduler_type $scheduler_type \
# --with_tracking \
# --output_dir veld_e${epochs}_${scheduler_type}
# accelerate launch training_veldt5_accelerate.py \
# --max_train_steps_per_epoch 100 \
# --max_validation_steps 20 \
# --logging_steps 5 \
# --with_tracking \
# --output_dir test
def parse_args():
    # data
    parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task")
parser.add_argument(
"--dataset_name_lm",
type=str,
default="sent_dataset.py",
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name_lm",
type=str,
default="base",
help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--hf_cache_dir",
type=str,
default="../huggingface_datasets",
help="The path to cache directory for huggingface datasets.",
)
parser.add_argument(
"--hf_data_dir_lm",
type=str,
default="../sent_eq_4k_25/*/",
help="The path to data directory for huggingface datasets.",
)
parser.add_argument(
"--validation_split_percentage",
default=1,
help="The percentage of the train set used as validation set in case there's no validation split",
)
parser.add_argument(
"--preprocessing_num_workers",
type=int,
default=256,
help="The number of processes to use for the preprocessing.",
)
parser.add_argument(
"--overwrite_cache", type=bool, default=False, help="Overwrite the cached training and evaluation sets"
)
parser.add_argument(
"--block_size",
type=int,
default=None,
help=(
"Optional input sequence length after tokenization. The training dataset will be truncated in block of"
" this size for training. Default to the model max input length for single sentence inputs (take into"
" account special tokens)."
),
)
parser.add_argument("--train_path",
default="../../downloaded_data/train-filtered.json", type=str)
parser.add_argument("--validation_path",
default="../../downloaded_data/validation-filtered.json", type=str)
parser.add_argument("--image_root_dir",
default="../../downloaded_data", type=str)
parser.add_argument(
"--dataset_name",
type=str,
default="image_text_pair_datasets.py",
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name",
type=str,
default="base",
help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--hf_data_dir",
type=str,
default="../../downloaded_data",
help="The path to data directory for huggingface datasets.",
)
# model
parser.add_argument("--vision_model",
default="google/vit-base-patch16-384", type=str)
parser.add_argument("--language_model",
default="KETI-AIR/ke-t5-base", type=str)
parser.add_argument(
"--use_slow_tokenizer",
action="store_true",
help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
)
# training
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--per_device_train_batch_size",
type=int,
default=16,
help="Batch size (per device) for the training dataloader.",
)
parser.add_argument(
"--per_device_eval_batch_size",
type=int,
default=8,
help="Batch size (per device) for the evaluation dataloader.",
)
parser.add_argument(
"--learning_rate",
type=float,
default=8e-4,
help="Initial learning rate (after the potential warmup period) to use.",
)
parser.add_argument("--contrastive_weight", default=1.0,
type=float, help="The weighting value for contrastive loss")
parser.add_argument("--captioning_weight", default=2.0,
type=float, help="The weighting value for captioning loss")
parser.add_argument("--lm_weight", default=1.0,
type=float, help="The weighting value for lm loss")
parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
parser.add_argument("--logit_temperature", default=1.0,
type=float, help="temperature for logits")
parser.add_argument("--label_smoothing", default=0.0,
type=float, help="label smoothing for cross entropy")
parser.add_argument("--num_train_epochs", type=int, default=1, help="Total number of training epochs to perform.")
parser.add_argument(
"--max_train_steps",
type=int,
default=None,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
parser.add_argument(
"--max_validation_steps",
type=int,
default=None,
help="Total number of validation steps to perform.",
)
parser.add_argument(
"--max_train_steps_per_epoch",
type=int,
default=None,
        help="The number of training steps to perform per epoch (for debugging).",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument(
"--warmup_portion", type=float, default=0, help="Portion of total training steps for the warmup in the lr scheduler."
)
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
# logging
parser.add_argument(
"--logging_steps", type=int, default=0, help="Number of steps for logging (stdout)."
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to enable experiment trackers for logging.",
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument(
"--report_to",
type=str,
default="all",
help=(
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
' `"wandb"` and `"comet_ml"`. Use `"all"` (default) to report to all integrations.'
            " Only applicable when `--with_tracking` is passed."
),
)
parser.add_argument(
"--finetune",
action="store_true",
help="disable language dataset training in finetuning.",
)
parser.add_argument(
"--from_veld_pretrained",
type=str,
default=None,
help="pretrained veld model to use for finetuning.",
)
args = parser.parse_args()
return args
def main():
args = parse_args()
accelerator_log_kwargs = {}
if args.with_tracking:
accelerator_log_kwargs["log_with"] = args.report_to
# accelerator_log_kwargs["logging_dir"] = args.output_dir
accelerator_log_kwargs["project_dir"] = args.output_dir
kwargs_handlers = [
InitProcessGroupKwargs(timeout=timedelta(days=10)),
DistributedDataParallelKwargs(find_unused_parameters=True)
]
accelerator = Accelerator(
gradient_accumulation_steps=args.gradient_accumulation_steps,
        kwargs_handlers=kwargs_handlers, **accelerator_log_kwargs)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
random.seed(args.seed)
if accelerator.is_main_process and args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
# Load model and tokenizer
if args.from_veld_pretrained is None: | model = VELDT5Model.from_encoder_decoder_pretrained( | 1 | 2023-12-19 01:37:23+00:00 | 12k |
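`parse_args` above exposes both `--num_warmup_steps` and `--warmup_portion`. A minimal sketch of how a portion is typically resolved into a concrete warmup-step count — the helper name is ours, and the override rule (portion wins when positive) is an assumption, not taken from the repository:

```python
def resolve_warmup_steps(num_warmup_steps: int, warmup_portion: float, max_train_steps: int) -> int:
    # When a portion is given, derive warmup steps from the total step budget;
    # otherwise fall back to the explicit step count.
    if warmup_portion > 0:
        return int(max_train_steps * warmup_portion)
    return num_warmup_steps
```

With `--warmup_portion 0.02` and a 10,000-step budget this yields 200 warmup steps.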
sidharthrajaram/StyleTTS2 | src/styletts2/models.py | [
{
"identifier": "ASRCNN",
"path": "src/styletts2/Utils/ASR/models.py",
"snippet": "class ASRCNN(nn.Module):\n def __init__(self,\n input_dim=80,\n hidden_dim=256,\n n_token=35,\n n_layers=6,\n token_embedding_dim=256,\n\n... | import os
import os.path as osp
import copy
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import yaml
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
from .Utils.ASR.models import ASRCNN
from .Utils.JDC.model import JDCNet
from .Modules.diffusion.sampler import KDiffusion, LogNormalDistribution
from .Modules.diffusion.modules import Transformer1d, StyleTransformer1d
from .Modules.diffusion.diffusion import AudioDiffusionConditional
from .Modules.discriminators import MultiPeriodDiscriminator, MultiResSpecDiscriminator, WavLMDiscriminator
from munch import Munch
from .Modules.istftnet import Decoder
from .Modules.hifigan import Decoder | 10,300 |
#coding:utf-8
class LearnedDownSample(nn.Module):
def __init__(self, layer_type, dim_in):
super().__init__()
self.layer_type = layer_type
if self.layer_type == 'none':
self.conv = nn.Identity()
elif self.layer_type == 'timepreserve':
self.conv = spectral_norm(nn.Conv2d(dim_in, dim_in, kernel_size=(3, 1), stride=(2, 1), groups=dim_in, padding=(1, 0)))
elif self.layer_type == 'half':
self.conv = spectral_norm(nn.Conv2d(dim_in, dim_in, kernel_size=(3, 3), stride=(2, 2), groups=dim_in, padding=1))
else:
            raise RuntimeError('Got unexpected downsample type %s, expected one of [none, timepreserve, half]' % self.layer_type)
def forward(self, x):
return self.conv(x)
class LearnedUpSample(nn.Module):
def __init__(self, layer_type, dim_in):
super().__init__()
self.layer_type = layer_type
if self.layer_type == 'none':
self.conv = nn.Identity()
elif self.layer_type == 'timepreserve':
self.conv = nn.ConvTranspose2d(dim_in, dim_in, kernel_size=(3, 1), stride=(2, 1), groups=dim_in, output_padding=(1, 0), padding=(1, 0))
elif self.layer_type == 'half':
self.conv = nn.ConvTranspose2d(dim_in, dim_in, kernel_size=(3, 3), stride=(2, 2), groups=dim_in, output_padding=1, padding=1)
else:
            raise RuntimeError('Got unexpected upsample type %s, expected one of [none, timepreserve, half]' % self.layer_type)
def forward(self, x):
return self.conv(x)
class DownSample(nn.Module):
def __init__(self, layer_type):
super().__init__()
self.layer_type = layer_type
def forward(self, x):
if self.layer_type == 'none':
return x
elif self.layer_type == 'timepreserve':
return F.avg_pool2d(x, (2, 1))
elif self.layer_type == 'half':
if x.shape[-1] % 2 != 0:
x = torch.cat([x, x[..., -1].unsqueeze(-1)], dim=-1)
return F.avg_pool2d(x, 2)
else:
            raise RuntimeError('Got unexpected downsample type %s, expected one of [none, timepreserve, half]' % self.layer_type)
class UpSample(nn.Module):
def __init__(self, layer_type):
super().__init__()
self.layer_type = layer_type
def forward(self, x):
if self.layer_type == 'none':
return x
elif self.layer_type == 'timepreserve':
return F.interpolate(x, scale_factor=(2, 1), mode='nearest')
elif self.layer_type == 'half':
return F.interpolate(x, scale_factor=2, mode='nearest')
else:
            raise RuntimeError('Got unexpected upsample type %s, expected one of [none, timepreserve, half]' % self.layer_type)
class ResBlk(nn.Module):
def __init__(self, dim_in, dim_out, actv=nn.LeakyReLU(0.2),
normalize=False, downsample='none'):
super().__init__()
self.actv = actv
self.normalize = normalize
self.downsample = DownSample(downsample)
self.downsample_res = LearnedDownSample(downsample, dim_in)
self.learned_sc = dim_in != dim_out
self._build_weights(dim_in, dim_out)
def _build_weights(self, dim_in, dim_out):
self.conv1 = spectral_norm(nn.Conv2d(dim_in, dim_in, 3, 1, 1))
self.conv2 = spectral_norm(nn.Conv2d(dim_in, dim_out, 3, 1, 1))
if self.normalize:
self.norm1 = nn.InstanceNorm2d(dim_in, affine=True)
self.norm2 = nn.InstanceNorm2d(dim_in, affine=True)
if self.learned_sc:
self.conv1x1 = spectral_norm(nn.Conv2d(dim_in, dim_out, 1, 1, 0, bias=False))
def _shortcut(self, x):
if self.learned_sc:
x = self.conv1x1(x)
if self.downsample:
x = self.downsample(x)
return x
def _residual(self, x):
if self.normalize:
x = self.norm1(x)
x = self.actv(x)
x = self.conv1(x)
x = self.downsample_res(x)
if self.normalize:
x = self.norm2(x)
x = self.actv(x)
x = self.conv2(x)
return x
def forward(self, x):
x = self._shortcut(x) + self._residual(x)
return x / math.sqrt(2) # unit variance
class StyleEncoder(nn.Module):
def __init__(self, dim_in=48, style_dim=48, max_conv_dim=384):
super().__init__()
blocks = []
blocks += [spectral_norm(nn.Conv2d(1, dim_in, 3, 1, 1))]
repeat_num = 4
for _ in range(repeat_num):
dim_out = min(dim_in*2, max_conv_dim)
blocks += [ResBlk(dim_in, dim_out, downsample='half')]
dim_in = dim_out
blocks += [nn.LeakyReLU(0.2)]
blocks += [spectral_norm(nn.Conv2d(dim_out, dim_out, 5, 1, 0))]
blocks += [nn.AdaptiveAvgPool2d(1)]
blocks += [nn.LeakyReLU(0.2)]
self.shared = nn.Sequential(*blocks)
self.unshared = nn.Linear(dim_out, style_dim)
def forward(self, x):
h = self.shared(x)
h = h.view(h.size(0), -1)
s = self.unshared(h)
return s
class LinearNorm(torch.nn.Module):
def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'):
super(LinearNorm, self).__init__()
self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
torch.nn.init.xavier_uniform_(
self.linear_layer.weight,
gain=torch.nn.init.calculate_gain(w_init_gain))
def forward(self, x):
return self.linear_layer(x)
class Discriminator2d(nn.Module):
def __init__(self, dim_in=48, num_domains=1, max_conv_dim=384, repeat_num=4):
super().__init__()
blocks = []
blocks += [spectral_norm(nn.Conv2d(1, dim_in, 3, 1, 1))]
for lid in range(repeat_num):
dim_out = min(dim_in*2, max_conv_dim)
blocks += [ResBlk(dim_in, dim_out, downsample='half')]
dim_in = dim_out
blocks += [nn.LeakyReLU(0.2)]
blocks += [spectral_norm(nn.Conv2d(dim_out, dim_out, 5, 1, 0))]
blocks += [nn.LeakyReLU(0.2)]
blocks += [nn.AdaptiveAvgPool2d(1)]
blocks += [spectral_norm(nn.Conv2d(dim_out, num_domains, 1, 1, 0))]
self.main = nn.Sequential(*blocks)
def get_feature(self, x):
features = []
for l in self.main:
x = l(x)
features.append(x)
out = features[-1]
out = out.view(out.size(0), -1) # (batch, num_domains)
return out, features
def forward(self, x):
out, features = self.get_feature(x)
out = out.squeeze() # (batch)
return out, features
class ResBlk1d(nn.Module):
def __init__(self, dim_in, dim_out, actv=nn.LeakyReLU(0.2),
normalize=False, downsample='none', dropout_p=0.2):
super().__init__()
self.actv = actv
self.normalize = normalize
self.downsample_type = downsample
self.learned_sc = dim_in != dim_out
self._build_weights(dim_in, dim_out)
self.dropout_p = dropout_p
if self.downsample_type == 'none':
self.pool = nn.Identity()
else:
self.pool = weight_norm(nn.Conv1d(dim_in, dim_in, kernel_size=3, stride=2, groups=dim_in, padding=1))
def _build_weights(self, dim_in, dim_out):
self.conv1 = weight_norm(nn.Conv1d(dim_in, dim_in, 3, 1, 1))
self.conv2 = weight_norm(nn.Conv1d(dim_in, dim_out, 3, 1, 1))
if self.normalize:
self.norm1 = nn.InstanceNorm1d(dim_in, affine=True)
self.norm2 = nn.InstanceNorm1d(dim_in, affine=True)
if self.learned_sc:
self.conv1x1 = weight_norm(nn.Conv1d(dim_in, dim_out, 1, 1, 0, bias=False))
def downsample(self, x):
if self.downsample_type == 'none':
return x
else:
if x.shape[-1] % 2 != 0:
x = torch.cat([x, x[..., -1].unsqueeze(-1)], dim=-1)
return F.avg_pool1d(x, 2)
def _shortcut(self, x):
if self.learned_sc:
x = self.conv1x1(x)
x = self.downsample(x)
return x
def _residual(self, x):
if self.normalize:
x = self.norm1(x)
x = self.actv(x)
x = F.dropout(x, p=self.dropout_p, training=self.training)
x = self.conv1(x)
x = self.pool(x)
if self.normalize:
x = self.norm2(x)
x = self.actv(x)
x = F.dropout(x, p=self.dropout_p, training=self.training)
x = self.conv2(x)
return x
def forward(self, x):
x = self._shortcut(x) + self._residual(x)
return x / math.sqrt(2) # unit variance
class LayerNorm(nn.Module):
def __init__(self, channels, eps=1e-5):
super().__init__()
self.channels = channels
self.eps = eps
self.gamma = nn.Parameter(torch.ones(channels))
self.beta = nn.Parameter(torch.zeros(channels))
def forward(self, x):
x = x.transpose(1, -1)
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
return x.transpose(1, -1)
class TextEncoder(nn.Module):
def __init__(self, channels, kernel_size, depth, n_symbols, actv=nn.LeakyReLU(0.2)):
super().__init__()
self.embedding = nn.Embedding(n_symbols, channels)
padding = (kernel_size - 1) // 2
self.cnn = nn.ModuleList()
for _ in range(depth):
self.cnn.append(nn.Sequential(
weight_norm(nn.Conv1d(channels, channels, kernel_size=kernel_size, padding=padding)),
LayerNorm(channels),
actv,
nn.Dropout(0.2),
))
# self.cnn = nn.Sequential(*self.cnn)
self.lstm = nn.LSTM(channels, channels//2, 1, batch_first=True, bidirectional=True)
def forward(self, x, input_lengths, m):
x = self.embedding(x) # [B, T, emb]
x = x.transpose(1, 2) # [B, emb, T]
m = m.to(input_lengths.device).unsqueeze(1)
x.masked_fill_(m, 0.0)
for c in self.cnn:
x = c(x)
x.masked_fill_(m, 0.0)
x = x.transpose(1, 2) # [B, T, chn]
input_lengths = input_lengths.cpu().numpy()
x = nn.utils.rnn.pack_padded_sequence(
x, input_lengths, batch_first=True, enforce_sorted=False)
self.lstm.flatten_parameters()
x, _ = self.lstm(x)
x, _ = nn.utils.rnn.pad_packed_sequence(
x, batch_first=True)
x = x.transpose(-1, -2)
x_pad = torch.zeros([x.shape[0], x.shape[1], m.shape[-1]])
x_pad[:, :, :x.shape[-1]] = x
x = x_pad.to(x.device)
x.masked_fill_(m, 0.0)
return x
def inference(self, x):
x = self.embedding(x)
x = x.transpose(1, 2)
        for c in self.cnn:  # self.cnn is a ModuleList, not callable; apply each layer in turn
            x = c(x)
x = x.transpose(1, 2)
self.lstm.flatten_parameters()
x, _ = self.lstm(x)
return x
def length_to_mask(self, lengths):
mask = torch.arange(lengths.max()).unsqueeze(0).expand(lengths.shape[0], -1).type_as(lengths)
mask = torch.gt(mask+1, lengths.unsqueeze(1))
return mask
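`length_to_mask` marks padding positions: entry *i* of row *n* is `True` exactly when `i >= lengths[n]`. A dependency-free sketch of the same logic (pure Python, no torch; name ours), useful for checking the semantics:

```python
def length_to_mask_py(lengths):
    # True marks positions at or beyond each sequence's true length (i.e. padding)
    max_len = max(lengths)
    return [[i >= n for i in range(max_len)] for n in lengths]
```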
class AdaIN1d(nn.Module):
def __init__(self, style_dim, num_features):
super().__init__()
self.norm = nn.InstanceNorm1d(num_features, affine=False)
self.fc = nn.Linear(style_dim, num_features*2)
def forward(self, x, s):
h = self.fc(s)
h = h.view(h.size(0), h.size(1), 1)
gamma, beta = torch.chunk(h, chunks=2, dim=1)
return (1 + gamma) * self.norm(x) + beta
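`AdaIN1d` normalizes each channel to zero mean and unit variance, then rescales by the style-predicted `(1 + gamma)` and shifts by `beta`. A scalar-level sketch of that transform on a single channel (pure Python, name ours):

```python
def adain_channel(x, gamma, beta, eps=1e-5):
    # instance-normalize one channel, then apply the style-conditioned affine map
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    normed = [(v - mean) / (var + eps) ** 0.5 for v in x]
    return [(1 + gamma) * v + beta for v in normed]
```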
class UpSample1d(nn.Module):
def __init__(self, layer_type):
super().__init__()
self.layer_type = layer_type
def forward(self, x):
if self.layer_type == 'none':
return x
else:
return F.interpolate(x, scale_factor=2, mode='nearest')
class AdainResBlk1d(nn.Module):
def __init__(self, dim_in, dim_out, style_dim=64, actv=nn.LeakyReLU(0.2),
upsample='none', dropout_p=0.0):
super().__init__()
self.actv = actv
self.upsample_type = upsample
self.upsample = UpSample1d(upsample)
self.learned_sc = dim_in != dim_out
self._build_weights(dim_in, dim_out, style_dim)
self.dropout = nn.Dropout(dropout_p)
if upsample == 'none':
self.pool = nn.Identity()
else:
self.pool = weight_norm(nn.ConvTranspose1d(dim_in, dim_in, kernel_size=3, stride=2, groups=dim_in, padding=1, output_padding=1))
def _build_weights(self, dim_in, dim_out, style_dim):
self.conv1 = weight_norm(nn.Conv1d(dim_in, dim_out, 3, 1, 1))
self.conv2 = weight_norm(nn.Conv1d(dim_out, dim_out, 3, 1, 1))
self.norm1 = AdaIN1d(style_dim, dim_in)
self.norm2 = AdaIN1d(style_dim, dim_out)
if self.learned_sc:
self.conv1x1 = weight_norm(nn.Conv1d(dim_in, dim_out, 1, 1, 0, bias=False))
def _shortcut(self, x):
x = self.upsample(x)
if self.learned_sc:
x = self.conv1x1(x)
return x
def _residual(self, x, s):
x = self.norm1(x, s)
x = self.actv(x)
x = self.pool(x)
x = self.conv1(self.dropout(x))
x = self.norm2(x, s)
x = self.actv(x)
x = self.conv2(self.dropout(x))
return x
def forward(self, x, s):
out = self._residual(x, s)
out = (out + self._shortcut(x)) / math.sqrt(2)
return out
class AdaLayerNorm(nn.Module):
def __init__(self, style_dim, channels, eps=1e-5):
super().__init__()
self.channels = channels
self.eps = eps
self.fc = nn.Linear(style_dim, channels*2)
def forward(self, x, s):
x = x.transpose(-1, -2)
x = x.transpose(1, -1)
h = self.fc(s)
h = h.view(h.size(0), h.size(1), 1)
gamma, beta = torch.chunk(h, chunks=2, dim=1)
gamma, beta = gamma.transpose(1, -1), beta.transpose(1, -1)
x = F.layer_norm(x, (self.channels,), eps=self.eps)
x = (1 + gamma) * x + beta
return x.transpose(1, -1).transpose(-1, -2)
class ProsodyPredictor(nn.Module):
def __init__(self, style_dim, d_hid, nlayers, max_dur=50, dropout=0.1):
super().__init__()
self.text_encoder = DurationEncoder(sty_dim=style_dim,
d_model=d_hid,
nlayers=nlayers,
dropout=dropout)
self.lstm = nn.LSTM(d_hid + style_dim, d_hid // 2, 1, batch_first=True, bidirectional=True)
self.duration_proj = LinearNorm(d_hid, max_dur)
self.shared = nn.LSTM(d_hid + style_dim, d_hid // 2, 1, batch_first=True, bidirectional=True)
self.F0 = nn.ModuleList()
self.F0.append(AdainResBlk1d(d_hid, d_hid, style_dim, dropout_p=dropout))
self.F0.append(AdainResBlk1d(d_hid, d_hid // 2, style_dim, upsample=True, dropout_p=dropout))
self.F0.append(AdainResBlk1d(d_hid // 2, d_hid // 2, style_dim, dropout_p=dropout))
self.N = nn.ModuleList()
self.N.append(AdainResBlk1d(d_hid, d_hid, style_dim, dropout_p=dropout))
self.N.append(AdainResBlk1d(d_hid, d_hid // 2, style_dim, upsample=True, dropout_p=dropout))
self.N.append(AdainResBlk1d(d_hid // 2, d_hid // 2, style_dim, dropout_p=dropout))
self.F0_proj = nn.Conv1d(d_hid // 2, 1, 1, 1, 0)
self.N_proj = nn.Conv1d(d_hid // 2, 1, 1, 1, 0)
def forward(self, texts, style, text_lengths, alignment, m):
d = self.text_encoder(texts, style, text_lengths, m)
batch_size = d.shape[0]
text_size = d.shape[1]
# predict duration
input_lengths = text_lengths.cpu().numpy()
x = nn.utils.rnn.pack_padded_sequence(
d, input_lengths, batch_first=True, enforce_sorted=False)
m = m.to(text_lengths.device).unsqueeze(1)
self.lstm.flatten_parameters()
x, _ = self.lstm(x)
x, _ = nn.utils.rnn.pad_packed_sequence(
x, batch_first=True)
x_pad = torch.zeros([x.shape[0], m.shape[-1], x.shape[-1]])
x_pad[:, :x.shape[1], :] = x
x = x_pad.to(x.device)
duration = self.duration_proj(nn.functional.dropout(x, 0.5, training=self.training))
en = (d.transpose(-1, -2) @ alignment)
return duration.squeeze(-1), en
def F0Ntrain(self, x, s):
x, _ = self.shared(x.transpose(-1, -2))
F0 = x.transpose(-1, -2)
for block in self.F0:
F0 = block(F0, s)
F0 = self.F0_proj(F0)
N = x.transpose(-1, -2)
for block in self.N:
N = block(N, s)
N = self.N_proj(N)
return F0.squeeze(1), N.squeeze(1)
def length_to_mask(self, lengths):
mask = torch.arange(lengths.max()).unsqueeze(0).expand(lengths.shape[0], -1).type_as(lengths)
mask = torch.gt(mask+1, lengths.unsqueeze(1))
return mask
class DurationEncoder(nn.Module):
def __init__(self, sty_dim, d_model, nlayers, dropout=0.1):
super().__init__()
self.lstms = nn.ModuleList()
for _ in range(nlayers):
self.lstms.append(nn.LSTM(d_model + sty_dim,
d_model // 2,
num_layers=1,
batch_first=True,
bidirectional=True,
dropout=dropout))
self.lstms.append(AdaLayerNorm(sty_dim, d_model))
self.dropout = dropout
self.d_model = d_model
self.sty_dim = sty_dim
def forward(self, x, style, text_lengths, m):
masks = m.to(text_lengths.device)
x = x.permute(2, 0, 1)
s = style.expand(x.shape[0], x.shape[1], -1)
x = torch.cat([x, s], axis=-1)
x.masked_fill_(masks.unsqueeze(-1).transpose(0, 1), 0.0)
x = x.transpose(0, 1)
input_lengths = text_lengths.cpu().numpy()
x = x.transpose(-1, -2)
for block in self.lstms:
if isinstance(block, AdaLayerNorm):
x = block(x.transpose(-1, -2), style).transpose(-1, -2)
x = torch.cat([x, s.permute(1, -1, 0)], axis=1)
x.masked_fill_(masks.unsqueeze(-1).transpose(-1, -2), 0.0)
else:
x = x.transpose(-1, -2)
x = nn.utils.rnn.pack_padded_sequence(
x, input_lengths, batch_first=True, enforce_sorted=False)
block.flatten_parameters()
x, _ = block(x)
x, _ = nn.utils.rnn.pad_packed_sequence(
x, batch_first=True)
x = F.dropout(x, p=self.dropout, training=self.training)
x = x.transpose(-1, -2)
x_pad = torch.zeros([x.shape[0], x.shape[1], m.shape[-1]])
x_pad[:, :, :x.shape[-1]] = x
x = x_pad.to(x.device)
return x.transpose(-1, -2)
def inference(self, x, style):
x = self.embedding(x.transpose(-1, -2)) * math.sqrt(self.d_model)
style = style.expand(x.shape[0], x.shape[1], -1)
x = torch.cat([x, style], axis=-1)
src = self.pos_encoder(x)
output = self.transformer_encoder(src).transpose(0, 1)
return output
def length_to_mask(self, lengths):
mask = torch.arange(lengths.max()).unsqueeze(0).expand(lengths.shape[0], -1).type_as(lengths)
mask = torch.gt(mask+1, lengths.unsqueeze(1))
return mask
def load_F0_models(path):
# load F0 model
F0_model = JDCNet(num_class=1, seq_len=192)
params = torch.load(path, map_location='cpu')['net']
F0_model.load_state_dict(params)
_ = F0_model.train()
return F0_model
def load_ASR_models(ASR_MODEL_PATH, ASR_MODEL_CONFIG):
# load ASR model
def _load_config(path):
with open(path) as f:
config = yaml.safe_load(f)
model_config = config['model_params']
return model_config
def _load_model(model_config, model_path):
model = ASRCNN(**model_config)
params = torch.load(model_path, map_location='cpu')['model']
model.load_state_dict(params)
return model
asr_model_config = _load_config(ASR_MODEL_CONFIG)
asr_model = _load_model(asr_model_config, ASR_MODEL_PATH)
_ = asr_model.train()
return asr_model
def build_model(args, text_aligner, pitch_extractor, bert):
assert args.decoder.type in ['istftnet', 'hifigan'], 'Decoder type unknown'
if args.decoder.type == "istftnet":
decoder = Decoder(dim_in=args.hidden_dim, style_dim=args.style_dim, dim_out=args.n_mels,
resblock_kernel_sizes = args.decoder.resblock_kernel_sizes,
upsample_rates = args.decoder.upsample_rates,
upsample_initial_channel=args.decoder.upsample_initial_channel,
resblock_dilation_sizes=args.decoder.resblock_dilation_sizes,
upsample_kernel_sizes=args.decoder.upsample_kernel_sizes,
gen_istft_n_fft=args.decoder.gen_istft_n_fft, gen_istft_hop_size=args.decoder.gen_istft_hop_size)
else:
decoder = Decoder(dim_in=args.hidden_dim, style_dim=args.style_dim, dim_out=args.n_mels,
resblock_kernel_sizes = args.decoder.resblock_kernel_sizes,
upsample_rates = args.decoder.upsample_rates,
upsample_initial_channel=args.decoder.upsample_initial_channel,
resblock_dilation_sizes=args.decoder.resblock_dilation_sizes,
upsample_kernel_sizes=args.decoder.upsample_kernel_sizes)
text_encoder = TextEncoder(channels=args.hidden_dim, kernel_size=5, depth=args.n_layer, n_symbols=args.n_token)
predictor = ProsodyPredictor(style_dim=args.style_dim, d_hid=args.hidden_dim, nlayers=args.n_layer, max_dur=args.max_dur, dropout=args.dropout)
style_encoder = StyleEncoder(dim_in=args.dim_in, style_dim=args.style_dim, max_conv_dim=args.hidden_dim) # acoustic style encoder
predictor_encoder = StyleEncoder(dim_in=args.dim_in, style_dim=args.style_dim, max_conv_dim=args.hidden_dim) # prosodic style encoder
# define diffusion model
if args.multispeaker:
transformer = StyleTransformer1d(channels=args.style_dim*2,
context_embedding_features=bert.config.hidden_size,
context_features=args.style_dim*2,
**args.diffusion.transformer)
else:
transformer = Transformer1d(channels=args.style_dim*2,
context_embedding_features=bert.config.hidden_size,
**args.diffusion.transformer)
diffusion = AudioDiffusionConditional(
in_channels=1,
embedding_max_length=bert.config.max_position_embeddings,
embedding_features=bert.config.hidden_size,
embedding_mask_proba=args.diffusion.embedding_mask_proba, # Conditional dropout of batch elements,
channels=args.style_dim*2,
context_features=args.style_dim*2,
)
diffusion.diffusion = KDiffusion(
net=diffusion.unet,
sigma_distribution=LogNormalDistribution(mean = args.diffusion.dist.mean, std = args.diffusion.dist.std),
sigma_data=args.diffusion.dist.sigma_data, # a placeholder, will be changed dynamically when start training diffusion model
dynamic_threshold=0.0
)
diffusion.diffusion.net = transformer
diffusion.unet = transformer
nets = Munch(
bert=bert,
bert_encoder=nn.Linear(bert.config.hidden_size, args.hidden_dim),
predictor=predictor,
decoder=decoder,
text_encoder=text_encoder,
predictor_encoder=predictor_encoder,
style_encoder=style_encoder,
diffusion=diffusion,
text_aligner = text_aligner,
pitch_extractor=pitch_extractor,
| mpd = MultiPeriodDiscriminator(), | 7 | 2023-12-15 10:04:21+00:00 | 12k |
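`build_model` packs the sub-networks into a `Munch`, i.e. a dict that also supports attribute access (`nets.decoder`). A dependency-free stand-in showing the pattern (class name and values ours):

```python
class AttrBunch(dict):
    # minimal Munch-like container: attribute reads fall through to dict lookup
    __getattr__ = dict.__getitem__

nets = AttrBunch(decoder="decoder-net", predictor="prosody-predictor")
```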
alibaba/u2mot | yolox/models/yolox.py | [
{
"identifier": "YOLOXHead",
"path": "yolox/models/yolo_head.py",
"snippet": "class YOLOXHead(nn.Module):\n def __init__(\n self,\n num_classes,\n width=1.0,\n strides=[8, 16, 32],\n in_channels=[256, 512, 1024],\n act=\"silu\",\n depthwise=False,\n ... | import torch
import torch.nn as nn
import contextlib
from .yolo_head import YOLOXHead
from .yolo_pafpn import YOLOPAFPN | 10,305 | #!/usr/bin/env python3
# -*- encoding:utf-8 -*-
# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.
# Copyright (c) Alibaba, Inc. and its affiliates.
class YOLOX(nn.Module):
"""
YOLOX model module. The module list is defined by create_yolov3_modules function.
The network returns loss values from three YOLO layers during training
and detection results during test.
"""
def __init__(self, backbone=None, head=None, moco=None, freeze=False):
super().__init__()
if backbone is None:
| #!/usr/bin/env python3
# -*- encoding:utf-8 -*-
# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.
# Copyright (c) Alibaba, Inc. and its affiliates.
class YOLOX(nn.Module):
"""
YOLOX model module. The module list is defined by create_yolov3_modules function.
The network returns loss values from three YOLO layers during training
and detection results during test.
"""
def __init__(self, backbone=None, head=None, moco=None, freeze=False):
super().__init__()
if backbone is None: | backbone = YOLOPAFPN() # backbone, CSPNet with PANet | 1 | 2023-12-18 10:04:40+00:00 | 12k |
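The `if backbone is None:` branch above is the defaulting pattern YOLOX uses for its components: callers may inject custom modules, and anything left as `None` falls back to a stock implementation. A standalone sketch of the pattern (the class names below are placeholders, not the real YOLOX modules):

```python
class DefaultBackbone:
    name = "pafpn"  # stands in for YOLOPAFPN

class DefaultHead:
    name = "yolox-head"  # stands in for YOLOXHead

class Detector:
    def __init__(self, backbone=None, head=None):
        # Fall back to stock components when the caller passes None.
        if backbone is None:
            backbone = DefaultBackbone()
        if head is None:
            head = DefaultHead()
        self.backbone = backbone
        self.head = head

d = Detector()
print(d.backbone.name, d.head.name)  # pafpn yolox-head
```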
liuhuang31/HiFTNet-sr | train.py | [
{
"identifier": "AttrDict",
"path": "env.py",
"snippet": "class AttrDict(dict):\n def __init__(self, *args, **kwargs):\n super(AttrDict, self).__init__(*args, **kwargs)\n self.__dict__ = self"
},
{
"identifier": "build_env",
"path": "env.py",
"snippet": "def build_env(co... | import warnings
import itertools
import os
import time
import argparse
import json
import torch
import torch.nn.functional as F
import torch.multiprocessing as mp
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DistributedSampler, DataLoader
from torch.distributed import init_process_group
from torch.nn.parallel import DistributedDataParallel
from env import AttrDict, build_env
from meldataset import MelDataset, mel_spectrogram, get_dataset_filelist
from models import Generator, MultiPeriodDiscriminator, MultiResSpecDiscriminator, feature_loss, generator_loss,\
discriminator_loss, discriminator_TPRLS_loss, generator_TPRLS_loss
from utils import plot_spectrogram, scan_checkpoint, load_checkpoint, save_checkpoint
from stft import TorchSTFT
from Utils.JDC.model import JDCNet | 8,279 | stft = TorchSTFT(filter_length=h.gen_istft_n_fft, hop_length=h.gen_istft_hop_size, win_length=h.gen_istft_n_fft).to(device)
if rank == 0:
print(generator)
os.makedirs(a.checkpoint_path, exist_ok=True)
print("checkpoints directory : ", a.checkpoint_path)
if os.path.isdir(a.checkpoint_path):
cp_g = scan_checkpoint(a.checkpoint_path, 'g_')
cp_do = scan_checkpoint(a.checkpoint_path, 'do_')
steps = 0
if cp_g is None or cp_do is None:
state_dict_do = None
last_epoch = -1
else:
state_dict_g = load_checkpoint(cp_g, device)
state_dict_do = load_checkpoint(cp_do, device)
generator.load_state_dict(state_dict_g['generator'])
mpd.load_state_dict(state_dict_do['mpd'])
msd.load_state_dict(state_dict_do['msd'])
steps = state_dict_do['steps'] + 1
last_epoch = state_dict_do['epoch']
if h.num_gpus > 1:
generator = DistributedDataParallel(generator, device_ids=[rank], find_unused_parameters=True).to(device)
mpd = DistributedDataParallel(mpd, device_ids=[rank]).to(device)
msd = DistributedDataParallel(msd, device_ids=[rank]).to(device)
optim_g = torch.optim.AdamW(generator.parameters(), h.learning_rate, betas=[h.adam_b1, h.adam_b2])
optim_d = torch.optim.AdamW(itertools.chain(msd.parameters(), mpd.parameters()),
h.learning_rate, betas=[h.adam_b1, h.adam_b2])
if state_dict_do is not None:
optim_g.load_state_dict(state_dict_do['optim_g'])
optim_d.load_state_dict(state_dict_do['optim_d'])
scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=h.lr_decay, last_epoch=last_epoch)
scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=h.lr_decay, last_epoch=last_epoch)
training_filelist, validation_filelist = get_dataset_filelist(a)
trainset = MelDataset(training_filelist, h.segment_size, h.n_fft, h.num_mels,
h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, n_cache_reuse=0,
shuffle=False if h.num_gpus > 1 else True, fmax_loss=h.fmax_for_loss, device=device,
fine_tuning=a.fine_tuning, base_mels_path=a.input_mels_dir)
train_sampler = DistributedSampler(trainset) if h.num_gpus > 1 else None
train_loader = DataLoader(trainset, num_workers=h.num_workers, shuffle=False,
sampler=train_sampler,
batch_size=h.batch_size,
pin_memory=True,
drop_last=True)
if rank == 0:
validset = MelDataset(validation_filelist, h.segment_size, h.n_fft, h.num_mels,
h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, False, False, n_cache_reuse=0,
fmax_loss=h.fmax_for_loss, device=device, fine_tuning=a.fine_tuning,
base_mels_path=a.input_mels_dir)
validation_loader = DataLoader(validset, num_workers=1, shuffle=False,
sampler=None,
batch_size=1,
pin_memory=True,
drop_last=True)
sw = SummaryWriter(os.path.join(a.checkpoint_path, 'logs'))
generator.train()
mpd.train()
msd.train()
for epoch in range(max(0, last_epoch), a.training_epochs):
if rank == 0:
start = time.time()
print("Epoch: {}".format(epoch+1))
if h.num_gpus > 1:
train_sampler.set_epoch(epoch)
for i, batch in enumerate(train_loader):
if rank == 0:
start_b = time.time()
x, y, _, y_mel = batch
x = torch.autograd.Variable(x.to(device, non_blocking=True))
y = torch.autograd.Variable(y.to(device, non_blocking=True))
y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True))
y = y.unsqueeze(1)
# y_g_hat = generator(x)
spec, phase = generator(x)
y_g_hat = stft.inverse(spec, phase)
y_g_hat_mel = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size,
h.fmin, h.fmax_for_loss)
optim_d.zero_grad()
# MPD
y_df_hat_r, y_df_hat_g, _, _ = mpd(y, y_g_hat.detach())
loss_disc_f, losses_disc_f_r, losses_disc_f_g = discriminator_loss(y_df_hat_r, y_df_hat_g)
loss_disc_f += discriminator_TPRLS_loss(y_df_hat_r, y_df_hat_g)
# MSD
y_ds_hat_r, y_ds_hat_g, _, _ = msd(y, y_g_hat.detach())
loss_disc_s, losses_disc_s_r, losses_disc_s_g = discriminator_loss(y_ds_hat_r, y_ds_hat_g)
loss_disc_s += discriminator_TPRLS_loss(y_ds_hat_r, y_ds_hat_g)
loss_disc_all = loss_disc_s + loss_disc_f
loss_disc_all.backward()
optim_d.step()
# Generator
optim_g.zero_grad()
# L1 Mel-Spectrogram Loss
loss_mel = F.l1_loss(y_mel, y_g_hat_mel) * 45
y_df_hat_r, y_df_hat_g, fmap_f_r, fmap_f_g = mpd(y, y_g_hat)
y_ds_hat_r, y_ds_hat_g, fmap_s_r, fmap_s_g = msd(y, y_g_hat)
| warnings.simplefilter(action='ignore', category=FutureWarning)
torch.backends.cudnn.benchmark = True
def train(rank, a, h):
if h.num_gpus > 1:
init_process_group(backend=h.dist_config['dist_backend'], init_method=h.dist_config['dist_url'],
world_size=h.dist_config['world_size'] * h.num_gpus, rank=rank)
torch.cuda.manual_seed(h.seed)
device = torch.device('cuda:{:d}'.format(rank))
F0_model = JDCNet(num_class=1, seq_len=192)
params = torch.load(h.F0_path)['model']
F0_model.load_state_dict(params)
generator = Generator(h, F0_model).to(device)
mpd = MultiPeriodDiscriminator().to(device)
msd = MultiResSpecDiscriminator().to(device)
stft = TorchSTFT(filter_length=h.gen_istft_n_fft, hop_length=h.gen_istft_hop_size, win_length=h.gen_istft_n_fft).to(device)
if rank == 0:
print(generator)
os.makedirs(a.checkpoint_path, exist_ok=True)
print("checkpoints directory : ", a.checkpoint_path)
if os.path.isdir(a.checkpoint_path):
cp_g = scan_checkpoint(a.checkpoint_path, 'g_')
cp_do = scan_checkpoint(a.checkpoint_path, 'do_')
steps = 0
if cp_g is None or cp_do is None:
state_dict_do = None
last_epoch = -1
else:
state_dict_g = load_checkpoint(cp_g, device)
state_dict_do = load_checkpoint(cp_do, device)
generator.load_state_dict(state_dict_g['generator'])
mpd.load_state_dict(state_dict_do['mpd'])
msd.load_state_dict(state_dict_do['msd'])
steps = state_dict_do['steps'] + 1
last_epoch = state_dict_do['epoch']
if h.num_gpus > 1:
generator = DistributedDataParallel(generator, device_ids=[rank], find_unused_parameters=True).to(device)
mpd = DistributedDataParallel(mpd, device_ids=[rank]).to(device)
msd = DistributedDataParallel(msd, device_ids=[rank]).to(device)
optim_g = torch.optim.AdamW(generator.parameters(), h.learning_rate, betas=[h.adam_b1, h.adam_b2])
optim_d = torch.optim.AdamW(itertools.chain(msd.parameters(), mpd.parameters()),
h.learning_rate, betas=[h.adam_b1, h.adam_b2])
if state_dict_do is not None:
optim_g.load_state_dict(state_dict_do['optim_g'])
optim_d.load_state_dict(state_dict_do['optim_d'])
scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=h.lr_decay, last_epoch=last_epoch)
scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=h.lr_decay, last_epoch=last_epoch)
training_filelist, validation_filelist = get_dataset_filelist(a)
trainset = MelDataset(training_filelist, h.segment_size, h.n_fft, h.num_mels,
h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, n_cache_reuse=0,
shuffle=False if h.num_gpus > 1 else True, fmax_loss=h.fmax_for_loss, device=device,
fine_tuning=a.fine_tuning, base_mels_path=a.input_mels_dir)
train_sampler = DistributedSampler(trainset) if h.num_gpus > 1 else None
train_loader = DataLoader(trainset, num_workers=h.num_workers, shuffle=False,
sampler=train_sampler,
batch_size=h.batch_size,
pin_memory=True,
drop_last=True)
if rank == 0:
validset = MelDataset(validation_filelist, h.segment_size, h.n_fft, h.num_mels,
h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, False, False, n_cache_reuse=0,
fmax_loss=h.fmax_for_loss, device=device, fine_tuning=a.fine_tuning,
base_mels_path=a.input_mels_dir)
validation_loader = DataLoader(validset, num_workers=1, shuffle=False,
sampler=None,
batch_size=1,
pin_memory=True,
drop_last=True)
sw = SummaryWriter(os.path.join(a.checkpoint_path, 'logs'))
generator.train()
mpd.train()
msd.train()
for epoch in range(max(0, last_epoch), a.training_epochs):
if rank == 0:
start = time.time()
print("Epoch: {}".format(epoch+1))
if h.num_gpus > 1:
train_sampler.set_epoch(epoch)
for i, batch in enumerate(train_loader):
if rank == 0:
start_b = time.time()
x, y, _, y_mel = batch
x = torch.autograd.Variable(x.to(device, non_blocking=True))
y = torch.autograd.Variable(y.to(device, non_blocking=True))
y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True))
y = y.unsqueeze(1)
# y_g_hat = generator(x)
spec, phase = generator(x)
y_g_hat = stft.inverse(spec, phase)
y_g_hat_mel = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size,
h.fmin, h.fmax_for_loss)
optim_d.zero_grad()
# MPD
y_df_hat_r, y_df_hat_g, _, _ = mpd(y, y_g_hat.detach())
loss_disc_f, losses_disc_f_r, losses_disc_f_g = discriminator_loss(y_df_hat_r, y_df_hat_g)
loss_disc_f += discriminator_TPRLS_loss(y_df_hat_r, y_df_hat_g)
# MSD
y_ds_hat_r, y_ds_hat_g, _, _ = msd(y, y_g_hat.detach())
loss_disc_s, losses_disc_s_r, losses_disc_s_g = discriminator_loss(y_ds_hat_r, y_ds_hat_g)
loss_disc_s += discriminator_TPRLS_loss(y_ds_hat_r, y_ds_hat_g)
loss_disc_all = loss_disc_s + loss_disc_f
loss_disc_all.backward()
optim_d.step()
# Generator
optim_g.zero_grad()
# L1 Mel-Spectrogram Loss
loss_mel = F.l1_loss(y_mel, y_g_hat_mel) * 45
y_df_hat_r, y_df_hat_g, fmap_f_r, fmap_f_g = mpd(y, y_g_hat)
y_ds_hat_r, y_ds_hat_g, fmap_s_r, fmap_s_g = msd(y, y_g_hat) | loss_fm_f = feature_loss(fmap_f_r, fmap_f_g) | 8 | 2023-12-16 03:53:55+00:00 | 12k |
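The two `ExponentialLR` schedulers above multiply both optimizers' learning rates by a constant factor after each epoch, i.e. lr_n = lr_0 * gamma**n. A small sketch of that schedule (the base rate and decay factor below are illustrative, not taken from the HiFTNet config):

```python
def exponential_lr(base_lr, gamma, epoch):
    """Learning rate after `epoch` multiplicative decay steps."""
    return base_lr * gamma ** epoch

# First three epochs of an illustrative schedule.
lrs = [exponential_lr(2e-4, 0.999, e) for e in range(3)]
print(lrs)
```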
UnbSky/Hanabi-AI-Assitant | main_connect.py | [
{
"identifier": "AIWindow",
"path": "game_ui.py",
"snippet": "class AIWindow(QMainWindow, Ui_AIUI):\n def __init__(self, url, cookie, model_data=None):\n super().__init__()\n self.setupUi(self)\n #self.setFixedSize(1300, 1200)\n self.setWindowTitle(\"HanabiAIAssitant\")\n\... | import sys
import json
import requests
from PyQt5 import QtWidgets, QtCore
from game_ui import AIWindow
from play_util import load_model | 8,746 | def printf(*args):
print(*args, flush=True)
# Imports (3rd-party)
# Imports (local application)
# Authenticate, login to the WebSocket server, and run forever.
def login_to_hanab(username, password):
if username == "":
printf('error: "HANABI_USERNAME" is blank in the ".env" file')
sys.exit(1)
if password == "":
printf('error: "HANABI_PASSWORD" is blank in the ".env" file')
sys.exit(1)
# The official site uses HTTPS.
protocol = "https"
ws_protocol = "wss"
host = "hanab.live"
path = "/login"
ws_path = "/ws"
url = protocol + "://" + host + path
ws_url = ws_protocol + "://" + host + ws_path
printf('Authenticating to "' + url + '" with a username of "' + username + '".')
resp = requests.post(
url,
{
"username": username,
"password": password,
# This is normally supposed to be the version of the JavaScript
# client, but the server will also accept "bot" as a valid version.
"version": "bot",
},
)
# Handle failed authentication and other errors.
if resp.status_code != 200:
printf("Authentication failed:")
printf(resp.text)
sys.exit(1)
# Scrape the cookie from the response.
cookie = ""
for header in resp.headers.items():
if header[0] == "Set-Cookie":
cookie = header[1]
break
if cookie == "":
printf("Failed to parse the cookie from the authentication response headers:")
printf(resp.headers)
sys.exit(1)
return ws_url, cookie
def main():
with open('user_config.json', 'r') as json_file:
user_args = json.load(json_file)
username = user_args["username"]
password = user_args["password"]
model_name = user_args["model"]
printf("Load Model")
| def printf(*args):
print(*args, flush=True)
# Imports (3rd-party)
# Imports (local application)
# Authenticate, login to the WebSocket server, and run forever.
def login_to_hanab(username, password):
if username == "":
printf('error: "HANABI_USERNAME" is blank in the ".env" file')
sys.exit(1)
if password == "":
printf('error: "HANABI_PASSWORD" is blank in the ".env" file')
sys.exit(1)
# The official site uses HTTPS.
protocol = "https"
ws_protocol = "wss"
host = "hanab.live"
path = "/login"
ws_path = "/ws"
url = protocol + "://" + host + path
ws_url = ws_protocol + "://" + host + ws_path
printf('Authenticating to "' + url + '" with a username of "' + username + '".')
resp = requests.post(
url,
{
"username": username,
"password": password,
# This is normally supposed to be the version of the JavaScript
# client, but the server will also accept "bot" as a valid version.
"version": "bot",
},
)
# Handle failed authentication and other errors.
if resp.status_code != 200:
printf("Authentication failed:")
printf(resp.text)
sys.exit(1)
# Scrape the cookie from the response.
cookie = ""
for header in resp.headers.items():
if header[0] == "Set-Cookie":
cookie = header[1]
break
if cookie == "":
printf("Failed to parse the cookie from the authentication response headers:")
printf(resp.headers)
sys.exit(1)
return ws_url, cookie
def main():
with open('user_config.json', 'r') as json_file:
user_args = json.load(json_file)
username = user_args["username"]
password = user_args["password"]
model_name = user_args["model"]
printf("Load Model") | model, action_dict_toact, action_dict_toid, output_action_dict_toact, output_action_dict_toid, device = load_model(model_name) | 1 | 2023-12-17 03:57:47+00:00 | 12k |
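The header loop in `login_to_hanab` above scrapes the session cookie out of the HTTP response. That scan can be sketched independently of `requests` (the header tuples below are made up for the example):

```python
def find_cookie(headers):
    """Return the first Set-Cookie value from (name, value) pairs, or ''."""
    for name, value in headers:
        if name == "Set-Cookie":
            return value
    return ""

demo_headers = [
    ("Content-Type", "text/plain"),
    ("Set-Cookie", "hanabi.sid=abc123"),
]
print(find_cookie(demo_headers))  # hanabi.sid=abc123
```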
m-abr/FCPCodebase | math_ops/Matrix_4x4.py | [
{
"identifier": "Math_Ops",
"path": "math_ops/Math_Ops.py",
"snippet": "class Math_Ops():\n '''\n This class provides general mathematical operations that are not directly available through numpy \n '''\n \n @staticmethod\n def deg_sph2cart(spherical_vec):\n ''' Converts SimSpark'... | from math import asin, atan2, pi, sqrt
from math_ops.Math_Ops import Math_Ops as M
from math_ops.Matrix_3x3 import Matrix_3x3
import numpy as np | 7,569 |
class Matrix_4x4():
def __init__(self, matrix = None) -> None:
'''
Constructor examples:
a = Matrix_4x4( ) # create identity matrix
b = Matrix_4x4( [[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4]] ) # manually initialize matrix
c = Matrix_4x4( [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4] ) # manually initialize matrix
d = Matrix_4x4( b ) # copy constructor
'''
if matrix is None:
self.m = np.identity(4)
elif type(matrix) == Matrix_4x4:
self.m = np.copy(matrix.m)
|
class Matrix_4x4():
def __init__(self, matrix = None) -> None:
'''
Constructor examples:
a = Matrix_4x4( ) # create identity matrix
b = Matrix_4x4( [[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4]] ) # manually initialize matrix
c = Matrix_4x4( [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4] ) # manually initialize matrix
d = Matrix_4x4( b ) # copy constructor
'''
if matrix is None:
self.m = np.identity(4)
elif type(matrix) == Matrix_4x4:
self.m = np.copy(matrix.m) | elif type(matrix) == Matrix_3x3: | 1 | 2023-12-16 23:40:23+00:00 | 12k |
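The `Matrix_4x4` constructor above dispatches on its argument type: no argument builds an identity matrix, and another `Matrix_4x4` is copied so the two instances do not share storage. A numpy-free sketch of that dispatch using nested lists (`Mat4` is a hypothetical stand-in, not the FCPCodebase class):

```python
class Mat4:
    def __init__(self, matrix=None):
        if matrix is None:
            # 4x4 identity matrix
            self.m = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
        elif isinstance(matrix, Mat4):
            # Copy constructor: duplicate each row so edits do not alias.
            self.m = [row[:] for row in matrix.m]
        else:
            raise ValueError("argument must be None or a Mat4")

a = Mat4()
b = Mat4(a)
b.m[0][0] = 5.0  # mutating the copy must not touch the original
print(a.m[0][0], b.m[0][0])  # 1.0 5.0
```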
Sam-Izdat/tinycio | src/tinycio/util/colorutil.py | [
{
"identifier": "Float2",
"path": "src/tinycio/numerics/vector.py",
"snippet": "class Float2(np.ndarray):\n \"\"\"\n Float2 type using numpy.ndarray.\n \"\"\"\n def __new__(cls, *args):\n if len(args) == 1:\n if isinstance(args[0], list) or isinstance(args[0], tuple):\n ... | import typing
import torch
import numpy as np
from typing import Union
from ..numerics import Float2, Float3 | 9,916 | from __future__ import annotations
def srgb_luminance(im_srgb:Union[torch.Tensor, ColorImage]) -> torch.Tensor:
"""
Return relative luminance of linear sRGB image.
:param im_srgb: [C=3, H, W] color image tensor in sRGB color space
:type im_srgb: torch.Tensor | ColorImage
:return: [C=1, H, W] image tensor
"""
lum_r, lum_g, lum_b = 0.2126, 0.7152, 0.0722
return lum_r * im_srgb[0:1,...] + lum_g * im_srgb[1:2,...] + lum_b * im_srgb[2:3,...]
def apply_gamma(im:Union[torch.Tensor, ColorImage], gamma:float) -> torch.Tensor:
"""
Apply arbitrary gamma correction.
:param im: Image tensor
:type im: torch.Tensor | ColorImage
:param gamma: Gamma correction (should be in the range [0.1, 10.0])
:return: Gamma-corrected image tensor
"""
if gamma == 1.: return im
assert 0.1 <= gamma <= 10.0, "gamma value should be in range [0.1, 10.0]"
im = torch.pow(im, gamma)
return im
def apply_hue_oklab(im_oklab:Union[torch.Tensor, ColorImage], hue_delta:float) -> torch.Tensor:
"""
Manually shift hue of an image by a -1 to +1 delta value.
:param im_oklab: Image tensor in OKLAB color space
:type im_oklab: torch.Tensor | ColorImage
:param hue_delta: Hue shift value in the range [-1., 1.]
:return: Image tensor in OKLAB color space with adjusted hue
"""
assert -1. <= hue_delta <= 1., "hue_delta value should be in range [-1., 1.]"
L, a, b = im_oklab[0:1], im_oklab[1:2], im_oklab[2:3]
hue_delta = ((hue_delta * 0.5) % 1.) * 2. * torch.pi
# Calculate angle and magnitude in the a-b plane
angle = torch.atan2(b, a)
magnitude = torch.sqrt(a**2 + b**2)
# Apply hue correction
angle += hue_delta
# Convert back to Cartesian coordinates
a_corrected = magnitude * torch.cos(angle)
b_corrected = magnitude * torch.sin(angle)
corrected = torch.cat([L, a_corrected, b_corrected], dim=0)
return corrected
| from __future__ import annotations
def srgb_luminance(im_srgb:Union[torch.Tensor, ColorImage]) -> torch.Tensor:
"""
Return relative luminance of linear sRGB image.
:param im_srgb: [C=3, H, W] color image tensor in sRGB color space
:type im_srgb: torch.Tensor | ColorImage
:return: [C=1, H, W] image tensor
"""
lum_r, lum_g, lum_b = 0.2126, 0.7152, 0.0722
return lum_r * im_srgb[0:1,...] + lum_g * im_srgb[1:2,...] + lum_b * im_srgb[2:3,...]
def apply_gamma(im:Union[torch.Tensor, ColorImage], gamma:float) -> torch.Tensor:
"""
Apply arbitrary gamma correction.
:param im: Image tensor
:type im: torch.Tensor | ColorImage
:param gamma: Gamma correction (should be in the range [0.1, 10.0])
:return: Gamma-corrected image tensor
"""
if gamma == 1.: return im
assert 0.1 <= gamma <= 10.0, "gamma value should be in range [0.1, 10.0]"
im = torch.pow(im, gamma)
return im
def apply_hue_oklab(im_oklab:Union[torch.Tensor, ColorImage], hue_delta:float) -> torch.Tensor:
"""
Manually shift hue of an image by a -1 to +1 delta value.
:param im_oklab: Image tensor in OKLAB color space
:type im_oklab: torch.Tensor | ColorImage
:param hue_delta: Hue shift value in the range [-1., 1.]
:return: Image tensor in OKLAB color space with adjusted hue
"""
assert -1. <= hue_delta <= 1., "hue_delta value should be in range [-1., 1.]"
L, a, b = im_oklab[0:1], im_oklab[1:2], im_oklab[2:3]
hue_delta = ((hue_delta * 0.5) % 1.) * 2. * torch.pi
# Calculate angle and magnitude in the a-b plane
angle = torch.atan2(b, a)
magnitude = torch.sqrt(a**2 + b**2)
# Apply hue correction
angle += hue_delta
# Convert back to Cartesian coordinates
a_corrected = magnitude * torch.cos(angle)
b_corrected = magnitude * torch.sin(angle)
corrected = torch.cat([L, a_corrected, b_corrected], dim=0)
return corrected
| def col_hsv_to_rgb(hsv:Union[Float3, Color]) -> Float3: | 1 | 2023-12-15 15:39:08+00:00 | 12k |
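`apply_hue_oklab` above rotates the chroma components (a, b) by an angle: convert to polar form, add the hue offset to the angle, convert back. The per-pixel geometry reduces to a plain 2D rotation, sketched here with scalar floats instead of torch tensors:

```python
import math

def rotate_ab(a, b, angle):
    """Rotate the chroma point (a, b) by `angle` radians about the origin."""
    magnitude = math.hypot(a, b)      # sqrt(a**2 + b**2)
    theta = math.atan2(b, a) + angle  # original angle plus the hue shift
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# A quarter-turn moves pure "a" chroma onto the "b" axis.
a2, b2 = rotate_ab(1.0, 0.0, math.pi / 2)
print(round(a2, 6), round(b2, 6))  # 0.0 1.0
```

The magnitude (chroma strength) is preserved; only the hue angle changes, which is why the lightness channel `L` is passed through untouched above.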
quocanh34/magic-animate-modified | magicanimate/models/unet_controlnet.py | [
{
"identifier": "CrossAttnDownBlock3D",
"path": "magicanimate/models/unet_3d_blocks.py",
"snippet": "class CrossAttnDownBlock3D(nn.Module):\n def __init__(\n self,\n in_channels: int,\n out_channels: int,\n temb_channels: int,\n dropout: float = 0.0,\n num_la... | from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.models.modeling_utils import ModelMixin
from diffusers.utils import BaseOutput, logging
from diffusers.models.embeddings import TimestepEmbedding, Timesteps
from magicanimate.models.unet_3d_blocks import (
CrossAttnDownBlock3D,
CrossAttnUpBlock3D,
DownBlock3D,
UNetMidBlock3DCrossAttn,
UpBlock3D,
get_down_block,
get_up_block,
)
from .resnet import InflatedConv3d
from diffusers.utils import WEIGHTS_NAME
import os
import json
import torch
import torch.nn as nn
import torch.utils.checkpoint | 9,105 | # up
reversed_block_out_channels = list(reversed(block_out_channels))
reversed_attention_head_dim = list(reversed(attention_head_dim))
only_cross_attention = list(reversed(only_cross_attention))
output_channel = reversed_block_out_channels[0]
for i, up_block_type in enumerate(up_block_types):
res = 2 ** (3 - i)
is_final_block = i == len(block_out_channels) - 1
prev_output_channel = output_channel
output_channel = reversed_block_out_channels[i]
input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
# add upsample block for all BUT final layer
if not is_final_block:
add_upsample = True
self.num_upsamplers += 1
else:
add_upsample = False
up_block = get_up_block(
up_block_type,
num_layers=layers_per_block + 1,
in_channels=input_channel,
out_channels=output_channel,
prev_output_channel=prev_output_channel,
temb_channels=time_embed_dim,
add_upsample=add_upsample,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=reversed_attention_head_dim[i],
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_motion_module=use_motion_module and (res in motion_module_resolutions),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.up_blocks.append(up_block)
prev_output_channel = output_channel
# out
self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
self.conv_act = nn.SiLU()
self.conv_out = InflatedConv3d(block_out_channels[0], out_channels, kernel_size=3, padding=1)
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
`"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
must be a multiple of `slice_size`.
"""
sliceable_head_dims = []
def fn_recursive_retrieve_slicable_dims(module: torch.nn.Module):
if hasattr(module, "set_attention_slice"):
sliceable_head_dims.append(module.sliceable_head_dim)
for child in module.children():
fn_recursive_retrieve_slicable_dims(child)
# retrieve number of attention layers
for module in self.children():
fn_recursive_retrieve_slicable_dims(module)
num_slicable_layers = len(sliceable_head_dims)
if slice_size == "auto":
# half the attention head size is usually a good trade-off between
# speed and memory
slice_size = [dim // 2 for dim in sliceable_head_dims]
elif slice_size == "max":
# make smallest slice possible
slice_size = num_slicable_layers * [1]
slice_size = num_slicable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
if len(slice_size) != len(sliceable_head_dims):
raise ValueError(
f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
)
for i in range(len(slice_size)):
size = slice_size[i]
dim = sliceable_head_dims[i]
if size is not None and size > dim:
raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
# Recursively walk through all the children.
# Any children which exposes the set_attention_slice method
# gets the message
def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
if hasattr(module, "set_attention_slice"):
module.set_attention_slice(slice_size.pop())
for child in module.children():
fn_recursive_set_attention_slice(child, slice_size)
reversed_slice_size = list(reversed(slice_size))
for module in self.children():
fn_recursive_set_attention_slice(module, reversed_slice_size)
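The `slice_size` handling above first normalizes the argument ("auto", "max", an int, or a list) into one slice size per sliceable layer, then validates each size against its head dimension. That resolution step can be sketched on its own, without the `nn.Module` recursion (`resolve_slice_size` is a hypothetical helper, not part of the model):

```python
def resolve_slice_size(slice_size, sliceable_head_dims):
    """Normalize slice_size into one entry per sliceable attention layer."""
    n = len(sliceable_head_dims)
    if slice_size == "auto":
        # Half of each attention-head dimension: a memory/speed trade-off.
        return [dim // 2 for dim in sliceable_head_dims]
    if slice_size == "max":
        # Smallest possible slices: one at a time.
        return n * [1]
    sizes = slice_size if isinstance(slice_size, list) else n * [slice_size]
    if len(sizes) != n:
        raise ValueError(f"expected {n} slice sizes, got {len(sizes)}")
    for size, dim in zip(sizes, sliceable_head_dims):
        if size is not None and size > dim:
            raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
    return sizes

print(resolve_slice_size("auto", [8, 16]))  # [4, 8]
```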
def _set_gradient_checkpointing(self, module, value=False):
| # *************************************************************************
# This file may have been modified by Bytedance Inc. (“Bytedance Inc.'s Mo-
# difications”). All Bytedance Inc.'s Modifications are Copyright (2023) B-
# ytedance Inc..
# *************************************************************************
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
@dataclass
class UNet3DConditionOutput(BaseOutput):
sample: torch.FloatTensor
class UNet3DConditionModel(ModelMixin, ConfigMixin):
_supports_gradient_checkpointing = True
@register_to_config
def __init__(
self,
sample_size: Optional[int] = None,
in_channels: int = 4,
out_channels: int = 4,
center_input_sample: bool = False,
flip_sin_to_cos: bool = True,
freq_shift: int = 0,
down_block_types: Tuple[str] = (
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
mid_block_type: str = "UNetMidBlock3DCrossAttn",
up_block_types: Tuple[str] = (
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D"
),
only_cross_attention: Union[bool, Tuple[bool]] = False,
block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
layers_per_block: int = 2,
downsample_padding: int = 1,
mid_block_scale_factor: float = 1,
act_fn: str = "silu",
norm_num_groups: int = 32,
norm_eps: float = 1e-5,
cross_attention_dim: int = 1280,
attention_head_dim: Union[int, Tuple[int]] = 8,
dual_cross_attention: bool = False,
use_linear_projection: bool = False,
class_embed_type: Optional[str] = None,
num_class_embeds: Optional[int] = None,
upcast_attention: bool = False,
resnet_time_scale_shift: str = "default",
# Additional
use_motion_module = False,
motion_module_resolutions = ( 1,2,4,8 ),
motion_module_mid_block = False,
motion_module_decoder_only = False,
motion_module_type = None,
motion_module_kwargs = {},
unet_use_cross_frame_attention = None,
unet_use_temporal_attention = None,
):
super().__init__()
self.sample_size = sample_size
time_embed_dim = block_out_channels[0] * 4
# input
self.conv_in = InflatedConv3d(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1))
# time
self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift)
timestep_input_dim = block_out_channels[0]
self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim)
# class embedding
if class_embed_type is None and num_class_embeds is not None:
self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim)
elif class_embed_type == "timestep":
self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim)
elif class_embed_type == "identity":
self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim)
else:
self.class_embedding = None
self.down_blocks = nn.ModuleList([])
self.mid_block = None
self.up_blocks = nn.ModuleList([])
if isinstance(only_cross_attention, bool):
only_cross_attention = [only_cross_attention] * len(down_block_types)
if isinstance(attention_head_dim, int):
attention_head_dim = (attention_head_dim,) * len(down_block_types)
# down
output_channel = block_out_channels[0]
for i, down_block_type in enumerate(down_block_types):
res = 2 ** i
input_channel = output_channel
output_channel = block_out_channels[i]
is_final_block = i == len(block_out_channels) - 1
down_block = get_down_block(
down_block_type,
num_layers=layers_per_block,
in_channels=input_channel,
out_channels=output_channel,
temb_channels=time_embed_dim,
add_downsample=not is_final_block,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=attention_head_dim[i],
downsample_padding=downsample_padding,
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_motion_module=use_motion_module and (res in motion_module_resolutions) and (not motion_module_decoder_only),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.down_blocks.append(down_block)
# mid
if mid_block_type == "UNetMidBlock3DCrossAttn":
self.mid_block = UNetMidBlock3DCrossAttn(
in_channels=block_out_channels[-1],
temb_channels=time_embed_dim,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
output_scale_factor=mid_block_scale_factor,
resnet_time_scale_shift=resnet_time_scale_shift,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=attention_head_dim[-1],
resnet_groups=norm_num_groups,
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
upcast_attention=upcast_attention,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_motion_module=use_motion_module and motion_module_mid_block,
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
else:
raise ValueError(f"unknown mid_block_type : {mid_block_type}")
# count how many layers upsample the videos
self.num_upsamplers = 0
# up
reversed_block_out_channels = list(reversed(block_out_channels))
reversed_attention_head_dim = list(reversed(attention_head_dim))
only_cross_attention = list(reversed(only_cross_attention))
output_channel = reversed_block_out_channels[0]
for i, up_block_type in enumerate(up_block_types):
res = 2 ** (3 - i)
is_final_block = i == len(block_out_channels) - 1
prev_output_channel = output_channel
output_channel = reversed_block_out_channels[i]
input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
# add upsample block for all BUT final layer
if not is_final_block:
add_upsample = True
self.num_upsamplers += 1
else:
add_upsample = False
up_block = get_up_block(
up_block_type,
num_layers=layers_per_block + 1,
in_channels=input_channel,
out_channels=output_channel,
prev_output_channel=prev_output_channel,
temb_channels=time_embed_dim,
add_upsample=add_upsample,
resnet_eps=norm_eps,
resnet_act_fn=act_fn,
resnet_groups=norm_num_groups,
cross_attention_dim=cross_attention_dim,
attn_num_head_channels=reversed_attention_head_dim[i],
dual_cross_attention=dual_cross_attention,
use_linear_projection=use_linear_projection,
only_cross_attention=only_cross_attention[i],
upcast_attention=upcast_attention,
resnet_time_scale_shift=resnet_time_scale_shift,
unet_use_cross_frame_attention=unet_use_cross_frame_attention,
unet_use_temporal_attention=unet_use_temporal_attention,
use_motion_module=use_motion_module and (res in motion_module_resolutions),
motion_module_type=motion_module_type,
motion_module_kwargs=motion_module_kwargs,
)
self.up_blocks.append(up_block)
prev_output_channel = output_channel
# out
self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps)
self.conv_act = nn.SiLU()
self.conv_out = InflatedConv3d(block_out_channels[0], out_channels, kernel_size=3, padding=1)
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
            `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
must be a multiple of `slice_size`.
"""
sliceable_head_dims = []
def fn_recursive_retrieve_slicable_dims(module: torch.nn.Module):
if hasattr(module, "set_attention_slice"):
sliceable_head_dims.append(module.sliceable_head_dim)
for child in module.children():
fn_recursive_retrieve_slicable_dims(child)
# retrieve number of attention layers
for module in self.children():
fn_recursive_retrieve_slicable_dims(module)
num_slicable_layers = len(sliceable_head_dims)
if slice_size == "auto":
# half the attention head size is usually a good trade-off between
# speed and memory
slice_size = [dim // 2 for dim in sliceable_head_dims]
elif slice_size == "max":
# make smallest slice possible
slice_size = num_slicable_layers * [1]
slice_size = num_slicable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
if len(slice_size) != len(sliceable_head_dims):
raise ValueError(
f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
)
for i in range(len(slice_size)):
size = slice_size[i]
dim = sliceable_head_dims[i]
if size is not None and size > dim:
raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
# Recursively walk through all the children.
        # Any child that exposes the set_attention_slice method
# gets the message
def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
if hasattr(module, "set_attention_slice"):
module.set_attention_slice(slice_size.pop())
for child in module.children():
fn_recursive_set_attention_slice(child, slice_size)
reversed_slice_size = list(reversed(slice_size))
for module in self.children():
fn_recursive_set_attention_slice(module, reversed_slice_size)
def _set_gradient_checkpointing(self, module, value=False): | if isinstance(module, (CrossAttnDownBlock3D, DownBlock3D, CrossAttnUpBlock3D, UpBlock3D)): | 4 | 2023-12-15 01:22:37+00:00 | 12k |
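The `set_attention_slice` method above resolves the user-supplied `slice_size` (`"auto"`, `"max"`, an int, or a list) into one slice size per sliceable attention layer, then validates the result. A minimal sketch of that resolution policy on plain lists — names here are illustrative, not part of diffusers:

```python
from typing import List, Union

def resolve_slice_sizes(
    sliceable_head_dims: List[int],
    slice_size: Union[str, int, List[int]] = "auto",
) -> List[int]:
    """Return one slice size per sliceable attention layer."""
    if slice_size == "auto":
        # half the attention head size: attention runs in two steps
        resolved = [dim // 2 for dim in sliceable_head_dims]
    elif slice_size == "max":
        # smallest possible slices: one at a time, maximum memory savings
        resolved = [1] * len(sliceable_head_dims)
    elif isinstance(slice_size, int):
        # broadcast a single int to every sliceable layer
        resolved = [slice_size] * len(sliceable_head_dims)
    else:
        resolved = list(slice_size)

    if len(resolved) != len(sliceable_head_dims):
        raise ValueError(
            f"Provided {len(resolved)} slice sizes for "
            f"{len(sliceable_head_dims)} attention layers."
        )
    for size, dim in zip(resolved, sliceable_head_dims):
        if size is not None and size > dim:
            raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
    return resolved
```

For example, `resolve_slice_sizes([8, 16, 32], "auto")` yields `[4, 8, 16]`, matching the "halve each head dim" branch in the method above.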
cvlab-yonsei/RankMixup | tools/train_net.py | [
{
"identifier": "Trainer",
"path": "calibrate/engine/trainer.py",
"snippet": "class Trainer:\n def __init__(self, cfg: DictConfig) -> None:\n self.cfg = cfg\n self.work_dir = self.cfg.work_dir\n self.device = torch.device(self.cfg.device)\n self.build_data_loader()\n ... | import os
import sys
import logging
import hydra
from omegaconf import DictConfig, OmegaConf
from omegaconf.omegaconf import open_dict
from calibrate.engine import Trainer, SegmentTrainer, NLPTrainer
from calibrate.utils import set_random_seed | 8,720 |
logger = logging.getLogger(__name__)
TRAINERS = {
"cv": Trainer,
"segment": SegmentTrainer,
"nlp": NLPTrainer,
}
@hydra.main(config_path="../configs", config_name="defaults")
def main(cfg: DictConfig):
logger.info("Launch command : ")
logger.info(" ".join(sys.argv))
with open_dict(cfg):
cfg.work_dir = os.getcwd()
logger.info("\n" + OmegaConf.to_yaml(cfg))
| set_random_seed( | 3 | 2023-12-17 13:53:18+00:00 | 12k |
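The entry point above dispatches on a task type (`cv`, `segment`, `nlp`) through the `TRAINERS` registry before handing the Hydra config to the chosen trainer. A toy sketch of the same registry-dispatch pattern, with a stand-in trainer class so it runs without hydra or the calibrate package (all names below are illustrative):

```python
class DummyTrainer:
    """Stand-in for Trainer/SegmentTrainer/NLPTrainer."""
    def __init__(self, cfg):
        self.cfg = cfg

    def run(self):
        return f"trained with {self.cfg['task']}"

# Registry mapping task names to trainer classes, as in train_net.py.
TRAINERS = {"cv": DummyTrainer, "segment": DummyTrainer, "nlp": DummyTrainer}

def launch(cfg):
    trainer_cls = TRAINERS[cfg["task"]]  # raises KeyError on unknown tasks
    return trainer_cls(cfg).run()
```

The registry keeps `main` free of per-task branching: adding a new trainer is a one-line dictionary entry.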
daihaojun554/biliscrapy | biliscrapy/views.py | [
{
"identifier": "BiliDanmu",
"path": "biliscrapy/models.py",
"snippet": "class BiliDanmu(models.Model):\n _id = models.CharField(max_length=255)\n cid = models.CharField(max_length=255)\n content = models.TextField()\n color = models.CharField(max_length=255)\n fontsize = models.IntegerFi... | import time
from django.core.paginator import Paginator
from django.shortcuts import render, redirect
from django.utils.timezone import make_aware
from .models import BiliDanmu, BiliComment, BiliVideo, Card
from .network.bilibili_danmu import *
from .network.bilibili_comment import Comments
from .network.bilibili_utils import bili_utils
from .network.bilibili_video import Video
from django.utils import timezone
from django.http import JsonResponse, HttpResponse | 7,471 |
# Create your views here.
utils = bili_utils()
bili_video = Video()
logger = logging.getLogger('log')
base_url = 'https://www.bilibili.com/video/'
def danmaku(request):
if request.method == 'POST':
        bv = request.POST.get('bv')  # Get the BV id or URL entered by the user
bvid = utils.bv_get(bv)
url = bv
context = {
'result': 'error',
'data': [],
'message': '请输入正确的链接地址或BV号!'
}
if bv.startswith("https://www.bilibili.com/video/BV") or bv.startswith("BV") or bv.startswith("bv"):
danmu = Danmu()
vv = BiliVideo.objects.filter(bvid=bvid).values()
cid = vv[0]['oid'] if vv else danmu.bv2cid(bv)
bvid_exists = BiliDanmu.objects.filter(cid=cid).exists()
if not bvid_exists:
logger.info("bvid_exists,不存在!!!")
                dates = danmu.get_available_dates(cid)  # Get all dates for which danmaku are available
                danmu.down_so_files(cid, dates)  # Download all danmaku .so files
                unique_danmakus = danmu.parse_so_to_json(cid, dates)  # Parse and save as JSON
                if unique_danmakus is None:
                    # dict.update() returns None, so it must not be passed to render()
                    context['message'] = '解析弹幕失败,请检查BV号是否正确!'
                    return render(request, 'danmaku.html', context)
danmu_objects = [
BiliDanmu(
_id=danmaku['_id'],
cid=cid,
content=danmaku['content'],
color=danmaku['color'],
fontsize=danmaku['fontsize'],
midHash=danmaku['midHash'],
mode=danmaku['mode'],
progress=danmaku['progress'],
ctime=make_aware(datetime.fromtimestamp(danmaku['ctime']))
)
for danmaku in unique_danmakus
]
BiliDanmu.objects.bulk_create(danmu_objects)
            # The danmaku did not exist before; count the newly stored records
danmaku_count = BiliDanmu.objects.filter(cid=cid).count()
print(danmaku_count)
try:
logger.info("try.....")
                # Try to update the video's danmaku-fetched status
logger.info(bvid)
video = BiliVideo.objects.get(bvid=bvid)
video.danmu_fetched = True
video.danmaku_count = danmaku_count
video.save()
except Exception as e:
logger.error("error~~~~~~~~~")
logger.error(e)
                # If the video record does not exist, create a new one
info = utils.get_info_by_bv(bvid)
logger.info("info---->{}".format(info))
if info is None:
return render(request, 'danmaku.html', context)
cid = utils.bv2cid(bvid)
logger.info(f'{cid}, cid')
video = BiliVideo(bvid=bvid,
avid=info['aid'],
oid=cid,
title=info['title'],
author=info['owner']['name'],
tag=info['tname'],
pubdate=make_aware(datetime.fromtimestamp(info['pubdate'])),
pic=info['pic'],
desc=info['desc'],
danmu_fetched=True,
danmaku_count=danmaku_count
                                   ) # mark danmaku as fetched
video.save()
logger.info("新视频信息已添加")
            # Query the database and return the results
danmakus = BiliDanmu.objects.filter(cid=cid).values().order_by('ctime')
            paginator = Paginator(danmakus, 15)  # 15 records per page
            page_number = request.POST.get('page') if request.POST.get('page') else 1  # page number parameter
            page_obj = paginator.get_page(page_number)  # records for that page
print(paginator.count)
context = {
"url": url,
'result': 'error',
'bvid': bv,
'total': paginator.count,
'data': page_obj,
'new_request': not bvid_exists,
}
if len(danmakus) > 0:
context['result'] = 'success'
return render(request, 'danmaku.html', context)
return render(request, 'danmaku.html')
def comment(request):
if request.method == 'POST':
        bv = request.POST.get('bv')  # Get the BV id or URL entered by the user
url = bv
context = {
'result': 'error',
'data': [],
'message': '请输入正确的链接地址或BV号!',
'cid': ''
}
c = Comments()
bv_ = utils.bv_get(bv) if bv.startswith("https://www.bilibili.com/video/BV") or bv.startswith(
"BV") or bv.startswith("bv") else bv
logger.info(f'bv_====>{bv_}')
vv = BiliVideo.objects.filter(bvid=bv_).values()
av = utils.bv2av(bv_)
av_count = 1
while av is None:
logger.info(f"av is None, retrying...{av_count}")
av_count += 1
av = utils.bv2av(bv_)
avid = vv[0]['avid'] if vv else av
logger.info(f"avid=====>{avid}")
if avid is None:
context = {
'result': 'error',
'data': [],
'message': 'b站服务器返回错误,请重新尝试'
}
return render(request, 'comment.html', context)
comments_exist = BiliComment.objects.filter(avid=avid).exists()
if not comments_exist:
comments = c.get_comments(bv)
comment_obj = [BiliComment(
avid=avid,
uname=cmt['uname'],
current_level=cmt['current_level'],
like=cmt['like'],
sex=cmt['sex'],
ctime=make_aware(datetime.fromtimestamp(cmt['ctime'])),
message=cmt['message']
) for cmt in comments]
BiliComment.objects.bulk_create(comment_obj)
bili_comment_count = BiliComment.objects.filter(avid=avid).count()
try:
                # Try to update the video's comment-fetched status
video = BiliVideo.objects.get(avid=avid)
video.comment_fetched = True
video.comment_count = bili_comment_count
video.save()
except BiliVideo.DoesNotExist:
                # If the video record does not exist, create a new one
info = utils.get_info_by_bv(bv_)
if info is None:
return render(request, 'comment.html', context)
cid = utils.bv2cid(bv_)
                # Keep retrying until a cid is obtained
cid_count = 1
while cid is None:
cid = utils.bv2cid(bv_)
logger.info(f'{cid}, cid,尝试了{cid_count}次')
cid_count += 1
time.sleep(3)
video = BiliVideo(avid=avid,
bvid=bv_,
oid=cid,
title=info['title'],
author=info['owner']['name'],
tag=info['tname'],
pubdate=make_aware(datetime.fromtimestamp(info['pubdate'])),
pic=info['pic'],
desc=info['desc'],
comment_fetched=True,
comment_count=bili_comment_count
                               ) # mark comments as fetched
video.save()
comments = BiliComment.objects.filter(avid=avid).values().order_by('ctime')
paginator = Paginator(comments, 15)
page_number = request.POST.get('page', 1)
page_obj = paginator.get_page(page_number)
context = {
"url": url,
'result': 'success',
'bvid': bv,
'total': paginator.count,
'data': page_obj,
"new_request": not comments_exist,
}
return render(request, 'comment.html', context)
return render(request, 'comment.html')
def reflash_cookie(request):
"""
    Refresh the Bilibili cookies
:param request:
:return:
"""
utils.get_bilibili_cookies()
return render(request, 'danmaku.html')
def generate_chart(request):
keyword = request.POST.get("keyword")
print(keyword)
"""
    Generate chart data
:param request:
:return:
"""
context = {
'message': 'fail',
'data': [],
'code': -1,
}
videos = BiliVideo.objects.all().values().order_by('pubdate')
    # Paginate: 6 videos per page
paginator = Paginator(videos, 6)
page_number = request.GET.get('page', 1)
page_obj = paginator.get_page(page_number)
if videos:
context['message'] = 'success'
context['data'] = page_obj
context['code'] = 0
return render(request, 'generate_chart.html', context)
def download_video(request):
context = {}
if request.method == 'POST':
bvid = request.POST.get('bvid')
print(bvid)
if not utils.check_url(bvid):
context['message'] = 'url 不合法!'
context['code'] = -1
return render(request, 'download_video.html', context)
url = base_url + bvid
info = bili_video.get_video_info(url)
        if not info:
            context['message'] = '获取视频信息失败!'
            context['code'] = -1
            # Without an early return, json.loads(None) below would raise
            return render(request, 'download_video.html', context)
        data = json.loads(info)
video_name = data[1]['videoData']['title']
v_urls = [i['baseUrl'] for i in data[0]['data']['dash']['video']]
a_urls = [i['baseUrl'] for i in data[0]['data']['dash']['audio']]
print(v_urls[0], a_urls[0])
v_suffix = 'flv'
a_suffix = 'mp3'
        # Skip download and merge if the merged file already exists
if not os.path.exists(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"network",
"data",
"video",
f"{video_name}.mp4")):
logger.info(f"开始合并视频和音频")
bili_video.download_file(v_urls[0], f'{video_name}.{v_suffix}')
bili_video.download_file(a_urls[0], f'{video_name}.{a_suffix}')
bili_video.merge_video_audio(f"{video_name}.{v_suffix}", f"{video_name}.{a_suffix}")
        # Stream the file data back to the client
logger.info(f"视频数据已存在!")
with open(
f'{os.path.join(os.path.dirname(os.path.abspath(__file__)), "network", "data", "video", f"{video_name}.mp4")}',
'rb') as f:
response = HttpResponse(f.read(), content_type='video/mp4')
response['Content-Disposition'] = 'attachment; filename="{}"'.format(f"{video_name}.mp4")
return response
return render(request, 'download_video.html', context)
def parse_video(request):
context = {
'message': 'success',
'data': [],
'code': 0
}
if request.method == 'POST':
url = request.POST.get("_bv")
if not utils.check_url(url):
context['message'] = 'url 不合法!'
context['code'] = -1
return render(request, 'download_video.html', context)
logger.info(url)
bv = utils.bv_get(url)
logger.info(f"bv,--->{bv}")
info = utils.get_info_by_bv(bv)
if info is None:
context.update({"message": "fail", "code": -1})
return render(request, 'download_video.html', context)
context.update({"data": info})
return render(request, 'download_video.html', context)
def enter_card(request):
if request.method == 'POST':
card_code = request.POST.get('card_code')
current_datetime = timezone.now()
try: | card = Card.objects.get(card_code=card_code) | 3 | 2023-12-14 10:14:24+00:00 | 12k |
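The views above lean heavily on `django.core.paginator.Paginator` with `get_page`, which clamps invalid or out-of-range page numbers instead of raising. A plain-Python sketch of that page-slicing arithmetic (illustrative only, not Django's implementation; the function name is made up):

```python
import math

def get_page(items, per_page, page_number):
    """Return (page_items, total_count, num_pages) with get_page-style clamping."""
    num_pages = max(1, math.ceil(len(items) / per_page))
    try:
        page = int(page_number)
    except (TypeError, ValueError):
        page = 1                          # non-numeric input falls back to page 1
    page = min(max(page, 1), num_pages)   # out-of-range pages clamp to the bounds
    start = (page - 1) * per_page
    return items[start:start + per_page], len(items), num_pages
```

With 35 danmaku and 15 per page, page 2 covers records 15–29 and a request for page 99 simply returns the last page, which is why the views can pass `request.POST.get('page')` through without extra validation.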
mjavadpur/Sadtalker_LongVideos | src/facerender/animate.py | [
{
"identifier": "HEEstimator",
"path": "src/facerender/modules/keypoint_detector.py",
"snippet": "class HEEstimator(nn.Module):\n \"\"\"\n Estimating head pose and expression.\n \"\"\"\n\n def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, num_bins=66, ... | import os
import cv2
import yaml
import numpy as np
import warnings
import safetensors
import safetensors.torch
import imageio
import torch
import torchvision
import webui # in webui
from skimage import img_as_ubyte
from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector
from src.facerender.modules.mapping import MappingNet
from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator
from src.facerender.modules.make_animation import make_animation
from pydub import AudioSegment
from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
from src.utils.paste_pic import paste_pic_stream
from src.utils.videoio import save_video_with_watermark | 9,008
| warnings.filterwarnings('ignore')
try:
    import webui  # in webui
    in_webui = True
except:
    in_webui = False
class AnimateFromCoeff():
def __init__(self, sadtalker_path, device):
with open(sadtalker_path['facerender_yaml']) as f:
config = yaml.safe_load(f)
generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'],
**config['model_params']['common_params'])
kp_extractor = KPDetector(**config['model_params']['kp_detector_params'],
**config['model_params']['common_params'])
he_estimator = HEEstimator(**config['model_params']['he_estimator_params'],
**config['model_params']['common_params'])
mapping = MappingNet(**config['model_params']['mapping_params'])
generator.to(device)
kp_extractor.to(device)
he_estimator.to(device)
mapping.to(device)
for param in generator.parameters():
param.requires_grad = False
for param in kp_extractor.parameters():
param.requires_grad = False
for param in he_estimator.parameters():
param.requires_grad = False
for param in mapping.parameters():
param.requires_grad = False
if sadtalker_path is not None:
if 'checkpoint' in sadtalker_path: # use safe tensor
self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None)
else:
self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
else:
raise AttributeError("Checkpoint should be specified for video head pose estimator.")
if sadtalker_path['mappingnet_checkpoint'] is not None:
self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping)
else:
raise AttributeError("Checkpoint should be specified for video head pose estimator.")
self.kp_extractor = kp_extractor
self.generator = generator
self.he_estimator = he_estimator
self.mapping = mapping
self.kp_extractor.eval()
self.generator.eval()
self.he_estimator.eval()
self.mapping.eval()
self.device = device
def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None,
kp_detector=None, he_estimator=None,
device="cpu"):
checkpoint = safetensors.torch.load_file(checkpoint_path)
if generator is not None:
x_generator = {}
for k,v in checkpoint.items():
if 'generator' in k:
x_generator[k.replace('generator.', '')] = v
generator.load_state_dict(x_generator)
if kp_detector is not None:
x_generator = {}
for k,v in checkpoint.items():
if 'kp_extractor' in k:
x_generator[k.replace('kp_extractor.', '')] = v
kp_detector.load_state_dict(x_generator)
if he_estimator is not None:
x_generator = {}
for k,v in checkpoint.items():
if 'he_estimator' in k:
x_generator[k.replace('he_estimator.', '')] = v
he_estimator.load_state_dict(x_generator)
return None
def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None,
kp_detector=None, he_estimator=None, optimizer_generator=None,
optimizer_discriminator=None, optimizer_kp_detector=None,
optimizer_he_estimator=None, device="cpu"):
checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
if generator is not None:
generator.load_state_dict(checkpoint['generator'])
if kp_detector is not None:
kp_detector.load_state_dict(checkpoint['kp_detector'])
if he_estimator is not None:
he_estimator.load_state_dict(checkpoint['he_estimator'])
if discriminator is not None:
try:
discriminator.load_state_dict(checkpoint['discriminator'])
except:
            print ('No discriminator in the state-dict. Discriminator will be randomly initialized')
if optimizer_generator is not None:
optimizer_generator.load_state_dict(checkpoint['optimizer_generator'])
if optimizer_discriminator is not None:
try:
optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
except RuntimeError as e:
                print ('No discriminator optimizer in the state-dict. Optimizer will not be initialized')
if optimizer_kp_detector is not None:
optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector'])
if optimizer_he_estimator is not None:
optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator'])
return checkpoint['epoch']
def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None,
optimizer_mapping=None, optimizer_discriminator=None, device='cpu'):
checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
if mapping is not None:
mapping.load_state_dict(checkpoint['mapping'])
if discriminator is not None:
discriminator.load_state_dict(checkpoint['discriminator'])
if optimizer_mapping is not None:
optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping'])
if optimizer_discriminator is not None:
optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
return checkpoint['epoch']
def generate(self, args, x, save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256):
source_image=x['source_image'].type(torch.FloatTensor)
source_semantics=x['source_semantics'].type(torch.FloatTensor)
target_semantics=x['target_semantics_list'].type(torch.FloatTensor)
source_image=source_image.to(self.device)
source_semantics=source_semantics.to(self.device)
target_semantics=target_semantics.to(self.device)
if 'yaw_c_seq' in x:
yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor)
yaw_c_seq = x['yaw_c_seq'].to(self.device)
else:
yaw_c_seq = None
if 'pitch_c_seq' in x:
pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor)
pitch_c_seq = x['pitch_c_seq'].to(self.device)
else:
pitch_c_seq = None
if 'roll_c_seq' in x:
roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor)
roll_c_seq = x['roll_c_seq'].to(self.device)
else:
roll_c_seq = None
frame_num = x['frame_num']
audio_path = x['audio_path']
video_name = x['video_name']
full_video_path, temp_dir = make_animation(args, audio_path, save_dir, video_name, img_size, crop_info, source_image, source_semantics, target_semantics,
self.generator, self.kp_extractor, self.he_estimator, self.mapping,
yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True)
#### paste back then enhancers
if enhancer:
video_name_enhancer = x['video_name'] + '_enhanced.mp4'
enhanced_path = os.path.join(save_dir, 'temp_'+video_name_enhancer)
av_path_enhancer = os.path.join(save_dir, video_name_enhancer)
return_path = av_path_enhancer
try:
enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
except: | enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer) | 7 | 2023-12-19 11:01:35+00:00 | 12k |
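`load_cpk_facevid2vid_safetensor` above splits one flat checkpoint into per-module state dicts by matching a key prefix and stripping it before calling `load_state_dict`. A sketch of that idea with plain dicts standing in for tensors; note the original matches by substring (`'generator' in k`), while this sketch uses a stricter `startswith` prefix match, which avoids accidentally capturing keys like `kp_extractor.generator_head.…`:

```python
def extract_submodule_state(checkpoint, prefix):
    """Collect checkpoint entries under `prefix` and strip it from the keys."""
    marker = prefix + "."
    return {
        k.replace(marker, "", 1): v     # strip only the leading occurrence
        for k, v in checkpoint.items()
        if k.startswith(marker)
    }
```

Each extracted dict can then be fed to the matching module's `load_state_dict`, exactly as the loader above does for `generator`, `kp_extractor`, and `he_estimator`.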
Angryrou/udao | udao/optimization/moo/weighted_sum.py | [
{
"identifier": "Objective",
"path": "udao/optimization/concepts/objective.py",
"snippet": "class Objective(Constraint):\n \"\"\"\n\n Parameters\n ----------\n name : str\n Name of the objective.\n minimize : bool\n Direction of the objective: if True, minimize, else maximiz... | import json
import numpy as np
import torch as th
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple
from ..concepts import Objective
from ..concepts.problem import MOProblem
from ..soo.mogd import MOGD
from ..soo.so_solver import SOSolver
from ..utils import moo_utils as moo_ut
from ..utils.exceptions import NoSolutionError
from ..utils.moo_utils import Point, get_default_device
from .mo_solver import MOSolver | 9,647 | ws: List[float],
allow_cache: bool = False,
normalize: bool = True,
device: Optional[th.device] = None,
) -> None:
self.device = device or get_default_device()
self.problem = problem
self.ws = ws
super().__init__(name="weighted_sum", function=self.function, minimize=True)
self._cache: Dict[str, th.Tensor] = {}
self.allow_cache = allow_cache
self.normalize = normalize
def _function(self, *args: Any, **kwargs: Any) -> th.Tensor:
hash_var = ""
if self.allow_cache:
hash_var = json.dumps(str(args) + str(kwargs))
if hash_var in self._cache:
return self._cache[hash_var]
objs: List[th.Tensor] = []
for objective in self.problem.objectives:
obj = objective(*args, **kwargs) * objective.direction
objs.append(obj.squeeze())
objs_tensor = th.vstack(objs).T
# shape (n_feasible_samples/grids, n_objs)
if self.allow_cache:
self._cache[hash_var] = objs_tensor
return objs_tensor
def function(self, *args: Any, **kwargs: Any) -> th.Tensor:
"""Sum of weighted normalized objectives"""
objs_tensor = self._function(*args, **kwargs)
if self.normalize:
objs_tensor = self._normalize_objective(objs_tensor)
return th.sum(objs_tensor * th.tensor(self.ws, device=self.device), dim=1)
def _normalize_objective(self, objs_array: th.Tensor) -> th.Tensor:
"""Normalize objective values to [0, 1]
Parameters
----------
        objs_array : th.Tensor
shape (n_feasible_samples/grids, n_objs)
Returns
-------
        th.Tensor
shape (n_feasible_samples/grids, n_objs)
Raises
------
NoSolutionError
if lower bounds of objective values are
higher than their upper bounds
"""
objs_min, objs_max = th.min(objs_array, 0).values, th.max(objs_array, 0).values
if th.any((objs_min - objs_max) > 0):
raise NoSolutionError(
"Cannot do normalization! Lower bounds of "
"objective values are higher than their upper bounds."
)
elif th.all((objs_min - objs_max) == 0):
return th.zeros_like(objs_array)
return (objs_array - objs_min) / (objs_max - objs_min)
def to(self, device: Optional[th.device] = None) -> "WeightedSumObjective":
"""Move objective to device"""
if device is None:
device = get_default_device()
self.device = device
for objective in self.problem.objectives:
objective.to(device)
for constraint in self.problem.constraints:
constraint.to(device)
self._cache = {k: v.to(device) for k, v in self._cache.items()}
return self
class WeightedSum(MOSolver):
"""
Weighted Sum (WS) algorithm for MOO
Parameters
----------
ws_pairs: np.ndarray,
weight settings for all objectives, of shape (n_weights, n_objs)
    so_solver: SOSolver,
        the single-objective solver used inside Weighted Sum
"""
@dataclass
class Params:
ws_pairs: np.ndarray
"""weight sets for all objectives, of shape (n_weights, n_objs)"""
so_solver: SOSolver
"""solver for SOO"""
normalize: bool = True
"""whether to normalize objective values to [0, 1] before applying WS"""
allow_cache: bool = False
"""whether to cache the objective values"""
device: Optional[th.device] = field(default_factory=get_default_device)
"""device on which to perform torch operations, by default available device."""
def __init__(
self,
params: Params,
):
super().__init__()
self.so_solver = params.so_solver
self.ws_pairs = params.ws_pairs
self.allow_cache = params.allow_cache
self.normalize = params.normalize
self.device = params.device
|
class WeightedSumObjective(Objective):
"""Weighted Sum Objective"""
def __init__(
self,
problem: MOProblem,
ws: List[float],
allow_cache: bool = False,
normalize: bool = True,
device: Optional[th.device] = None,
) -> None:
self.device = device or get_default_device()
self.problem = problem
self.ws = ws
super().__init__(name="weighted_sum", function=self.function, minimize=True)
self._cache: Dict[str, th.Tensor] = {}
self.allow_cache = allow_cache
self.normalize = normalize
def _function(self, *args: Any, **kwargs: Any) -> th.Tensor:
hash_var = ""
if self.allow_cache:
hash_var = json.dumps(str(args) + str(kwargs))
if hash_var in self._cache:
return self._cache[hash_var]
objs: List[th.Tensor] = []
for objective in self.problem.objectives:
obj = objective(*args, **kwargs) * objective.direction
objs.append(obj.squeeze())
objs_tensor = th.vstack(objs).T
# shape (n_feasible_samples/grids, n_objs)
if self.allow_cache:
self._cache[hash_var] = objs_tensor
return objs_tensor
def function(self, *args: Any, **kwargs: Any) -> th.Tensor:
"""Sum of weighted normalized objectives"""
objs_tensor = self._function(*args, **kwargs)
if self.normalize:
objs_tensor = self._normalize_objective(objs_tensor)
return th.sum(objs_tensor * th.tensor(self.ws, device=self.device), dim=1)
def _normalize_objective(self, objs_array: th.Tensor) -> th.Tensor:
"""Normalize objective values to [0, 1]
Parameters
----------
        objs_array : th.Tensor
shape (n_feasible_samples/grids, n_objs)
Returns
-------
        th.Tensor
shape (n_feasible_samples/grids, n_objs)
Raises
------
NoSolutionError
if lower bounds of objective values are
higher than their upper bounds
"""
objs_min, objs_max = th.min(objs_array, 0).values, th.max(objs_array, 0).values
if th.any((objs_min - objs_max) > 0):
raise NoSolutionError(
"Cannot do normalization! Lower bounds of "
"objective values are higher than their upper bounds."
)
elif th.all((objs_min - objs_max) == 0):
return th.zeros_like(objs_array)
return (objs_array - objs_min) / (objs_max - objs_min)
def to(self, device: Optional[th.device] = None) -> "WeightedSumObjective":
"""Move objective to device"""
if device is None:
device = get_default_device()
self.device = device
for objective in self.problem.objectives:
objective.to(device)
for constraint in self.problem.constraints:
constraint.to(device)
self._cache = {k: v.to(device) for k, v in self._cache.items()}
return self
class WeightedSum(MOSolver):
"""
Weighted Sum (WS) algorithm for MOO
Parameters
----------
ws_pairs: np.ndarray,
weight settings for all objectives, of shape (n_weights, n_objs)
    so_solver: SOSolver,
        the single-objective solver used inside Weighted Sum
"""
@dataclass
class Params:
ws_pairs: np.ndarray
"""weight sets for all objectives, of shape (n_weights, n_objs)"""
so_solver: SOSolver
"""solver for SOO"""
normalize: bool = True
"""whether to normalize objective values to [0, 1] before applying WS"""
allow_cache: bool = False
"""whether to cache the objective values"""
device: Optional[th.device] = field(default_factory=get_default_device)
"""device on which to perform torch operations, by default available device."""
def __init__(
self,
params: Params,
):
super().__init__()
self.so_solver = params.so_solver
self.ws_pairs = params.ws_pairs
self.allow_cache = params.allow_cache
self.normalize = params.normalize
self.device = params.device
| if self.allow_cache and isinstance(params.so_solver, MOGD): | 2 | 2023-12-20 09:10:42+00:00 | 12k |
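The `_normalize_objective`/`function` pair above reduces to a column-wise min-max scaling followed by a weighted sum. A minimal NumPy sketch of that computation (the helper names here are ours, not part of the repo):

```python
import numpy as np

def normalize_objectives(objs):
    """Column-wise min-max scaling to [0, 1], mirroring _normalize_objective."""
    lo, hi = objs.min(axis=0), objs.max(axis=0)
    if np.all(lo == hi):
        return np.zeros_like(objs)
    return (objs - lo) / (hi - lo)

def weighted_sum(objs, ws):
    """Scalarize normalized objectives with weights ws, mirroring function()."""
    return (normalize_objectives(objs) * np.asarray(ws)).sum(axis=1)

# two samples, two objectives
objs = np.array([[1.0, 10.0],
                 [3.0, 30.0]])
scores = weighted_sum(objs, [0.5, 0.5])
```

The torch version additionally flips signs via `objective.direction` so every column is a minimization before scaling.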
XLearning-SCU/2023-TPAMI-SMILE | Net.py | [
{
"identifier": "get_dist_release",
"path": "DistComput.py",
"snippet": "def get_dist_release(loader, dist_path):\r\n if not os.path.exists(dist_path):\r\n # loader = test_loader\r\n num_data = [10]\r\n with torch.no_grad():\r\n dist_list = [[] for i in range(len(num_d... | import math
import os
import time
import warnings
import numpy as np
import torch
import torchvision
import torch.nn.functional as F
import evaluate
import faiss
import scipy.io as sio
from torch import nn
from torch.autograd import Variable
from DistComput import get_dist_release
from _Utils.Calculator import get_nearest_k
from _Utils.Logs import update_log
from _Utils.Scatter import visualize2
from _Utils.Visualize import visualize, visual_matrix_console, visualize_image, plot_heat_map
from _Utils import TimeOperator, DirectoryOperator
from DataSetMaster.dataset import get_clusters
from classification import svm_classify
from evaluate import UMAP, evaluate2
from sklearn import metrics
from munkres import Munkres
from figures.ScatterMaster import visual_image_scatter
| 10,313 | # type_vec = type_vec[ind]
# visualize2(feature_vec=feature_vec, type_vec=type_vec, group_vec=group_vec,
# pred_vec=None,
# prefix=os.path.join('../', 'Visualization/E{:03d}N{:04d}'.format(epoch, len(type_vec))))
# visual_image_scatter(
# data_vec,
# feature_vec,
# group_vec,
# type_vec,
# )
raw_dataset = torchvision.datasets.ImageFolder(
'D:/VirtualMachine/Data/caltech-101/101_ObjectCategories',
transform=torchvision.transforms.Resize([256, 256])
)
# mat = sio.loadmat('D:/VirtualMachine/Data/Caltech101-all.mat')
# data = mat['X'][0][3:5]
# label = np.squeeze(mat['Y'])-1
raw_data_ind = np.ones(len(data_vec), dtype=int) * -1
class_num = len(np.unique(type_vec))
class_num_s = len(np.unique(raw_dataset.targets))
raw_dataset.targets = np.asarray(raw_dataset.targets) - 1
for t in range(class_num_s):
print('{: 4d} {: 4d} {: 4d} {: 4d}'.format(
t,
np.sum(t == type_vec),
np.sum(t == raw_dataset.targets),
np.sum(t == 0),
))
for t in np.unique(type_vec):
bank_inds = np.arange(len(raw_dataset.targets))[raw_dataset.targets == t]
raw_data_ind[t == type_vec] = np.concatenate([bank_inds, bank_inds])
# raw_data = raw_dataset[np.asarray(raw_data_ind, dtype=int)]
raw_data = np.asarray([np.asarray(raw_dataset[it][0]) for it in raw_data_ind])
np.savez(
os.path.join(args.resume.replace('.checkpoint', 'Raw2.npz')),
data_vec=raw_data,
feature_vec=feature_vec,
group_vec=group_vec,
type_vec=type_vec,
pred_adjusted=pred_adjusted,
)
return
if (epoch + 1) == epochs or (epoch + 1) % args.VisualFreq == 0:
met_mul2 = {}
if args.EvalMulti:
print('EvalMulti')
multi_modality_feature = np.concatenate(
[feature_vec[group_vec == view] for view in np.unique(group_vec)],
axis=1)
_, met_mul, _ = cluster_and_measure(
features=multi_modality_feature, types=type_vec[group_vec == 0],
groups=group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['Multi-' + nv] = v
if args.EvalMean:
print('EvalMean')
_, met_mul, _ = cluster_and_measure(
features=np.mean(
np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), types=type_vec[group_vec == 0], groups=group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['Mean-' + nv] = v
if args.EvalSingel0:
print('EvalSingel0')
_, met_mul, _ = cluster_and_measure(features=feature_vec_cluster[group_vec_cluster == 0],
types=type_vec_cluster[group_vec_cluster == 0],
groups=group_vec_cluster[group_vec_cluster == 0])
for nv, v in met_mul.items():
met_mul2['Singel0-' + nv] = v
if args.EvalSingel1:
print('EvalSingel1')
_, met_mul, _ = cluster_and_measure(features=feature_vec_cluster[group_vec_cluster == 1],
types=type_vec_cluster[group_vec_cluster == 1],
groups=group_vec_cluster[group_vec_cluster == 1] - 1)
for nv, v in met_mul.items():
met_mul2['Singel1-' + nv] = v
if args.EvalOriMean:
print('EvalOriMean')
mean_fea = np.mean(
np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)]),
axis=0
)
score = self.soft_ass(mean_fea, centroids.cpu().numpy())
pred_vec = np.argmax(score, axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['OriMean-' + nv] = v
if args.EvalOriScoreMean:
print('EvalOriScoreMean')
score = self.soft_ass(torch.from_numpy(feature_vec).cuda(), centroids).cpu().numpy()
pred_vec = np.argmax(np.mean(
np.asarray([score[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['OriScoreMean-' + nv] = v
if args.EvalOriPredMean:
print('EvalOriPredMean')
pred = torch.softmax(self.soft_ass(torch.from_numpy(feature_vec).cuda(), centroids) / 0.2,
dim=1).cpu().numpy()
pred_vec = np.argmax(np.mean(
np.asarray([pred[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['EvalOriPredMean-' + nv] = v
if args.EvalCla:
mv_f = np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)])
mv_gt = np.asarray([feature_vec_classification[group_vec == view] for view in np.unique(group_vec)])
for test_prop in [0.2, 0.5, 0.8]:
met_mul2['ClassificationACC{:.01f}'.format(test_prop)] = np.mean(
|
def show_distribution_ct(type_vec, group_vec, pred_vec, class_num, group_num):
v = np.zeros((class_num, class_num, group_num), dtype=int)
for t, c, g in zip(type_vec, pred_vec, group_vec):
v[t, c, g] += 1
visual_matrix_console(x=v)
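A tiny worked example of the count tensor built above: `v[t, c, g]` counts samples of ground-truth class `t` assigned to cluster `c` in view `g` (toy data, not from the repo):

```python
import numpy as np

# 4 samples: (type, predicted cluster, group/view)
type_vec = [0, 0, 1, 1]
pred_vec = [0, 1, 1, 1]
group_vec = [0, 1, 0, 1]
v = np.zeros((2, 2, 2), dtype=int)  # (class_num, class_num, group_num)
for t, c, g in zip(type_vec, pred_vec, group_vec):
    v[t, c, g] += 1
```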
def kmeans(feature_vec, class_num):
    d = feature_vec.shape[1]
    clus = faiss.Clustering(d, class_num)
    clus.verbose = False
    clus.niter = 300
    clus.nredo = 10
    # clus.spherical = True
    # if LimitKmeans:
    #     clus.max_points_per_centroid = 1000
    #     clus.min_points_per_centroid = 10
    res = faiss.StandardGpuResources()
    cfg = faiss.GpuIndexFlatConfig()
    cfg.useFloat16 = True
    cfg.device = 0
    index = faiss.GpuIndexFlatL2(res, d, cfg)
    clus.train(feature_vec, index)
    centroids = faiss.vector_to_array(clus.centroids).reshape(class_num, d)
    return centroids
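The FAISS call above is Lloyd's algorithm with GPU-accelerated L2 search. A dependency-free sketch of the same computation (deterministic init for illustration only; the real helper uses `nredo=10` random restarts):

```python
import numpy as np

def kmeans_numpy(x, k, iters=20):
    """Plain-NumPy Lloyd's k-means returning (k, d) centroids, like kmeans()."""
    # deterministic spread-out init (illustrative; FAISS restarts randomly)
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # squared-L2 assignment, as GpuIndexFlatL2 would compute
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(0)
    return centroids

# two well-separated blobs -> centroids land at (0, 0) and (10, 10)
x = np.concatenate([np.zeros((20, 2)), np.full((20, 2), 10.0)])
c = kmeans_numpy(x, 2)
```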
def show_distribution(cluster_vec, group_vec, class_num, group_num):
for it in np.arange(group_num):
print('{:4d}, '.format(it), end='')
print('')
cluster_group = torch.zeros((class_num, group_num), dtype=torch.int)
for i, j in zip(cluster_vec, group_vec):
cluster_group[i, j] += 1
# cluster_group = cluster_group[torch.argsort(torch.sum(cluster_group, dim=1))]
for line in cluster_group:
print('{:4d}: '.format(torch.sum(line)), end='')
for it in line:
print('{:4d}, '.format(it), end='')
print('')
def save_checkpoint(state, epoch):
"""
    Save a checkpoint of a model that has been trained for *epoch* epochs.
"""
filename = 'Epoch{:03d}.checkpoint'.format(epoch)
checkpoint_dir = os.path.join(
os.path.dirname(os.getcwd()),
'Checkpoints',
filename
)
DirectoryOperator.FoldOperator(directory=checkpoint_dir).make_fold()
if os.path.exists(checkpoint_dir):
        warnings.warn('Checkpoint exists and will be replaced. ({})'.format(checkpoint_dir))
print('Save check point into {}'.format(checkpoint_dir))
torch.save(state, checkpoint_dir)
def get_ffn(dims, last_layers=None, with_bn=False, drop_out=0):
layers = []
for ind in range(len(dims) - 1):
in_dim = dims[ind]
out_dim = dims[ind + 1]
layers.append(nn.Linear(in_dim, out_dim))
if with_bn:
layers.append(nn.BatchNorm1d(out_dim))
layers.append(nn.ReLU())
if drop_out:
layers.append(nn.Dropout(drop_out))
if last_layers is not None:
layers.extend(last_layers)
return nn.Sequential(*layers)
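`get_ffn` walks consecutive entries of `dims` to size each `nn.Linear`. The pairing logic in isolation (helper name is ours, for illustration):

```python
def layer_pairs(dims):
    """(in_dim, out_dim) for each Linear layer get_ffn creates."""
    return list(zip(dims[:-1], dims[1:]))

# e.g. the encoder stack [784, 1024, 1024, 512] yields three Linear layers
pairs = layer_pairs([784, 1024, 1024, 512])
```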
def get_cov(dims, strides, last_layers=None, with_bn=False, drop_out=0):
layers = []
for ind in range(len(dims) - 1):
in_dim = dims[ind]
out_dim = dims[ind + 1]
stride = strides[ind]
# layers.append(nn.Linear(in_dim, out_dim))
if stride >= 0:
layers.append(nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=stride, padding=1))
else:
layers.append(nn.ConvTranspose2d(
in_dim, out_dim, kernel_size=3, stride=-stride, padding=1, output_padding=0 if stride == -1 else 1))
if with_bn:
# layers.append(nn.BatchNorm1d(out_dim))
layers.append(nn.BatchNorm2d(out_dim))
layers.append(nn.ReLU())
if drop_out:
layers.append(nn.Dropout(drop_out))
if last_layers is not None:
layers.extend(last_layers)
return nn.Sequential(*layers)
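In `get_cov`, the sign of each stride entry selects between a down-sampling `Conv2d` and an up-sampling `ConvTranspose2d`, with `output_padding=1` only when the transposed stride exceeds 1. The dispatch rule on its own (helper name is ours):

```python
def conv_spec(stride):
    """Layer choice get_cov makes for one stride entry."""
    if stride >= 0:
        return ('Conv2d', dict(kernel_size=3, stride=stride, padding=1))
    return ('ConvTranspose2d',
            dict(kernel_size=3, stride=-stride, padding=1,
                 output_padding=0 if stride == -1 else 1))
```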
class Net(nn.Module):
def __init__(self, args, in_dims, class_num, group_num):
super(Net, self).__init__()
self.encoder_adaption = nn.ModuleList([
get_ffn([in_dims[i], 1024], with_bn=args.BatchNormType[0] == '1', drop_out=args.Dropout)
for i in range(group_num if args.GroupWiseLayer[0] == '1' else 1)])
self.encoder = nn.ModuleList([
get_ffn([1024, 1024, 512], with_bn=args.BatchNormType[1] == '1', drop_out=args.Dropout)
for _ in range(group_num if args.GroupWiseLayer[1] == '1' else 1)])
if args.representation_dim == 0:
args.representation_dim = class_num
self.class_num = class_num
self.group_num = group_num
self.pred_cac = None
self.pred_center_cac = None
if args.ElActivationType == 'None':
el_activation_ = []
elif args.ElActivationType == 'Normalize':
el_activation_ = []
elif args.ElActivationType == 'BnNormalize':
el_activation_ = [nn.BatchNorm1d(args.representation_dim)]
elif args.ElActivationType == 'BnReNormalize':
el_activation_ = [nn.BatchNorm1d(args.representation_dim), nn.ReLU()]
elif args.ElActivationType == 'BnRe':
el_activation_ = [nn.BatchNorm1d(args.representation_dim), nn.ReLU()]
else:
raise NotImplementedError('')
self.el_activation_ = el_activation_
self.encoder_linear = nn.ModuleList([
get_ffn([512, 256], with_bn=args.BatchNormType[2] == '1', drop_out=args.Dropout,
last_layers=[nn.Linear(256, args.representation_dim)] + self.el_activation_)
for _ in range(group_num if args.GroupWiseLayer[2] == '1' else 1)])
dec_in = args.representation_dim
if args.McDecoder:
dec_in *= group_num
self.dec_in = dec_in
self.decoder_linear = nn.ModuleList([
get_ffn([self.dec_in, 256, 512], with_bn=args.BatchNormType[3] == '1', drop_out=args.Dropout)
for _ in range(group_num if args.GroupWiseLayer[3] == '1' else 1)])
if args.ActivationType == 'None':
final_activation_ = []
elif args.ActivationType == 'Sigmoid':
final_activation_ = [nn.Sigmoid()]
elif args.ActivationType == 'Tanh':
final_activation_ = [nn.Tanh()]
else:
raise NotImplementedError('')
self.final_activation_ = final_activation_
self.decoder = nn.ModuleList([
get_ffn([512, 1024, 1024], with_bn=args.BatchNormType[4] == '1', drop_out=args.Dropout)
for _ in range(group_num if args.GroupWiseLayer[4] == '1' else 1)])
self.decoder_adaption = nn.ModuleList([
get_ffn([], last_layers=[nn.Linear(1024, in_dims[i])] + self.final_activation_)
for i in range(group_num if args.GroupWiseLayer[5] == '1' else 1)])
self.args = args
self.in_dims = in_dims
# def update_cluster_center(self, center):
# self.cluster_centers = F.normalize(torch.from_numpy(center), dim=1).cuda()
def forward(self, x, **kwargs):
return self.decode(self.encode([x]))
def encode(self, xs: list):
hs = []
for g, x in enumerate(xs):
if self.args.noise_type == 'None':
pass
elif self.args.noise_type == 'Drop':
x = x * (Variable(x.data.new(x.size()).normal_(0, 0.1)) < self.args.noise_weight).type_as(x)
elif self.args.noise_type == 'Add':
x = x + Variable(x.data.new(x.size()).normal_(0, self.args.noise_weight)).type_as(x)
else:
raise NotImplementedError('')
if len(x) != 0:
if len(x) == 1:
x = torch.concat([x, x])
# print(x.shape)
# x = x.view((len(x), -1))
# print(x.shape)
x = self.encoder_adaption[g if self.args.GroupWiseLayer[0] == '1' else 0](x)
x = self.encoder[g if self.args.GroupWiseLayer[1] == '1' else 0](x)
x = self.encoder_linear[g if self.args.GroupWiseLayer[2] == '1' else 0](x)
if len(x) == 1:
x = x[[0]]
if self.args.ElActivationType in ['Normalize', 'BnNormalize', 'BnReNormalize']:
x = F.normalize(x, dim=1)
else:
x = torch.zeros([0, self.args.representation_dim], device=torch.device('cuda:0'))
hs.append(x)
return hs
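`encode` optionally corrupts its inputs before the MLP: `Drop` builds a keep-mask by thresholding Gaussian draws, `Add` injects Gaussian noise of std `noise_weight`. A NumPy stand-in for the torch code above (function name is ours):

```python
import numpy as np

def corrupt(x, noise_type, noise_weight, rng):
    """Input corruption as in Net.encode, in NumPy for illustration."""
    if noise_type == 'None':
        return x
    if noise_type == 'Drop':
        # keep entries where a N(0, 0.1) draw falls below noise_weight
        return x * (rng.normal(0, 0.1, x.shape) < noise_weight)
    if noise_type == 'Add':
        return x + rng.normal(0, noise_weight, x.shape)
    raise NotImplementedError(noise_type)

rng = np.random.default_rng(0)
x = np.ones((4, 3))
```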
def soft_ass(self, h, centroids):
if self.args.ElActivationType in ['Normalize', 'BnNormalize', 'BnReNormalize']:
return h @ centroids.T
else:
dst = torch.cdist(h, centroids)
# return (torch.mean(dst) - dst) / (torch.amax(dst) - torch.amin(dst)) * 2
return -dst / 2
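`soft_ass` scores samples against centroids by inner product when features are L2-normalized (cosine similarity) and by negative half Euclidean distance otherwise; for unit-norm inputs both give the same ranking. A NumPy illustration (function name is ours):

```python
import numpy as np

def soft_assign(h, centroids, normalized):
    """Per-centroid scores as in Net.soft_ass."""
    if normalized:
        return h @ centroids.T  # cosine similarity for unit vectors
    d = np.sqrt(((h[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    return -d / 2               # smaller distance -> larger score

h = np.eye(2)          # two unit samples
centroids = np.eye(2)  # two unit centroids
```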
# def encode_class(self, hs):
# cs = []
# for h in hs:
# c = h @ self.cluster_centers.T
# cs.append(c)
# return cs
def decode(self, hs):
xs = []
for g, h in enumerate(hs):
if self.args.McDecoder:
h = torch.cat(hs, dim=1)
if len(h) != 0:
if len(h) == 1:
h = torch.concat([h, h])
h = self.decoder_linear[g if self.args.GroupWiseLayer[3] == '1' else 0](h)
h = self.decoder[g if self.args.GroupWiseLayer[4] == '1' else 0](h)
h = self.decoder_adaption[g if self.args.GroupWiseLayer[5] == '1' else 0](h)
if len(h) == 1:
h = h[[0]]
else:
h = torch.zeros([0, self.in_dims[g]], device=torch.device('cuda:0'))
xs.append(h)
return xs
def run(self, epochs, train_dataloader, test_dataloader, args):
# if args.loss_self_cons:
# clusters = get_clusters(args=args)
optimizer_g = torch.optim.Adam(
self.parameters(),
lr=args.LearnRate,
betas=(args.betas_a, args.betas_v),
weight_decay=args.WeightDecay
)
mse_loss = nn.MSELoss().cuda()
timer_all = TimeOperator.Timer()
timer_train = TimeOperator.Timer()
timer_save = TimeOperator.Timer()
ce_loss = nn.CrossEntropyLoss().cuda()
type_detail_shown = False
start_epoch = 0
if args.resume:
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'".format(args.resume))
checkpoint = torch.load(args.resume)
# if args.gpu is None:
# checkpoint = torch.load(args.resume)
# else:
# # Map model to be loaded to specified single gpu.
# loc = 'cuda:{}'.format(args.gpu)
# checkpoint = torch.load(args.resume, map_location=loc)
start_epoch = checkpoint['epoch']
self.load_state_dict(checkpoint['state_dict'])
optimizer_g.load_state_dict(checkpoint['optimizer']['optimizer_g'])
# self.__dict__ = checkpoint['self_dic']
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
# self.args = args
# warnings.warn('This is not equal to start from the beginning due to different rands states.')
#
else:
raise NotImplementedError("=> no checkpoint found at '{}'".format(args.resume))
if args.CodeTest:
args.train_epoch = start_epoch + 1
epochs = start_epoch + 1
best_acc = 0
for epoch in range(start_epoch, epochs):
if (epoch + 1) <= args.LearnRateWarm:
lr = args.LearnRate * (epoch + 1) / args.LearnRateWarm
else:
if args.LearnRateDecayType == 'None':
lr = args.LearnRate
elif args.LearnRateDecayType == 'Exp':
lr = args.LearnRate * ((1 + 10 * (epoch + 1 - args.LearnRateWarm) / (
args.train_epoch - args.LearnRateWarm)) ** -0.75)
elif args.LearnRateDecayType == 'Cosine':
lr = args.LearnRate * 0.5 * (1. + math.cos(
math.pi * (epoch + 1 - args.LearnRateWarm) / (args.train_epoch - args.LearnRateWarm)))
else:
raise NotImplementedError('args.LearnRateDecayType')
if lr != args.LearnRate:
def adjust_learning_rate(optimizer):
print('adjust_learning_rate: {}'.format(lr))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
adjust_learning_rate(optimizer_g)
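The warm-up plus decay schedule above can be written as a single function of the (1-based) epoch; a sketch mirroring the three `LearnRateDecayType` branches (function and parameter names are ours):

```python
import math

def lr_at(epoch, base_lr, warm, total_epochs, decay='Cosine'):
    """LR used at 1-based `epoch`, mirroring the schedule in Net.run."""
    if epoch <= warm:
        return base_lr * epoch / warm  # linear warm-up
    if decay == 'None':
        return base_lr
    if decay == 'Exp':
        return base_lr * (1 + 10 * (epoch - warm) / (total_epochs - warm)) ** -0.75
    if decay == 'Cosine':
        return base_lr * 0.5 * (1. + math.cos(
            math.pi * (epoch - warm) / (total_epochs - warm)))
    raise NotImplementedError(decay)
```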
timer_all_time = time.time()
# inf_t = time.time()
# print('start epoch {}'.format(epoch))
self.eval()
feature_vec, type_vec, group_vec = [], [], []
feature_vec_cluster = []
group_vec_cluster = []
feature_vec_classification = []
type_vec_cluster = []
data_vec = []
is_pair_all = []
timer_infer_data = TimeOperator.Timer()
rnmse_vec = [[], []] # mask = 0 1
with torch.no_grad():
inf_data_t = time.time()
for (fea0, fea1, class_labels0, class_labels1, mask, is_pair, index) in test_dataloader:
timer_infer_data.update(time.time() - inf_data_t)
# timer_infer_data.show(prefix='InferDataTime', total_count=len(test_dataloader),
# print_end_time=False)
fea0 = fea0.cuda()
fea1 = fea1.cuda()
if args.Rev:
h1, h0 = self.encode([fea0, fea1])
if args.SingleView != -1:
for v in range(len(mask[0])):
if v != 1 - args.SingleView:
mask[:, v] = 0
else:
h0, h1 = self.encode([fea0, fea1])
if args.SingleView != -1:
for v in range(len(mask[0])):
if v != args.SingleView:
mask[:, v] = 0
cluster_h0 = h0[mask[:, 0] == 1]
cluster_h1 = h1[mask[:, 1] == 1]
# if args.SingleView != -1:
# mask[:, args.SingleView] = 0
# # if args.SingleView == 0:
# # cluster_h1 = cluster_h1[[]]
# # class_labels1 = class_labels1[[]]
# # elif args.SingleView == 1:
# # class_labels0 = class_labels0[[]]
# # cluster_h0 = cluster_h0[[]]
# # else:
# # raise NotImplementedError('')
is_pair_all.extend(is_pair)
feature_vec_cluster.extend(torch.cat([cluster_h0, cluster_h1]).detach().cpu().numpy())
group_vec_cluster.extend(torch.concat((torch.zeros(len(cluster_h0), dtype=torch.int),
torch.ones(len(cluster_h1), dtype=torch.int))).numpy())
type_vec_cluster.extend(torch.concat((class_labels0[mask[:, 0] == 1],
class_labels1[mask[:, 1] == 1])).numpy())
feature_vec_classification.extend(torch.cat([h0, h1]).detach().cpu().numpy())
if (epoch + 1) == epochs or (epoch + 1) % args.VisualFreq == 0:
if torch.sum(torch.logical_not(torch.logical_or(mask[:, 1], mask[:, 0]))):
                        raise NotImplementedError('Found a sample pair with both modalities missing')
if args.reFill == 'Copy':
if torch.sum(mask[:, 0] == 0):
h0[mask[:, 0] == 0] = h1[mask[:, 0] == 0]
if torch.sum(mask[:, 1] == 0):
h1[mask[:, 1] == 0] = h0[mask[:, 1] == 0]
elif args.reFill == 'Center':
# raise NotImplementedError('')
                        if self.pred_center_cac is None:
                            warnings.warn('self.pred_center_cac is None')
else:
centors = torch.zeros((len(mask), 2, len(self.pred_center_cac[0]))).cuda()
centors[mask[:, 0] == 1, 0] = self.pred_center_cac[
self.pred_cac[:torch.sum(mask[:, 0] == 1)]]
centors[mask[:, 1] == 1, 1] = self.pred_center_cac[
self.pred_cac[torch.sum(mask[:, 0] == 1):]]
if torch.sum(mask[:, 0] == 0):
h0[mask[:, 0] == 0] = centors[mask[:, 0] == 0, 1]
if torch.sum(mask[:, 1] == 0):
h1[mask[:, 1] == 0] = centors[mask[:, 1] == 0, 0]
                    elif args.reFill == 'KnnMapMean':
                        if torch.sum(mask[:, 0] == 0):
                            nearest = get_nearest_k(h1[mask[:, 0] == 0], h1[is_pair], args.reAlignK)
                            h0p = h0[is_pair]
                            # impute the missing view-0 features from the view-0 side of the nearest paired samples
                            h0[mask[:, 0] == 0] = torch.cat([torch.mean(h0p[ns], dim=0) for ns in nearest])
                        if torch.sum(mask[:, 1] == 0):
                            nearest = get_nearest_k(h0[mask[:, 1] == 0], h0[is_pair], args.reAlignK)
                            h1p = h1[is_pair]
                            h1[mask[:, 1] == 0] = torch.cat([torch.mean(h1p[ns], dim=0) for ns in nearest])
# raise NotImplementedError('')
elif args.reFill == 'KnnMean':
                        # Re-alignment: xi1 unchanged; replace xi2 with the mean of the k view-2 points nearest to xi1
if torch.sum(mask[:, 1] == 0):
hs0 = h0[mask[:, 1] == 0]
he1 = h1[mask[:, 1] == 1]
nearest = get_nearest_k(hs0, he1, args.reAlignK)
# nearest = torch.argsort(torch.cdist(hs0.cpu(), he1.cpu()), dim=1)[:, :args.reAlignK]
h1[mask[:, 1] == 0] = torch.cat([torch.mean(he1[ns], dim=0) for ns in nearest])
# class_labels1[mask[:, 1] == 0] = class_labels1[mask[:, 1] == 1][nearest[:, 0]]
if torch.sum(mask[:, 0] == 0):
hs1 = h1[mask[:, 0] == 0]
he0 = h0[mask[:, 0] == 1]
nearest = get_nearest_k(hs1, he0, args.reAlignK)
# nearest = torch.argsort(torch.cdist(hs1.cpu(), he0.cpu()), dim=1)[:, :args.reAlignK]
h0[mask[:, 0] == 0] = torch.cat([torch.mean(he0[ns], dim=0) for ns in nearest])
# class_labels0[mask[:, 0] == 0] = class_labels0[mask[:, 0] == 1][nearest[:, 0]]
###############################################################
                        # Missing-view completion: xi2 = mean(of the k view-2 points nearest to xi1)
# fill_num = k
# C = euclidean_dist(h0, h1)
# row_idx = C.argsort()
# col_idx = (C.t()).argsort()
# # Mij denotes the flag of i-th sample in view 0 and j-th sample in view 1
# M = torch.logical_and((mask[:, 0].repeat(test_num, 1)).t(), mask[:, 1].repeat(test_num, 1))
# for i in range(test_num):
# idx0 = col_idx[i, :][
# M[col_idx[i, :], i]] # idx for view 0 to sort and find the non-missing neighbors
# idx1 = row_idx[i, :][
# M[i, row_idx[i, :]]] # idx for view 1 to sort and find the non-missing neighbors
# if len(idx1) != 0 and len(idx0) == 0: # i-th sample in view 1 is missing
# avg_fill = h1[idx1[0:fill_num], :].sum(dim=0) / fill_num
# cnt += (class_labels1[idx1[0:fill_num]] == class_labels1[i]).sum()
# missing_cnt += 1
# recover_out0[i, :] = h0[i, :]
# recover_out1[i, :] = avg_fill # missing
# elif len(idx0) != 0 and len(idx1) == 0:
# avg_fill = h0[idx0[0:fill_num], :].sum(dim=0) / fill_num
# cnt += (class_labels0[idx0[0:fill_num]] == class_labels0[i]).sum()
# missing_cnt += 1
# recover_out0[i, :] = avg_fill # missing
# recover_out1[i, :] = h1[i, :]
# elif len(idx0) != 0 and len(idx1) != 0:
# recover_out0[i, :] = h0[i, :]
# recover_out1[i, :] = h1[i, :]
# else:
# raise Exception('error')
# if setting == 1:
# align_out0.extend((recover_out0.cpu()).numpy())
# align_out1.extend((recover_out1.cpu()).numpy())
# continue
#
else:
raise NotImplementedError('')
to_realign = torch.logical_and(is_pair == 0, torch.logical_and(mask[:, 1], mask[:, 0]))
if args.reAlign == 'KnnMean':
                        # Re-alignment: xi1 unchanged; replace xi2 with the mean of the k view-2 points nearest to xi1
if torch.sum(to_realign):
ha1 = h1[to_realign]
nearest = get_nearest_k(h0[to_realign], ha1, args.reAlignK)
# dist = torch.cdist(h0[to_realign].cpu(), ha1.cpu())
# nearest = torch.argsort(dist, dim=1)[:, :args.reAlignK]
h1[to_realign] = torch.cat([torch.mean(ha1[ns], dim=0) for ns in nearest])
# class_labels1[is_pair == 0] = class_labels1[is_pair == 0][nearest[:, 0]]
elif args.reAlign == 'Copy':
if torch.sum(to_realign):
h1[to_realign] = h0[to_realign]
# class_labels1[is_pair == 0] = class_labels0[is_pair == 0]
elif args.reAlign == 'KnnMapMean':
if torch.sum(to_realign):
targ_v1 = h1[is_pair]
nearest = get_nearest_k(h0[to_realign], h0[is_pair], args.reAlignK)
h1[to_realign] = torch.cat([torch.mean(targ_v1[ns], dim=0) for ns in nearest])
# class_labels1[is_pair == 0] = ...
elif args.reAlign == 'Ignore':
pass
else:
raise NotImplementedError('')
if args.Rev:
fea0_rec, fea1_rec = self.decode([h1, h0])
else:
fea0_rec, fea1_rec = self.decode([h0, h1])
# if len(fea0_rec[0]) == len(fea1_rec[0]):
# fea_rec = torch.concat([fea0_rec, fea1_rec])
# fea = torch.concat([fea0, fea1])
# mask_c = torch.concat([mask[:, 0], mask[:, 1]])
# if torch.sum(mask_c == 0):
# rnmse_vec[0].extend(
# evaluate.get_rnmse(xs_hat=fea_rec[mask_c == 0], xs=fea[mask_c == 0]).cpu().numpy())
# if torch.sum(mask_c == 1):
# rnmse_vec[1].extend(
# evaluate.get_rnmse(xs_hat=fea_rec[mask_c == 1], xs=fea[mask_c == 1]).cpu().numpy())
# else:
# if torch.sum(mask == 0):
# n0_v0 = evaluate.get_rnmse(
# xs_hat=fea0_rec[mask[:, 0] == 0], xs=fea0[mask[:, 0] == 0]).cpu().numpy()
# n0_v1 = evaluate.get_rnmse(
# xs_hat=fea1_rec[mask[:, 1] == 0], xs=fea1[mask[:, 1] == 0]).cpu().numpy()
# rnmse_vec[0].extend(n0_v0)
# rnmse_vec[0].extend(n0_v1)
# if torch.sum(mask == 1):
# n1_v0 = evaluate.get_rnmse(
# xs_hat=fea0_rec[mask[:, 0] == 1], xs=fea0[mask[:, 0] == 1]).cpu().numpy()
# n1_v1 = evaluate.get_rnmse(
# xs_hat=fea1_rec[mask[:, 1] == 1], xs=fea1[mask[:, 1] == 1]).cpu().numpy()
# rnmse_vec[1].extend(n1_v0)
# rnmse_vec[1].extend(n1_v1)
g = torch.concat((torch.zeros(len(fea0), device=fea0.device, dtype=torch.int),
torch.ones(len(fea1), device=fea0.device, dtype=torch.int)))
h = torch.cat([h0, h1]).detach().cpu().numpy()
feature_vec.extend(h)
data_vec.extend(torch.cat([fea0, fea1]).detach().cpu().numpy())
group_vec.extend(g.cpu().numpy())
type_vec.extend(torch.concat((class_labels0, class_labels1)).numpy())
inf_data_t = time.time()
feature_vec = np.array(feature_vec)
data_vec = np.array(data_vec)
feature_vec_cluster = np.array(feature_vec_cluster)
is_pair_all = np.array(is_pair_all)
feature_vec_classification = np.array(feature_vec_classification)
group_vec = np.array(group_vec)
group_vec_cluster = np.array(group_vec_cluster)
type_vec = np.array(type_vec)
type_vec_cluster = np.array(type_vec_cluster)
rnmse_vec[0] = np.array(rnmse_vec[0])
rnmse_vec[1] = np.array(rnmse_vec[1])
kmeans_time = TimeOperator.Timer()
if args.ShowReconstruct:
if args.dataset == 'MNISTUSPS':
                dims = [np.prod(d.data.shape[1:]) for d in test_dataloader.dataset.datasets]
data_list = [np.asarray(it.data, dtype=np.float32) for it in test_dataloader.dataset.datasets]
Y = test_dataloader.dataset.datasets[0].targets
else:
dims = [d.shape[1] for d in test_dataloader.dataset.data]
data_list = [np.asarray(it, dtype=np.float32) for it in test_dataloader.dataset.data]
Y = test_dataloader.dataset.class_labels0
mask = test_dataloader.dataset.mask
n_per_cat = 10
rec0, rec1 = self.decode([
torch.from_numpy(feature_vec[group_vec == 0]).cuda(),
torch.from_numpy(feature_vec[group_vec == 1]).cuda()])
rec0 = rec0.detach().cpu().numpy()
rec1 = rec1.detach().cpu().numpy()
show_img = np.asarray([])
inds_map = np.asarray([])
for v in range(2):
col = np.asarray([])
inds_map_col = np.asarray([])
for y in range(10):
inds = np.arange(len(Y))[
np.logical_and(np.logical_and(mask[:, v] == 1, mask[:, 1 - v] == 0), Y == y)
]
np.random.shuffle(inds)
assert len(inds) >= n_per_cat
inds = inds[:n_per_cat]
raw_imgs = data_list[v][inds]
missing_imgs = data_list[1 - v][inds]
rec_imgs = [rec0, rec1][v][inds]
rec_imgs_miss = [rec0, rec1][1 - v][inds]
pack = np.asarray(
[raw_imgs, rec_imgs, missing_imgs, rec_imgs_miss]).reshape([-1, n_per_cat, 28, 28])
if len(col):
col = np.concatenate([col, pack], axis=0)
else:
col = pack
if len(inds_map_col):
inds_map_col = np.concatenate([inds_map_col, inds.reshape([1, -1])], axis=0)
else:
inds_map_col = inds.reshape([1, -1])
if len(show_img):
show_img = np.concatenate([show_img, col], axis=1)
else:
show_img = col
if len(inds_map):
inds_map = np.concatenate([inds_map, inds_map_col], axis=1)
else:
inds_map = inds_map_col
plot_heat_map(inds_map, show=True, fig_path='/xlearning/pengxin/Temp/MissingRecIM.svg')
visualize_image(show_img, show=True, fig_path='/xlearning/pengxin/Temp/MissingRec.svg')
selected_ind = [
[8, 2, 8, 9, 7, 2, 5, 9, 9, 9],
[0, 2, 2, 3, 5, 7, 7, 9, 7, 0],
]
# ToMouxin
inds_to_mouxin = [
[im[si] for im, si in zip(inds_map[:, :n_per_cat], selected_ind[0])],
[im[si] for im, si in zip(inds_map[:, n_per_cat:], selected_ind[1])],
]
re_dt = np.load(
'/xlearning/pengxin/Checkpoints/MultiClustering/RunSets/230105/IMvC_RunSet0114_Ablation_FakeSampleWise/ --QuickConfig X50C50 --dataset MNISTUSPS --loss_sim_contras 0.02 --seed 1998/SampleCache/Np.npz')
np.savez('/xlearning/pengxin/Temp/MNISTUSPS_show.npz',
feature_vec=np.asarray([
re_dt['d0_data'][inds_to_mouxin[0]],
re_dt['d1_data'][inds_to_mouxin[1]]
]))
selected_ind_global = np.concatenate(
(np.asarray(selected_ind[0]).reshape([-1, 1]),
np.asarray(selected_ind[1]).reshape([-1, 1]) + n_per_cat),
axis=1
)
show_img_final = np.concatenate(
[show_img[4 * i:4 * i + 4, selected_ind_global[i]] for i in range(len(selected_ind_global))],
axis=1
)[:, [i * 2 for i in range(10)] + [i * 2 + 1 for i in range(10)]]
visualize_image(show_img_final, show=True, fig_path='/xlearning/pengxin/Temp/MissingRecFinal.svg')
return
def cluster_and_measure(features, types, groups, row_pred=False):
kst = time.time()
centroids = torch.from_numpy(kmeans(features, self.class_num))
if args.ElActivationType in ['Normalize', 'BnNormalize', 'BnReNormalize']:
centroids = F.normalize(centroids, dim=1)
pred_vec = np.argmax(self.soft_ass(torch.from_numpy(features), centroids).numpy(), axis=1)
pred_adjusted, met = evaluate2(features, pred_vec, types, groups)
kmeans_time.update(time.time() - kst)
kmeans_time.show(prefix='kmeans_time')
if row_pred:
return pred_vec, pred_adjusted, met, centroids.cuda()
else:
return pred_adjusted, met, centroids.cuda()
if not (args.CodeTest and not args.EvalOriMean and not args.EvalOriScoreMean and not args.EvalOriPredMean):
            print('EvalSingel-1')
pred_vec, pred_adjusted, met, centroids = cluster_and_measure(
features=feature_vec_cluster, types=type_vec_cluster, groups=group_vec_cluster, row_pred=True)
self.pred_cac = pred_vec
self.pred_center_cac = centroids
else:
met = {}
pred_adjusted = None
centroids = None
if args.ShowClustering:
# sub_sample = args.DrawMax
# if len(feature_vec) > sub_sample * 2:
# ind = np.arange(int(len(feature_vec) // 2))
# np.random.shuffle(ind)
# ind = ind[:sub_sample]
# ind = np.concatenate((ind, ind + int(len(feature_vec) // 2)))
# feature_vec = feature_vec[ind]
# group_vec = group_vec[ind]
# type_vec = type_vec[ind]
# visualize2(feature_vec=feature_vec, type_vec=type_vec, group_vec=group_vec,
# pred_vec=None,
# prefix=os.path.join('../', 'Visualization/E{:03d}N{:04d}'.format(epoch, len(type_vec))))
# visual_image_scatter(
# data_vec,
# feature_vec,
# group_vec,
# type_vec,
# )
raw_dataset = torchvision.datasets.ImageFolder(
'D:/VirtualMachine/Data/caltech-101/101_ObjectCategories',
transform=torchvision.transforms.Resize([256, 256])
)
# mat = sio.loadmat('D:/VirtualMachine/Data/Caltech101-all.mat')
# data = mat['X'][0][3:5]
# label = np.squeeze(mat['Y'])-1
raw_data_ind = np.ones(len(data_vec), dtype=int) * -1
class_num = len(np.unique(type_vec))
class_num_s = len(np.unique(raw_dataset.targets))
raw_dataset.targets = np.asarray(raw_dataset.targets) - 1
for t in range(class_num_s):
print('{: 4d} {: 4d} {: 4d} {: 4d}'.format(
t,
np.sum(t == type_vec),
np.sum(t == raw_dataset.targets),
np.sum(t == 0),
))
for t in np.unique(type_vec):
bank_inds = np.arange(len(raw_dataset.targets))[raw_dataset.targets == t]
raw_data_ind[t == type_vec] = np.concatenate([bank_inds, bank_inds])
# raw_data = raw_dataset[np.asarray(raw_data_ind, dtype=int)]
raw_data = np.asarray([np.asarray(raw_dataset[it][0]) for it in raw_data_ind])
np.savez(
os.path.join(args.resume.replace('.checkpoint', 'Raw2.npz')),
data_vec=raw_data,
feature_vec=feature_vec,
group_vec=group_vec,
type_vec=type_vec,
pred_adjusted=pred_adjusted,
)
return
if (epoch + 1) == epochs or (epoch + 1) % args.VisualFreq == 0:
met_mul2 = {}
if args.EvalMulti:
print('EvalMulti')
multi_modality_feature = np.concatenate(
[feature_vec[group_vec == view] for view in np.unique(group_vec)],
axis=1)
_, met_mul, _ = cluster_and_measure(
features=multi_modality_feature, types=type_vec[group_vec == 0],
groups=group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['Multi-' + nv] = v
if args.EvalMean:
print('EvalMean')
_, met_mul, _ = cluster_and_measure(
features=np.mean(
np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), types=type_vec[group_vec == 0], groups=group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['Mean-' + nv] = v
if args.EvalSingel0:
print('EvalSingel0')
_, met_mul, _ = cluster_and_measure(features=feature_vec_cluster[group_vec_cluster == 0],
types=type_vec_cluster[group_vec_cluster == 0],
groups=group_vec_cluster[group_vec_cluster == 0])
for nv, v in met_mul.items():
met_mul2['Singel0-' + nv] = v
if args.EvalSingel1:
print('EvalSingel1')
_, met_mul, _ = cluster_and_measure(features=feature_vec_cluster[group_vec_cluster == 1],
types=type_vec_cluster[group_vec_cluster == 1],
groups=group_vec_cluster[group_vec_cluster == 1] - 1)
for nv, v in met_mul.items():
met_mul2['Singel1-' + nv] = v
if args.EvalOriMean:
print('EvalOriMean')
mean_fea = np.mean(
np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)]),
axis=0
)
score = self.soft_ass(mean_fea, centroids.cpu().numpy())
pred_vec = np.argmax(score, axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['OriMean-' + nv] = v
if args.EvalOriScoreMean:
print('EvalOriScoreMean')
score = self.soft_ass(torch.from_numpy(feature_vec).cuda(), centroids).cpu().numpy()
pred_vec = np.argmax(np.mean(
np.asarray([score[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['OriScoreMean-' + nv] = v
if args.EvalOriPredMean:
print('EvalOriPredMean')
pred = torch.softmax(self.soft_ass(torch.from_numpy(feature_vec).cuda(), centroids) / 0.2,
dim=1).cpu().numpy()
pred_vec = np.argmax(np.mean(
np.asarray([pred[group_vec == view] for view in np.unique(group_vec)]),
axis=0
), axis=1)
_, met_mul = evaluate2(None, pred_vec, type_vec[group_vec == 0], group_vec[group_vec == 0])
for nv, v in met_mul.items():
met_mul2['EvalOriPredMean-' + nv] = v
if args.EvalCla:
mv_f = np.asarray([feature_vec[group_vec == view] for view in np.unique(group_vec)])
mv_gt = np.asarray([feature_vec_classification[group_vec == view] for view in np.unique(group_vec)])
for test_prop in [0.2, 0.5, 0.8]:
met_mul2['ClassificationACC{:.01f}'.format(test_prop)] = np.mean(
| [svm_classify(
| 11 | 2023-12-21 08:50:36+00:00 | 12k |
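The EvalOriScoreMean branch above fuses views by averaging the per-view score matrices before taking the argmax per sample. A minimal torch-free sketch of that fusion step (the toy scores and the `fuse_scores` name are illustrative, not from the repo):

```python
def fuse_scores(per_view_scores):
    """Average scores across views, then predict the argmax cluster per sample."""
    n_views = len(per_view_scores)
    n_clusters = len(per_view_scores[0][0])
    preds = []
    for i in range(len(per_view_scores[0])):
        # mean score vector over all views for sample i
        mean_row = [sum(view[i][k] for view in per_view_scores) / n_views
                    for k in range(n_clusters)]
        preds.append(max(range(n_clusters), key=mean_row.__getitem__))
    return preds

view0 = [[0.9, 0.1], [0.2, 0.8]]
view1 = [[0.6, 0.4], [0.7, 0.3]]
assert fuse_scores([view0, view1]) == [0, 1]
```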
botcs/wolfson-scheduler | tests/test_solver.py | [
{
"identifier": "unravel_indices",
"path": "solver.py",
"snippet": "def unravel_indices(indices, shape):\n coord = []\n\n for dim in reversed(shape):\n coord.append(indices % dim)\n indices = indices // dim\n\n coord = torch.stack(coord[::-1], dim=-1)\n\n return coord"
},
{... | import torch
import unittest
import math
from unittest.mock import patch
from solver import (
unravel_indices,
generalized_outer_addition,
compute_variances,
get_max_numel,
check_matrix_fit_and_num_chunks,
convert_property_to_categorical,
extract_best_assignment,
get_no_overlap_inds,
generate_binary_matrices,
eliminate_invalid_boats,
generate_valid_assignments,
evaluate_skill_variance,
evaluate_num_preferred_outings,
evaluate_assignments_per_week,
permute_top_assignments,
) | 8,855 | assignments_per_week = torch.randint(0, 2, (3, 4, 5), dtype=torch.uint8)
total_score = torch.rand(4, 4, 4) # Mock score tensor for 3 outings
# Expected output shape
expected_shape = (3, 1, 5)
result = extract_best_assignment(assignments_per_week, total_score)
self.assertEqual(result.shape, expected_shape)
def test_edge_case_single_outing(self):
assignments_per_week = torch.randint(0, 2, (1, 4, 5), dtype=torch.uint8)
total_score = torch.rand(4,)
expected_shape = (1, 1, 5)
result = extract_best_assignment(assignments_per_week, total_score)
self.assertEqual(result.shape, expected_shape)
def test_output_type(self):
assignments_per_week = torch.randint(0, 2, (3, 4, 5), dtype=torch.uint8)
total_score = torch.rand(4, 4, 4)
result = extract_best_assignment(assignments_per_week, total_score)
self.assertIsInstance(result, torch.Tensor)
self.assertTrue(result.dtype, torch.uint8)
def test_correctness_of_assignment_extraction(self):
# Mock data for 3 outings with 4 combinations each
assignments_per_week = torch.tensor([
[[0, 0], [0, 1], [1, 0], [1, 1]], # Outing 1
[[0, 0], [0, 1], [1, 0], [1, 1]], # Outing 2
[[0, 0], [0, 1], [1, 0], [1, 1]] # Outing 3
], dtype=torch.uint8)
# Mock total scores where the best scores are known
# Assuming the best scores are for the combinations [1, 0, 3] for outings [1, 2, 3]
total_score = torch.zeros((4, 4, 4))
total_score[1, 0, 3] = 1 # Highest score
# Expected best assignments for each outing
expected_assignments = torch.tensor([
[[0, 1]], # Outing 1
[[0, 0]], # Outing 2
[[1, 1]] # Outing 3
], dtype=torch.uint8) # Add dimension to match the expected output shape
result = extract_best_assignment(assignments_per_week, total_score)
self.assertTrue(torch.equal(result, expected_assignments))
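`extract_best_assignment` pairs an argmax over the flattened joint score tensor with `unravel_indices` to get one combination index per outing. The same selection rule in plain Python, with nested lists standing in for the tensor (names are illustrative):

```python
def best_joint_assignment(total_score, shape):
    """Argmax over a flattened nested-list score tensor, unraveled to one
    combination index per outing."""
    flat = []
    def walk(x):
        if isinstance(x, list):
            for y in x:
                walk(y)
        else:
            flat.append(x)
    walk(total_score)
    best = max(range(len(flat)), key=flat.__getitem__)
    coords = []
    for dim in reversed(shape):
        coords.append(best % dim)
        best //= dim
    return coords[::-1]

# combination 1 for outing 0 and combination 0 for outing 1 score highest
assert best_joint_assignment([[0.1, 0.2], [0.9, 0.3]], (2, 2)) == [1, 0]
```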
class TestGetNoOverlapInds(unittest.TestCase):
def test_no_overlap(self):
A = torch.tensor([[1, 0], [0, 1]])
B = torch.tensor([[0, 1], [1, 0]])
expected_result = torch.tensor([[0, 0], [1, 1]])
result = get_no_overlap_inds(A, B)
self.assertTrue(torch.equal(result, expected_result))
def test_partial_overlap(self):
A = torch.tensor([[1, 1], [0, 1]])
B = torch.tensor([[1, 0], [0, 1]])
expected_result = torch.tensor([[1, 0]])
result = get_no_overlap_inds(A, B)
self.assertTrue(torch.equal(result, expected_result))
def test_complete_overlap(self):
A = torch.tensor([[1, 1], [1, 1]])
B = torch.tensor([[1, 1], [1, 1]])
expected_result = torch.empty((0, 2), dtype=torch.int64)
result = get_no_overlap_inds(A, B)
self.assertTrue(torch.equal(result, expected_result))
def test_different_sizes(self):
A = torch.tensor([[1, 1, 0, 0], [0, 1, 1, 0]])
B = torch.tensor([[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
expected_result = torch.tensor([[1, 2]])
result = get_no_overlap_inds(A, B)
self.assertTrue(torch.equal(result, expected_result))
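`get_no_overlap_inds` looks for row pairs whose element-wise product is all zero. For small matrices the same check can be done with integer bitmasks — a stdlib sketch of the semantics the tests above exercise, not the repo's matrix-product implementation:

```python
def no_overlap_pairs(A, B):
    """Return (i, j) index pairs where row A[i] and row B[j] share no 1s,
    using integer bitmasks instead of a matrix product."""
    masks_a = [int("".join(map(str, row)), 2) for row in A]
    masks_b = [int("".join(map(str, row)), 2) for row in B]
    return [(i, j)
            for i, ma in enumerate(masks_a)
            for j, mb in enumerate(masks_b)
            if ma & mb == 0]

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
assert no_overlap_pairs(A, B) == [(0, 0), (1, 1)]
```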
class TestGenerateBinaryMatrices(unittest.TestCase):
def test_correct_matrix_generation(self):
num_rowers = 4
boat_sizes = [2, 3]
expected_combinations = [math.comb(num_rowers, boat_size) for boat_size in boat_sizes]
result_matrices = generate_binary_matrices(num_rowers, boat_sizes)
for i, M in enumerate(result_matrices):
self.assertEqual(M.shape[0], expected_combinations[i]) # Correct number of combinations
self.assertEqual(M.shape[1], num_rowers) # Correct number of columns
self.assertTrue(torch.all((M.sum(axis=1) == boat_sizes[i]).logical_or(M.sum(axis=1) == 0))) # Correct boat sizes
def test_different_rower_and_boat_sizes(self):
num_rowers = 5
boat_sizes = [1, 4]
result_matrices = generate_binary_matrices(num_rowers, boat_sizes)
for M, boat_size in zip(result_matrices, boat_sizes):
self.assertEqual(M.shape, (math.comb(num_rowers, boat_size), num_rowers))
def test_output_type(self):
num_rowers = 3
boat_sizes = [2]
result_matrices = generate_binary_matrices(num_rowers, boat_sizes)
for M in result_matrices:
self.assertIsInstance(M, torch.Tensor)
self.assertTrue(M.dtype, torch.bool)
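The binary matrices these tests expect are membership indicators for every `boat_size`-subset of rowers, which `itertools.combinations` produces directly. A stdlib sketch (`binary_rows` is an illustrative name, not the repo's function):

```python
from itertools import combinations
from math import comb

def binary_rows(num_rowers, boat_size):
    """One 0/1 row per way of seating `boat_size` rowers out of `num_rowers`."""
    rows = []
    for seats in combinations(range(num_rowers), boat_size):
        rows.append([1 if r in seats else 0 for r in range(num_rowers)])
    return rows

rows = binary_rows(4, 2)
assert len(rows) == comb(4, 2)          # 6 combinations
assert all(sum(r) == 2 for r in rows)   # each boat seats exactly 2
```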
class TestEliminateInvalidBoats(unittest.TestCase):
def test_no_elimination_of_valid_boats(self):
binary_matrix = torch.tensor([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
rower_sides = torch.tensor([1, -1, 0]) # Stroke, Bow, No preference
expected_result = torch.tensor([[1, 0, 1], [1, 1, 0], [0, 1, 1]]) # Eliminate [1, 1, 0] combination
|
class TestUnravelIndices(unittest.TestCase):
def test_simple_case(self):
indices = torch.tensor([0, 1, 2, 3, 4, 5])
shape = (2, 3)
expected_result = torch.tensor([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]])
result = unravel_indices(indices, shape)
self.assertTrue(torch.equal(result, expected_result))
def test_single_dimension(self):
indices = torch.tensor([0, 1, 2, 3])
shape = (4,)
expected_result = torch.tensor([[0], [1], [2], [3]])
result = unravel_indices(indices, shape)
self.assertTrue(torch.equal(result, expected_result))
def test_multi_dimension(self):
indices = torch.tensor([0, 1, 5, 11])
shape = (2, 3, 2)
expected_result = torch.tensor([[0, 0, 0], [0, 0, 1], [0, 2, 1], [1, 2, 1]])
result = unravel_indices(indices, shape)
self.assertTrue(torch.equal(result, expected_result))
def test_edge_cases(self):
indices = torch.tensor([0])
shape = (1, 1, 1)
expected_result = torch.tensor([[0, 0, 0]])
result = unravel_indices(indices, shape)
self.assertTrue(torch.equal(result, expected_result))
def test_output_type_and_shape(self):
indices = torch.tensor([3, 7])
shape = (2, 4)
result = unravel_indices(indices, shape)
self.assertIsInstance(result, torch.Tensor)
self.assertEqual(result.shape, (2, 2))
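The row-major unraveling exercised above is just a chain of `divmod`s over the trailing dimensions. A tensor-free equivalent for a single flat index:

```python
def unravel_index(flat_index, shape):
    """divmod-based equivalent of the tensor unravel tested above."""
    coords = []
    for dim in reversed(shape):
        flat_index, rem = divmod(flat_index, dim)
        coords.append(rem)
    return tuple(reversed(coords))

assert unravel_index(5, (2, 3)) == (1, 2)
assert unravel_index(11, (2, 3, 2)) == (1, 2, 1)
```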
class TestGeneralizedOuterAddition(unittest.TestCase):
def test_correct_calculation(self):
vectors = [torch.tensor([1, 2]), torch.tensor([3, 4])]
expected_result = torch.tensor([[4, 5], [5, 6]])
result = generalized_outer_addition(vectors)
self.assertTrue(torch.equal(result, expected_result))
def test_different_vector_sizes(self):
vectors = [torch.tensor([1, 2]), torch.tensor([3, 4, 5])]
expected_result = torch.tensor([[4, 5, 6], [5, 6, 7]])
result = generalized_outer_addition(vectors)
self.assertTrue(torch.equal(result, expected_result))
def test_with_output_tensor(self):
vectors = [torch.tensor([1, 2]), torch.tensor([3, 4])]
output = torch.empty((2, 2))
expected_result = torch.tensor([[4, 5], [5, 6]])
result = generalized_outer_addition(vectors, output)
self.assertTrue(torch.equal(result, expected_result))
def test_error_with_incorrect_output_shape(self):
vectors = [torch.tensor([1, 2]), torch.tensor([3, 4])]
output = torch.empty((3, 3))
with self.assertRaises(AssertionError):
generalized_outer_addition(vectors, output)
def test_type_and_device_consistency(self):
vectors = [torch.tensor([1., 2.], device="cuda"), torch.tensor([3., 4.], device="cuda")]
result = generalized_outer_addition(vectors)
self.assertTrue(result.dtype == torch.float32)
self.assertTrue(result.device.type == "cuda")
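For two vectors, the generalized outer addition reduces to `result[i][j] = u[i] + v[j]`. A plain-list sketch that reproduces the expected tensors in the tests above:

```python
def outer_addition(u, v):
    """result[i][j] = u[i] + v[j], the two-vector case of the generalized
    outer addition exercised above."""
    return [[a + b for b in v] for a in u]

assert outer_addition([1, 2], [3, 4]) == [[4, 5], [5, 6]]
assert outer_addition([1, 2], [3, 4, 5]) == [[4, 5, 6], [5, 6, 7]]
```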
class TestComputeVariances(unittest.TestCase):
def test_variances(self):
# Create sample data
torch.manual_seed(0) # For reproducibility
X = torch.rand(3, 7)
Y = torch.rand(4, 5)
# Expected variances computed by manual concatenation
expected_variances = torch.zeros((X.size(0), Y.size(0)))
for i in range(X.size(0)):
for j in range(Y.size(0)):
concatenated = torch.cat((X[i], Y[j]))
expected_variances[i, j] = torch.var(concatenated, unbiased=False)
# Variances computed by the function
actual_variances = compute_variances(X, Y)
# Assert equality (within a tolerance to account for floating-point errors)
self.assertTrue(torch.allclose(expected_variances, actual_variances, atol=1e-6))
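A batched `compute_variances` can avoid materializing every concatenation by using the standard decomposition of a pooled (population) variance into per-group variances plus between-group mean shifts. The identity itself, in stdlib form (function name illustrative):

```python
from statistics import pvariance, fmean

def concat_variance(xs, ys):
    """Population variance of the concatenation, computed from per-group
    means and variances only."""
    n, m = len(xs), len(ys)
    mx, my = fmean(xs), fmean(ys)
    vx, vy = pvariance(xs), pvariance(ys)
    mean = (n * mx + m * my) / (n + m)
    return (n * (vx + (mx - mean) ** 2) + m * (vy + (my - mean) ** 2)) / (n + m)

xs, ys = [1.0, 2.0, 3.0], [4.0, 8.0]
assert abs(concat_variance(xs, ys) - pvariance(xs + ys)) < 1e-12
```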
class TestGetMaxNumel(unittest.TestCase):
@patch('solver.get_free_memory')
def test_with_different_dtypes(self, mock_get_free_memory):
mock_get_free_memory.return_value = 1024 # Mock 1024 bytes of free memory
dtypes = [torch.float32, torch.int32, torch.float64]
for dtype in dtypes:
element_size = torch.tensor([], dtype=dtype).element_size()
expected_result = 1024 // element_size
result = get_max_numel(dtype)
self.assertEqual(result, expected_result)
@patch('solver.get_free_memory')
def test_without_specified_memory_capacity(self, mock_get_free_memory):
mock_get_free_memory.return_value = 2048 # Mock 2048 bytes of free memory
dtype = torch.float32
element_size = torch.tensor([], dtype=dtype).element_size()
expected_result = 2048 // element_size
result = get_max_numel(dtype)
self.assertEqual(result, expected_result)
def test_with_specified_memory_capacity(self):
dtype = torch.float32
memory_capacity = 4096 # Specify 4096 bytes of memory
element_size = torch.tensor([], dtype=dtype).element_size()
expected_result = 4096 // element_size
result = get_max_numel(dtype, memory_capacity)
self.assertEqual(result, expected_result)
class TestCheckMatrixFitAndNumChunks(unittest.TestCase):
def test_tensor_fits_memory(self):
dimensions = (10, 10, 10)
dtype = torch.float32
memory_capacity = 40000 # Set a capacity that's more than enough
self.assertEqual(check_matrix_fit_and_num_chunks(dimensions, dtype, memory_capacity), 1)
def test_tensor_exceeds_memory(self):
dimensions = (100, 100, 100)
dtype = torch.float32
memory_capacity = 1000 # Set a capacity that's too small
self.assertRaises(ValueError, check_matrix_fit_and_num_chunks, dimensions, dtype, memory_capacity)
def test_different_data_types(self):
dimensions = (100, 100)
memory_capacity = 100000
for dtype in [torch.float32, torch.int32, torch.float64]:
self.assertIsInstance(check_matrix_fit_and_num_chunks(dimensions, dtype, memory_capacity), int)
def test_various_dimensions(self):
dtype = torch.float32
memory_capacity = 10000
test_dimensions = [
(100, 20, 5),
(50, 40, 30),
(200, 10, 10)
]
for dimensions in test_dimensions:
self.assertIsInstance(check_matrix_fit_and_num_chunks(dimensions, dtype, memory_capacity), int)
def test_without_specified_memory_capacity(self):
dimensions = (10, 10, 10)
dtype = torch.float32
self.assertIsInstance(check_matrix_fit_and_num_chunks(dimensions, dtype), int)
class TestConvertPropertyToCategorical(unittest.TestCase):
def test_correct_conversion(self):
property_list = ["red", "blue", "red"]
expected_result = torch.tensor([1, 0, 1])
result = convert_property_to_categorical(property_list)
self.assertTrue(torch.equal(result, expected_result))
def test_empty_input(self):
property_list = []
expected_result = torch.tensor([])
result = convert_property_to_categorical(property_list)
self.assertTrue(torch.equal(result, expected_result))
def test_mixed_values(self):
property_list = ["apple", "banana", "apple", "cherry"]
expected_result = torch.tensor([0, 1, 0, 2])
result = convert_property_to_categorical(property_list)
self.assertTrue(torch.equal(result, expected_result))
def test_consistency_in_indexing(self):
property_list = ["dog", "cat", "bird", "cat"]
expected_result = torch.tensor([2, 1, 0, 1])
result = convert_property_to_categorical(property_list)
self.assertTrue(torch.equal(result, expected_result))
def test_output_type_and_shape(self):
property_list = ["one", "two", "three"]
result = convert_property_to_categorical(property_list)
self.assertIsInstance(result, torch.Tensor)
self.assertEqual(result.dtype, torch.int64)
self.assertEqual(result.shape, (3,))
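The ordering these tests assume ("blue" before "red", "bird" before "cat" before "dog") comes from indexing each value against the sorted unique values. The whole conversion is a two-liner in plain Python (no torch; `to_categorical` is an illustrative name):

```python
def to_categorical(props):
    """Map each property to the index of its value among the sorted unique
    values -- the ordering the tests above assume."""
    index = {v: i for i, v in enumerate(sorted(set(props)))}
    return [index[p] for p in props]

assert to_categorical(["red", "blue", "red"]) == [1, 0, 1]
assert to_categorical(["dog", "cat", "bird", "cat"]) == [2, 1, 0, 1]
```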
| result = eliminate_invalid_boats(binary_matrix, rower_sides) | 9 | 2023-12-18 05:12:36+00:00 | 12k |
Azure-Samples/functions-python-web-crawler | .venv/Lib/site-packages/charset_normalizer/cd.py | [
{
"identifier": "FREQUENCIES",
"path": ".venv/Lib/site-packages/charset_normalizer/constant.py",
"snippet": "FREQUENCIES: Dict[str, List[str]] = {\n \"English\": [\n \"e\",\n \"a\",\n \"t\",\n \"i\",\n \"o\",\n \"n\",\n \"s\",\n \"r\",\n ... | import importlib
from codecs import IncrementalDecoder
from collections import Counter
from functools import lru_cache
from typing import Counter as TypeCounter, Dict, List, Optional, Tuple
from .constant import (
FREQUENCIES,
KO_NAMES,
LANGUAGE_SUPPORTED_COUNT,
TOO_SMALL_SEQUENCE,
ZH_NAMES,
)
from .md import is_suspiciously_successive_range
from .models import CoherenceMatches
from .utils import (
is_accentuated,
is_latin,
is_multi_byte_encoding,
is_unicode_range_secondary,
unicode_range,
) | 10,281 |
def encoding_unicode_range(iana_name: str) -> List[str]:
"""
Return associated unicode ranges in a single byte code page.
"""
if is_multi_byte_encoding(iana_name):
raise IOError("Function not supported on multi-byte code page")
decoder = importlib.import_module(
"encodings.{}".format(iana_name)
).IncrementalDecoder
p: IncrementalDecoder = decoder(errors="ignore")
seen_ranges: Dict[str, int] = {}
character_count: int = 0
for i in range(0x40, 0xFF):
chunk: str = p.decode(bytes([i]))
if chunk:
character_range: Optional[str] = unicode_range(chunk)
if character_range is None:
continue
if is_unicode_range_secondary(character_range) is False:
if character_range not in seen_ranges:
seen_ranges[character_range] = 0
seen_ranges[character_range] += 1
character_count += 1
return sorted(
[
character_range
for character_range in seen_ranges
if seen_ranges[character_range] / character_count >= 0.15
]
)
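The loop above feeds one byte at a time through a codec's incremental decoder with `errors="ignore"`. The decoding step on its own, via the stdlib codecs registry (`latin-1` is chosen arbitrarily for illustration — it maps every byte to exactly one character):

```python
import codecs

dec = codecs.getincrementaldecoder("latin-1")(errors="ignore")
chars = [dec.decode(bytes([i])) for i in range(0x40, 0xFF)]

# latin-1 decodes every byte in this range to a single character;
# an unmapped byte would come back as "" because errors are ignored
assert all(len(c) == 1 for c in chars)
assert chars[0] == "@"  # 0x40 is ASCII '@'
```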
def unicode_range_languages(primary_range: str) -> List[str]:
"""
Return inferred languages used with a unicode range.
"""
languages: List[str] = []
for language, characters in FREQUENCIES.items():
for character in characters:
if unicode_range(character) == primary_range:
languages.append(language)
break
return languages
@lru_cache()
def encoding_languages(iana_name: str) -> List[str]:
"""
    Single-byte encoding language association. Some code pages are heavily linked to particular language(s).
This function does the correspondence.
"""
unicode_ranges: List[str] = encoding_unicode_range(iana_name)
primary_range: Optional[str] = None
for specified_range in unicode_ranges:
if "Latin" not in specified_range:
primary_range = specified_range
break
if primary_range is None:
return ["Latin Based"]
return unicode_range_languages(primary_range)
@lru_cache()
def mb_encoding_languages(iana_name: str) -> List[str]:
"""
    Multi-byte encoding language association. Some code pages are heavily linked to particular language(s).
This function does the correspondence.
"""
if (
iana_name.startswith("shift_")
or iana_name.startswith("iso2022_jp")
or iana_name.startswith("euc_j")
or iana_name == "cp932"
):
return ["Japanese"]
if iana_name.startswith("gb") or iana_name in ZH_NAMES:
return ["Chinese"]
if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:
return ["Korean"]
return []
@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)
def get_target_features(language: str) -> Tuple[bool, bool]:
"""
    Determine the main aspects of a supported language: whether it contains accents and whether it is pure Latin.
"""
target_have_accents: bool = False
target_pure_latin: bool = True
for character in FREQUENCIES[language]:
| if not target_have_accents and is_accentuated(character): | 7 | 2023-12-16 04:12:01+00:00 | 12k |
liebrandapps/FindMyGUI | main.py | [
{
"identifier": "AirTag",
"path": "airTag.py",
"snippet": "class AirTag:\n\n def __init__(self, ctx, jsonFile=None):\n self.log = ctx.log\n self.cfg = ctx.cfg\n self.__id = uuid.uuid4().hex\n self._name = \"\"\n self._privateKey = None\n self._advertisementKe... | import glob
import logging
import signal
import sys
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from logging.handlers import RotatingFileHandler
from os import makedirs
from os.path import join, exists, splitext
from threading import Thread
from urllib.parse import parse_qs, urlparse
from airTag import AirTag
from api import API
from config import Config
from context import Context
from daemon import Daemon | 7,275 | global runAsDaemon
try:
_log = logging.Logger(APP)
loghdl = RotatingFileHandler(cfg.logging_logFile, 'a', cfg.logging_maxFilesize, 4)
loghdl.setFormatter(logging.Formatter(cfg.logging_msgFormat))
loghdl.setLevel(cfg.logging_logLevel)
_log.addHandler(loghdl)
if cfg.logging_stdout and not runAsDaemon:
loghdl = logging.StreamHandler(sys.stdout)
loghdl.setFormatter(logging.Formatter(cfg.logging_msgFormat))
loghdl.setLevel(cfg.logging_logLevel)
_log.addHandler(loghdl)
_log.disabled = False
return _log
except Exception as e:
print("[%s] Unable to initialize logging. Reason: %s" % (APP, e))
return None
def terminate(sigNo, _):
global doTerminate
global myServer
global httpIsRunning
if doTerminate:
return
doTerminate = True
ctx.log.info(f"[{APP}] Terminating with Signal {sigNo} {sigs[sigNo]}")
if httpIsRunning:
Thread(target=myServer.shutdown).start()
def loadAirTags():
global ctx
airTagDir = ctx.cfg.general_airTagDirectory
airTagSuffix = ctx.cfg.general_airTagSuffix
if not exists(airTagDir):
ctx.log.info(
f"[loadAirTags] Airtags Directory '{airTagDir}' does not exist, creating it. This will be used to store Airtag key information.")
makedirs(airTagDir)
tags = glob.glob(join(airTagDir, '*' + airTagSuffix))
for t in tags:
airtag = AirTag(ctx, jsonFile=t)
ctx.airtags[airtag.id] = airtag
class FindMyServer(BaseHTTPRequestHandler):
''' Extension: ContentType, Encode '''
contentTypeDct = {'.html': ["text/html", True],
'.js': ["application/javascript", True],
'.css': ["text/css", True],
'.png': ["image/png", False],
}
def do_GET(self):
if self.path.startswith('/api'):
api = API(ctx)
query_components = parse_qs(urlparse(self.path).query)
cmd = query_components["command"]
result = api.call(cmd[0], params=query_components)
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
self.wfile.write(result.encode('UTF-8'))
else:
file = "/index.html" if self.path == "/" else self.path
file = join('www', file[1:])
ext = splitext(file)[1]
ct = self.contentTypeDct[ext] if ext in self.contentTypeDct.keys() else None
if exists(file) and ct is not None:
contentType = ct[0]
encode = ct[1]
self.send_response(200)
self.send_header("Content-type", contentType)
self.end_headers()
with open(file, 'r' if encode else 'rb') as f:
data = f.read()
self.wfile.write(data.encode('UTF-8') if encode else data)
else:
self.send_response(404)
self.end_headers()
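The `/api` branch above extracts the command with `urlparse` + `parse_qs`; `parse_qs` always returns a list per key, which is why the handler reads `cmd[0]`. The extraction in isolation (the query string here is made up):

```python
from urllib.parse import parse_qs, urlparse

path = "/api?command=getTags&id=abc123"
query = parse_qs(urlparse(path).query)
cmd = query["command"]
assert cmd == ["getTags"]       # parse_qs returns a list per key
assert cmd[0] == "getTags"
assert query["id"] == ["abc123"]
```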
if __name__ == '__main__':
doTerminate = False
initialConfig = {
"general": {
"httpHost": ['String', '0.0.0.0'],
"httpPort": ['Integer', 8008],
"httpFiles": ['String', 'www'],
"anisetteHost": ['String', 'http://192.168.2.15'],
"anisettePort": ['Integer', 6969],
"airTagDirectory": ['String', 'airtags'],
"airTagSuffix": ['String', '.json'],
"history": ["Integer", 30],
},
"logging": {
"logFile": ["String", "/tmp/findMyGUI.log"],
"maxFilesize": ["Integer", 1000000],
"msgFormat": ["String", "%(asctime)s, %(levelname)s, %(module)s {%(process)d}, %(lineno)d, %(message)s"],
"logLevel": ["Integer", 10],
"stdout": ["Boolean", True],
},
"appleId": {
"appleId": ["String", ''],
"password": ["String", ''],
"trustedDevice": ["Boolean", False],
}
}
path = join(CONFIG_DIR, CONFIG_FILE)
if not (exists(path)):
print(f"[{APP}] No config file {CONFIG_FILE} found at {CONFIG_DIR}, using defaults")
cfg = Config(path)
cfg.addScope(initialConfig)
runAsDaemon = False
if len(sys.argv) > 1:
todo = sys.argv[1]
if todo in ['start', 'stop', 'restart', 'status']:
runAsDaemon = True
pidFile = cfg.general_pidFile
logFile = cfg.logging_logFile
| """
Mark Liebrand 2024
This file is part of FindMyGUI which is released under the Apache 2.0 License
See file LICENSE or go to for full license details https://github.com/liebrandapps/FindMyGUI
"""
APP = "findMyGUI"
CONFIG_DIR = "./"
CONFIG_FILE = "findMyGUI.ini"
def setupLogger():
global runAsDaemon
try:
_log = logging.Logger(APP)
loghdl = RotatingFileHandler(cfg.logging_logFile, 'a', cfg.logging_maxFilesize, 4)
loghdl.setFormatter(logging.Formatter(cfg.logging_msgFormat))
loghdl.setLevel(cfg.logging_logLevel)
_log.addHandler(loghdl)
if cfg.logging_stdout and not runAsDaemon:
loghdl = logging.StreamHandler(sys.stdout)
loghdl.setFormatter(logging.Formatter(cfg.logging_msgFormat))
loghdl.setLevel(cfg.logging_logLevel)
_log.addHandler(loghdl)
_log.disabled = False
return _log
except Exception as e:
print("[%s] Unable to initialize logging. Reason: %s" % (APP, e))
return None
def terminate(sigNo, _):
global doTerminate
global myServer
global httpIsRunning
if doTerminate:
return
doTerminate = True
ctx.log.info(f"[{APP}] Terminating with Signal {sigNo} {sigs[sigNo]}")
if httpIsRunning:
Thread(target=myServer.shutdown).start()
def loadAirTags():
global ctx
airTagDir = ctx.cfg.general_airTagDirectory
airTagSuffix = ctx.cfg.general_airTagSuffix
if not exists(airTagDir):
ctx.log.info(
f"[loadAirTags] Airtags Directory '{airTagDir}' does not exist, creating it. This will be used to store Airtag key information.")
makedirs(airTagDir)
tags = glob.glob(join(airTagDir, '*' + airTagSuffix))
for t in tags:
airtag = AirTag(ctx, jsonFile=t)
ctx.airtags[airtag.id] = airtag
class FindMyServer(BaseHTTPRequestHandler):
''' Extension: ContentType, Encode '''
contentTypeDct = {'.html': ["text/html", True],
'.js': ["application/javascript", True],
'.css': ["text/css", True],
'.png': ["image/png", False],
}
def do_GET(self):
if self.path.startswith('/api'):
api = API(ctx)
query_components = parse_qs(urlparse(self.path).query)
cmd = query_components["command"]
result = api.call(cmd[0], params=query_components)
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
self.wfile.write(result.encode('UTF-8'))
else:
file = "/index.html" if self.path == "/" else self.path
file = join('www', file[1:])
ext = splitext(file)[1]
ct = self.contentTypeDct[ext] if ext in self.contentTypeDct.keys() else None
if exists(file) and ct is not None:
contentType = ct[0]
encode = ct[1]
self.send_response(200)
self.send_header("Content-type", contentType)
self.end_headers()
with open(file, 'r' if encode else 'rb') as f:
data = f.read()
self.wfile.write(data.encode('UTF-8') if encode else data)
else:
self.send_response(404)
self.end_headers()
if __name__ == '__main__':
doTerminate = False
initialConfig = {
"general": {
"httpHost": ['String', '0.0.0.0'],
"httpPort": ['Integer', 8008],
"httpFiles": ['String', 'www'],
"anisetteHost": ['String', 'http://192.168.2.15'],
"anisettePort": ['Integer', 6969],
"airTagDirectory": ['String', 'airtags'],
"airTagSuffix": ['String', '.json'],
"history": ["Integer", 30],
},
"logging": {
"logFile": ["String", "/tmp/findMyGUI.log"],
"maxFilesize": ["Integer", 1000000],
"msgFormat": ["String", "%(asctime)s, %(levelname)s, %(module)s {%(process)d}, %(lineno)d, %(message)s"],
"logLevel": ["Integer", 10],
"stdout": ["Boolean", True],
},
"appleId": {
"appleId": ["String", ''],
"password": ["String", ''],
"trustedDevice": ["Boolean", False],
}
}
path = join(CONFIG_DIR, CONFIG_FILE)
if not (exists(path)):
print(f"[{APP}] No config file {CONFIG_FILE} found at {CONFIG_DIR}, using defaults")
cfg = Config(path)
cfg.addScope(initialConfig)
runAsDaemon = False
if len(sys.argv) > 1:
todo = sys.argv[1]
if todo in ['start', 'stop', 'restart', 'status']:
runAsDaemon = True
pidFile = cfg.general_pidFile
logFile = cfg.logging_logFile | d = Daemon(pidFile, APP, logFile) | 4 | 2023-12-16 12:39:52+00:00 | 12k |
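The `do_GET` handler above dispatches `/api` requests by parsing the query string with `urlparse`/`parse_qs`; note that `parse_qs` maps each key to a *list* of values, which is why the handler indexes `cmd[0]`. A minimal stdlib-only sketch of that lookup (the helper name and request paths are illustrative, not from the repo):

```python
from urllib.parse import urlparse, parse_qs

def extract_command(path):
    """Pull the first 'command' value from a request path, or None if absent.

    parse_qs returns {key: [values]}, so even a present key needs [0];
    using .get avoids the KeyError the handler above would raise for
    requests that carry no 'command' parameter.
    """
    query = parse_qs(urlparse(path).query)
    values = query.get("command")
    return values[0] if values else None
```

For example, `extract_command('/api?command=getTags&id=3')` returns `'getTags'`, while a bare `'/api'` yields `None` instead of raising.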
zhcui/polar_preview | polar/lang_firsov/ulf.py | [
{
"identifier": "grad_ulf",
"path": "polar/lang_firsov/grad_ulf.py",
"snippet": "def get_grad_lf(mylf, params=None, rdm1=None, mo_coeff=None, mo_occ=None,\n scf_max_cycle=50, fci=False, beta=np.inf):\ndef get_grad_lf_full(mylf, params=None, rdm1=None, mo_coeff=None, mo_occ=None):\ndef get... | from functools import partial
from scipy import linalg as la
from scipy import optimize as opt
from pyscf import gto, scf, ao2mo, lib
from pyscf.scf import hf, uhf
from pyscf.lib import logger
from polar.lang_firsov import grad_ulf as grad
from polar.lang_firsov import thermal_average as ta
from polar.fci.fci import fc_factor
from polar.lang_firsov.lang_firsov import GLangFirsov, GGLangFirsov
from pyscf.pbc.scf.addons import smearing_
from polar.lang_firsov import mp_glf
import numpy as np | 7,543 | mylf.e_hf = float(e_tot)
conv = mf.converged
mylf.mo_coeff = mf.mo_coeff = mo_coeff
mylf.mo_occ = mf.mo_occ = mo_occ
mylf.mo_energy = mf.mo_energy = mo_energy
if mp4 or mp3 or mp2:
logger.info(mylf, "LF-MP2 start, nph = %d", nph)
ovlp_g = la.block_diag(ovlp, ovlp)
# ZHC FIXME should we use h1 or H1?
hcore_g = la.block_diag(H1, H1)
#hcore_g = la.block_diag(h1, h1)
mf = mylf._scf = mf.to_ghf()
mf.get_ovlp = lambda *args: ovlp_g
mf.get_hcore = lambda *args: hcore_g
mf._eri = H2
if mp4:
e_mp1, e_mp2, e_mp3, e_mp4 = mp_glf.get_e_mp4(mylf, lams=lams, zs=zs, nph=nph)
e_tot += e_mp1
e_tot += e_mp2
e_tot += e_mp3
e_tot += e_mp4
mylf.e_mp1 = e_mp1
mylf.e_mp2 = e_mp2
mylf.e_mp3 = e_mp3
mylf.e_mp4 = e_mp4
logger.info(mylf, "e_mp1 %15.8f", e_mp1)
logger.info(mylf, "e_mp2 %15.8f", e_mp2)
logger.info(mylf, "e_mp3 %15.8f", e_mp3)
logger.info(mylf, "e_mp4 %15.8f", e_mp4)
elif mp2:
e_mp2 = mp_glf.get_e_mp2(mylf, lams=lams, zs=zs, nph=nph)
e_tot += e_mp2
mylf.e_mp2 = e_mp2
logger.info(mylf, "e_mp2 %15.8f", mylf.e_mp2)
return e_tot, rdm1
class UGLangFirsov(GLangFirsov):
@property
def nkappa(self):
nocc_a = self.nelec_a
nvir_a = self.nao - nocc_a
nk_a = nvir_a * nocc_a
nocc_b = self.nelec_b
nvir_b = self.nao - nocc_b
nk_b = nvir_b * nocc_b
nparam = nk_a + nk_b
return nparam
def unpack_params_full(self, params, uniform=None):
nocc_a = self.nelec_a
nvir_a = self.nao - nocc_a
nk_a = nvir_a * nocc_a
nocc_b = self.nelec_b
nvir_b = self.nao - nocc_b
nk_b = nvir_b * nocc_b
kappa_a = params[:nk_a]
kappa_b = params[nk_a:(nk_a+nk_b)]
lams, zs = self.unpack_params(params[(nk_a+nk_b):])
return (kappa_a, kappa_b), lams, zs
def make_rdm1(self, mo_coeff=None, mo_occ=None):
if mo_occ is None:
mo_occ = self.mo_occ
if mo_coeff is None:
mo_coeff = self.mo_coeff
dm_a = np.dot(mo_coeff[0] * mo_occ[0], mo_coeff[0].conj().T)
dm_b = np.dot(mo_coeff[1] * mo_occ[1], mo_coeff[1].conj().T)
dm = np.asarray((dm_a, dm_b))
return dm
def make_rdm1p(self, mo_coeff=None, mo_occ=None, lams=None, zs=None):
"""
Phonon part of rdm1.
rho_xy = <LF | b^{\dag}_y b_x |LF>
"""
if lams is None or zs is None:
lams, zs = self.get_lams_zs(opt=True)
rdm1 = self.make_rdm1(mo_coeff=mo_coeff, mo_occ=mo_occ)
nao = self.nao
rdm1_diag = rdm1[:, range(nao), range(nao)]
rdm1_diag_sum = np.sum(rdm1_diag, axis=0)
rho = np.einsum("y, x -> xy", zs, zs)
tmp = np.einsum("xp, p -> x", lams, rdm1_diag_sum)
tmp = np.einsum("y, x -> xy", zs, tmp)
rho -= tmp
rho -= tmp.conj().T
rho += np.einsum("yp, xp, p -> xy", lams, lams, rdm1_diag_sum, optimize=True)
tmp = np.einsum("p, q -> pq", rdm1_diag_sum, rdm1_diag_sum)
tmp -= np.einsum("sqp, spq -> pq", rdm1, rdm1)
rho += np.einsum("yp, xp, pq -> xy", lams, lams, tmp, optimize=True)
return rho
def make_rdm1p_linear(self, mo_coeff=None, mo_occ=None, lams=None, zs=None):
"""
Phonon linear part of rdm1.
rho_x = <LF | b_x |LF>
"""
if lams is None or zs is None:
lams, zs = self.get_lams_zs(opt=True)
rdm1 = self.make_rdm1(mo_coeff=mo_coeff, mo_occ=mo_occ)
nao = self.nao
rdm1_diag = rdm1[:, range(nao), range(nao)].sum(axis=0)
rho = zs - np.einsum("xp, p -> x", lams, rdm1_diag)
return rho
get_grad = grad.get_grad_glf
get_grad_full = grad.get_grad_lf_full
solve_lf_ham = solve_lf_ham
solve_lf_ham_full = solve_lf_ham_full
| #!/usr/bin/env python
"""
Unrestricted version of variational Lang-Firsov.
Authors:
Zhi-Hao Cui <zhcui0408@gmail.com>
"""
einsum = partial(np.einsum, optimize=True)
# ****************************************************************************
# Variational Lang-Firsov
# ****************************************************************************
def solve_lf_ham(mylf, params=None, nelec=None, spin=None, mp2=False, mp3=False, mp4=False,
nph=9, verbose=False, scf_newton=False, beta=np.inf, dm0=None,
scf_max_cycle=50, fci=False):
H0, H1, H2, H_ep, w_p = mylf.get_lf_ham(params=params)
ovlp = mylf.get_ovlp()
nao = mylf.nao
h1 = mylf.get_h1()
if nelec is None:
nelec = mylf.nelec
if spin is None:
spin = mylf.spin
if params is None:
params = mylf.params
lams, zs = mylf.unpack_params(params)
if H2 is not None:
mf = uhf.UHF(mylf.mol)
mf.energy_nuc = lambda *args: H0
mf.get_hcore = lambda *args: H1
mf.get_ovlp = lambda *args: ovlp
mf._eri = H2
mf.direct_scf = False
mf.max_cycle = scf_max_cycle
mf.conv_tol = mylf.conv_tol * 0.1
if scf_newton:
mf = mf.newton()
if beta < np.inf:
mf = smearing_(mf, sigma=1.0/beta, method='fermi')
e_tot = mf.kernel(dm0=dm0)
rdm1 = mf.make_rdm1()
mylf._scf = mf
mylf.mo_energy = mf.mo_energy
mylf.mo_coeff = mf.mo_coeff
mylf.mo_occ = mf.mo_occ
mylf.e_hf = float(e_tot)
conv = mf.converged
if fci:
raise NotImplementedError
else:
raise NotImplementedError
mylf.e_tot = e_tot
return e_tot, rdm1
def solve_lf_ham_full(mylf, params=None, nelec=None, mp2=False, mp3=False, mp4=False,
nph=9, verbose=False, scf_newton=False, beta=np.inf, dm0=None,
scf_max_cycle=50, mo_coeff=None, mo_occ=None, canonicalization=True):
if params is None:
params = mylf.params
(kappa_a, kappa_b), lams, zs = mylf.unpack_params_full(params)
params_p = mylf.pack_params(lams, zs)
H0, H1, H2, H_ep, w_p = mylf.get_lf_ham(params=params_p)
ovlp = mylf.get_ovlp()
nao = mylf.nao
h1 = mylf.get_h1()
if nelec is None:
nelec = mylf.nelec
if H2 is not None:
mf = uhf.UHF(mylf.mol)
mf.energy_nuc = lambda *args: H0
mf.get_hcore = lambda *args: H1
mf.get_ovlp = lambda *args: ovlp
# ZHC FIXME NOTE the transformed H2 may not have the 4-fold symmetry,
# it is only 2-fold. pqrs = rspq
#mf._eri = ao2mo.restore(4, H2, nao)
mf._eri = H2
mf.direct_scf = False
mf.max_cycle = scf_max_cycle
mf.conv_tol = mylf.conv_tol * 0.1
nmo = len(mo_occ[0])
nocc_a = mylf.nelec_a
nocc_b = mylf.nelec_b
nvir_a = nmo - nocc_a
nvir_b = nmo - nocc_b
dr_a = hf.unpack_uniq_var(kappa_a, mo_occ[0])
mo_coeff_a = np.dot(mo_coeff[0], la.expm(dr_a))
dr_b = hf.unpack_uniq_var(kappa_b, mo_occ[1])
mo_coeff_b = np.dot(mo_coeff[1], la.expm(dr_b))
mo_coeff = np.asarray([mo_coeff_a, mo_coeff_b])
rdm1 = mf.make_rdm1(mo_coeff, mo_occ)
e_tot = mf.energy_elec(dm=rdm1)[0] + mf.energy_nuc()
fock = mf.get_fock(dm=rdm1)
if canonicalization:
print("-" * 79)
mo_energy, mo_coeff = mf.canonicalize(mo_coeff, mo_occ, fock)
homo_a = lumo_a = homo_b = lumo_b = None
mo_e_occ_a = mo_energy[0][mo_occ[0] >= 0.5]
mo_e_vir_a = mo_energy[0][mo_occ[0] < 0.5]
if len(mo_e_occ_a) > 0:
homo_a = mo_e_occ_a.max()
if len(mo_e_vir_a) > 0:
lumo_a = mo_e_vir_a.min()
if homo_a is not None:
print ('HOMO (a) = %15.8g'%(homo_a))
if lumo_a is not None:
print ('LUMO (a) = %15.8g'%(lumo_a))
if homo_a is not None:
print ("gap (a) = %15.8g"%(lumo_a - homo_a))
if (lumo_a is not None) and (homo_a is not None) and (homo_a > lumo_a):
print ('WARN: HOMO (a) %s > LUMO (a) %s was found in the canonicalized orbitals.'
%(homo_a, lumo_a))
print ("mo_energy (a):\n%s"%mo_energy[0])
print("-" * 79)
mo_e_occ_b = mo_energy[1][mo_occ[1] >= 0.5]
mo_e_vir_b = mo_energy[1][mo_occ[1] < 0.5]
if len(mo_e_occ_b) > 0:
homo_b = mo_e_occ_b.max()
if len(mo_e_vir_b) > 0:
lumo_b = mo_e_vir_b.min()
if homo_b is not None:
print ('HOMO (b) = %15.8g'%(homo_b))
if lumo_b is not None:
print ('LUMO (b) = %15.8g'%(lumo_b))
if homo_b is not None:
print ("gap (b) = %15.8g"%(lumo_b - homo_b))
if (lumo_b is not None) and (homo_b is not None) and (homo_b > lumo_b):
print ('WARN: HOMO (b) %s > LUMO (b) %s was found in the canonicalized orbitals.'
%(homo_b, lumo_b))
print ("mo_energy (b):\n%s"%mo_energy[1])
grad = mf.get_grad(mo_coeff, mo_occ, fock)
grad_norm = la.norm(grad)
print("-" * 79)
print ("|g| = %15.8g" % grad_norm)
print("-" * 79)
else:
mo_energy = einsum("spm, spq, sqm -> sm", mo_coeff.conj(), fock, mo_coeff)
mylf._scf = mf
mylf.e_hf = float(e_tot)
conv = mf.converged
mylf.mo_coeff = mf.mo_coeff = mo_coeff
mylf.mo_occ = mf.mo_occ = mo_occ
mylf.mo_energy = mf.mo_energy = mo_energy
if mp4 or mp3 or mp2:
logger.info(mylf, "LF-MP2 start, nph = %d", nph)
ovlp_g = la.block_diag(ovlp, ovlp)
# ZHC FIXME should we use h1 or H1?
hcore_g = la.block_diag(H1, H1)
#hcore_g = la.block_diag(h1, h1)
mf = mylf._scf = mf.to_ghf()
mf.get_ovlp = lambda *args: ovlp_g
mf.get_hcore = lambda *args: hcore_g
mf._eri = H2
if mp4:
e_mp1, e_mp2, e_mp3, e_mp4 = mp_glf.get_e_mp4(mylf, lams=lams, zs=zs, nph=nph)
e_tot += e_mp1
e_tot += e_mp2
e_tot += e_mp3
e_tot += e_mp4
mylf.e_mp1 = e_mp1
mylf.e_mp2 = e_mp2
mylf.e_mp3 = e_mp3
mylf.e_mp4 = e_mp4
logger.info(mylf, "e_mp1 %15.8f", e_mp1)
logger.info(mylf, "e_mp2 %15.8f", e_mp2)
logger.info(mylf, "e_mp3 %15.8f", e_mp3)
logger.info(mylf, "e_mp4 %15.8f", e_mp4)
elif mp2:
e_mp2 = mp_glf.get_e_mp2(mylf, lams=lams, zs=zs, nph=nph)
e_tot += e_mp2
mylf.e_mp2 = e_mp2
logger.info(mylf, "e_mp2 %15.8f", mylf.e_mp2)
return e_tot, rdm1
class UGLangFirsov(GLangFirsov):
@property
def nkappa(self):
nocc_a = self.nelec_a
nvir_a = self.nao - nocc_a
nk_a = nvir_a * nocc_a
nocc_b = self.nelec_b
nvir_b = self.nao - nocc_b
nk_b = nvir_b * nocc_b
nparam = nk_a + nk_b
return nparam
def unpack_params_full(self, params, uniform=None):
nocc_a = self.nelec_a
nvir_a = self.nao - nocc_a
nk_a = nvir_a * nocc_a
nocc_b = self.nelec_b
nvir_b = self.nao - nocc_b
nk_b = nvir_b * nocc_b
kappa_a = params[:nk_a]
kappa_b = params[nk_a:(nk_a+nk_b)]
lams, zs = self.unpack_params(params[(nk_a+nk_b):])
return (kappa_a, kappa_b), lams, zs
def make_rdm1(self, mo_coeff=None, mo_occ=None):
if mo_occ is None:
mo_occ = self.mo_occ
if mo_coeff is None:
mo_coeff = self.mo_coeff
dm_a = np.dot(mo_coeff[0] * mo_occ[0], mo_coeff[0].conj().T)
dm_b = np.dot(mo_coeff[1] * mo_occ[1], mo_coeff[1].conj().T)
dm = np.asarray((dm_a, dm_b))
return dm
def make_rdm1p(self, mo_coeff=None, mo_occ=None, lams=None, zs=None):
"""
Phonon part of rdm1.
rho_xy = <LF | b^{\dag}_y b_x |LF>
"""
if lams is None or zs is None:
lams, zs = self.get_lams_zs(opt=True)
rdm1 = self.make_rdm1(mo_coeff=mo_coeff, mo_occ=mo_occ)
nao = self.nao
rdm1_diag = rdm1[:, range(nao), range(nao)]
rdm1_diag_sum = np.sum(rdm1_diag, axis=0)
rho = np.einsum("y, x -> xy", zs, zs)
tmp = np.einsum("xp, p -> x", lams, rdm1_diag_sum)
tmp = np.einsum("y, x -> xy", zs, tmp)
rho -= tmp
rho -= tmp.conj().T
rho += np.einsum("yp, xp, p -> xy", lams, lams, rdm1_diag_sum, optimize=True)
tmp = np.einsum("p, q -> pq", rdm1_diag_sum, rdm1_diag_sum)
tmp -= np.einsum("sqp, spq -> pq", rdm1, rdm1)
rho += np.einsum("yp, xp, pq -> xy", lams, lams, tmp, optimize=True)
return rho
def make_rdm1p_linear(self, mo_coeff=None, mo_occ=None, lams=None, zs=None):
"""
Phonon linear part of rdm1.
rho_x = <LF | b_x |LF>
"""
if lams is None or zs is None:
lams, zs = self.get_lams_zs(opt=True)
rdm1 = self.make_rdm1(mo_coeff=mo_coeff, mo_occ=mo_occ)
nao = self.nao
rdm1_diag = rdm1[:, range(nao), range(nao)].sum(axis=0)
rho = zs - np.einsum("xp, p -> x", lams, rdm1_diag)
return rho
get_grad = grad.get_grad_glf
get_grad_full = grad.get_grad_lf_full
solve_lf_ham = solve_lf_ham
solve_lf_ham_full = solve_lf_ham_full
| class UGGLangFirsov(GGLangFirsov, UGLangFirsov): | 4 | 2023-12-18 07:39:51+00:00 | 12k |
YaoFANGUK/video-subtitle-remover | backend/scenedetect/backends/opencv.py | [
{
"identifier": "FrameTimecode",
"path": "backend/scenedetect/frame_timecode.py",
"snippet": "class FrameTimecode:\n \"\"\"Object for frame-based timecodes, using the video framerate to compute back and\n forth between frame number and seconds/timecode.\n\n A timecode is valid only if it compli... | from logging import getLogger
from typing import AnyStr, Tuple, Union, Optional
from numpy import ndarray
from backend.scenedetect.frame_timecode import FrameTimecode, MAX_FPS_DELTA
from backend.scenedetect.platform import get_file_name
from backend.scenedetect.video_stream import VideoStream, SeekError, VideoOpenFailure, FrameRateUnavailable
import math
import os.path
import cv2 | 7,345 |
NON_VIDEO_FILE_INPUT_IDENTIFIERS = (
IMAGE_SEQUENCE_IDENTIFIER, # image sequence
'://', # URL/network stream
' ! ', # gstreamer pipe
)
def _get_aspect_ratio(cap: cv2.VideoCapture, epsilon: float = 0.0001) -> float:
"""Display/pixel aspect ratio of the VideoCapture as a float (1.0 represents square pixels)."""
# Versions of OpenCV < 3.4.1 do not support this, so we fall back to 1.0.
if not 'CAP_PROP_SAR_NUM' in dir(cv2):
return 1.0
num: float = cap.get(cv2.CAP_PROP_SAR_NUM)
den: float = cap.get(cv2.CAP_PROP_SAR_DEN)
# If numerator or denominator are close to zero, so we fall back to 1.0.
if abs(num) < epsilon or abs(den) < epsilon:
return 1.0
return num / den
class VideoStreamCv2(VideoStream):
"""OpenCV `cv2.VideoCapture` backend."""
def __init__(
self,
path: AnyStr = None,
framerate: Optional[float] = None,
max_decode_attempts: int = 5,
path_or_device: Union[bytes, str, int] = None,
):
"""Open a video file, image sequence, or network stream.
Arguments:
path: Path to the video. Can be a file, image sequence (`'folder/DSC_%04d.jpg'`),
or network stream.
framerate: If set, overrides the detected framerate.
max_decode_attempts: Number of attempts to continue decoding the video
after a frame fails to decode. This allows processing videos that
have a few corrupted frames or metadata (in which case accuracy
of detection algorithms may be lower). Once this limit is passed,
decoding will stop and emit an error.
path_or_device: [DEPRECATED] Specify `path` for files, image sequences, or
network streams/URLs. Use `VideoCaptureAdapter` for devices/pipes.
Raises:
OSError: file could not be found or access was denied
VideoOpenFailure: video could not be opened (may be corrupted)
ValueError: specified framerate is invalid
"""
super().__init__()
# TODO(v0.7): Replace with DeprecationWarning that `path_or_device` will be removed in v0.8.
if path_or_device is not None:
logger.error('path_or_device is deprecated, use path or VideoCaptureAdapter instead.')
path = path_or_device
if path is None:
raise ValueError('Path must be specified!')
if framerate is not None and framerate < MAX_FPS_DELTA:
raise ValueError('Specified framerate (%f) is invalid!' % framerate)
if max_decode_attempts < 0:
raise ValueError('Maximum decode attempts must be >= 0!')
self._path_or_device = path
self._is_device = isinstance(self._path_or_device, int)
# Initialized in _open_capture:
self._cap: Optional[
cv2.VideoCapture] = None # Reference to underlying cv2.VideoCapture object.
self._frame_rate: Optional[float] = None
# VideoCapture state
self._has_grabbed = False
self._max_decode_attempts = max_decode_attempts
self._decode_failures = 0
self._warning_displayed = False
self._open_capture(framerate)
#
# Backend-Specific Methods/Properties
#
@property
def capture(self) -> cv2.VideoCapture:
"""Returns reference to underlying VideoCapture object. Use with caution.
Prefer to use this property only to take ownership of the underlying cv2.VideoCapture object
backing this object. Seeking or using the read/grab methods through this property are
unsupported and will leave this object in an inconsistent state.
"""
assert self._cap
return self._cap
#
# VideoStream Methods/Properties
#
BACKEND_NAME = 'opencv'
"""Unique name used to identify this backend."""
@property
def frame_rate(self) -> float:
"""Framerate in frames/sec."""
assert self._frame_rate
return self._frame_rate
@property
def path(self) -> Union[bytes, str]:
"""Video or device path."""
if self._is_device:
assert isinstance(self._path_or_device, (int))
return "Device %d" % self._path_or_device
assert isinstance(self._path_or_device, (bytes, str))
return self._path_or_device
@property
def name(self) -> str:
"""Name of the video, without extension, or device."""
if self._is_device:
return self.path
| # -*- coding: utf-8 -*-
#
# PySceneDetect: Python-Based Video Scene Detector
# -------------------------------------------------------------------
# [ Site: https://scenedetect.com ]
# [ Docs: https://scenedetect.com/docs/ ]
# [ Github: https://github.com/Breakthrough/PySceneDetect/ ]
#
# Copyright (C) 2014-2023 Brandon Castellano <http://www.bcastell.com>.
# PySceneDetect is licensed under the BSD 3-Clause License; see the
# included LICENSE file, or visit one of the above pages for details.
#
""":class:`VideoStreamCv2` is backed by the OpenCV `VideoCapture` object. This is the default
backend. Works with video files, image sequences, and network streams/URLs.
For wrapping input devices or pipes, there is also :class:`VideoCaptureAdapter` which can be
constructed from an existing `cv2.VideoCapture`. This allows performing scene detection on inputs
which do not support seeking.
"""
logger = getLogger('pyscenedetect')
IMAGE_SEQUENCE_IDENTIFIER = '%'
NON_VIDEO_FILE_INPUT_IDENTIFIERS = (
IMAGE_SEQUENCE_IDENTIFIER, # image sequence
'://', # URL/network stream
' ! ', # gstreamer pipe
)
def _get_aspect_ratio(cap: cv2.VideoCapture, epsilon: float = 0.0001) -> float:
"""Display/pixel aspect ratio of the VideoCapture as a float (1.0 represents square pixels)."""
# Versions of OpenCV < 3.4.1 do not support this, so we fall back to 1.0.
if not 'CAP_PROP_SAR_NUM' in dir(cv2):
return 1.0
num: float = cap.get(cv2.CAP_PROP_SAR_NUM)
den: float = cap.get(cv2.CAP_PROP_SAR_DEN)
# If numerator or denominator are close to zero, so we fall back to 1.0.
if abs(num) < epsilon or abs(den) < epsilon:
return 1.0
return num / den
class VideoStreamCv2(VideoStream):
"""OpenCV `cv2.VideoCapture` backend."""
def __init__(
self,
path: AnyStr = None,
framerate: Optional[float] = None,
max_decode_attempts: int = 5,
path_or_device: Union[bytes, str, int] = None,
):
"""Open a video file, image sequence, or network stream.
Arguments:
path: Path to the video. Can be a file, image sequence (`'folder/DSC_%04d.jpg'`),
or network stream.
framerate: If set, overrides the detected framerate.
max_decode_attempts: Number of attempts to continue decoding the video
after a frame fails to decode. This allows processing videos that
have a few corrupted frames or metadata (in which case accuracy
of detection algorithms may be lower). Once this limit is passed,
decoding will stop and emit an error.
path_or_device: [DEPRECATED] Specify `path` for files, image sequences, or
network streams/URLs. Use `VideoCaptureAdapter` for devices/pipes.
Raises:
OSError: file could not be found or access was denied
VideoOpenFailure: video could not be opened (may be corrupted)
ValueError: specified framerate is invalid
"""
super().__init__()
# TODO(v0.7): Replace with DeprecationWarning that `path_or_device` will be removed in v0.8.
if path_or_device is not None:
logger.error('path_or_device is deprecated, use path or VideoCaptureAdapter instead.')
path = path_or_device
if path is None:
raise ValueError('Path must be specified!')
if framerate is not None and framerate < MAX_FPS_DELTA:
raise ValueError('Specified framerate (%f) is invalid!' % framerate)
if max_decode_attempts < 0:
raise ValueError('Maximum decode attempts must be >= 0!')
self._path_or_device = path
self._is_device = isinstance(self._path_or_device, int)
# Initialized in _open_capture:
self._cap: Optional[
cv2.VideoCapture] = None # Reference to underlying cv2.VideoCapture object.
self._frame_rate: Optional[float] = None
# VideoCapture state
self._has_grabbed = False
self._max_decode_attempts = max_decode_attempts
self._decode_failures = 0
self._warning_displayed = False
self._open_capture(framerate)
#
# Backend-Specific Methods/Properties
#
@property
def capture(self) -> cv2.VideoCapture:
"""Returns reference to underlying VideoCapture object. Use with caution.
Prefer to use this property only to take ownership of the underlying cv2.VideoCapture object
backing this object. Seeking or using the read/grab methods through this property are
unsupported and will leave this object in an inconsistent state.
"""
assert self._cap
return self._cap
#
# VideoStream Methods/Properties
#
BACKEND_NAME = 'opencv'
"""Unique name used to identify this backend."""
@property
def frame_rate(self) -> float:
"""Framerate in frames/sec."""
assert self._frame_rate
return self._frame_rate
@property
def path(self) -> Union[bytes, str]:
"""Video or device path."""
if self._is_device:
assert isinstance(self._path_or_device, (int))
return "Device %d" % self._path_or_device
assert isinstance(self._path_or_device, (bytes, str))
return self._path_or_device
@property
def name(self) -> str:
"""Name of the video, without extension, or device."""
if self._is_device:
return self.path | file_name: str = get_file_name(self.path, include_extension=False) | 2 | 2023-10-25 02:50:01+00:00 | 12k |
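`_get_aspect_ratio` above guards the SAR division twice: once for OpenCV builds that lack `CAP_PROP_SAR_NUM`, and once for near-zero numerator/denominator values, falling back to square pixels (1.0) in both cases. The division guard in isolation (a sketch; the helper name is illustrative):

```python
def safe_ratio(num, den, default=1.0, epsilon=0.0001):
    """Return num / den, or `default` when either operand is effectively
    zero -- the same epsilon guard _get_aspect_ratio applies before dividing."""
    if abs(num) < epsilon or abs(den) < epsilon:
        return default
    return num / den
```

`safe_ratio(16, 9)` gives the usual 16:9 value, while `safe_ratio(0.0, 9)` quietly returns 1.0 rather than propagating a degenerate aspect ratio.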
Genesis-Embodied-AI/RoboGen | manipulation/sim.py | [
{
"identifier": "Panda",
"path": "manipulation/panda.py",
"snippet": "class Panda(Robot):\n def __init__(self, controllable_joints='right', slider=True, floating=False):\n self.slider = slider\n self.floating = floating\n if not floating:\n if not slider:\n ... | import numpy as np
import pybullet as p
import gym
import pickle
import yaml
import os.path as osp
from gym.utils import seeding
from gym import spaces
from collections import defaultdict
from scipy.spatial.transform import Rotation as R
from manipulation.panda import Panda
from manipulation.ur5 import UR5
from manipulation.sawyer import Sawyer
from manipulation.utils import parse_config, load_env, download_and_parse_objavarse_obj_from_yaml_config
from manipulation.gpt_reward_api import get_joint_id_from_name, get_link_id_from_name
from manipulation.table_utils import table_paths, table_scales, table_poses, table_bbox_scale_down_factors | 7,620 | joint_name = p.getJointInfo(obj_id, joint_idx, physicsClientId=self.id)[1].decode("utf-8")
joint_angle = p.getJointState(obj_id, joint_idx, physicsClientId=self.id)[0]
self.initial_joint_angle[name][joint_name] = joint_angle
self.initial_pos = {}
self.initial_orient = {}
for name in self.urdf_ids:
obj_id = self.urdf_ids[name.lower()]
if name == 'robot' or name == 'plane' or name == "init_table": continue
pos, orient = p.getBasePositionAndOrientation(obj_id, physicsClientId=self.id)
self.initial_pos[name] = pos
self.initial_orient[name] = orient
def set_to_default_joint_angles(self):
for obj_name in self.urdf_ids:
if obj_name == 'robot' or obj_name == 'plane' or obj_name == "init_table": continue
obj_id = self.urdf_ids[obj_name]
num_joints = p.getNumJoints(obj_id, physicsClientId=self.id)
for joint_idx in range(num_joints):
joint_limit_low, joint_limit_high = p.getJointInfo(obj_id, joint_idx, physicsClientId=self.id)[8:10]
if joint_limit_low > joint_limit_high:
joint_limit_low, joint_limit_high = joint_limit_high, joint_limit_low
joint_val = joint_limit_low + 0.06 * (joint_limit_high - joint_limit_low)
p.resetJointState(obj_id, joint_idx, joint_val, physicsClientId=self.id)
def handle_gpt_special_relationships(self, spatial_relationships):
# we support "on" and "in" for now, but this can be extended to more relationships
for spatial_relationship in spatial_relationships:
words = spatial_relationship.lower().split(",")
words = [word.strip().lstrip() for word in words]
if words[0] == "on":
obj_a = words[1]
obj_b = words[2]
if len(words) == 4:
obj_b_link = words[3]
obj_b_link_id = get_link_id_from_name(self, obj_b, obj_b_link)
else:
obj_b_link_id = -1
obj_a_id, obj_b_id = self.urdf_ids[obj_a], self.urdf_ids[obj_b]
obj_a_bbox_min, obj_a_bbox_max = self.get_aabb(obj_a_id)
obj_a_size = obj_a_bbox_max - obj_a_bbox_min
target_aabb_min, target_aabb_max = self.get_aabb_link(obj_b_id, obj_b_link_id)
id_line = p.addUserDebugLine(target_aabb_min, target_aabb_max, [1, 0, 0], lineWidth=10, lifeTime=0, physicsClientId=self.id)
id_point = p.addUserDebugPoints([(target_aabb_min + target_aabb_max) / 2], [[0, 0, 1]], 10, 0, physicsClientId=self.id)
new_pos = (target_aabb_min + target_aabb_max) / 2
new_pos[2] = target_aabb_max[2] # put obj a on top of obj b.
new_pos[2] += obj_a_size[2] # add the height of obj a
if not self.randomize:
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, 0], physicsClientId=self.id)
else:
random_orientations = [0, np.pi / 2, np.pi, np.pi * 3 / 2]
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, random_orientations[np.random.randint(4)]], physicsClientId=self.id)
p.resetBasePositionAndOrientation(obj_a_id, new_pos, obj_a_orientation, physicsClientId=self.id)
p.removeUserDebugItem(id_line, physicsClientId=self.id)
p.removeUserDebugItem(id_point, physicsClientId=self.id)
if words[0] == 'in':
obj_a = words[1]
obj_b = words[2]
if len(words) == 4:
obj_b_link = words[3]
obj_b_link_id = get_link_id_from_name(self, obj_b, obj_b_link)
else:
obj_b_link_id = -1
obj_a_id, obj_b_id = self.urdf_ids[obj_a], self.urdf_ids[obj_b]
            # if collisions persist after many attempts, scale down the size of object A.
cnt = 1
collision_free = False
obj_a_new_size = self.simulator_sizes[obj_a]
obj_a_ori_pos, obj_a_orientation = p.getBasePositionAndOrientation(obj_a_id, physicsClientId=self.id)
target_aabb_min, target_aabb_max = self.get_aabb_link(obj_b_id, obj_b_link_id)
while not collision_free:
if cnt % 100 == 0:
print("scaling down! object size is {}".format(obj_a_new_size))
obj_a_new_size = obj_a_new_size * 0.9
p.removeBody(obj_a_id, physicsClientId=self.id)
obj_a_id = p.loadURDF(self.urdf_paths[obj_a],
basePosition=obj_a_ori_pos,
baseOrientation=obj_a_orientation,
physicsClientId=self.id, useFixedBase=False, globalScaling=obj_a_new_size)
self.urdf_ids[obj_a] = obj_a_id
self.simulator_sizes[obj_a] = obj_a_new_size
obj_a_bbox_min, obj_a_bbox_max = self.get_aabb(obj_a_id)
obj_a_size = obj_a_bbox_max - obj_a_bbox_min
id_line = p.addUserDebugLine(target_aabb_min, target_aabb_max, [1, 0, 0], lineWidth=10, lifeTime=0, physicsClientId=self.id)
id_point = p.addUserDebugPoints([(target_aabb_min + target_aabb_max) / 2], [[0, 0, 1]], 10, 0, physicsClientId=self.id)
center_pos = (target_aabb_min + target_aabb_max) / 2
up_pos = center_pos.copy()
up_pos[2] += obj_a_size[2]
possible_locations = [center_pos, up_pos]
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, 0], physicsClientId=self.id)
for pos in possible_locations: # we try two possible locations to put obj a in obj b
p.resetBasePositionAndOrientation(obj_a_id, pos, obj_a_orientation, physicsClientId=self.id)
contact_points = p.getClosestPoints(obj_a_id, obj_b_id, 0.002, physicsClientId=self.id)
if len(contact_points) == 0:
collision_free = True
break
p.removeUserDebugItem(id_line, physicsClientId=self.id)
p.removeUserDebugItem(id_point, physicsClientId=self.id)
cnt += 1
if cnt > 1000: # if after scaling for 10 times it still does not work, let it be.
break
def handle_gpt_joint_angle(self, articulated_init_joint_angles):
for name in articulated_init_joint_angles:
obj_id = self.urdf_ids[name.lower()]
for joint_name, joint_angle in articulated_init_joint_angles[name].items():
|
# imports needed by this snippet (reconstructed from usage; the crop starts mid-file).
# Project-local helpers (Panda, Sawyer, UR5, parse_config, load_env, table_paths,
# get_link_id_from_name, download_and_parse_objavarse_obj_from_yaml_config, ...)
# are defined elsewhere in the repo.
import os.path as osp
import pickle
from collections import defaultdict
import numpy as np
import pybullet as p
import yaml
import gym
from gym import spaces
from gym.utils import seeding
class SimpleEnv(gym.Env):
def __init__(self,
dt=0.01,
config_path=None,
gui=False,
frameskip=2,
horizon=120,
restore_state_file=None,
rotation_mode='delta-axis-angle-local',
translation_mode='delta-translation',
max_rotation=np.deg2rad(5),
max_translation=0.15,
use_suction=True, # whether to use a suction gripper
object_candidate_num=6, # how many candidate objects to sample from objaverse
vhacd=False, # if to perform vhacd on the object for better collision detection for pybullet
randomize=0, # if to randomize the scene
obj_id=0, # which object to choose to use from the candidates
):
super().__init__()
# Task
self.config_path = config_path
self.restore_state_file = restore_state_file
self.frameskip = frameskip
self.horizon = horizon
self.gui = gui
self.object_candidate_num = object_candidate_num
self.solution_path = None
self.success = False # not really used, kept for now
self.primitive_save_path = None # to be used for saving the primitives execution results
self.randomize = randomize
self.obj_id = obj_id # which object to choose to use from the candidates
# physics
self.gravity = -9.81
self.contact_constraint = None
self.vhacd = vhacd
# action space
self.use_suction = use_suction
self.rotation_mode = rotation_mode
self.translation_mode = translation_mode
self.max_rotation_angle = max_rotation
self.max_translation = max_translation
self.suction_to_obj_pose = 0
self.suction_contact_link = None
self.suction_obj_id = None
self.activated = 0
if self.gui:
try:
self.id = p.connect(p.GUI)
except:
self.id = p.connect(p.DIRECT)
else:
self.id = p.connect(p.DIRECT)
self.asset_dir = osp.join(osp.dirname(osp.realpath(__file__)), "assets/")
hz = int(1 / dt)
p.setTimeStep(1.0 / hz, physicsClientId=self.id)
self.seed()
self.set_scene()
self.setup_camera_rpy()
self.scene_lower, self.scene_upper = self.get_scene_bounds()
self.scene_center = (self.scene_lower + self.scene_upper) / 2
self.scene_range = (self.scene_upper - self.scene_lower) / 2
self.grasp_action_mag = 0.06 if not self.use_suction else 1
self.action_low = np.array([-1, -1, -1, -1, -1, -1, -1])
self.action_high = np.array([1, 1, 1, 1, 1, 1, self.grasp_action_mag])
self.action_space = spaces.Box(low=self.action_low, high=self.action_high, dtype=np.float32)
self.base_action_space = spaces.Box(low=self.action_low, high=self.action_high, dtype=np.float32)
self.num_objects = len(self.urdf_ids) - 2 # exclude plane, robot
distractor_object_num = np.sum(list(self.is_distractor.values()))
self.num_objects -= distractor_object_num
### For RL policy learning, observation space includes:
# 1. object positions and orientations (6 * num_objects)
# 2. object min and max bounding box (6 * num_objects)
# 3. articulated object joint angles (num_objects * num_joints)
# 4. articulated object link position and orientation (num_objects * num_joints * 6)
# 5. robot base position (xy)
# 6. robot end-effector position and orientation (6)
# 7. gripper suction activated/deactivate or gripper joint angle (if not using suction gripper) (1)
num_obs = self.num_objects * 12 # obs 1 and 2
for name in self.urdf_types:
if self.urdf_types[name] == 'urdf' and not self.is_distractor[name]: # obs 3 and 4
num_joints = p.getNumJoints(self.urdf_ids[name], physicsClientId=self.id)
num_obs += num_joints
num_obs += 6 * num_joints
num_obs += 2 + 6 + 1 # obs 5 6 7
self.base_num_obs = num_obs
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(num_obs, ), dtype=np.float32)
self.base_observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(self.base_num_obs, ), dtype=np.float32)
self.detected_position = {} # not used for now, keep it
def normalize_position(self, pos):
if self.translation_mode == 'normalized-direct-translation':
return (pos - self.scene_center) / self.scene_range
else:
return pos
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed) # pass the seed through; previously it was accepted but ignored
def get_aabb(self, id):
num_joints = p.getNumJoints(id, physicsClientId=self.id)
min_aabbs, max_aabbs = [], []
for link_idx in range(-1, num_joints):
min_aabb, max_aabb = p.getAABB(id, link_idx, physicsClientId=self.id)
min_aabbs.append(list(min_aabb))
max_aabbs.append(list(max_aabb))
min_aabb = np.min(np.concatenate(min_aabbs, axis=0).reshape(-1, 3), axis=0)
max_aabb = np.max(np.concatenate(max_aabbs, axis=0).reshape(-1, 3), axis=0)
return min_aabb, max_aabb
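The per-link loop above reduces a set of link AABBs to one enclosing box via a per-axis min/max. A standalone sketch of that reduction (the helper name `merge_aabbs` is illustrative; in the real code pybullet's `getAABB` supplies the per-link boxes):

```python
def merge_aabbs(boxes):
    """Merge per-link (min_xyz, max_xyz) AABB pairs into one enclosing box,
    taking the per-axis minimum of the mins and maximum of the maxes."""
    mins = [min(b[0][k] for b in boxes) for k in range(3)]
    maxs = [max(b[1][k] for b in boxes) for k in range(3)]
    return mins, maxs
```

For example, merging a unit cube at the origin with a box offset along x/y yields the smallest box containing both.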
def get_aabb_link(self, id, link_id):
min_aabb, max_aabb = p.getAABB(id, link_id, physicsClientId=self.id)
return np.array(min_aabb), np.array(max_aabb)
def get_scene_bounds(self):
min_aabbs = []
max_aabbs = []
for name, id in self.urdf_ids.items():
if name == 'plane': continue
min_aabb, max_aabb = self.get_aabb(id)
min_aabbs.append(min_aabb)
max_aabbs.append(max_aabb)
min_aabb = np.min(np.stack(min_aabbs, axis=0).reshape(-1, 3), axis=0)
max_aabb = np.max(np.stack(max_aabbs, axis=0).reshape(-1, 3), axis=0)
aabb_range = max_aabb - min_aabb # avoid shadowing the builtin `range`
return min_aabb - 0.5 * aabb_range, max_aabb + 0.5 * aabb_range
def clip_within_workspace(self, robot_pos, ori_pos, on_table):
pos = ori_pos.copy()
if not on_table:
# If objects are too close to the robot, push them away
x_near_low, x_near_high = robot_pos[0] - 0.3, robot_pos[0] + 0.3
y_near_low, y_near_high = robot_pos[1] - 0.3, robot_pos[1] + 0.3
if pos[0] > x_near_low and pos[0] < x_near_high:
pos[0] = x_near_low if pos[0] < robot_pos[0] else x_near_high
if pos[1] > y_near_low and pos[1] < y_near_high:
pos[1] = y_near_low if pos[1] < robot_pos[1] else y_near_high
return pos
else:
# Object is on table, should be within table's bounding box
new_pos = pos.copy()
new_pos[:2] = np.clip(new_pos[:2], self.table_bbox_min[:2], self.table_bbox_max[:2])
return new_pos
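The keep-out branch above snaps any x/y coordinate that falls within 0.3 of the robot base to the nearest edge of that square zone. A minimal sketch of the same clamping with illustrative names (`margin` stands in for the hard-coded 0.3):

```python
def push_out_of_keepout(robot_xy, pos_xy, margin=0.3):
    """If pos_xy lies inside a square keep-out zone around the robot,
    snap each offending coordinate to the nearest zone edge."""
    x, y = pos_xy
    rx, ry = robot_xy
    if rx - margin < x < rx + margin:
        x = rx - margin if x < rx else rx + margin
    if ry - margin < y < ry + margin:
        y = ry - margin if y < ry else ry + margin
    return [x, y]
```

Each axis is handled independently, so a point inside the zone on only one axis is moved only along that axis.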
def get_robot_base_pos(self):
robot_base_pos = [1, 1, 0.28]
return robot_base_pos
def get_robot_init_joint_angles(self):
init_joint_angles = [0 for _ in range(len(self.robot.right_arm_joint_indices))]
if self.robot_name == 'panda':
init_joint_angles = [0, -1.10916842e-04, 7.33823451e-05, -5.47701370e-01, -5.94950533e-01,
2.62857916e+00, -4.85316284e-01, 1.96042022e+00, 2.15271531e+00,
-7.35304443e-01]
return init_joint_angles
def set_scene(
self,
):
### simulation preparation
p.resetSimulation(physicsClientId=self.id)
if self.gui:
p.resetDebugVisualizerCamera(cameraDistance=1.75, cameraYaw=-25, cameraPitch=-45, cameraTargetPosition=[-0.2, 0, 0.4], physicsClientId=self.id)
p.configureDebugVisualizer(p.COV_ENABLE_MOUSE_PICKING, 0, physicsClientId=self.id)
p.configureDebugVisualizer(p.COV_ENABLE_GUI, 0, physicsClientId=self.id)
p.setRealTimeSimulation(0, physicsClientId=self.id)
p.setGravity(0, 0, self.gravity, physicsClientId=self.id)
### load restore state
restore_state = None
if self.restore_state_file is not None:
with open(self.restore_state_file, 'rb') as f:
restore_state = pickle.load(f)
### load plane
planeId = p.loadURDF(osp.join(self.asset_dir, "plane", "plane.urdf"), physicsClientId=self.id)
### create and load a robot
robot_base_pos = self.load_robot(restore_state)
### load and parse task config (including semantically meaningful distractor objects)
self.urdf_ids = {
"robot": self.robot.body,
"plane": planeId,
}
self.urdf_paths = {}
self.urdf_types = {}
self.init_positions = {}
self.on_tables = {}
self.simulator_sizes = {}
self.is_distractor = {
"robot": 0,
"plane": 0,
}
urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, urdf_movables, \
use_table, articulated_init_joint_angles, spatial_relationships = self.load_and_parse_config(restore_state)
### handle the case if there is a table
self.load_table(use_table, restore_state)
### load each object from the task config
self.load_object(urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, urdf_movables)
### adjusting object positions
### place the lowest point on the object to be the height where GPT specifies
object_height = self.adjust_object_positions(robot_base_pos)
### resolve collisions between objects
self.resolve_collision(robot_base_pos, object_height, spatial_relationships)
### handle any special relationships outputted by GPT
self.handle_gpt_special_relationships(spatial_relationships)
### set all object's joint angles to the lower joint limit
self.set_to_default_joint_angles()
### overwrite joint angles specified by GPT
self.handle_gpt_joint_angle(articulated_init_joint_angles)
### record initial joint angles and positions
self.record_initial_joint_and_pose()
### stabilize the scene
for _ in range(500):
p.stepSimulation(physicsClientId=self.id)
### restore to a state if provided
if self.restore_state_file is not None:
load_env(self, self.restore_state_file)
### Enable debug rendering
if self.gui:
p.configureDebugVisualizer(p.COV_ENABLE_RENDERING, 1, physicsClientId=self.id)
self.init_state = p.saveState(physicsClientId=self.id)
def load_robot(self, restore_state):
robot_classes = {
"panda": Panda,
"sawyer": Sawyer,
"ur5": UR5,
}
robot_names = list(robot_classes.keys())
self.robot_name = robot_names[np.random.randint(len(robot_names))]
if restore_state is not None and "robot_name" in restore_state:
self.robot_name = restore_state['robot_name']
self.robot_class = robot_classes[self.robot_name]
# Create robot
self.robot = self.robot_class()
self.robot.init(self.asset_dir, self.id, self.np_random, fixed_base=True, use_suction=self.use_suction)
self.agents = [self.robot]
self.suction_id = self.robot.right_gripper_indices[0]
# Update robot motor gains
self.robot.motor_gains = 0.05
self.robot.motor_forces = 100.0
# Set robot base position & orientation, and joint angles
robot_base_pos = self.get_robot_base_pos()
robot_base_orient = [0, 0, 0, 1]
self.robot_base_orient = robot_base_orient
self.robot.set_base_pos_orient(robot_base_pos, robot_base_orient)
init_joint_angles = self.get_robot_init_joint_angles()
self.robot.set_joint_angles(self.robot.right_arm_joint_indices, init_joint_angles)
return robot_base_pos
def load_and_parse_config(self, restore_state):
### select and download objects from objaverse
res = download_and_parse_objavarse_obj_from_yaml_config(self.config_path, candidate_num=self.object_candidate_num, vhacd=self.vhacd)
if not res:
print("=" * 20)
print("some objects cannot be found in objaverse, task_build failed, now exit ...")
print("=" * 20)
exit()
self.config = None
while self.config is None:
with open(self.config_path, 'r') as file:
self.config = yaml.safe_load(file)
for obj in self.config:
if "solution_path" in obj:
self.solution_path = obj["solution_path"]
break
### parse config
urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, use_table, \
articulated_init_joint_angles, spatial_relationships, distractor_config_path, urdf_movables = parse_config(self.config,
use_bard=True, obj_id=self.obj_id)
if not use_table:
urdf_on_table = [False for _ in urdf_on_table]
urdf_names = [x.lower() for x in urdf_names]
for name in urdf_names:
self.is_distractor[name] = 0
### parse distractor object config (semantically meaningful objects that are related but not used for the task)
if distractor_config_path is not None:
self.distractor_config_path = distractor_config_path
res = download_and_parse_objavarse_obj_from_yaml_config(distractor_config_path, candidate_num=self.object_candidate_num, vhacd=self.vhacd)
with open(distractor_config_path, 'r') as f:
self.distractor_config = yaml.safe_load(f)
distractor_urdf_paths, distractor_urdf_sizes, distractor_urdf_positions, distractor_urdf_names, distractor_urdf_types, \
distractor_urdf_on_table, _, _, _, _, _ = \
parse_config(self.distractor_config, use_bard=True, obj_id=self.obj_id, use_vhacd=False)
distractor_urdf_names = [x.lower() for x in distractor_urdf_names]
if not use_table:
distractor_urdf_on_table = [False for _ in distractor_urdf_on_table]
for name in distractor_urdf_names:
self.is_distractor[name] = 1
distractor_movables = [True for _ in distractor_urdf_names]
urdf_paths += distractor_urdf_paths
urdf_sizes += distractor_urdf_sizes
urdf_positions += distractor_urdf_positions
urdf_names += distractor_urdf_names
urdf_types += distractor_urdf_types
urdf_on_table += distractor_urdf_on_table
urdf_movables += distractor_movables
if restore_state is not None:
if "urdf_paths" in restore_state:
self.urdf_paths = restore_state['urdf_paths']
urdf_paths = [self.urdf_paths[name] for name in urdf_names]
if "object_sizes" in restore_state:
self.simulator_sizes = restore_state['object_sizes']
urdf_sizes = [self.simulator_sizes[name] for name in urdf_names]
return urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, urdf_movables, \
use_table, articulated_init_joint_angles, spatial_relationships
def load_table(self, use_table, restore_state):
self.use_table = use_table
if use_table:
self.table_path = table_paths[np.random.randint(len(table_paths))]
if restore_state is not None:
self.table_path = restore_state['table_path']
table_scale = table_scales[self.table_path]
table_pos = table_poses[self.table_path]
table_orientation = [np.pi/2, 0, 0]
self.table = p.loadURDF(osp.join(self.asset_dir, self.table_path, "material.urdf"), physicsClientId=self.id, useFixedBase=True,
globalScaling=table_scale)
if not self.randomize:
random_orientation = p.getQuaternionFromEuler(table_orientation, physicsClientId=self.id)
else:
random_orientations = [0, np.pi / 2, np.pi, np.pi * 3 / 2]
random_orientation = p.getQuaternionFromEuler([np.pi/2, 0, random_orientations[np.random.randint(4)]], physicsClientId=self.id)
p.resetBasePositionAndOrientation(self.table, table_pos, random_orientation, physicsClientId=self.id)
self.table_bbox_min, self.table_bbox_max = self.get_aabb(self.table)
table_range = self.table_bbox_max - self.table_bbox_min
self.table_bbox_min[:2] += table_range[:2] * table_bbox_scale_down_factors[self.table_path]
self.table_bbox_max[:2] -= table_range[:2] * table_bbox_scale_down_factors[self.table_path]
self.table_height = self.table_bbox_max[2]
p.addUserDebugLine([*self.table_bbox_min[:2], self.table_height], self.table_bbox_max, [1, 0, 0], lineWidth=10, lifeTime=0, physicsClientId=self.id)
self.simulator_sizes["init_table"] = table_scale
self.urdf_ids["init_table"] = self.table
self.is_distractor['init_table'] = 0
def load_object(self, urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, urdf_movables):
for path, size, pos, name, type, on_table, moveable in zip(urdf_paths, urdf_sizes, urdf_positions, urdf_names, urdf_types, urdf_on_table, urdf_movables):
name = name.lower()
# by default, all objects are movable, except non-distractor articulated urdf objects
use_fixed_base = (type == 'urdf' and not self.is_distractor[name])
if type == 'urdf' and moveable: # if gpt specified the object is movable, then it is movable
use_fixed_base = False
size = min(size, 1.2)
size = max(size, 0.1) # if the object is too small, current gripper cannot really manipulate it.
x_orient = np.pi/2 if type == 'mesh' else 0 # handle different coordinate axis by objaverse and partnet-mobility
if self.randomize or self.is_distractor[name]:
orientation = p.getQuaternionFromEuler([x_orient, 0, self.np_random.uniform(-np.pi/3, np.pi/3)], physicsClientId=self.id)
else:
orientation = p.getQuaternionFromEuler([x_orient, 0, 0], physicsClientId=self.id)
if not on_table:
load_pos = pos
else: # change to be table coordinate
table_xy_range = self.table_bbox_max[:2] - self.table_bbox_min[:2]
obj_x = self.table_bbox_min[0] + pos[0] * table_xy_range[0]
obj_y = self.table_bbox_min[1] + pos[1] * table_xy_range[1]
obj_z = self.table_height
load_pos = [obj_x, obj_y, obj_z]
id = p.loadURDF(path, basePosition=load_pos, baseOrientation=orientation, physicsClientId=self.id, useFixedBase=use_fixed_base, globalScaling=size)
# scale size
if name in self.simulator_sizes:
p.removeBody(id, physicsClientId=self.id)
saved_size = self.simulator_sizes[name]
id = p.loadURDF(path, basePosition=load_pos, baseOrientation=orientation, physicsClientId=self.id, useFixedBase=use_fixed_base, globalScaling=saved_size)
else:
min_aabb, max_aabb = self.get_aabb(id)
actual_size = np.linalg.norm(max_aabb - min_aabb)
if np.abs(actual_size - size) > 0.05:
p.removeBody(id, physicsClientId=self.id)
id = p.loadURDF(path, basePosition=load_pos, baseOrientation=orientation, physicsClientId=self.id, useFixedBase=use_fixed_base, globalScaling=size ** 2 / actual_size)
self.simulator_sizes[name] = size ** 2 / actual_size
else:
self.simulator_sizes[name] = size
self.urdf_ids[name] = id
self.urdf_paths[name] = path
self.urdf_types[name] = type
self.init_positions[name] = np.array(load_pos)
self.on_tables[name] = on_table
print("Finished loading object: ", name)
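The reload branch above corrects `globalScaling` when the measured bounding-box diagonal deviates from the requested size: the first load used `globalScaling=size` and measured `actual_size`, so the mesh's intrinsic diagonal is `actual_size / size`, and the corrected scale `size ** 2 / actual_size` makes the diagonal come out at `size`. A sketch of that derivation (the helper name is illustrative):

```python
def corrected_scale(desired_size, measured_size, requested_scale):
    """Re-derive the URDF globalScaling so the loaded mesh's bounding-box
    diagonal matches desired_size. Loading at requested_scale produced
    measured_size, so the mesh's intrinsic diagonal is
    measured_size / requested_scale."""
    intrinsic = measured_size / requested_scale
    return desired_size / intrinsic
```

With `requested_scale == desired_size` this reduces exactly to the `size ** 2 / actual_size` expression used above.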
def adjust_object_positions(self, robot_base_pos):
object_height = {}
for name, id in self.urdf_ids.items():
if name == 'robot' or name == 'plane' or name == 'init_table': continue
min_aabb, max_aabb = self.get_aabb(id)
min_z = min_aabb[2]
object_height[id] = 2 * self.init_positions[name][2] - min_z
pos, orient = p.getBasePositionAndOrientation(id, physicsClientId=self.id)
new_pos = np.array(pos)
new_pos = self.clip_within_workspace(robot_base_pos, new_pos, self.on_tables[name])
new_pos[2] = object_height[id]
p.resetBasePositionAndOrientation(id, new_pos, orient, physicsClientId=self.id)
self.init_positions[name] = new_pos
return object_height
def resolve_collision(self, robot_base_pos, object_height, spatial_relationships):
collision = True
collision_cnt = 1
while collision:
if collision_cnt % 50 == 0: # if collisions remain unresolved, randomly perturb object positions every 50 iterations
for name, id in self.urdf_ids.items():
if name == 'robot' or name == 'plane' or name == "init_table": continue
pos = self.init_positions[name]
_, orient = p.getBasePositionAndOrientation(id, physicsClientId=self.id)
new_pos = np.array(pos) + np.random.uniform(-0.2, 0.2, size=3)
new_pos = self.clip_within_workspace(robot_base_pos, new_pos, self.on_tables[name])
new_pos[2] = object_height[id]
p.resetBasePositionAndOrientation(id, new_pos, orient, physicsClientId=self.id)
p.stepSimulation(physicsClientId=self.id)
push_directions = defaultdict(list) # store the push direction for each object
# detect collisions between objects
detected_collision = False
for name, id in self.urdf_ids.items():
if name == 'robot' or name == 'plane' or name == 'init_table': continue
for name2, id2 in self.urdf_ids.items():
if name == name2 or name2 == 'robot' or name2 == 'plane' or name2 == 'init_table': continue
# if gpt specifies obj a and obj b should have some special relationship, then skip collision resolution
skip = False
for spatial_relationship in spatial_relationships:
words = spatial_relationship.lower().split(",")
words = [word.strip().lstrip() for word in words]
if name in words and name2 in words:
skip = True
break
if skip: continue
contact_points = p.getClosestPoints(id, id2, 0.01, physicsClientId=self.id)
if len(contact_points) > 0:
contact_point = contact_points[0]
push_direction = contact_point[7]
push_direction = np.array([push_direction[0], push_direction[1], push_direction[2]])
# both are distractors or both are not, push both objects away
if (self.is_distractor[name] and self.is_distractor[name2]) or \
(not self.is_distractor[name] and not self.is_distractor[name2]):
push_directions[id].append(-push_direction)
push_directions[id2].append(push_direction)
# only 1 is distractor, only pushes the distractor
if self.is_distractor[name] and not self.is_distractor[name2]:
push_directions[id].append(push_direction)
if not self.is_distractor[name] and self.is_distractor[name2]:
push_directions[id2].append(-push_direction)
detected_collision = True
# collisions between robot and objects, only push object away
for name, id in self.urdf_ids.items():
if name == 'robot' or name == 'plane' or name == 'init_table':
continue
contact_points = p.getClosestPoints(self.robot.body, id, 0.05, physicsClientId=self.id)
if len(contact_points) > 0:
contact_point = contact_points[0]
push_direction = contact_point[7]
push_direction = np.array([push_direction[0], push_direction[1], push_direction[2]])
push_directions[id].append(-push_direction)
detected_collision = True
# between table and objects that should not be placed on table
if self.use_table:
for name, id in self.urdf_ids.items():
if name == 'robot' or name == 'plane' or name == 'init_table':
continue
if self.on_tables[name]:
continue
contact_points = p.getClosestPoints(self.table, id, 0.05, physicsClientId=self.id) # check against the table, not the robot (fixes an apparent copy-paste from the block above)
if len(contact_points) > 0:
contact_point = contact_points[0]
push_direction = contact_point[7]
push_direction = np.array([push_direction[0], push_direction[1], push_direction[2]])
push_directions[id].append(-push_direction)
detected_collision = True
# move objects along their accumulated push directions
push_distance = 0.1
id_to_name = {v: k for k, v in self.urdf_ids.items()}
for id in push_directions:
for direction in push_directions[id]:
pos, orient = p.getBasePositionAndOrientation(id, physicsClientId=self.id)
new_pos = np.array(pos) + push_distance * direction
# look up this object's own name; the loop variable `name` would otherwise be a stale leftover from the detection loops above
new_pos = self.clip_within_workspace(robot_base_pos, new_pos, self.on_tables[id_to_name[id]])
new_pos[2] = object_height[id]
p.resetBasePositionAndOrientation(id, new_pos, orient, physicsClientId=self.id)
p.stepSimulation(physicsClientId=self.id)
collision = detected_collision
collision_cnt += 1
if collision_cnt > 1000:
break
def record_initial_joint_and_pose(self):
self.initial_joint_angle = {}
for name in self.urdf_ids:
obj_id = self.urdf_ids[name.lower()]
if name == 'robot' or name == 'plane' or name == "init_table": continue
if self.urdf_types[name.lower()] == 'urdf':
self.initial_joint_angle[name] = {}
num_joints = p.getNumJoints(obj_id, physicsClientId=self.id)
for joint_idx in range(num_joints):
joint_name = p.getJointInfo(obj_id, joint_idx, physicsClientId=self.id)[1].decode("utf-8")
joint_angle = p.getJointState(obj_id, joint_idx, physicsClientId=self.id)[0]
self.initial_joint_angle[name][joint_name] = joint_angle
self.initial_pos = {}
self.initial_orient = {}
for name in self.urdf_ids:
obj_id = self.urdf_ids[name.lower()]
if name == 'robot' or name == 'plane' or name == "init_table": continue
pos, orient = p.getBasePositionAndOrientation(obj_id, physicsClientId=self.id)
self.initial_pos[name] = pos
self.initial_orient[name] = orient
def set_to_default_joint_angles(self):
for obj_name in self.urdf_ids:
if obj_name == 'robot' or obj_name == 'plane' or obj_name == "init_table": continue
obj_id = self.urdf_ids[obj_name]
num_joints = p.getNumJoints(obj_id, physicsClientId=self.id)
for joint_idx in range(num_joints):
joint_limit_low, joint_limit_high = p.getJointInfo(obj_id, joint_idx, physicsClientId=self.id)[8:10]
if joint_limit_low > joint_limit_high:
joint_limit_low, joint_limit_high = joint_limit_high, joint_limit_low
joint_val = joint_limit_low + 0.06 * (joint_limit_high - joint_limit_low)
p.resetJointState(obj_id, joint_idx, joint_val, physicsClientId=self.id)
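`set_to_default_joint_angles` places every joint 6% of the way into its range after normalizing possibly-inverted limits. The per-joint computation can be isolated as follows (the helper name is illustrative):

```python
def default_joint_value(limit_low, limit_high, fraction=0.06):
    """Pick a joint angle slightly above the lower limit, swapping
    inverted limits first (as the loop above does)."""
    if limit_low > limit_high:
        limit_low, limit_high = limit_high, limit_low
    return limit_low + fraction * (limit_high - limit_low)
```

Swapping first means the result is identical regardless of the order in which the URDF reports the limits.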
def handle_gpt_special_relationships(self, spatial_relationships):
# we support "on" and "in" for now, but this can be extended to more relationships
for spatial_relationship in spatial_relationships:
words = spatial_relationship.lower().split(",")
words = [word.strip().lstrip() for word in words]
if words[0] == "on":
obj_a = words[1]
obj_b = words[2]
if len(words) == 4:
obj_b_link = words[3]
obj_b_link_id = get_link_id_from_name(self, obj_b, obj_b_link)
else:
obj_b_link_id = -1
obj_a_id, obj_b_id = self.urdf_ids[obj_a], self.urdf_ids[obj_b]
obj_a_bbox_min, obj_a_bbox_max = self.get_aabb(obj_a_id)
obj_a_size = obj_a_bbox_max - obj_a_bbox_min
target_aabb_min, target_aabb_max = self.get_aabb_link(obj_b_id, obj_b_link_id)
id_line = p.addUserDebugLine(target_aabb_min, target_aabb_max, [1, 0, 0], lineWidth=10, lifeTime=0, physicsClientId=self.id)
id_point = p.addUserDebugPoints([(target_aabb_min + target_aabb_max) / 2], [[0, 0, 1]], 10, 0, physicsClientId=self.id)
new_pos = (target_aabb_min + target_aabb_max) / 2
new_pos[2] = target_aabb_max[2] # put obj a on top of obj b.
new_pos[2] += obj_a_size[2] # add the height of obj a
if not self.randomize:
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, 0], physicsClientId=self.id)
else:
random_orientations = [0, np.pi / 2, np.pi, np.pi * 3 / 2]
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, random_orientations[np.random.randint(4)]], physicsClientId=self.id)
p.resetBasePositionAndOrientation(obj_a_id, new_pos, obj_a_orientation, physicsClientId=self.id)
p.removeUserDebugItem(id_line, physicsClientId=self.id)
p.removeUserDebugItem(id_point, physicsClientId=self.id)
if words[0] == 'in':
obj_a = words[1]
obj_b = words[2]
if len(words) == 4:
obj_b_link = words[3]
obj_b_link_id = get_link_id_from_name(self, obj_b, obj_b_link)
else:
obj_b_link_id = -1
obj_a_id, obj_b_id = self.urdf_ids[obj_a], self.urdf_ids[obj_b]
# if after a lot of trying times, there is still collision, we should scale down the size of object A.
cnt = 1
collision_free = False
obj_a_new_size = self.simulator_sizes[obj_a]
obj_a_ori_pos, obj_a_orientation = p.getBasePositionAndOrientation(obj_a_id, physicsClientId=self.id)
target_aabb_min, target_aabb_max = self.get_aabb_link(obj_b_id, obj_b_link_id)
while not collision_free:
if cnt % 100 == 0:
print("scaling down! object size is {}".format(obj_a_new_size))
obj_a_new_size = obj_a_new_size * 0.9
p.removeBody(obj_a_id, physicsClientId=self.id)
obj_a_id = p.loadURDF(self.urdf_paths[obj_a],
basePosition=obj_a_ori_pos,
baseOrientation=obj_a_orientation,
physicsClientId=self.id, useFixedBase=False, globalScaling=obj_a_new_size)
self.urdf_ids[obj_a] = obj_a_id
self.simulator_sizes[obj_a] = obj_a_new_size
obj_a_bbox_min, obj_a_bbox_max = self.get_aabb(obj_a_id)
obj_a_size = obj_a_bbox_max - obj_a_bbox_min
id_line = p.addUserDebugLine(target_aabb_min, target_aabb_max, [1, 0, 0], lineWidth=10, lifeTime=0, physicsClientId=self.id)
id_point = p.addUserDebugPoints([(target_aabb_min + target_aabb_max) / 2], [[0, 0, 1]], 10, 0, physicsClientId=self.id)
center_pos = (target_aabb_min + target_aabb_max) / 2
up_pos = center_pos.copy()
up_pos[2] += obj_a_size[2]
possible_locations = [center_pos, up_pos]
obj_a_orientation = p.getQuaternionFromEuler([np.pi/2, 0, 0], physicsClientId=self.id)
for pos in possible_locations: # we try two possible locations to put obj a in obj b
p.resetBasePositionAndOrientation(obj_a_id, pos, obj_a_orientation, physicsClientId=self.id)
contact_points = p.getClosestPoints(obj_a_id, obj_b_id, 0.002, physicsClientId=self.id)
if len(contact_points) == 0:
collision_free = True
break
p.removeUserDebugItem(id_line, physicsClientId=self.id)
p.removeUserDebugItem(id_point, physicsClientId=self.id)
cnt += 1
if cnt > 1000: # if after scaling for 10 times it still does not work, let it be.
break
def handle_gpt_joint_angle(self, articulated_init_joint_angles):
for name in articulated_init_joint_angles:
obj_id = self.urdf_ids[name.lower()]
for joint_name, joint_angle in articulated_init_joint_angles[name].items(): | joint_idx = get_joint_id_from_name(self, name.lower(), joint_name) | 6 | 2023-10-31 19:44:09+00:00 | 12k |
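The `in`-relationship handler in the snippet above retries placement, shrinking object A by 10% every 100 failed attempts and giving up after 1000 attempts. The retry schedule in isolation (`fits` is a stand-in for the pybullet `getClosestPoints` contact query; names are illustrative):

```python
def shrink_until_fit(size, fits, shrink=0.9, attempts_per_scale=100, max_attempts=1000):
    """Retry until fits(size) succeeds, scaling size down by `shrink`
    every `attempts_per_scale` attempts, capped at `max_attempts`."""
    cnt = 1
    while not fits(size):
        if cnt % attempts_per_scale == 0:
            size *= shrink
        cnt += 1
        if cnt > max_attempts:
            break
    return size
```

With a purely size-based predicate, the loop shrinks exactly until the threshold is crossed: 0.9**6 ≈ 0.531 still fails a 0.5 threshold, so seven shrinks are needed.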
KoeAI/LLVC | minimal_rvc/model.py | [
{
"identifier": "SynthesizerTrnMs256NSFSid",
"path": "minimal_rvc/models.py",
"snippet": "class SynthesizerTrnMs256NSFSid(nn.Module):\n def __init__(\n self,\n spec_channels,\n segment_size,\n inter_channels,\n hidden_channels,\n filter_channels,\n n_h... | import os
import re
import torch
from typing import *
from fairseq import checkpoint_utils
from fairseq.models.hubert.hubert import HubertModel
from pydub import AudioSegment
from .models import (SynthesizerTrnMs256NSFSid, SynthesizerTrnMs256NSFSidNono)
from .pipeline import VocalConvertPipeline
from .cmd_opts import opts
from .shared import ROOT_DIR, device, is_half
from .utils import load_audio | 7,837 | # This module is based on code from ddPn08, liujing04, and teftef6220
# https://github.com/ddPn08/rvc-webui
# https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI
# https://github.com/teftef6220/Voice_Separation_and_Selection
# These modules are licensed under the MIT License.
AUDIO_OUT_DIR = opts.output_dir or os.path.join(ROOT_DIR, "outputs")
EMBEDDINGS_LIST = {
"hubert-base-japanese": (
"rinna_hubert_base_jp.pt",
"hubert-base-japanese",
"local",
),
"contentvec": ("checkpoint_best_legacy_500.pt", "contentvec", "local"),
}
def update_state_dict(state_dict):
if "params" in state_dict and state_dict["params"] is not None:
return
keys = [
"spec_channels",
"segment_size",
"inter_channels",
"hidden_channels",
"filter_channels",
"n_heads",
"n_layers",
"kernel_size",
"p_dropout",
"resblock",
"resblock_kernel_sizes",
"resblock_dilation_sizes",
"upsample_rates",
"upsample_initial_channel",
"upsample_kernel_sizes",
"spk_embed_dim",
"gin_channels",
"emb_channels",
"sr",
]
state_dict["params"] = {}
n = 0
for i, key in enumerate(keys):
i = i - n
if len(state_dict["config"]) != 19 and key == "emb_channels":
# backward compat.
n += 1
continue
state_dict["params"][key] = state_dict["config"][i]
if not "emb_channels" in state_dict["params"]:
if state_dict.get("version", "v1") == "v1":
state_dict["params"]["emb_channels"] = 256 # for backward compat.
state_dict["embedder_output_layer"] = 9
else:
state_dict["params"]["emb_channels"] = 768 # for backward compat.
state_dict["embedder_output_layer"] = 12
class VoiceConvertModel:
def __init__(self, model_name: str, state_dict: Dict[str, Any]) -> None:
update_state_dict(state_dict)
self.model_name = model_name
self.state_dict = state_dict
self.tgt_sr = state_dict["params"]["sr"]
f0 = state_dict.get("f0", 1)
state_dict["params"]["spk_embed_dim"] = state_dict["weight"][
"emb_g.weight"
].shape[0]
if not "emb_channels" in state_dict["params"]:
state_dict["params"]["emb_channels"] = 768 # for backward compat.
if f0 == 1:
self.net_g = SynthesizerTrnMs256NSFSid(
**state_dict["params"], is_half=is_half
)
else:
self.net_g = SynthesizerTrnMs256NSFSidNono(**state_dict["params"])
del self.net_g.enc_q
self.net_g.load_state_dict(state_dict["weight"], strict=False)
| # This module is based on code from ddPn08, liujing04, and teftef6220
# https://github.com/ddPn08/rvc-webui
# https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI
# https://github.com/teftef6220/Voice_Separation_and_Selection
# These modules are licensed under the MIT License.
AUDIO_OUT_DIR = opts.output_dir or os.path.join(ROOT_DIR, "outputs")
EMBEDDINGS_LIST = {
"hubert-base-japanese": (
"rinna_hubert_base_jp.pt",
"hubert-base-japanese",
"local",
),
"contentvec": ("checkpoint_best_legacy_500.pt", "contentvec", "local"),
}
def update_state_dict(state_dict):
if "params" in state_dict and state_dict["params"] is not None:
return
keys = [
"spec_channels",
"segment_size",
"inter_channels",
"hidden_channels",
"filter_channels",
"n_heads",
"n_layers",
"kernel_size",
"p_dropout",
"resblock",
"resblock_kernel_sizes",
"resblock_dilation_sizes",
"upsample_rates",
"upsample_initial_channel",
"upsample_kernel_sizes",
"spk_embed_dim",
"gin_channels",
"emb_channels",
"sr",
]
state_dict["params"] = {}
n = 0
for i, key in enumerate(keys):
i = i - n
if len(state_dict["config"]) != 19 and key == "emb_channels":
# backward compat.
n += 1
continue
state_dict["params"][key] = state_dict["config"][i]
if not "emb_channels" in state_dict["params"]:
if state_dict.get("version", "v1") == "v1":
state_dict["params"]["emb_channels"] = 256 # for backward compat.
state_dict["embedder_output_layer"] = 9
else:
state_dict["params"]["emb_channels"] = 768 # for backward compat.
state_dict["embedder_output_layer"] = 12
class VoiceConvertModel:
def __init__(self, model_name: str, state_dict: Dict[str, Any]) -> None:
update_state_dict(state_dict)
self.model_name = model_name
self.state_dict = state_dict
self.tgt_sr = state_dict["params"]["sr"]
f0 = state_dict.get("f0", 1)
state_dict["params"]["spk_embed_dim"] = state_dict["weight"][
"emb_g.weight"
].shape[0]
if not "emb_channels" in state_dict["params"]:
state_dict["params"]["emb_channels"] = 768 # for backward compat.
if f0 == 1:
self.net_g = SynthesizerTrnMs256NSFSid(
**state_dict["params"], is_half=is_half
)
else:
self.net_g = SynthesizerTrnMs256NSFSidNono(**state_dict["params"])
del self.net_g.enc_q
self.net_g.load_state_dict(state_dict["weight"], strict=False) | self.net_g.eval().to(device) | 4 | 2023-10-28 01:58:49+00:00 | 12k |
baaivision/JudgeLM | judgelm/serve/multi_model_worker.py | [
{
"identifier": "WORKER_HEART_BEAT_INTERVAL",
"path": "judgelm/constants.py",
"snippet": "WORKER_HEART_BEAT_INTERVAL = int(os.getenv(\"JUDGELM_WORKER_HEART_BEAT_INTERVAL\", 45))"
},
{
"identifier": "ErrorCode",
"path": "judgelm/constants.py",
"snippet": "class ErrorCode(IntEnum):\n \"... | import argparse
import asyncio
import dataclasses
import logging
import json
import os
import time
import threading
import uuid
import requests
import torch
import torch.nn.functional as F
import uvicorn
from typing import List, Union
from fastapi import FastAPI, Request, BackgroundTasks
from fastapi.responses import StreamingResponse, JSONResponse
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
LlamaTokenizer,
AutoModel,
)
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
LLaMATokenizer,
AutoModel,
)
from judgelm.constants import WORKER_HEART_BEAT_INTERVAL, ErrorCode, SERVER_ERROR_MSG
from judgelm.model.model_adapter import (
load_model,
add_model_args,
get_conversation_template,
)
from judgelm.model.model_chatglm import generate_stream_chatglm
from judgelm.model.model_falcon import generate_stream_falcon
from judgelm.model.model_codet5p import generate_stream_codet5p
from judgelm.modules.gptq import GptqConfig
from judgelm.serve.inference import generate_stream
from judgelm.serve.model_worker import ModelWorker, worker_id, logger
from fastchat.utils import build_logger, pretty_print_semaphore, get_context_length | 7,302 | # Note: for all the calls below, we make a hard assumption that the caller
# includes the model name in the payload, otherwise we can't figure out which
# underlying sub-worker to call.
@app.post("/worker_generate_stream")
async def api_generate_stream(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
generator = worker.generate_stream_gate(params)
background_tasks = create_background_tasks()
return StreamingResponse(generator, background=background_tasks)
@app.post("/worker_generate")
async def api_generate(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
output = worker.generate_gate(params)
release_worker_semaphore()
return JSONResponse(output)
@app.post("/worker_get_embeddings")
async def api_get_embeddings(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
embedding = worker.get_embeddings(params)
background_tasks = create_background_tasks()
return JSONResponse(content=embedding, background=background_tasks)
@app.post("/worker_get_status")
async def api_get_status(request: Request):
return {
"model_names": [m for w in workers for m in w.model_names],
"speed": 1,
"queue_length": sum([w.get_queue_length() for w in workers]),
}
@app.post("/count_token")
async def api_count_token(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return worker.count_token(params)
@app.post("/worker_get_conv_template")
async def api_get_conv(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return worker.get_conv_template()
@app.post("/model_details")
async def api_model_details(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return {"context_length": worker.context_len}
if __name__ == "__main__":
# Note: Ensure we resolve arg conflicts. We let `add_model_args` add MOST
# of the model args but we'll override one to have an append action that
# supports multiple values.
parser = argparse.ArgumentParser(conflict_handler="resolve")
parser.add_argument("--host", type=str, default="localhost")
parser.add_argument("--port", type=int, default=21002)
parser.add_argument("--worker-address", type=str, default="http://localhost:21002")
parser.add_argument(
"--controller-address", type=str, default="http://localhost:21001"
)
add_model_args(parser)
# Override the model path to be repeated and align it with model names.
parser.add_argument(
"--model-path",
type=str,
default=[],
action="append",
help="One or more paths to model weights to load. This can be a local folder or a Hugging Face repo ID.",
)
parser.add_argument(
"--model-names",
type=lambda s: s.split(","),
action="append",
help="One or more model names. Values must be aligned with `--model-path` values.",
)
parser.add_argument("--limit-worker-concurrency", type=int, default=5)
parser.add_argument("--stream-interval", type=int, default=2)
parser.add_argument("--no-register", action="store_true")
args = parser.parse_args()
logger.info(f"args: {args}")
if args.gpus:
if len(args.gpus.split(",")) < args.num_gpus:
raise ValueError(
f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!"
)
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
gptq_config = GptqConfig(
ckpt=args.gptq_ckpt or args.model_path,
wbits=args.gptq_wbits,
groupsize=args.gptq_groupsize,
act_order=args.gptq_act_order,
)
if args.model_names is None:
args.model_names = [[x.split("/")[-1]] for x in args.model_path]
# Launch all workers
workers = []
for model_path, model_names in zip(args.model_path, args.model_names):
w = ModelWorker(
args.controller_address,
args.worker_address,
| """
A multi-model worker that contains multiple sub-workers, one for each model. This
supports running a list of models on the same machine so that they can
(potentially) share the same background weights.

Each model can have one or more model names.

This multi-model worker assumes the models share some underlying weights and
thus reports the combined queue lengths for health checks.
We recommend using this with multiple Peft models (with `peft` in the name)
where all Peft models are trained on the exact same base model.
"""
try:
    from transformers import (
        AutoTokenizer,
        AutoModelForCausalLM,
        LlamaTokenizer,
        AutoModel,
    )
except ImportError:
    from transformers import (
        AutoTokenizer,
        AutoModelForCausalLM,
        LLaMATokenizer,
        AutoModel,
    )
# We store both the underlying workers and a mapping from their model names to
# the worker instance. This makes it easy to fetch the appropriate worker for
# each API call.
workers = []
worker_map = {}
app = FastAPI()
def release_worker_semaphore():
workers[0].semaphore.release()
def acquire_worker_semaphore():
if workers[0].semaphore is None:
# Share the same semaphore for all workers because
# all workers share the same GPU.
semaphore = asyncio.Semaphore(workers[0].limit_worker_concurrency)
for w in workers:
w.semaphore = semaphore
return workers[0].semaphore.acquire()
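The lazily created, shared semaphore above can be exercised in isolation; `FakeWorker` below is a hypothetical stand-in for `ModelWorker` with only the two fields this pattern touches:

```python
import asyncio

class FakeWorker:
    def __init__(self, limit_worker_concurrency=2):
        self.limit_worker_concurrency = limit_worker_concurrency
        self.semaphore = None

workers = [FakeWorker(), FakeWorker()]

def acquire_worker_semaphore():
    # One semaphore, created on first use and shared by every worker,
    # because all workers contend for the same GPU.
    if workers[0].semaphore is None:
        semaphore = asyncio.Semaphore(workers[0].limit_worker_concurrency)
        for w in workers:
            w.semaphore = semaphore
    return workers[0].semaphore.acquire()

async def main():
    await acquire_worker_semaphore()
    try:
        return workers[0].semaphore is workers[1].semaphore
    finally:
        workers[0].semaphore.release()

shared = asyncio.run(main())
print(shared)  # True
```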
def create_background_tasks():
background_tasks = BackgroundTasks()
background_tasks.add_task(release_worker_semaphore)
return background_tasks
# Note: for all the calls below, we make a hard assumption that the caller
# includes the model name in the payload, otherwise we can't figure out which
# underlying sub-worker to call.
@app.post("/worker_generate_stream")
async def api_generate_stream(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
generator = worker.generate_stream_gate(params)
background_tasks = create_background_tasks()
return StreamingResponse(generator, background=background_tasks)
@app.post("/worker_generate")
async def api_generate(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
output = worker.generate_gate(params)
release_worker_semaphore()
return JSONResponse(output)
@app.post("/worker_get_embeddings")
async def api_get_embeddings(request: Request):
params = await request.json()
await acquire_worker_semaphore()
worker = worker_map[params["model"]]
embedding = worker.get_embeddings(params)
background_tasks = create_background_tasks()
return JSONResponse(content=embedding, background=background_tasks)
@app.post("/worker_get_status")
async def api_get_status(request: Request):
return {
"model_names": [m for w in workers for m in w.model_names],
"speed": 1,
"queue_length": sum([w.get_queue_length() for w in workers]),
}
@app.post("/count_token")
async def api_count_token(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return worker.count_token(params)
@app.post("/worker_get_conv_template")
async def api_get_conv(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return worker.get_conv_template()
@app.post("/model_details")
async def api_model_details(request: Request):
params = await request.json()
worker = worker_map[params["model"]]
return {"context_length": worker.context_len}
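All of the endpoints above dispatch on the `model` field of the JSON payload via `worker_map`. A minimal sketch of that mapping; `FakeWorker` and the model names here are hypothetical:

```python
class FakeWorker:
    def __init__(self, model_names, context_len):
        self.model_names = model_names
        self.context_len = context_len

workers = [
    FakeWorker(["vicuna-7b", "vicuna"], 4096),
    FakeWorker(["judgelm-7b"], 2048),
]

# Every alias a worker serves points back at that worker instance.
worker_map = {name: w for w in workers for name in w.model_names}

params = {"model": "vicuna"}  # callers must include the model name
worker = worker_map[params["model"]]
print(worker.context_len)  # 4096
```

This is also why `/worker_get_status` reports the union of all `model_names`: the controller routes requests by name, not by worker.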
if __name__ == "__main__":
# Note: Ensure we resolve arg conflicts. We let `add_model_args` add MOST
# of the model args but we'll override one to have an append action that
# supports multiple values.
parser = argparse.ArgumentParser(conflict_handler="resolve")
parser.add_argument("--host", type=str, default="localhost")
parser.add_argument("--port", type=int, default=21002)
parser.add_argument("--worker-address", type=str, default="http://localhost:21002")
parser.add_argument(
"--controller-address", type=str, default="http://localhost:21001"
)
add_model_args(parser)
# Override the model path to be repeated and align it with model names.
parser.add_argument(
"--model-path",
type=str,
default=[],
action="append",
help="One or more paths to model weights to load. This can be a local folder or a Hugging Face repo ID.",
)
parser.add_argument(
"--model-names",
type=lambda s: s.split(","),
action="append",
help="One or more model names. Values must be aligned with `--model-path` values.",
)
parser.add_argument("--limit-worker-concurrency", type=int, default=5)
parser.add_argument("--stream-interval", type=int, default=2)
parser.add_argument("--no-register", action="store_true")
args = parser.parse_args()
logger.info(f"args: {args}")
if args.gpus:
if len(args.gpus.split(",")) < args.num_gpus:
raise ValueError(
f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!"
)
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
gptq_config = GptqConfig(
ckpt=args.gptq_ckpt or args.model_path,
wbits=args.gptq_wbits,
groupsize=args.gptq_groupsize,
act_order=args.gptq_act_order,
)
if args.model_names is None:
args.model_names = [[x.split("/")[-1]] for x in args.model_path]
# Launch all workers
workers = []
for model_path, model_names in zip(args.model_path, args.model_names):
w = ModelWorker(
args.controller_address,
args.worker_address, | worker_id, | 11 | 2023-10-26 19:41:07+00:00 | 12k |
EulerSearch/embedding_studio | embedding_studio/workers/fine_tuning/finetune_embedding.py | [
{
"identifier": "QueryRetriever",
"path": "embedding_studio/embeddings/data/clickstream/query_retriever.py",
"snippet": "class QueryRetriever(object):\n \"\"\"As we can't exactly predict a schema of storing queries:\n 1. As text exceptly in clickstream service\n 2. As ID of a record with a text... | import gc
import logging
import os
import tempfile
import traceback
import torch
from typing import Any, Dict, List, Optional
from hyperopt import Trials, fmin, hp, tpe
from embedding_studio.embeddings.data.clickstream.query_retriever import (
QueryRetriever,
)
from embedding_studio.embeddings.data.ranking_data import RankingData
from embedding_studio.embeddings.models.interface import (
EmbeddingsModelInterface,
)
from embedding_studio.workers.fine_tuning.experiments.experiments_tracker import (
ExperimentsManager,
)
from embedding_studio.workers.fine_tuning.experiments.finetuning_iteration import (
FineTuningIteration,
)
from embedding_studio.workers.fine_tuning.experiments.finetuning_params import (
FineTuningParams,
)
from embedding_studio.workers.fine_tuning.experiments.finetuning_settings import (
FineTuningSettings,
)
from embedding_studio.workers.fine_tuning.finetune_embedding_one_param import (
fine_tune_embedding_model_one_param,
) | 10,176 |
logger = logging.getLogger(__name__)
def _finetune_embedding_model_one_step(
initial_model_path: str,
settings: FineTuningSettings,
ranking_data: RankingData,
query_retriever: QueryRetriever,
|
logger = logging.getLogger(__name__)
def _finetune_embedding_model_one_step(
initial_model_path: str,
settings: FineTuningSettings,
ranking_data: RankingData,
query_retriever: QueryRetriever, | fine_tuning_params: FineTuningParams, | 5 | 2023-10-31 00:33:13+00:00 | 12k |
facebookresearch/minimax | src/minimax/envs/maze/maze_ood.py | [
{
"identifier": "DIR_TO_VEC",
"path": "src/minimax/envs/maze/common.py",
"snippet": "DIR_TO_VEC = jnp.array([\n\t# Pointing right (positive X)\n\t(1, 0), # right\n\t(0, 1), # down\n\t(-1, 0), # left\n\t(0, -1), # up\n], dtype=jnp.int8)"
},
{
"identifier": "OBJECT_TO_INDEX",
"path": "src/mini... | from typing import Tuple, Optional
from flax import struct
from minimax.envs.registration import register
from .common import (
DIR_TO_VEC,
OBJECT_TO_INDEX,
COLOR_TO_INDEX,
make_maze_map,
)
from .maze import (
Maze,
EnvParams,
EnvState,
Actions
)
import jax
import jax.numpy as jnp
import chex | 7,859 | (wall_map, visited_map, vstack, vstack_size), _ = jax.lax.scan(
_scan_step,
(wall_map, visited_map, vstack, vstack_size),
jnp.array(subkeys),
length=max_n_steps
)
# Randomize goal position
all_pos_idx = jnp.arange(height*width)
key, subkey = jax.random.split(key)
goal_mask = ~wall_map.flatten()
goal_pos_idx = jax.random.choice(subkey, all_pos_idx, p=goal_mask)
goal_pos = jnp.array([goal_pos_idx%width, goal_pos_idx//width])
# Randomize agent position
key, subkey = jax.random.split(key)
agent_mask = goal_mask.at[goal_pos_idx].set(False)
agent_pos_idx = jax.random.choice(subkey, all_pos_idx, p=agent_mask)
agent_pos = jnp.array([agent_pos_idx%width, agent_pos_idx//width], dtype=jnp.uint32)
# Randomize agent dir
key, subkey = jax.random.split(key)
agent_dir_idx = jax.random.choice(subkey, 4)
maze_map = make_maze_map(
self.params,
wall_map,
goal_pos,
agent_pos,
agent_dir_idx,
pad_obs=True)
state = EnvState(
agent_pos=agent_pos,
agent_dir=DIR_TO_VEC[agent_dir_idx],
agent_dir_idx=agent_dir_idx,
goal_pos=goal_pos,
wall_map=wall_map,
maze_map=maze_map,
time=0,
terminal=False,
)
return self.get_obs(state), state
class PerfectMazeMedium(PerfectMaze):
def __init__(self, *args, **kwargs):
super().__init__(height=19, width=19, *args, **kwargs)
class PerfectMazeExtraLarge(PerfectMaze):
def __init__(self, *args, **kwargs):
super().__init__(height=101, width=101, *args, **kwargs)
class Memory(MazeSingleton):
def __init__(
self,
height=17,
width=17,
agent_view_size=7,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
obs_agent_pos=False,
max_episode_steps=250,
singleton_seed=-1):
# Generate walls
wall_map = [
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"1 1 1 1 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 1 1 1 1 1 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 0 0 1 0 0 0 0",
"0 0 0 1 1 1 1 1 1 0 1 0 0 0 0",
"1 1 1 1 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0"
]
super().__init__(
wall_map=wall_map,
goal_pos=(9,5),
agent_pos=(0,7),
agent_dir_idx=0,
see_agent=see_agent,
normalize_obs=normalize_obs,
obs_agent_pos=obs_agent_pos,
max_episode_steps=max_episode_steps
)
self.top_pos = jnp.array([9,5], dtype=jnp.uint32)
self.bottom_pos = jnp.array([9,9], dtype=jnp.uint32)
def reset_env(
self,
key: chex.PRNGKey,
) -> Tuple[chex.Array, EnvState]:
params = self.params
height, width = params.height, params.width
agent_pos = jnp.array([0,7], dtype=jnp.uint32)
agent_dir_idx = 0
# Randomly generate a memory location
is_top_goal = jax.random.randint(key, minval=0, maxval=2, shape=(1,), dtype=jnp.uint8)
clue_pos = jnp.array((0,6), dtype=jnp.uint32)
self.goal_pos = is_top_goal*self.top_pos + (1-is_top_goal)*self.bottom_pos
self.distractor_pos = is_top_goal*self.bottom_pos + (1-is_top_goal)*self.top_pos
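The goal/distractor selection above avoids Python branching (which would not trace under `jax.jit`) by blending the two candidates arithmetically. The same trick in plain Python, with the coordinates from this class:

```python
top_pos = (9, 5)
bottom_pos = (9, 9)

for is_top_goal in (0, 1):
    goal = tuple(is_top_goal * t + (1 - is_top_goal) * b
                 for t, b in zip(top_pos, bottom_pos))
    distractor = tuple(is_top_goal * b + (1 - is_top_goal) * t
                       for t, b in zip(top_pos, bottom_pos))
    print(is_top_goal, goal, distractor)
# 0 (9, 9) (9, 5)
# 1 (9, 5) (9, 9)
```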
| """
Copyright (c) Meta Platforms, Inc. and affiliates.
All rights reserved.
This source code is licensed under the license found in the
LICENSE file in the root directory of this source tree.
"""
# ======== Singleton mazes ========
class MazeSingleton(Maze):
def __init__(
self,
height=15,
width=15,
wall_map=None,
goal_pos=None,
agent_pos=None,
agent_dir_idx=None,
agent_view_size=5,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
obs_agent_pos=False,
max_episode_steps=None,
singleton_seed=-1,
):
super().__init__(
height=height,
width=width,
agent_view_size=agent_view_size,
see_through_walls=see_through_walls,
see_agent=see_agent,
normalize_obs=normalize_obs,
obs_agent_pos=obs_agent_pos,
max_episode_steps=max_episode_steps,
singleton_seed=singleton_seed
)
if wall_map is None:
self.wall_map = jnp.zeros((height,width), dtype=jnp.bool_)
else:
self.wall_map = \
jnp.array(
[[int(x) for x in row.split()]
for row in wall_map], dtype=jnp.bool_)
height, width = self.wall_map.shape
if max_episode_steps is None:
max_episode_steps = 2*(height+2)*(width+2) # Match original eval steps
self.goal_pos_choices = None
if goal_pos is None:
self.goal_pos = jnp.array([height, width]) - jnp.ones(2, dtype=jnp.uint32)
elif isinstance(goal_pos, (tuple, list)) \
and isinstance(goal_pos[0], (tuple, list)):
self.goal_pos_choices = jnp.array(goal_pos, dtype=jnp.uint32)
self.goal_pos = goal_pos[0]
else:
self.goal_pos = jnp.array(goal_pos, dtype=jnp.uint32)
if agent_pos is None:
self.agent_pos = jnp.zeros(2, dtype=jnp.uint32)
else:
self.agent_pos = jnp.array(agent_pos, dtype=jnp.uint32)
self.agent_dir_idx = agent_dir_idx
if self.agent_dir_idx is None:
self.agent_dir_idx = 0
self.params = EnvParams(
height=height,
width=width,
agent_view_size=agent_view_size,
see_through_walls=see_through_walls,
see_agent=see_agent,
normalize_obs=normalize_obs,
obs_agent_pos=obs_agent_pos,
max_episode_steps=max_episode_steps,
singleton_seed=-1,
)
self.maze_map = make_maze_map(
self.params,
self.wall_map,
self.goal_pos,
self.agent_pos,
self.agent_dir_idx,
pad_obs=True)
@property
def default_params(self) -> EnvParams:
# Default environment parameters
return EnvParams()
def reset_env(
self,
key: chex.PRNGKey,
) -> Tuple[chex.Array, EnvState]:
if self.agent_dir_idx is None:
key, subkey = jax.random.split(key)
agent_dir_idx = jax.random.choice(subkey, 4)
else:
agent_dir_idx = self.agent_dir_idx
if self.goal_pos_choices is not None:
key, subkey = jax.random.split(key)
goal_pos = jax.random.choice(subkey, self.goal_pos_choices)
maze_map = make_maze_map(
self.params,
self.wall_map,
goal_pos,
self.agent_pos,
agent_dir_idx,
pad_obs=True)
else:
goal_pos = self.goal_pos
maze_map = self.maze_map
state = EnvState(
agent_pos=self.agent_pos,
agent_dir=DIR_TO_VEC[agent_dir_idx],
agent_dir_idx=agent_dir_idx,
goal_pos=goal_pos,
wall_map=self.wall_map,
maze_map=maze_map,
time=0,
terminal=False,
)
return self.get_obs(state), state
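`MazeSingleton.__init__` parses wall maps given as strings of space-separated 0/1 digits into a boolean grid. The comprehension in isolation, using plain Python lists in place of `jnp.array` (the 3x2 map is made up):

```python
wall_map = [
    "0 1 0",
    "1 0 1",
]
grid = [[bool(int(x)) for x in row.split()] for row in wall_map]
height, width = len(grid), len(grid[0])
print(height, width)  # 2 3
print(grid[0])        # [False, True, False]
```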
# ======== Specific mazes ========
class SixteenRooms(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 1 0 0 1 0 0 1 0 0 0",
"0 0 0 0 0 0 0 0 0 1 0 0 0",
"0 0 0 1 0 0 1 0 0 0 0 0 0",
"1 0 1 1 1 0 1 1 0 1 1 1 0",
"0 0 0 1 0 0 0 0 0 0 0 0 0",
"0 0 0 0 0 0 1 0 0 1 0 0 0",
"1 1 0 1 0 1 1 0 1 1 1 0 1",
"0 0 0 1 0 0 0 0 0 1 0 0 0",
"0 0 0 1 0 0 1 0 0 0 0 0 0",
"0 1 1 1 1 0 1 1 0 1 0 1 1",
"0 0 0 1 0 0 1 0 0 1 0 0 0",
"0 0 0 0 0 0 1 0 0 0 0 0 0",
"0 0 0 1 0 0 0 0 0 1 0 0 0"
]
goal_pos = (11,11)
agent_pos = (1,1)
agent_dir_idx = 0
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class SixteenRooms2(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 1 0 0 0 0 0 1 0 0 0",
"0 0 0 0 0 0 1 0 0 1 0 0 0",
"0 0 0 1 0 0 1 0 0 1 0 0 0",
"1 1 1 1 0 1 1 0 1 1 1 0 1",
"0 0 0 1 0 0 1 0 0 0 0 0 0",
"0 0 0 0 0 0 1 0 0 1 0 0 0",
"1 0 1 1 1 1 1 0 1 1 1 1 1",
"0 0 0 1 0 0 1 0 0 1 0 0 0",
"0 0 0 1 0 0 0 0 0 0 0 0 0",
"1 1 0 1 1 0 1 1 0 1 1 1 1",
"0 0 0 1 0 0 1 0 0 1 0 0 0",
"0 0 0 0 0 0 1 0 0 0 0 0 0",
"0 0 0 1 0 0 1 0 0 1 0 0 0"
]
goal_pos = (11,11)
agent_pos = (1,1)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class Labyrinth(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 0 0 0 0 0 0 0 0 0 0",
"0 1 1 1 1 1 1 1 1 1 1 1 0",
"0 1 0 0 0 0 0 0 0 0 0 1 0",
"0 1 0 1 1 1 1 1 1 1 0 1 0",
"0 1 0 1 0 0 0 0 0 1 0 1 0",
"0 1 0 1 0 1 1 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 0 0 1 0 0 0 1 0 1 0",
"0 1 1 1 1 1 1 1 1 1 0 1 0",
"0 0 0 0 0 1 0 0 0 0 0 1 0",
"1 1 1 1 0 1 0 1 1 1 1 1 0",
"0 0 0 0 0 1 0 0 0 0 0 0 0"
]
goal_pos = (6,6)
agent_pos = (0,12)
agent_dir_idx = 0
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class LabyrinthFlipped(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
'0 0 0 0 0 0 0 0 0 0 0 0 0',
'0 1 1 1 1 1 1 1 1 1 1 1 0',
'0 1 0 0 0 0 0 0 0 0 0 1 0',
'0 1 0 1 1 1 1 1 1 1 0 1 0',
'0 1 0 1 0 0 0 0 0 1 0 1 0',
'0 1 0 1 0 1 1 1 0 1 0 1 0',
'0 1 0 1 0 1 0 1 0 1 0 1 0',
'0 1 0 1 0 1 0 1 0 1 0 1 0',
'0 1 0 1 0 0 0 1 0 0 0 1 0',
'0 1 0 1 1 1 1 1 1 1 1 1 0',
'0 1 0 0 0 0 0 1 0 0 0 0 0',
'0 1 1 1 1 1 0 1 0 1 1 1 1',
'0 0 0 0 0 0 0 1 0 0 0 0 0'
]
goal_pos = (6,6)
agent_pos = (12,12)
agent_dir_idx = 2
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class Labyrinth2(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 1 0 0 0 0 0 0 0 0 0 0 0",
"0 1 0 1 1 1 1 1 1 1 1 1 0",
"0 1 0 1 0 0 0 0 0 0 0 1 0",
"0 1 0 1 0 1 1 1 1 1 0 1 0",
"0 1 0 1 0 1 0 0 0 1 0 1 0",
"0 0 0 1 0 1 0 1 0 1 0 1 0",
"1 1 1 1 0 1 0 1 0 1 0 1 0",
"0 0 0 1 0 1 1 1 0 1 0 1 0",
"0 1 0 1 0 0 0 0 0 1 0 1 0",
"0 1 0 1 1 1 1 1 1 1 0 1 0",
"0 1 0 0 0 0 0 0 0 0 0 1 0",
"0 1 1 1 1 1 1 1 1 1 1 1 0",
"0 0 0 0 0 0 0 0 0 0 0 0 0"
]
goal_pos = (6,6)
agent_pos = (0,0)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class StandardMaze(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 0 0 1 0 0 0 0 1 0 0",
"0 1 1 1 0 1 1 1 1 0 1 1 0",
"0 1 0 0 0 0 0 0 0 0 0 0 0",
"0 1 1 1 1 1 1 1 1 0 1 1 1",
"0 0 0 0 0 0 0 0 1 0 0 0 0",
"1 1 1 1 1 1 0 1 1 1 1 1 0",
"0 0 0 0 1 0 0 1 0 0 0 0 0",
"0 1 1 0 0 0 1 1 0 1 1 1 1",
"0 0 1 0 1 0 0 1 0 0 0 1 0",
"1 0 1 0 1 1 0 1 1 1 0 1 0",
"1 0 1 0 0 1 0 0 0 1 0 0 0",
"1 0 1 1 0 1 1 1 0 1 1 1 0",
"0 0 0 1 0 0 0 1 0 1 0 0 0"
]
goal_pos = (6,12)
agent_pos = (6,0)
agent_dir_idx = 0
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class StandardMaze2(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 1 0 1 0 0 0 0 1 0 0",
"0 1 0 1 0 1 1 1 1 0 0 0 1",
"0 1 0 0 0 0 0 0 0 0 1 0 0",
"0 1 1 1 1 1 1 1 1 0 1 1 1",
"0 0 0 1 0 0 1 0 1 0 1 0 0",
"1 1 0 1 0 1 1 0 1 0 1 0 0",
"0 1 0 1 0 0 0 0 1 0 1 1 0",
"0 1 0 1 1 0 1 1 1 0 0 1 0",
"0 1 0 0 1 0 0 1 1 1 0 1 0",
"0 1 1 0 1 1 0 1 0 1 0 1 0",
"0 1 0 0 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 0 0 1 0 0 0 1 0 0 0 0 0"
]
goal_pos = (12,4)
agent_pos = (0,6)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class StandardMaze3(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 0 1 0 1 0 0 0 0 0 0",
"0 1 1 1 1 0 1 0 1 1 1 1 0",
"0 1 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 1 1 1 1 0 1 0 1 0 1",
"1 1 0 1 0 0 0 0 1 0 1 0 0",
"0 0 0 1 0 1 1 0 1 0 1 1 0",
"0 1 0 1 0 1 0 0 1 0 0 1 0",
"0 1 0 1 0 1 0 1 1 1 0 1 1",
"0 1 0 0 0 1 0 1 0 1 0 0 0",
"0 1 1 1 0 1 0 1 0 1 1 1 0",
"0 1 0 0 0 1 0 1 0 0 0 1 0",
"0 1 0 1 1 1 0 1 0 1 0 1 0",
"0 1 0 0 0 1 0 0 0 1 0 0 0"
]
goal_pos = (12,6)
agent_pos = (3,0)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class SmallCorridor(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 0 0 0 0 0 0 0 0 0 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 1 1 1 1 1 1 1 1 1 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 0 0 0 0 0 0 0 0 0 0 0 0"
]
goal_pos = [
(2,5),(4,5),(6,5),(8,5),(10,5),
(2,7),(4,7),(6,7),(8,7),(10,7),
]
agent_pos = (0,6)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class LargeCorridor(MazeSingleton):
def __init__(
self,
see_agent=False,
normalize_obs=False):
wall_map = [
"0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0",
"0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0",
]
goal_pos = [
(2,8),(4,8),(6,8),(8,8),(10,8),(12,8),(14,8),(16,8),
(2,10),(4,10),(6,10),(8,10),(10,10),(12,10),(14,10),(16,10)
]
agent_pos = (0,9)
agent_dir_idx = None
super().__init__(
wall_map=wall_map,
goal_pos=goal_pos,
agent_pos=agent_pos,
agent_dir_idx=agent_dir_idx,
see_agent=see_agent,
normalize_obs=normalize_obs
)
class FourRooms(Maze):
def __init__(
self,
height=17,
width=17,
agent_view_size=5,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
max_episode_steps=250,
singleton_seed=-1):
super().__init__(
height=height,
width=width,
agent_view_size=agent_view_size,
see_through_walls=see_through_walls,
see_agent=see_agent,
normalize_obs=normalize_obs,
max_episode_steps=max_episode_steps,
singleton_seed=singleton_seed
)
assert height % 2 == 1 and width % 2 == 1, \
'Grid height and width must be odd'
wall_map = jnp.zeros((height, width), dtype=jnp.bool_)
wall_map = wall_map.at[height//2, :].set(True)
wall_map = wall_map.at[:, width//2].set(True)
self.wall_map = wall_map
self.room_h = height//2
self.room_w = width//2
self.all_pos_idxs = jnp.arange(height*width)
self.goal_pos_mask = (~wall_map).flatten()
self.agent_pos_mask = self.goal_pos_mask
def reset_env(
self,
key: chex.PRNGKey
) -> Tuple[chex.Array, EnvState]:
# Randomize door positions
params = self.params
key, x_rng, y_rng = jax.random.split(key,3)
x_door_idxs = jax.random.randint(x_rng, (2,), 0, self.room_w) \
+ jnp.array([0, self.room_w+1], dtype=jnp.uint32)
y_door_idxs = jax.random.randint(y_rng, (2,), 0, self.room_h) \
+ jnp.array([0, self.room_h+1], dtype=jnp.uint32)
wall_map = self.wall_map.at[self.room_h, x_door_idxs].set(False)
wall_map = wall_map.at[y_door_idxs,self.room_w].set(False)
# Randomize goal pos
key, subkey = jax.random.split(key)
goal_pos_idx = jax.random.choice(subkey, self.all_pos_idxs, shape=(), p=self.goal_pos_mask)
goal_pos = jnp.array([goal_pos_idx%params.width, goal_pos_idx//params.width], dtype=jnp.uint32)
# Randomize agent pos
key, subkey = jax.random.split(key)
agent_pos_mask = self.agent_pos_mask.at[goal_pos_idx].set(False)
		agent_pos_idx = jax.random.choice(subkey, self.all_pos_idxs, shape=(), p=agent_pos_mask)
agent_pos = jnp.array([agent_pos_idx%params.width, agent_pos_idx//params.width], dtype=jnp.uint32)
key, subkey = jax.random.split(key)
agent_dir_idx = jax.random.choice(subkey, 4)
maze_map = make_maze_map(
self.params,
wall_map,
goal_pos,
agent_pos,
agent_dir_idx,
pad_obs=True)
state = EnvState(
agent_pos=agent_pos,
agent_dir=DIR_TO_VEC[agent_dir_idx],
agent_dir_idx=agent_dir_idx,
goal_pos=goal_pos,
wall_map=wall_map,
maze_map=maze_map,
time=0,
terminal=False,
)
return self.get_obs(state), state
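`reset_env` above samples goal and agent positions as flat row-major indices and converts them back with `idx % width` / `idx // width`. The round trip in isolation (the width matches the class default; the index is arbitrary):

```python
width = 17
idx = 40  # arbitrary flat index into a row-major grid
x, y = idx % width, idx // width
print(x, y)  # 6 2
# Converting back recovers the flat index.
assert y * width + x == idx
```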
class Crossing(Maze):
def __init__(
self,
height=9,
width=9,
n_crossings=5,
agent_view_size=5,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
max_episode_steps=250,
singleton_seed=-1):
self.n_crossings = n_crossings
max_episode_steps = 4*(height+2)*(width+2)
super().__init__(
height=height,
width=width,
agent_view_size=agent_view_size,
see_through_walls=see_through_walls,
see_agent=see_agent,
normalize_obs=normalize_obs,
max_episode_steps=max_episode_steps,
singleton_seed=singleton_seed
)
def reset_env(
self,
key: chex.PRNGKey
) -> Tuple[chex.Array, EnvState]:
params = self.params
height, width = params.height, params.width
goal_pos = jnp.array([width-1, height-1])
agent_pos = jnp.array([0,0], dtype=jnp.uint32)
agent_dir_idx = 0
# Generate walls
wall_map = jnp.zeros((height, width), dtype=jnp.bool_)
row_y_choices = jnp.arange(1,height-1,2)
col_x_choices = jnp.arange(1,width-1,2)
rng, subrng = jax.random.split(key)
dirs = jax.random.permutation(
subrng,
jnp.concatenate(
(jnp.zeros(len(row_y_choices)),
jnp.ones(len(col_x_choices)))
)
)[:self.n_crossings]
n_v = sum(dirs.astype(jnp.uint32))
n_h = len(dirs) - n_v
rng, row_rng, col_rng = jax.random.split(rng, 3)
row_ys_mask = jax.random.permutation(row_rng, (jnp.arange(len(row_y_choices)) < n_v).repeat(2))
if height % 2 == 0:
row_ys_mask = jnp.concatenate((row_ys_mask, jnp.zeros(2)))
else:
row_ys_mask = jnp.concatenate((row_ys_mask, jnp.zeros(1)))
row_ys_mask = jnp.logical_and(
jnp.zeros(height, dtype=jnp.bool_).at[row_y_choices].set(True),
row_ys_mask
)
col_xs_mask = jax.random.permutation(col_rng, (jnp.arange(len(col_x_choices)) < n_h).repeat(2))
if width % 2 == 0:
col_xs_mask = jnp.concatenate((col_xs_mask, jnp.zeros(2)))
else:
col_xs_mask = jnp.concatenate((col_xs_mask, jnp.zeros(1)))
col_xs_mask = jnp.logical_and(
jnp.zeros(width, dtype=jnp.bool_).at[col_x_choices].set(True),
col_xs_mask
)
wall_map = jnp.logical_or(
wall_map,
jnp.tile(jnp.expand_dims(row_ys_mask,-1), (1,width))
)
wall_map = jnp.logical_or(
wall_map,
jnp.tile(jnp.expand_dims(col_xs_mask,0), (height,1))
)
# Generate wall openings
def _scan_step(carry, rng):
wall_map, pos, passed_wall, last_dir, last_dir_idx = carry
dir_idx = jax.random.randint(rng,(),0,2)
go_dir = (~passed_wall)*DIR_TO_VEC[dir_idx] + passed_wall*last_dir
next_pos = pos + go_dir
# If next pos is the right border, force direction to be down
collide = jnp.logical_or(
(next_pos[0] >= width),
(next_pos[1] >= height)
)
go_dir = collide*DIR_TO_VEC[(dir_idx+1)%2] + (~collide)*go_dir
			dir_idx = collide*((dir_idx+1)%2) + (~collide)*dir_idx
next_pos = collide*(pos + go_dir) + (~collide)*next_pos
last_dir = go_dir
last_dir_idx = dir_idx
pos = next_pos
passed_wall = wall_map[pos[1],pos[0]]
wall_map = wall_map.at[pos[1], pos[0]].set(False)
return (wall_map, pos.astype(jnp.uint32), passed_wall, last_dir, last_dir_idx), None
n_steps_to_goal = width + height - 2
rng, *subrngs = jax.random.split(rng, n_steps_to_goal+1)
pos = agent_pos
passed_wall = jnp.array(False)
last_dir = DIR_TO_VEC[0]
(wall_map, pos, passed_wall, last_dir, last_dir_idx), _ = jax.lax.scan(
_scan_step,
(wall_map, pos, passed_wall, last_dir, 0),
jnp.array(subrngs),
length=n_steps_to_goal
)
maze_map = make_maze_map(
self.params,
wall_map,
goal_pos,
agent_pos,
agent_dir_idx,
pad_obs=True)
state = EnvState(
agent_pos=agent_pos,
agent_dir=DIR_TO_VEC[agent_dir_idx],
agent_dir_idx=agent_dir_idx,
goal_pos=goal_pos,
wall_map=wall_map,
maze_map=maze_map,
time=0,
terminal=False,
)
return self.get_obs(state), state
NEIGHBOR_WALL_OFFSETS = jnp.array([
[1,0], # right
[0,1], # bottom
[-1,0], # left
[0,-1], # top
[0,0] # self
], dtype=jnp.int32)
class PerfectMaze(Maze):
def __init__(
self,
height=13,
width=13,
agent_view_size=5,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
max_episode_steps=250,
singleton_seed=-1):
assert height % 2 == 1 and width % 2 == 1, \
'Maze dimensions must be odd.'
max_episode_steps = 2*(width+2)*(height+2)
super().__init__(
height=height,
width=width,
agent_view_size=agent_view_size,
see_through_walls=see_through_walls,
see_agent=see_agent,
normalize_obs=normalize_obs,
max_episode_steps=max_episode_steps,
singleton_seed=singleton_seed
)
def reset_env(
self,
key: chex.PRNGKey
) -> Tuple[chex.Array, EnvState]:
"""
Generate a perfect maze using an iterative depth-first search (recursive backtracker).
"""
params = self.params
height, width = self.params.height, self.params.width
n_tiles = height*width
# Track maze wall map
wall_map = jnp.ones((height, width), dtype=jnp.bool_)
# Track visited, walkable tiles
_h = height//2+1
_w = width//2+1
visited_map = jnp.zeros((_h, _w), dtype=jnp.bool_)
vstack = jnp.zeros((_h*_w, 2), dtype=jnp.uint32)
vstack_size = 0
# Get initial start tile in walkable index
key, subkey = jax.random.split(key)
start_pos_x = jax.random.randint(subkey, (), 0, _w)
start_pos_y = jax.random.randint(subkey, (), 0, _h)
start_pos = jnp.array([start_pos_x,start_pos_y], dtype=jnp.uint32)
# Set initial start tile as visited
visited_map = visited_map.at[
start_pos[1],start_pos[0]
].set(True)
wall_map = wall_map.at[
2*start_pos[1],2*start_pos[0]
].set(False)
vstack = vstack.at[vstack_size:vstack_size+2].set(start_pos)
vstack_size += 2
def _scan_step(carry, key):
# Choose last visited tile and move to a neighbor
wall_map, visited_map, vstack, vstack_size = carry
abs_pos = 2*vstack[vstack_size-1]
neighbor_wall_offsets = NEIGHBOR_WALL_OFFSETS.at[-1].set(
vstack[vstack_size-2] - vstack[vstack_size-1]
)
# Find a random unvisited neighbor
neighbor_pos = \
jnp.minimum(
jnp.maximum(
jnp.tile(abs_pos, (len(NEIGHBOR_WALL_OFFSETS),1)) \
+ 2*neighbor_wall_offsets, 0
),
jnp.array([width, height], dtype=jnp.uint32)
)
# Check for unvisited neighbors. Set self to unvisited if all visited.
neighbor_visited = visited_map.at[
neighbor_pos[:,1]//2, neighbor_pos[:,0]//2
].get()
n_neighbor_visited = neighbor_visited[:4].sum()
all_visited = n_neighbor_visited == 4
all_visited_post = n_neighbor_visited >= 3
neighbor_visited = neighbor_visited.at[-1].set(~all_visited)
# Choose a random unvisited neighbor and remove walls between current tile
# and this neighbor and at this neighbor.
rand_neighbor_idx = jax.random.choice(
key, jnp.arange(len(NEIGHBOR_WALL_OFFSETS)), p=~neighbor_visited)
rand_neighbor_pos = neighbor_pos[rand_neighbor_idx]
rand_neighbor_wall_pos = abs_pos + (~all_visited)*neighbor_wall_offsets[rand_neighbor_idx]
remove_wall_pos = jnp.concatenate(
(jnp.expand_dims(rand_neighbor_pos, 0),
jnp.expand_dims(rand_neighbor_wall_pos,0)), 0)
wall_map = wall_map.at[
remove_wall_pos[:,1], remove_wall_pos[:,0]
].set(False)
# Set selected neighbor as visited
visited_map = visited_map.at[
rand_neighbor_pos[1]//2,rand_neighbor_pos[0]//2
].set(True)
# Pop current tile from stack if all neighbors have been visited
vstack_size -= all_visited_post
# Push selected neighbor onto stack
vstack = vstack.at[vstack_size].set(
rand_neighbor_pos//2
)
vstack_size += ~all_visited
return (wall_map, visited_map, vstack, vstack_size), None
max_n_steps = 2*_w*_h
key, *subkeys = jax.random.split(key, max_n_steps+1)
(wall_map, visited_map, vstack, vstack_size), _ = jax.lax.scan(
_scan_step,
(wall_map, visited_map, vstack, vstack_size),
jnp.array(subkeys),
length=max_n_steps
)
# Randomize goal position
all_pos_idx = jnp.arange(height*width)
key, subkey = jax.random.split(key)
goal_mask = ~wall_map.flatten()
goal_pos_idx = jax.random.choice(subkey, all_pos_idx, p=goal_mask)
goal_pos = jnp.array([goal_pos_idx%width, goal_pos_idx//width])
# Randomize agent position
key, subkey = jax.random.split(key)
agent_mask = goal_mask.at[goal_pos_idx].set(False)
agent_pos_idx = jax.random.choice(subkey, all_pos_idx, p=agent_mask)
agent_pos = jnp.array([agent_pos_idx%width, agent_pos_idx//width], dtype=jnp.uint32)
# Randomize agent dir
key, subkey = jax.random.split(key)
agent_dir_idx = jax.random.choice(subkey, 4)
maze_map = make_maze_map(
self.params,
wall_map,
goal_pos,
agent_pos,
agent_dir_idx,
pad_obs=True)
state = EnvState(
agent_pos=agent_pos,
agent_dir=DIR_TO_VEC[agent_dir_idx],
agent_dir_idx=agent_dir_idx,
goal_pos=goal_pos,
wall_map=wall_map,
maze_map=maze_map,
time=0,
terminal=False,
)
return self.get_obs(state), state
class PerfectMazeMedium(PerfectMaze):
def __init__(self, *args, **kwargs):
super().__init__(height=19, width=19, *args, **kwargs)
class PerfectMazeExtraLarge(PerfectMaze):
def __init__(self, *args, **kwargs):
super().__init__(height=101, width=101, *args, **kwargs)
class Memory(MazeSingleton):
def __init__(
self,
height=17,
width=17,
agent_view_size=7,
see_through_walls=True,
see_agent=False,
normalize_obs=False,
obs_agent_pos=False,
max_episode_steps=250,
singleton_seed=-1):
# Generate walls
wall_map = [
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"1 1 1 1 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 1 1 1 1 1 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 0 0 1 0 0 0 0",
"0 0 0 1 1 1 1 1 1 0 1 0 0 0 0",
"1 1 1 1 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0",
"0 0 0 0 0 0 0 0 1 0 1 0 0 0 0"
]
super().__init__(
wall_map=wall_map,
goal_pos=(9,5),
agent_pos=(0,7),
agent_dir_idx=0,
see_agent=see_agent,
normalize_obs=normalize_obs,
obs_agent_pos=obs_agent_pos,
max_episode_steps=max_episode_steps
)
self.top_pos = jnp.array([9,5], dtype=jnp.uint32)
self.bottom_pos = jnp.array([9,9], dtype=jnp.uint32)
def reset_env(
self,
key: chex.PRNGKey,
) -> Tuple[chex.Array, EnvState]:
params = self.params
height, width = params.height, params.width
agent_pos = jnp.array([0,7], dtype=jnp.uint32)
agent_dir_idx = 0
# Randomly generate a memory location
is_top_goal = jax.random.randint(key, minval=0, maxval=2, shape=(1,), dtype=jnp.uint8)
clue_pos = jnp.array((0,6), dtype=jnp.uint32)
self.goal_pos = is_top_goal*self.top_pos + (1-is_top_goal)*self.bottom_pos
self.distractor_pos = is_top_goal*self.bottom_pos + (1-is_top_goal)*self.top_pos
| goal_color = is_top_goal*COLOR_TO_INDEX['red'] + (1-is_top_goal)*COLOR_TO_INDEX['green'] | 2 | 2023-10-28 12:12:01+00:00 | 12k |
innnky/ar-vits | s2_train.py | [
{
"identifier": "commons",
"path": "module/commons.py",
"snippet": "def init_weights(m, mean=0.0, std=0.01):\ndef get_padding(kernel_size, dilation=1):\ndef convert_pad_shape(pad_shape):\ndef intersperse(lst, item):\ndef kl_divergence(m_p, logs_p, m_q, logs_q):\ndef rand_gumbel(shape):\ndef rand_gumbel_... | import os
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
import logging
import utils
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.cuda.amp import autocast, GradScaler
from tqdm import tqdm
from module import commons
from module.data_utils import (
TextAudioSpeakerLoader,
TextAudioSpeakerCollate,
DistributedBucketSampler
)
from module.models import (
SynthesizerTrn,
MultiPeriodDiscriminator,
)
from module.losses import (
generator_loss,
discriminator_loss,
feature_loss,
kl_loss
)
from module.mel_processing import mel_spectrogram_torch, spec_to_mel_torch | 8,211 | global global_step
if rank == 0:
logger = utils.get_logger(hps.s2_ckpt_dir)
logger.info(hps)
utils.check_git_hash(hps.s2_ckpt_dir)
writer = SummaryWriter(log_dir=hps.s2_ckpt_dir)
writer_eval = SummaryWriter(log_dir=os.path.join(hps.s2_ckpt_dir, "eval"))
dist.init_process_group(backend='gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus,
rank=rank)
torch.manual_seed(hps.train.seed)
torch.cuda.set_device(rank)
train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
train_sampler = DistributedBucketSampler(
train_dataset,
hps.train.batch_size,
[32, 300, 400, 500, 600, 700, 800, 900, 1000],
num_replicas=n_gpus,
rank=rank,
shuffle=True)
collate_fn = TextAudioSpeakerCollate()
train_loader = DataLoader(train_dataset, num_workers=6, shuffle=False, pin_memory=True,
collate_fn=collate_fn, batch_sampler=train_sampler, persistent_workers=True)
if rank == 0:
eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data, val=True)
eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
batch_size=1, pin_memory=True,
drop_last=False, collate_fn=collate_fn)
net_g = SynthesizerTrn(
hps.data.filter_length // 2 + 1,
hps.train.segment_size // hps.data.hop_length,
n_speakers=hps.data.n_speakers,
**hps.model).cuda(rank)
net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
for name, param in net_g.named_parameters():
if not param.requires_grad:
print(name,"not requires_grad")
optim_g = torch.optim.AdamW(
filter(lambda p: p.requires_grad, net_g.parameters()),
hps.train.learning_rate,
betas=hps.train.betas,
eps=hps.train.eps)
optim_d = torch.optim.AdamW(
net_d.parameters(),
hps.train.learning_rate,
betas=hps.train.betas,
eps=hps.train.eps)
net_g = DDP(net_g, device_ids=[rank])
net_d = DDP(net_d, device_ids=[rank])
pretrain_dir = hps.pretrain
if pretrain_dir is None:
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.s2_ckpt_dir, "G_*.pth"), net_g,
optim_g, False)
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.s2_ckpt_dir, "D_*.pth"), net_d,
optim_d, False)
epoch_str = max(epoch_str, 1)
global_step = (epoch_str - 1) * len(train_loader)
else:
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g,
optim_g, True)
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d,
optim_d, True)
epoch_str = 1
global_step = 0
if hps.resume_step is not None:
global_step = hps.resume_step
scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
scaler = GradScaler(enabled=hps.train.fp16_run)
for epoch in range(epoch_str, hps.train.epochs + 1):
if rank == 0:
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
[train_loader, eval_loader], logger, [writer, writer_eval])
else:
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
[train_loader, None], None, None)
scheduler_g.step()
scheduler_d.step()
def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
net_g, net_d = nets
optim_g, optim_d = optims
scheduler_g, scheduler_d = schedulers
train_loader, eval_loader = loaders
if writers is not None:
writer, writer_eval = writers
train_loader.batch_sampler.set_epoch(epoch)
global global_step
net_g.train()
net_d.train()
for batch_idx, (ssl, ssl_lengths, spec, spec_lengths, y, y_lengths, text, text_lengths) in tqdm(enumerate(train_loader)):
spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
ssl = ssl.cuda(rank, non_blocking=True)
ssl_lengths = ssl_lengths.cuda(rank, non_blocking=True)
text, text_lengths = text.cuda(rank, non_blocking=True), text_lengths.cuda(rank, non_blocking=True)
with autocast(enabled=hps.train.fp16_run):
y_hat, kl_ssl, ids_slice, x_mask, z_mask, \
(z, z_p, m_p, logs_p, m_q, logs_q), stats_ssl = net_g(ssl, spec, spec_lengths, text, text_lengths)
mel = spec_to_mel_torch(
spec,
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.mel_fmin,
hps.data.mel_fmax)
| logging.getLogger("matplotlib").setLevel(logging.INFO)
logging.getLogger("h5py").setLevel(logging.INFO)
logging.getLogger("numba").setLevel(logging.INFO)
torch.backends.cudnn.benchmark = True
global_step = 0
def main():
"""Assume Single Node Multi GPUs Training Only"""
assert torch.cuda.is_available(), "CPU training is not allowed."
n_gpus = torch.cuda.device_count()
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '8000'
hps = utils.get_hparams(stage=2)
mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
def run(rank, n_gpus, hps):
global global_step
if rank == 0:
logger = utils.get_logger(hps.s2_ckpt_dir)
logger.info(hps)
utils.check_git_hash(hps.s2_ckpt_dir)
writer = SummaryWriter(log_dir=hps.s2_ckpt_dir)
writer_eval = SummaryWriter(log_dir=os.path.join(hps.s2_ckpt_dir, "eval"))
dist.init_process_group(backend='gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus,
rank=rank)
torch.manual_seed(hps.train.seed)
torch.cuda.set_device(rank)
train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
train_sampler = DistributedBucketSampler(
train_dataset,
hps.train.batch_size,
[32, 300, 400, 500, 600, 700, 800, 900, 1000],
num_replicas=n_gpus,
rank=rank,
shuffle=True)
collate_fn = TextAudioSpeakerCollate()
train_loader = DataLoader(train_dataset, num_workers=6, shuffle=False, pin_memory=True,
collate_fn=collate_fn, batch_sampler=train_sampler, persistent_workers=True)
if rank == 0:
eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data, val=True)
eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
batch_size=1, pin_memory=True,
drop_last=False, collate_fn=collate_fn)
net_g = SynthesizerTrn(
hps.data.filter_length // 2 + 1,
hps.train.segment_size // hps.data.hop_length,
n_speakers=hps.data.n_speakers,
**hps.model).cuda(rank)
net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
for name, param in net_g.named_parameters():
if not param.requires_grad:
print(name,"not requires_grad")
optim_g = torch.optim.AdamW(
filter(lambda p: p.requires_grad, net_g.parameters()),
hps.train.learning_rate,
betas=hps.train.betas,
eps=hps.train.eps)
optim_d = torch.optim.AdamW(
net_d.parameters(),
hps.train.learning_rate,
betas=hps.train.betas,
eps=hps.train.eps)
net_g = DDP(net_g, device_ids=[rank])
net_d = DDP(net_d, device_ids=[rank])
pretrain_dir = hps.pretrain
if pretrain_dir is None:
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.s2_ckpt_dir, "G_*.pth"), net_g,
optim_g, False)
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.s2_ckpt_dir, "D_*.pth"), net_d,
optim_d, False)
epoch_str = max(epoch_str, 1)
global_step = (epoch_str - 1) * len(train_loader)
else:
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g,
optim_g, True)
_, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d,
optim_d, True)
epoch_str = 1
global_step = 0
if hps.resume_step is not None:
global_step = hps.resume_step
scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
scaler = GradScaler(enabled=hps.train.fp16_run)
for epoch in range(epoch_str, hps.train.epochs + 1):
if rank == 0:
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
[train_loader, eval_loader], logger, [writer, writer_eval])
else:
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
[train_loader, None], None, None)
scheduler_g.step()
scheduler_d.step()
def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
net_g, net_d = nets
optim_g, optim_d = optims
scheduler_g, scheduler_d = schedulers
train_loader, eval_loader = loaders
if writers is not None:
writer, writer_eval = writers
train_loader.batch_sampler.set_epoch(epoch)
global global_step
net_g.train()
net_d.train()
for batch_idx, (ssl, ssl_lengths, spec, spec_lengths, y, y_lengths, text, text_lengths) in tqdm(enumerate(train_loader)):
spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
ssl = ssl.cuda(rank, non_blocking=True)
ssl_lengths = ssl_lengths.cuda(rank, non_blocking=True)
text, text_lengths = text.cuda(rank, non_blocking=True), text_lengths.cuda(rank, non_blocking=True)
with autocast(enabled=hps.train.fp16_run):
y_hat, kl_ssl, ids_slice, x_mask, z_mask, \
(z, z_p, m_p, logs_p, m_q, logs_q), stats_ssl = net_g(ssl, spec, spec_lengths, text, text_lengths)
mel = spec_to_mel_torch(
spec,
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.mel_fmin,
hps.data.mel_fmax) | y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) | 0 | 2023-10-30 04:40:19+00:00 | 12k |
nv-tlabs/vid2player3d | vid2player/utils/torch_transform.py | [
{
"identifier": "quaternion_to_angle_axis",
"path": "vid2player/utils/konia_transform.py",
"snippet": "@torch.jit.script\ndef quaternion_to_angle_axis(\n quaternion: torch.Tensor, eps: float = 1.0e-6, order: QuaternionCoeffOrder = QuaternionCoeffOrder.WXYZ\n) -> torch.Tensor:\n \"\"\"Convert quate... | import numpy as np
import torch
from .konia_transform import quaternion_to_angle_axis, angle_axis_to_quaternion, quaternion_to_rotation_matrix, rotation_matrix_to_quaternion, rotation_matrix_to_angle_axis, angle_axis_to_rotation_matrix | 7,586 |
def quat_between_two_vec(v1, v2, eps: float = 1e-6):
"""
quaternion for rotating v1 to v2
"""
orig_shape = v1.shape
v1 = v1.reshape(-1, 3)
v2 = v2.reshape(-1, 3)
dot = (v1 * v2).sum(-1)
cross = torch.cross(v1, v2, dim=-1)
out = torch.cat([(1 + dot).unsqueeze(-1), cross], dim=-1)
# handle v1 & v2 with same direction
sind = dot > 1 - eps
out[sind] = torch.tensor([1., 0., 0., 0.], device=v1.device)
# handle v1 & v2 with opposite direction
nind = dot < -1 + eps
if torch.any(nind):
vx = torch.tensor([1., 0., 0.], device=v1.device)
vxdot = (v1 * vx).sum(-1).abs()
nxind = nind & (vxdot < 1 - eps)
if torch.any(nxind):
out[nxind] = angle_axis_to_quaternion(normalize(torch.cross(vx.expand_as(v1[nxind]), v1[nxind], dim=-1)) * np.pi)
# handle v1 & v2 with opposite direction and they are parallel to x axis
pind = nind & (vxdot >= 1 - eps)
if torch.any(pind):
vy = torch.tensor([0., 1., 0.], device=v1.device)
out[pind] = angle_axis_to_quaternion(normalize(torch.cross(vy.expand_as(v1[pind]), v1[pind], dim=-1)) * np.pi)
# normalize and reshape
out = normalize(out).view(orig_shape[:-1] + (4,))
return out
@torch.jit.script
def get_yaw(q, eps: float = 1e-6):
yaw_atany = 2 * (q[..., 0] * q[..., 3] + q[..., 1] * q[..., 2])
yaw_atanx = 1 - 2 * (q[..., 2] * q[..., 2] + q[..., 3] * q[..., 3])
yaw = torch_safe_atan2(yaw_atany, yaw_atanx, eps)
return yaw
@torch.jit.script
def get_yaw_q(q):
yaw = get_yaw(q)
angle_axis = torch.cat([torch.zeros(yaw.shape + (2,), device=q.device), yaw.unsqueeze(-1)], dim=-1)
heading_q = angle_axis_to_quaternion(angle_axis)
return heading_q
@torch.jit.script
def get_heading(q, eps: float = 1e-6):
heading_atany = q[..., 3]
heading_atanx = q[..., 0]
heading = 2 * torch_safe_atan2(heading_atany, heading_atanx, eps)
return heading
def get_heading_q(q):
q_new = q.clone()
q_new[..., 1] = 0
q_new[..., 2] = 0
q_new = normalize(q_new)
return q_new
@torch.jit.script
def heading_to_vec(h_theta):
v = torch.stack([torch.cos(h_theta), torch.sin(h_theta)], dim=-1)
return v
@torch.jit.script
def vec_to_heading(h_vec):
h_theta = torch_safe_atan2(h_vec[..., 1], h_vec[..., 0])
return h_theta
@torch.jit.script
def heading_to_quat(h_theta):
angle_axis = torch.cat([torch.zeros(h_theta.shape + (2,), device=h_theta.device), h_theta.unsqueeze(-1)], dim=-1)
heading_q = angle_axis_to_quaternion(angle_axis)
return heading_q
def deheading_quat(q, heading_q=None):
if heading_q is None:
heading_q = get_heading_q(q)
dq = quat_mul(quat_conjugate(heading_q), q)
return dq
@torch.jit.script
def rotmat_to_rot6d(mat):
rot6d = torch.cat([mat[..., 0], mat[..., 1]], dim=-1)
return rot6d
@torch.jit.script
def rot6d_to_rotmat(rot6d, eps: float = 1e-8):
a1 = rot6d[..., :3].clone()
a2 = rot6d[..., 3:].clone()
ind = torch.norm(a1, dim=-1) < eps
a1[ind] = torch.tensor([1.0, 0.0, 0.0], device=a1.device)
b1 = normalize(a1)
b2 = normalize(a2 - (b1 * a2).sum(dim=-1).unsqueeze(-1) * b1)
ind = torch.norm(b2, dim=-1) < eps
b2[ind] = torch.tensor([0.0, 1.0, 0.0], device=b2.device)
b3 = torch.cross(b1, b2, dim=-1)
mat = torch.stack([b1, b2, b3], dim=-1)
return mat
@torch.jit.script
def angle_axis_to_rot6d(aa):
return rotmat_to_rot6d(angle_axis_to_rotation_matrix(aa))
@torch.jit.script
def rot6d_to_angle_axis(rot6d):
|
def normalize(x, eps: float = 1e-9):
return x / x.norm(p=2, dim=-1).clamp(min=eps, max=None).unsqueeze(-1)
@torch.jit.script
def quat_mul(a, b):
assert a.shape == b.shape
shape = a.shape
a = a.reshape(-1, 4)
b = b.reshape(-1, 4)
w1, x1, y1, z1 = a[:, 0], a[:, 1], a[:, 2], a[:, 3]
w2, x2, y2, z2 = b[:, 0], b[:, 1], b[:, 2], b[:, 3]
ww = (z1 + x1) * (x2 + y2)
yy = (w1 - y1) * (w2 + z2)
zz = (w1 + y1) * (w2 - z2)
xx = ww + yy + zz
qq = 0.5 * (xx + (z1 - x1) * (x2 - y2))
w = qq - ww + (z1 - y1) * (y2 - z2)
x = qq - xx + (x1 + w1) * (x2 + w2)
y = qq - yy + (w1 - x1) * (y2 + z2)
z = qq - zz + (z1 + y1) * (w2 - x2)
return torch.stack([w, x, y, z], dim=-1).view(shape)
@torch.jit.script
def quat_conjugate(a):
shape = a.shape
a = a.reshape(-1, 4)
return torch.cat((a[:, 0:1], -a[:, 1:]), dim=-1).view(shape)
@torch.jit.script
def quat_apply(a, b):
shape = b.shape
a = a.reshape(-1, 4)
b = b.reshape(-1, 3)
xyz = a[:, 1:].clone()
t = xyz.cross(b, dim=-1) * 2
return (b + a[:, 0:1].clone() * t + xyz.cross(t, dim=-1)).view(shape)
@torch.jit.script
def quat_angle(a, eps: float = 1e-6):
shape = a.shape
a = a.reshape(-1, 4)
s = 2 * (a[:, 0] ** 2) - 1
s = s.clamp(-1 + eps, 1 - eps)
s = s.acos()
return s.view(shape[:-1])
@torch.jit.script
def quat_angle_diff(quat1, quat2):
return quat_angle(quat_mul(quat1, quat_conjugate(quat2)))
@torch.jit.script
def torch_safe_atan2(y, x, eps: float = 1e-8):
y = y.clone()
y[(y.abs() < eps) & (x.abs() < eps)] += eps
return torch.atan2(y, x)
@torch.jit.script
def ypr_euler_from_quat(q, handle_singularity: bool = False, eps: float = 1e-6, singular_eps: float = 1e-6):
"""
convert quaternion to yaw-pitch-roll euler angles
"""
yaw_atany = 2 * (q[..., 0] * q[..., 3] + q[..., 1] * q[..., 2])
yaw_atanx = 1 - 2 * (q[..., 2] * q[..., 2] + q[..., 3] * q[..., 3])
roll_atany = 2 * (q[..., 0] * q[..., 1] + q[..., 2] * q[..., 3])
roll_atanx = 1 - 2 * (q[..., 1] * q[..., 1] + q[..., 2] * q[..., 2])
yaw = torch_safe_atan2(yaw_atany, yaw_atanx, eps)
pitch = torch.asin(torch.clamp(2 * (q[..., 0] * q[..., 2] - q[..., 1] * q[..., 3]), min=-1 + eps, max=1 - eps))
roll = torch_safe_atan2(roll_atany, roll_atanx, eps)
if handle_singularity:
""" handle two special cases """
test = q[..., 0] * q[..., 2] - q[..., 1] * q[..., 3]
# north pole, pitch ~= 90 degrees
np_ind = test > 0.5 - singular_eps
if torch.any(np_ind):
# print('ypr_euler_from_quat singularity -- north pole!')
roll[np_ind] = 0.0
pitch[np_ind].clamp_max_(0.5 * np.pi)
yaw_atany = q[..., 3][np_ind]
yaw_atanx = q[..., 0][np_ind]
yaw[np_ind] = 2 * torch_safe_atan2(yaw_atany, yaw_atanx, eps)
# south pole, pitch ~= -90 degrees
sp_ind = test < -0.5 + singular_eps
if torch.any(sp_ind):
# print('ypr_euler_from_quat singularity -- south pole!')
roll[sp_ind] = 0.0
pitch[sp_ind].clamp_min_(-0.5 * np.pi)
yaw_atany = q[..., 3][sp_ind]
yaw_atanx = q[..., 0][sp_ind]
yaw[sp_ind] = 2 * torch_safe_atan2(yaw_atany, yaw_atanx, eps)
return torch.stack([roll, pitch, yaw], dim=-1)
@torch.jit.script
def quat_from_ypr_euler(angles):
"""
convert yaw-pitch-roll euler angles to quaternion
"""
half_ang = angles * 0.5
sin = torch.sin(half_ang)
cos = torch.cos(half_ang)
q = torch.stack([
cos[..., 0] * cos[..., 1] * cos[..., 2] + sin[..., 0] * sin[..., 1] * sin[..., 2],
sin[..., 0] * cos[..., 1] * cos[..., 2] - cos[..., 0] * sin[..., 1] * sin[..., 2],
cos[..., 0] * sin[..., 1] * cos[..., 2] + sin[..., 0] * cos[..., 1] * sin[..., 2],
cos[..., 0] * cos[..., 1] * sin[..., 2] - sin[..., 0] * sin[..., 1] * cos[..., 2]
], dim=-1)
return q
def quat_between_two_vec(v1, v2, eps: float = 1e-6):
"""
quaternion for rotating v1 to v2
"""
orig_shape = v1.shape
v1 = v1.reshape(-1, 3)
v2 = v2.reshape(-1, 3)
dot = (v1 * v2).sum(-1)
cross = torch.cross(v1, v2, dim=-1)
out = torch.cat([(1 + dot).unsqueeze(-1), cross], dim=-1)
# handle v1 & v2 with same direction
sind = dot > 1 - eps
out[sind] = torch.tensor([1., 0., 0., 0.], device=v1.device)
# handle v1 & v2 with opposite direction
nind = dot < -1 + eps
if torch.any(nind):
vx = torch.tensor([1., 0., 0.], device=v1.device)
vxdot = (v1 * vx).sum(-1).abs()
nxind = nind & (vxdot < 1 - eps)
if torch.any(nxind):
out[nxind] = angle_axis_to_quaternion(normalize(torch.cross(vx.expand_as(v1[nxind]), v1[nxind], dim=-1)) * np.pi)
# handle v1 & v2 with opposite direction and they are parallel to x axis
pind = nind & (vxdot >= 1 - eps)
if torch.any(pind):
vy = torch.tensor([0., 1., 0.], device=v1.device)
out[pind] = angle_axis_to_quaternion(normalize(torch.cross(vy.expand_as(v1[pind]), v1[pind], dim=-1)) * np.pi)
# normalize and reshape
out = normalize(out).view(orig_shape[:-1] + (4,))
return out
@torch.jit.script
def get_yaw(q, eps: float = 1e-6):
yaw_atany = 2 * (q[..., 0] * q[..., 3] + q[..., 1] * q[..., 2])
yaw_atanx = 1 - 2 * (q[..., 2] * q[..., 2] + q[..., 3] * q[..., 3])
yaw = torch_safe_atan2(yaw_atany, yaw_atanx, eps)
return yaw
@torch.jit.script
def get_yaw_q(q):
yaw = get_yaw(q)
angle_axis = torch.cat([torch.zeros(yaw.shape + (2,), device=q.device), yaw.unsqueeze(-1)], dim=-1)
heading_q = angle_axis_to_quaternion(angle_axis)
return heading_q
@torch.jit.script
def get_heading(q, eps: float = 1e-6):
heading_atany = q[..., 3]
heading_atanx = q[..., 0]
heading = 2 * torch_safe_atan2(heading_atany, heading_atanx, eps)
return heading
def get_heading_q(q):
q_new = q.clone()
q_new[..., 1] = 0
q_new[..., 2] = 0
q_new = normalize(q_new)
return q_new
@torch.jit.script
def heading_to_vec(h_theta):
v = torch.stack([torch.cos(h_theta), torch.sin(h_theta)], dim=-1)
return v
@torch.jit.script
def vec_to_heading(h_vec):
h_theta = torch_safe_atan2(h_vec[..., 1], h_vec[..., 0])
return h_theta
@torch.jit.script
def heading_to_quat(h_theta):
angle_axis = torch.cat([torch.zeros(h_theta.shape + (2,), device=h_theta.device), h_theta.unsqueeze(-1)], dim=-1)
heading_q = angle_axis_to_quaternion(angle_axis)
return heading_q
def deheading_quat(q, heading_q=None):
if heading_q is None:
heading_q = get_heading_q(q)
dq = quat_mul(quat_conjugate(heading_q), q)
return dq
@torch.jit.script
def rotmat_to_rot6d(mat):
rot6d = torch.cat([mat[..., 0], mat[..., 1]], dim=-1)
return rot6d
@torch.jit.script
def rot6d_to_rotmat(rot6d, eps: float = 1e-8):
a1 = rot6d[..., :3].clone()
a2 = rot6d[..., 3:].clone()
ind = torch.norm(a1, dim=-1) < eps
a1[ind] = torch.tensor([1.0, 0.0, 0.0], device=a1.device)
b1 = normalize(a1)
b2 = normalize(a2 - (b1 * a2).sum(dim=-1).unsqueeze(-1) * b1)
ind = torch.norm(b2, dim=-1) < eps
b2[ind] = torch.tensor([0.0, 1.0, 0.0], device=b2.device)
b3 = torch.cross(b1, b2, dim=-1)
mat = torch.stack([b1, b2, b3], dim=-1)
return mat
@torch.jit.script
def angle_axis_to_rot6d(aa):
return rotmat_to_rot6d(angle_axis_to_rotation_matrix(aa))
@torch.jit.script
def rot6d_to_angle_axis(rot6d): | return rotation_matrix_to_angle_axis(rot6d_to_rotmat(rot6d)) | 4 | 2023-10-30 20:43:43+00:00 | 12k |
YARAHQ/yara-forge | yara-forge.py | [
{
"identifier": "retrieve_yara_rule_sets",
"path": "main/rule_collector.py",
"snippet": "def retrieve_yara_rule_sets(repo_staging_dir, yara_repos):\n \"\"\"\n Retrieves YARA rules from online repositories.\n \"\"\"\n\n # The list of YARA rule sets of all repositories\n yara_rule_repo_sets... | import argparse
import logging
import sys
import yaml
from main.rule_collector import retrieve_yara_rule_sets
from main.rule_processors import process_yara_rules
from main.rule_output import write_yara_packages
from qa.rule_qa import evaluate_rules_quality, check_yara_packages, get_yara_qa_commit_hash | 8,458 | #!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# -*- coding: utf-8 -*-
#
# YARA Forge
# A YARA Rule Concentrator
# Florian Roth
# January 2024
__version__ = '0.7.2'
#import pprint
# Write a section header with dividers
def write_section_header(title, divider_with=72):
print("\n" + "=" * divider_with)
print(title.center(divider_with).upper())
print("=" * divider_with + "\n")
if __name__ == "__main__":
print(r' __ _____ ____ ___ ______ ')
print(r' \ \/ / | / __ \/ | / ____/___ _________ ____ ')
print(r' \ / /| | / /_/ / /| | / /_ / __ \/ ___/ __ `/ _ \ ')
print(r' / / ___ |/ _, _/ ___ | / __/ / /_/ / / / /_/ / __/ ')
print(r' /_/_/ |_/_/ |_/_/ |_| /_/ \____/_/ \__, /\___/ ')
print(r' /____/ ')
print(r' YARA Forge ')
print(r'                   Bringing Order to Chaos                 ')
print(r' ')
print(r' Version %s ' % __version__)
print(r' Florian Roth, January 2024 ')
parser = argparse.ArgumentParser()
parser.add_argument("--debug", help="enable debug output", action="store_true")
parser.add_argument("-c", "--config", help="specify a different config file", default="yara-forge-config.yml")
args = parser.parse_args()
# Create a new logger to log into the command line and a log file name yara-forge.log
# (only set the level to debug if the debug argument is set)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG if args.debug else logging.INFO)
# Set the level of the plyara logger to warning
logging.getLogger('plyara').setLevel(logging.WARNING)
logging.getLogger('tzlocal').setLevel(logging.CRITICAL)
# Create a handler for the command line
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG if args.debug else logging.INFO)
# Create a handler for the log file
fh = logging.FileHandler("yara-forge.log")
fh.setLevel(logging.DEBUG)
# Create a formatter for the log messages that go to the log file
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# Create a formatter for the log messages that go to the command line
formatter_cmd = logging.Formatter('%(message)s')
# Add the formatter to the handlers
ch.setFormatter(formatter_cmd)
fh.setFormatter(formatter)
# Add the handlers to the logger
logger.addHandler(ch)
logger.addHandler(fh)
# Read configuration file
with open(args.config, 'r') as f:
YARA_FORGE_CONFIG = yaml.safe_load(f)
# Retrieve the YARA rule sets
write_section_header("Retrieving YARA rule sets")
yara_rule_repo_sets = retrieve_yara_rule_sets(
YARA_FORGE_CONFIG['repo_staging_dir'],
YARA_FORGE_CONFIG['yara_repositories'])
#pprint.pprint(yara_rule_repo_sets)
# Process the YARA rules
write_section_header("Processing YARA rules")
processed_yara_repos = process_yara_rules(yara_rule_repo_sets, YARA_FORGE_CONFIG)
# Evaluate the quality of the rules
write_section_header("Evaluating YARA rules")
evaluated_yara_repos = evaluate_rules_quality(processed_yara_repos, YARA_FORGE_CONFIG)
# Write the YARA packages
write_section_header("Writing YARA packages")
| #!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# -*- coding: utf-8 -*-
#
# YARA Forge
# A YARA Rule Concentrator
# Florian Roth
# January 2024
__version__ = '0.7.2'
#import pprint
# Write a section header with dividers
def write_section_header(title, divider_with=72):
print("\n" + "=" * divider_with)
print(title.center(divider_with).upper())
print("=" * divider_with + "\n")
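A tiny self-contained sketch of what `write_section_header` emits (the definition is restated so the snippet runs on its own; the narrow `divider_with=30` is just for the demo):

```python
import io
from contextlib import redirect_stdout

def write_section_header(title, divider_with=72):
    print("\n" + "=" * divider_with)
    print(title.center(divider_with).upper())
    print("=" * divider_with + "\n")

# Capture the banner instead of printing it to the terminal.
buf = io.StringIO()
with redirect_stdout(buf):
    write_section_header("Processing YARA rules", divider_with=30)
banner = buf.getvalue()
```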
if __name__ == "__main__":
print(r' __ _____ ____ ___ ______ ')
print(r' \ \/ / | / __ \/ | / ____/___ _________ ____ ')
print(r' \ / /| | / /_/ / /| | / /_ / __ \/ ___/ __ `/ _ \ ')
print(r' / / ___ |/ _, _/ ___ | / __/ / /_/ / / / /_/ / __/ ')
print(r' /_/_/ |_/_/ |_/_/ |_| /_/ \____/_/ \__, /\___/ ')
print(r' /____/ ')
print(r' YARA Forge ')
    print(r'                    Bringing Order to Chaos                     ')
print(r' ')
print(r' Version %s ' % __version__)
print(r' Florian Roth, January 2024 ')
parser = argparse.ArgumentParser()
parser.add_argument("--debug", help="enable debug output", action="store_true")
parser.add_argument("-c", "--config", help="specify a different config file", default="yara-forge-config.yml")
args = parser.parse_args()
# Create a new logger to log into the command line and a log file name yara-forge.log
# (only set the level to debug if the debug argument is set)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG if args.debug else logging.INFO)
# Set the level of the plyara logger to warning
logging.getLogger('plyara').setLevel(logging.WARNING)
logging.getLogger('tzlocal').setLevel(logging.CRITICAL)
# Create a handler for the command line
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG if args.debug else logging.INFO)
# Create a handler for the log file
fh = logging.FileHandler("yara-forge.log")
fh.setLevel(logging.DEBUG)
# Create a formatter for the log messages that go to the log file
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# Create a formatter for the log messages that go to the command line
formatter_cmd = logging.Formatter('%(message)s')
# Add the formatter to the handlers
ch.setFormatter(formatter_cmd)
fh.setFormatter(formatter)
# Add the handlers to the logger
logger.addHandler(ch)
logger.addHandler(fh)
# Read configuration file
with open(args.config, 'r') as f:
YARA_FORGE_CONFIG = yaml.safe_load(f)
# Retrieve the YARA rule sets
write_section_header("Retrieving YARA rule sets")
yara_rule_repo_sets = retrieve_yara_rule_sets(
YARA_FORGE_CONFIG['repo_staging_dir'],
YARA_FORGE_CONFIG['yara_repositories'])
#pprint.pprint(yara_rule_repo_sets)
# Process the YARA rules
write_section_header("Processing YARA rules")
processed_yara_repos = process_yara_rules(yara_rule_repo_sets, YARA_FORGE_CONFIG)
# Evaluate the quality of the rules
write_section_header("Evaluating YARA rules")
evaluated_yara_repos = evaluate_rules_quality(processed_yara_repos, YARA_FORGE_CONFIG)
# Write the YARA packages
write_section_header("Writing YARA packages") | repo_files = write_yara_packages(evaluated_yara_repos, program_version=__version__, yaraqa_commit=get_yara_qa_commit_hash(), YARA_FORGE_CONFIG=YARA_FORGE_CONFIG) | 5 | 2023-10-28 18:04:14+00:00 | 12k |
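The logging setup above — a console handler with a terse formatter next to a file handler with timestamps — can be reproduced in isolation; the logger name and `demo.log` path here are placeholders for this sketch:

```python
import logging

logger = logging.getLogger("yara-forge-demo")  # placeholder name
logger.setLevel(logging.INFO)

# Terse messages on the console.
ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('%(message)s'))

# Timestamped records in the log file.
fh = logging.FileHandler("demo.log")
fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

logger.addHandler(ch)
logger.addHandler(fh)
logger.info("retrieval finished")
```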
masked-spacetime-hashing/msth | MSTH/streamable_pipeline.py | [
{
"identifier": "base_config",
"path": "nerfstudio/configs/base_config.py",
"snippet": "class PrintableConfig: # pylint: disable=too-few-public-methods\nclass InstantiateConfig(PrintableConfig): # pylint: disable=too-few-public-methods\nclass MachineConfig(PrintableConfig):\nclass LocalWriterConfig(In... | import typing
import torch
import torch.distributed as dist
from abc import abstractmethod
from dataclasses import dataclass, field
from time import time
from typing import Any, Dict, List, Mapping, Optional, Type, Union, cast
from rich.progress import (
BarColumn,
MofNCompleteColumn,
Progress,
TextColumn,
TimeElapsedColumn,
)
from torch import nn
from torch.nn import Parameter
from torch.nn.parallel import DistributedDataParallel as DDP
from typing_extensions import Literal
from nerfstudio.configs import base_config as cfg
from nerfstudio.data.datamanagers.base_datamanager import (
DataManager,
DataManagerConfig,
VanillaDataManager,
VanillaDataManagerConfig,
)
from nerfstudio.engine.callbacks import TrainingCallback, TrainingCallbackAttributes
from nerfstudio.models.base_model import Model, ModelConfig
from nerfstudio.utils import profiler | 8,785 | and so on.
Args:
config: configuration to instantiate pipeline
device: location to place model and data
test_mode:
'train': loads train/eval datasets into memory
'test': loads train/test dataset into memory
'inference': does not load any dataset into memory
world_size: total number of machines available
local_rank: rank of current machine
Attributes:
datamanager: The data manager that will be used
model: The model that will be used
"""
# pylint: disable=abstract-method
datamanager: DataManager
_model: Model
@property
def model(self):
"""Returns the unwrapped model if in ddp"""
return module_wrapper(self._model)
@property
def device(self):
"""Returns the device that the model is on."""
return self.model.device
def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True):
model_state = {
key.replace("_model.", ""): value for key, value in state_dict.items() if key.startswith("_model.")
}
pipeline_state = {key: value for key, value in state_dict.items() if not key.startswith("_model.")}
self._model.load_state_dict(model_state, strict=strict)
super().load_state_dict(pipeline_state, strict=False)
@profiler.time_function
def get_train_loss_dict(self, step: int):
"""This function gets your training loss dict. This will be responsible for
getting the next batch of data from the DataManager and interfacing with the
Model class, feeding the data to the model's forward function.
Args:
step: current iteration step to update sampler if using DDP (distributed)
"""
if self.world_size > 1 and step:
assert self.datamanager.train_sampler is not None
self.datamanager.train_sampler.set_epoch(step)
ray_bundle, batch = self.datamanager.next_train(step)
model_outputs = self.model(ray_bundle, batch)
metrics_dict = self.model.get_metrics_dict(model_outputs, batch)
loss_dict = self.model.get_loss_dict(model_outputs, batch, metrics_dict)
return model_outputs, loss_dict, metrics_dict
@profiler.time_function
def get_eval_loss_dict(self, step: int):
"""This function gets your evaluation loss dict. It needs to get the data
from the DataManager and feed it to the model's forward function
Args:
step: current iteration step
"""
self.eval()
if self.world_size > 1:
assert self.datamanager.eval_sampler is not None
self.datamanager.eval_sampler.set_epoch(step)
ray_bundle, batch = self.datamanager.next_eval(step)
model_outputs = self.model(ray_bundle, batch)
metrics_dict = self.model.get_metrics_dict(model_outputs, batch)
loss_dict = self.model.get_loss_dict(model_outputs, batch, metrics_dict)
self.train()
return model_outputs, loss_dict, metrics_dict
@abstractmethod
@profiler.time_function
def get_eval_image_metrics_and_images(self, step: int):
"""This function gets your evaluation loss dict. It needs to get the data
from the DataManager and feed it to the model's forward function
Args:
step: current iteration step
"""
@abstractmethod
@profiler.time_function
def get_average_eval_image_metrics(self, step: Optional[int] = None):
"""Iterate over all the images in the eval dataset and get the average."""
def load_pipeline(self, loaded_state: Dict[str, Any], step: int) -> None:
"""Load the checkpoint from the given path
Args:
loaded_state: pre-trained model state dict
step: training step of the loaded checkpoint
"""
def get_training_callbacks(
self, training_callback_attributes: TrainingCallbackAttributes
) -> List[TrainingCallback]:
"""Returns the training callbacks from both the Dataloader and the Model."""
def get_param_groups(self) -> Dict[str, List[Parameter]]:
"""Get the param groups for the pipeline.
Returns:
A list of dictionaries containing the pipeline's param groups.
"""
@dataclass
class VanillaPipelineConfig(cfg.InstantiateConfig):
"""Configuration for pipeline instantiation"""
_target: Type = field(default_factory=lambda: VanillaPipeline)
"""target class to instantiate"""
| # Copyright 2022 The Nerfstudio Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Abstracts for the Pipeline class.
"""
from __future__ import annotations
def module_wrapper(ddp_or_model: Union[DDP, Model]) -> Model:
"""
If DDP, then return the .module. Otherwise, return the model.
"""
if isinstance(ddp_or_model, DDP):
return cast(Model, ddp_or_model.module)
return ddp_or_model
class Pipeline(nn.Module):
"""The intent of this class is to provide a higher level interface for the Model
that will be easy to use for our Trainer class.
This class will contain high level functions for the model like getting the loss
dictionaries and visualization code. It should have ways to get the next iterations
training loss, evaluation loss, and generate whole images for visualization. Each model
class should be 1:1 with a pipeline that can act as a standardized interface and hide
differences in how each model takes in and outputs data.
This class's function is to hide the data manager and model classes from the trainer,
    taking care of:
1) Fetching data with the data manager
2) Feeding the model the data and fetching the loss
    Hopefully this provides a higher level interface for the trainer to use, and
    simplifies the model classes, which each may have different forward() methods
and so on.
Args:
config: configuration to instantiate pipeline
device: location to place model and data
test_mode:
'train': loads train/eval datasets into memory
'test': loads train/test dataset into memory
'inference': does not load any dataset into memory
world_size: total number of machines available
local_rank: rank of current machine
Attributes:
datamanager: The data manager that will be used
model: The model that will be used
"""
# pylint: disable=abstract-method
datamanager: DataManager
_model: Model
@property
def model(self):
"""Returns the unwrapped model if in ddp"""
return module_wrapper(self._model)
@property
def device(self):
"""Returns the device that the model is on."""
return self.model.device
def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True):
model_state = {
key.replace("_model.", ""): value for key, value in state_dict.items() if key.startswith("_model.")
}
pipeline_state = {key: value for key, value in state_dict.items() if not key.startswith("_model.")}
self._model.load_state_dict(model_state, strict=strict)
super().load_state_dict(pipeline_state, strict=False)
@profiler.time_function
def get_train_loss_dict(self, step: int):
"""This function gets your training loss dict. This will be responsible for
getting the next batch of data from the DataManager and interfacing with the
Model class, feeding the data to the model's forward function.
Args:
step: current iteration step to update sampler if using DDP (distributed)
"""
if self.world_size > 1 and step:
assert self.datamanager.train_sampler is not None
self.datamanager.train_sampler.set_epoch(step)
ray_bundle, batch = self.datamanager.next_train(step)
model_outputs = self.model(ray_bundle, batch)
metrics_dict = self.model.get_metrics_dict(model_outputs, batch)
loss_dict = self.model.get_loss_dict(model_outputs, batch, metrics_dict)
return model_outputs, loss_dict, metrics_dict
@profiler.time_function
def get_eval_loss_dict(self, step: int):
"""This function gets your evaluation loss dict. It needs to get the data
from the DataManager and feed it to the model's forward function
Args:
step: current iteration step
"""
self.eval()
if self.world_size > 1:
assert self.datamanager.eval_sampler is not None
self.datamanager.eval_sampler.set_epoch(step)
ray_bundle, batch = self.datamanager.next_eval(step)
model_outputs = self.model(ray_bundle, batch)
metrics_dict = self.model.get_metrics_dict(model_outputs, batch)
loss_dict = self.model.get_loss_dict(model_outputs, batch, metrics_dict)
self.train()
return model_outputs, loss_dict, metrics_dict
@abstractmethod
@profiler.time_function
def get_eval_image_metrics_and_images(self, step: int):
"""This function gets your evaluation loss dict. It needs to get the data
from the DataManager and feed it to the model's forward function
Args:
step: current iteration step
"""
@abstractmethod
@profiler.time_function
def get_average_eval_image_metrics(self, step: Optional[int] = None):
"""Iterate over all the images in the eval dataset and get the average."""
def load_pipeline(self, loaded_state: Dict[str, Any], step: int) -> None:
"""Load the checkpoint from the given path
Args:
loaded_state: pre-trained model state dict
step: training step of the loaded checkpoint
"""
def get_training_callbacks(
self, training_callback_attributes: TrainingCallbackAttributes
) -> List[TrainingCallback]:
"""Returns the training callbacks from both the Dataloader and the Model."""
def get_param_groups(self) -> Dict[str, List[Parameter]]:
"""Get the param groups for the pipeline.
Returns:
A list of dictionaries containing the pipeline's param groups.
"""
@dataclass
class VanillaPipelineConfig(cfg.InstantiateConfig):
"""Configuration for pipeline instantiation"""
_target: Type = field(default_factory=lambda: VanillaPipeline)
"""target class to instantiate""" | datamanager: DataManagerConfig = VanillaDataManagerConfig() | 4 | 2023-10-26 04:39:15+00:00 | 12k |
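`VanillaPipelineConfig` follows nerfstudio's `InstantiateConfig` convention: the config dataclass carries a `_target` factory naming the class it builds. A minimal framework-free sketch of that pattern (the `Greeter` names are illustrative, not nerfstudio API):

```python
from dataclasses import dataclass, field
from typing import Type

class Greeter:
    """Stand-in for a pipeline/model class built from its config."""
    def __init__(self, config):
        self.config = config

@dataclass
class GreeterConfig:
    _target: Type = field(default_factory=lambda: Greeter)
    name: str = "world"

    def setup(self):
        # Instantiate the target class, handing it the config itself.
        return self._target(self)

g = GreeterConfig(name="nerf").setup()
print(type(g).__name__, g.config.name)  # Greeter nerf
```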
mikacuy/PL-NeRF | run_plnerf.py | [
{
"identifier": "load_llff_data",
"path": "load_llff.py",
"snippet": "def load_llff_data(basedir, factor=8, recenter=True, bd_factor=.75, spherify=False, path_zflat=False):\n \n\n poses, bds, imgs = _load_data(basedir, factor=factor) # factor=8 downsamples original imgs by 8x\n print('Loaded', ... | import os, sys
import numpy as np
import imageio
import json
import random
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import configargparse
import datetime
import math
import cv2
import shutil
from tqdm import tqdm, trange
from torch.utils.tensorboard import SummaryWriter
from skimage.metrics import structural_similarity
from lpips import LPIPS
from run_nerf_helpers import *
from load_llff import load_llff_data
from load_dtu import load_dtu, load_dtu2
from load_blender import load_blender_data, load_scene_blender_fixed_dist_new, load_scene_blender2
from natsort import natsorted
from argparse import Namespace | 8,713 | print(f"Using {args.n_gpus} GPU(s).")
# Load data
scene_data_dir = os.path.join(args.data_dir, args.scene_id)
K = None
if args.dataset == 'llff':
images, poses, bds, render_poses, i_test = load_llff_data(scene_data_dir, args.factor,
recenter=True, bd_factor=.75,
spherify=args.spherify)
hwf = poses[0,:3,-1]
poses = poses[:,:3,:4]
print('Loaded llff', images.shape, render_poses.shape, hwf, scene_data_dir)
if not isinstance(i_test, list):
i_test = [i_test]
if args.llffhold > 0:
print('Auto LLFF holdout,', args.llffhold)
i_test = np.arange(images.shape[0])[::args.llffhold]
i_val = i_test
i_train = np.array([i for i in np.arange(int(images.shape[0])) if
(i not in i_test and i not in i_val)])
print('DEFINING BOUNDS')
if args.no_ndc:
near = np.ndarray.min(bds) * .9
far = np.ndarray.max(bds) * 1.
else:
near = 0.
far = 1.
print('NEAR FAR', near, far)
elif args.dataset == 'blender':
images, poses, render_poses, hwf, i_split = load_blender_data(scene_data_dir, args.half_res, args.testskip)
print('Loaded blender', images.shape, render_poses.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
# near = 2.
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == "blender2":
images, poses, render_poses, hwf, i_split = load_scene_blender2(scene_data_dir, half_res=args.half_res)
print('Loaded blender2', images.shape, render_poses.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
# near = 2.
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == "blender_fixeddist":
images, poses, render_poses, hwf, i_split = load_scene_blender_fixed_dist_new(scene_data_dir, half_res=args.half_res, train_dist=1.0, test_dist=args.test_dist)
print('Loaded blender fixed dist', images.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'LINEMOD':
images, poses, render_poses, hwf, K, i_split, near, far = load_LINEMOD_data(scene_data_dir, args.half_res, args.testskip)
print(f'Loaded LINEMOD, images shape: {images.shape}, hwf: {hwf}, K: {K}')
print(f'[CHECK HERE] near: {near}, far: {far}.')
i_train, i_val, i_test = i_split
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'DTU':
# use the existing split
if args.dtu_split is not None:
with open(args.dtu_split, 'r') as ff:
train_split = json.load(ff)
else:
train_split = None
images, Ks, poses, render_poses, hwf, i_split, near, far, splits = load_dtu(args.data_dir, args.dtu_scene_id, num_train=args.num_train, half_res=args.half_res, train_split=train_split)
K = Ks[0]
print(f'Loaded DTU, images shape: {images.shape}, hwf: {hwf}, K: {K}')
print(f'[CHECK HERE] near: {near}, far: {far}.')
i_train, i_test = i_split
i_val = i_test
save_json = build_json_for_dtu(splits, Ks, poses, near, far)
save_split_file = os.path.join(args.ckpt_dir, args.expname, 'split.json')
with open(save_split_file, 'w') as f:
json.dump(save_json, f, indent=4)
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'DTU2':
# use the existing split
if args.dtu_split is not None:
with open(args.dtu_split, 'r') as ff:
train_split = json.load(ff)
else:
train_split = None
| '''
Mikaela Uy
mikacuy@cs.stanford.edu
PL-NeRF: novel view synthesis experiments
A piecewise linear formulation of volume rendering
'''
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
DEBUG = False
def build_json_for_dtu(splits, intrinsics, poses, near, far):
out_dict = {}
out_dict = {"near": near,
"far": far}
i_train, i_test = splits
train_dicts = []
test_dicts = []
for i in i_train:
train_dict = {}
train_dict["extrinsic"] = poses[i].tolist()
train_dict["intrinsic"] = intrinsics[i].tolist()
train_dict["pose_id"] = int(i)
train_dicts.append(train_dict)
for i in i_test:
test_dict = {}
test_dict["extrinsic"] = poses[i].tolist()
test_dict["intrinsic"] = intrinsics[i].tolist()
test_dict["pose_id"] = int(i)
test_dicts.append(test_dict)
out_dict["train_frames"] = train_dicts
out_dict["test_frames"] = test_dicts
return out_dict
def batchify(fn, chunk):
"""Constructs a version of 'fn' that applies to smaller batches.
"""
if chunk is None:
return fn
def ret(inputs):
return torch.cat([fn(inputs[i:i+chunk]) for i in range(0, inputs.shape[0], chunk)], 0)
return ret
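`batchify` simply applies `fn` to fixed-size slices of the input and concatenates the results. The same chunking idea, sketched framework-free with plain lists instead of tensors:

```python
def batchify_list(fn, chunk):
    """Apply fn to `chunk`-sized slices of a list and concatenate the results."""
    if chunk is None:
        return fn
    def ret(inputs):
        out = []
        for i in range(0, len(inputs), chunk):
            out.extend(fn(inputs[i:i + chunk]))
        return out
    return ret

double_chunked = batchify_list(lambda xs: [2 * x for x in xs], chunk=3)
print(double_chunked(list(range(7))))  # [0, 2, 4, 6, 8, 10, 12]
```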
def run_network(inputs, viewdirs, fn, embed_fn, embeddirs_fn, netchunk=1024*64):
"""Prepares inputs and applies network 'fn'.
"""
inputs_flat = torch.reshape(inputs, [-1, inputs.shape[-1]])
embedded = embed_fn(inputs_flat)
if viewdirs is not None:
input_dirs = viewdirs[:,None].expand(inputs.shape)
input_dirs_flat = torch.reshape(input_dirs, [-1, input_dirs.shape[-1]])
embedded_dirs = embeddirs_fn(input_dirs_flat)
embedded = torch.cat([embedded, embedded_dirs], -1)
outputs_flat = batchify(fn, netchunk)(embedded)
outputs = torch.reshape(outputs_flat, list(inputs.shape[:-1]) + [outputs_flat.shape[-1]])
return outputs
def batchify_rays(rays_flat, chunk=1024*32, **kwargs):
"""Render rays in smaller minibatches to avoid OOM.
"""
all_ret = {}
for i in range(0, rays_flat.shape[0], chunk):
ret = render_rays(rays_flat[i:i+chunk], **kwargs)
for k in ret:
if k not in all_ret:
all_ret[k] = []
all_ret[k].append(ret[k])
all_ret = {k : torch.cat(all_ret[k], 0) for k in all_ret}
return all_ret
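`batchify_rays` gathers per-chunk result dicts into lists keyed by output name, then concatenates each list. The same accumulate-then-merge pattern, sketched without torch:

```python
def merge_chunk_results(chunks):
    """Collect per-chunk dicts into lists per key, then flatten each list."""
    all_ret = {}
    for ret in chunks:
        for k, v in ret.items():
            all_ret.setdefault(k, []).append(v)
    return {k: [x for part in vs for x in part] for k, vs in all_ret.items()}

print(merge_chunk_results([{"rgb": [1, 2]}, {"rgb": [3]}]))  # {'rgb': [1, 2, 3]}
```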
def render(H, W, K, chunk=1024*32, rays=None, c2w=None, ndc=True,
near=0., far=1.,
use_viewdirs=False, c2w_staticcam=None,
**kwargs):
"""Render rays
Args:
H: int. Height of image in pixels.
W: int. Width of image in pixels.
focal: float. Focal length of pinhole camera.
chunk: int. Maximum number of rays to process simultaneously. Used to
control maximum memory usage. Does not affect final results.
rays: array of shape [2, batch_size, 3]. Ray origin and direction for
each example in batch.
c2w: array of shape [3, 4]. Camera-to-world transformation matrix.
ndc: bool. If True, represent ray origin, direction in NDC coordinates.
near: float or array of shape [batch_size]. Nearest distance for a ray.
far: float or array of shape [batch_size]. Farthest distance for a ray.
use_viewdirs: bool. If True, use viewing direction of a point in space in model.
c2w_staticcam: array of shape [3, 4]. If not None, use this transformation matrix for
camera while using other c2w argument for viewing directions.
Returns:
rgb_map: [batch_size, 3]. Predicted RGB values for rays.
disp_map: [batch_size]. Disparity map. Inverse of depth.
acc_map: [batch_size]. Accumulated opacity (alpha) along a ray.
extras: dict with everything returned by render_rays().
"""
if c2w is not None:
# special case to render full image
rays_o, rays_d = get_rays(H, W, K, c2w)
else:
# use provided ray batch
rays_o, rays_d = rays
if use_viewdirs:
# provide ray directions as input
viewdirs = rays_d
if c2w_staticcam is not None:
# special case to visualize effect of viewdirs
rays_o, rays_d = get_rays(H, W, K, c2w_staticcam)
viewdirs = viewdirs / torch.norm(viewdirs, dim=-1, keepdim=True)
viewdirs = torch.reshape(viewdirs, [-1,3]).float()
sh = rays_d.shape # [..., 3]
if ndc:
# for forward facing scenes
rays_o, rays_d = ndc_rays(H, W, K[0][0], 1., rays_o, rays_d)
# Create ray batch
rays_o = torch.reshape(rays_o, [-1,3]).float()
rays_d = torch.reshape(rays_d, [-1,3]).float()
near, far = near * torch.ones_like(rays_d[...,:1]), far * torch.ones_like(rays_d[...,:1])
rays = torch.cat([rays_o, rays_d, near, far], -1)
if use_viewdirs:
rays = torch.cat([rays, viewdirs], -1)
# Render and reshape
all_ret = batchify_rays(rays, chunk, **kwargs)
for k in all_ret:
k_sh = list(sh[:-1]) + list(all_ret[k].shape[1:])
all_ret[k] = torch.reshape(all_ret[k], k_sh)
k_extract = ['rgb_map', 'disp_map', 'acc_map']
ret_list = [all_ret[k] for k in k_extract]
ret_dict = {k : all_ret[k] for k in all_ret if k not in k_extract}
return ret_list + [ret_dict]
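Each row of the ray batch built in `render` packs origin (3), direction (3), near (1) and far (1), with an optional 3-vector view direction appended, so a ray is 8 (or 11) floats. A plain-list illustration of that layout:

```python
ray_o = [0.0, 0.0, 0.0]      # origin
ray_d = [0.0, 0.0, 1.0]      # direction
near, far = [2.0], [6.0]     # per-ray bounds
viewdir = [0.0, 0.0, 1.0]    # appended only when use_viewdirs is set

packed = ray_o + ray_d + near + far
print(len(packed), len(packed + viewdir))  # 8 11
```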
def render_path(render_poses, hwf, K, chunk, render_kwargs, gt_imgs=None, savedir=None, render_factor=0):
H, W, focal = hwf
if render_factor!=0:
# Render downsampled for speed
H = H//render_factor
W = W//render_factor
focal = focal/render_factor
rgbs = []
disps = []
t = time.time()
for i, c2w in enumerate(tqdm(render_poses)):
print(i, time.time() - t)
t = time.time()
rgb, disp, acc, _ = render(H, W, K, chunk=chunk, c2w=c2w[:3,:4], **render_kwargs)
rgbs.append(rgb.cpu().numpy())
disps.append(disp.cpu().numpy())
if i==0:
print(rgb.shape, disp.shape)
"""
if gt_imgs is not None and render_factor==0:
p = -10. * np.log10(np.mean(np.square(rgb.cpu().numpy() - gt_imgs[i])))
print(p)
"""
if savedir is not None:
rgb8 = to8b(rgbs[-1])
filename = os.path.join(savedir, '{:03d}.png'.format(i))
imageio.imwrite(filename, rgb8)
rgbs = np.stack(rgbs, 0)
disps = np.stack(disps, 0)
return rgbs, disps
def test_images_samples(count, indices, images, depths, valid_depths, poses, H, W, K, lpips_alex, args, render_kwargs_test, \
embedcam_fn=None, with_test_time_optimization=False):
far = render_kwargs_test['far']
if count is None:
# take all images in order
count = len(indices)
img_i = indices
else:
# take random images
if count > len(indices):
count = len(indices)
img_i = np.random.choice(indices, size=count, replace=False)
rgbs_res = torch.empty(count, 3, H, W)
rgbs0_res = torch.empty(count, 3, H, W)
target_rgbs_res = torch.empty(count, 3, H, W)
depths_res = torch.empty(count, 1, H, W)
depths0_res = torch.empty(count, 1, H, W)
target_depths_res = torch.empty(count, 1, H, W)
target_valid_depths_res = torch.empty(count, 1, H, W, dtype=bool)
mean_metrics = MeanTracker()
mean_depth_metrics = MeanTracker() # track separately since they are not always available
for n, img_idx in enumerate(img_i):
print("Render image {}/{}".format(n + 1, count))
target = images[img_idx]
target_depth = torch.zeros((target.shape[0], target.shape[1], 1)).to(device)
target_valid_depth = torch.zeros((target.shape[0], target.shape[1]), dtype=bool).to(device)
pose = poses[img_idx, :3,:4]
intrinsic = K
with torch.no_grad():
# rgb, _, _, extras = render(H, W, intrinsic, chunk=(args.chunk // 2), c2w=pose, **render_kwargs_test)
# print(render_kwargs_test)
rgb, _, _, extras = render(H, W, intrinsic, chunk=args.chunk, c2w=pose, **render_kwargs_test)
###
target_hypothesis_repeated = extras['depth_map'].unsqueeze(-1).repeat(1, 1, extras["pred_hyp"].shape[-1])
dists = torch.norm(extras["pred_hyp"].unsqueeze(-1) - target_hypothesis_repeated.unsqueeze(-1), p=2, dim=-1)
mask = extras['depth_map'] < 4.0
dist_masked = dists[mask, ...]
depth_rmse = torch.mean(dists)
if not torch.isnan(depth_rmse):
depth_metrics = {"importance_sampling_error" : depth_rmse.item()}
mean_depth_metrics.add(depth_metrics)
mean_metrics = mean_depth_metrics
result_dir = os.path.join(args.ckpt_dir, args.expname, "test_samples_error" + "_" + str(args.N_importance))
os.makedirs(result_dir, exist_ok=True)
with open(os.path.join(result_dir, 'metrics_expecteddepth.txt'), 'w') as f:
mean_metrics.print(f)
return mean_metrics
def render_images_with_metrics(count, indices, images, depths, valid_depths, poses, H, W, K, lpips_alex, args, render_kwargs_test, \
embedcam_fn=None, with_test_time_optimization=False):
far = render_kwargs_test['far']
if count is None:
# take all images in order
count = len(indices)
img_i = indices
else:
# take random images
if count > len(indices):
count = len(indices)
img_i = np.random.choice(indices, size=count, replace=False)
rgbs_res = torch.empty(count, 3, H, W)
rgbs0_res = torch.empty(count, 3, H, W)
target_rgbs_res = torch.empty(count, 3, H, W)
depths_res = torch.empty(count, 1, H, W)
depths0_res = torch.empty(count, 1, H, W)
target_depths_res = torch.empty(count, 1, H, W)
target_valid_depths_res = torch.empty(count, 1, H, W, dtype=bool)
mean_metrics = MeanTracker()
mean_depth_metrics = MeanTracker() # track separately since they are not always available
for n, img_idx in enumerate(img_i):
print("Render image {}/{}".format(n + 1, count), end="")
target = images[img_idx]
if args.dataset == "scannet":
target_depth = depths[img_idx]
target_valid_depth = valid_depths[img_idx]
else:
target_depth = torch.zeros((target.shape[0], target.shape[1], 1)).to(device)
target_valid_depth = torch.zeros((target.shape[0], target.shape[1]), dtype=bool).to(device)
pose = poses[img_idx, :3,:4]
intrinsic = K
with torch.no_grad():
# rgb, _, _, extras = render(H, W, intrinsic, chunk=(args.chunk // 2), c2w=pose, **render_kwargs_test)
# print(render_kwargs_test)
rgb, _, _, extras = render(H, W, intrinsic, chunk=args.chunk, c2w=pose, **render_kwargs_test)
# compute depth rmse
depth_rmse = compute_rmse(extras['depth_map'][target_valid_depth], target_depth[:, :, 0][target_valid_depth])
if not torch.isnan(depth_rmse):
depth_metrics = {"depth_rmse" : depth_rmse.item()}
mean_depth_metrics.add(depth_metrics)
# compute color metrics
target = torch.tensor(target).to(rgb.device)
img_loss = img2mse(rgb, target)
psnr = mse2psnr(img_loss)
print("PSNR: {}".format(psnr))
rgb = torch.clamp(rgb, 0, 1)
ssim = structural_similarity(rgb.cpu().numpy(), target.cpu().numpy(), data_range=1., channel_axis=-1)
lpips = lpips_alex(rgb.permute(2, 0, 1).unsqueeze(0), target.permute(2, 0, 1).unsqueeze(0), normalize=True)[0]
# store result
rgbs_res[n] = rgb.clamp(0., 1.).permute(2, 0, 1).cpu()
target_rgbs_res[n] = target.permute(2, 0, 1).cpu()
depths_res[n] = (extras['depth_map'] / far).unsqueeze(0).cpu()
target_depths_res[n] = (target_depth[:, :, 0] / far).unsqueeze(0).cpu()
target_valid_depths_res[n] = target_valid_depth.unsqueeze(0).cpu()
metrics = {"img_loss" : img_loss.item(), "psnr" : psnr.item(), "ssim" : ssim, "lpips" : lpips[0, 0, 0],}
if 'rgb0' in extras:
img_loss0 = img2mse(extras['rgb0'], target)
psnr0 = mse2psnr(img_loss0)
depths0_res[n] = (extras['depth0'] / far).unsqueeze(0).cpu()
rgbs0_res[n] = torch.clamp(extras['rgb0'], 0, 1).permute(2, 0, 1).cpu()
metrics.update({"img_loss0" : img_loss0.item(), "psnr0" : psnr0.item()})
mean_metrics.add(metrics)
res = { "rgbs" : rgbs_res, "target_rgbs" : target_rgbs_res, "depths" : depths_res, "target_depths" : target_depths_res, \
"target_valid_depths" : target_valid_depths_res}
if 'rgb0' in extras:
res.update({"rgbs0" : rgbs0_res, "depths0" : depths0_res,})
all_mean_metrics = MeanTracker()
all_mean_metrics.add({**mean_metrics.as_dict(), **mean_depth_metrics.as_dict()})
return all_mean_metrics, res
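`img2mse`/`mse2psnr` (from `run_nerf_helpers`) implement the standard conversion PSNR = -10·log10(MSE) for images normalized to [0, 1]; a scalar sketch:

```python
import math

def mse_to_psnr(mse):
    """Standard PSNR in dB for signals normalized to [0, 1]."""
    return -10.0 * math.log10(mse)

print(round(mse_to_psnr(0.01), 2))  # 20.0
```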
def write_images_with_metrics(images, mean_metrics, far, args, with_test_time_optimization=False, test_samples=False):
if not test_samples:
result_dir = os.path.join(args.ckpt_dir, args.expname, "test_images_" + str(args.mode)+ "_" + str(args.N_samples) + "_" + str(args.N_importance) + ("with_optimization_" if with_test_time_optimization else "") + args.scene_id)
else:
result_dir = os.path.join(args.ckpt_dir, args.expname, "test_images_samples" + str(args.mode)+ "_" + str(args.N_samples) + "_" + str(args.N_importance) + ("with_optimization_" if with_test_time_optimization else "") + str(args.N_samples) + "_" + str(args.N_importance) + args.scene_id)
os.makedirs(result_dir, exist_ok=True)
for n, (rgb, depth, gt_rgb) in enumerate(zip(images["rgbs"].permute(0, 2, 3, 1).cpu().numpy(), \
images["depths"].permute(0, 2, 3, 1).cpu().numpy(), images["target_rgbs"].permute(0, 2, 3, 1).cpu().numpy())):
# write rgb
cv2.imwrite(os.path.join(result_dir, str(n) + "_rgb" + ".png"), cv2.cvtColor(to8b(rgb), cv2.COLOR_RGB2BGR))
cv2.imwrite(os.path.join(result_dir, str(n) + "_gt" + ".png"), cv2.cvtColor(to8b(gt_rgb), cv2.COLOR_RGB2BGR))
# write depth
cv2.imwrite(os.path.join(result_dir, str(n) + "_d" + ".png"), to16b(depth))
with open(os.path.join(result_dir, 'metrics.txt'), 'w') as f:
mean_metrics.print(f)
mean_metrics.print()
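`to8b`/`to16b` used above are assumed to be the usual NeRF helpers: clip to [0, 1], scale to the integer range, truncate. A scalar sketch of that convention (the real helpers act on numpy arrays):

```python
def to8b_scalar(x):
    """Clip to [0, 1] and scale to uint8 range, truncating like astype."""
    return int(255 * min(max(x, 0.0), 1.0))

def to16b_scalar(x):
    """Clip to [0, 1] and scale to uint16 range."""
    return int(65535 * min(max(x, 0.0), 1.0))

print(to8b_scalar(0.5), to16b_scalar(1.2))  # 127 65535
```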
def write_images_with_metrics_testdist(images, mean_metrics, far, args, test_dist, with_test_time_optimization=False, test_samples=False):
if not test_samples:
result_dir = os.path.join(args.ckpt_dir, args.expname, "test_images_dist" + str(test_dist) + "_" + ("with_optimization_" if with_test_time_optimization else "") + args.scene_id)
else:
result_dir = os.path.join(args.ckpt_dir, args.expname, "test_images_samples_dist" + str(test_dist) + "_" + ("with_optimization_" if with_test_time_optimization else "") + str(args.N_samples) + "_" + str(args.N_importance) + args.scene_id)
# if not test_samples:
# result_dir = os.path.join(args.ckpt_dir, args.expname, "train_images_" + ("with_optimization_" if with_test_time_optimization else "") + args.scene_id)
# else:
# result_dir = os.path.join(args.ckpt_dir, args.expname, "train_images_samples" + ("with_optimization_" if with_test_time_optimization else "") + str(args.N_samples) + "_" + str(args.N_importance) + args.scene_id)
os.makedirs(result_dir, exist_ok=True)
for n, (rgb, depth, gt_rgb) in enumerate(zip(images["rgbs"].permute(0, 2, 3, 1).cpu().numpy(), \
images["depths"].permute(0, 2, 3, 1).cpu().numpy(), images["target_rgbs"].permute(0, 2, 3, 1).cpu().numpy())):
# write rgb
# cv2.imwrite(os.path.join(result_dir, str(n) + "_rgb" + ".jpg"), cv2.cvtColor(to8b(rgb), cv2.COLOR_RGB2BGR))
cv2.imwrite(os.path.join(result_dir, str(n) + "_rgb" + ".png"), cv2.cvtColor(to8b(rgb), cv2.COLOR_RGB2BGR))
cv2.imwrite(os.path.join(result_dir, str(n) + "_gt" + ".png"), cv2.cvtColor(to8b(gt_rgb), cv2.COLOR_RGB2BGR))
# write depth
cv2.imwrite(os.path.join(result_dir, str(n) + "_d" + ".png"), to16b(depth))
with open(os.path.join(result_dir, 'metrics.txt'), 'w') as f:
mean_metrics.print(f)
mean_metrics.print()
def create_nerf(args):
"""Instantiate NeRF's MLP model.
"""
embed_fn, input_ch = get_embedder(args.multires, args.i_embed)
input_ch_views = 0
embeddirs_fn = None
if args.use_viewdirs:
embeddirs_fn, input_ch_views = get_embedder(args.multires_views, args.i_embed)
output_ch = 5 if args.N_importance > 0 else 4
skips = [4]
model = NeRF(D=args.netdepth, W=args.netwidth,
input_ch=input_ch, output_ch=output_ch, skips=skips,
input_ch_views=input_ch_views, use_viewdirs=args.use_viewdirs).to(device)
coarse_grad_vars = list(model.parameters())
    model_fine = None
    grad_vars = coarse_grad_vars  # avoids a NameError below when N_importance == 0
    if args.N_importance > 0:
model_fine = NeRF(D=args.netdepth_fine, W=args.netwidth_fine,
input_ch=input_ch, output_ch=output_ch, skips=skips,
input_ch_views=input_ch_views, use_viewdirs=args.use_viewdirs).to(device)
grad_vars = list(model_fine.parameters())
network_query_fn = lambda inputs, viewdirs, network_fn : run_network(inputs, viewdirs, network_fn,
embed_fn=embed_fn,
embeddirs_fn=embeddirs_fn,
netchunk=args.netchunk)
# Create optimizer
optimizer = torch.optim.Adam(params=grad_vars, lr=args.lrate, betas=(0.9, 0.999))
optimizer_coarse = torch.optim.Adam(params=coarse_grad_vars, lr=args.coarse_lrate, betas=(0.9, 0.999))
start = 0
##########################
# Load checkpoints
if args.ft_path is not None and args.ft_path!='None':
ckpts = [args.ft_path]
else:
ckpts = [os.path.join(args.ckpt_dir, args.expname, f) for f in sorted(os.listdir(os.path.join(args.ckpt_dir, args.expname))) if 'tar' in f]
print('Found ckpts', ckpts)
if len(ckpts) > 0 and not args.no_reload:
ckpt_path = ckpts[-1]
print('Reloading from', ckpt_path)
ckpt = torch.load(ckpt_path)
start = ckpt['global_step']
optimizer.load_state_dict(ckpt['optimizer_state_dict'])
# Load model
model.load_state_dict(ckpt['network_fn_state_dict'])
if model_fine is not None:
model_fine.load_state_dict(ckpt['network_fine_state_dict'])
##########################
render_kwargs_train = {
'network_query_fn' : network_query_fn,
'perturb' : args.perturb,
'N_importance' : args.N_importance,
'network_fine' : model_fine,
'N_samples' : args.N_samples,
'network_fn' : model,
'use_viewdirs' : args.use_viewdirs,
'white_bkgd' : args.white_bkgd,
'raw_noise_std' : args.raw_noise_std,
'mode' : args.mode,
'color_mode': args.color_mode
}
# NDC only good for LLFF-style forward facing data
if args.dataset != 'llff' or args.no_ndc:
print('Not ndc!')
render_kwargs_train['ndc'] = False
render_kwargs_train['lindisp'] = args.lindisp
render_kwargs_test = {k : render_kwargs_train[k] for k in render_kwargs_train}
render_kwargs_test['perturb'] = True
render_kwargs_test['raw_noise_std'] = 0.
return render_kwargs_train, render_kwargs_test, start, grad_vars, optimizer, optimizer_coarse
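`create_nerf` wires the model through a `netchunk`-sized batching lambda (`network_query_fn`) so that arbitrarily many query points can be evaluated without exhausting memory. The chunking idea can be sketched in plain Python (illustrative only; the real `run_network` batches torch tensors):

```python
def chunked_apply(fn, inputs, chunk):
    # Apply fn to slices of `inputs` and concatenate the results, bounding
    # peak memory the same way the --netchunk argument bounds network queries.
    out = []
    for i in range(0, len(inputs), chunk):
        out.extend(fn(inputs[i:i + chunk]))
    return out
```

The result is identical to `fn(inputs)`; only the peak batch size changes.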
def compute_weights(raw, z_vals, rays_d, noise=0.):
raw2alpha = lambda raw, dists, act_fn=F.relu: 1.-torch.exp(-act_fn(raw)*dists)
dists = z_vals[...,1:] - z_vals[...,:-1]
dists = torch.cat([dists, torch.full_like(dists[...,:1], 1e10, device=device)], -1) # [N_rays, N_samples]
dists = dists * torch.norm(rays_d[...,None,:], dim=-1)
alpha = raw2alpha(raw[...,3] + noise, dists) # [N_rays, N_samples]
weights = alpha * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1), device=device), 1.-alpha + 1e-10], -1), -1)[:, :-1]
return weights
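For a single ray, the piecewise-constant weights computed above reduce to classic alpha compositing. A pure-Python sketch mirroring `raw2alpha` and the shifted `cumprod`:

```python
import math

def composite_weights(sigmas, dists):
    # alpha_i = 1 - exp(-relu(sigma_i) * dist_i); w_i = T_i * alpha_i, where
    # T_i is the transmittance accumulated over all earlier samples.
    weights, transmittance = [], 1.0
    for sigma, dist in zip(sigmas, dists):
        alpha = 1.0 - math.exp(-max(sigma, 0.0) * dist)
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha
    return weights
```

An opaque first sample absorbs (almost) all the weight; the weights can never sum above 1.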
### Our reformulation to piecewise linear
def compute_weights_piecewise_linear(raw, z_vals, near, far, rays_d, noise=0., return_tau=False):
raw2expr = lambda raw, dists: torch.exp(-raw*dists)
### Concat
z_vals = torch.cat([near, z_vals, far], -1)
dists = z_vals[...,1:] - z_vals[...,:-1]
### Original code
dists = dists * torch.norm(rays_d[...,None,:], dim=-1)
tau = torch.cat([torch.ones((raw.shape[0], 1), device=device)*1e-10, raw[...,3] + noise, torch.ones((raw.shape[0], 1), device=device)*1e10], -1) ### tau(near) = 0, tau(far) = very big (will hit an opaque surface)
tau = F.relu(tau) ## Make positive from proof of DS-NeRF
interval_ave_tau = 0.5 * (tau[...,1:] + tau[...,:-1])
'''
Evaluating exp(-0.5 (tau_{i+1}+tau_i) (s_{i+1}-s_i) )
'''
expr = raw2expr(interval_ave_tau, dists) # [N_rays, N_samples+1]
### Transmittance until s_n
T = torch.cumprod(torch.cat([torch.ones((expr.shape[0], 1), device=device), expr], -1), -1) # [N_rays, N_samples+2], T(near)=1, starts off at 1
### Factor to multiply transmittance with
factor = (1 - expr)
weights = factor * T[:, :-1] # [N_rays, N_samples+1]
if return_tau:
return weights, tau, T
else:
return weights
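Per ray, the piecewise-linear formulation above replaces the per-bin constant density with the trapezoid average of the densities at the bin edges. A pure-Python sketch (without the near/far padding and relu handled above):

```python
import math

def piecewise_linear_weights(taus, dists):
    # taus: N+1 non-negative densities at bin edges; dists: N interval lengths.
    # Interval transmittance uses exp(-0.5 * (tau_i + tau_{i+1}) * dist_i).
    weights, T = [], 1.0
    for i, dist in enumerate(dists):
        expr = math.exp(-0.5 * (taus[i] + taus[i + 1]) * dist)
        weights.append((1.0 - expr) * T)
        T *= expr
    return weights
```

Sanity check: with constant tau the weights must sum to 1 - exp(-tau * total_length), the exact integral of the volume-rendering equation.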
def raw2outputs(raw, z_vals, near, far, rays_d, mode, color_mode, raw_noise_std=0, pytest=False, white_bkgd=False, farcolorfix=False):
"""Transforms model's predictions to semantically meaningful values.
Args:
raw: [num_rays, num_samples along ray, 4]. Prediction from model.
z_vals: [num_rays, num_samples along ray]. Integration time.
rays_d: [num_rays, 3]. Direction of each ray.
Returns:
rgb_map: [num_rays, 3]. Estimated RGB color of a ray.
disp_map: [num_rays]. Disparity map. Inverse of depth map.
acc_map: [num_rays]. Sum of weights along each ray.
weights: [num_rays, num_samples]. Weights assigned to each sampled color.
depth_map: [num_rays]. Estimated distance to object.
"""
rgb = torch.sigmoid(raw[...,:3]) # [N_rays, N_samples, 3]
noise = 0.
if raw_noise_std > 0.:
noise = torch.randn(raw[...,3].shape) * raw_noise_std
# Overwrite randomly sampled data if pytest
if pytest:
np.random.seed(0)
noise = np.random.rand(*list(raw[...,3].shape)) * raw_noise_std
noise = torch.Tensor(noise)
if mode == "linear":
weights, tau, T = compute_weights_piecewise_linear(raw, z_vals, near, far, rays_d, noise, return_tau=True)
if color_mode == "midpoint":
if farcolorfix:
rgb_concat = torch.cat([rgb[: ,0, :].unsqueeze(1), rgb, torch.zeros((rgb[:, -1].shape), device=device).unsqueeze(1)], 1)
else:
rgb_concat = torch.cat([rgb[: ,0, :].unsqueeze(1), rgb, rgb[: ,-1, :].unsqueeze(1)], 1)
rgb_mid = .5 * (rgb_concat[:, 1:, :] + rgb_concat[:, :-1, :])
rgb_map = torch.sum(weights[...,None] * rgb_mid, -2) # [N_rays, 3]
elif color_mode == "left":
rgb_concat = torch.cat([rgb[: ,0, :].unsqueeze(1), rgb], 1)
rgb_map = torch.sum(weights[...,None] * rgb_concat, -2)
        else:
            raise ValueError("Color mode unimplemented, please select left or midpoint.")
### Piecewise linear means take the midpoint
z_vals = torch.cat([near, z_vals, far], -1)
z_vals_mid = .5 * (z_vals[...,1:] + z_vals[...,:-1])
depth_map = torch.sum(weights * z_vals_mid, -1)
elif mode == "constant":
weights = compute_weights(raw, z_vals, rays_d, noise)
rgb_map = torch.sum(weights[...,None] * rgb, -2) # [N_rays, 3]
depth_map = torch.sum(weights * z_vals, -1)
tau = None
T = None
disp_map = 1./torch.max(1e-10 * torch.ones_like(depth_map), depth_map / torch.sum(weights, -1))
acc_map = torch.sum(weights, -1)
if white_bkgd:
rgb_map = rgb_map + (1.-acc_map[...,None])
return rgb_map, disp_map, acc_map, weights, depth_map, tau, T
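The depth, accumulation, and disparity maps above are weighted expectations over the samples. A one-ray numeric sketch of those formulas:

```python
def expected_depth_disp(weights, z_vals, eps=1e-10):
    # depth = sum_i w_i * z_i; disparity = 1 / max(eps, depth / sum_i w_i),
    # matching the depth_map / disp_map expressions above for a single ray.
    depth = sum(w * z for w, z in zip(weights, z_vals))
    acc = sum(weights)
    disp = 1.0 / max(eps, depth / max(acc, eps))
    return depth, disp, acc
```

Dividing by the accumulated weight before inverting keeps disparity meaningful on rays that are not fully opaque.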
def render_rays(ray_batch,
network_fn,
network_query_fn,
N_samples,
mode,
color_mode,
retraw=False,
lindisp=False,
perturb=0.,
N_importance=0,
network_fine=None,
white_bkgd=False,
raw_noise_std=0.,
verbose=False,
pytest=False,
quad_solution_v2=False,
zero_tol = 1e-4,
epsilon = 1e-3,
farcolorfix = False,
constant_init = False):
"""Volumetric rendering.
Args:
ray_batch: array of shape [batch_size, ...]. All information necessary
for sampling along a ray, including: ray origin, ray direction, min
dist, max dist, and unit-magnitude viewing direction.
network_fn: function. Model for predicting RGB and density at each point
in space.
network_query_fn: function used for passing queries to network_fn.
N_samples: int. Number of different times to sample along each ray.
retraw: bool. If True, include model's raw, unprocessed predictions.
lindisp: bool. If True, sample linearly in inverse depth rather than in depth.
perturb: float, 0 or 1. If non-zero, each ray is sampled at stratified
random points in time.
N_importance: int. Number of additional times to sample along each ray.
These samples are only passed to network_fine.
network_fine: "fine" network with same spec as network_fn.
white_bkgd: bool. If True, assume a white background.
      raw_noise_std: float. Std dev of noise added to regularize sigma_a output.
verbose: bool. If True, print more debugging info.
Returns:
rgb_map: [num_rays, 3]. Estimated RGB color of a ray. Comes from fine model.
disp_map: [num_rays]. Disparity map. 1 / depth.
acc_map: [num_rays]. Accumulated opacity along each ray. Comes from fine model.
raw: [num_rays, num_samples, 4]. Raw predictions from model.
rgb0: See rgb_map. Output for coarse model.
disp0: See disp_map. Output for coarse model.
acc0: See acc_map. Output for coarse model.
z_std: [num_rays]. Standard deviation of distances along ray for each
sample.
"""
N_rays = ray_batch.shape[0]
rays_o, rays_d = ray_batch[:,0:3], ray_batch[:,3:6] # [N_rays, 3] each
viewdirs = ray_batch[:,-3:] if ray_batch.shape[-1] > 8 else None
bounds = torch.reshape(ray_batch[...,6:8], [-1,1,2])
near, far = bounds[...,0], bounds[...,1] # [-1,1]
t_vals = torch.linspace(0., 1., steps=N_samples)
if not lindisp:
z_vals = near * (1.-t_vals) + far * (t_vals)
else:
z_vals = 1./(1./near * (1.-t_vals) + 1./far * (t_vals))
z_vals = z_vals.expand([N_rays, N_samples])
if perturb > 0.:
# get intervals between samples
mids = .5 * (z_vals[...,1:] + z_vals[...,:-1])
upper = torch.cat([mids, z_vals[...,-1:]], -1)
lower = torch.cat([z_vals[...,:1], mids], -1)
# stratified samples in those intervals
t_rand = torch.rand(z_vals.shape)
# Pytest, overwrite u with numpy's fixed random numbers
if pytest:
np.random.seed(0)
t_rand = np.random.rand(*list(z_vals.shape))
t_rand = torch.Tensor(t_rand)
z_vals = lower + (upper - lower) * t_rand
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None] # [N_rays, N_samples, 3]
### If constant init then overwrite mode for coarse model to constant first
if constant_init:
mode = "constant"
# raw = run_network(pts)
raw = network_query_fn(pts, viewdirs, network_fn)
rgb_map, disp_map, acc_map, weights, depth_map, tau, T = raw2outputs(raw, z_vals, near, far, rays_d, mode, color_mode, raw_noise_std, pytest=pytest, white_bkgd=white_bkgd, farcolorfix=farcolorfix)
if N_importance > 0:
rgb_map_0, disp_map_0, acc_map_0, depth_map_0, z_vals_0, weights_0 = rgb_map, disp_map, acc_map, depth_map, z_vals, weights
z_vals_mid = .5 * (z_vals[...,1:] + z_vals[...,:-1])
if mode == "linear":
z_samples, _, _, _ = sample_pdf_reformulation(z_vals, weights, tau, T, near, far, N_importance, det=(perturb==0.), pytest=pytest, quad_solution_v2=quad_solution_v2, zero_threshold = zero_tol, epsilon_=epsilon)
elif mode == "constant":
z_samples = sample_pdf(z_vals_mid, weights[...,1:-1], N_importance, det=(perturb==0.), pytest=pytest)
z_samples = z_samples.detach()
######## Clamping in quad solution should have fixed this
z_samples = torch.clamp(z_samples, near, far)
########
z_vals, _ = torch.sort(torch.cat([z_vals, z_samples], -1), -1)
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None] # [N_rays, N_samples + N_importance, 3]
run_fn = network_fn if network_fine is None else network_fine
# raw = run_network(pts, fn=run_fn)
raw = network_query_fn(pts, viewdirs, run_fn)
rgb_map, disp_map, acc_map, weights, depth_map, tau, T = raw2outputs(raw, z_vals, near, far, rays_d, mode, color_mode, raw_noise_std, pytest=pytest, white_bkgd=white_bkgd, farcolorfix=farcolorfix)
# ret = {'rgb_map' : rgb_map, 'disp_map' : disp_map, 'acc_map' : acc_map, 'depth_map' : depth_map, 'pred_hyp' : pred_depth_hyp}
ret = {'rgb_map' : rgb_map, 'disp_map' : disp_map, 'acc_map' : acc_map, 'depth_map' : depth_map}
if retraw:
ret['raw'] = raw
if N_importance > 0:
ret['rgb0'] = rgb_map_0
ret['disp0'] = disp_map_0
ret['depth0'] = depth_map_0
ret['acc0'] = acc_map_0
ret['z_std'] = torch.std(z_samples, dim=-1, unbiased=False) # [N_rays]
for k in ret:
if (torch.isnan(ret[k]).any() or torch.isinf(ret[k]).any()) and DEBUG:
print(f"! [Numerical Error] {k} contains nan or inf.")
return ret
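The `perturb > 0.` branch of `render_rays` draws one uniform sample per bin so the network sees continuous depths during training. Schematically for one ray (pure-Python sketch; the actual code builds its bins from midpoints, so the end bins differ slightly):

```python
import random

def stratified_samples(near, far, n, jitter=True):
    # Split [near, far] into n equal bins and draw one sample per bin;
    # jitter=False degenerates to deterministic bin midpoints.
    edges = [near + (far - near) * i / n for i in range(n + 1)]
    samples = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        t = random.random() if jitter else 0.5
        samples.append(lo + (hi - lo) * t)
    return samples
```

Samples stay sorted and inside [near, far], which is why the code above can concatenate and `torch.sort` coarse and fine samples safely.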
def config_parser():
parser = configargparse.ArgumentParser()
parser.add_argument('--task', default="train", type=str, help='one out of: "train", "test", "video"')
parser.add_argument('--config', is_config_file=True,
help='config file path')
parser.add_argument("--expname", type=str,
help='experiment name')
parser.add_argument("--ckpt_dir", type=str, default="",
help='checkpoint directory')
parser.add_argument("--scene_id", type=str, default="lego",
help='scene identifier')
parser.add_argument("--data_dir", type=str, default="../nerf_synthetic",
help='directory containing the scenes')
parser.add_argument("--dataset", type=str, default="blender",
                        help='dataset used -- selects which dataloader')
# training options
parser.add_argument("--netdepth", type=int, default=8,
help='layers in network')
parser.add_argument("--netwidth", type=int, default=256,
help='channels per layer')
parser.add_argument("--netdepth_fine", type=int, default=8,
help='layers in fine network')
parser.add_argument("--netwidth_fine", type=int, default=256,
help='channels per layer in fine network')
parser.add_argument("--N_rand", type=int, default=32*32*4,
help='batch size (number of random rays per gradient step)')
parser.add_argument("--lrate", type=float, default=5e-4,
help='learning rate')
parser.add_argument("--coarse_lrate", type=float, default=5e-4,
                        help='learning rate for the coarse-network optimizer')
parser.add_argument("--lrate_decay", type=int, default=250,
help='exponential learning rate decay (in 1000 steps)')
parser.add_argument("--chunk", type=int, default=1024*32,
help='number of rays processed in parallel, decrease if running out of memory')
parser.add_argument("--netchunk", type=int, default=1024*64,
help='number of pts sent through network in parallel, decrease if running out of memory')
parser.add_argument("--no_batching", action='store_true',
help='only take random rays from 1 image at a time')
parser.add_argument("--no_reload", action='store_true',
help='do not reload weights from saved ckpt')
parser.add_argument("--ft_path", type=str, default=None,
help='specific weights npy file to reload for coarse network')
# rendering options
parser.add_argument("--N_samples", type=int, default=64,
help='number of coarse samples per ray')
parser.add_argument("--N_importance", type=int, default=128,
help='number of additional fine samples per ray')
parser.add_argument("--perturb", type=float, default=1.,
help='set to 0. for no jitter, 1. for jitter')
parser.add_argument("--use_viewdirs", action='store_true',
help='use full 5D input instead of 3D')
parser.add_argument("--i_embed", type=int, default=0,
help='set 0 for default positional encoding, -1 for none')
parser.add_argument("--multires", type=int, default=10,
help='log2 of max freq for positional encoding (3D location)')
parser.add_argument("--multires_views", type=int, default=4,
help='log2 of max freq for positional encoding (2D direction)')
parser.add_argument("--raw_noise_std", type=float, default=0.,
help='std dev of noise added to regularize sigma_a output, 1e0 recommended')
parser.add_argument("--render_only", action='store_true',
help='do not optimize, reload weights and render out render_poses path')
parser.add_argument("--render_test", action='store_true',
help='render the test set instead of render_poses path')
parser.add_argument("--render_factor", type=int, default=0,
help='downsampling factor to speed up rendering, set 4 or 8 for fast preview')
# training options
parser.add_argument("--precrop_iters", type=int, default=0,
help='number of steps to train on central crops')
parser.add_argument("--precrop_frac", type=float,
default=.5, help='fraction of img taken for central crops')
# dataset options
parser.add_argument("--testskip", type=int, default=1,
help='will load 1/N images from test/val sets, useful for large datasets like deepvoxels')
## blender flags
parser.add_argument("--white_bkgd", action='store_true',
help='set to render synthetic data on a white bkgd (always use for dvoxels)')
parser.add_argument("--half_res", action='store_true',
help='load blender synthetic data at 400x400 instead of 800x800')
## llff flags
parser.add_argument("--factor", type=int, default=8,
help='downsample factor for LLFF images')
parser.add_argument("--no_ndc", action='store_true',
help='do not use normalized device coordinates (set for non-forward facing scenes)')
parser.add_argument("--lindisp", action='store_true',
help='sampling linearly in disparity rather than depth')
parser.add_argument("--spherify", action='store_true',
help='set for spherical 360 scenes')
parser.add_argument("--llffhold", type=int, default=8,
help='will take every 1/N images as LLFF test set, paper uses 8')
# logging/saving options
parser.add_argument("--num_iterations", type=int, default=500000,
help='number of iterations for training')
parser.add_argument("--i_print", type=int, default=100,
                        help='frequency of console printout and metric logging')
parser.add_argument("--i_img", type=int, default=600000,
help='frequency of tensorboard image logging')
parser.add_argument("--i_weights", type=int, default=100000,
help='frequency of weight ckpt saving')
parser.add_argument("--i_testset", type=int, default=500000,
help='frequency of testset saving')
parser.add_argument("--i_video", type=int, default=500000,
help='frequency of render_poses video saving')
### For PWL ###
    parser.add_argument("--mode", type=str, default="constant",
                        help='rendering opacity aggregation mode -- whether to use piecewise constant (vanilla) or piecewise linear (reformulation)')
    parser.add_argument("--color_mode", type=str, default="midpoint",
                        help='rendering color aggregation mode -- whether to use the left bin or the midpoint')
    parser.add_argument('--quad_solution_v2', default=True,
                        type=lambda x: str(x).lower() not in ('false', '0'))  # argparse's type=bool would treat any non-empty string, including "False", as True
### Epsilon and zero tol in quadratic solution
parser.add_argument("--zero_tol", type=float, default=1e-4,
help='zero tol to revert to piecewise constant assumption')
parser.add_argument("--epsilon", type=float, default=1e-3,
help='epsilon value in the increasing and decreasing cases or max(x,epsilon)')
parser.add_argument('--set_near_plane', default= 2., type=float)
parser.add_argument("--constant_init", type=int, default=1000,
help='number of iterations to use constant aggregation')
parser.add_argument('--test_dist', default= 1.0, type=float)
parser.add_argument("--eval_scene_id", type=str, default="chair_rgba_fixdist_nv100_dist0.25-1.0-4_depth_sfn",
help='scene identifier for eval')
parser.add_argument("--eval_data_dir", type=str, default="../nerf_synthetic/fixed_dist_new-rgba/",
help='directory containing the scenes for eval')
### DTU flags
parser.add_argument("--dtu_scene_id", type=int, default=21,
help='scan id for DTU dataset to render')
parser.add_argument("--num_train", type=int, default=40,
help='number of training views to use (1 - 49)')
    parser.add_argument("--dtu_split", type=str, default=None,
                        help='path to a json file holding an existing DTU train/test split')
##################
return parser
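`--lrate` and `--lrate_decay` are consumed by the training loop, which lies outside this excerpt; assuming the usual NeRF exponential schedule, the decayed rate at a given step is:

```python
def decayed_lrate(lrate, lrate_decay, global_step, decay_rate=0.1):
    # new_lrate = lrate * decay_rate ** (step / (lrate_decay * 1000)),
    # i.e. the learning rate shrinks by 10x every lrate_decay * 1000 steps.
    return lrate * decay_rate ** (global_step / (lrate_decay * 1000))
```

With the defaults above (lrate=5e-4, lrate_decay=250), the rate reaches 5e-5 after 250k iterations.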
def train():
parser = config_parser()
args = parser.parse_args()
print(args.white_bkgd)
if args.task == "train":
if args.expname is None:
args.expname = "{}_{}".format(datetime.datetime.fromtimestamp(time.time()).strftime('%Y%m%d_%H%M%S'), args.scene_id)
args_file = os.path.join(args.ckpt_dir, args.expname, 'args.json')
os.makedirs(os.path.join(args.ckpt_dir, args.expname), exist_ok=True)
with open(args_file, 'w') as af:
json.dump(vars(args), af, indent=4)
else:
if args.expname is None:
print("Error: Specify experiment name for test or video")
exit()
tmp_task = args.task
tmp_data_dir = args.data_dir
tmp_scene_id = args.scene_id
tmp_dataset = args.dataset
tmp_test_dist = args.test_dist
tmp_ckpt_dir = args.ckpt_dir
tmp_set_near_plane = args.set_near_plane
tmp_white_bkgd = args.white_bkgd
tmp_eval_scene_id = args.eval_scene_id
tmp_eval_data_dir = args.eval_data_dir
# tmp_white_bkgd = False
tmp_test_skip = args.testskip
# tmp_mode = args.mode
# tmp_N_samples = args.N_samples
# tmp_N_importance = args.N_importance
# load nerf parameters from training
args_file = os.path.join(args.ckpt_dir, args.expname, 'args.json')
with open(args_file, 'r') as af:
args_dict = json.load(af)
args = Namespace(**args_dict)
# task and paths are not overwritten
args.task = tmp_task
args.data_dir = tmp_data_dir
args.ckpt_dir = tmp_ckpt_dir
# args.mode = tmp_mode
args.train_jsonfile = 'transforms_train.json'
args.set_near_plane = tmp_set_near_plane
# args.N_samples = tmp_N_samples
# args.N_importance = tmp_N_importance
args.dataset = tmp_dataset
args.test_dist = tmp_test_dist
args.scene_id = tmp_scene_id
args.white_bkgd = tmp_white_bkgd
args.eval_scene_id = tmp_eval_scene_id
args.eval_data_dir = tmp_eval_data_dir
args.testskip = tmp_test_skip
print('\n'.join(f'{k}={v}' for k, v in vars(args).items()))
args.n_gpus = torch.cuda.device_count()
print(f"Using {args.n_gpus} GPU(s).")
# Load data
scene_data_dir = os.path.join(args.data_dir, args.scene_id)
K = None
if args.dataset == 'llff':
images, poses, bds, render_poses, i_test = load_llff_data(scene_data_dir, args.factor,
recenter=True, bd_factor=.75,
spherify=args.spherify)
hwf = poses[0,:3,-1]
poses = poses[:,:3,:4]
print('Loaded llff', images.shape, render_poses.shape, hwf, scene_data_dir)
if not isinstance(i_test, list):
i_test = [i_test]
if args.llffhold > 0:
print('Auto LLFF holdout,', args.llffhold)
i_test = np.arange(images.shape[0])[::args.llffhold]
i_val = i_test
i_train = np.array([i for i in np.arange(int(images.shape[0])) if
(i not in i_test and i not in i_val)])
print('DEFINING BOUNDS')
if args.no_ndc:
near = np.ndarray.min(bds) * .9
far = np.ndarray.max(bds) * 1.
else:
near = 0.
far = 1.
print('NEAR FAR', near, far)
elif args.dataset == 'blender':
images, poses, render_poses, hwf, i_split = load_blender_data(scene_data_dir, args.half_res, args.testskip)
print('Loaded blender', images.shape, render_poses.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
# near = 2.
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == "blender2":
images, poses, render_poses, hwf, i_split = load_scene_blender2(scene_data_dir, half_res=args.half_res)
print('Loaded blender2', images.shape, render_poses.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
# near = 2.
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == "blender_fixeddist":
images, poses, render_poses, hwf, i_split = load_scene_blender_fixed_dist_new(scene_data_dir, half_res=args.half_res, train_dist=1.0, test_dist=args.test_dist)
print('Loaded blender fixed dist', images.shape, hwf, scene_data_dir)
i_train, i_val, i_test = i_split
near = args.set_near_plane
print("Set near plane to: " + str(near))
far = 6.
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'LINEMOD':
images, poses, render_poses, hwf, K, i_split, near, far = load_LINEMOD_data(scene_data_dir, args.half_res, args.testskip)
print(f'Loaded LINEMOD, images shape: {images.shape}, hwf: {hwf}, K: {K}')
print(f'[CHECK HERE] near: {near}, far: {far}.')
i_train, i_val, i_test = i_split
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'DTU':
# use the existing split
if args.dtu_split is not None:
with open(args.dtu_split, 'r') as ff:
train_split = json.load(ff)
else:
train_split = None
images, Ks, poses, render_poses, hwf, i_split, near, far, splits = load_dtu(args.data_dir, args.dtu_scene_id, num_train=args.num_train, half_res=args.half_res, train_split=train_split)
K = Ks[0]
print(f'Loaded DTU, images shape: {images.shape}, hwf: {hwf}, K: {K}')
print(f'[CHECK HERE] near: {near}, far: {far}.')
i_train, i_test = i_split
i_val = i_test
save_json = build_json_for_dtu(splits, Ks, poses, near, far)
save_split_file = os.path.join(args.ckpt_dir, args.expname, 'split.json')
with open(save_split_file, 'w') as f:
json.dump(save_json, f, indent=4)
if args.white_bkgd:
images = images[...,:3]*images[...,-1:] + (1.-images[...,-1:])
else:
images = images[...,:3]
elif args.dataset == 'DTU2':
# use the existing split
if args.dtu_split is not None:
with open(args.dtu_split, 'r') as ff:
train_split = json.load(ff)
else:
train_split = None | images, K, poses, render_poses, hwf, i_split, near, far, splits = load_dtu2(args.data_dir, args.dtu_scene_id, num_train=args.num_train, half_res=args.half_res, train_split=train_split) | 2 | 2023-10-30 06:38:00+00:00 | 12k |
sehyunkwon/ICTC | step1/llava/model/language_model/mpt/modeling_mpt.py | [
{
"identifier": "attn_bias_shape",
"path": "step1/llava/model/language_model/mpt/attention.py",
"snippet": "def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id):\n if attn_impl == 'flash':\n return None\n elif attn_impl in ['torch', 'triton']:\n ... | import math
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Optional, Tuple, Union
from transformers import PreTrainedModel, PreTrainedTokenizer, PreTrainedTokenizerFast
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
from .attention import attn_bias_shape, build_attn_bias
from .blocks import MPTBlock
from .custom_embedding import SharedEmbedding
from .norm import NORM_CLASS_REGISTRY
from .configuration_mpt import MPTConfig
from .adapt_tokenizer import AutoTokenizerForMOD, adapt_tokenizer_for_denoising
from .hf_prefixlm_converter import add_bidirectional_mask_if_missing, convert_hf_causal_lm_to_prefix_lm
from .meta_init_context import init_empty_weights
from .param_init_fns import MODEL_INIT_REGISTRY, generic_param_init_fn_
from .flash_attn_triton import flash_attn_func | 7,385 | """A simple, flexible implementation of a GPT model.
Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py
"""
try:
except:
pass
Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
class MPTPreTrainedModel(PreTrainedModel):
config_class = MPTConfig
base_model_prefix = 'model'
_no_split_modules = ['MPTBlock']
class MPTModel(MPTPreTrainedModel):
def __init__(self, config: MPTConfig):
config._validate_config()
super().__init__(config)
self.attn_impl = config.attn_config['attn_impl']
self.prefix_lm = config.attn_config['prefix_lm']
self.attn_uses_sequence_id = config.attn_config['attn_uses_sequence_id']
self.alibi = config.attn_config['alibi']
self.alibi_bias_max = config.attn_config['alibi_bias_max']
if config.init_device == 'mixed':
if dist.get_local_rank() == 0:
config.init_device = 'cpu'
else:
config.init_device = 'meta'
if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys():
norm_options = ' | '.join(NORM_CLASS_REGISTRY.keys())
raise NotImplementedError(f'Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options}).')
norm_class = NORM_CLASS_REGISTRY[config.norm_type.lower()]
self.embedding_fraction = config.embedding_fraction
self.wte = SharedEmbedding(config.vocab_size, config.d_model, device=config.init_device)
if not self.alibi:
self.wpe = torch.nn.Embedding(config.max_seq_len, config.d_model, device=config.init_device)
self.emb_drop = nn.Dropout(config.emb_pdrop)
self.blocks = nn.ModuleList([MPTBlock(device=config.init_device, **config.to_dict()) for _ in range(config.n_layers)])
self.norm_f = norm_class(config.d_model, device=config.init_device)
if config.init_device != 'meta':
print(f'You are using config.init_device={config.init_device!r}, but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.')
self.apply(self.param_init_fn)
self.is_causal = not self.prefix_lm
self._attn_bias_initialized = False
self.attn_bias = None
self.attn_bias_shape = attn_bias_shape(self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id)
if config.no_bias:
for module in self.modules():
if hasattr(module, 'bias') and isinstance(module.bias, nn.Parameter):
if config.verbose:
warnings.warn(f'Removing bias ({module.bias}) from {module}.')
module.register_parameter('bias', None)
if config.verbose and config.verbose > 2:
print(self)
if 'verbose' not in self.config.init_config:
self.config.init_config['verbose'] = self.config.verbose
if self.config.init_config['verbose'] > 1:
init_fn_name = self.config.init_config['name']
warnings.warn(f'Using {init_fn_name} initialization.')
self.gradient_checkpointing = False
def get_input_embeddings(self):
return self.wte
def set_input_embeddings(self, value):
self.wte = value
@torch.no_grad()
def _attn_bias(self, device, dtype, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None):
if not self._attn_bias_initialized:
if self.attn_bias_shape:
self.attn_bias = torch.zeros(self.attn_bias_shape, device=device, dtype=dtype)
| """A simple, flexible implementation of a GPT model.
Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py
"""
try:
except:
pass
Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
class MPTPreTrainedModel(PreTrainedModel):
config_class = MPTConfig
base_model_prefix = 'model'
_no_split_modules = ['MPTBlock']
class MPTModel(MPTPreTrainedModel):
def __init__(self, config: MPTConfig):
config._validate_config()
super().__init__(config)
self.attn_impl = config.attn_config['attn_impl']
self.prefix_lm = config.attn_config['prefix_lm']
self.attn_uses_sequence_id = config.attn_config['attn_uses_sequence_id']
self.alibi = config.attn_config['alibi']
self.alibi_bias_max = config.attn_config['alibi_bias_max']
if config.init_device == 'mixed':
if dist.get_local_rank() == 0:
config.init_device = 'cpu'
else:
config.init_device = 'meta'
if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys():
norm_options = ' | '.join(NORM_CLASS_REGISTRY.keys())
raise NotImplementedError(f'Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options}).')
norm_class = NORM_CLASS_REGISTRY[config.norm_type.lower()]
self.embedding_fraction = config.embedding_fraction
self.wte = SharedEmbedding(config.vocab_size, config.d_model, device=config.init_device)
if not self.alibi:
self.wpe = torch.nn.Embedding(config.max_seq_len, config.d_model, device=config.init_device)
self.emb_drop = nn.Dropout(config.emb_pdrop)
self.blocks = nn.ModuleList([MPTBlock(device=config.init_device, **config.to_dict()) for _ in range(config.n_layers)])
self.norm_f = norm_class(config.d_model, device=config.init_device)
if config.init_device != 'meta':
print(f'You are using config.init_device={config.init_device!r}, but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.')
self.apply(self.param_init_fn)
self.is_causal = not self.prefix_lm
self._attn_bias_initialized = False
self.attn_bias = None
self.attn_bias_shape = attn_bias_shape(self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id)
if config.no_bias:
for module in self.modules():
if hasattr(module, 'bias') and isinstance(module.bias, nn.Parameter):
if config.verbose:
warnings.warn(f'Removing bias ({module.bias}) from {module}.')
module.register_parameter('bias', None)
if config.verbose and config.verbose > 2:
print(self)
if 'verbose' not in self.config.init_config:
self.config.init_config['verbose'] = self.config.verbose
if self.config.init_config['verbose'] > 1:
init_fn_name = self.config.init_config['name']
warnings.warn(f'Using {init_fn_name} initialization.')
self.gradient_checkpointing = False
def get_input_embeddings(self):
return self.wte
def set_input_embeddings(self, value):
self.wte = value
@torch.no_grad()
def _attn_bias(self, device, dtype, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None):
if not self._attn_bias_initialized:
if self.attn_bias_shape:
self.attn_bias = torch.zeros(self.attn_bias_shape, device=device, dtype=dtype) | self.attn_bias = build_attn_bias(self.attn_impl, self.attn_bias, self.config.n_heads, self.config.max_seq_len, causal=self.is_causal, alibi=self.alibi, alibi_bias_max=self.alibi_bias_max) | 1 | 2023-10-27 05:00:14+00:00 | 12k |
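The `_attn_bias` method above uses a common lazy-initialization pattern: the (potentially large) attention-bias buffer is built only on first use, guarded by the `_attn_bias_initialized` flag, and reused afterwards. A toy pure-Python illustration of that pattern (this is a sketch, not the MPT code itself; `torch.zeros` and `build_attn_bias` are replaced by a plain nested list):

```python
class LazyBuffer:
    """Illustrative stand-in for the `_attn_bias` caching above: an
    expensive buffer is built once, on first access, then reused."""

    def __init__(self, shape):
        self.shape = shape
        self._initialized = False
        self.buffer = None
        self.build_count = 0  # only here to make the caching observable

    def get(self):
        if not self._initialized:
            # stand-in for torch.zeros(...) followed by build_attn_bias(...)
            rows, cols = self.shape
            self.buffer = [[0.0] * cols for _ in range(rows)]
            self.build_count += 1
            self._initialized = True
        return self.buffer
```

Repeated calls to `get()` return the same object without rebuilding it, which is exactly what the `_attn_bias_initialized` flag buys in the model code.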
Trustworthy-AI-Group/TransferAttack | transferattack/model_related/dhf.py | [
{
"identifier": "dhf_inception_v3",
"path": "transferattack/model_related/dhf_networks/inception.py",
"snippet": "def dhf_inception_v3(mixup_weight_max: float, random_keep_prob: float, dhf_modules = None, weights: Optional[Inception_V3_Weights] = None, progress: bool = True, **kwargs: Any) -> Inception3... | from torch import Tensor
from ..utils import *
from .dhf_networks.inception import dhf_inception_v3
from .dhf_networks.inc_res_v2 import dhf_inc_res_v2
from .dhf_networks.resnet import dhf_resnet18, dhf_resnet50, dhf_resnet101, dhf_resnet152
from ..gradient.mifgsm import MIFGSM
from ..gradient.nifgsm import NIFGSM
from ..input_transformation.dim import DIM
from ..input_transformation.tim import TIM
from ..input_transformation.sim import SIM
from ..input_transformation.admix import Admix
from .dhf_networks import utils | 7,751
| # example bash: python main.py --attack=mifgsm_dhf
support_models = {
"inc_v3": dhf_inception_v3,
"inc_res": dhf_inc_res_v2,
'resnet18': dhf_resnet18,
"resnet50": dhf_resnet50,
"resnet101": dhf_resnet101,
"resnet152": dhf_resnet152,
}
"""
Diversifying the High-level Features for better Adversarial Transferability BMVC 2023 (https://arxiv.org/abs/2304.10136)
"""
class DHF_IFGSM(MIFGSM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
self.decay = 0.
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = data.clone().detach().to(self.device).requires_grad_(False)
self.update_mixup_feature(self.benign_images)
# return super().forward(data, label, **kwargs)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
def preprocess(self, *args, **kwargs):
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
class DHF_MIFGSM(MIFGSM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def preprocess(self, *args, **kwargs):
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = data.clone().detach().to(self.device).requires_grad_(False)
self.update_mixup_feature(self.benign_images)
# return super().forward(data, label, **kwargs)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
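Each `forward()` loop above tracks a per-example `correct_indicator` (1 where the surrogate still predicts the true label, with labels repeated `num_scale` times to cover the scaled/admixed copies) and hands `1 - correct_indicator` to `utils.turn_on_dhf_attack_setting` as the DHF indicator. A pure-Python sketch of that bookkeeping, with plain lists standing in for the tensors:

```python
def dhf_indicator(pred_labels, true_labels, num_scale=1):
    """Sketch of the indicator arithmetic used in the forward() loops
    above; tensors are replaced by lists for illustration."""
    # mirrors label.repeat(num_scale): every label appears once per scaled copy
    repeated = list(true_labels) * num_scale
    correct = [1.0 if p == t else 0.0 for p, t in zip(pred_labels, repeated)]
    # dhf_indicator = 1 - correct_indicator, as passed to turn_on_dhf_attack_setting
    return [1.0 - c for c in correct]
```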
class DHF_NIFGSM(NIFGSM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = data.clone().detach().to(self.device).requires_grad_(False)
self.update_mixup_feature(self.benign_images)
# return super().forward(data, label, **kwargs)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
def preprocess(self, *args, **kwargs):
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
class DHF_DIM(DIM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = data.clone().detach().to(self.device).requires_grad_(False)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
def preprocess(self, *args, **kwargs):
self.reuse_rnds = False
mixup_input = self.transform(self.benign_images)
self.update_mixup_feature(mixup_input)
self.reuse_rnds = True
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
class DHF_TIM(TIM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = data.clone().detach().to(self.device).requires_grad_(False)
self.update_mixup_feature(self.benign_images)
# return super().forward(data, label, **kwargs)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
def preprocess(self, *args, **kwargs):
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
class DHF_SIM(SIM):
"""
DHF Attack
Arguments:
model (str): the surrogate model name for attack.
        mixup_weight_max (float): the maximum of the mixup weight.
random_keep_prob (float): the keep probability when adjusting the feature elements.
"""
def __init__(self, model_name='inc_v3', dhf_modules=None, mixup_weight_max=0.2, random_keep_prob=0.9, *args, **kwargs):
        self.dhf_modules = dhf_modules
self.mixup_weight_max = mixup_weight_max
self.random_keep_prob = random_keep_prob
self.benign_images = None
super().__init__(model_name, *args, **kwargs)
def load_model(self, model_name):
if model_name in support_models.keys():
model = wrap_model(support_models[model_name](mixup_weight_max=self.mixup_weight_max,
random_keep_prob=self.random_keep_prob, weights='DEFAULT').eval().cuda())
else:
raise ValueError('Model {} not supported for DHF'.format(model_name))
return model
def update_mixup_feature(self, data: Tensor):
utils.turn_on_dhf_update_mf_setting(model=self.model)
_ = self.model(data)
utils.trun_off_dhf_update_mf_setting(model=self.model)
def forward(self, data: Tensor, label: Tensor, **kwargs):
self.benign_images = self.transform(data.clone().detach().to(self.device).requires_grad_(False))
self.update_mixup_feature(self.benign_images)
# return super().forward(data, label, **kwargs)
data = data.clone().detach().to(self.device)
label = label.clone().detach().to(self.device)
delta = self.init_delta(data)
# Initialize correct indicator
num_scale = 1 if not hasattr(self, "num_scale") else self.num_scale
num_scale = num_scale if not hasattr(self, "num_admix") else num_scale * self.num_admix
correct_indicator = torch.ones(size=(len(data)*num_scale,), device=self.device)
momentum = 0
for _ in range(self.epoch):
self.preprocess(correct_indicator=correct_indicator)
# Obtain the output
logits = self.get_logits(self.transform(data+delta))
# Update correct indicator
correct_indicator = (torch.max(logits.detach(), dim=1)[1] == label.repeat(num_scale)).to(torch.float32)
# Calculate the loss
loss = self.get_loss(logits, label)
# Calculate the gradients
grad = self.get_grad(loss, delta)
# Calculate the momentum
momentum = self.get_momentum(grad, momentum)
# Update adversarial perturbation
delta = self.update_delta(delta, data, momentum, self.alpha)
return delta.detach()
def preprocess(self, *args, **kwargs):
utils.turn_on_dhf_attack_setting(self.model, dhf_indicator=1-kwargs["correct_indicator"])
| class DHF_Admix(Admix): | 11 | 2023-10-31 03:43:26+00:00 | 12k |
hydrogram/hydrogram | hydrogram/dispatcher.py | [
{
"identifier": "utils",
"path": "hydrogram/utils.py",
"snippet": "async def ainput(prompt: str = \"\", *, hide: bool = False):\ndef get_input_media_from_file_id(\n file_id: str, expected_file_type: FileType = None, ttl_seconds: Optional[int] = None\n) -> Union[\"raw.types.InputMediaPhoto\", \"raw.ty... | import asyncio
import inspect
import logging
import hydrogram
from collections import OrderedDict
from hydrogram import utils
from hydrogram.handlers import (
CallbackQueryHandler,
ChatJoinRequestHandler,
ChatMemberUpdatedHandler,
ChosenInlineResultHandler,
DeletedMessagesHandler,
EditedMessageHandler,
InlineQueryHandler,
MessageHandler,
PollHandler,
RawUpdateHandler,
UserStatusHandler,
)
from hydrogram.raw.types import (
UpdateBotCallbackQuery,
UpdateBotChatInviteRequester,
UpdateBotInlineQuery,
UpdateBotInlineSend,
UpdateChannelParticipant,
UpdateChatParticipant,
UpdateDeleteChannelMessages,
UpdateDeleteMessages,
UpdateEditChannelMessage,
UpdateEditMessage,
UpdateInlineBotCallbackQuery,
UpdateMessagePoll,
UpdateNewChannelMessage,
UpdateNewMessage,
UpdateNewScheduledMessage,
UpdateUserStatus,
) | 7,219 | # Hydrogram - Telegram MTProto API Client Library for Python
# Copyright (C) 2017-2023 Dan <https://github.com/delivrance>
# Copyright (C) 2023-present Hydrogram <https://hydrogram.org>
#
# This file is part of Hydrogram.
#
# Hydrogram is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Hydrogram is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Hydrogram. If not, see <http://www.gnu.org/licenses/>.
log = logging.getLogger(__name__)
class Dispatcher:
NEW_MESSAGE_UPDATES = (
UpdateNewMessage,
UpdateNewChannelMessage,
UpdateNewScheduledMessage,
)
EDIT_MESSAGE_UPDATES = (UpdateEditMessage, UpdateEditChannelMessage)
DELETE_MESSAGES_UPDATES = (UpdateDeleteMessages, UpdateDeleteChannelMessages)
CALLBACK_QUERY_UPDATES = (UpdateBotCallbackQuery, UpdateInlineBotCallbackQuery)
CHAT_MEMBER_UPDATES = (UpdateChatParticipant, UpdateChannelParticipant)
USER_STATUS_UPDATES = (UpdateUserStatus,)
BOT_INLINE_QUERY_UPDATES = (UpdateBotInlineQuery,)
POLL_UPDATES = (UpdateMessagePoll,)
CHOSEN_INLINE_RESULT_UPDATES = (UpdateBotInlineSend,)
CHAT_JOIN_REQUEST_UPDATES = (UpdateBotChatInviteRequester,)
def __init__(self, client: "hydrogram.Client"):
self.client = client
self.loop = asyncio.get_event_loop()
self.handler_worker_tasks = []
self.locks_list = []
self.updates_queue = asyncio.Queue()
self.groups = OrderedDict()
async def message_parser(update, users, chats):
return (
await hydrogram.types.Message._parse(
client=self.client,
message=update.message,
users=users,
chats=chats,
is_scheduled=isinstance(update, UpdateNewScheduledMessage),
),
MessageHandler,
)
async def edited_message_parser(update, users, chats):
# Edited messages are parsed the same way as new messages, but the handler is different
parsed, _ = await message_parser(update, users, chats)
return (parsed, EditedMessageHandler)
async def deleted_messages_parser(update, users, chats):
return (
utils.parse_deleted_messages(self.client, update),
DeletedMessagesHandler,
)
async def callback_query_parser(update, users, chats):
return (
await hydrogram.types.CallbackQuery._parse(self.client, update, users),
CallbackQueryHandler,
)
async def user_status_parser(update, users, chats):
return (
hydrogram.types.User._parse_user_status(self.client, update),
UserStatusHandler,
)
async def inline_query_parser(update, users, chats):
return (
hydrogram.types.InlineQuery._parse(self.client, update, users),
InlineQueryHandler,
)
async def poll_parser(update, users, chats):
return (
hydrogram.types.Poll._parse_update(self.client, update),
PollHandler,
)
async def chosen_inline_result_parser(update, users, chats):
return (
hydrogram.types.ChosenInlineResult._parse(self.client, update, users),
ChosenInlineResultHandler,
)
async def chat_member_updated_parser(update, users, chats):
return (
hydrogram.types.ChatMemberUpdated._parse(self.client, update, users, chats),
ChatMemberUpdatedHandler,
)
async def chat_join_request_parser(update, users, chats):
return (
hydrogram.types.ChatJoinRequest._parse(self.client, update, users, chats),
| ChatJoinRequestHandler, | 2 | 2023-10-29 16:16:37+00:00 | 12k
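The `Dispatcher` above pairs each raw update type with a small parser coroutine that returns a `(parsed update, handler class)` tuple, and edited messages deliberately reuse the new-message parser while swapping in a different handler. A minimal synchronous stand-in for that dispatch table (the types and handler names here are placeholders, not hydrogram's real classes):

```python
class NewMessage:
    pass

class EditedMessage:
    pass

def message_parser(update):
    return ('parsed-message', 'MessageHandler')

def edited_message_parser(update):
    # same parsing as new messages, only the handler differs
    parsed, _ = message_parser(update)
    return (parsed, 'EditedMessageHandler')

# map each raw update type to its parser, like Dispatcher's update tables
PARSERS = {
    NewMessage: message_parser,
    EditedMessage: edited_message_parser,
}

def dispatch(update):
    parser = PARSERS.get(type(update))
    return parser(update) if parser else (None, None)
```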
chenruduan/OAReactDiff | oa_reactdiff/tests/dynamics/test_egnn_dynamics.py | [
{
"identifier": "LEFTNet",
"path": "oa_reactdiff/model/leftnet.py",
"snippet": "class LEFTNet(torch.nn.Module):\n r\"\"\"\n LEFTNet\n\n Args:\n pos_require_grad (bool, optional): If set to :obj:`True`, will require to take derivative of model output with respect to the atomic positions. ... | import unittest
import torch
from typing import List, Optional
from torch import Tensor, tensor, nn
from pytorch_lightning import seed_everything
from oa_reactdiff.model import LEFTNet
from oa_reactdiff.dynamics import EGNNDynamics, Confidence
from oa_reactdiff.utils import (
get_n_frag_switch,
get_mask_for_frag,
get_edges_index,
) | 8,404 | """Test model forward pass and equivariance."""
seed_everything(0, workers=True)
def init_weights(m):
r"""Weight initialization for all MLP.
Args:
m: a nn.Module
"""
if isinstance(m, nn.Linear):
gain = 0.5
nn.init.xavier_uniform_(m.weight, gain=gain)
if m.bias is not None:
nn.init.uniform_(m.bias, -gain, gain)
egnn_config = dict(
in_node_nf=8,
in_edge_nf=2,
hidden_nf=2,
edge_hidden_nf=3,
act_fn="swish",
n_layers=6,
attention=True,
out_node_nf=None,
tanh=False,
coords_range=15.0,
norm_constant=1.0,
inv_sublayers=2,
sin_embedding=False,
normalization_factor=100.0,
aggregation_method="sum",
)
leftnet_config = dict(
pos_require_grad=False,
cutoff=5.0,
num_layers=2,
hidden_channels=32,
num_radial=8,
in_node_nf=8,
)
node_nfs: List[int] = [4, 5, 6]
edge_nf: int = 3
condition_nf: int = 3
fragment_names: List[str] = ["inorg_node", "org_edge", "org_node"]
pos_dim: int = 3
update_pocket_coords: bool = True
condition_time: bool = True
edge_cutoff: Optional[float] = None
class TestModel(unittest.TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.egnn_dynamics = EGNNDynamics(
model_config=egnn_config,
node_nfs=node_nfs,
edge_nf=edge_nf,
condition_nf=condition_nf,
fragment_names=fragment_names,
pos_dim=pos_dim,
update_pocket_coords=update_pocket_coords,
condition_time=condition_time,
edge_cutoff=edge_cutoff,
)
cls.egnn_dynamics.model.apply(init_weights)
cls.leftnet_dynamics = EGNNDynamics(
model_config=leftnet_config,
node_nfs=node_nfs,
edge_nf=edge_nf,
condition_nf=condition_nf,
fragment_names=fragment_names,
pos_dim=pos_dim,
update_pocket_coords=update_pocket_coords,
condition_time=condition_time,
edge_cutoff=edge_cutoff,
model=LEFTNet,
)
cls.dynamics = [cls.egnn_dynamics, cls.leftnet_dynamics]
cls.n_samples = 2
cls.fragments_nodes = [
torch.tensor([2, 0]),
torch.tensor([2, 3]),
torch.tensor([1, 2]),
]
cls.fragments_masks = [
get_mask_for_frag(natm_nodes) for natm_nodes in cls.fragments_nodes
]
cls.conditions = torch.rand(cls.n_samples, condition_nf)
cls.n_frag_switch = get_n_frag_switch(cls.fragments_nodes)
cls.combined_mask = torch.cat(cls.fragments_masks) | cls.edge_index = get_edges_index(cls.combined_mask, remove_self_edge=True) | 3 | 2023-10-30 02:53:38+00:00 | 12k
Weitheskmt/WeiDMD | build/lib/weidmd/cdmd.py | [
{
"identifier": "DMDBase",
"path": "build/lib/weidmd/dmdbase.py",
"snippet": "class DMDBase:\n \"\"\"\n Dynamic Mode Decomposition base class.\n\n :param svd_rank: the rank for the truncation; If 0, the method computes the\n optimal rank and uses it for truncation; if positive interger, ... | import numpy as np
import scipy.sparse
from scipy.linalg import sqrtm
from .dmdbase import DMDBase
from .dmdoperator import DMDOperator
from .snapshots import Snapshots
from .utils import compute_svd, compute_tlsq | 10,695 |
from __future__ import division
class CDMDOperator(DMDOperator):
"""
DMD operator for Compressed-DMD.
:param svd_rank: the rank for the truncation; If 0, the method computes the
        optimal rank and uses it for truncation; if a positive integer, the
method uses the argument for the truncation; if float between 0 and 1,
the rank is the number of the biggest singular values that are needed
to reach the 'energy' specified by `svd_rank`; if -1, the method does
not compute truncation.
:type svd_rank: int or float
:param rescale_mode: Scale Atilde as shown in
10.1016/j.jneumeth.2015.10.010 (section 2.4) before computing its
eigendecomposition. None means no rescaling, 'auto' means automatic
rescaling using singular values, otherwise the scaling factors.
:type rescale_mode: {'auto'} or None or numpy.ndarray
:param bool forward_backward: If True, the low-rank operator is computed
like in fbDMD (reference: https://arxiv.org/abs/1507.02264). Default is
False.
:param sorted_eigs: Sort eigenvalues (and modes/dynamics accordingly) by
magnitude if `sorted_eigs='abs'`, by real part (and then by imaginary
part to break ties) if `sorted_eigs='real'`. Default: False.
:type sorted_eigs: {'real', 'abs'} or False
:param tikhonov_regularization: Tikhonov parameter for the regularization.
If `None`, no regularization is applied, if `float`, it is used as the
:math:`\lambda` tikhonov parameter.
:type tikhonov_regularization: int or float
"""
def __init__(
self,
svd_rank,
rescale_mode,
forward_backward,
sorted_eigs,
tikhonov_regularization,
):
super().__init__(
svd_rank=svd_rank,
exact=True,
rescale_mode=rescale_mode,
forward_backward=forward_backward,
sorted_eigs=sorted_eigs,
tikhonov_regularization=tikhonov_regularization,
)
self._Atilde = None
def compute_operator(self, compressedX, compressedY, nonCompressedY):
"""
Compute the low-rank operator.
:param numpy.ndarray compressedX: the compressed version of the matrix
containing the snapshots x0,..x{n-1} by column.
:param numpy.ndarray compressedY: the compressed version of the matrix
containing the snapshots x1,..x{n} by column.
:param numpy.ndarray nonCompressedY: the matrix containing the
snapshots x1,..x{n} by column.
:return: the (truncated) left-singular vectors matrix, the (truncated)
singular values array, the (truncated) right-singular vectors
matrix of compressedX.
:rtype: numpy.ndarray, numpy.ndarray, numpy.ndarray
"""
|
from __future__ import division
class CDMDOperator(DMDOperator):
"""
DMD operator for Compressed-DMD.
:param svd_rank: the rank for the truncation; If 0, the method computes the
optimal rank and uses it for truncation; if positive interger, the
method uses the argument for the truncation; if float between 0 and 1,
the rank is the number of the biggest singular values that are needed
to reach the 'energy' specified by `svd_rank`; if -1, the method does
not compute truncation.
:type svd_rank: int or float
:param rescale_mode: Scale Atilde as shown in
10.1016/j.jneumeth.2015.10.010 (section 2.4) before computing its
eigendecomposition. None means no rescaling, 'auto' means automatic
rescaling using singular values, otherwise the scaling factors.
:type rescale_mode: {'auto'} or None or numpy.ndarray
:param bool forward_backward: If True, the low-rank operator is computed
like in fbDMD (reference: https://arxiv.org/abs/1507.02264). Default is
False.
:param sorted_eigs: Sort eigenvalues (and modes/dynamics accordingly) by
magnitude if `sorted_eigs='abs'`, by real part (and then by imaginary
part to break ties) if `sorted_eigs='real'`. Default: False.
:type sorted_eigs: {'real', 'abs'} or False
:param tikhonov_regularization: Tikhonov parameter for the regularization.
If `None`, no regularization is applied, if `float`, it is used as the
:math:`\lambda` tikhonov parameter.
:type tikhonov_regularization: int or float
"""
def __init__(
self,
svd_rank,
rescale_mode,
forward_backward,
sorted_eigs,
tikhonov_regularization,
):
super().__init__(
svd_rank=svd_rank,
exact=True,
rescale_mode=rescale_mode,
forward_backward=forward_backward,
sorted_eigs=sorted_eigs,
tikhonov_regularization=tikhonov_regularization,
)
self._Atilde = None
def compute_operator(self, compressedX, compressedY, nonCompressedY):
"""
Compute the low-rank operator.
:param numpy.ndarray compressedX: the compressed version of the matrix
containing the snapshots x0,..x{n-1} by column.
:param numpy.ndarray compressedY: the compressed version of the matrix
containing the snapshots x1,..x{n} by column.
:param numpy.ndarray nonCompressedY: the matrix containing the
snapshots x1,..x{n} by column.
:return: the (truncated) left-singular vectors matrix, the (truncated)
singular values array, the (truncated) right-singular vectors
matrix of compressedX.
:rtype: numpy.ndarray, numpy.ndarray, numpy.ndarray
"""
| U, s, V = compute_svd(compressedX, svd_rank=self._svd_rank) | 3 | 2023-10-30 12:37:40+00:00 | 12k |
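The `svd_rank` conventions spelled out in the `CDMDOperator` docstring (fixed integer rank, energy fraction in (0, 1), -1 for no truncation) can be sketched with a small helper. `truncation_rank` here is a hypothetical illustration, not the library's `compute_svd`:

```python
import numpy as np

# Hypothetical helper mirroring the svd_rank conventions from the CDMDOperator
# docstring; the actual truncation lives in weidmd's compute_svd.
def truncation_rank(s, svd_rank):
    s = np.asarray(s, dtype=float)
    if svd_rank == -1:
        return len(s)  # no truncation
    if isinstance(svd_rank, int) and svd_rank >= 1:
        return min(svd_rank, len(s))  # fixed integer rank
    if 0 < svd_rank < 1:
        energy = np.cumsum(s**2) / np.sum(s**2)
        return int(np.searchsorted(energy, svd_rank) + 1)  # smallest rank reaching the energy
    raise ValueError("svd_rank == 0 (optimal threshold) needs the matrix shape")

s = np.array([10.0, 3.0, 0.1])
print(truncation_rank(s, 2), truncation_rank(s, 0.9))  # 2 1
```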
nv-tlabs/trace | tbsim/algos/algos.py | [
{
"identifier": "batch_utils",
"path": "tbsim/utils/batch_utils.py",
"snippet": "def batch_utils():\n return trajdataBatchUtils()"
},
{
"identifier": "Action",
"path": "tbsim/policies/common.py",
"snippet": "class Action(Trajectory):\n pass"
},
{
"identifier": "DiffuserMode... | import numpy as np
import copy
import torch
import torch.nn as nn
import torch.optim as optim
import pytorch_lightning as pl
import torch.nn.functional as F
import tbsim.utils.tensor_utils as TensorUtils
import tbsim.utils.metrics as Metrics
from tbsim.utils.batch_utils import batch_utils
from tbsim.policies.common import Action
from tbsim.models.trace import DiffuserModel
from tbsim.models.trace_helpers import EMA
from tbsim.utils.guidance_loss import choose_action_from_guidance, choose_action_from_gt | 10,371 |
class DiffuserTrafficModel(pl.LightningModule):
def __init__(self, algo_config, modality_shapes, guidance_config=None):
"""
Creates networks and places them into @self.nets.
"""
super(DiffuserTrafficModel, self).__init__()
self.algo_config = algo_config
self.nets = nn.ModuleDict()
if algo_config.diffuser_input_mode == 'state_and_action':
# "Observations" are inputs to diffuser that are not outputs
observation_dim = 4 # x, y, vel, yaw
# "Actions" are inputs and outputs
action_dim = 2 # acc, yawvel
# "output" is final output of the entired denoising process
output_dim = 2 # acc, yawvel
else:
raise
self.cond_drop_map_p = algo_config.conditioning_drop_map_p
self.cond_drop_neighbor_p = algo_config.conditioning_drop_neighbor_p
min_cond_drop_p = min([self.cond_drop_map_p, self.cond_drop_neighbor_p])
max_cond_drop_p = max([self.cond_drop_map_p, self.cond_drop_neighbor_p])
assert min_cond_drop_p >= 0.0 and max_cond_drop_p <= 1.0
self.use_cond = self.cond_drop_map_p < 1.0 and self.cond_drop_neighbor_p < 1.0 # no need for conditioning arch if always dropping
self.cond_fill_val = algo_config.conditioning_drop_fill
self.use_rasterized_map = algo_config.rasterized_map
if self.use_cond:
if self.cond_drop_map_p > 0:
print('DIFFUSER: Dropping map input conditioning with p = %f during training...' % (self.cond_drop_map_p))
if self.cond_drop_neighbor_p > 0:
print('DIFFUSER: Dropping neighbor traj input conditioning with p = %f during training...' % (self.cond_drop_neighbor_p))
self.nets["policy"] = DiffuserModel(
rasterized_map=algo_config.rasterized_map,
use_map_feat_global=algo_config.use_map_feat_global,
use_map_feat_grid=algo_config.use_map_feat_grid,
map_encoder_model_arch=algo_config.map_encoder_model_arch,
input_image_shape=modality_shapes["image"], # [C, H, W]
map_feature_dim=algo_config.map_feature_dim,
map_grid_feature_dim=algo_config.map_grid_feature_dim,
hist_num_frames=algo_config.history_num_frames+1, # the current step is concat to the history
hist_feature_dim=algo_config.history_feature_dim,
cond_feature_dim=algo_config.cond_feat_dim,
diffuser_model_arch=algo_config.diffuser_model_arch,
horizon=algo_config.horizon,
observation_dim=observation_dim,
action_dim=action_dim,
output_dim=output_dim,
n_timesteps=algo_config.n_diffusion_steps,
loss_type=algo_config.loss_type,
action_weight=algo_config.action_weight,
loss_discount=algo_config.loss_discount,
dim_mults=algo_config.dim_mults,
dynamics_type=algo_config.dynamics.type,
dynamics_kwargs=algo_config.dynamics,
base_dim=algo_config.base_dim,
diffuser_input_mode=algo_config.diffuser_input_mode,
use_conditioning=self.use_cond,
cond_fill_value=self.cond_fill_val,
diffuser_norm_info=algo_config.diffuser_norm_info,
agent_hist_norm_info=algo_config.agent_hist_norm_info,
neighbor_hist_norm_info=algo_config.neighbor_hist_norm_info,
dt=algo_config.step_time,
)
# set up initial guidance
if guidance_config is not None:
self.set_guidance(guidance_config)
# set up EMA
self.use_ema = algo_config.use_ema
if self.use_ema:
            print('DIFFUSER: using EMA... val and get_action will use ema model') | self.ema = EMA(algo_config.ema_decay) | 3 | 2023-10-31 18:43:07+00:00 | 12k
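The map/neighbor conditioning dropout configured above (`cond_drop_map_p`, `cond_drop_neighbor_p`, `cond_fill_val`) amounts to replacing a sample's conditioning input with a fill value with some per-sample probability. A minimal numpy sketch, with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of per-sample conditioning dropout (names hypothetical): with
# probability p_drop, a sample's conditioning is replaced by fill_value,
# mirroring cond_drop_map_p / cond_drop_neighbor_p and cond_fill_val.
def drop_conditioning(cond, p_drop, fill_value=0.0):
    cond = np.array(cond, copy=True)
    drop = rng.random(cond.shape[0]) < p_drop  # per-sample dropout mask
    cond[drop] = fill_value
    return cond, drop

batch = np.ones((4, 3))
dropped, mask = drop_conditioning(batch, p_drop=1.0)
print(dropped.sum())  # 0.0
```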
nv-tlabs/pacer | uhc/smpllib/np_smpl_humanoid_batch.py | [
{
"identifier": "dict_to_torch",
"path": "uhc/utils/torch_ext.py",
"snippet": "def dict_to_torch(input_dict, dtype = None, device = None, add_dim = False):\n if not isinstance(input_dict, dict):\n return None\n out_dict = {}\n for key, value in input_dict.items():\n if isinstance(... | import torch
import glob
import os
import sys
import pdb
import os.path as osp
import joblib
import pytorch3d.transforms as tR
import autograd.numpy as np
import time
import ipdb
from uhc.utils.torch_ext import dict_to_torch
from uhc.utils.torch_utils import *
from uhc.utils.transform_utils import *
from scipy.spatial.transform import Rotation as sRot
from uhc.smpllib.smpl_mujoco import SMPLConverter, smpl_to_qpose, smpl_to_qpose_torch, SMPL_BONE_ORDER_NAMES
from uhc.smpllib.smpl_parser import SMPL_EE_NAMES
from uhc.utils.tools import get_expert, get_expert_master
from uhc.smpllib.smpl_parser import (
SMPL_Parser,
SMPLH_Parser,
SMPLX_Parser,
)
from autograd import elementwise_grad as egrad
from uhc.smpllib.smpl_robot import Robot
from uhc.smpllib.torch_smpl_humanoid import Humanoid
from uhc.utils.config_utils.copycat_config import Config
from uhc.data_loaders.dataset_amass_single import DatasetAMASSSingle
from uhc.utils.torch_ext import dict_to_torch
from uhc.smpllib.smpl_mujoco import smpl_to_qpose_torch, smplh_to_smpl | 9,512 | # import numpy as np
sys.path.append(os.getcwd())
def smpl_op_to_op(pred_joints2d):
new_2d = np.concatenate([pred_joints2d[..., [1, 4], :].mean(axis = -2, keepdims = True), \
pred_joints2d[..., 1:7, :], \
pred_joints2d[..., [7, 8, 11], :].mean(axis = -2, keepdims = True), \
pred_joints2d[..., 9:11, :], \
pred_joints2d[..., 12:, :]], \
axis = -2)
return new_2d
def normalize_screen_coordinates(X, w=1920, h=1080):
assert X.shape[-1] == 2
# Normalize so that [0, w] is mapped to
# [-1, 1], while preserving the aspect ratio
return X / w * 2 - np.array([1, h / w])
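As the comment above says, [0, w] maps to [-1, 1] while the y-range is scaled by the aspect ratio h/w; for the default 1920×1080 frame the image corners land at ±(1, 0.5625):

```python
import numpy as np

def normalize_screen_coordinates(X, w=1920, h=1080):
    # Same mapping as above: x in [0, w] -> [-1, 1], y scaled by the aspect ratio h/w.
    assert X.shape[-1] == 2
    return X / w * 2 - np.array([1, h / w])

corners = np.array([[0.0, 0.0], [1920.0, 1080.0]])
print(normalize_screen_coordinates(corners))
# [[-1.     -0.5625]
#  [ 1.      0.5625]]
```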
def rodrigues(r):
"""
    Rodrigues' rotation formula that turns an axis-angle vector into a rotation
    matrix in a batched manner.
Parameter:
----------
r: Axis-angle rotation vector of shape [batch_size, 1, 3].
Return:
-------
Rotation matrix of shape [batch_size, 3, 3].
"""
theta = np.linalg.norm(r, axis=(1, 2))[:, None, None]
# avoid zero divide
theta = np.maximum(theta, np.finfo(r.dtype).eps)
r_hat = r / theta
cos = np.cos(theta)
z_stick = np.zeros(theta.shape[0])
m = np.stack([
z_stick, -r_hat[:, 0, 2], r_hat[:, 0, 1], r_hat[:, 0, 2], z_stick,
-r_hat[:, 0, 0], -r_hat[:, 0, 1], r_hat[:, 0, 0], z_stick
],
axis=1).reshape([-1, 3, 3])
i_cube = np.broadcast_to(np.expand_dims(np.eye(3), axis=0),
[theta.shape[0], 3, 3])
A = np.transpose(r_hat, axes=[0, 2, 1])
B = r_hat
dot = np.matmul(A, B)
R = cos * i_cube + (1 - cos) * dot + np.sin(theta) * m
return R
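A quick sanity check of the batched formula: a rotation of π/2 about z should give the standard 90° rotation matrix, and every output should be orthonormal. Plain numpy stands in for `autograd.numpy` here:

```python
import numpy as np

def rodrigues(r):
    # Batched Rodrigues formula, as defined above (plain numpy stand-in).
    theta = np.linalg.norm(r, axis=(1, 2))[:, None, None]
    theta = np.maximum(theta, np.finfo(r.dtype).eps)  # avoid division by zero
    r_hat = r / theta
    cos = np.cos(theta)
    z_stick = np.zeros(theta.shape[0])
    m = np.stack([
        z_stick, -r_hat[:, 0, 2], r_hat[:, 0, 1],
        r_hat[:, 0, 2], z_stick, -r_hat[:, 0, 0],
        -r_hat[:, 0, 1], r_hat[:, 0, 0], z_stick,
    ], axis=1).reshape([-1, 3, 3])
    i_cube = np.broadcast_to(np.expand_dims(np.eye(3), axis=0), [theta.shape[0], 3, 3])
    dot = np.matmul(np.transpose(r_hat, axes=[0, 2, 1]), r_hat)
    return cos * i_cube + (1 - cos) * dot + np.sin(theta) * m

R = rodrigues(np.array([[[0.0, 0.0, np.pi / 2]]]))[0]  # pi/2 about z
print(np.allclose(R, [[0, -1, 0], [1, 0, 0], [0, 0, 1]], atol=1e-8))  # True
```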
def rodrigues_vec_to_rotation_mat(rot):
theta = np.linalg.norm(rot, axis=0)
if theta < sys.float_info.epsilon:
rotation_mat = np.eye(3, dtype=float)
else:
rot = rot / theta
I = np.eye(3, dtype=float)
r_rT = np.array([[rot[0] * rot[0], rot[0] * rot[1], rot[0] * rot[2]],
[rot[1] * rot[0], rot[1] * rot[1], rot[1] * rot[2]],
[rot[2] * rot[0], rot[2] * rot[1], rot[2] * rot[2]]])
r_cross = np.array([[0, -rot[2], rot[1]], [rot[2], 0, -rot[0]],
[-rot[1], rot[0], 0]])
rotation_mat = np.cos(theta) * I + (
1 - np.cos(theta)) * r_rT + np.sin(theta) * r_cross
return rotation_mat
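The single-vector variant should agree with the batched `rodrigues` above; checking the same π/2 z-rotation (the explicit 3×3 `r_rT` is written as `np.outer` here, which is equivalent):

```python
import sys
import numpy as np

def rodrigues_vec_to_rotation_mat(rot):
    # Single-vector Rodrigues formula, as defined above.
    theta = np.linalg.norm(rot, axis=0)
    if theta < sys.float_info.epsilon:
        return np.eye(3, dtype=float)  # zero rotation -> identity
    rot = rot / theta
    I = np.eye(3, dtype=float)
    r_rT = np.outer(rot, rot)  # equivalent to the explicit 3x3 above
    r_cross = np.array([[0, -rot[2], rot[1]],
                        [rot[2], 0, -rot[0]],
                        [-rot[1], rot[0], 0]])
    return np.cos(theta) * I + (1 - np.cos(theta)) * r_rT + np.sin(theta) * r_cross

R = rodrigues_vec_to_rotation_mat(np.array([0.0, 0.0, np.pi / 2]))
print(np.allclose(R, [[0, -1, 0], [1, 0, 0], [0, 0, 1]], atol=1e-8))  # True
```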
class Humanoid_Batch:
def __init__(self, smpl_model="smpl", data_dir="data/smpl"):
self.smpl_model = smpl_model
if self.smpl_model == "smpl":
self.smpl_parser_n = SMPL_Parser(model_path=data_dir,
gender="neutral")
self.smpl_parser_m = SMPL_Parser(model_path=data_dir,
gender="male")
self.smpl_parser_f = SMPL_Parser(model_path=data_dir,
gender="female")
elif self.smpl_model == "smplh":
self.smpl_parser_n = SMPLH_Parser(
model_path=data_dir,
gender="neutral",
use_pca=False,
create_transl=False,
)
self.smpl_parser_m = SMPLH_Parser(model_path=data_dir,
gender="male",
use_pca=False,
create_transl=False)
self.smpl_parser_f = SMPLH_Parser(model_path=data_dir,
gender="female",
use_pca=False,
create_transl=False)
elif self.smpl_model == "smplx":
| # import numpy as np
sys.path.append(os.getcwd())
def smpl_op_to_op(pred_joints2d):
new_2d = np.concatenate([pred_joints2d[..., [1, 4], :].mean(axis = -2, keepdims = True), \
pred_joints2d[..., 1:7, :], \
pred_joints2d[..., [7, 8, 11], :].mean(axis = -2, keepdims = True), \
pred_joints2d[..., 9:11, :], \
pred_joints2d[..., 12:, :]], \
axis = -2)
return new_2d
def normalize_screen_coordinates(X, w=1920, h=1080):
assert X.shape[-1] == 2
# Normalize so that [0, w] is mapped to
# [-1, 1], while preserving the aspect ratio
return X / w * 2 - np.array([1, h / w])
def rodrigues(r):
"""
Rodrigues' rotation formula that turns axis-angle vector into rotation
matrix in a batch-ed manner.
Parameter:
----------
r: Axis-angle rotation vector of shape [batch_size, 1, 3].
Return:
-------
Rotation matrix of shape [batch_size, 3, 3].
"""
theta = np.linalg.norm(r, axis=(1, 2))[:, None, None]
# avoid zero divide
theta = np.maximum(theta, np.finfo(r.dtype).eps)
r_hat = r / theta
cos = np.cos(theta)
z_stick = np.zeros(theta.shape[0])
m = np.stack([
z_stick, -r_hat[:, 0, 2], r_hat[:, 0, 1], r_hat[:, 0, 2], z_stick,
-r_hat[:, 0, 0], -r_hat[:, 0, 1], r_hat[:, 0, 0], z_stick
],
axis=1).reshape([-1, 3, 3])
i_cube = np.broadcast_to(np.expand_dims(np.eye(3), axis=0),
[theta.shape[0], 3, 3])
A = np.transpose(r_hat, axes=[0, 2, 1])
B = r_hat
dot = np.matmul(A, B)
R = cos * i_cube + (1 - cos) * dot + np.sin(theta) * m
return R
def rodrigues_vec_to_rotation_mat(rot):
theta = np.linalg.norm(rot, axis=0)
if theta < sys.float_info.epsilon:
rotation_mat = np.eye(3, dtype=float)
else:
rot = rot / theta
I = np.eye(3, dtype=float)
r_rT = np.array([[rot[0] * rot[0], rot[0] * rot[1], rot[0] * rot[2]],
[rot[1] * rot[0], rot[1] * rot[1], rot[1] * rot[2]],
[rot[2] * rot[0], rot[2] * rot[1], rot[2] * rot[2]]])
r_cross = np.array([[0, -rot[2], rot[1]], [rot[2], 0, -rot[0]],
[-rot[1], rot[0], 0]])
rotation_mat = np.cos(theta) * I + (
1 - np.cos(theta)) * r_rT + np.sin(theta) * r_cross
return rotation_mat
class Humanoid_Batch:
def __init__(self, smpl_model="smpl", data_dir="data/smpl"):
self.smpl_model = smpl_model
if self.smpl_model == "smpl":
self.smpl_parser_n = SMPL_Parser(model_path=data_dir,
gender="neutral")
self.smpl_parser_m = SMPL_Parser(model_path=data_dir,
gender="male")
self.smpl_parser_f = SMPL_Parser(model_path=data_dir,
gender="female")
elif self.smpl_model == "smplh":
self.smpl_parser_n = SMPLH_Parser(
model_path=data_dir,
gender="neutral",
use_pca=False,
create_transl=False,
)
self.smpl_parser_m = SMPLH_Parser(model_path=data_dir,
gender="male",
use_pca=False,
create_transl=False)
self.smpl_parser_f = SMPLH_Parser(model_path=data_dir,
gender="female",
use_pca=False,
create_transl=False)
elif self.smpl_model == "smplx": | self.smpl_parser_n = SMPLX_Parser( | 7 | 2023-10-31 20:47:12+00:00 | 12k |
Improbable-AI/dexenv | dexenv/envs/dclaw_base.py | [
{
"identifier": "VecTask",
"path": "dexenv/envs/base/vec_task.py",
"snippet": "class VecTask(Env):\n\n def __init__(self, config, sim_device, rl_device, graphics_device_id, headless):\n \"\"\"Initialise the `VecTask`.\n Args:\n config: config dictionary for the environment.\n... | import time
import torch
import dexenv
from isaacgym import gymapi
from isaacgym import gymtorch
from isaacgym.gymutil import get_property_getter_map
from isaacgym.gymutil import get_property_setter_map
from isaacgymenvs.utils.torch_jit_utils import *
from loguru import logger
from dexenv.envs.base.vec_task import VecTask
from dexenv.envs.rewards import compute_dclaw_reward
from dexenv.utils.common import get_module_path
from dexenv.utils.common import pathlib_file
from dexenv.utils.hand_color import dclaw_body_color_mapping
from dexenv.utils.isaac_utils import get_camera_params
from dexenv.utils.torch_utils import random_quaternions
from dexenv.utils.torch_utils import torch_long | 10,786 | self.hand_start_states = []
self.hand_indices = []
self.fingertip_indices = []
self.object_indices = []
self.goal_object_indices = []
self.render_camera_handles = []
if self.cfg.rgb_render:
render_cam_pose, render_cam_params = self.get_visual_render_camera_setup()
self.fingertip_handles = [self.gym.find_asset_rigid_body_index(dclaw_asset, name) for name in
self.fingertips]
print(f'Fingertip handles:{self.fingertip_handles}')
dclaw_rb_count = self.gym.get_asset_rigid_body_count(dclaw_asset)
object_rb_count = self.gym.get_asset_rigid_body_count(object_asset)
object_rs_count = self.gym.get_asset_rigid_shape_count(object_asset)
self.object_rb_handles = list(range(dclaw_rb_count, dclaw_rb_count + object_rb_count))
self.object_handles = []
max_agg_bodies = self.num_dclaw_bodies + 2 * object_rb_count + 1
max_agg_shapes = self.num_dclaw_shapes + 2 * object_rs_count + 1
for i in range(self.num_envs):
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
if self.aggregate_mode >= 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
self.create_hand_actor(env_ptr=env_ptr,
dclaw_asset=dclaw_asset,
dclaw_start_pose=dclaw_start_pose,
dclaw_dof_props=dclaw_dof_props,
env_id=i)
object_handle = self.gym.create_actor(env_ptr, object_asset, object_start_pose, "object", i, 0, 1)
self.object_handles.append(object_handle)
self.object_init_state.append([object_start_pose.p.x, object_start_pose.p.y, object_start_pose.p.z,
object_start_pose.r.x, object_start_pose.r.y, object_start_pose.r.z,
object_start_pose.r.w,
0, 0, 0, 0, 0, 0])
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
self.object_indices.append(object_idx)
goal_handle = self.gym.create_actor(env_ptr, goal_asset, goal_start_pose, "goal_object", i + self.num_envs,
0, 2)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.cfg.env.blockscale is not None and self.cfg.env.objectType == 'block':
blockscale = float(self.cfg.env.blockscale)
self.gym.set_actor_scale(env_ptr, object_handle, blockscale)
self.gym.set_actor_scale(env_ptr, goal_handle, blockscale)
if self.object_type != "block":
self.gym.set_rigid_body_color(
env_ptr, object_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
self.gym.set_rigid_body_color(
env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
table_handle = self.gym.create_actor(env_ptr, table_asset, table_pose, "table", i, 0)
if self.cfg.rgb_render:
render_camera_handle = self.create_camera(render_cam_pose, env_ptr, render_cam_params)
self.render_camera_handles.append(render_camera_handle[0])
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.setup_torch_states()
def create_camera(self, camera_poses, env_ptr, camera_params):
cam_handles = []
for ic in range(min(len(camera_poses), self.cfg.cam.cam_num)):
camera_handle = self.gym.create_camera_sensor(env_ptr, camera_params)
if isinstance(camera_poses[ic], tuple):
self.gym.set_camera_location(camera_handle, env_ptr, camera_poses[ic][0], camera_poses[ic][1])
else:
self.gym.set_camera_transform(camera_handle, env_ptr, camera_poses[ic])
cam_handles.append(camera_handle)
return cam_handles
def get_visual_render_camera_setup(self):
cam_pos = np.array([-0.7, 0, 0.5])
cam_focus_pt = np.array([0.08, 0, 0.15])
cam_focus_pt = gymapi.Vec3(*cam_focus_pt)
cam_pos = gymapi.Vec3(*cam_pos)
camera_poses = [(cam_pos, cam_focus_pt)]
camera_params = get_camera_params(width=self.cfg.cam.visual_render_width,
height=self.cfg.cam.visual_render_height,
hov=45,
cuda=False)
return camera_poses, camera_params
def create_hand_actor(self, env_ptr, dclaw_asset, dclaw_start_pose, dclaw_dof_props, env_id):
dclaw_actor = self.gym.create_actor(env_ptr, dclaw_asset, dclaw_start_pose, "hand", env_id, 0, 0)
if self.cfg.env.dof_torque_on:
self.gym.enable_actor_dof_force_sensors(env_ptr, dclaw_actor)
self.hand_start_states.append(
[dclaw_start_pose.p.x, dclaw_start_pose.p.y, dclaw_start_pose.p.z,
dclaw_start_pose.r.x, dclaw_start_pose.r.y, dclaw_start_pose.r.z,
dclaw_start_pose.r.w,
0, 0, 0, 0, 0, 0])
self.gym.set_actor_dof_properties(env_ptr, dclaw_actor, dclaw_dof_props)
hand_idx = self.gym.get_actor_index(env_ptr, dclaw_actor, gymapi.DOMAIN_SIM)
self.hand_indices.append(hand_idx)
self.gym.set_actor_dof_states(env_ptr, dclaw_actor, self.dclaw_default_dof_states, gymapi.STATE_ALL)
if self.obs_type == "full_state":
self.gym.enable_actor_dof_force_sensors(env_ptr, dclaw_actor)
self.dclaws.append(dclaw_actor)
self.set_hand_color(env_ptr, dclaw_actor)
def set_hand_color(self, env_ptr, dclaw_actor):
rgd_dict = self.gym.get_actor_rigid_body_dict(env_ptr, dclaw_actor)
for bd, bd_id in rgd_dict.items():
|
class DClawBase(VecTask):
def __init__(self, cfg, sim_device, rl_device, graphics_device_id):
self.cfg = cfg
headless = self.cfg.headless
self.randomize = self.cfg["task"]["randomize"]
if self.randomize:
logger.warning(f'Domain randomization is enabled!')
self.randomization_params = self.cfg["task"]["randomization_params"]
self.aggregate_mode = self.cfg["env"]["aggregateMode"]
self.dist_reward_scale = self.cfg["env"]["rew"]["distRewardScale"]
self.rot_reward_scale = self.cfg["env"]["rew"]["rotRewardScale"]
self.success_tolerance = self.cfg["env"]["rew"]["successTolerance"]
self.reach_goal_bonus = self.cfg["env"]["rew"]["reachGoalBonus"]
self.fall_dist = self.cfg["env"]["rew"]["fallDistance"]
self.fall_penalty = self.cfg["env"]["rew"]["fallPenalty"]
self.rot_eps = self.cfg["env"]["rew"]["rotEps"]
self.vel_obs_scale = 0.2 # scale factor of velocity based observations
        self.force_torque_obs_scale = 10.0  # scale factor of force/torque based observations
self.reset_position_noise = self.cfg["env"]["resetPositionNoise"]
self.reset_rotation_noise = self.cfg["env"]["resetRotationNoise"]
self.reset_dof_pos_noise = self.cfg["env"]["resetDofPosRandomInterval"]
self.reset_dof_vel_noise = self.cfg["env"]["resetDofVelRandomInterval"]
self.force_scale = self.cfg["env"].get("forceScale", 0.0)
self.force_prob_range = self.cfg["env"].get("forceProbRange", [0.001, 0.1])
self.force_decay = self.cfg["env"].get("forceDecay", 0.99)
self.force_decay_interval = self.cfg["env"].get("forceDecayInterval", 0.08)
self.dclaw_dof_speed_scale = self.cfg["env"]["dofSpeedScale"]
# self.act_moving_average = self.cfg["env"]["actionsMovingAverage"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.reset_time = self.cfg["env"].get("resetTime", -1.0)
self.print_success_stat = self.cfg["env"]["printNumSuccesses"]
self.max_consecutive_successes = self.cfg["env"]["maxConsecutiveSuccesses"]
self.av_factor = self.cfg["env"].get("averFactor", 0.1)
self.object_type = self.cfg["env"]["objectType"]
self.asset_files_dict = {
"block": "urdf/objects/cube_multicolor.urdf",
"egg": "mjcf/open_ai_assets/hand/egg.xml",
"airplane": "single_objects/airplane/model.urdf",
'power_drill': 'single_objects/power_drill/model.urdf',
'mug': 'single_objects/mug/model.urdf',
'elephant': 'asymm/train/elephant/var_000/model.urdf',
'train': 'asymm/train/train/var_000/model.urdf',
'stanford_bunny': 'asymm/train/stanford_bunny/var_004/model.urdf'
}
self.objs_in_isaacgym = ['block', 'egg']
if "asset" in self.cfg["env"]:
self.asset_files_dict["block"] = self.cfg["env"]["asset"].get("assetFileNameBlock",
self.asset_files_dict["block"])
self.asset_files_dict["egg"] = self.cfg["env"]["asset"].get("assetFileNameEgg",
self.asset_files_dict["egg"])
self.obs_type = self.cfg["env"]["observationType"]
if not (self.obs_type in ["full_no_vel", "full", "full_state"]):
raise Exception(
"Unknown type of observations!\nobservationType should be one of: [openai, full_no_vel, full, full_state]")
print("Obs type:", self.obs_type)
## TODO: change value here
self.num_obs_dict = {
"full_no_vel": 42,
"full": 87,
"full_state": 114
}
self.up_axis = 'z'
num_states = 0
self.cfg["env"]["numObservations"] = self.num_obs_dict[self.obs_type]
self.cfg["env"]["numStates"] = num_states
self.cfg["env"]["numActions"] = 12
self.hist_buf_reset_env_ids = None
super().__init__(config=self.cfg,
sim_device=sim_device,
rl_device=rl_device,
graphics_device_id=graphics_device_id,
headless=headless)
self.dt = self.sim_params.dt
control_freq_inv = self.cfg["env"].get("controlFrequencyInv", 1)
if self.reset_time > 0.0:
self.max_episode_length = int(round(self.reset_time / (control_freq_inv * self.dt)))
print("Reset time: ", self.reset_time)
print("New episode length: ", self.max_episode_length)
if self.viewer != None:
cam_pos = gymapi.Vec3(0.16, -0.5, 0.5)
cam_target = gymapi.Vec3(0.0, 0.0, 0.15)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
if self.obs_type == "full_state":
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, self.num_fingertips * 6)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.dof_force_tensor = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs,
self.num_dclaw_dofs)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
if self.cfg.env.dof_torque_on:
self.gym.refresh_dof_force_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dclaw_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, :self.num_dclaw_dofs]
self.dclaw_dof_pos = self.dclaw_dof_state[..., 0]
self.dclaw_dof_vel = self.dclaw_dof_state[..., 1]
if self.cfg.env.dof_torque_on:
self.dclaw_dof_torque = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, -1)
else:
self.dclaw_dof_torque = None
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
if self.cfg.env.rew.pen_tb_contact:
_net_cf = self.gym.acquire_net_contact_force_tensor(self.sim)
self.net_contact_force = gymtorch.wrap_tensor(_net_cf).view(self.num_envs, -1, 3)
table_handle = self.gym.find_actor_handle(self.envs[0], 'table')
self.table_body_index = self.gym.find_actor_rigid_body_index(self.envs[0],
table_handle,
'table',
gymapi.DOMAIN_ENV)
logger.warning(f'Table body index:{self.table_body_index}')
self.table_contact_force = self.net_contact_force[:, self.table_body_index]
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
self.prev_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.cur_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.global_indices = torch.arange(self.num_envs * 3, dtype=torch.int32, device=self.device).view(self.num_envs, -1)
self.reset_goal_buf = self.reset_buf.clone()
self.successes = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.consecutive_successes = torch.zeros(1, dtype=torch.float, device=self.device)
self.av_factor = to_torch(self.av_factor, dtype=torch.float, device=self.device)
self.total_successes = 0
self.total_resets = 0
self.force_decay = to_torch(self.force_decay, dtype=torch.float, device=self.device)
self.force_prob_range = to_torch(self.force_prob_range, dtype=torch.float, device=self.device)
self.random_force_prob = torch.exp((torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(self.num_envs, device=self.device) + torch.log(
self.force_prob_range[1]))
self.rb_forces = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
self.num_actions = self.num_dclaw_dofs
self.actions = self.zero_actions()
DClawBase.compute_observations(self)
self.num_observations = self.obs_buf.shape[-1]
self.cfg.env.numObservations = self.num_observations
self.create_ob_act_space()
def create_sim(self):
self.dt = self.cfg["sim"]["dt"]
self.up_axis_idx = self.set_sim_params_up_axis(self.sim_params, self.up_axis)
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
plane_params.distance = 0.1
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = dexenv.LIB_PATH.joinpath('assets', 'dclaw').as_posix()
object_asset_file = self.asset_files_dict[self.object_type]
dclaw_asset, dclaw_dof_props = self.get_dclaw_asset(asset_root=asset_root)
table_asset = self.get_table_asset()
table_pose = self.get_table_pose()
if self.obs_type == "full_state":
sensor_pose = gymapi.Transform()
for ft_handle in self.fingertip_handles:
self.gym.create_asset_force_sensor(dclaw_asset, ft_handle, sensor_pose)
if self.object_type in self.objs_in_isaacgym:
asset_root = get_module_path('isaacgymenvs').parent.joinpath('assets').as_posix()
else:
asset_root = dexenv.LIB_PATH.joinpath('assets').as_posix()
object_asset_options = gymapi.AssetOptions()
if self.cfg.env.vhacd:
object_asset_options.convex_decomposition_from_submeshes = True
object_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
object_asset_options.disable_gravity = True
goal_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
dclaw_start_pose = self.get_dclaw_start_pose()
object_start_pose = self.get_object_start_pose(dclaw_start_pose)
goal_start_pose = self.get_goal_object_start_pose(object_start_pose=object_start_pose)
self.dclaws = []
self.envs = []
self.object_init_state = []
self.hand_start_states = []
self.hand_indices = []
self.fingertip_indices = []
self.object_indices = []
self.goal_object_indices = []
self.render_camera_handles = []
if self.cfg.rgb_render:
render_cam_pose, render_cam_params = self.get_visual_render_camera_setup()
self.fingertip_handles = [self.gym.find_asset_rigid_body_index(dclaw_asset, name) for name in
self.fingertips]
print(f'Fingertip handles:{self.fingertip_handles}')
dclaw_rb_count = self.gym.get_asset_rigid_body_count(dclaw_asset)
object_rb_count = self.gym.get_asset_rigid_body_count(object_asset)
object_rs_count = self.gym.get_asset_rigid_shape_count(object_asset)
self.object_rb_handles = list(range(dclaw_rb_count, dclaw_rb_count + object_rb_count))
self.object_handles = []
max_agg_bodies = self.num_dclaw_bodies + 2 * object_rb_count + 1
max_agg_shapes = self.num_dclaw_shapes + 2 * object_rs_count + 1
for i in range(self.num_envs):
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
if self.aggregate_mode >= 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
self.create_hand_actor(env_ptr=env_ptr,
dclaw_asset=dclaw_asset,
dclaw_start_pose=dclaw_start_pose,
dclaw_dof_props=dclaw_dof_props,
env_id=i)
object_handle = self.gym.create_actor(env_ptr, object_asset, object_start_pose, "object", i, 0, 1)
self.object_handles.append(object_handle)
self.object_init_state.append([object_start_pose.p.x, object_start_pose.p.y, object_start_pose.p.z,
object_start_pose.r.x, object_start_pose.r.y, object_start_pose.r.z,
object_start_pose.r.w,
0, 0, 0, 0, 0, 0])
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
self.object_indices.append(object_idx)
goal_handle = self.gym.create_actor(env_ptr, goal_asset, goal_start_pose, "goal_object", i + self.num_envs,
0, 2)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.cfg.env.blockscale is not None and self.cfg.env.objectType == 'block':
blockscale = float(self.cfg.env.blockscale)
self.gym.set_actor_scale(env_ptr, object_handle, blockscale)
self.gym.set_actor_scale(env_ptr, goal_handle, blockscale)
if self.object_type != "block":
self.gym.set_rigid_body_color(
env_ptr, object_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
self.gym.set_rigid_body_color(
env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
table_handle = self.gym.create_actor(env_ptr, table_asset, table_pose, "table", i, 0)
if self.cfg.rgb_render:
render_camera_handle = self.create_camera(render_cam_pose, env_ptr, render_cam_params)
self.render_camera_handles.append(render_camera_handle[0])
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.setup_torch_states()
def create_camera(self, camera_poses, env_ptr, camera_params):
cam_handles = []
for ic in range(min(len(camera_poses), self.cfg.cam.cam_num)):
camera_handle = self.gym.create_camera_sensor(env_ptr, camera_params)
if isinstance(camera_poses[ic], tuple):
self.gym.set_camera_location(camera_handle, env_ptr, camera_poses[ic][0], camera_poses[ic][1])
else:
self.gym.set_camera_transform(camera_handle, env_ptr, camera_poses[ic])
cam_handles.append(camera_handle)
return cam_handles
def get_visual_render_camera_setup(self):
cam_pos = np.array([-0.7, 0, 0.5])
cam_focus_pt = np.array([0.08, 0, 0.15])
cam_focus_pt = gymapi.Vec3(*cam_focus_pt)
cam_pos = gymapi.Vec3(*cam_pos)
camera_poses = [(cam_pos, cam_focus_pt)]
camera_params = get_camera_params(width=self.cfg.cam.visual_render_width,
height=self.cfg.cam.visual_render_height,
hov=45,
cuda=False)
return camera_poses, camera_params
def create_hand_actor(self, env_ptr, dclaw_asset, dclaw_start_pose, dclaw_dof_props, env_id):
dclaw_actor = self.gym.create_actor(env_ptr, dclaw_asset, dclaw_start_pose, "hand", env_id, 0, 0)
if self.cfg.env.dof_torque_on:
self.gym.enable_actor_dof_force_sensors(env_ptr, dclaw_actor)
self.hand_start_states.append(
[dclaw_start_pose.p.x, dclaw_start_pose.p.y, dclaw_start_pose.p.z,
dclaw_start_pose.r.x, dclaw_start_pose.r.y, dclaw_start_pose.r.z,
dclaw_start_pose.r.w,
0, 0, 0, 0, 0, 0])
self.gym.set_actor_dof_properties(env_ptr, dclaw_actor, dclaw_dof_props)
hand_idx = self.gym.get_actor_index(env_ptr, dclaw_actor, gymapi.DOMAIN_SIM)
self.hand_indices.append(hand_idx)
self.gym.set_actor_dof_states(env_ptr, dclaw_actor, self.dclaw_default_dof_states, gymapi.STATE_ALL)
if self.obs_type == "full_state":
self.gym.enable_actor_dof_force_sensors(env_ptr, dclaw_actor)
self.dclaws.append(dclaw_actor)
self.set_hand_color(env_ptr, dclaw_actor)
def set_hand_color(self, env_ptr, dclaw_actor):
rgd_dict = self.gym.get_actor_rigid_body_dict(env_ptr, dclaw_actor)
for bd, bd_id in rgd_dict.items(): | if bd not in dclaw_body_color_mapping: | 4 | 2023-10-25 17:22:41+00:00 | 12k |
ai-safety-foundation/sparse_autoencoder | sparse_autoencoder/autoencoder/model.py | [
{
"identifier": "LinearEncoder",
"path": "sparse_autoencoder/autoencoder/components/linear_encoder.py",
"snippet": "class LinearEncoder(Module):\n r\"\"\"Linear encoder layer.\n\n Linear encoder layer (essentially `nn.Linear`, with a ReLU activation function). Designed to be\n used as the encod... | from pathlib import Path
from tempfile import gettempdir
from typing import NamedTuple
from huggingface_hub import HfApi, hf_hub_download
from jaxtyping import Float
from pydantic import (
BaseModel,
DirectoryPath,
NonNegativeInt,
PositiveInt,
validate_call,
)
from torch import Tensor
from torch.nn import Module, Parameter
from torch.serialization import FILE_LIKE
from sparse_autoencoder.autoencoder.components.linear_encoder import LinearEncoder
from sparse_autoencoder.autoencoder.components.tied_bias import TiedBias, TiedBiasPosition
from sparse_autoencoder.autoencoder.components.unit_norm_decoder import UnitNormDecoder
from sparse_autoencoder.autoencoder.types import ResetOptimizerParameterDetails
from sparse_autoencoder.tensor_types import Axis
from sparse_autoencoder.utils.tensor_shape import shape_with_optional_dimensions
import torch
import wandb | 7,329 | """The Sparse Autoencoder Model."""
class SparseAutoencoderConfig(BaseModel, frozen=True):
"""SAE model config."""
n_input_features: PositiveInt
"""Number of input features.
E.g. `d_mlp` if training on MLP activations from TransformerLens).
"""
n_learned_features: PositiveInt
"""Number of learned features.
The initial paper experimented with 1 to 256 times the number of input features, and primarily
used a multiple of 8."""
n_components: PositiveInt | None = None
"""Number of source model components the SAE is trained on.""
This is useful if you want to train the SAE on several components of the source model at once.
If `None`, the SAE is assumed to be trained on just one component (in this case the model won't
contain a component axis in any of the parameters).
"""
class SparseAutoencoderState(BaseModel, arbitrary_types_allowed=True):
"""SAE model state.
Used for saving and loading the model.
"""
config: SparseAutoencoderConfig
"""Model config."""
state_dict: dict[str, Tensor]
"""Model state dict."""
class ForwardPassResult(NamedTuple):
"""SAE model forward pass result."""
learned_activations: Float[
Tensor, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.LEARNT_FEATURE)
]
decoded_activations: Float[
Tensor, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
DEFAULT_TMP_DIR = Path(gettempdir()) / "sparse_autoencoder"
class SparseAutoencoder(Module):
"""Sparse Autoencoder Model."""
config: SparseAutoencoderConfig
"""Model config."""
geometric_median_dataset: Float[
Tensor, Axis.names(Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
"""Estimated Geometric Median of the Dataset.
Used for initialising :attr:`tied_bias`.
"""
tied_bias: Float[
Parameter, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
"""Tied Bias Parameter.
The same bias is used pre-encoder and post-decoder.
"""
pre_encoder_bias: TiedBias
"""Pre-Encoder Bias."""
encoder: LinearEncoder
"""Encoder."""
decoder: UnitNormDecoder
"""Decoder."""
post_decoder_bias: TiedBias
"""Post-Decoder Bias."""
def __init__(
self,
config: SparseAutoencoderConfig,
geometric_median_dataset: Float[
Tensor, Axis.names(Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
| None = None,
) -> None:
"""Initialize the Sparse Autoencoder Model.
Args:
config: Model config.
geometric_median_dataset: Estimated geometric median of the dataset.
"""
super().__init__()
self.config = config
# Store the geometric median of the dataset (so that we can reset parameters). This is not a
# parameter itself (the tied bias parameter is used for that), so gradients are disabled.
| """The Sparse Autoencoder Model."""
class SparseAutoencoderConfig(BaseModel, frozen=True):
"""SAE model config."""
n_input_features: PositiveInt
"""Number of input features.
E.g. `d_mlp` if training on MLP activations from TransformerLens).
"""
n_learned_features: PositiveInt
"""Number of learned features.
The initial paper experimented with 1 to 256 times the number of input features, and primarily
used a multiple of 8."""
n_components: PositiveInt | None = None
"""Number of source model components the SAE is trained on.""
This is useful if you want to train the SAE on several components of the source model at once.
If `None`, the SAE is assumed to be trained on just one component (in this case the model won't
contain a component axis in any of the parameters).
"""
class SparseAutoencoderState(BaseModel, arbitrary_types_allowed=True):
"""SAE model state.
Used for saving and loading the model.
"""
config: SparseAutoencoderConfig
"""Model config."""
state_dict: dict[str, Tensor]
"""Model state dict."""
class ForwardPassResult(NamedTuple):
"""SAE model forward pass result."""
learned_activations: Float[
Tensor, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.LEARNT_FEATURE)
]
decoded_activations: Float[
Tensor, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
DEFAULT_TMP_DIR = Path(gettempdir()) / "sparse_autoencoder"
class SparseAutoencoder(Module):
"""Sparse Autoencoder Model."""
config: SparseAutoencoderConfig
"""Model config."""
geometric_median_dataset: Float[
Tensor, Axis.names(Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
"""Estimated Geometric Median of the Dataset.
Used for initialising :attr:`tied_bias`.
"""
tied_bias: Float[
Parameter, Axis.names(Axis.BATCH, Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
"""Tied Bias Parameter.
The same bias is used pre-encoder and post-decoder.
"""
pre_encoder_bias: TiedBias
"""Pre-Encoder Bias."""
encoder: LinearEncoder
"""Encoder."""
decoder: UnitNormDecoder
"""Decoder."""
post_decoder_bias: TiedBias
"""Post-Decoder Bias."""
def __init__(
self,
config: SparseAutoencoderConfig,
geometric_median_dataset: Float[
Tensor, Axis.names(Axis.COMPONENT_OPTIONAL, Axis.INPUT_OUTPUT_FEATURE)
]
| None = None,
) -> None:
"""Initialize the Sparse Autoencoder Model.
Args:
config: Model config.
geometric_median_dataset: Estimated geometric median of the dataset.
"""
super().__init__()
self.config = config
# Store the geometric median of the dataset (so that we can reset parameters). This is not a
# parameter itself (the tied bias parameter is used for that), so gradients are disabled. | tied_bias_shape = shape_with_optional_dimensions( | 6 | 2023-10-27 07:37:15+00:00 | 12k |
LeapLabTHU/FamO2O | jax_cql/JaxCQL/conservative_sac_main.py | [
{
"identifier": "ConservativeSAC",
"path": "jax_cql/JaxCQL/conservative_sac.py",
"snippet": "class ConservativeSAC(object):\n\n @staticmethod\n def get_default_config(updates=None):\n config = ConfigDict()\n config.discount = 0.99\n config.alpha_multiplier = 1.0\n confi... | import os
import time
import uuid
import numpy as np
import pprint
import jax
import jax.numpy as jnp
import flax
import gym
import d4rl
import absl.app
import absl.flags
from copy import deepcopy
from .conservative_sac import ConservativeSAC
from .replay_buffer import get_d4rl_dataset, subsample_batch
from .jax_utils import batch_to_jax
from .model import TanhGaussianPolicy, FullyConnectedQFunction, SamplerPolicy
from .sampler import StepSampler, TrajSampler
from .utils import (
Timer, define_flags_with_default, set_random_seed, print_flags,
get_user_flags, prefix_metrics, WandBLogger
)
from viskit.logging import logger, setup_logger | 7,476 |
FLAGS_DEF = define_flags_with_default(
env='halfcheetah-medium-v2',
max_traj_length=1000,
seed=42,
save_model=False,
batch_size=256,
reward_scale=1.0,
reward_bias=0.0,
clip_action=0.999,
policy_arch='256-256',
qf_arch='256-256',
orthogonal_init=False,
policy_log_std_multiplier=1.0,
policy_log_std_offset=-1.0,
n_epochs=2000,
bc_epochs=0,
n_train_step_per_epoch=1000,
eval_period=10,
eval_n_trajs=5,
cql=ConservativeSAC.get_default_config(),
logging=WandBLogger.get_default_config(),
)
def main(argv):
FLAGS = absl.flags.FLAGS
variant = get_user_flags(FLAGS, FLAGS_DEF)
wandb_logger = WandBLogger(config=FLAGS.logging, variant=variant)
setup_logger(
variant=variant,
exp_id=wandb_logger.experiment_id,
seed=FLAGS.seed,
base_log_dir=FLAGS.logging.output_dir,
include_exp_prefix_sub_dir=False
)
set_random_seed(FLAGS.seed)
eval_sampler = TrajSampler(gym.make(FLAGS.env).unwrapped, FLAGS.max_traj_length)
dataset = get_d4rl_dataset(eval_sampler.env)
dataset['rewards'] = dataset['rewards'] * FLAGS.reward_scale + FLAGS.reward_bias
dataset['actions'] = np.clip(dataset['actions'], -FLAGS.clip_action, FLAGS.clip_action)
observation_dim = eval_sampler.env.observation_space.shape[0]
action_dim = eval_sampler.env.action_space.shape[0]
policy = TanhGaussianPolicy(
observation_dim, action_dim, FLAGS.policy_arch, FLAGS.orthogonal_init,
FLAGS.policy_log_std_multiplier, FLAGS.policy_log_std_offset
)
qf = FullyConnectedQFunction(observation_dim, action_dim, FLAGS.qf_arch, FLAGS.orthogonal_init)
if FLAGS.cql.target_entropy >= 0.0:
FLAGS.cql.target_entropy = -np.prod(eval_sampler.env.action_space.shape).item()
sac = ConservativeSAC(FLAGS.cql, policy, qf)
sampler_policy = SamplerPolicy(sac.policy, sac.train_params['policy'])
viskit_metrics = {}
for epoch in range(FLAGS.n_epochs):
metrics = {'epoch': epoch}
with Timer() as train_timer:
for batch_idx in range(FLAGS.n_train_step_per_epoch):
|
FLAGS_DEF = define_flags_with_default(
env='halfcheetah-medium-v2',
max_traj_length=1000,
seed=42,
save_model=False,
batch_size=256,
reward_scale=1.0,
reward_bias=0.0,
clip_action=0.999,
policy_arch='256-256',
qf_arch='256-256',
orthogonal_init=False,
policy_log_std_multiplier=1.0,
policy_log_std_offset=-1.0,
n_epochs=2000,
bc_epochs=0,
n_train_step_per_epoch=1000,
eval_period=10,
eval_n_trajs=5,
cql=ConservativeSAC.get_default_config(),
logging=WandBLogger.get_default_config(),
)
def main(argv):
FLAGS = absl.flags.FLAGS
variant = get_user_flags(FLAGS, FLAGS_DEF)
wandb_logger = WandBLogger(config=FLAGS.logging, variant=variant)
setup_logger(
variant=variant,
exp_id=wandb_logger.experiment_id,
seed=FLAGS.seed,
base_log_dir=FLAGS.logging.output_dir,
include_exp_prefix_sub_dir=False
)
set_random_seed(FLAGS.seed)
eval_sampler = TrajSampler(gym.make(FLAGS.env).unwrapped, FLAGS.max_traj_length)
dataset = get_d4rl_dataset(eval_sampler.env)
dataset['rewards'] = dataset['rewards'] * FLAGS.reward_scale + FLAGS.reward_bias
dataset['actions'] = np.clip(dataset['actions'], -FLAGS.clip_action, FLAGS.clip_action)
observation_dim = eval_sampler.env.observation_space.shape[0]
action_dim = eval_sampler.env.action_space.shape[0]
policy = TanhGaussianPolicy(
observation_dim, action_dim, FLAGS.policy_arch, FLAGS.orthogonal_init,
FLAGS.policy_log_std_multiplier, FLAGS.policy_log_std_offset
)
qf = FullyConnectedQFunction(observation_dim, action_dim, FLAGS.qf_arch, FLAGS.orthogonal_init)
if FLAGS.cql.target_entropy >= 0.0:
FLAGS.cql.target_entropy = -np.prod(eval_sampler.env.action_space.shape).item()
sac = ConservativeSAC(FLAGS.cql, policy, qf)
sampler_policy = SamplerPolicy(sac.policy, sac.train_params['policy'])
viskit_metrics = {}
for epoch in range(FLAGS.n_epochs):
metrics = {'epoch': epoch}
with Timer() as train_timer:
for batch_idx in range(FLAGS.n_train_step_per_epoch): | batch = batch_to_jax(subsample_batch(dataset, FLAGS.batch_size)) | 3 | 2023-10-25 11:53:25+00:00 | 12k |
Eanya-Tonic/MihiroToolbox | MihiroToolBox.py | [
{
"identifier": "VideoInterface",
"path": "VideoInterface.py",
"snippet": "class VideoInterface(QWidget, Ui_Video):\n\n def __init__(self, parent=None):\n super().__init__(parent)\n self.setupUi(self)\n\n # 编码选项\n self.EncoderType.addItem('x264')\n self.EncoderType.... | import sys
import configparser
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QIcon, QPixmap
from PyQt5.QtWidgets import QApplication, QWidget, QSplashScreen, QDesktopWidget
from qfluentwidgets import SplitFluentWindow, FluentIcon, NavigationItemPosition, setTheme, Theme
from VideoInterface import VideoInterface
from AudioInterface import AudioInterface
from CommonInterface import CommonInterface
from PackageInterface import PackageInterface
from SettingInterface import SettingInterface | 10,515 | # coding:utf-8
class MihiroToolBox(SplitFluentWindow):
def __init__(self):
super().__init__()
self.setWindowTitle('MihiroToolBox')
self.setWindowIcon(QIcon('img/logo.png'))
        # Set the default window size
self.resize(800,800)
        # Center the window on the screen
center_pointer = QDesktopWidget().availableGeometry().center()
x = center_pointer.x()
y = center_pointer.y()
        old_x, old_y, width, height = self.frameGeometry().getRect()
self.move(int(x - width / 2), int(y - height / 2))
        # Add the video sub-interface
self.VideoInterface = VideoInterface(self)
self.addSubInterface(self.VideoInterface, FluentIcon.VIDEO, '视频')
        # Add the audio sub-interface
self.AudioInterface = AudioInterface(self)
self.addSubInterface(self.AudioInterface, FluentIcon.MUSIC, '音频')
        # Add the common sub-interface
| # coding:utf-8
class MihiroToolBox(SplitFluentWindow):
def __init__(self):
super().__init__()
self.setWindowTitle('MihiroToolBox')
self.setWindowIcon(QIcon('img/logo.png'))
        # Set the default window size
self.resize(800,800)
        # Center the window on the screen
center_pointer = QDesktopWidget().availableGeometry().center()
x = center_pointer.x()
y = center_pointer.y()
        old_x, old_y, width, height = self.frameGeometry().getRect()
self.move(int(x - width / 2), int(y - height / 2))
        # Add the video sub-interface
self.VideoInterface = VideoInterface(self)
self.addSubInterface(self.VideoInterface, FluentIcon.VIDEO, '视频')
        # Add the audio sub-interface
self.AudioInterface = AudioInterface(self)
self.addSubInterface(self.AudioInterface, FluentIcon.MUSIC, '音频')
        # Add the common sub-interface | self.CommonInterface = CommonInterface(self) | 2 | 2023-10-25 05:04:58+00:00 | 12k |
RenShuhuai-Andy/TESTA | testa/patch/timesformer.py | [
{
"identifier": "Attention",
"path": "models/timesformer/models/vit.py",
"snippet": "class Attention(nn.Module):\n def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., with_qkv=True):\n super().__init__()\n self.num_heads = num_heads\n h... | from typing import Tuple
from models.timesformer.models.vit import Attention, Block, VisionTransformer
from einops import rearrange
from testa.merge import bipartite_soft_matching, merge_source, merge_wavg
from testa.merge_original import original_bipartite_soft_matching, original_merge_wavg
from testa.utils import parse_r, parse_merging_type
import torch.nn.functional as F
import torch | 9,316 | if merging_type == 'patch':
x = rearrange(x, "b t l d -> (b t) l d", b=B)
else: # merging_type == 'frame'
self._testa_info["size"] = self._testa_info["size"].permute(0, 2, 1, 3)
size_cls = torch.ones(B, self._testa_info["size"].size(1), 1, 1).to(self._testa_info["size"])
self._testa_info["size"] = torch.cat([size_cls, self._testa_info["size"]], dim=-2) # add cls
x = rearrange(x, "b l t d -> b (l t) d", l=L)
return x
class TESTAAttention(Attention):
"""
Modifications:
- Apply proportional attention
- Return the mean of k over heads from attention
"""
def forward(
self, x: torch.Tensor, size: torch.Tensor = None
) -> Tuple[torch.Tensor, torch.Tensor]:
# Note: this is copied from timm.models.vision_transformer.Attention with modifications.
B, N, C = x.shape
qkv = (
self.qkv(x)
.reshape(B, N, 3, self.num_heads, C // self.num_heads)
.permute(2, 0, 3, 1, 4)
)
q, k, v = (
qkv[0],
qkv[1],
qkv[2],
) # make torchscript happy (cannot use tensor as tuple)
attn = (q @ k.transpose(-2, -1)) * self.scale
# Apply proportional attention
if size is not None:
attn = attn + size.log()[:, None, None, :, 0]
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
# Return k as well here
return x, k.mean(1)
def make_testa_class(transformer_class):
class TESTAVisionTransformer(transformer_class):
"""
Modifications:
- Initialize r, token size, and token sources.
"""
def forward_features(self, x, get_all_tokens=True):
B = x.shape[0]
x, T, W = self.patch_embed(x)
cls_tokens = self.cls_token.expand(x.size(0), -1, -1)
x = torch.cat((cls_tokens, x), dim=1)
# resizing the positional embeddings in case they don't match the input at inference
if x.size(1) != self.pos_embed.size(1):
pos_embed = self.pos_embed
cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1)
other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2)
P = int(other_pos_embed.size(2) ** 0.5)
H = x.size(1) // W
other_pos_embed = other_pos_embed.reshape(1, x.size(2), P, P)
new_pos_embed = F.interpolate(other_pos_embed, size=(H, W), mode='nearest')
new_pos_embed = new_pos_embed.flatten(2)
new_pos_embed = new_pos_embed.transpose(1, 2)
new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1)
x = x + new_pos_embed
else:
x = x + self.pos_embed
x = self.pos_drop(x)
# Time Embeddings
if self.attention_type != 'space_only':
cls_tokens = x[:B, 0, :].unsqueeze(1)
x = x[:, 1:]
x = rearrange(x, '(b t) l d -> (b l) t d', b=B, t=T)
# Resizing time embeddings in case they don't match
if T != self.time_embed.size(1):
time_embed = self.time_embed.transpose(1, 2)
new_time_embed = F.interpolate(time_embed, size=(T), mode='nearest')
new_time_embed = new_time_embed.transpose(1, 2)
x = x + new_time_embed
else:
x = x + self.time_embed
x = self.time_drop(x)
x = rearrange(x, '(b l) t d -> b (l t) d', b=B, t=T)
x = torch.cat((cls_tokens, x), dim=1)
# Attention blocks
L = (x.size(1) - 1) // T
for blk in self.blocks:
x, T, L = blk(x, B, T, L)
# Predictions for space-only baseline
if self.attention_type == 'space_only':
x = rearrange(x, '(b t) l d -> b t l d', b=B, t=T)
if get_all_tokens is False:
x = torch.mean(x, 1) # averaging predictions for every frame
else:
x = self.norm(x)
x = rearrange(x, 'b t l d -> b (t l) d', b=B, t=T) # concat tokens of every frame
return x
x = self.norm(x)
if get_all_tokens is False:
                return x[:, 0]
else:
return x
def forward(self, *args, **kwdargs) -> torch.Tensor:
r = self.r.copy() if isinstance(self.r, list) else self.r
merging_type = self.merging_type.copy() if isinstance(self.merging_type, list) else self.merging_type
| '''
Adapted from https://github.com/facebookresearch/ToMe
'''
class TESTABlock(Block):
"""
Modifications:
- Apply TESTA between the attention and mlp blocks
- Compute and propogate token size and potentially the token sources.
"""
def _drop_path1(self, x):
return self.drop_path1(x) if hasattr(self, "drop_path1") else self.drop_path(x)
def _drop_path2(self, x):
return self.drop_path2(x) if hasattr(self, "drop_path2") else self.drop_path(x)
def forward(self, x: torch.Tensor, B, T, L) -> torch.Tensor:
"""
x: [bsz, 1+seq_len*n_frm, dim] for video
"""
attn_size = self._testa_info["size"] if self._testa_info["prop_attn"] else None
merging_type = self._testa_info["merging_type"].pop(0)
if self.attention_type in ['space_only', 'joint_space_time']:
x = self.global_agg(x)
elif self.attention_type == 'divided_space_time':
# Temporal
xt = x[:, 1:, :] # [B, LxT, D]
xt = rearrange(xt, 'b (l t) d -> (b l) t d', b=B, l=L, t=T)
xt_attn, metric_t = self.temporal_attn(self.temporal_norm1(xt))
if self.learnable_temporal_scaling is False:
res_temporal = self.drop_path(xt_attn)
else:
res_temporal = self.drop_path(xt_attn * (torch.tanh(self.temporal_scaling) + 1))
res_temporal = rearrange(res_temporal, '(b l) t d -> b (l t) d', b=B, l=L, t=T)
res_temporal = self.temporal_fc(res_temporal)
xt = x[:, 1:, :] + res_temporal
if 'frame' in merging_type:
xt = self.testa(xt, metric_t, B, L, 'frame')
# reconstruct
T = xt.size(1) // L
# Spatial
init_cls_token = x[:, 0, :].unsqueeze(1) # [B, 1, D]
cls_token = init_cls_token.repeat(1, T, 1) # [B, T, D]
cls_token = rearrange(cls_token, 'b t d -> (b t) d', b=B, t=T).unsqueeze(1) # [BxT, 1, D]
xs = xt # [B, LxT, D]
xs = rearrange(xs, 'b (l t) d -> (b t) l d', b=B, l=L, t=T)
xs = torch.cat((cls_token, xs), 1) # [BxT, 1+L, D]
x_attn, metric_s = self.attn(self.norm1(xs), attn_size) # cal metric for TESTA
res_spatial = self.drop_path(x_attn)
# Taking care of CLS token
cls_token = res_spatial[:, 0, :] # [BxT, 1, D]
cls_token = rearrange(cls_token, '(b t) d -> b t d', b=B, t=T) # [B, T, D]
cls_token = torch.mean(cls_token, 1, True) # averaging for every frame [B, 1, D]
res_spatial = res_spatial[:, 1:, :] # [BxT, L, D]
res_spatial = rearrange(res_spatial, '(b t) l d -> b (l t) d', b=B, l=L, t=T)
res = res_spatial # [B, LxT, D]
x = xt # [B, LxT, D], feature before spatial attn
# Mlp
x = rearrange((x + res), 'b (l t) d -> (b t) l d', b=B, l=L, t=T) # [BxT, L, D]
final_cls = init_cls_token + cls_token
x = torch.cat((final_cls.repeat(x.size(0) // final_cls.size(0), 1, 1), x), 1)
if 'patch' in merging_type:
x = self.testa(x, metric_s, B, L, 'patch')[:, 1:, :] # exclude [cls]
# reconstruct
L = x.size(1)
x = rearrange(x, '(b t) l d -> b (l t) d', b=B, l=L, t=T)
x = torch.cat((final_cls, x), 1)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x, T, L
def global_agg(self, x: torch.Tensor) -> torch.Tensor:
"""
Global aggregation of all patches in all frames
"""
# Note: this is copied from timm.models.vision_transformer.Block with modifications.
attn_size = self._testa_info["size"] if self._testa_info["prop_attn"] else None
x_attn, metric = self.attn(self.norm1(x), attn_size)
x = x + self._drop_path1(x_attn)
r = self._testa_info["r"].pop(0)
if r > 0:
# Apply ToMe here
merge, _ = original_bipartite_soft_matching(
metric,
r,
self._testa_info["class_token"],
self._testa_info["distill_token"],
)
if self._testa_info["trace_source"]:
self._testa_info["source"] = merge_source(
merge, x, self._testa_info["source"]
)
x, self._testa_info["size"] = original_merge_wavg(merge, x, self._testa_info["size"])
x = x + self._drop_path2(self.mlp(self.norm2(x)))
return x
def testa(self, x, metric, B, L, merging_type):
r = self._testa_info["r"].pop(0)
if r > 0:
if merging_type == 'patch':
x = rearrange(x, "(b t) l d -> b t l d", b=B)
metric = rearrange(metric, "(b t) l d -> b t l d", b=B)
else: # merging_type == 'frame'
x = rearrange(x, "b (l t) d -> b l t d", l=L)
metric = rearrange(metric, "(b l) t d -> b l t d", b=B)
if self._testa_info["size"] is not None:
# by default, the size of self._testa_info["size"] is [b, t, l, d]
self._testa_info["size"] = self._testa_info["size"].permute(0, 2, 1, 3)
self._testa_info["size"] = self._testa_info["size"][:, 1:, ...] # remove cls
# Apply TESTA here
merge, _ = bipartite_soft_matching(
metric,
r,
self._testa_info["class_token"],
self._testa_info["distill_token"],
merging_type,
)
if self._testa_info["trace_source"]:
self._testa_info["source"] = merge_source(
merge, x, self._testa_info["source"]
)
x, self._testa_info["size"] = merge_wavg(merge, x, self._testa_info["size"])
if merging_type == 'patch':
x = rearrange(x, "b t l d -> (b t) l d", b=B)
else: # merging_type == 'frame'
self._testa_info["size"] = self._testa_info["size"].permute(0, 2, 1, 3)
size_cls = torch.ones(B, self._testa_info["size"].size(1), 1, 1).to(self._testa_info["size"])
self._testa_info["size"] = torch.cat([size_cls, self._testa_info["size"]], dim=-2) # add cls
x = rearrange(x, "b l t d -> b (l t) d", l=L)
return x
class TESTAAttention(Attention):
"""
Modifications:
- Apply proportional attention
- Return the mean of k over heads from attention
"""
def forward(
self, x: torch.Tensor, size: torch.Tensor = None
) -> Tuple[torch.Tensor, torch.Tensor]:
# Note: this is copied from timm.models.vision_transformer.Attention with modifications.
B, N, C = x.shape
qkv = (
self.qkv(x)
.reshape(B, N, 3, self.num_heads, C // self.num_heads)
.permute(2, 0, 3, 1, 4)
)
q, k, v = (
qkv[0],
qkv[1],
qkv[2],
) # make torchscript happy (cannot use tensor as tuple)
attn = (q @ k.transpose(-2, -1)) * self.scale
# Apply proportional attention
if size is not None:
attn = attn + size.log()[:, None, None, :, 0]
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
# Return k as well here
return x, k.mean(1)
def make_testa_class(transformer_class):
class TESTAVisionTransformer(transformer_class):
"""
Modifications:
- Initialize r, token size, and token sources.
"""
def forward_features(self, x, get_all_tokens=True):
B = x.shape[0]
x, T, W = self.patch_embed(x)
cls_tokens = self.cls_token.expand(x.size(0), -1, -1)
x = torch.cat((cls_tokens, x), dim=1)
# resizing the positional embeddings in case they don't match the input at inference
if x.size(1) != self.pos_embed.size(1):
pos_embed = self.pos_embed
cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1)
other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2)
P = int(other_pos_embed.size(2) ** 0.5)
H = x.size(1) // W
other_pos_embed = other_pos_embed.reshape(1, x.size(2), P, P)
new_pos_embed = F.interpolate(other_pos_embed, size=(H, W), mode='nearest')
new_pos_embed = new_pos_embed.flatten(2)
new_pos_embed = new_pos_embed.transpose(1, 2)
new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1)
x = x + new_pos_embed
else:
x = x + self.pos_embed
x = self.pos_drop(x)
# Time Embeddings
if self.attention_type != 'space_only':
cls_tokens = x[:B, 0, :].unsqueeze(1)
x = x[:, 1:]
x = rearrange(x, '(b t) l d -> (b l) t d', b=B, t=T)
# Resizing time embeddings in case they don't match
if T != self.time_embed.size(1):
time_embed = self.time_embed.transpose(1, 2)
new_time_embed = F.interpolate(time_embed, size=(T), mode='nearest')
new_time_embed = new_time_embed.transpose(1, 2)
x = x + new_time_embed
else:
x = x + self.time_embed
x = self.time_drop(x)
x = rearrange(x, '(b l) t d -> b (l t) d', b=B, t=T)
x = torch.cat((cls_tokens, x), dim=1)
# Attention blocks
L = (x.size(1) - 1) // T
for blk in self.blocks:
x, T, L = blk(x, B, T, L)
# Predictions for space-only baseline
if self.attention_type == 'space_only':
x = rearrange(x, '(b t) l d -> b t l d', b=B, t=T)
if get_all_tokens is False:
x = torch.mean(x, 1) # averaging predictions for every frame
else:
x = self.norm(x)
x = rearrange(x, 'b t l d -> b (t l) d', b=B, t=T) # concat tokens of every frame
return x
x = self.norm(x)
if get_all_tokens is False:
                return x[:, 0]
else:
return x
def forward(self, *args, **kwdargs) -> torch.Tensor:
r = self.r.copy() if isinstance(self.r, list) else self.r
merging_type = self.merging_type.copy() if isinstance(self.merging_type, list) else self.merging_type | self._testa_info["r"] = parse_r(len(self.blocks), r) | 8 | 2023-10-29 12:09:38+00:00 | 12k |
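The `testa` method above hands the similarity metric to `bipartite_soft_matching` and then folds matched tokens together with `merge_wavg`, a size-weighted average so that a token standing for many originals counts proportionally more. A minimal pure-Python sketch of that weighted-merge step (hypothetical helper, not the repo's batched-tensor implementation):

```python
def merge_wavg_sketch(tokens, sizes, pairs):
    """Fold matched token pairs together by a size-weighted average.

    tokens: list of feature vectors (lists of floats); mutated in place
    sizes:  list of ints, how many original tokens each entry represents
    pairs:  list of (src, dst) index pairs; src is folded into dst
    """
    merged = set()
    for src, dst in pairs:
        ws, wd = sizes[src], sizes[dst]
        total = ws + wd
        tokens[dst] = [(a * ws + b * wd) / total
                       for a, b in zip(tokens[src], tokens[dst])]
        sizes[dst] = total
        merged.add(src)
    keep = [i for i in range(len(tokens)) if i not in merged]
    return [tokens[i] for i in keep], [sizes[i] for i in keep]

tokens = [[1.0, 0.0], [3.0, 2.0], [5.0, 4.0]]
sizes = [1, 1, 2]
out_tokens, out_sizes = merge_wavg_sketch(tokens, sizes, [(0, 1)])
```

The `sizes` list plays the same role as `self._testa_info["size"]`: it records how many original tokens each survivor represents, which `TESTAAttention` later folds back into the attention logits as proportional attention.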
OATML-Markslab/ProteinNPT | utils/esm/modules.py | [
{
"identifier": "MultiheadAttention",
"path": "utils/esm/multihead_attention.py",
"snippet": "class MultiheadAttention(nn.Module):\n \"\"\"Multi-headed attention.\n\n See \"Attention Is All You Need\" for more details.\n \"\"\"\n\n def __init__(\n self,\n embed_dim,\n nu... | import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Optional
from .multihead_attention import MultiheadAttention # noqa
from .axial_attention import ColumnSelfAttention, RowSelfAttention
from apex.normalization import FusedLayerNorm as _FusedLayerNorm
from torch.nn import LayerNorm as ESM1bLayerNorm | 8,506 | """Construct a layernorm layer in the TF style (eps inside the sqrt)."""
super().__init__()
self.hidden_size = (hidden_size,) if isinstance(hidden_size, int) else tuple(hidden_size)
self.eps = eps
self.affine = bool(affine)
if self.affine:
self.weight = nn.Parameter(torch.ones(hidden_size))
self.bias = nn.Parameter(torch.zeros(hidden_size))
else:
self.weight, self.bias = None, None
def forward(self, x):
dims = tuple(-(i + 1) for i in range(len(self.hidden_size)))
means = x.mean(dims, keepdim=True)
x_zeromean = x - means
variances = x_zeromean.pow(2).mean(dims, keepdim=True)
x = x_zeromean / torch.sqrt(variances + self.eps)
if self.affine:
x = (self.weight * x) + self.bias
return x
try:
    from apex.normalization import FusedLayerNorm as _FusedLayerNorm

    class ESM1bLayerNorm(_FusedLayerNorm):
        @torch.jit.unused
        def forward(self, x):
            if not x.is_cuda:
                return super().forward(x)
            else:
                with torch.cuda.device(x.device):
                    return super().forward(x)

except ImportError:
    from torch.nn import LayerNorm as ESM1bLayerNorm
class TransformerLayer(nn.Module):
"""Transformer layer block."""
def __init__(
self,
embed_dim,
ffn_embed_dim,
attention_heads,
add_bias_kv=True,
use_esm1b_layer_norm=False,
use_rotary_embeddings: bool = False,
):
super().__init__()
self.embed_dim = embed_dim
self.ffn_embed_dim = ffn_embed_dim
self.attention_heads = attention_heads
self.use_rotary_embeddings = use_rotary_embeddings
self._init_submodules(add_bias_kv, use_esm1b_layer_norm)
def _init_submodules(self, add_bias_kv, use_esm1b_layer_norm):
BertLayerNorm = ESM1bLayerNorm if use_esm1b_layer_norm else ESM1LayerNorm
self.self_attn = MultiheadAttention(
self.embed_dim,
self.attention_heads,
add_bias_kv=add_bias_kv,
add_zero_attn=False,
use_rotary_embeddings=self.use_rotary_embeddings,
)
self.self_attn_layer_norm = BertLayerNorm(self.embed_dim)
self.fc1 = nn.Linear(self.embed_dim, self.ffn_embed_dim)
self.fc2 = nn.Linear(self.ffn_embed_dim, self.embed_dim)
self.final_layer_norm = BertLayerNorm(self.embed_dim)
def forward(
self, x, self_attn_mask=None, self_attn_padding_mask=None, need_head_weights=False
):
residual = x
x = self.self_attn_layer_norm(x)
x, attn = self.self_attn(
query=x,
key=x,
value=x,
key_padding_mask=self_attn_padding_mask,
need_weights=True,
need_head_weights=need_head_weights,
attn_mask=self_attn_mask,
)
x = residual + x
residual = x
x = self.final_layer_norm(x)
x = gelu(self.fc1(x))
x = self.fc2(x)
x = residual + x
return x, attn
class AxialTransformerLayer(nn.Module):
"""Implements an Axial MSA Transformer block."""
def __init__(
self,
embedding_dim: int = 768,
ffn_embedding_dim: int = 3072,
num_attention_heads: int = 8,
dropout: float = 0.1,
attention_dropout: float = 0.1,
activation_dropout: float = 0.1,
max_tokens_per_msa: int = 2**14,
deactivate_col_attention: bool = False,
tranception_attention: bool = False,
num_targets: int = 1,
) -> None:
super().__init__()
# Initialize parameters
self.embedding_dim = embedding_dim
self.dropout_prob = dropout
self.deactivate_col_attention = deactivate_col_attention
| # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
def gelu(x):
"""Implementation of the gelu activation function.
For information: OpenAI GPT's gelu is slightly different
(and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
def symmetrize(x):
"Make layer symmetric in final two dimensions, used for contact prediction."
return x + x.transpose(-1, -2)
def apc(x):
"Perform average product correct, used for contact prediction."
a1 = x.sum(-1, keepdims=True)
a2 = x.sum(-2, keepdims=True)
a12 = x.sum((-1, -2), keepdims=True)
avg = a1 * a2
avg.div_(a12) # in-place to reduce memory
normalized = x - avg
return normalized
class ESM1LayerNorm(nn.Module):
def __init__(self, hidden_size, eps=1e-12, affine=True):
"""Construct a layernorm layer in the TF style (eps inside the sqrt)."""
super().__init__()
self.hidden_size = (hidden_size,) if isinstance(hidden_size, int) else tuple(hidden_size)
self.eps = eps
self.affine = bool(affine)
if self.affine:
self.weight = nn.Parameter(torch.ones(hidden_size))
self.bias = nn.Parameter(torch.zeros(hidden_size))
else:
self.weight, self.bias = None, None
def forward(self, x):
dims = tuple(-(i + 1) for i in range(len(self.hidden_size)))
means = x.mean(dims, keepdim=True)
x_zeromean = x - means
variances = x_zeromean.pow(2).mean(dims, keepdim=True)
x = x_zeromean / torch.sqrt(variances + self.eps)
if self.affine:
x = (self.weight * x) + self.bias
return x
try:
    from apex.normalization import FusedLayerNorm as _FusedLayerNorm

    class ESM1bLayerNorm(_FusedLayerNorm):
        @torch.jit.unused
        def forward(self, x):
            if not x.is_cuda:
                return super().forward(x)
            else:
                with torch.cuda.device(x.device):
                    return super().forward(x)

except ImportError:
    from torch.nn import LayerNorm as ESM1bLayerNorm
class TransformerLayer(nn.Module):
"""Transformer layer block."""
def __init__(
self,
embed_dim,
ffn_embed_dim,
attention_heads,
add_bias_kv=True,
use_esm1b_layer_norm=False,
use_rotary_embeddings: bool = False,
):
super().__init__()
self.embed_dim = embed_dim
self.ffn_embed_dim = ffn_embed_dim
self.attention_heads = attention_heads
self.use_rotary_embeddings = use_rotary_embeddings
self._init_submodules(add_bias_kv, use_esm1b_layer_norm)
def _init_submodules(self, add_bias_kv, use_esm1b_layer_norm):
BertLayerNorm = ESM1bLayerNorm if use_esm1b_layer_norm else ESM1LayerNorm
self.self_attn = MultiheadAttention(
self.embed_dim,
self.attention_heads,
add_bias_kv=add_bias_kv,
add_zero_attn=False,
use_rotary_embeddings=self.use_rotary_embeddings,
)
self.self_attn_layer_norm = BertLayerNorm(self.embed_dim)
self.fc1 = nn.Linear(self.embed_dim, self.ffn_embed_dim)
self.fc2 = nn.Linear(self.ffn_embed_dim, self.embed_dim)
self.final_layer_norm = BertLayerNorm(self.embed_dim)
def forward(
self, x, self_attn_mask=None, self_attn_padding_mask=None, need_head_weights=False
):
residual = x
x = self.self_attn_layer_norm(x)
x, attn = self.self_attn(
query=x,
key=x,
value=x,
key_padding_mask=self_attn_padding_mask,
need_weights=True,
need_head_weights=need_head_weights,
attn_mask=self_attn_mask,
)
x = residual + x
residual = x
x = self.final_layer_norm(x)
x = gelu(self.fc1(x))
x = self.fc2(x)
x = residual + x
return x, attn
class AxialTransformerLayer(nn.Module):
"""Implements an Axial MSA Transformer block."""
def __init__(
self,
embedding_dim: int = 768,
ffn_embedding_dim: int = 3072,
num_attention_heads: int = 8,
dropout: float = 0.1,
attention_dropout: float = 0.1,
activation_dropout: float = 0.1,
max_tokens_per_msa: int = 2**14,
deactivate_col_attention: bool = False,
tranception_attention: bool = False,
num_targets: int = 1,
) -> None:
super().__init__()
# Initialize parameters
self.embedding_dim = embedding_dim
self.dropout_prob = dropout
self.deactivate_col_attention = deactivate_col_attention
| row_self_attention = RowSelfAttention( | 2 | 2023-10-28 11:41:05+00:00 | 12k |
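`apc` above implements the average product correction used in contact prediction: each entry has the product of its row sum and column sum, divided by the grand total, subtracted out. The same arithmetic on a small 2x2 list, mirroring `a1`, `a2`, and `a12` term by term (illustrative only):

```python
def apc_2d(x):
    """Average product correction on a 2-D list, mirroring apc() above."""
    n_rows, n_cols = len(x), len(x[0])
    row_sums = [sum(row) for row in x]                                       # a1
    col_sums = [sum(x[i][j] for i in range(n_rows)) for j in range(n_cols)]  # a2
    total = sum(row_sums)                                                    # a12
    return [[x[i][j] - row_sums[i] * col_sums[j] / total
             for j in range(n_cols)] for i in range(n_rows)]

corrected = apc_2d([[1.0, 2.0], [3.0, 4.0]])
```

For this input the correction removes exactly the rank-one "background" signal, leaving entries of equal magnitude and alternating sign.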
CVHub520/yolov5_obb | export.py | [
{
"identifier": "Conv",
"path": "models/common.py",
"snippet": "class Conv(nn.Module):\n # Standard convolution\n def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups\n super().__init__()\n self.conv = nn.Conv2d(c1, c2, k, s, ... | import argparse
import json
import os
import subprocess
import sys
import time
import torch
import torch.nn as nn
import onnx
import onnxsim
import coremltools as ct
import openvino.inference_engine as ie
import tensorflow as tf
import re
import tensorflowjs as tfjs
import tensorrt as trt
from pathlib import Path
from torch.utils.mobile_optimizer import optimize_for_mobile
from models.common import Conv
from models.experimental import attempt_load
from models.yolo import Detect
from utils.activations import SiLU
from utils.datasets import LoadImages
from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,
url2file)
from utils.torch_utils import select_device
from tensorflow import keras
from models.tf import TFDetect, TFModel
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from models.tf import representative_dataset_gen | 8,879 | converter.target_spec.supported_types = []
converter.inference_input_type = tf.uint8 # or tf.int8
converter.inference_output_type = tf.uint8 # or tf.int8
converter.experimental_new_quantizer = False
f = str(file).replace('.pt', '-int8.tflite')
tflite_model = converter.convert()
open(f, "wb").write(tflite_model)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
# YOLOv5 TensorFlow.js export
try:
check_requirements(('tensorflowjs',))
LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
f = str(file).replace('.pt', '_web_model') # js dir
f_pb = file.with_suffix('.pb') # *.pb path
f_json = f + '/model.json' # *.json path
cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \
f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}"
subprocess.run(cmd, shell=True)
json = open(f_json).read()
with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order
subst = re.sub(
r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}}}',
r'{"outputs": {"Identity": {"name": "Identity"}, '
r'"Identity_1": {"name": "Identity_1"}, '
r'"Identity_2": {"name": "Identity_2"}, '
r'"Identity_3": {"name": "Identity_3"}}}',
json)
j.write(subst)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_engine(model, im, file, train, half, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
try:
check_requirements(('tensorrt',))
opset = (12, 13)[trt.__version__[0] == '8'] # test on TensorRT 7.x and 8.x
export_onnx(model, im, file, opset, train, False, simplify)
onnx = file.with_suffix('.onnx')
assert onnx.exists(), f'failed to export ONNX file: {onnx}'
LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
f = file.with_suffix('.engine') # TensorRT engine file
logger = trt.Logger(trt.Logger.INFO)
if verbose:
logger.min_severity = trt.Logger.Severity.VERBOSE
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.max_workspace_size = workspace * 1 << 30
flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
network = builder.create_network(flag)
parser = trt.OnnxParser(network, logger)
if not parser.parse_from_file(str(onnx)):
raise RuntimeError(f'failed to load ONNX file: {onnx}')
inputs = [network.get_input(i) for i in range(network.num_inputs)]
outputs = [network.get_output(i) for i in range(network.num_outputs)]
LOGGER.info(f'{prefix} Network Description:')
for inp in inputs:
LOGGER.info(f'{prefix}\tinput "{inp.name}" with shape {inp.shape} and dtype {inp.dtype}')
for out in outputs:
LOGGER.info(f'{prefix}\toutput "{out.name}" with shape {out.shape} and dtype {out.dtype}')
half &= builder.platform_has_fast_fp16
LOGGER.info(f'{prefix} building FP{16 if half else 32} engine in {f}')
if half:
config.set_flag(trt.BuilderFlag.FP16)
with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
t.write(engine.serialize())
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
@torch.no_grad()
def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
weights=ROOT / 'yolov5s.pt', # weights path
imgsz=(640, 640), # image (height, width)
batch_size=1, # batch size
device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
include=('torchscript', 'onnx'), # include formats
half=False, # FP16 half-precision export
inplace=False, # set YOLOv5 Detect() inplace=True
train=False, # model.train() mode
optimize=False, # TorchScript: optimize for mobile
int8=False, # CoreML/TF INT8 quantization
dynamic=False, # ONNX/TF: dynamic axes
simplify=False, # ONNX: simplify model
opset=12, # ONNX: opset version
verbose=False, # TensorRT: verbose log
workspace=4, # TensorRT: workspace size (GB)
nms=False, # TF: add NMS to model
agnostic_nms=False, # TF: add agnostic NMS to model
topk_per_class=100, # TF.js NMS: topk per class to keep
topk_all=100, # TF.js NMS: topk for all classes to keep
iou_thres=0.45, # TF.js NMS: IoU threshold
conf_thres=0.25 # TF.js NMS: confidence threshold
):
t = time.time()
include = [x.lower() for x in include]
tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs')) # TensorFlow exports
| # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
Format | Example | `--include ...` argument
--- | --- | ---
PyTorch | yolov5s.pt | -
TorchScript | yolov5s.torchscript | `torchscript`
ONNX | yolov5s.onnx | `onnx`
CoreML | yolov5s.mlmodel | `coreml`
OpenVINO | yolov5s_openvino_model/ | `openvino`
TensorFlow SavedModel | yolov5s_saved_model/ | `saved_model`
TensorFlow GraphDef | yolov5s.pb | `pb`
TensorFlow Lite | yolov5s.tflite | `tflite`
TensorFlow.js | yolov5s_web_model/ | `tfjs`
TensorRT | yolov5s.engine | `engine`
Usage:
$ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml openvino saved_model tflite tfjs
Inference:
$ python path/to/detect.py --weights yolov5s.pt
yolov5s.torchscript
yolov5s.onnx
yolov5s.mlmodel (under development)
yolov5s_openvino_model (under development)
yolov5s_saved_model
yolov5s.pb
yolov5s.tflite
yolov5s.engine
TensorFlow.js:
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
$ npm install
$ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
$ npm start
"""
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # YOLOv5 root directory
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
# YOLOv5 TorchScript model export
try:
LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
f = file.with_suffix('.torchscript')
ts = torch.jit.trace(model, im, strict=False)
d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap()
(optimize_for_mobile(ts) if optimize else ts).save(str(f), _extra_files=extra_files)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'{prefix} export failure: {e}')
def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):
# YOLOv5 ONNX export
try:
check_requirements(('onnx',))
LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
f = file.with_suffix('.onnx')
torch.onnx.export(model, im, f, verbose=False, opset_version=opset,
training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
do_constant_folding=not train,
input_names=['images'],
output_names=['output'],
dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'}, # shape(1,3,640,640)
'output': {0: 'batch', 1: 'anchors'} # shape(1,25200,85)
} if dynamic else None)
# Checks
model_onnx = onnx.load(f) # load onnx model
onnx.checker.check_model(model_onnx) # check onnx model
# LOGGER.info(onnx.helper.printable_graph(model_onnx.graph)) # print
# Simplify
if simplify:
try:
check_requirements(('onnx-simplifier',))
LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
model_onnx, check = onnxsim.simplify(
model_onnx,
dynamic_input_shape=dynamic,
input_shapes={'images': list(im.shape)} if dynamic else None)
assert check, 'assert check failed'
onnx.save(model_onnx, f)
except Exception as e:
LOGGER.info(f'{prefix} simplifier failure: {e}')
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
LOGGER.info(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'")
except Exception as e:
LOGGER.info(f'{prefix} export failure: {e}')
def export_coreml(model, im, file, prefix=colorstr('CoreML:')):
# YOLOv5 CoreML export
ct_model = None
try:
check_requirements(('coremltools',))
LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
f = file.with_suffix('.mlmodel')
model.train() # CoreML exports should be placed in model.train() mode
ts = torch.jit.trace(model, im, strict=False) # TorchScript model
ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
ct_model.save(f)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
return ct_model
def export_openvino(model, im, file, prefix=colorstr('OpenVINO:')):
# YOLOv5 OpenVINO export
try:
check_requirements(('openvino-dev',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/
LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
f = str(file).replace('.pt', '_openvino_model' + os.sep)
cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f}"
subprocess.check_output(cmd, shell=True)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_saved_model(model, im, file, dynamic,
tf_nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45,
conf_thres=0.25, prefix=colorstr('TensorFlow saved_model:')):
# YOLOv5 TensorFlow saved_model export
keras_model = None
try:
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
f = str(file).replace('.pt', '_saved_model')
batch_size, ch, *imgsz = list(im.shape) # BCHW
tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
im = tf.zeros((batch_size, *imgsz, 3)) # BHWC order for TensorFlow
y = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
inputs = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
keras_model = keras.Model(inputs=inputs, outputs=outputs)
keras_model.trainable = False
keras_model.summary()
keras_model.save(f, save_format='tf')
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
return keras_model
def export_pb(keras_model, im, file, prefix=colorstr('TensorFlow GraphDef:')):
# YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
try:
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
f = file.with_suffix('.pb')
m = tf.function(lambda x: keras_model(x)) # full model
m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(m)
frozen_func.graph.as_graph_def()
tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_tflite(keras_model, im, file, int8, data, ncalib, prefix=colorstr('TensorFlow Lite:')):
# YOLOv5 TensorFlow Lite export
try:
LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
batch_size, ch, *imgsz = list(im.shape) # BCHW
f = str(file).replace('.pt', '-fp16.tflite')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.target_spec.supported_types = [tf.float16]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
if int8:
dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False) # representative data
converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = []
converter.inference_input_type = tf.uint8 # or tf.int8
converter.inference_output_type = tf.uint8 # or tf.int8
converter.experimental_new_quantizer = False
f = str(file).replace('.pt', '-int8.tflite')
tflite_model = converter.convert()
open(f, "wb").write(tflite_model)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_tfjs(keras_model, im, file, prefix=colorstr('TensorFlow.js:')):
# YOLOv5 TensorFlow.js export
try:
check_requirements(('tensorflowjs',))
LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
f = str(file).replace('.pt', '_web_model') # js dir
f_pb = file.with_suffix('.pb') # *.pb path
f_json = f + '/model.json' # *.json path
cmd = f"tensorflowjs_converter --input_format=tf_frozen_model " \
f"--output_node_names='Identity,Identity_1,Identity_2,Identity_3' {f_pb} {f}"
subprocess.run(cmd, shell=True)
json = open(f_json).read()
with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order
subst = re.sub(
r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}, '
r'"Identity.?.?": {"name": "Identity.?.?"}}}',
r'{"outputs": {"Identity": {"name": "Identity"}, '
r'"Identity_1": {"name": "Identity_1"}, '
r'"Identity_2": {"name": "Identity_2"}, '
r'"Identity_3": {"name": "Identity_3"}}}',
json)
j.write(subst)
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
def export_engine(model, im, file, train, half, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
try:
check_requirements(('tensorrt',))
opset = (12, 13)[trt.__version__[0] == '8'] # test on TensorRT 7.x and 8.x
export_onnx(model, im, file, opset, train, False, simplify)
onnx = file.with_suffix('.onnx')
assert onnx.exists(), f'failed to export ONNX file: {onnx}'
LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
f = file.with_suffix('.engine') # TensorRT engine file
logger = trt.Logger(trt.Logger.INFO)
if verbose:
logger.min_severity = trt.Logger.Severity.VERBOSE
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.max_workspace_size = workspace * 1 << 30
flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
network = builder.create_network(flag)
parser = trt.OnnxParser(network, logger)
if not parser.parse_from_file(str(onnx)):
raise RuntimeError(f'failed to load ONNX file: {onnx}')
inputs = [network.get_input(i) for i in range(network.num_inputs)]
outputs = [network.get_output(i) for i in range(network.num_outputs)]
LOGGER.info(f'{prefix} Network Description:')
for inp in inputs:
LOGGER.info(f'{prefix}\tinput "{inp.name}" with shape {inp.shape} and dtype {inp.dtype}')
for out in outputs:
LOGGER.info(f'{prefix}\toutput "{out.name}" with shape {out.shape} and dtype {out.dtype}')
half &= builder.platform_has_fast_fp16
LOGGER.info(f'{prefix} building FP{16 if half else 32} engine in {f}')
if half:
config.set_flag(trt.BuilderFlag.FP16)
with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
t.write(engine.serialize())
LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
except Exception as e:
LOGGER.info(f'\n{prefix} export failure: {e}')
@torch.no_grad()
def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
weights=ROOT / 'yolov5s.pt', # weights path
imgsz=(640, 640), # image (height, width)
batch_size=1, # batch size
device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
include=('torchscript', 'onnx'), # include formats
half=False, # FP16 half-precision export
inplace=False, # set YOLOv5 Detect() inplace=True
train=False, # model.train() mode
optimize=False, # TorchScript: optimize for mobile
int8=False, # CoreML/TF INT8 quantization
dynamic=False, # ONNX/TF: dynamic axes
simplify=False, # ONNX: simplify model
opset=12, # ONNX: opset version
verbose=False, # TensorRT: verbose log
workspace=4, # TensorRT: workspace size (GB)
nms=False, # TF: add NMS to model
agnostic_nms=False, # TF: add agnostic NMS to model
topk_per_class=100, # TF.js NMS: topk per class to keep
topk_all=100, # TF.js NMS: topk for all classes to keep
iou_thres=0.45, # TF.js NMS: IoU threshold
conf_thres=0.25 # TF.js NMS: confidence threshold
):
t = time.time()
include = [x.lower() for x in include]
tf_exports = list(x in include for x in ('saved_model', 'pb', 'tflite', 'tfjs')) # TensorFlow exports | file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) | 12 | 2023-10-31 06:06:41+00:00 | 12k |
Kiteretsu77/VCISR-official | test_code/utils.py | [
{
"identifier": "RRDBNet",
"path": "architecture/rrdb.py",
"snippet": "class RRDBNet(nn.Module):\n \"\"\"Networks consisting of Residual in Residual Dense Block, which is used\n in ESRGAN.\n\n ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks.\n\n We extend ESRGAN for scale x... | import os, sys
import torch
from architecture.rrdb import RRDBNet
from architecture.grl import GRL
from architecture.swinir import SwinIR | 8,555 |
# Import files from same folder
root_path = os.path.abspath('.')
sys.path.append(root_path)
def load_grl(generator_weight_PATH, print_options=True):
''' A simpler API to load GRL model
Args:
generator_weight_PATH (str): The path to the weight
print_options (bool): whether to print options to show what kinds of setting is used
Returns:
generator (torch): the generator instance of the model
'''
# Load the checkpoint
checkpoint_g = torch.load(generator_weight_PATH)
# Find the generator weight
if 'model_state_dict' in checkpoint_g:
weight = checkpoint_g['model_state_dict']
# GRL Small
| generator = GRL( | 1 | 2023-10-29 04:33:38+00:00 | 12k |
DataCanvasIO/LMS | lms/runtime/evaluation/benchmark/eval.py | [
{
"identifier": "ARCDataset",
"path": "lms/runtime/evaluation/benchmark/eval_dataset.py",
"snippet": "class ARCDataset():\n @staticmethod\n def load(path: str = basepath + \"/data/ARC/ARC-c/ARC-Challenge-Dev.jsonl\"):\n with open(path, 'r', errors='ignore') as in_f:\n rows = []\n... | import os
import torch
import argparse
import json
from lms.runtime.evaluation.benchmark.eval_dataset import ARCDataset, MMLUDataset, CMMLUDataset, CEvalDataset, \
AGIEvalDataset, \
BBHDataset
from lms.runtime.evaluation.benchmark.eval_metric import AccEvaluator, MCAccEvaluator
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline, pipeline | 7,340 |
def parse_args():
parser = argparse.ArgumentParser(description='Run an evaluation task')
parser.add_argument('--model_path', help='model_path')
parser.add_argument('--task', help='task')
parser.add_argument('--output_path', help='output_path')
args = parser.parse_args()
return args
def trunk(text, text_length=800):
return str(text[len(text) - text_length:])
def infer(model_path, datalist,task):
try:
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer, torch_dtype=torch.float16)
except:
pipe = pipeline("text2text-generation", model=model_path, device_map="auto",trust_remote_code=True, torch_dtype=torch.float16)
predict = []
datalist = list(map(trunk, datalist))
for text in tqdm(datalist):
if task=="BigBench":
out = pipe(text, max_new_tokens=32)
else:
out = pipe(text, max_new_tokens=4)
predict.append(out[0]["generated_text"][len(text):])
return predict
task_map = {"ARC": ARCDataset, "MMLU": MMLUDataset, "CMMLU": CMMLUDataset, "ceval": CEvalDataset,
"AGIEval": AGIEvalDataset, "BigBench": BBHDataset}
| eval_map = {"ARC": AccEvaluator, "MMLU": AccEvaluator, "CMMLU": AccEvaluator, "ceval": AccEvaluator, | 6 | 2023-10-30 10:50:32+00:00 | 12k |
aws-samples/amazon-bedrock-serverless-prompt-chaining | cdk_stacks.py | [
{
"identifier": "WebappStack",
"path": "stacks/webapp_stack.py",
"snippet": "class WebappStack(Stack):\n def __init__(\n self, scope: Construct, construct_id: str, parent_domain: str, **kwargs\n ) -> None:\n super().__init__(scope, construct_id, **kwargs)\n\n # Set up load-bal... | from aws_cdk import (
App,
Environment,
)
from stacks.webapp_stack import WebappStack
from stacks.blog_post_stack import BlogPostStack
from stacks.trip_planner_stack import TripPlannerStack
from stacks.story_writer_stack import StoryWriterStack
from stacks.movie_pitch_stack import MoviePitchStack
from stacks.meal_planner_stack import MealPlannerStack
from stacks.most_popular_repo_bedrock_agent_stack import (
MostPopularRepoBedrockAgentStack,
)
from stacks.most_popular_repo_langchain_stack import (
MostPopularRepoLangchainStack,
)
from stacks.alarms_stack import AlarmsStack
import os | 9,871 |
app = App()
env = Environment(account=os.environ["CDK_DEFAULT_ACCOUNT"], region="us-west-2")
WebappStack(
app,
"PromptChaining-StreamlitWebapp",
env=env,
parent_domain="TODO FILL IN",
)
BlogPostStack(
app,
"PromptChaining-BlogPostDemo",
env=env,
)
TripPlannerStack(
app,
"PromptChaining-TripPlannerDemo",
env=env,
)
StoryWriterStack(
app,
"PromptChaining-StoryWriterDemo",
env=env,
)
MoviePitchStack(
app,
"PromptChaining-MoviePitchDemo",
env=env,
)
MealPlannerStack(
app,
"PromptChaining-MealPlannerDemo",
env=env,
)
MostPopularRepoBedrockAgentStack(
app,
"PromptChaining-MostPopularRepoBedrockAgentsDemo",
env=env,
)
| MostPopularRepoLangchainStack( | 7 | 2023-10-26 22:17:30+00:00 | 12k |
chenran-li/RQL-release | stable_baselines3/ppo/ppo.py | [
{
"identifier": "OnPolicyAlgorithm",
"path": "stable_baselines3/common/on_policy_algorithm.py",
"snippet": "class OnPolicyAlgorithm(BaseAlgorithm):\n \"\"\"\n The base for On-Policy algorithms (ex: A2C/PPO).\n\n :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)\n :param env:... | import warnings
import numpy as np
import torch as th
from typing import Any, Dict, Optional, Type, TypeVar, Union
from gym import spaces
from torch.nn import functional as F
from stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm
from stable_baselines3.common.policies import ActorCriticCnnPolicy, ActorCriticPolicy, BasePolicy, MultiInputActorCriticPolicy
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import explained_variance, get_schedule_fn | 10,399 |
SelfPPO = TypeVar("SelfPPO", bound="PPO")
class PPO(OnPolicyAlgorithm):
"""
Proximal Policy Optimization algorithm (PPO) (clip version)
Paper: https://arxiv.org/abs/1707.06347
Code: This implementation borrows code from OpenAI Spinning Up (https://github.com/openai/spinningup/)
https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail and
Stable Baselines (PPO2 from https://github.com/hill-a/stable-baselines)
Introduction to PPO: https://spinningup.openai.com/en/latest/algorithms/ppo.html
:param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
:param env: The environment to learn from (if registered in Gym, can be str)
:param learning_rate: The learning rate, it can be a function
of the current progress remaining (from 1 to 0)
:param n_steps: The number of steps to run for each environment per update
(i.e. rollout buffer size is n_steps * n_envs where n_envs is number of environment copies running in parallel)
NOTE: n_steps * n_envs must be greater than 1 (because of the advantage normalization)
See https://github.com/pytorch/pytorch/issues/29372
:param batch_size: Minibatch size
:param n_epochs: Number of epoch when optimizing the surrogate loss
:param gamma: Discount factor
:param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator
:param clip_range: Clipping parameter, it can be a function of the current progress
remaining (from 1 to 0).
:param clip_range_vf: Clipping parameter for the value function,
it can be a function of the current progress remaining (from 1 to 0).
This is a parameter specific to the OpenAI implementation. If None is passed (default),
no clipping will be done on the value function.
IMPORTANT: this clipping depends on the reward scaling.
:param normalize_advantage: Whether to normalize or not the advantage
:param ent_coef: Entropy coefficient for the loss calculation
:param vf_coef: Value function coefficient for the loss calculation
:param max_grad_norm: The maximum value for the gradient clipping
:param use_sde: Whether to use generalized State Dependent Exploration (gSDE)
instead of action noise exploration (default: False)
:param sde_sample_freq: Sample a new noise matrix every n steps when using gSDE
Default: -1 (only sample at the beginning of the rollout)
:param target_kl: Limit the KL divergence between updates,
because the clipping is not enough to prevent large update
see issue #213 (cf https://github.com/hill-a/stable-baselines/issues/213)
By default, there is no limit on the kl div.
:param tensorboard_log: the log location for tensorboard (if None, no logging)
:param policy_kwargs: additional arguments to be passed to the policy on creation
:param verbose: Verbosity level: 0 for no output, 1 for info messages (such as device or wrappers used), 2 for
debug messages
:param seed: Seed for the pseudo random generators
:param device: Device (cpu, cuda, ...) on which the code should be run.
Setting it to auto, the code will be run on the GPU if possible.
:param _init_setup_model: Whether or not to build the network at the creation of the instance
"""
| policy_aliases: Dict[str, Type[BasePolicy]] = { | 3 | 2023-10-28 01:09:21+00:00 | 12k |
AmgdGocha/DriveFS-Sleuth | drivefs_sleuth/setup.py | [
{
"identifier": "get_last_pid",
"path": "drivefs_sleuth/utils.py",
"snippet": "def get_last_pid(drivefs_path):\n try:\n with open(os.path.join(drivefs_path, 'pid.txt')) as pid_file:\n return pid_file.read()\n except OSError:\n return -1"
},
{
"identifier": "get_ite... | import os.path
import datetime
from enum import Enum
from collections import OrderedDict
from drivefs_sleuth.utils import get_last_pid
from drivefs_sleuth.utils import get_item_info
from drivefs_sleuth.utils import get_last_sync
from drivefs_sleuth.utils import parse_protobuf
from drivefs_sleuth.utils import get_max_root_ids
from drivefs_sleuth.utils import get_deleted_items
from drivefs_sleuth.utils import get_mirrored_items
from drivefs_sleuth.utils import get_item_properties
from drivefs_sleuth.utils import get_target_stable_id
from drivefs_sleuth.utils import get_connected_devices
from drivefs_sleuth.utils import get_parent_relationships
from drivefs_sleuth.utils import get_content_caches_paths
from drivefs_sleuth.utils import get_file_content_cache_path
from drivefs_sleuth.utils import get_shared_with_me_without_link
from drivefs_sleuth.utils import get_mirroring_roots_for_account
from drivefs_sleuth.synced_files_tree import File
from drivefs_sleuth.synced_files_tree import Link
from drivefs_sleuth.synced_files_tree import Directory
from drivefs_sleuth.synced_files_tree import DummyItem
from drivefs_sleuth.synced_files_tree import MirrorItem
from drivefs_sleuth.synced_files_tree import SyncedFilesTree
from drivefs_sleuth.tasks import get_accounts | 7,644 | child = orphan_dirs.get(child_id, None)
if child:
child.tree_path = f'{current_parent_dir.tree_path}\\{child.local_title}'
del orphan_dirs[child_id]
else:
child = Directory(child_info[1], child_info[2], child_info[3], child_info[4], child_info[5],
child_info[6], child_info[7], child_info[8], child_info[9],
child_properties,
f'{current_parent_dir.tree_path}\\{child_info[3]}', child_info[10])
added_dirs[child_id] = child
current_parent_dir.add_item(child)
# TODO: check if I can add a link in the shared with me
for shared_with_me_item_info in get_shared_with_me_without_link(self.__profile_path):
shared_with_me_item_properties = get_item_properties(self.__profile_path, shared_with_me_item_info[1])
if shared_with_me_item_info[0] == 0:
content_cache_path = get_file_content_cache_path(
shared_with_me_item_properties.get('content-entry', None), content_caches_paths)
shared_with_me_file = File(shared_with_me_item_info[1], shared_with_me_item_info[2],
shared_with_me_item_info[3], shared_with_me_item_info[4],
shared_with_me_item_info[5], shared_with_me_item_info[6],
shared_with_me_item_info[7], shared_with_me_item_info[8],
shared_with_me_item_info[9], shared_with_me_item_properties,
f'Shared with me\\{shared_with_me_item_info[3]}', content_cache_path,
shared_with_me_item_info[10])
self.__synced_files_tree.add_shared_with_me_item(shared_with_me_file)
if shared_with_me_file:
self.__synced_files_tree.add_recoverable_item_from_cache(shared_with_me_file)
else:
shared_with_me_item = orphan_dirs.get(shared_with_me_item_info[1], None)
if shared_with_me_item:
del orphan_dirs[shared_with_me_item_info[1]]
else:
shared_with_me_item = Directory(shared_with_me_item_info[1], shared_with_me_item_info[2],
shared_with_me_item_info[3], shared_with_me_item_info[4],
shared_with_me_item_info[5], shared_with_me_item_info[6],
shared_with_me_item_info[7], shared_with_me_item_info[8],
shared_with_me_item_info[9], shared_with_me_item_properties,
f'{current_parent_dir.tree_path}\\{shared_with_me_item_info[3]}',
shared_with_me_item_info[10])
self.__synced_files_tree.add_shared_with_me_item(shared_with_me_item)
for orphan_id, orphan_dir in orphan_dirs.items():
self.__synced_files_tree.add_orphan_item(orphan_dir)
mirrored_items = get_mirrored_items(self.__profile_path)
for item in mirrored_items:
self.__synced_files_tree.add_mirrored_item(
MirrorItem(item[0], item[1], item[2], item[3], item[4], item[5], item[6], item[7], item[8], item[9],
item[10], item[11], item[12], item[13], item[14], item[15], item[16]
)
)
for deleted_item in get_deleted_items(self.__profile_path):
parsed_buf = parse_protobuf(deleted_item[1])
properties = {}
for index, props in parsed_buf.items():
if index == '55' or index.startswith('55-'):
for prop in props:
if isinstance(prop, dict):
properties[prop['1']] = prop[[key for key in prop.keys() if key != '1'][0]]
elif isinstance(prop, list):
for p in prop:
properties[p['1']] = p[[key for key in p.keys() if key != '1'][0]]
if parsed_buf['4'] == 'application/vnd.google-apps.folder':
self.__synced_files_tree.add_recovered_deleted_item(
Directory(deleted_item[0], parsed_buf.get('1', ''), parsed_buf.get('3', ''),
parsed_buf.get('4', ''), parsed_buf.get('63', 0), parsed_buf.get('14', 0),
parsed_buf.get('11', 0), parsed_buf.get('13', 0), parsed_buf.get('7', 1),
properties, parsed_buf.get('3', ''), deleted_item[1])
)
elif parsed_buf['4'] == 'application/vnd.google-apps.shortcut':
target_item = None
target_info = parsed_buf.get('132', None)
if target_info:
target_item = self.__synced_files_tree.get_item_by_id(target_info['2'])
self.__synced_files_tree.add_recovered_deleted_item(
Link(deleted_item[0], parsed_buf.get('1', ''), parsed_buf.get('3', ''), parsed_buf.get('4', ''),
parsed_buf.get('63', 0), parsed_buf.get('14', 0), parsed_buf.get('11', 0),
parsed_buf.get('13', 0), parsed_buf.get('7', 1), properties, parsed_buf.get('3', ''),
target_item, deleted_item[1])
)
else:
content_cache_path = get_file_content_cache_path(
properties.get('content-entry', None), content_caches_paths)
recovered_file = File(deleted_item[0], parsed_buf.get('1', ''), parsed_buf.get('3', ''),
parsed_buf.get('4', ''), parsed_buf.get('63', 0), parsed_buf.get('14', 0),
parsed_buf.get('11', 0), parsed_buf.get('13', 0), parsed_buf.get('7', 1),
properties, parsed_buf.get('3', ''), content_cache_path, deleted_item[1])
self.__synced_files_tree.add_recovered_deleted_item(recovered_file)
if content_cache_path:
self.__synced_files_tree.add_recoverable_item_from_cache(recovered_file)
class Setup:
def __init__(self, drivefs_path, accounts=None):
self.__drivefs_path = drivefs_path
self.__last_sync_date = datetime.datetime.fromtimestamp(get_last_sync(drivefs_path), datetime.timezone.utc)
self.__max_root_ids = get_max_root_ids(drivefs_path)
self.__last_pid = get_last_pid(drivefs_path)
self.__connected_devices = []
for connected_device in get_connected_devices(drivefs_path):
device = {
"media_id": connected_device[0],
"name": connected_device[1],
"last_mount_point": connected_device[2],
"ignore": connected_device[4],
}
if int(connected_device[3]) == -1:
device["capacity"] = connected_device[3]
else:
device["capacity"] = round(int(connected_device[3]) / 1e+9, 2)
self.__connected_devices.append(device)
if not accounts:
accounts = []
self.__accounts = []
|
class StorageDestinations(Enum):
DRIVE = "DRIVE"
PHOTOS = "PHOTOS"
class Account:
def __init__(self, drivefs_path, account_id, email, is_logged_in, mirroring_roots, properties):
self.__profile_path = os.path.join(drivefs_path, account_id)
self.__account_id = account_id
self.__account_email = email
self.__is_logged_in = is_logged_in
self.__synced_files_tree = None
if is_logged_in:
self._construct_synced_files_trees()
self.__mirroring_roots = []
for mirroring_root in mirroring_roots:
mirroring_root_info = {
'root_id': mirroring_root[1],
'media_id': mirroring_root[2],
'title': mirroring_root[3],
'root_path': mirroring_root[4],
'sync_type': mirroring_root[5],
'last_seen_absolute_path': mirroring_root[7],
}
if mirroring_root[6] == 1:
mirroring_root_info['destination'] = StorageDestinations.DRIVE.value
else:
mirroring_root_info['destination'] = StorageDestinations.PHOTOS.value
self.__mirroring_roots.append(mirroring_root_info)
self.__name = properties['name']
self.__photo_url = properties['photo_url']
def get_profile_path(self):
return self.__profile_path
def get_account_id(self):
return self.__account_id
def get_account_email(self):
return self.__account_email
def is_logged_in(self):
return self.__is_logged_in
def get_synced_files_tree(self):
return self.__synced_files_tree
def get_mirroring_roots(self):
return self.__mirroring_roots
def get_name(self):
return self.__name
def get_photo_url(self):
return self.__photo_url
def _construct_synced_files_trees(self):
parent_relationships = get_parent_relationships(self.__profile_path)
root_info = get_item_info(self.__profile_path, parent_relationships[0][0])
root = Directory(root_info[1], root_info[2], root_info[3], root_info[4], root_info[5], root_info[6],
root_info[7], root_info[8], root_info[9],
get_item_properties(self.__profile_path, root_info[1]), root_info[3], root_info[10])
self.__synced_files_tree = SyncedFilesTree(root)
content_caches_paths = get_content_caches_paths(os.path.join(self.__profile_path, 'content_cache'))
parent_relationships_dict = OrderedDict()
for parent, child in parent_relationships:
if parent not in parent_relationships_dict.keys():
parent_relationships_dict[parent] = []
parent_relationships_dict[parent].append(child)
added_dirs = {self.__synced_files_tree.get_root().get_stable_id(): self.__synced_files_tree.get_root()}
orphan_dirs = {}
current_parent_dir = self.__synced_files_tree.get_root()
for parent_id, childs_ids in parent_relationships_dict.items():
if parent_id != current_parent_dir.get_stable_id():
if parent_id in added_dirs:
current_parent_dir = added_dirs[parent_id]
elif parent_id in orphan_dirs:
current_parent_dir = orphan_dirs[parent_id]
else:
parent_info = get_item_info(self.__profile_path, parent_id)
if not parent_info:
self.__synced_files_tree.add_deleted_item(DummyItem(parent_id))
else:
current_parent_dir = Directory(parent_info[1], parent_info[2], parent_info[3], parent_info[4],
parent_info[5], parent_info[6], parent_info[7], parent_info[8],
parent_info[9], get_item_properties(self.__profile_path,
parent_id), parent_info[3],
parent_info[10])
orphan_dirs[parent_id] = current_parent_dir
for child_id in childs_ids:
child_info = get_item_info(self.__profile_path, child_id)
child_properties = get_item_properties(self.__profile_path, child_id)
if not child_info:
self.__synced_files_tree.add_deleted_item(DummyItem(child_id))
continue
if child_info[0] == 0:
content_cache_path = get_file_content_cache_path(
child_properties.get('content-entry', None), content_caches_paths)
child_file = File(child_info[1], child_info[2], child_info[3], child_info[4], child_info[5],
child_info[6], child_info[7], child_info[8], child_info[9], child_properties,
f'{current_parent_dir.tree_path}\\{child_info[3]}', content_cache_path,
child_info[10])
current_parent_dir.add_item(child_file)
if content_cache_path:
self.__synced_files_tree.add_recoverable_item_from_cache(child_file)
else:
if child_info[4] == 'application/vnd.google-apps.shortcut':
target_stable_id = get_target_stable_id(self.__profile_path, child_info[1])
if target_stable_id:
target = orphan_dirs.get(target_stable_id, None)
if target:
added_dirs[target_stable_id] = target
del orphan_dirs[target_stable_id]
else:
target_info = get_item_info(self.__profile_path, target_stable_id)
if target_info:
if target_info[0] == 0:
content_cache_path = get_file_content_cache_path(
child_properties.get('content-entry', None), content_caches_paths)
target = File(target_info[1], target_info[2], target_info[3], target_info[4],
target_info[5], target_info[6], target_info[7], target_info[8],
target_info[9],
get_item_properties(self.__profile_path, target_info[1]),
f'{current_parent_dir.tree_path}\\{target_info[3]}',
content_cache_path, target_info[10])
else:
target = Directory(target_info[1], target_info[2], target_info[3],
target_info[4], target_info[5], target_info[6],
target_info[7], target_info[8], target_info[9],
get_item_properties(self.__profile_path, target_info[1]),
f'{current_parent_dir.tree_path}\\{target_info[3]}',
target_info[10])
added_dirs[target_stable_id] = target
else:
target = DummyItem(target_stable_id)
self.__synced_files_tree.add_deleted_item(target)
child = Link(child_info[1], child_info[2], child_info[3], child_info[4], child_info[5],
child_info[6], child_info[7], child_info[8], child_info[9], child_properties,
f'{current_parent_dir.tree_path}\\{child_info[3]}', target, child_info[10])
else:
target = DummyItem('-1')
child = Link(child_info[1], child_info[2], child_info[3], child_info[4], child_info[5],
child_info[6], child_info[7], child_info[8], child_info[9], child_properties,
f'{current_parent_dir.tree_path}\\{child_info[3]}', target, child_info[10])
else:
| for account_id, account_info in get_accounts(drivefs_path).items(): | 21 | 2023-10-29 11:05:04+00:00 | 12k |