| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def resize_crop(video: torch.Tensor, oh: int, ow: int):
"""
Resize, center crop and normalize for decord loaded video (torch.Tensor type)
Parameters:
video - video to process (torch.Tensor): Tensor from `reader.get_batch(frame_ids)`, in shape of (T, H, W, C)
oh - target heig... |
Resize, center crop and normalize for decord loaded video (torch.Tensor type)
Parameters:
video - video to process (torch.Tensor): Tensor from `reader.get_batch(frame_ids)`, in shape of (T, H, W, C)
oh - target height (int)
ow - target width (int)
Returns:
... | resize_crop | python | ali-vilab/VACE | vace/models/utils/preprocessor.py | https://github.com/ali-vilab/VACE/blob/master/vace/models/utils/preprocessor.py | Apache-2.0 |
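The `resize_crop` body above is truncated. As a minimal pure-Python sketch of the resize-then-center-crop geometry it describes (the helper name `resize_crop_params` is hypothetical; a real implementation would apply the resize and slice to the tensor itself):

```python
def resize_crop_params(h, w, oh, ow):
    """Compute the resize size and center-crop offsets that take an (h, w)
    frame to exactly (oh, ow): scale so the frame covers the target, then crop."""
    scale = max(oh / h, ow / w)          # cover the target on both axes
    rh, rw = round(h * scale), round(w * scale)
    top = (rh - oh) // 2                 # center-crop offsets into the resized frame
    left = (rw - ow) // 2
    return rh, rw, top, left
```

Given those parameters, the tensor version would resize to `(rh, rw)` and slice `[..., top:top + oh, left:left + ow]` before normalizing.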
def __init__(
self,
config,
checkpoint_dir,
device_id=0,
rank=0,
t5_fsdp=False,
dit_fsdp=False,
use_usp=False,
t5_cpu=False,
):
r"""
Initializes the Wan text-to-video generation model components.
Args:
confi... |
Initializes the Wan text-to-video generation model components.
Args:
config (EasyDict):
Object containing model parameters initialized from config.py
checkpoint_dir (`str`):
Path to directory containing model checkpoints
device_id (`i... | __init__ | python | ali-vilab/VACE | vace/models/wan/wan_vace.py | https://github.com/ali-vilab/VACE/blob/master/vace/models/wan/wan_vace.py | Apache-2.0 |
def generate(self,
input_prompt,
input_frames,
input_masks,
input_ref_images,
size=(1280, 720),
frame_num=81,
context_scale=1.0,
shift=5.0,
sample_solver='unipc',
... |
Generates video frames from text prompt using diffusion process.
Args:
input_prompt (`str`):
Text prompt for content generation
            size (tuple[`int`], *optional*, defaults to (1280,720)):
Controls video resolution, (width,height).
frame... | generate | python | ali-vilab/VACE | vace/models/wan/wan_vace.py | https://github.com/ali-vilab/VACE/blob/master/vace/models/wan/wan_vace.py | Apache-2.0 |
def usp_dit_forward(
self,
x,
t,
vace_context,
context,
seq_len,
vace_context_scale=1.0,
clip_fea=None,
y=None,
):
"""
x: A list of videos each with shape [C, T, H, W].
t: [B].
context: A list of text embeddings each with shape [L, C].... |
x: A list of videos each with shape [C, T, H, W].
t: [B].
context: A list of text embeddings each with shape [L, C].
| usp_dit_forward | python | ali-vilab/VACE | vace/models/wan/distributed/xdit_context_parallel.py | https://github.com/ali-vilab/VACE/blob/master/vace/models/wan/distributed/xdit_context_parallel.py | Apache-2.0 |
def forward(
self,
x,
t,
vace_context,
context,
seq_len,
vace_context_scale=1.0,
clip_fea=None,
y=None,
):
r"""
Forward pass through the diffusion model
Args:
x (List[Tensor]):
List of input ... |
Forward pass through the diffusion model
Args:
x (List[Tensor]):
List of input video tensors, each with shape [C_in, F, H, W]
t (Tensor):
Diffusion timesteps tensor of shape [B]
context (List[Tensor]):
List of text emb... | forward | python | ali-vilab/VACE | vace/models/wan/modules/model.py | https://github.com/ali-vilab/VACE/blob/master/vace/models/wan/modules/model.py | Apache-2.0 |
def get_html_video_template(file_url_path, file_name, width="auto", height="auto"):
"""
Generate an HTML code snippet for embedding and downloading a video.
Parameters:
file_url_path (str): The URL or path to the video file.
file_name (str): The name of the video file.
w... |
Generate an HTML code snippet for embedding and downloading a video.
Parameters:
file_url_path (str): The URL or path to the video file.
file_name (str): The name of the video file.
width (str, optional): The width of the video. Defaults to "auto".
height (str, optional... | get_html_video_template | python | RayVentura/ShortGPT | gui/ui_components_html.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_components_html.py | MIT |
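The template body is elided in the row above. A hedged reconstruction of what such a snippet typically looks like (the exact markup is my guess, not the source's):

```python
def get_html_video_template(file_url_path, file_name, width="auto", height="auto"):
    """Return an HTML snippet that embeds a video and offers a download link."""
    return f'''
    <div style="display: flex; flex-direction: column; align-items: center;">
      <video width="{width}" height="{height}" controls>
        <source src="{file_url_path}" type="video/mp4">
        Your browser does not support the video tag.
      </video>
      <a href="{file_url_path}" download="{file_name}">
        <button>Download Video</button>
      </a>
    </div>
    '''
```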
def __verify_and_add_youtube_asset(self, asset_name, yt_url, type):
'''Verify and add a youtube asset to the database'''
self.__validate_asset_name(asset_name)
self.__validate_youtube_url(yt_url)
return self.__add_youtube_asset(asset_name, yt_url, type) | Verify and add a youtube asset to the database | __verify_and_add_youtube_asset | python | RayVentura/ShortGPT | gui/ui_tab_asset_library.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_asset_library.py | MIT |
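`__validate_youtube_url` is not shown in the row above. A plausible stdlib-only check (the function name and accepted hosts are assumptions, not from the source):

```python
from urllib.parse import urlparse, parse_qs

def is_valid_youtube_url(url):
    """True for watch URLs on youtube.com hosts and for youtu.be short links."""
    parsed = urlparse(url)
    if parsed.netloc in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        return "v" in parse_qs(parsed.query)   # needs a ?v=<id> parameter
    if parsed.netloc == "youtu.be":
        return len(parsed.path) > 1            # path carries the video id
    return False
```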
def __get_asset_embed(self, data, row):
'''Get the embed html for the asset at the given row'''
embed_height = 300
embed_width = 300
asset_link = data.iloc[row]['link']
embed_html = ''
if 'youtube.com' in asset_link:
asset_link_split = asset_link.split('?v=')
... | Get the embed html for the asset at the given row | __get_asset_embed | python | RayVentura/ShortGPT | gui/ui_tab_asset_library.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_asset_library.py | MIT |
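The row shows the link being split on `'?v='` before the code is cut off. The usual next step is to build an iframe embed from the extracted video id; a sketch under that assumption (helper name `youtube_embed_html` is mine):

```python
def youtube_embed_html(asset_link, width=300, height=300):
    """Turn a YouTube watch link into an <iframe> embed snippet."""
    video_id = asset_link.split('?v=')[-1].split('&')[0]  # drop extra query params
    return (f'<iframe width="{width}" height="{height}" '
            f'src="https://www.youtube.com/embed/{video_id}" frameborder="0" '
            f'allowfullscreen></iframe>')
```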
def __verify_and_upload_local_asset(self, upload_type, upload_name, video_path, audio_path, image_path):
'''Verify and upload a local asset to the database'''
self.__validate_asset_name(upload_name)
path_dict = {
AssetType.VIDEO.value: video_path,
AssetType.BACKGROUND_VID... | Verify and upload a local asset to the database | __verify_and_upload_local_asset | python | RayVentura/ShortGPT | gui/ui_tab_asset_library.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_asset_library.py | MIT |
def on_show(self, button_text, textbox, button):
'''Show or hide the API key'''
if button_text == "Show":
return gr.update(type="text"), gr.update(value="Hide")
return gr.update(type="password"), gr.update(value="Show") | Show or hide the API key | on_show | python | RayVentura/ShortGPT | gui/ui_tab_config.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_config.py | MIT |
def save_keys(self, openai_key, eleven_key, pexels_key, gemini_key):
'''Save the keys in the database'''
if (self.api_key_manager.get_api_key("OPENAI_API_KEY") != openai_key):
self.api_key_manager.set_api_key("OPENAI_API_KEY", openai_key)
if (self.api_key_manager.get_api_key("PEXELS_... | Save the keys in the database | save_keys | python | RayVentura/ShortGPT | gui/ui_tab_config.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_config.py | MIT |
def get_eleven_remaining(self,):
'''Get the remaining characters from ElevenLabs API'''
if (self.eleven_labs_api):
try:
return self.eleven_labs_api.get_remaining_characters()
except Exception as e:
return e.args[0]
return "" | Get the remaining characters from ElevenLabs API | get_eleven_remaining | python | RayVentura/ShortGPT | gui/ui_tab_config.py | https://github.com/RayVentura/ShortGPT/blob/master/gui/ui_tab_config.py | MIT |
def get_voices(self):
'''Get the list of voices available'''
url = self.url_base + 'voices'
headers = {'accept': 'application/json'}
if self.api_key:
headers['xi-api-key'] = self.api_key
response = requests.get(url, headers=headers)
self.voices = {voice['name'... | Get the list of voices available | get_voices | python | RayVentura/ShortGPT | shortGPT/api_utils/eleven_api.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/api_utils/eleven_api.py | MIT |
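The row shows the request setup and a name-to-id dict comprehension. A testable sketch that separates request construction from response parsing (the base URL is the public ElevenLabs v1 endpoint; the helper names are mine, not the repo's):

```python
def build_voices_request(api_key=None, url_base="https://api.elevenlabs.io/v1/"):
    """Build the URL and headers for the ElevenLabs /voices endpoint."""
    url = url_base + "voices"
    headers = {"accept": "application/json"}
    if api_key:
        headers["xi-api-key"] = api_key
    return url, headers

def parse_voices(payload):
    """Map voice name -> voice_id, mirroring the dict comprehension in the row."""
    return {voice["name"]: voice["voice_id"] for voice in payload["voices"]}
```

The real method would pass these to `requests.get(url, headers=headers)` and feed `response.json()` into the parser.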
def get_remaining_characters(self):
'''Get the number of characters remaining'''
url = self.url_base + 'user'
headers = {'accept': '*/*', 'xi-api-key': self.api_key, 'Content-Type': 'application/json'}
response = requests.get(url, headers=headers)
if response.status_code == 200:... | Get the number of characters remaining | get_remaining_characters | python | RayVentura/ShortGPT | shortGPT/api_utils/eleven_api.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/api_utils/eleven_api.py | MIT |
def sync_local_assets(cls):
"""
Loads all local assets from the static-assets folder into the database.
"""
local_assets = cls.local_assets._get()
local_paths = {asset['path'] for asset in local_assets.values()}
for path in Path('public').rglob('*'):
if path.... |
Loads all local assets from the static-assets folder into the database.
| sync_local_assets | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def get_asset_link(cls, key: str) -> str:
"""
Get the link to an asset.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
"""
if key in cls.local_assets._get():
return cls._update_local_asset_timestamp_and_get_link(... |
Get the link to an asset.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
| get_asset_link | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def get_asset_duration(cls, key: str) -> str:
"""
Get the duration of an asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
"""
if key in cls.local_assets._get():
return cls._get_local_asset_duration(key)
... |
Get the duration of an asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
| get_asset_duration | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _remove_local_asset(cls, name: str):
"""
Remove a local asset from the database.
Args:
name (str): Name of the asset.
"""
asset = cls.local_assets._get(name)
if 'required' not in asset:
try:
Path(asset['path']).unlink()
... |
Remove a local asset from the database.
Args:
name (str): Name of the asset.
| _remove_local_asset | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _add_local_asset_from_path(cls, path: Path):
"""
Add a local asset to the database from a file path.
Args:
path (Path): Path to the asset.
"""
file_ext = path.suffix
if file_ext in AUDIO_EXTENSIONS:
asset_type = AssetType.AUDIO
elif fi... |
Add a local asset to the database from a file path.
Args:
path (Path): Path to the asset.
| _add_local_asset_from_path | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _update_local_asset_timestamp_and_get_link(cls, key: str) -> str:
"""
Update the timestamp of a local asset and get its link.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
"""
asset = cls.local_assets._get(key)
... |
Update the timestamp of a local asset and get its link.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
| _update_local_asset_timestamp_and_get_link | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _get_remote_asset_link(cls, key: str) -> str:
"""
Get the link to a remote asset.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
"""
asset = cls.remote_assets._get(key)
asset['ts'] = datetime.now().strftime("%Y-%... |
Get the link to a remote asset.
Args:
key (str): Name of the asset.
Returns:
str: Link to the asset.
| _get_remote_asset_link | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _get_local_asset_duration(cls, key: str) -> str:
"""
Get the duration of a local asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
"""
asset = cls.local_assets._get(key)
asset['ts'] = datetime.now().strft... |
Get the duration of a local asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
| _get_local_asset_duration | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _get_remote_asset_duration(cls, key: str) -> str:
"""
Get the duration of a remote asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
"""
asset = cls.remote_assets._get(key)
asset['ts'] = datetime.now().st... |
Get the duration of a remote asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
| _get_remote_asset_duration | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _update_local_asset_duration(cls, key: str) -> str:
"""
Update the duration of a local asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
"""
asset = cls.local_assets._get(key)
path = Path(asset['path'])
... |
Update the duration of a local asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
| _update_local_asset_duration | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _update_youtube_asset_duration(cls, key: str) -> str:
"""
Update the duration of a Youtube asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
"""
asset = cls.remote_assets._get(key)
youtube_url = asset['ur... |
Update the duration of a Youtube asset.
Args:
key (str): Name of the asset.
Returns:
str: Duration of the asset.
| _update_youtube_asset_duration | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def _get_youtube_asset_link(cls, key: str, asset: dict) -> str:
"""
Get the link to a Youtube asset.
Args:
key (str): Name of the asset.
asset (dict): Asset data.
Returns:
str: Link to the asset.
"""
if any(t in asset['type'] for t in... |
Get the link to a Youtube asset.
Args:
key (str): Name of the asset.
asset (dict): Asset data.
Returns:
str: Link to the asset.
| _get_youtube_asset_link | python | RayVentura/ShortGPT | shortGPT/config/asset_db.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/asset_db.py | MIT |
def read_yaml_config(file_path: str) -> dict:
"""Reads and returns the contents of a YAML file as dictionary"""
with open(file_path, 'r') as file:
contents = yaml.safe_load(file)
return contents | Reads and returns the contents of a YAML file as dictionary | read_yaml_config | python | RayVentura/ShortGPT | shortGPT/config/config.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/config.py | MIT |
def load_editing_assets() -> dict:
"""Loads all local assets from the static-assets folder specified in the yaml_config"""
yaml_config = read_yaml_config("public.yaml")
    if yaml_config['local-assets'] is None:
yaml_config['local-assets'] = {}
# Create a copy of the dictionary before iterating ove... | Loads all local assets from the static-assets folder specified in the yaml_config | load_editing_assets | python | RayVentura/ShortGPT | shortGPT/config/config.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/config/config.py | MIT |
def extract_random_clip_from_video(video_url, video_duration, clip_duration, output_file):
"""Extracts a clip from a video using a signed URL.
Args:
video_url (str): The signed URL of the video.
        video_duration (int): Duration of the video.
start_time (int): The start time of the clip in secon... | Extracts a clip from a video using a signed URL.
Args:
video_url (str): The signed URL of the video.
        video_duration (int): Duration of the video.
start_time (int): The start time of the clip in seconds.
clip_duration (int): The duration of the clip in seconds.
output_file (str): T... | extract_random_clip_from_video | python | RayVentura/ShortGPT | shortGPT/editing_utils/handle_videos.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/editing_utils/handle_videos.py | MIT |
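The row's body is elided. One common way to implement it is to pick a random start offset and shell out to ffmpeg; the sketch below only builds the command (flag choices such as `-c copy`, and the helper name, are my assumptions):

```python
import random

def build_clip_command(video_url, video_duration, clip_duration, output_file, seed=None):
    """Build an ffmpeg command that extracts a random clip from a video URL."""
    rng = random.Random(seed)
    start_time = rng.uniform(0, max(0, video_duration - clip_duration))
    return [
        "ffmpeg", "-y",
        "-ss", f"{start_time:.2f}",  # seek before -i for fast keyframe seeking
        "-i", video_url,
        "-t", str(clip_duration),
        "-c", "copy",                # stream copy: no re-encode
        output_file,
    ]
```

The command list can then be run with `subprocess.run(cmd, check=True)`.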
def _generateScript(self):
"""
Implements Abstract parent method to generate the script for the reddit short
"""
self.logger("Generating reddit question & entertaining story")
self._db_script, _ = self.__getRealisticStory(max_tries=1)
self._db_reddit_question = reddit_gpt... |
Implements Abstract parent method to generate the script for the reddit short
| _generateScript | python | RayVentura/ShortGPT | shortGPT/engine/reddit_short_engine.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/engine/reddit_short_engine.py | MIT |
def _prepareCustomAssets(self):
"""
Override parent method to generate custom reddit image asset
"""
self.logger("Rendering short: (3/4) preparing custom reddit image...")
self.verifyParameters(question=self._db_reddit_question,)
title, header, n_comments, n_upvotes = red... |
Override parent method to generate custom reddit image asset
| _prepareCustomAssets | python | RayVentura/ShortGPT | shortGPT/engine/reddit_short_engine.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/engine/reddit_short_engine.py | MIT |
def _editAndRenderShort(self):
"""
Override parent method to customize video rendering sequence by adding a Reddit image
"""
self.verifyParameters(
voiceover_audio_url=self._db_audio_path,
video_duration=self._db_background_vide... |
Override parent method to customize video rendering sequence by adding a Reddit image
| _editAndRenderShort | python | RayVentura/ShortGPT | shortGPT/engine/reddit_short_engine.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/engine/reddit_short_engine.py | MIT |
def getVideoSearchQueriesTimed(captions_timed):
"""
Generate timed video search queries based on caption timings.
Returns list of [time_range, search_queries] pairs.
"""
err = ""
for _ in range(4):
try:
# Get total video duration from last caption
end_time = capt... |
Generate timed video search queries based on caption timings.
Returns list of [time_range, search_queries] pairs.
| getVideoSearchQueriesTimed | python | RayVentura/ShortGPT | shortGPT/gpt/gpt_editing.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/gpt/gpt_editing.py | MIT |
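The row shows a `for _ in range(4): try/except` retry loop wrapped around the GPT call. The same pattern as a reusable helper (the name `with_retries` is hypothetical):

```python
def with_retries(fn, attempts=4):
    """Call fn() up to `attempts` times, returning its first successful result."""
    err = ""
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            err = str(e)  # keep the last error for the final report
    raise RuntimeError(f"all {attempts} attempts failed: {err}")
```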
def num_tokens_from_messages(texts, model="gpt-4o-mini"):
"""Returns the number of tokens used by a list of messages."""
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-4o-mini": # note: future models m... | Returns the number of tokens used by a list of messages. | num_tokens_from_messages | python | RayVentura/ShortGPT | shortGPT/gpt/gpt_utils.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/gpt/gpt_utils.py | MIT |
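The `num_tokens_from_messages` row is cut off. A hedged sketch of the same pattern: try the model-specific encoding, fall back to `cl100k_base` on `KeyError`; the chars/4 heuristic for when tiktoken is absent is my addition, not in the source:

```python
def estimate_tokens(texts, model="gpt-4o-mini"):
    """Count tokens with tiktoken when available; otherwise use a rough
    ~4-characters-per-token heuristic for English text."""
    try:
        import tiktoken
        try:
            encoding = tiktoken.encoding_for_model(model)
        except KeyError:
            encoding = tiktoken.get_encoding("cl100k_base")
        return sum(len(encoding.encode(t)) for t in texts)
    except ImportError:
        return sum(max(1, len(t) // 4) for t in texts)
```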
def display_header():
'''Display the header of the CLI'''
CLI.display_green_text('''
.d88888b dP dP .88888. 888888ba d888888P .88888. 888888ba d888888P
88. "' 88 88 d8' `8b 88 `8b 88 d8' `88 88 `8b 88
`Y88888b. 88aaaaa88 88 88 88aaaa8P' 88 88 ... | Display the header of the CLI | display_header | python | RayVentura/ShortGPT | shortGPT/utils/cli.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/cli.py | MIT |
def display_requirements_check():
'''Display information about the system and requirements'''
print("Checking requirements...")
requirements_manager = Requirements()
print(" - Requirements : List of requirements and installed version:")
all_req_versions = requirements_manager.get... | Display information about the system and requirements | display_requirements_check | python | RayVentura/ShortGPT | shortGPT/utils/cli.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/cli.py | MIT |
def display_error(error_message, stack_trace):
'''Display an error message in the console'''
print(CLI.bcolors.FAIL + "ERROR : " + error_message + CLI.bcolors.ENDC)
print(stack_trace)
print("If the problem persists, don't hesitate to contact our support. We're here to assist you.")
... | Display an error message in the console | display_error | python | RayVentura/ShortGPT | shortGPT/utils/cli.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/cli.py | MIT |
def get_list_requirements(self):
'''Get the list of requirements packages from requirements.txt'''
with open(self.requirements_path) as f:
requirements = f.read().splitlines()
# remove comments and empty lines
requirements = [line for line in requirements if not line.startsw... | Get the list of requirements packages from requirements.txt | get_list_requirements | python | RayVentura/ShortGPT | shortGPT/utils/requirements.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/requirements.py | MIT |
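The row's comment says comments and empty lines are removed. A self-contained variant that parses the file's text directly (the name `parse_requirements` and the `==` pin-stripping are assumptions):

```python
def parse_requirements(text):
    """Parse requirements.txt content into bare package names."""
    lines = text.splitlines()
    # drop blank lines and comments
    reqs = [line.strip() for line in lines
            if line.strip() and not line.lstrip().startswith("#")]
    # strip exact-version pins like "requests==2.31.0"
    return [line.split("==")[0].strip() for line in reqs]
```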
def is_all_requirements_installed(self):
'''Check if all requirements are installed'''
requirements = self.get_list_requirements()
for requirement in requirements:
if not self.is_requirement_installed(requirement):
return False
return True | Check if all requirements are installed | is_all_requirements_installed | python | RayVentura/ShortGPT | shortGPT/utils/requirements.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/requirements.py | MIT |
def get_all_requirements_versions(self):
'''Get the versions of all requirements'''
requirements = self.get_list_requirements()
versions = {}
for requirement in requirements:
versions[requirement] = self.get_version(requirement)
return versions | Get the versions of all requirements | get_all_requirements_versions | python | RayVentura/ShortGPT | shortGPT/utils/requirements.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/requirements.py | MIT |
def get_all_requirements_not_installed(self):
'''Get the list of all requirements not installed'''
requirements = self.get_list_requirements()
not_installed = {}
for requirement in requirements:
# if version is None then the package is not installed
if self.get_ve... | Get the list of all requirements not installed | get_all_requirements_not_installed | python | RayVentura/ShortGPT | shortGPT/utils/requirements.py | https://github.com/RayVentura/ShortGPT/blob/master/shortGPT/utils/requirements.py | MIT |
def validate_user(username, minlen):
"""Checks if the received username matches the required conditions."""
if type(username) != str:
raise TypeError("username must be a string")
if minlen < 1:
raise ValueError("minlen must be at least 1")
# Usernames can't be shorter than minlen
... | Checks if the received username matches the required conditions. | validate_user | python | google/it-cert-automation-practice | Course3/Lab4/validations.py | https://github.com/google/it-cert-automation-practice/blob/master/Course3/Lab4/validations.py | Apache-2.0 |
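The row truncates mid-way through the checks. A plausible completion of this well-known exercise (the character class and first-letter rule are my reading of the comments, not guaranteed to match the repo):

```python
import re

def validate_user(username, minlen):
    """Checks if the received username matches the required conditions."""
    if type(username) != str:
        raise TypeError("username must be a string")
    if minlen < 1:
        raise ValueError("minlen must be at least 1")
    # Usernames can't be shorter than minlen
    if len(username) < minlen:
        return False
    # Usernames can only use lowercase letters, digits, dots and underscores
    if not re.match('^[a-z0-9._]*$', username):
        return False
    # Usernames can't begin with a number
    if not username[0].isalpha():
        return False
    return True
```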
def __getitem__(self, idx):
"""
Output:
- target: dict of multiple items
            - boxes: Tensor[num_box, 4].
                Init type: x0,y0,x1,y1. unnormalized data.
                Final type: cx,cy,w,h. normalized data.
"""
try:
img, target... |
Output:
- target: dict of multiple items
- boxes: Tensor[num_box, 4]. Init type: x0,y0,x1,y1. unnormalized data.
Final type: cx,cy,w,h. normalized data.
| __getitem__ | python | IDEA-Research/DINO | datasets/coco.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/coco.py | Apache-2.0 |
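The docstring describes converting boxes from unnormalized `x0,y0,x1,y1` to normalized `cx,cy,w,h`. That conversion, written out in plain Python for a single box (the DINO code does the same thing with tensor ops):

```python
def xyxy_to_cxcywh_norm(box, img_w, img_h):
    """Convert an unnormalized x0,y0,x1,y1 box to normalized cx,cy,w,h."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2 / img_w
    cy = (y0 + y1) / 2 / img_h
    w = (x1 - x0) / img_w
    h = (y1 - y0) / img_h
    return [cx, cy, w, h]
```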
def evaluate(self):
'''
Run per image evaluation on given images and store results (a list of dict) in self.evalImgs
:return: None
'''
p = self.params
# add backward compatibility if useSegm is specified in params
if p.useSegm is not None:
p.iouType = 'segm' if p.useSegm == 1 else 'b... |
Run per image evaluation on given images and store results (a list of dict) in self.evalImgs
:return: None
| evaluate | python | IDEA-Research/DINO | datasets/coco_eval.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/coco_eval.py | Apache-2.0 |
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
tuple: (image, target) where target is class_index of the target class.
"""
row = self.tsv.seek(index)
image_data = base64.b64decode(row[-1])
image = Image.open(io.BytesIO(... |
Args:
index (int): Index
Returns:
tuple: (image, target) where target is class_index of the target class.
| __getitem__ | python | IDEA-Research/DINO | datasets/dataset.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/dataset.py | Apache-2.0 |
def slcopytree(src, dst, symlinks=False, ignore=None, copy_function=shutil.copyfile,
ignore_dangling_symlinks=False):
"""
modified from shutil.copytree without copystat.
Recursively copy a directory tree.
The destination directory must not already exist.
If exception(s) occur, an ... |
modified from shutil.copytree without copystat.
Recursively copy a directory tree.
The destination directory must not already exist.
If exception(s) occur, an Error is raised with a list of reasons.
If the optional symlinks flag is true, symbolic links in the
source tree result in symbol... | slcopytree | python | IDEA-Research/DINO | datasets/data_util.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/data_util.py | Apache-2.0 |
def intersect(boxes1, boxes2):
'''
Find intersection of every box combination between two sets of box
boxes1: bounding boxes 1, a tensor of dimensions (n1, 4)
boxes2: bounding boxes 2, a tensor of dimensions (n2, 4)
Out: Intersection each of boxes1 with respect to each of bo... |
Find intersection of every box combination between two sets of box
boxes1: bounding boxes 1, a tensor of dimensions (n1, 4)
boxes2: bounding boxes 2, a tensor of dimensions (n2, 4)
Out: Intersection each of boxes1 with respect to each of boxes2,
a tensor of dimens... | intersect | python | IDEA-Research/DINO | datasets/random_crop.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/random_crop.py | Apache-2.0 |
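The DINO version computes this with broadcast tensor ops; the same pairwise-intersection logic in pure Python, so the clamping math is explicit:

```python
def intersect(boxes1, boxes2):
    """out[i][j] = overlap area of boxes1[i] and boxes2[j], boxes as x0,y0,x1,y1."""
    out = []
    for a in boxes1:
        row = []
        for b in boxes2:
            # clamp negative extents to zero: disjoint boxes have no overlap
            iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
            ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
            row.append(iw * ih)
        out.append(row)
    return out
```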
def random_crop(image, boxes, labels, difficulties=None):
'''
image: A PIL image
boxes: Bounding boxes, a tensor of dimensions (#objects, 4)
labels: labels of object, a tensor of dimensions (#objects)
difficulties: difficulties of detect object, a tensor of dimensions (#objects)
... |
image: A PIL image
boxes: Bounding boxes, a tensor of dimensions (#objects, 4)
labels: labels of object, a tensor of dimensions (#objects)
difficulties: difficulties of detect object, a tensor of dimensions (#objects)
Out: cropped image , new boxes, new labels, new diff... | random_crop | python | IDEA-Research/DINO | datasets/random_crop.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/random_crop.py | Apache-2.0 |
def __call__(self, img, target):
"""
img (PIL Image or Tensor): Image to be adjusted.
"""
_contrast_factor = ((random.random() + 1.0) / 2.0) * self.contrast_factor
img = F.adjust_contrast(img, _contrast_factor)
return img, target |
img (PIL Image or Tensor): Image to be adjusted.
| __call__ | python | IDEA-Research/DINO | datasets/sltransform.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/sltransform.py | Apache-2.0 |
def __call__(self, img, target):
"""
img (PIL Image or Tensor): Image to be adjusted.
"""
_brightness_factor = ((random.random() + 1.0) / 2.0) * self.brightness_factor
img = F.adjust_brightness(img, _brightness_factor)
return img, target |
img (PIL Image or Tensor): Image to be adjusted.
| __call__ | python | IDEA-Research/DINO | datasets/sltransform.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/sltransform.py | Apache-2.0 |
def lighting_noise(image):
'''
color channel swap in image
image: A PIL image
'''
new_image = image
perms = ((0, 1, 2), (0, 2, 1), (1, 0, 2),
(1, 2, 0), (2, 0, 1), (2, 1, 0))
swap = perms[random.randint(0, len(perms)- 1)]
new_image = F.to_tensor(new_image)
new_i... |
color channel swap in image
image: A PIL image
| lighting_noise | python | IDEA-Research/DINO | datasets/sltransform.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/sltransform.py | Apache-2.0 |
def rotate(image, boxes, angle):
'''
Rotate image and bounding box
image: A Pil image (w, h)
boxes: A tensors of dimensions (#objects, 4)
Out: rotated image (w, h), rotated boxes
'''
new_image = image.copy()
new_boxes = boxes.clone()
#Rotate image, expan... |
Rotate image and bounding box
image: A Pil image (w, h)
boxes: A tensors of dimensions (#objects, 4)
Out: rotated image (w, h), rotated boxes
| rotate | python | IDEA-Research/DINO | datasets/sltransform.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/sltransform.py | Apache-2.0 |
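The row's rotation code is truncated. A sketch of the corner-rotation step it implies: rotate the four corners of an `x0,y0,x1,y1` box about a center, then take the axis-aligned bounding box of the result (the helper name `rotate_box` is hypothetical):

```python
import math

def rotate_box(box, angle_deg, cx, cy):
    """Rotate a box's corners about (cx, cy) and return their axis-aligned bbox."""
    x0, y0, x1, y1 = box
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    rotated = []
    for x, y in corners:
        dx, dy = x - cx, y - cy
        rotated.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return [min(xs), min(ys), max(xs), max(ys)]
```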
def __call__(self, img, target, p=1.0):
"""
Input:
target['boxes']: xyxy, unnormalized data.
"""
boxes_raw = target['boxes']
labels_raw = target['labels']
img_np = np.array(img)
if self.transform and random.random() < p:
new_res = ... |
Input:
target['boxes']: xyxy, unnormalized data.
| __call__ | python | IDEA-Research/DINO | datasets/sltransform.py | https://github.com/IDEA-Research/DINO/blob/master/datasets/sltransform.py | Apache-2.0 |
def register(self, module_build_function, module_name=None, force=False):
"""Register a module build function.
Args:
module (:obj:`nn.Module`): Module to be registered.
"""
if not inspect.isfunction(module_build_function):
raise TypeError('module_build_function mu... | Register a module build function.
Args:
module (:obj:`nn.Module`): Module to be registered.
| register | python | IDEA-Research/DINO | models/registry.py | https://github.com/IDEA-Research/DINO/blob/master/models/registry.py | Apache-2.0 |
def forward(self, query, key, value, key_padding_mask=None,
need_weights=True, attn_mask=None):
# type: (Tensor, Tensor, Tensor, Optional[Tensor], bool, Optional[Tensor]) -> Tuple[Tensor, Optional[Tensor]]
r"""
Args:
query, key, value: map a query and a set of key-value pairs... |
Args:
query, key, value: map a query and a set of key-value pairs to an output.
See "Attention Is All You Need" for more details.
key_padding_mask: if provided, specified padding elements in the key will
be ignored by the attention. When given a binary mask and a value is Tr... | forward | python | IDEA-Research/DINO | models/dino/attention.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/attention.py | Apache-2.0 |
def multi_head_attention_forward(query: Tensor,
key: Tensor,
value: Tensor,
embed_dim_to_check: int,
num_heads: int,
in_proj_weight: Tensor,
... |
Args:
query, key, value: map a query and a set of key-value pairs to an output.
See "Attention Is All You Need" for more details.
embed_dim_to_check: total dimension of the model.
num_heads: parallel attention heads.
in_proj_weight, in_proj_bias: input projection weight ... | multi_head_attention_forward | python | IDEA-Research/DINO | models/dino/attention.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/attention.py | Apache-2.0 |
def build_backbone(args):
"""
Useful args:
- backbone: backbone name
- lr_backbone:
- dilation
- return_interm_indices: available: [0,1,2,3], [1,2,3], [3]
- backbone_freeze_keywords:
- use_checkpoint: for swin only for now
"""
position_embedding = build... |
Useful args:
- backbone: backbone name
- lr_backbone:
- dilation
- return_interm_indices: available: [0,1,2,3], [1,2,3], [3]
- backbone_freeze_keywords:
- use_checkpoint: for swin only for now
| build_backbone | python | IDEA-Research/DINO | models/dino/backbone.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/backbone.py | Apache-2.0 |
def forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask=None):
"""
Input:
- srcs: List of multi features [bs, ci, hi, wi]
- masks: List of multi masks [bs, hi, wi]
- refpoint_embed: [bs, num_dn, 4]. None in infer
- pos_embeds: List of mul... |
Input:
- srcs: List of multi features [bs, ci, hi, wi]
- masks: List of multi masks [bs, hi, wi]
- refpoint_embed: [bs, num_dn, 4]. None in infer
- pos_embeds: List of multi pos embeds [bs, ci, hi, wi]
- tgt: [bs, num_dn, d_model]. None in infer
... | forward | python | IDEA-Research/DINO | models/dino/deformable_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/deformable_transformer.py | Apache-2.0 |
def forward(self,
src: Tensor,
pos: Tensor,
spatial_shapes: Tensor,
level_start_index: Tensor,
valid_ratios: Tensor,
key_padding_mask: Tensor,
ref_token_index: Optional[Tensor]=None,
ref_token_coord: Optional[Tensor]=N... |
Input:
- src: [bs, sum(hi*wi), 256]
- pos: pos embed for src. [bs, sum(hi*wi), 256]
- spatial_shapes: h,w of each level [num_level, 2]
- level_start_index: [num_level] start point of level in sum(hi*wi).
- valid_ratios: [bs, num_level, 2]
... | forward | python | IDEA-Research/DINO | models/dino/deformable_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/deformable_transformer.py | Apache-2.0 |
def forward(self, tgt, memory,
tgt_mask: Optional[Tensor] = None,
memory_mask: Optional[Tensor] = None,
tgt_key_padding_mask: Optional[Tensor] = None,
memory_key_padding_mask: Optional[Tensor] = None,
pos: Optional[Tensor] = None,
... |
Input:
- tgt: nq, bs, d_model
- memory: hw, bs, d_model
- pos: hw, bs, d_model
- refpoints_unsigmoid: nq, bs, 2/4
- valid_ratios/spatial_shapes: bs, nlevel, 2
| forward | python | IDEA-Research/DINO | models/dino/deformable_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/deformable_transformer.py | Apache-2.0 |
def __init__(self, backbone, transformer, num_classes, num_queries,
aux_loss=False, iter_update=False,
query_dim=2,
random_refpoints_xy=False,
fix_refpoints_hw=-1,
num_feature_levels=1,
nheads=8,
... | Initializes the model.
Parameters:
backbone: torch module of the backbone to be used. See backbone.py
transformer: torch module of the transformer architecture. See transformer.py
num_classes: number of object classes
num_queries: number of object queries, ie det... | __init__ | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
def forward(self, samples: NestedTensor, targets:List=None):
""" The forward expects a NestedTensor, which consists of:
- samples.tensor: batched images, of shape [batch_size x 3 x H x W]
- samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
... | The forward expects a NestedTensor, which consists of:
- samples.tensor: batched images, of shape [batch_size x 3 x H x W]
- samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
It returns a dict with the following elements:
... | forward | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
"""Classification loss (Binary focal loss)
targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
"""
assert 'pred_logits' in outputs
src_logits = outputs['pred_logits']
... | Classification loss (Binary focal loss)
targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
| loss_labels | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
def loss_cardinality(self, outputs, targets, indices, num_boxes):
""" Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes
This is not really a loss, it is intended for logging purposes only. It doesn't propagate gradients
"""
pred_logits = outp... | Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes
This is not really a loss, it is intended for logging purposes only. It doesn't propagate gradients
| loss_cardinality | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
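The `loss_cardinality` row above compares the number of predicted non-empty boxes against the ground-truth count, purely for logging. A minimal NumPy sketch of that logic (illustrative only; the repository's version operates on torch tensors, and the "last class index = no object" convention shown here is the original DETR one):

```python
import numpy as np

def cardinality_error(pred_logits, num_target_boxes):
    """L1 error between predicted non-empty boxes and ground-truth counts.

    pred_logits: (batch, num_queries, num_classes); the last class index is
    treated as "no object", following the original DETR convention.
    num_target_boxes: (batch,) ground-truth box counts per image.
    """
    no_object = pred_logits.shape[-1] - 1
    card_pred = (pred_logits.argmax(-1) != no_object).sum(axis=1)
    return float(np.abs(card_pred - num_target_boxes).mean())

# Two images, three queries, three classes (index 2 = "no object").
logits = np.array([
    [[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]],  # 2 non-empty predictions
    [[0.0, 0.0, 5.0], [0.0, 0.0, 5.0], [0.0, 0.0, 5.0]],  # 0 non-empty predictions
])
err = cardinality_error(logits, np.array([2, 1]))  # mean(|2-2|, |0-1|) = 0.5
```

As the docstring notes, no gradients flow through this metric; it only tracks how well the query count calibrates.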
def loss_boxes(self, outputs, targets, indices, num_boxes):
"""Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
The target boxes are expected in format (cent... | Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size.
| loss_boxes | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
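The `loss_boxes` row combines an L1 regression term with a GIoU term on matched pairs. A self-contained sketch of both terms for a single pair, assuming normalized (center_x, center_y, w, h) boxes as the docstring states (a NumPy stand-in, not the repo's torch code):

```python
import numpy as np

def box_cxcywh_to_xyxy(b):
    cx, cy, w, h = b
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

def generalized_iou(a, b):
    """GIoU of two (x0, y0, x1, y1) boxes: IoU - (enclosing - union) / enclosing."""
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    enc = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (enc - union) / enc

src = np.array([0.5, 0.5, 0.4, 0.4])  # predicted box, normalized cxcywh
tgt = np.array([0.5, 0.5, 0.4, 0.4])  # its matched target
l1_loss = np.abs(src - tgt).sum()
giou_loss = 1.0 - generalized_iou(box_cxcywh_to_xyxy(src), box_cxcywh_to_xyxy(tgt))
# a perfect match gives zero for both terms
```

The GIoU term penalizes non-overlapping predictions where plain IoU is flat at zero, which is why DETR-family losses pair it with L1.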
def loss_masks(self, outputs, targets, indices, num_boxes):
"""Compute the losses related to the masks: the focal loss and the dice loss.
targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
"""
assert "pred_masks" in outputs
src_idx =... | Compute the losses related to the masks: the focal loss and the dice loss.
targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
| loss_masks | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
def forward(self, outputs, targets, return_indices=False):
""" This performs the loss computation.
Parameters:
outputs: dict of tensors, see the output specification of the model for the format
targets: list of dicts, such that len(targets) == batch_size.
... | This performs the loss computation.
Parameters:
outputs: dict of tensors, see the output specification of the model for the format
targets: list of dicts, such that len(targets) == batch_size.
The expected keys in each dict depends on the losses applied, see each... | forward | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
def forward(self, outputs, target_sizes, not_to_xyxy=False, test=False):
""" Perform the computation
Parameters:
outputs: raw outputs of the model
            target_sizes: tensor of dimension [batch_size x 2] containing the size of each image of the batch
For eval... | Perform the computation
Parameters:
outputs: raw outputs of the model
            target_sizes: tensor of dimension [batch_size x 2] containing the size of each image of the batch
For evaluation, this must be the original image size (before any data augmentation)
... | forward | python | IDEA-Research/DINO | models/dino/dino.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dino.py | Apache-2.0 |
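The `PostProcess` row rescales normalized (cx, cy, w, h) predictions into absolute (x0, y0, x1, y1) coordinates using `target_sizes`. A sketch of that conversion, assuming the (height, width) ordering used in the COCO evaluation path:

```python
import numpy as np

def postprocess_boxes(boxes_cxcywh, target_size):
    """Convert normalized (cx, cy, w, h) boxes to absolute (x0, y0, x1, y1).

    target_size: (height, width) of the original image.
    """
    cx, cy, w, h = boxes_cxcywh.T
    xyxy = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    img_h, img_w = target_size
    return xyxy * np.array([img_w, img_h, img_w, img_h])

boxes = np.array([[0.5, 0.5, 0.2, 0.4]])
out = postprocess_boxes(boxes, (100, 200))  # -> [[80., 30., 120., 70.]]
```

Note the scale vector interleaves width and height because xyxy coordinates alternate x and y.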
def prepare_for_cdn(dn_args, training, num_queries, num_classes, hidden_dim, label_enc):
"""
    A major difference of DINO from DN-DETR is that the authors process pattern embeddings in the detector's
    forward function and use a learnable tgt embedding, so we change this function a little bi... |
    A major difference of DINO from DN-DETR is that the authors process pattern embeddings in the detector's
    forward function and use a learnable tgt embedding, so we change this function a little bit.
:param dn_args: targets, dn_number, label_noise_ratio, box_noise_scale
:param... | prepare_for_cdn | python | IDEA-Research/DINO | models/dino/dn_components.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dn_components.py | Apache-2.0 |
def dn_post_process(outputs_class, outputs_coord, dn_meta, aux_loss, _set_aux_loss):
"""
post process of dn after output from the transformer
put the dn part in the dn_meta
"""
if dn_meta and dn_meta['pad_size'] > 0:
output_known_class = outputs_class[:, :, :dn_meta['pad_size'], :]
... |
post process of dn after output from the transformer
put the dn part in the dn_meta
| dn_post_process | python | IDEA-Research/DINO | models/dino/dn_components.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/dn_components.py | Apache-2.0 |
def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_giou: float = 1, focal_alpha = 0.25):
"""Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 erro... | Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
cost_giou: This is the relative weight of the giou ... | __init__ | python | IDEA-Research/DINO | models/dino/matcher.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/matcher.py | Apache-2.0 |
def forward(self, outputs, targets):
""" Performs the matching
Params:
outputs: This is a dict that contains at least these entries:
"pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
"pred_boxes": Tensor of di... | Performs the matching
Params:
outputs: This is a dict that contains at least these entries:
"pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
"pred_boxes": Tensor of dim [batch_size, num_queries, 4] with the predicte... | forward | python | IDEA-Research/DINO | models/dino/matcher.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/matcher.py | Apache-2.0 |
def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_giou: float = 1, focal_alpha = 0.25):
"""Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 erro... | Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
cost_giou: This is the relative weight of the giou ... | __init__ | python | IDEA-Research/DINO | models/dino/matcher.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/matcher.py | Apache-2.0 |
def forward(self, outputs, targets):
""" Performs the matching
Params:
outputs: This is a dict that contains at least these entries:
"pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
"pred_boxes": Tensor of di... | Performs the matching
Params:
outputs: This is a dict that contains at least these entries:
"pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
"pred_boxes": Tensor of dim [batch_size, num_queries, 4] with the predicte... | forward | python | IDEA-Research/DINO | models/dino/matcher.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/matcher.py | Apache-2.0 |
def dice_loss(inputs, targets, num_boxes):
"""
Compute the DICE loss, similar to generalized IOU for masks
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
... |
Compute the DICE loss, similar to generalized IOU for masks
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
classification label for each element in input... | dice_loss | python | IDEA-Research/DINO | models/dino/segmentation.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/segmentation.py | Apache-2.0 |
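The `dice_loss` row is the DICE/F-1 style mask loss. A NumPy sketch with the +1 smoothing commonly used by DETR-family implementations (illustrative, not the repo's torch code):

```python
import numpy as np

def dice_loss(inputs, targets, num_boxes):
    """inputs: raw logits (N, ...); targets: binary masks of the same shape."""
    probs = 1.0 / (1.0 + np.exp(-inputs))             # sigmoid
    probs = probs.reshape(probs.shape[0], -1)
    targets = targets.reshape(targets.shape[0], -1)
    numerator = 2.0 * (probs * targets).sum(1)
    denominator = probs.sum(1) + targets.sum(1)
    loss = 1.0 - (numerator + 1.0) / (denominator + 1.0)  # +1 smoothing
    return loss.sum() / num_boxes

inputs = np.array([[20.0, -20.0, 20.0, -20.0]])  # confident, correct logits
targets = np.array([[1.0, 0.0, 1.0, 0.0]])
loss = dice_loss(inputs, targets, num_boxes=1)   # ~0 for a perfect mask
```

The smoothing constant keeps the loss defined when both prediction and target are empty.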
def sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2):
"""
Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float ten... |
Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
classification label for eac... | sigmoid_focal_loss | python | IDEA-Research/DINO | models/dino/segmentation.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/segmentation.py | Apache-2.0 |
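The `sigmoid_focal_loss` row follows the RetinaNet formulation: per-element sigmoid BCE modulated by (1 - p_t)^gamma, with an alpha class-balancing factor. A NumPy sketch (numerically naive; real implementations use a stable `binary_cross_entropy_with_logits`):

```python
import numpy as np

def sigmoid_focal_loss(inputs, targets, num_boxes, alpha=0.25, gamma=2.0):
    p = 1.0 / (1.0 + np.exp(-inputs))
    ce = -(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))  # per-element BCE
    p_t = p * targets + (1.0 - p) * (1.0 - targets)
    loss = ce * (1.0 - p_t) ** gamma                 # down-weight easy examples
    if alpha >= 0:
        alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
        loss = alpha_t * loss
    return loss.mean(1).sum() / num_boxes

val = sigmoid_focal_loss(np.array([[0.0, 0.0]]), np.array([[1.0, 0.0]]), num_boxes=1)
```

With zero logits, p = 0.5 everywhere, so each element contributes ln(2) * 0.25, split by the alpha weights 0.25 (positive) and 0.75 (negative).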
def __init__(self, is_thing_map, threshold=0.85):
"""
Parameters:
            is_thing_map: This is a dict whose keys are the class ids, and the values a boolean indicating whether
the class is a thing (True) or a stuff (False) class
threshold: confidence threshold: segme... |
Parameters:
            is_thing_map: This is a dict whose keys are the class ids, and the values a boolean indicating whether
the class is a thing (True) or a stuff (False) class
threshold: confidence threshold: segments with confidence lower than this will be deleted
| __init__ | python | IDEA-Research/DINO | models/dino/segmentation.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/segmentation.py | Apache-2.0 |
def forward(self, outputs, processed_sizes, target_sizes=None):
""" This function computes the panoptic prediction from the model's predictions.
Parameters:
outputs: This is a dict coming directly from the model. See the model doc for the content.
processed_sizes: This is a list ... | This function computes the panoptic prediction from the model's predictions.
Parameters:
outputs: This is a dict coming directly from the model. See the model doc for the content.
processed_sizes: This is a list of tuples (or torch tensors) of sizes of the images that were passed to the... | forward | python | IDEA-Research/DINO | models/dino/segmentation.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/segmentation.py | Apache-2.0 |
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
window... |
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
| window_partition | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
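`window_partition` is a pair of reshape/transpose operations. A NumPy sketch matching the documented shapes (the repo's version does the same with torch `view`/`permute`):

```python
import numpy as np

def window_partition(x, window_size):
    """(B, H, W, C) -> (num_windows * B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.reshape(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = np.arange(2 * 8 * 8 * 3).reshape(2, 8, 8, 3)
windows = window_partition(x, 4)  # (2 * 2 * 2, 4, 4, 3) = (8, 4, 4, 3)
```

The transpose brings the two window-grid axes next to the batch axis so the final reshape enumerates windows row-major within each image.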
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / win... |
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
| window_reverse | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
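`window_reverse` inverts `window_partition`, so composing the two must reproduce the input exactly. A self-contained NumPy sketch of both with a round-trip check:

```python
import numpy as np

def window_partition(x, window_size):
    B, H, W, C = x.shape
    x = x.reshape(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

def window_reverse(windows, window_size, H, W):
    """(num_windows * B, ws, ws, C) -> (B, H, W, C)."""
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.reshape(B, H // window_size, W // window_size, window_size, window_size, -1)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

x = np.random.default_rng(0).standard_normal((2, 8, 8, 3))
roundtrip = window_reverse(window_partition(x, 4), 4, 8, 8)  # identical to x
```

Recovering B from the window count is what lets the function sit inside a Swin block without threading the batch size through.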
def forward(self, x, mask=None):
""" Forward function.
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_hea... | Forward function.
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
| forward | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
def forward(self, x, mask_matrix):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
mask_matrix: Attention mask for cyclic shift.
"""
B, L, C = x.shape
H, W = self.H, self.W
... | Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
mask_matrix: Attention mask for cyclic shift.
| forward | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
def forward(self, x, H, W):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
x = x.view(B, H, W, C)
... | Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
| forward | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
def forward(self, x, H, W):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
# calculate attention mask for SW-MSA
Hp = int(np.ceil(H / self.window_size)) * self.window_size
... | Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
| forward | python | IDEA-Research/DINO | models/dino/swin_transformer.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/swin_transformer.py | Apache-2.0 |
def forward(self, srcs, masks, pos_embeds, query_embed=None):
"""
Input:
- srcs: List([bs, c, h, w])
- masks: List([bs, h, w])
"""
assert self.two_stage or query_embed is not None
# prepare input for encoder
src_flatten = []
mask_flatten =... |
Input:
- srcs: List([bs, c, h, w])
- masks: List([bs, h, w])
| forward | python | IDEA-Research/DINO | models/dino/transformer_deformable.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/transformer_deformable.py | Apache-2.0 |
def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None):
"""
Input:
- src: [bs, sum(hi*wi), 256]
- spatial_shapes: h,w of each level [num_level, 2]
- level_start_index: [num_level] start point of level in sum(hi*wi).
... |
Input:
- src: [bs, sum(hi*wi), 256]
- spatial_shapes: h,w of each level [num_level, 2]
- level_start_index: [num_level] start point of level in sum(hi*wi).
- valid_ratios: [bs, num_level, 2]
- pos: pos embed for src. [bs, sum(hi*wi), 256]
... | forward | python | IDEA-Research/DINO | models/dino/transformer_deformable.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/transformer_deformable.py | Apache-2.0 |
def gen_encoder_output_proposals(memory:Tensor, memory_padding_mask:Tensor, spatial_shapes:Tensor, learnedwh=None):
"""
Input:
- memory: bs, \sum{hw}, d_model
- memory_padding_mask: bs, \sum{hw}
- spatial_shapes: nlevel, 2
- learnedwh: 2
Output:
- output_memory: bs, \... |
Input:
- memory: bs, \sum{hw}, d_model
- memory_padding_mask: bs, \sum{hw}
- spatial_shapes: nlevel, 2
- learnedwh: 2
Output:
- output_memory: bs, \sum{hw}, d_model
- output_proposals: bs, \sum{hw}, 4
| gen_encoder_output_proposals | python | IDEA-Research/DINO | models/dino/utils.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/utils.py | Apache-2.0 |
def sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2):
"""
Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float ten... |
Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
classification label for eac... | sigmoid_focal_loss | python | IDEA-Research/DINO | models/dino/utils.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/utils.py | Apache-2.0 |
def _get_activation_fn(activation, d_model=256, batch_dim=0):
"""Return an activation function given a string"""
if activation == "relu":
return F.relu
if activation == "gelu":
return F.gelu
if activation == "glu":
return F.glu
if activation == "prelu":
return nn.PReL... | Return an activation function given a string | _get_activation_fn | python | IDEA-Research/DINO | models/dino/utils.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/utils.py | Apache-2.0 |
def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4):
"""
Multi-Scale Deformable Attention Module
:param d_model hidden dimension
:param n_levels number of feature levels
:param n_heads number of attention heads
:param n_points number of sa... |
Multi-Scale Deformable Attention Module
:param d_model hidden dimension
:param n_levels number of feature levels
:param n_heads number of attention heads
:param n_points number of sampling points per attention head per feature level
| __init__ | python | IDEA-Research/DINO | models/dino/ops/modules/ms_deform_attn.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/ops/modules/ms_deform_attn.py | Apache-2.0 |
def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_level_start_index, input_padding_mask=None):
"""
:param query (N, Length_{query}, C)
:param reference_points (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), b... |
:param query (N, Length_{query}, C)
:param reference_points (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area
or (N, Length_{query}, n_levels, 4), add additional (w, h) ... | forward | python | IDEA-Research/DINO | models/dino/ops/modules/ms_deform_attn.py | https://github.com/IDEA-Research/DINO/blob/master/models/dino/ops/modules/ms_deform_attn.py | Apache-2.0 |
def get_shape(val: object) -> typing.List[int]:
"""
Get the shapes from a jit value object.
Args:
val (torch._C.Value): jit value object.
Returns:
list(int): return a list of ints.
"""
if val.isCompleteTensor(): # pyre-ignore
r = val.type().sizes() # pyre-ignore
... |
Get the shapes from a jit value object.
Args:
val (torch._C.Value): jit value object.
Returns:
list(int): return a list of ints.
| get_shape | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def addmm_flop_jit(
inputs: typing.List[object], outputs: typing.List[object]
) -> typing.Counter[str]:
"""
This method counts the flops for fully connected layers with torch script.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object.
out... |
This method counts the flops for fully connected layers with torch script.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object.
outputs (list(torch._C.Value)): The output shape in the form of a list
of jit object.
Returns:
... | addmm_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
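`addmm_flop_jit` counts only the multiplications of the (m, k) x (k, n) product, i.e. m * n * k; the bias addition is ignored. A tiny sketch of that count (shape tuples stand in for the jit value objects):

```python
def addmm_flops(mat1_shape, mat2_shape):
    """Multiplications in mat1 @ mat2; the bias addition is not counted."""
    m, k = mat1_shape
    k2, n = mat2_shape
    assert k == k2, "inner dimensions must agree"
    return m * n * k

flops = addmm_flops((32, 128), (128, 64))  # 32 * 64 * 128 = 262144
```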
def conv_flop_count(
x_shape: typing.List[int],
w_shape: typing.List[int],
out_shape: typing.List[int],
) -> typing.Counter[str]:
"""
This method counts the flops for convolution. Note only multiplication is
counted. Computation for addition and bias is ignored.
Args:
x_shape (list(i... |
This method counts the flops for convolution. Note only multiplication is
counted. Computation for addition and bias is ignored.
Args:
x_shape (list(int)): The input shape before convolution.
w_shape (list(int)): The filter shape.
out_shape (list(int)): The output shape after convol... | conv_flop_count | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
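The convolution count reduces to batch * (output spatial positions) * C_out * C_in * (kernel elements), since each output element needs C_in * prod(kernel) multiplications. A sketch under the assumption of NCHW shapes and a dense, ungrouped convolution:

```python
def conv_flops(x_shape, w_shape, out_shape):
    """Multiplications only; addition and bias are ignored, as in the row above.

    x_shape:   (batch, C_in, *spatial_in)
    w_shape:   (C_out, C_in, *kernel)
    out_shape: (batch, C_out, *spatial_out)
    """
    batch = x_shape[0]
    c_out, c_in = w_shape[0], w_shape[1]
    kernel = 1
    for k in w_shape[2:]:
        kernel *= k
    out_positions = 1
    for s in out_shape[2:]:
        out_positions *= s
    return batch * out_positions * c_out * c_in * kernel

# batch 2, 3x3 kernel, 8 -> 16 channels, 32x32 output
flops = conv_flops((2, 8, 32, 32), (16, 8, 3, 3), (2, 16, 32, 32))
```

This equals prod(w_shape) * batch * prod(out_shape[2:]), the form the jit-based counter computes from the traced shapes.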
def conv_flop_jit(
inputs: typing.List[object], outputs: typing.List[object]
) -> typing.Counter[str]:
"""
This method counts the flops for convolution using torch script.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before convolution.
... |
This method counts the flops for convolution using torch script.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before convolution.
outputs (list(torch._C.Value)): The output shape in the form of a list
of jit object after convol... | conv_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def einsum_flop_jit(
inputs: typing.List[object], outputs: typing.List[object]
) -> typing.Counter[str]:
"""
This method counts the flops for the einsum operation. We currently support
two einsum operations: "nct,ncp->ntp" and "ntg,ncg->nct".
Args:
inputs (list(torch._C.Value)): The input sh... |
This method counts the flops for the einsum operation. We currently support
two einsum operations: "nct,ncp->ntp" and "ntg,ncg->nct".
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before einsum.
outputs (list(torch._C.Value)): The outpu... | einsum_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def matmul_flop_jit(
inputs: typing.List[object], outputs: typing.List[object]
) -> typing.Counter[str]:
"""
This method counts the flops for matmul.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before matmul.
outputs (list(torch._C... |
This method counts the flops for matmul.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before matmul.
outputs (list(torch._C.Value)): The output shape in the form of a list
of jit object after matmul.
Returns:
Counte... | matmul_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def batchnorm_flop_jit(
inputs: typing.List[object], outputs: typing.List[object]
) -> typing.Counter[str]:
"""
This method counts the flops for batch norm.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before batch norm.
outputs (li... |
This method counts the flops for batch norm.
Args:
inputs (list(torch._C.Value)): The input shape in the form of a list of
jit object before batch norm.
outputs (list(torch._C.Value)): The output shape in the form of a list
of jit object after batch norm.
Returns:
... | batchnorm_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def linear_flop_jit(inputs: List[Any], outputs: List[Any]) -> Number:
"""
Count flops for the aten::linear operator.
"""
# Inputs is a list of length 3; unlike aten::addmm, it is the first
# two elements that are relevant.
input_shapes = [get_shape(v) for v in inputs[0:2]]
# input_shapes[0]:... |
Count flops for the aten::linear operator.
| linear_flop_jit | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |
def norm_flop_counter(affine_arg_index: int) -> Handle:
"""
Args:
affine_arg_index: index of the affine argument in inputs
"""
def norm_flop_jit(inputs: List[Any], outputs: List[Any]) -> Number:
"""
Count flops for norm layers.
"""
# Inputs[0] contains the shape ... |
Args:
affine_arg_index: index of the affine argument in inputs
| norm_flop_counter | python | IDEA-Research/DINO | tools/benchmark.py | https://github.com/IDEA-Research/DINO/blob/master/tools/benchmark.py | Apache-2.0 |