| repo (string, 2-152 chars, nullable) | file (string, 15-239 chars) | code (string, 0-58.4M chars) | file_length (int64, 0-58.4M) | avg_line_length (float64, 0-1.81M) | max_line_length (int64, 0-12.7M) | extension_type (string, 364 classes) |
|---|---|---|---|---|---|---|
null | Multi-domain-learning-FAS-main/source_multi_domain/config.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Authors: Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu.
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
# (IARPA), via IARPA R&D... | 14,092 | 38.810734 | 126 | py |
null | Multi-domain-learning-FAS-main/source_multi_domain/README.md | # Multi-domain Learning for Updating Face Anti-spoofing Models
<p align="center">
<img src="https://github.com/CHELSEA234/Multi-domain-learning-FAS/blob/main/source_multi_domain/figures/overall_architecture.jpg" alt="drawing" width="1000"/>
</p>
This page contains the official implementation of our ECCV2022 oral pape... | 2,450 | 51.148936 | 272 | md |
null | Multi-domain-learning-FAS-main/source_multi_domain/parameters.py | # Copyright 2022
#
# Authors: Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu.
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
# (IARPA), via IARPA R&D Contract No. 2017-17020... | 4,776 | 190.08 | 821 | py |
null | Multi-domain-learning-FAS-main/source_multi_domain/test_architecture.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Authors: Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu.
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
# (IARPA), via IARPA R&D... | 9,538 | 42.756881 | 109 | py |
null | Multi-domain-learning-FAS-main/source_multi_domain/metrics.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 2,728 | 37.985714 | 84 | py |
null | Multi-domain-learning-FAS-main/source_multi_domain/warp.py | import tensorflow as tf
import cv2
import numpy as np
from scipy.ndimage.interpolation import map_coordinates as sp_map_coordinates
import matplotlib.tri as mtri
def tf_flatten(a):
"""Flatten tensor"""
return tf.reshape(a, [-1])
def tf_repeat(a, repeats, axis=0):
"""TensorFlow version of np.repeat for 1D... | 7,015 | 33.392157 | 83 | py |
null | Multi-domain-learning-FAS-main/source_multi_domain/FASMD/README.md | ## FASMD Dataset
The FASMD Dataset is constructed on three existing datasets: SiW-Mv2, SiW, and Oulu-NPU. FASMD consists of five sub-datasets: dataset A is the
source domain dataset, and B, C, D and E are four target domain datasets. The details can be found in [[PDF]](http://cvlab.cse.msu.edu/pdfs/guo_liu_jain_liu_ecc... | 2,289 | 57.717949 | 285 | md |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/inference.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Authors: Xiao Guo.
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
# (IARPA), via IARPA R&D Contract No. 2017-17020200004. The views... | 11,656 | 46.386179 | 127 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/environment.yml | name: anti_spoofing_siwmv2
channels:
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2022.9.24=ha878542_0
- certifi=2022.9.14=py38h06a4308_0
- dlib=19.24.0=py38he2161a6_0
- jpeg=9e=h166bdaf_1
- ld_impl_linux-64=2.38=h1181459_1
- libblas=3.9... | 2,702 | 24.990385 | 73 | yml |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/test.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 9,685 | 41.296943 | 102 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/preprocessing.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 6,305 | 42.191781 | 111 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/run.sh | source ~/.bashrc
conda activate anti_spoofing
CUDA_NUM=0
python train.py --cuda=$CUDA_NUM --pro=1
python test.py --cuda=$CUDA_NUM --pro=1
python train.py --cuda=$CUDA_NUM --pro=2 --unknown=Co
python test.py --cuda=$CUDA_NUM --pro=2 --unknown=Co
python train.py --cuda=$CUDA_NUM --pro=2 --unknown=Eye
python test.py --cud... | 995 | 46.428571 | 59 | sh |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/utils.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 14,443 | 34.841191 | 130 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/model.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 10,063 | 38.007752 | 105 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/dataset.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 15,618 | 47.657321 | 122 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/inference.sh | source ~/.bashrc
conda activate anti_spoofing
CUDA_NUM=0
python inference.py --cuda=$CUDA_NUM --pro=1 --dir=./demo/live/ --overwrite --weight_dir=../saved_model
python inference.py --cuda=$CUDA_NUM --pro=1 --img=./demo/1.png --overwrite --weight_dir=../saved_model | 264 | 52 | 103 | sh |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/csv_parser.sh | source ~/.bashrc
conda activate anti_spoofing
CUDA_NUM=0
python csv_parser.py --pro=1 --log_dir=../train_log
python csv_parser.py --pro=2 --log_dir=../train_log
python csv_parser.py --pro=3 --log_dir=../train_log | 212 | 34.5 | 51 | sh |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/README.md | # SiW-Mv2 Dataset
<p align="center">
<img src="https://github.com/CHELSEA234/Multi-domain-learning-FAS/blob/main/source_SiW_Mv2/figures/train_tb.png" alt="drawing" width="500"/>
<img src="https://github.com/CHELSEA234/Multi-domain-learning-FAS/blob/main/source_SiW_Mv2/figures/intermediate_result.png" alt="draw... | 8,510 | 45.005405 | 326 | md |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/parameters.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 4,864 | 179.185185 | 821 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/metrics.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 2,728 | 37.985714 | 84 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/csv_parser.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 16,311 | 44.437326 | 150 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/train.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Authors: Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu.
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intelligence Advanced Research Projects Activity
# (IARPA), via IARPA R&D... | 13,417 | 42.283871 | 103 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/config_siwm.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 7,671 | 36.062802 | 121 | py |
null | Multi-domain-learning-FAS-main/source_SiW_Mv2/warp.py | # -*- coding: utf-8 -*-
# Copyright 2022
#
# Multi-domain Learning for Updating Face Anti-spoofing Models (ECCV 2022)
# Xiao Guo, Yaojie Liu, Anil Jain, and Xiaoming Liu
#
# All Rights Reserved.
#
# This research is based upon work supported by the Office of the Director of
# National Intelligence (ODNI), Intellig... | 7,914 | 34.81448 | 83 | py |
null | DA-Transformer-main/README.md | # DA-Transformer
Directed Acyclic Transformer (DA-Transformer) is a non-autoregressive sequence-to-sequence model designed for parallel text generation. This repository contains the implementation of DA-Transformer, as well as pre-trained checkpoints.
**Abstract**: Unlike traditional sequence-to-sequence models that ... | 34,252 | 62.549165 | 682 | md |
null | DA-Transformer-main/hubconf.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""isort:skip_file"""
import functools
import importlib
dependencies = [
"dataclasses",
"hydra",
"numpy",
"omegaconf",
"... | 2,099 | 27.378378 | 82 | py |
null | DA-Transformer-main/setup.py | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
import subprocess
import site
import sys
site.ENABLE_USER_SITE = "--user" in sys.argv[1:]
from setuptools im... | 8,701 | 28.90378 | 92 | py |
null | DA-Transformer-main/train.py | #!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Legacy entry point. Use fairseq_cli/train.py or fairseq-train instead.
"""
from fairseq_cli.train import cli_mai... | 366 | 23.466667 | 70 | py |
null | DA-Transformer-main/training.md | # Training Configs
### Task Configs
```bash
--task translation_dat_task # Task for DA-Transformer
--upsample-base predict # Possible values are: ["predict", "source", "source_old"].
# If set to "predict", the DAG size will be determined by the golden target length during ... | 8,205 | 78.669903 | 192 | md |
null | DA-Transformer-main/examples/DA-Transformer/personachat.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
pretrained_model=/path/to/model.bin
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`#... | 2,604 | 41.704918 | 115 | sh |
null | DA-Transformer-main/examples/DA-Transformer/pretrain.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`# DA-Transformer Task Con... | 2,198 | 38.981818 | 88 | sh |
null | DA-Transformer-main/examples/DA-Transformer/process_bert_uncased.py | from transformers import BertTokenizer
def bert_uncased_tokenize(fin, fout):
fin = open(fin, 'r', encoding='utf-8')
fout = open(fout, 'w', encoding='utf-8')
tok = BertTokenizer.from_pretrained('bert-base-uncased')
for line in fin:
word_pieces = tok.tokenize(line.strip())
new_line = " ".... | 719 | 39 | 60 | py |
null | DA-Transformer-main/examples/DA-Transformer/process_pretrain.py | import argparse
import numpy as np
import random
import math
# import numba
from transformers import BertTokenizer
tok = BertTokenizer.from_pretrained('bert-base-uncased')
parser = argparse.ArgumentParser()
# fmt: off
parser.add_argument('file')
parser.add_argument('--max-seq-length', type=int, default=600)
parser.ad... | 4,214 | 33.268293 | 135 | py |
null | DA-Transformer-main/examples/DA-Transformer/quora.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
pretrained_model=/path/to/model.bin
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`#... | 2,568 | 41.816667 | 115 | sh |
null | DA-Transformer-main/examples/DA-Transformer/rocstory.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
pretrained_model=/path/to/model.bin
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`#... | 2,569 | 41.833333 | 115 | sh |
null | DA-Transformer-main/examples/DA-Transformer/squad.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
pretrained_model=/path/to/model.bin
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
... | 2,607 | 41.754098 | 115 | sh |
null | DA-Transformer-main/examples/DA-Transformer/wmt14_deen.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`# DA-Transformer Task Con... | 2,400 | 41.122807 | 112 | sh |
null | DA-Transformer-main/examples/DA-Transformer/wmt14_ende.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`# DA-Transformer Task Con... | 2,400 | 41.122807 | 112 | sh |
null | DA-Transformer-main/examples/DA-Transformer/wmt17_enzh.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`# DA-Transformer Task Con... | 2,353 | 40.298246 | 90 | sh |
null | DA-Transformer-main/examples/DA-Transformer/wmt17_zhen.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
`# DA-Transformer Task Con... | 2,375 | 40.684211 | 112 | sh |
null | DA-Transformer-main/examples/DA-Transformer/xsum.sh | data_dir=/path/to/binarized/data/dir
checkpoint_dir=/path/to/checkpoint/dir
tensorboard_dir=/path/to/tensorboard/dir
pretrained_model=/path/to/model.bin
log_txt=/path/to/logfile
CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train ${data_dir} \
\
`# loading DA-Transformer plugins` \
--user-dir fs_plugins \
\
... | 2,609 | 41.786885 | 115 | sh |
null | DA-Transformer-main/examples/DA-Transformer/evaluation/evaluate_personachat.py | # pip install pycocoevalcap
# pip install nltk
from collections import Counter
import numpy as np
from argparse import ArgumentParser
from pycocoevalcap.bleu.bleu import Bleu
def distinct(seqs):
""" Calculate intra/inter distinct 1/2. """
batch_size = len(seqs)
intra_dist1, intra_dist2 = [], []
unigr... | 2,150 | 32.092308 | 106 | py |
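The `distinct` metric in the `evaluate_personachat.py` row is truncated above. One common definition of inter-distinct-n — the ratio of unique n-grams to total n-grams across all generated sequences — can be sketched as below; note this formula is an assumption, since the exact computation in the file is cut off:

```python
from collections import Counter

def inter_distinct_n(seqs, n):
    """Ratio of unique n-grams to total n-grams pooled over all
    sequences (one common inter-dist-n definition; assumed, not
    taken verbatim from the truncated row above)."""
    counter = Counter()
    for seq in seqs:
        counter.update(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(counter.values())
    return len(counter) / total if total else 0.0

# 4 bigrams total, 2 unique -> 0.5
print(inter_distinct_n([["a", "b", "a", "b"], ["a", "b"]], 2))
```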
null | DA-Transformer-main/examples/DA-Transformer/evaluation/evaluate_quora.py | # pip install pycocoevalcap
#!/usr/bin/env python
from __future__ import print_function
__author__ = 'xinya'
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider
from collections import defaultdi... | 8,497 | 35.62931 | 245 | py |
null | DA-Transformer-main/examples/DA-Transformer/evaluation/evaluate_rocstory.py | # pip install pycocoevalcap
# pip install nltk
from collections import Counter
from nltk import ngrams
import numpy as np
from argparse import ArgumentParser
import string
from pycocoevalcap.bleu.bleu import Bleu
_tok_dict = {"(": "-lrb-", ")": "-rrb-",
"[": "-lsb-", "]": "-rsb-",
"{": "-lc... | 5,986 | 36.186335 | 244 | py |
null | DA-Transformer-main/examples/DA-Transformer/evaluation/evaluate_squad1.1.py | # pip install pycocoevalcap
#!/usr/bin/env python
from __future__ import print_function
__author__ = 'xinya'
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider
from collections import defaultd... | 8,087 | 34.946667 | 244 | py |
null | DA-Transformer-main/examples/DA-Transformer/evaluation/evaluate_xsum.py | # pip install pycocoevalcap
# check https://github.com/pltrdy/files2rouge to install files2rouge
#!/usr/bin/env python
from __future__ import print_function
__author__ = 'xinya'
from collections import defaultdict
from argparse import ArgumentParser
import string
import os
import sys
#reload(sys)
#sys.setdefaultenc... | 9,622 | 38.277551 | 244 | py |
null | DA-Transformer-main/examples/DA-Transformer/evaluation/extract_log.py | import sys
import re
import argparse
res = {}
for line in sys.stdin.readlines():
m = re.search(r"H-([0-9]+):?\s+(?:[\-0-9.infe]*)\s+(\S.*)$", line)
if m:
        res[int(m.group(1))] = m.group(2).replace("## ", "")  # str has no .remove; .replace strips BPE markers
for i in range(len(res)):
print(res[i])
| 268 | 19.692308 | 70 | py |
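The `extract_log.py` row above pulls hypothesis lines (`H-<id>`) out of fairseq-generate logs and re-orders them by sentence id. A quick self-contained check of that extraction on a few hypothetical log lines (the exact fairseq output format here is an assumption); note `str.replace` is used, since Python strings have no `.remove` method:

```python
import re

# Hypothetical fairseq-generate output: H- lines carry a sentence id,
# a score, and the hypothesis; S- lines (sources) should be ignored.
sample = [
    "H-1\t-0.35\tthe quick ## brown fox",
    "H-0\t-0.12\thello world",
    "S-0\tsource line that should be ignored",
]

res = {}
for line in sample:
    m = re.search(r"H-([0-9]+):?\s+(?:[\-0-9.infe]*)\s+(\S.*)$", line)
    if m:
        # strip BPE-style "## " continuation markers from the hypothesis
        res[int(m.group(1))] = m.group(2).replace("## ", "")

for i in range(len(res)):
    print(res[i])  # prints "hello world" then "the quick brown fox"
```

Printing by ascending integer id restores the original sentence order, since fairseq emits hypotheses in batch (not corpus) order.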
null | DA-Transformer-main/examples/mass/README.md | The code is modified from https://github.com/microsoft/MASS/tree/master/MASS-summarization
| 93 | 30.333333 | 91 | md |
null | DA-Transformer-main/examples/mass/__init__.py | from . import masked_s2s
from . import s2s_model
from . import translation
| 75 | 18 | 25 | py |
null | DA-Transformer-main/examples/mass/bert_dictionary.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from collections import Counter
from multiprocessing import Pool
import os
import torch
from fairseq.tokenizer import tokenize_line
# from f... | 1,760 | 26.092308 | 82 | py |
null | DA-Transformer-main/examples/mass/hub_interface.py | ##########################################################################
# Copyright (C) 2022 COAI @ Tsinghua University
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www... | 7,569 | 38.222798 | 143 | py |
null | DA-Transformer-main/examples/mass/learned_positional_embedding.py | import torch.nn as nn
from fairseq import utils
class LearnedPositionalEmbedding(nn.Embedding):
"""
This module learns positional embeddings up to a fixed maximum size.
Padding ids are ignored by either offsetting based on padding_idx
or by setting padding_idx to None and ensuring that the appropriat... | 1,786 | 35.469388 | 94 | py |
null | DA-Transformer-main/examples/mass/masked_dataset.py | import numpy as np
import torch
import random
import time
import math
from fairseq import utils
from fairseq.data import data_utils, LanguagePairDataset
class MaskedLanguagePairDataset(LanguagePairDataset):
""" Wrapper for masked language datasets
(support monolingual and bilingual)
For monolin... | 4,131 | 35.566372 | 104 | py |
null | DA-Transformer-main/examples/mass/masked_s2s.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
import numpy as np
import torch
from collections import OrderedDict
from fairseq import utils
from fairseq.data import (
data_... | 6,496 | 36.33908 | 104 | py |
null | DA-Transformer-main/examples/mass/s2s_model.py | import math
import os
import json
import torch
import torch.nn as nn
import torch.nn.functional as F
from fairseq import options, utils
from fairseq.models import (
FairseqEncoder,
FairseqIncrementalDecoder,
FairseqEncoderDecoderModel,
register_model,
register_model_architecture,
)
from fairseq.mo... | 28,612 | 37.406711 | 154 | py |
null | DA-Transformer-main/examples/mass/translation.py | #from fairseq.data import BertDictionary
from fairseq.tasks import register_task
from fairseq import metrics, utils
from fairseq.tasks.translation import TranslationTask, TranslationConfig
from .bert_dictionary import BertDictionary
import torch
import logging
logger = logging.getLogger(__name__)
@register_task('tr... | 2,164 | 39.849057 | 114 | py |
null | DA-Transformer-main/examples/transformer/__init__.py | 0 | 0 | 0 | py | |
null | DA-Transformer-main/examples/transformer/hub_interface.py | ##########################################################################
# Copyright (C) 2022 COAI @ Tsinghua University
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www... | 9,235 | 38.135593 | 143 | py |
null | DA-Transformer-main/fairseq/__init__.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""isort:skip_file"""
import os
import sys
try:
from .version import __version__ # noqa
except ImportError:
version_txt = os.path.jo... | 1,337 | 28.086957 | 72 | py |
null | DA-Transformer-main/fairseq/binarizer.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
import os
import typing as tp
from abc import ABC, abstractmethod
from collections import Counter
from dataclasses import datac... | 11,397 | 28.837696 | 99 | py |
null | DA-Transformer-main/fairseq/checkpoint_utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import ast
import collections
import contextlib
import inspect
import logging
import os
import re
import time
import traceback
from collection... | 34,968 | 37.72536 | 114 | py |
null | DA-Transformer-main/fairseq/file_chunker_utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
import typing as tp
def _safe_readline(fd) -> str:
pos = fd.tell()
while True:
try:
return fd.readline... | 2,691 | 30.670588 | 78 | py |
null | DA-Transformer-main/fairseq/file_io.py | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
import os
import shutil
from typing import List, Optional
logger = logging.getLogger(__file__)
try:... | 5,614 | 27.502538 | 96 | py |
null | DA-Transformer-main/fairseq/file_utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Utilities for working with the local dataset cache.
This file is adapted from `AllenNLP <https://github.com/allenai/allennlp>`_.
and `hugg... | 11,912 | 30.768 | 92 | py |
null | DA-Transformer-main/fairseq/hub_utils.py | #!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import copy
import logging
import os
from typing import Any, Dict, Iterator, List
import torch
from... | 11,350 | 35.034921 | 88 | py |
null | DA-Transformer-main/fairseq/incremental_decoding_utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import uuid
from typing import Dict, Optional
from torch import Tensor
class FairseqIncrementalState(object):
def __init__(self, *args,... | 1,773 | 33.115385 | 76 | py |
null | DA-Transformer-main/fairseq/iterative_refinement_generator.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from collections import namedtuple
import numpy as np
import torch
from fairseq import utils
DecoderOut = namedtuple(
"IterativeRefinem... | 13,238 | 35.775 | 93 | py |
null | DA-Transformer-main/fairseq/nan_detector.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
import torch
logger = logging.getLogger(__name__)
class NanDetector:
"""
Detects the first NaN or Inf in forward a... | 3,742 | 33.33945 | 119 | py |
null | DA-Transformer-main/fairseq/ngram_repeat_block.py | # Originally from Microsoft Corporation.
# Licensed under the MIT License.
""" Wrapper for ngram_repeat_block cuda extension """
import math
import warnings
from typing import Dict, List, Optional
import torch
from torch import nn
try:
from fairseq import ngram_repeat_block_cuda
EXTENSION_BUILT = True
excep... | 5,286 | 34.013245 | 102 | py |
null | DA-Transformer-main/fairseq/options.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
from pathlib import Path
from typing import Callable, List, Optional, Union
import torch
from fairseq import utils
from fairs... | 15,823 | 36.320755 | 105 | py |
null | DA-Transformer-main/fairseq/pdb.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import multiprocessing
import os
import pdb
import sys
__all__ = ["set_trace"]
_stdin = [None]
_stdin_lock = multiprocessing.Lock()
try:
... | 1,089 | 21.708333 | 65 | py |
null | DA-Transformer-main/fairseq/quantization_utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from fairseq.modules.quantization import pq, quantization_options, scalar
from omegaconf import DictConfig
logger = logging.... | 5,507 | 37.25 | 87 | py |
null | DA-Transformer-main/fairseq/registry.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from argparse import Namespace
from typing import Union
from fairseq.dataclass import FairseqDataclass
from fairseq.dataclass.utils import me... | 3,449 | 33.158416 | 87 | py |
null | DA-Transformer-main/fairseq/search.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
from typing import List, Optional
import torch
import torch.nn as nn
from fairseq.token_generation_constraints import (
Const... | 31,337 | 37.451534 | 100 | py |
null | DA-Transformer-main/fairseq/sequence_generator.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
from typing import Dict, List, Optional
import sys
import torch
import torch.nn as nn
from fairseq import search, utils
from fair... | 39,404 | 38.843276 | 110 | py |
null | DA-Transformer-main/fairseq/sequence_scorer.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import sys
import torch
from fairseq import utils
class SequenceScorer(object):
"""Scores the target for a given source sentence."""
... | 5,450 | 34.396104 | 101 | py |
null | DA-Transformer-main/fairseq/speech_generator.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch
import numpy as np
from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig
class SpeechGenerator(object):
def ... | 8,840 | 37.107759 | 84 | py |
null | DA-Transformer-main/fairseq/token_generation_constraints.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""Implements tracking of constraints for a beam item.
A list of constraints is given as a list of one or more token
sequences, each of lengt... | 16,555 | 31.654832 | 96 | py |
null | DA-Transformer-main/fairseq/tokenizer.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import re
SPACE_NORMALIZER = re.compile(r"\s+")
def tokenize_line(line):
line = SPACE_NORMALIZER.sub(" ", line)
line = line.strip(... | 346 | 20.6875 | 65 | py |
null | DA-Transformer-main/fairseq/trainer.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Train a network across multiple GPUs.
"""
import contextlib
import logging
import os
import sys
import time
from argparse import Namespac... | 68,031 | 40.635251 | 202 | py |
null | DA-Transformer-main/fairseq/utils.py | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import collections
import contextlib
import copy
import importlib
import logging
import os
import sys
import warnings
from ite...
(file_length 26,722 | avg_line_length 30.775268 | max_line_length 111 | extension py)

DA-Transformer-main/fairseq/benchmark/__init__.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# import models/tasks to register them
from . import dummy_dataset, dummy_lm, dummy_masked_lm, dummy_model, dummy_mt # noqa
(file_length 303 | avg_line_length 37 | max_line_length 85 | extension py)

DA-Transformer-main/fairseq/benchmark/dummy_dataset.py:
import numpy as np
from fairseq.data import FairseqDataset
class DummyDataset(FairseqDataset):
def __init__(self, batch, num_items, item_size):
super().__init__()
self.batch = batch
self.num_items = num_items
self.item_size = item_size
def __getitem__(self, index):
        ret...
(file_length 803 | avg_line_length 20.72973 | max_line_length 58 | extension py)
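The row above cuts off at `__getitem__`. A stand-alone sketch of the same idea, a map-style dataset that serves fixed dummy data so benchmarks pay no real data-loading cost, without the fairseq base class (the `collater` behavior here is an assumption based on the constructor arguments shown):

```python
class SimpleDummyDataset:
    """Hypothetical simplification: every index maps to the same fixed
    batch, making data loading constant-cost during benchmarking."""
    def __init__(self, batch, num_items, item_size):
        self.batch = batch
        self.num_items = num_items
        self.item_size = item_size  # nominal per-item size for batching heuristics

    def __getitem__(self, index):
        return index  # items are just indices; collation returns the fixed batch

    def __len__(self):
        return self.num_items

    def collater(self, samples):
        return self.batch
```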
DA-Transformer-main/fairseq/benchmark/dummy_lm.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from dataclasses import dataclass, field
from typing import Optional
import torch
from .dummy_dataset import DummyDataset
from...
(file_length 2,757 | avg_line_length 31.833333 | max_line_length 84 | extension py)

DA-Transformer-main/fairseq/benchmark/dummy_masked_lm.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from dataclasses import dataclass, field
from typing import Optional
import torch
from omegaconf import II
from .dummy_datase...
(file_length 3,123 | avg_line_length 31.884211 | max_line_length 84 | extension py)

DA-Transformer-main/fairseq/benchmark/dummy_model.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch.nn as nn
import torch.nn.functional as F
from fairseq.data import Dictionary
from fairseq.models import (
FairseqDecoder,
    ...
(file_length 3,090 | avg_line_length 30.865979 | max_line_length 84 | extension py)

DA-Transformer-main/fairseq/benchmark/dummy_mt.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
import numpy as np
import torch
from fairseq.data import Dictionary, FairseqDataset
from fairseq.tasks import LegacyFairseqTa...
(file_length 3,677 | avg_line_length 29.65 | max_line_length 84 | extension py)

DA-Transformer-main/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp:
/*
Copyright (c) Microsoft Corporation.
Licensed under the MIT License.
*/
#include <torch/extension.h>
#include <vector>
/*
CPP Binding for CUDA OP
*/
// CUDA forward declarations
torch::Tensor ngram_repeat_block_cuda_forward(
torch::Tensor tokens,
torch::Tensor lprobs,
int bsz,
int step,
    int be...
(file_length 1,262 | avg_line_length 21.553571 | max_line_length 66 | extension cpp)
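The CUDA binding above declares an n-gram repeat-blocking op for beam search. A pure-Python sketch of what such a kernel computes is given below; function and argument names are illustrative, not the binding's actual signature.

```python
def ban_repeated_ngrams(tokens, lprobs, ngram_size):
    """For each hypothesis, set -inf on any next token that would repeat
    an n-gram already present in that hypothesis (assumes ngram_size >= 2).
    tokens: list of token-id lists; lprobs: parallel list of vocab-sized
    log-prob lists, modified in place."""
    n = ngram_size
    for hyp, probs in zip(tokens, lprobs):
        if len(hyp) < n - 1:
            continue
        prefix = tuple(hyp[len(hyp) - (n - 1):])  # last n-1 generated tokens
        for i in range(len(hyp) - n + 1):
            if tuple(hyp[i:i + n - 1]) == prefix:
                probs[hyp[i + n - 1]] = float("-inf")  # ban the completing token
    return lprobs
```

The CUDA version exists because this scan runs once per decoding step for every beam hypothesis, which is slow in Python at scale.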
DA-Transformer-main/fairseq/clib/libbase/balanced_assignment.cpp:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
/*
C++ code for solving the linear assignment problem.
Based on the Auction Algorithm from
https://dspace.mit.edu/bitstr...
(file_length 4,016 | avg_line_length 35.518182 | max_line_length 80 | extension cpp)
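The C++ file implements the auction algorithm for speed. As a reference for the problem it solves, maximizing the total score of a one-to-one row-to-column assignment, here is a brute-force version (practical only for tiny inputs, and not the auction algorithm itself):

```python
from itertools import permutations

def best_assignment(scores):
    """Exhaustive reference solution to the linear assignment problem:
    choose a distinct column for every row, maximizing the summed score."""
    n = len(scores)
    best_total, best_cols = float("-inf"), None
    for cols in permutations(range(n)):
        total = sum(scores[r][c] for r, c in zip(range(n), cols))
        if total > best_total:
            best_total, best_cols = total, list(cols)
    return best_total, best_cols
```

The auction algorithm reaches the same optimum (up to its epsilon tolerance) in roughly O(n^2) per bidding round rather than O(n!).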
DA-Transformer-main/fairseq/clib/libbleu/libbleu.cpp:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
#include <array>
#include <cstdio>
#include <cstring>
#include <map>
// NOLINTNEXTLINE
typedef struct {
  size_t reflen...
(file_length 3,019 | avg_line_length 18.113924 | max_line_length 77 | extension cpp)

DA-Transformer-main/fairseq/clib/libbleu/module.cpp:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
#include <Python.h>
static PyMethodDef method_def[] = {{NULL, NULL, 0, NULL}}; // NOLINT
static struct PyModuleDef mod...
(file_length 814 | avg_line_length 22.970588 | max_line_length 68 | extension cpp)

DA-Transformer-main/fairseq/clib/libnat/edit_dist.cpp:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
#include <pybind11/detail/common.h>
#include <pybind11/pybind11.h>
#include <torch/torch.h> // @manual=//caffe2:torch_ex...
(file_length 5,958 | avg_line_length 24.685345 | max_line_length 76 | extension cpp)

DA-Transformer-main/fairseq/clib/libnat_cuda/binding.cpp:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
/*
This code is partially adpoted from
https://github.com/1ytic/pytorch-edit-distance
*/
#include <torch/types.h>
#i...
(file_length 1,769 | avg_line_length 25.029412 | max_line_length 79 | extension cpp)

DA-Transformer-main/fairseq/clib/libnat_cuda/edit_dist.h:
/**
* Copyright 2017-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*/
#pragma once
#include <torch/extension.h>
torch::Tensor LevenshteinDistanceCuda(
torch::Tensor source,
    torch::...
(file_length 627 | avg_line_length 23.153846 | max_line_length 67 | extension h)
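The header above declares `LevenshteinDistanceCuda`, a batched GPU edit-distance kernel. The underlying computation is the classic dynamic program, shown here for a single pair of sequences:

```python
def levenshtein(source, target):
    """dist[i][j] = edit distance between source[:i] and target[:j]."""
    m, n = len(source), len(target)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i  # delete all of source[:i]
    for j in range(n + 1):
        dist[0][j] = j  # insert all of target[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution / match
    return dist[m][n]
```

The CUDA version additionally recovers the edit operations themselves, which non-autoregressive (Levenshtein Transformer) training uses as supervision.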
DA-Transformer-main/fairseq/config/__init__.py:
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
(file_length 177 | avg_line_length 34.6 | max_line_length 65 | extension py)

DA-Transformer-main/fairseq/config/config.yaml:
# @package _group_
hydra:
run:
dir: .
defaults:
- _self_
- task: null
- model: null
- criterion: cross_entropy
- optimizer: null
- lr_scheduler: fixed
- bpe: null
- tokenizer: null
- scoring: null
- generation: null
- common_eval: null
- eval_lm: null
(file_length 308 | avg_line_length 14.45 | max_line_length 30 | extension yaml)

DA-Transformer-main/fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml:
# @package _group_
activation_fn: "relu"
dropout: 0.1
attention_dropout: 0.1
activation_dropout: 0.0
relu_dropout: 0.0
decoder_embed_dim: 512
decoder_output_dim: 512
decoder_input_dim: 512
decoder_ffn_embed_dim: 4096
decoder_layers: 12
decoder_attention_heads: 16
decoder_normalize_before: true
no_decoder_final_norm: tr...
(file_length 991 | avg_line_length 25.810811 | max_line_length 90 | extension yaml)

DA-Transformer-main/fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml:
# @package _group_
activation_fn: "relu"
dropout: 0.3
attention_dropout: 0.1
activation_dropout: 0.1
relu_dropout: 0.1
decoder_embed_dim: 1024
decoder_output_dim: 1024
decoder_input_dim: 1024
decoder_ffn_embed_dim: 4096
decoder_layers: 16
decoder_attention_heads: 8
decoder_normalize_before: true
no_decoder_final_norm: ...
(file_length 1,010 | avg_line_length 26.324324 | max_line_length 90 | extension yaml)

DA-Transformer-main/fairseq/config/model/transformer_lm/transformer_lm_big.yaml:
# @package _group_
activation_fn: "relu"
dropout: 0.1
attention_dropout: 0.0
activation_dropout: 0.0
relu_dropout: 0.0
decoder_embed_dim: 1024
decoder_output_dim: 1024
decoder_input_dim: 1024
decoder_ffn_embed_dim: 4096
decoder_layers: 12
decoder_attention_heads: 16
decoder_normalize_before: true
no_decoder_final_norm:...
(file_length 995 | avg_line_length 25.918919 | max_line_length 90 | extension yaml)

DA-Transformer-main/fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml:
# @package _group_
activation_fn: "relu"
dropout: 0.1
attention_dropout: 0.1
activation_dropout: 0.0
relu_dropout: 0.0
decoder_embed_dim: 512
decoder_output_dim: 512
decoder_input_dim: 512
decoder_ffn_embed_dim: 4096
decoder_layers: 12
decoder_attention_heads: 16
decoder_normalize_before: true
no_decoder_final_norm: tr...
(file_length 991 | avg_line_length 25.810811 | max_line_length 90 | extension yaml)