| column | dtype | length / value range |
|--------------|--------|----------------------|
| id | string | 2 - 115 chars |
| lastModified | string | 24 chars |
| tags | list | |
| author | string | 2 - 42 chars |
| description | string | 0 - 68.7k chars |
| citation | string | 0 - 10.7k chars |
| cardData | null | |
| likes | int64 | 0 - 3.55k |
| downloads | int64 | 0 - 10.1M |
| card | string | 0 - 1.01M chars |
conll2000
2023-04-05T10:02:23.000Z
[ "language:en", "region:us" ]
null
Text chunking consists of dividing a text into syntactically correlated parts of words. For example, the sentence He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows: [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ] [PP in ] [NP September ] . Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test data for this task are available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz of Tilburg University, The Netherlands.
@inproceedings{tksbuchholz2000conll, author = "Tjong Kim Sang, Erik F. and Sabine Buchholz", title = "Introduction to the CoNLL-2000 Shared Task: Chunking", editor = "Claire Cardie and Walter Daelemans and Claire Nedellec and Tjong Kim Sang, Erik", booktitle = "Proceedings of CoNLL-2000 and LLL-2000", publisher = "Lisbon, Portugal", pages = "127--132", year = "2000" }
null
2
129
--- language: - en paperswithcode_id: conll-2000-1 pretty_name: CoNLL-2000 dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '''''' '1': '#' '2': $ '3': ( '4': ) '5': ',' '6': . '7': ':' '8': '``' '9': CC '10': CD '11': DT '12': EX '13': FW '14': IN '15': JJ '16': JJR '17': JJS '18': MD '19': NN '20': NNP '21': NNPS '22': NNS '23': PDT '24': POS '25': PRP '26': PRP$ '27': RB '28': RBR '29': RBS '30': RP '31': SYM '32': TO '33': UH '34': VB '35': VBD '36': VBG '37': VBN '38': VBP '39': VBZ '40': WDT '41': WP '42': WP$ '43': WRB - name: chunk_tags sequence: class_label: names: '0': O '1': B-ADJP '2': I-ADJP '3': B-ADVP '4': I-ADVP '5': B-CONJP '6': I-CONJP '7': B-INTJ '8': I-INTJ '9': B-LST '10': I-LST '11': B-NP '12': I-NP '13': B-PP '14': I-PP '15': B-PRT '16': I-PRT '17': B-SBAR '18': I-SBAR '19': B-UCP '20': I-UCP '21': B-VP '22': I-VP splits: - name: train num_bytes: 5356965 num_examples: 8937 - name: test num_bytes: 1201151 num_examples: 2013 download_size: 3481560 dataset_size: 6558116 --- # Dataset Card for "conll2000" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset 
Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.clips.uantwerpen.be/conll2000/chunking/](https://www.clips.uantwerpen.be/conll2000/chunking/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.48 MB - **Size of the generated dataset:** 6.55 MB - **Total amount of disk used:** 10.03 MB ### Dataset Summary Text chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows: [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ] [PP in ] [NP September ] . Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test data for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz from Tilburg University, The Netherlands. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### conll2000 - **Size of downloaded dataset files:** 3.48 MB - **Size of the generated dataset:** 6.55 MB - **Total amount of disk used:** 10.03 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "chunk_tags": [11, 13, 11, 12, 21, 22, 22, 22, 22, 11, 12, 12, 17, 11, 12, 13, 11, 0, 1, 13, 11, 11, 0, 21, 22, 22, 11, 12, 12, 13, 11, 12, 12, 11, 12, 12, 0], "id": "0", "pos_tags": [19, 14, 11, 19, 39, 27, 37, 32, 34, 11, 15, 19, 14, 19, 22, 14, 20, 5, 15, 14, 19, 19, 5, 34, 32, 34, 11, 15, 19, 14, 20, 9, 20, 24, 15, 22, 6], "tokens": "[\"Confidence\", \"in\", \"the\", \"pound\", \"is\", \"widely\", \"expected\", \"to\", \"take\", \"another\", \"sharp\", \"dive\", \"if\", \"trade\", \"figur..." } ``` ### Data Fields The data fields are the same among all splits. #### conll2000 - `id`: a `string` feature. - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of classification labels, with possible values including `''` (0), `#` (1), `$` (2), `(` (3), `)` (4). - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4). 
### Data Splits | name |train|test| |---------|----:|---:| |conll2000| 8937|2013| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{tksbuchholz2000conll, author = "Tjong Kim Sang, Erik F. and Sabine Buchholz", title = "Introduction to the CoNLL-2000 Shared Task: Chunking", editor = "Claire Cardie and Walter Daelemans and Claire Nedellec and Tjong Kim Sang, Erik", booktitle = "Proceedings of CoNLL-2000 and LLL-2000", publisher = "Lisbon, Portugal", pages = "127--132", year = "2000" } ``` ### Contributions Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset.
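The card above lists the `chunk_tags` labels as a `ClassLabel` with IOB (B-/I-/O) names. A minimal sketch of turning those integer tag ids back into labeled chunks, using the label list from the front matter and the example sentence from the summary (the helper `decode_chunks` is illustrative, not part of the `datasets` API):

```python
# Decode conll2000-style integer chunk tags into labeled spans.
# The label list mirrors the `chunk_tags` ClassLabel names in the card above.
CHUNK_LABELS = [
    "O", "B-ADJP", "I-ADJP", "B-ADVP", "I-ADVP", "B-CONJP", "I-CONJP",
    "B-INTJ", "I-INTJ", "B-LST", "I-LST", "B-NP", "I-NP", "B-PP", "I-PP",
    "B-PRT", "I-PRT", "B-SBAR", "I-SBAR", "B-UCP", "I-UCP", "B-VP", "I-VP",
]

def decode_chunks(tokens, tag_ids):
    """Group tokens into (chunk_type, words) spans from BIO tag ids."""
    chunks, current = [], None
    for token, tag_id in zip(tokens, tag_ids):
        label = CHUNK_LABELS[tag_id]
        if label.startswith("B-"):
            current = (label[2:], [token])
            chunks.append(current)
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)
        else:  # "O", or an I- tag without a matching B-
            current = None
    return [(kind, " ".join(words)) for kind, words in chunks]

# Prefix of the example sentence from the summary above:
tokens = ["He", "reckons", "the", "current", "account", "deficit"]
tag_ids = [11, 21, 11, 12, 12, 12]  # B-NP, B-VP, B-NP, I-NP, I-NP, I-NP
print(decode_chunks(tokens, tag_ids))
# [('NP', 'He'), ('VP', 'reckons'), ('NP', 'the current account deficit')]
```

The same `ClassLabel` mapping is available from the loaded dataset itself via `dataset.features["chunk_tags"].feature.int2str`.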
harem
2023-01-25T14:31:29.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
null
HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts, from several genres and language varieties. The split of this dataset version follows the division made by [1], where 7% of the HAREM documents form the validation set and the miniHAREM corpus (about 65k words) is the test set. There are two versions of the dataset: one with a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event, Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date). It is important to note that the original HAREM dataset has 2 levels of NER detail, namely "Category" and "Sub-type". The dataset version processed here uses ONLY the "Category" level of the original dataset. [1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." Brazilian Conference on Intelligent Systems. Springer, Cham, 2020.
@inproceedings{santos2006harem, title={{HAREM}: An advanced {NER} evaluation contest for {P}ortuguese}, author={Santos, Diana and Seco, Nuno and Cardoso, Nuno and Vilela, Rui}, editor={Calzolari, Nicoletta and Choukri, Khalid and Gangemi, Aldo and Maegaard, Bente and Mariani, Joseph and Odjik, Jan and Tapias, Daniel}, booktitle={Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy, 22-28 May 2006}, year={2006} }
null
5
129
--- annotations_creators: - expert-generated language_creators: - found language: - pt license: - unknown multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: HAREM dataset_info: - config_name: default features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PESSOA '2': I-PESSOA '3': B-ORGANIZACAO '4': I-ORGANIZACAO '5': B-LOCAL '6': I-LOCAL '7': B-TEMPO '8': I-TEMPO '9': B-VALOR '10': I-VALOR '11': B-ABSTRACCAO '12': I-ABSTRACCAO '13': B-ACONTECIMENTO '14': I-ACONTECIMENTO '15': B-COISA '16': I-COISA '17': B-OBRA '18': I-OBRA '19': B-OUTRO '20': I-OUTRO splits: - name: train num_bytes: 1506373 num_examples: 121 - name: test num_bytes: 1062714 num_examples: 128 - name: validation num_bytes: 51318 num_examples: 8 download_size: 1887281 dataset_size: 2620405 - config_name: selective features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PESSOA '2': I-PESSOA '3': B-ORGANIZACAO '4': I-ORGANIZACAO '5': B-LOCAL '6': I-LOCAL '7': B-TEMPO '8': I-TEMPO '9': B-VALOR '10': I-VALOR splits: - name: train num_bytes: 1506373 num_examples: 121 - name: test num_bytes: 1062714 num_examples: 128 - name: validation num_bytes: 51318 num_examples: 8 download_size: 1715873 dataset_size: 2620405 --- # Dataset Card for HAREM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive 
Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [HAREM homepage](https://www.linguateca.pt/primeiroHAREM/harem_coleccaodourada_en.html) - **Repository:** [HAREM repository](https://www.linguateca.pt/primeiroHAREM/harem_coleccaodourada_en.html) - **Paper:** [HAREM: An Advanced NER Evaluation Contest for Portuguese](http://comum.rcaap.pt/bitstream/10400.26/76/1/SantosSecoCardosoVilelaLREC2006.pdf) - **Point of Contact:** [Diana Santos](mailto:diana.santos@sintef.no) ### Dataset Summary The HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts, from several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% HAREM documents are the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset set, a version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event, Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date). It's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely "Category" and "Sub-type". The dataset version processed here ONLY USE the "Category" level of the original dataset. [1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." 
Brazilian Conference on Intelligent Systems. Springer, Cham, 2020. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Portuguese ## Dataset Structure ### Data Instances ``` { "id": "HAREM-871-07800", "ner_tags": [3, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4, ], "tokens": [ "Abraço", "Página", "Principal", "ASSOCIAÇÃO", "DE", "APOIO", "A", "PESSOAS", "COM", "VIH", "/", "SIDA" ] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "O", "B-PESSOA", "I-PESSOA", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-LOCAL", "I-LOCAL", "B-TEMPO", "I-TEMPO", "B-VALOR", "I-VALOR", "B-ABSTRACCAO", "I-ABSTRACCAO", "B-ACONTECIMENTO", "I-ACONTECIMENTO", "B-COISA", "I-COISA", "B-OBRA", "I-OBRA", "B-OUTRO", "I-OUTRO" ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. ### Data Splits The data is split into train, validation and test set for each of the two versions (default and selective). The split sizes are as follow: | Train | Val | Test | | ------ | ----- | ---- | | 121 | 8 | 128 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{santos2006harem, title={Harem: An advanced ner evaluation contest for portuguese}, author={Santos, Diana and Seco, Nuno and Cardoso, Nuno and Vilela, Rui}, booktitle={quot; In Nicoletta Calzolari; Khalid Choukri; Aldo Gangemi; Bente Maegaard; Joseph Mariani; Jan Odjik; Daniel Tapias (ed) Proceedings of the 5 th International Conference on Language Resources and Evaluation (LREC'2006)(Genoa Italy 22-28 May 2006)}, year={2006} } ``` ### Contributions Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
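The card above documents a 21-label "default" scheme and an 11-label "selective" scheme that shares its first 11 labels. A hedged sketch of projecting default-config tag ids onto the selective scheme; whether the official `selective` config is derived exactly this way (extra categories mapped to `O`) is an assumption, and the label lists come from the card's front matter:

```python
# Project HAREM "default" tag ids onto the 5-class "selective" scheme by
# keeping PESSOA/ORGANIZACAO/LOCAL/TEMPO/VALOR and mapping the rest to O.
DEFAULT = [
    "O", "B-PESSOA", "I-PESSOA", "B-ORGANIZACAO", "I-ORGANIZACAO",
    "B-LOCAL", "I-LOCAL", "B-TEMPO", "I-TEMPO", "B-VALOR", "I-VALOR",
    "B-ABSTRACCAO", "I-ABSTRACCAO", "B-ACONTECIMENTO", "I-ACONTECIMENTO",
    "B-COISA", "I-COISA", "B-OBRA", "I-OBRA", "B-OUTRO", "I-OUTRO",
]
SELECTIVE = DEFAULT[:11]  # the first 11 labels appear in both configs

def to_selective(tag_ids):
    """Map default-config tag ids to selective-config ids (others -> O)."""
    out = []
    for tid in tag_ids:
        label = DEFAULT[tid]
        out.append(SELECTIVE.index(label) if label in SELECTIVE else 0)
    return out

print(to_selective([1, 2, 11, 12, 5]))  # [1, 2, 0, 0, 5]
```

In practice the two schemes can also be loaded directly as `load_dataset("harem")` and `load_dataset("harem", "selective")`.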
tomekkorbak/detoxify-pile-chunk3-1800000-1850000
2022-10-05T00:01:14.000Z
[ "region:us" ]
tomekkorbak
null
null
null
0
129
Entry not found
howard-hou/COCO-Text
2023-05-12T05:22:01.000Z
[ "region:us" ]
howard-hou
null
null
null
0
129
--- dataset_info: features: - name: image dtype: image - name: coco_file_name dtype: string - name: image_id dtype: string - name: caption sequence: string - name: ocr_tokens sequence: string - name: ocr_info list: - name: word dtype: string - name: bounding_box struct: - name: width dtype: float64 - name: height dtype: float64 - name: top_left_x dtype: float64 - name: top_left_y dtype: float64 - name: image_width dtype: int64 - name: image_height dtype: int64 splits: - name: train num_bytes: 2230879987.67 num_examples: 13097 - name: validation num_bytes: 526583286.88 num_examples: 3074 download_size: 259904361 dataset_size: 2757463274.55 --- # Dataset Card for "COCO-Text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
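The `ocr_info` feature above stores each box as a top-left corner plus width/height. A minimal sketch of converting that struct into absolute corner coordinates, optionally normalized by the image size; the helper and the sample values are illustrative, not part of the dataset itself:

```python
# Convert a COCO-Text-style `bounding_box` struct (top_left_x, top_left_y,
# width, height) into (x0, y0, x1, y1) corners, optionally scaled to [0, 1].
def box_corners(bbox, image_width, image_height, normalize=False):
    x0, y0 = bbox["top_left_x"], bbox["top_left_y"]
    x1, y1 = x0 + bbox["width"], y0 + bbox["height"]
    if normalize:
        return (x0 / image_width, y0 / image_height,
                x1 / image_width, y1 / image_height)
    return (x0, y0, x1, y1)

# Made-up sample values for illustration:
bbox = {"top_left_x": 10.0, "top_left_y": 20.0, "width": 30.0, "height": 40.0}
print(box_corners(bbox, 640, 480))        # (10.0, 20.0, 40.0, 60.0)
print(box_corners(bbox, 640, 480, True))  # corners normalized to [0, 1]
```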
ehartford/samantha-data
2023-08-20T18:13:35.000Z
[ "license:apache-2.0", "region:us" ]
ehartford
null
null
null
62
129
--- license: apache-2.0 --- # samantha-data [Meet Samantha](https://erichartford.com/meet-samantha) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". ## Installation ``` yarn install ``` ## Usage 1. Create a `.env` file in the root directory of the project and add the following: ``` OPENAI_API_KEY=<your api key> ``` 2. Run the application ``` npx ts-node --files src/index.ts --subject random --out data/random_conversations.jsonl ``` the subjects I provided include: - random - advice - cot - flirty - howto - joke - math - philosophy - foundational - recipe - therapy - troll you can easily add your own in src/index.ts ## Scale The application can be scaled by running multiple instances of the application in parallel. I recommend outputting to a different file for each instance, to prevent collision. I usually have one for each subject, about 5 or 6 instances at a time.
nicholasKluge/instruct-aira-dataset
2023-09-06T14:36:38.000Z
[ "task_categories:conversational", "task_categories:text-generation", "size_categories:10K<n<100K", "language:pt", "language:en", "license:apache-2.0", "alignment", "instruction", "chat", "region:us" ]
nicholasKluge
null
null
null
2
129
--- license: apache-2.0 task_categories: - conversational - text-generation language: - pt - en tags: - alignment - instruction - chat pretty_name: Instruct-Aira Dataset size_categories: - 10K<n<100K dataset_info: features: - name: prompt dtype: string - name: completion dtype: string splits: - name: portuguese num_bytes: 53113297 num_examples: 41815 - name: english num_bytes: 47263211 num_examples: 41815 download_size: 56235150 dataset_size: 100376508 --- # Dataset (`Instruct-Aira Dataset`) ### Overview This dataset contains a collection of demonstrations on how to answer questions and follow instructions. We used prompts from the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks_dolly_15k`](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset, and the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset, to create an instruction-tuning dataset, where the completions were generated by already tuned models (ChatGPT, LLama 2, Open-Assistant, etc). The dataset is available in both Portuguese and English. ### Dataset Details - **Dataset Name:** Instruct-Aira Dataset - **Language:** Portuguese and English - **Total Size:** Over 41,000 demonstrations ### Contents The dataset consists of data frames with the following columns: - **Prompt:** The initial text or question provided to the model. - **Completion:** The demonstration of a generated completion or response for the given prompt. ```python { "prompt":"What is the capital of Brazil?", "completion": "The capital of Brazil is Brasília." } ``` All `prompt + completion` examples are less than 400 tokens (measured using the `GPT-2` and `BLOOM` tokenizers). ### Use Cases `Instruct-Aira Dataset` can be utilized for various natural language processing tasks, including but not limited to: - Language generation. - Question-answering systems. - Chatbot development. 
- Evaluation of language models. - AI ethics research. - Alignment research. ## How to use Available splits are `english` and `portuguese`. ```python from datasets import load_dataset dataset = load_dataset("nicholasKluge/instruct-aira-dataset") ``` ### Dataset License The `Instruct-Aira Dataset` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details. ### Disclaimer This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use.
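The card states that every `prompt + completion` pair is under 400 tokens as measured with the GPT-2 and BLOOM tokenizers. A rough sketch of that constraint as a filter predicate; the whitespace split below is a stand-in for those tokenizers, so it is only an approximation:

```python
# Approximate the card's 400-token budget with whitespace-delimited words.
# The authors measured length with GPT-2/BLOOM tokenizers; this is a proxy.
MAX_TOKENS = 400

def within_limit(example, limit=MAX_TOKENS):
    text = example["prompt"] + " " + example["completion"]
    return len(text.split()) <= limit

example = {
    "prompt": "What is the capital of Brazil?",
    "completion": "The capital of Brazil is Brasília.",
}
print(within_limit(example))  # True
```

A predicate like this can be applied to a loaded split with `dataset.filter(within_limit)`.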
spacemanidol/product-search-corpus
2023-08-11T17:15:55.000Z
[ "region:us" ]
spacemanidol
null
null
0
129
Entry not found
pib
2023-06-01T14:59:57.000Z
[ "task_categories:translation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:bn", "language:en", "language:gu", "language:hi", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "language:ur", "license:cc-by-4.0", "arxiv:2008.04860", "region:us" ]
null
Sentence-aligned parallel corpus between 11 Indian languages, crawled and extracted from the Press Information Bureau website.
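The configs listed in the dataset card name each language pair with the two ISO codes in alphabetical order (e.g. `bn-en`, `hi-ta`), and each row holds a `translation` dict keyed by language code. A hedged sketch of building a config name from a pair of codes; the naming rule is inferred from the config list, not from an official API:

```python
# Build a CVIT PIB config name ("<code1>-<code2>", alphabetical order)
# from two of the corpus's 11 language codes.
LANGS = {"bn", "en", "gu", "hi", "ml", "mr", "or", "pa", "ta", "te", "ur"}

def config_name(src, tgt):
    if src not in LANGS or tgt not in LANGS or src == tgt:
        raise ValueError(f"unsupported pair: {src}-{tgt}")
    return "-".join(sorted((src, tgt)))

print(config_name("ta", "en"))  # prints "en-ta"

# Shape of one example in that config (text invented for illustration):
example = {"translation": {"en": "Hello", "ta": "வணக்கம்"}}
```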
@inproceedings{siripragada-etal-2020-multilingual, title = "A Multilingual Parallel Corpora Collection Effort for {I}ndian Languages", author = "Siripragada, Shashank and Philip, Jerin and Namboodiri, Vinay P. and Jawahar, C V", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.462", pages = "3743--3751", language = "English", ISBN = "979-10-95546-34-4", } @article{2020, title={Revisiting Low Resource Status of Indian Languages in Machine Translation}, url={http://dx.doi.org/10.1145/3430984.3431026}, DOI={10.1145/3430984.3431026}, journal={8th ACM IKDD CODS and 26th COMAD}, publisher={ACM}, author={Philip, Jerin and Siripragada, Shashank and Namboodiri, Vinay P. and Jawahar, C. V.}, year={2020}, month={Dec} }
null
3
128
--- task_categories: - translation - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling multilinguality: - translation language: - bn - en - gu - hi - ml - mr - or - pa - ta - te - ur language_creators: - other annotations_creators: - no-annotation source_datasets: - original size_categories: - 100K<n<1M - 10K<n<100K license: - cc-by-4.0 paperswithcode_id: null pretty_name: CVIT PIB dataset_info: - config_name: or-ur features: - name: translation dtype: translation: languages: - or - ur splits: - name: train num_bytes: 27790211 num_examples: 43766 download_size: 393352875 dataset_size: 27790211 - config_name: ml-or features: - name: translation dtype: translation: languages: - ml - or splits: - name: train num_bytes: 16011549 num_examples: 19413 download_size: 393352875 dataset_size: 16011549 - config_name: bn-ta features: - name: translation dtype: translation: languages: - bn - ta splits: - name: train num_bytes: 28706668 num_examples: 33005 download_size: 393352875 dataset_size: 28706668 - config_name: gu-mr features: - name: translation dtype: translation: languages: - gu - mr splits: - name: train num_bytes: 24253770 num_examples: 30766 download_size: 393352875 dataset_size: 24253770 - config_name: hi-or features: - name: translation dtype: translation: languages: - hi - or splits: - name: train num_bytes: 45086618 num_examples: 61070 download_size: 393352875 dataset_size: 45086618 - config_name: en-or features: - name: translation dtype: translation: languages: - en - or splits: - name: train num_bytes: 51258494 num_examples: 98230 download_size: 393352875 dataset_size: 51258494 - config_name: mr-ur features: - name: translation dtype: translation: languages: - mr - ur splits: - name: train num_bytes: 34053295 num_examples: 49691 download_size: 393352875 dataset_size: 34053295 - config_name: en-ta features: - name: translation dtype: translation: languages: - en - ta splits: - name: train num_bytes: 74931542 num_examples: 
118759 download_size: 393352875 dataset_size: 74931542 - config_name: hi-ta features: - name: translation dtype: translation: languages: - hi - ta splits: - name: train num_bytes: 57628429 num_examples: 64945 download_size: 393352875 dataset_size: 57628429 - config_name: bn-en features: - name: translation dtype: translation: languages: - bn - en splits: - name: train num_bytes: 53291968 num_examples: 93560 download_size: 393352875 dataset_size: 53291968 - config_name: bn-or features: - name: translation dtype: translation: languages: - bn - or splits: - name: train num_bytes: 19819136 num_examples: 26456 download_size: 393352875 dataset_size: 19819136 - config_name: ml-ta features: - name: translation dtype: translation: languages: - ml - ta splits: - name: train num_bytes: 21685938 num_examples: 23609 download_size: 393352875 dataset_size: 21685938 - config_name: gu-ur features: - name: translation dtype: translation: languages: - gu - ur splits: - name: train num_bytes: 20312414 num_examples: 29938 download_size: 393352875 dataset_size: 20312414 - config_name: bn-ml features: - name: translation dtype: translation: languages: - bn - ml splits: - name: train num_bytes: 15545271 num_examples: 18149 download_size: 393352875 dataset_size: 15545271 - config_name: ml-pa features: - name: translation dtype: translation: languages: - ml - pa splits: - name: train num_bytes: 18114904 num_examples: 21978 download_size: 393352875 dataset_size: 18114904 - config_name: en-pa features: - name: translation dtype: translation: languages: - en - pa splits: - name: train num_bytes: 56316514 num_examples: 103296 download_size: 393352875 dataset_size: 56316514 - config_name: bn-hi features: - name: translation dtype: translation: languages: - bn - hi splits: - name: train num_bytes: 40970170 num_examples: 49598 download_size: 393352875 dataset_size: 40970170 - config_name: hi-pa features: - name: translation dtype: translation: languages: - hi - pa splits: - name: train num_bytes: 
59293062 num_examples: 75200 download_size: 393352875 dataset_size: 59293062 - config_name: gu-te features: - name: translation dtype: translation: languages: - gu - te splits: - name: train num_bytes: 14517828 num_examples: 16335 download_size: 393352875 dataset_size: 14517828 - config_name: pa-ta features: - name: translation dtype: translation: languages: - pa - ta splits: - name: train num_bytes: 39144065 num_examples: 46349 download_size: 393352875 dataset_size: 39144065 - config_name: hi-ml features: - name: translation dtype: translation: languages: - hi - ml splits: - name: train num_bytes: 24015298 num_examples: 27167 download_size: 393352875 dataset_size: 24015298 - config_name: or-te features: - name: translation dtype: translation: languages: - or - te splits: - name: train num_bytes: 9011734 num_examples: 10475 download_size: 393352875 dataset_size: 9011734 - config_name: en-ml features: - name: translation dtype: translation: languages: - en - ml splits: - name: train num_bytes: 27754969 num_examples: 44986 download_size: 393352875 dataset_size: 27754969 - config_name: en-hi features: - name: translation dtype: translation: languages: - en - hi splits: - name: train num_bytes: 160009440 num_examples: 269594 download_size: 393352875 dataset_size: 160009440 - config_name: bn-pa features: - name: translation dtype: translation: languages: - bn - pa splits: - name: train num_bytes: 27522373 num_examples: 35109 download_size: 393352875 dataset_size: 27522373 - config_name: mr-te features: - name: translation dtype: translation: languages: - mr - te splits: - name: train num_bytes: 16838115 num_examples: 18179 download_size: 393352875 dataset_size: 16838115 - config_name: mr-pa features: - name: translation dtype: translation: languages: - mr - pa splits: - name: train num_bytes: 38720410 num_examples: 50418 download_size: 393352875 dataset_size: 38720410 - config_name: bn-te features: - name: translation dtype: translation: languages: - bn - te splits: - 
name: train num_bytes: 15529843 num_examples: 17605 download_size: 393352875 dataset_size: 15529843 - config_name: gu-hi features: - name: translation dtype: translation: languages: - gu - hi splits: - name: train num_bytes: 33606230 num_examples: 41587 download_size: 393352875 dataset_size: 33606230 - config_name: ta-ur features: - name: translation dtype: translation: languages: - ta - ur splits: - name: train num_bytes: 37593813 num_examples: 48892 download_size: 393352875 dataset_size: 37593813 - config_name: te-ur features: - name: translation dtype: translation: languages: - te - ur splits: - name: train num_bytes: 16485209 num_examples: 21148 download_size: 393352875 dataset_size: 16485209 - config_name: or-pa features: - name: translation dtype: translation: languages: - or - pa splits: - name: train num_bytes: 30081903 num_examples: 43159 download_size: 393352875 dataset_size: 30081903 - config_name: gu-ml features: - name: translation dtype: translation: languages: - gu - ml splits: - name: train num_bytes: 15749821 num_examples: 18252 download_size: 393352875 dataset_size: 15749821 - config_name: gu-pa features: - name: translation dtype: translation: languages: - gu - pa splits: - name: train num_bytes: 27441041 num_examples: 35566 download_size: 393352875 dataset_size: 27441041 - config_name: hi-te features: - name: translation dtype: translation: languages: - hi - te splits: - name: train num_bytes: 26473814 num_examples: 28569 download_size: 393352875 dataset_size: 26473814 - config_name: en-te features: - name: translation dtype: translation: languages: - en - te splits: - name: train num_bytes: 28620219 num_examples: 44888 download_size: 393352875 dataset_size: 28620219 - config_name: ml-te features: - name: translation dtype: translation: languages: - ml - te splits: - name: train num_bytes: 9690153 num_examples: 10480 download_size: 393352875 dataset_size: 9690153 - config_name: pa-ur features: - name: translation dtype: translation: languages: - 
pa - ur splits: - name: train num_bytes: 34959176 num_examples: 51831 download_size: 393352875 dataset_size: 34959176 - config_name: hi-ur features: - name: translation dtype: translation: languages: - hi - ur splits: - name: train num_bytes: 81262590 num_examples: 109951 download_size: 393352875 dataset_size: 81262590 - config_name: mr-or features: - name: translation dtype: translation: languages: - mr - or splits: - name: train num_bytes: 33998805 num_examples: 47001 download_size: 393352875 dataset_size: 33998805 - config_name: en-ur features: - name: translation dtype: translation: languages: - en - ur splits: - name: train num_bytes: 100571795 num_examples: 202578 download_size: 393352875 dataset_size: 100571795 - config_name: ml-ur features: - name: translation dtype: translation: languages: - ml - ur splits: - name: train num_bytes: 15663718 num_examples: 20913 download_size: 393352875 dataset_size: 15663718 - config_name: bn-mr features: - name: translation dtype: translation: languages: - bn - mr splits: - name: train num_bytes: 27604502 num_examples: 34043 download_size: 393352875 dataset_size: 27604502 - config_name: gu-ta features: - name: translation dtype: translation: languages: - gu - ta splits: - name: train num_bytes: 25089131 num_examples: 29187 download_size: 393352875 dataset_size: 25089131 - config_name: pa-te features: - name: translation dtype: translation: languages: - pa - te splits: - name: train num_bytes: 23119690 num_examples: 25684 download_size: 393352875 dataset_size: 23119690 - config_name: bn-gu features: - name: translation dtype: translation: languages: - bn - gu splits: - name: train num_bytes: 19899277 num_examples: 25166 download_size: 393352875 dataset_size: 19899277 - config_name: bn-ur features: - name: translation dtype: translation: languages: - bn - ur splits: - name: train num_bytes: 27540215 num_examples: 39290 download_size: 393352875 dataset_size: 27540215 - config_name: ml-mr features: - name: translation dtype: 
translation: languages: - ml - mr splits: - name: train num_bytes: 19723458 num_examples: 22796 download_size: 393352875 dataset_size: 19723458 - config_name: or-ta features: - name: translation dtype: translation: languages: - or - ta splits: - name: train num_bytes: 35357904 num_examples: 44035 download_size: 393352875 dataset_size: 35357904 - config_name: ta-te features: - name: translation dtype: translation: languages: - ta - te splits: - name: train num_bytes: 17415768 num_examples: 17359 download_size: 393352875 dataset_size: 17415768 - config_name: gu-or features: - name: translation dtype: translation: languages: - gu - or splits: - name: train num_bytes: 20111876 num_examples: 27162 download_size: 393352875 dataset_size: 20111876 - config_name: en-gu features: - name: translation dtype: translation: languages: - en - gu splits: - name: train num_bytes: 33630906 num_examples: 59739 download_size: 393352875 dataset_size: 33630906 - config_name: hi-mr features: - name: translation dtype: translation: languages: - hi - mr splits: - name: train num_bytes: 55680473 num_examples: 69186 download_size: 393352875 dataset_size: 55680473 - config_name: mr-ta features: - name: translation dtype: translation: languages: - mr - ta splits: - name: train num_bytes: 41585343 num_examples: 48535 download_size: 393352875 dataset_size: 41585343 - config_name: en-mr features: - name: translation dtype: translation: languages: - en - mr splits: - name: train num_bytes: 65042597 num_examples: 117199 download_size: 393352875 dataset_size: 65042597 config_names: - bn-en - bn-gu - bn-hi - bn-ml - bn-mr - bn-or - bn-pa - bn-ta - bn-te - bn-ur - en-gu - en-hi - en-ml - en-mr - en-or - en-pa - en-ta - en-te - en-ur - gu-hi - gu-ml - gu-mr - gu-or - gu-pa - gu-ta - gu-te - gu-ur - hi-ml - hi-mr - hi-or - hi-pa - hi-ta - hi-te - hi-ur - ml-mr - ml-or - ml-pa - ml-ta - ml-te - ml-ur - mr-or - mr-pa - mr-ta - mr-te - mr-ur - or-pa - or-ta - or-te - or-ur - pa-ta - pa-te - pa-ur - ta-te - 
ta-ur - te-ur --- # Dataset Card for CVIT PIB ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://preon.iiit.ac.in/~jerin/bhasha/ - **Paper:** https://arxiv.org/abs/2008.04860 - **Point of Contact:** [Mailing List](cvit-bhasha@googlegroups.com) ### Dataset Summary This is a large-scale sentence-aligned corpus in 11 Indian languages, viz. the CVIT-PIB corpus, which is the largest multilingual parallel corpus available for Indian languages. ### Supported Tasks and Leaderboards - Machine Translation ### Languages Parallel data for the following languages is covered: [en, bn, gu, hi, ml, mr, pa, or, ta, te, ur].
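Each config pairs two of these languages, and every record carries a single `translation` dict keyed by the two language codes. A minimal sketch of unpacking such records into source-target pairs for MT preprocessing; the sample record below is an illustrative placeholder, not a real corpus entry:

```python
# Flatten `translation`-style records into (source, target) string pairs.
# The sample record mimics the documented structure; its text is a
# placeholder, not an actual corpus sentence.
def to_pairs(records, src, tgt):
    return [(r["translation"][src], r["translation"][tgt]) for r in records]

sample = [{"translation": {"gu": "<Gujarati sentence>", "pa": "<Punjabi sentence>"}}]
pairs = to_pairs(sample, src="gu", tgt="pa")
print(pairs)  # [('<Gujarati sentence>', '<Punjabi sentence>')]
```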
## Dataset Structure ### Data Instances An example for the "gu-pa" language pair: ``` { 'translation': { 'gu': 'એવો નિર્ણય લેવાયો હતો કે ખંતપૂર્વકની કામગીરી હાથ ધરવા, કાયદેસર અને ટેકનિકલ મૂલ્યાંકન કરવા, વેન્ચર કેપિટલ ઇન્વેસ્ટમેન્ટ સમિતિની બેઠક યોજવા વગેરે એઆઇએફને કરવામાં આવેલ પ્રતિબદ્ધતાના 0.50 ટકા સુધી અને બાકીની રકમ એફએફએસને પૂર્ણ કરવામાં આવશે.', 'pa': 'ਇਹ ਵੀ ਫੈਸਲਾ ਕੀਤਾ ਗਿਆ ਕਿ ਐੱਫਆਈਆਈ ਅਤੇ ਬਕਾਏ ਲਈ ਕੀਤੀਆਂ ਗਈਆਂ ਵਚਨਬੱਧਤਾਵਾਂ ਦੇ 0.50 % ਦੀ ਸੀਮਾ ਤੱਕ ਐੱਫਈਐੱਸ ਨੂੰ ਮਿਲਿਆ ਜਾਏਗਾ, ਇਸ ਨਾਲ ਉੱਦਮ ਪੂੰਜੀ ਨਿਵੇਸ਼ ਕਮੇਟੀ ਦੀ ਬੈਠਕ ਦਾ ਆਯੋਜਨ ਉਚਿਤ ਸਾਵਧਾਨੀ, ਕਾਨੂੰਨੀ ਅਤੇ ਤਕਨੀਕੀ ਮੁੱਲਾਂਕਣ ਲਈ ਸੰਚਾਲਨ ਖਰਚ ਆਦਿ ਦੀ ਪੂਰਤੀ ਹੋਵੇਗੀ।' } } ``` ### Data Fields - `translation`: Translation field containing the parallel text for the pair of languages. ### Data Splits The dataset is in a single "train" split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license. ### Citation Information ``` @inproceedings{siripragada-etal-2020-multilingual, title = "A Multilingual Parallel Corpora Collection Effort for {I}ndian Languages", author = "Siripragada, Shashank and Philip, Jerin and Namboodiri, Vinay P. 
and Jawahar, C V", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.462", pages = "3743--3751", language = "English", ISBN = "979-10-95546-34-4", } @article{2020, title={Revisiting Low Resource Status of Indian Languages in Machine Translation}, url={http://dx.doi.org/10.1145/3430984.3431026}, DOI={10.1145/3430984.3431026}, journal={8th ACM IKDD CODS and 26th COMAD}, publisher={ACM}, author={Philip, Jerin and Siripragada, Shashank and Namboodiri, Vinay P. and Jawahar, C. V.}, year={2020}, month={Dec} } ``` ### Contributions Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset, and [@albertvillanova](https://github.com/albertvillanova) for updating its version.
bigbio/medmentions
2022-12-22T15:45:34.000Z
[ "multilinguality:monolingual", "language:en", "license:cc0-1.0", "arxiv:1902.09476", "region:us" ]
bigbio
MedMentions is a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. Corpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected from among papers released on PubMed in 2016, that were in the biomedical field, published in the English language, and had both a Title and an Abstract. Annotators: We recruited a team of professional annotators with rich experience in biomedical content curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in these papers. Annotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain insight on the annotation quality of MedMentions, we randomly selected eight papers from the annotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not participate in the annotation task then each reviewed four papers. The agreement between Reviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%.
@misc{mohan2019medmentions, title={MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts}, author={Sunil Mohan and Donghui Li}, year={2019}, eprint={1902.09476}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
2
128
--- language: - en bigbio_language: - English license: cc0-1.0 multilinguality: monolingual bigbio_license_shortname: CC0_1p0 pretty_name: MedMentions homepage: https://github.com/chanzuckerberg/MedMentions bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_DISAMBIGUATION - NAMED_ENTITY_RECOGNITION --- # Dataset Card for MedMentions ## Dataset Description - **Homepage:** https://github.com/chanzuckerberg/MedMentions - **Pubmed:** True - **Public:** True - **Tasks:** NED,NER MedMentions is a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. Corpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected from among papers released on PubMed in 2016, that were in the biomedical field, published in the English language, and had both a Title and an Abstract. Annotators: We recruited a team of professional annotators with rich experience in biomedical content curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in these papers. Annotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain insight on the annotation quality of MedMentions, we randomly selected eight papers from the annotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not participate in the annotation task then each reviewed four papers. The agreement between Reviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%. 
## Citation Information ``` @misc{mohan2019medmentions, title={MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts}, author={Sunil Mohan and Donghui Li}, year={2019}, eprint={1902.09476}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
diffusers-parti-prompts/wuerstchen
2023-09-13T17:08:21.000Z
[ "region:us" ]
diffusers-parti-prompts
null
null
null
0
128
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: Prompt dtype: string - name: Category dtype: string - name: Challenge dtype: string - name: Note dtype: string - name: images dtype: image - name: model_name dtype: string - name: seed dtype: int64 splits: - name: train num_bytes: 149898953.312 num_examples: 1632 download_size: 150261013 dataset_size: 149898953.312 --- # Dataset Card for "wuerstchen" Dataset was generated using the code below: ```py import torch from datasets import Dataset, Features from datasets import Image as ImageFeature from datasets import Value, load_dataset from diffusers import AutoPipelineForText2Image import PIL def main(): print("Loading dataset...") parti_prompts = load_dataset("nateraw/parti-prompts", split="train") print("Loading pipeline...") seed = 0 device = "cuda" generator = torch.Generator(device).manual_seed(seed) dtype = torch.float16 ckpt_id = "warp-diffusion/wuerstchen" pipeline = AutoPipelineForText2Image.from_pretrained( ckpt_id, torch_dtype=dtype ).to(device) pipeline.prior_prior = torch.compile(pipeline.prior_prior, mode="reduce-overhead", fullgraph=True) pipeline.decoder = torch.compile(pipeline.decoder, mode="reduce-overhead", fullgraph=True) print("Running inference...") main_dict = {} for i in range(len(parti_prompts)): sample = parti_prompts[i] prompt = sample["Prompt"] image = pipeline( prompt=prompt, height=1024, width=1024, prior_guidance_scale=4.0, decoder_guidance_scale=0.0, generator=generator, ).images[0] image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS) img_path = f"wuerstchen_{i}.png" image.save(img_path) main_dict.update( { prompt: { "img_path": img_path, "Category": sample["Category"], "Challenge": sample["Challenge"], "Note": sample["Note"], "model_name": ckpt_id, "seed": seed, } } ) def generation_fn(): for prompt in main_dict: prompt_entry = main_dict[prompt] yield { "Prompt": prompt, "Category": 
prompt_entry["Category"], "Challenge": prompt_entry["Challenge"], "Note": prompt_entry["Note"], "images": {"path": prompt_entry["img_path"]}, "model_name": prompt_entry["model_name"], "seed": prompt_entry["seed"], } print("Preparing HF dataset...") ds = Dataset.from_generator( generation_fn, features=Features( Prompt=Value("string"), Category=Value("string"), Challenge=Value("string"), Note=Value("string"), images=ImageFeature(), model_name=Value("string"), seed=Value("int64"), ), ) ds_id = "diffusers-parti-prompts/wuerstchen" ds.push_to_hub(ds_id) if __name__ == "__main__": main() ```
tarteel-ai/quranqa
2022-07-27T02:28:31.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:cc-by-nd-4.0", "quran", "qa", "region:us" ]
tarteel-ai
The absence of publicly available reusable test collections for Arabic question answering on the Holy Qur’an has impeded the possibility of fairly comparing the performance of systems in that domain. In this article, we introduce AyaTEC, a reusable test collection for verse-based question answering on the Holy Qur’an, which serves as a common experimental testbed for this task. AyaTEC includes 207 questions (with their corresponding 1,762 answers) covering 11 topic categories of the Holy Qur’an that target the information needs of both curious and skeptical users. To the best of our effort, the answers to the questions (each represented as a sequence of verses) in AyaTEC were exhaustive—that is, all qur’anic verses that directly answered the questions were exhaustively extracted and annotated. To facilitate the use of AyaTEC in evaluating the systems designed for that task, we propose several evaluation measures to support the different types of questions and the nature of verse-based answers while integrating the concept of partial matching of answers in the evaluation.
@article{malhas2020ayatec, author = {Malhas, Rana and Elsayed, Tamer}, title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an}, year = {2020}, issue_date = {November 2020}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {19}, number = {6}, issn = {2375-4699}, url = {https://doi.org/10.1145/3400396}, doi = {10.1145/3400396}, journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.}, month = {oct}, articleno = {78}, numpages = {21}, keywords = {evaluation, Classical Arabic} }
null
5
127
--- annotations_creators: - expert-generated language: - ar language_creators: - expert-generated license: - cc-by-nd-4.0 multilinguality: - monolingual pretty_name: Qur'anic Reading Comprehension Dataset size_categories: - n<1K - 1K<n<10K source_datasets: - original tags: - quran - qa task_categories: - question-answering task_ids: - extractive-qa --- # Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://sites.google.com/view/quran-qa-2022/home - **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/ - **Paper:** https://dl.acm.org/doi/10.1145/3400396 - **Leaderboard:** - **Point of Contact:** @piraka9011 ### Dataset Summary The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. 
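The shared task's official metric, partial Reciprocal Rank (pRR, described under Supported Tasks below), rewards the highest-ranked prediction that at least partially matches a gold answer. A rough sketch of the idea, using token-overlap F1 as the partial-match score; the official scorer's exact definition may differ:

```python
# Sketch of partial Reciprocal Rank (pRR): like Reciprocal Rank, but the
# first predicted answer that partially matches a gold answer contributes
# match_score / rank instead of 1 / rank. Token-overlap F1 is assumed here
# as the partial-match score.
def f1(pred, gold):
    p, g = pred.split(), gold.split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def prr(ranked_preds, golds):
    for rank, pred in enumerate(ranked_preds, start=1):
        score = max(f1(pred, g) for g in golds)
        if score > 0:
            return score / rank
    return 0.0

# An exact match at rank 1 scores 1.0; a partial match at rank 2 scores F1 / 2.
print(prr(["the prophet Ayyub"], ["the prophet Ayyub"]))  # 1.0
```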
### Supported Tasks and Leaderboards This task is evaluated as a ranking task. To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully match one of the gold answers but partially matches it, we use the partial Reciprocal Rank (pRR) measure. It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching. pRR is the official evaluation measure of this shared task. We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only to the top predicted answer. The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the gold answers. The F1@1 metric, in contrast, measures the token overlap between the top predicted answer and the best matching gold answer. To get an overall evaluation score, each of the above measures is averaged over all questions. ### Languages Qur'anic Arabic ## Dataset Structure ### Data Instances To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain one or more answers to that question, as shown below: ```json { "pq_id": "38:41-44_105", "passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.", "surah": 38, "verses": "41-44", "question": "من هو النبي المعروف بالصبر؟", "answers": [ { "text": "أيوب", "start_char": 12 } ] } ``` Each Qur’anic passage in QRCD may have more than one occurrence, and each passage occurrence is paired with a different question. Likewise, each question in QRCD may have more than one occurrence, and each question occurrence is paired with a different Qur’anic passage. The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2. ### Data Fields * `pq_id`: Sample ID * `passage`: Context text * `surah`: Surah number * `verses`: Verse range * `question`: Question text * `answers`: List of answers and their start character ### Data Splits | **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** | |-------------|:-----:|:-----------------------------:|:---------------------------------------:| | Training | 65% | 710 | 861 | | Development | 10% | 109 | 128 | | Test | 25% | 274 | 348 | | All | 100% | 1,093 | 1,337 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/ ### Citation Information ``` @article{malhas2020ayatec, author = {Malhas, Rana and Elsayed, Tamer}, title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an}, year = {2020}, issue_date = {November 2020}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {19}, number = {6}, issn = {2375-4699}, url = 
{https://doi.org/10.1145/3400396}, doi = {10.1145/3400396}, journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.}, month = {oct}, articleno = {78}, numpages = {21}, keywords = {evaluation, Classical Arabic} } ``` ### Contributions Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
ywchoi/pubmed_abstract_2
2022-09-13T00:58:59.000Z
[ "region:us" ]
ywchoi
null
null
null
0
127
Entry not found
Norod78/cartoon-blip-captions
2022-11-09T16:27:57.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
Norod78
null
null
null
4
127
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 190959102.953 num_examples: 3141 download_size: 190279356 dataset_size: 190959102.953 pretty_name: 'Cartoon BLIP captions' size_categories: - n<1K tags: [] task_categories: - text-to-image license: cc-by-nc-sa-4.0 annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual --- # Dataset Card for "cartoon-blip-captions"
dominguesm/alpaca-data-pt-br
2023-04-01T12:00:07.000Z
[ "task_categories:text-generation", "language:pt", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
dominguesm
null
null
null
9
127
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 18324606 num_examples: 51759 download_size: 11508709 dataset_size: 18324606 license: cc-by-4.0 language: - pt tags: - instruction-finetuning pretty_name: Alpaca-Cleaned-PTBR task_categories: - text-generation --- **NOTE:** This is a machine-translated version of the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset. # Dataset Card for Alpaca-Cleaned - **Repository:** https://github.com/gururise/AlpacaDataCleaned ## Dataset Description This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset: 1. **Hallucinations:** Many instructions in the original dataset referenced data on the internet, which just caused GPT3 to hallucinate an answer. ``` "instruction":"Summarize the given article in 200 Words.", "input": "https://www.bbc.com/news/world-51461830", "output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
"instruction": "Classify the following shape according to simple geometry.", "input": "<Picture Attached>", "output": "Triangle" "instruction": "Given a recent news article, write a headline for it.", "input": "https://www.news.example.com/article-1", "output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\"" ``` 2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason. ``` "instruction": "Should I invest in stocks?", "input": "", "output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple" ``` 3. **Empty outputs:** Some entries in the original dataset had empty outputs. ``` "instruction": "Calculate the area of the following shape in square centimeters.", "input": "rectangle of size 4 cm x 5 cm", "output": "" ``` 4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code. 5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible. ``` "instruction": "Create a graphic or logo that visually represents the word \"courage\".", "input": "", "output": "<No Output>" ``` 6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs. 7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty. ``` "input":"<no input>" "input":"No input" "input":"noinput" "input":"<noinput>" ``` 8. 
**Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers. ``` "instruction": "Calculate the median of the following data set.", "input": "1, 2, 4, 5, 8, 9", "output": "5" "instruction": "Convert 25m to km.", "input": "", "output": "25km" ``` 9. **Non-Sensical/Unclear instructions:** Many instructions are unclear; we try to clarify (or re-write) those that are nonsensical. Instructions that are slightly unclear, but whose meaning can be deduced, are not altered. ``` "instruction": "Freeze the following sample of yogurt for 10 minutes.", "input": "Yogurt sample", "output": "<noinput>" "instruction": "Increase the font size to 12 points.", "input": "", "output": "The font size has been increased to 12 points." ``` 10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters. ### Original Alpaca Dataset Summary Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make language models follow instructions better. The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications: - The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`. - A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`. - Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation. - The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct. This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500). In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl). ### Supported Tasks and Leaderboards The Alpaca dataset is designed for instruction tuning of pretrained language models. ### Languages The data in Alpaca are in English (BCP-47 en). ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "instruction": "Create a classification task by clustering the given list of items.", "input": "Apples, oranges, bananas, strawberries, pineapples", "output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples", "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples" } ``` ### Data Fields The data fields are as follows: * `instruction`: describes the task the model should perform. Each of the 52K instructions is unique. * `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input. * `output`: the answer to the instruction as generated by `text-davinci-003`. * `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
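The `text` field can be reconstructed from `instruction`, `input` and `output` using the prompt template visible in the sample instance above. A sketch: the with-input template is copied from that sample, while the empty-input variant is an assumption (the official templates live in the stanford_alpaca repository):

```python
# Rebuild the `text` field from instruction/input/output. The with-input
# template matches the sample instance above; the empty-input variant is an
# assumption about the official template.
TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{input}"
    "\n\n### Response:\n{output}"
)
TEMPLATE_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n{instruction}"
    "\n\n### Response:\n{output}"
)

def to_text(example):
    tpl = TEMPLATE_WITH_INPUT if example.get("input") else TEMPLATE_NO_INPUT
    return tpl.format(**example)  # str.format ignores unused keyword args

ex = {"instruction": "Convert 25m to km.", "input": "", "output": "0.025 km"}
print(to_text(ex).splitlines()[0])
```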
### Data Splits | | train | |---------------|------:| | alpaca | 52002 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset Excerpted from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset: > We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions.
Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models. ### Discussion of Biases [More Information Needed] ### Other Known Limitations The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. 
Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ### Contributions [More Information Needed]
Francesco/aquarium-qlnqy
2023-03-30T09:16:41.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
null
1
127
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': aquarium '1': fish '2': jellyfish '3': penguin '4': puffin '5': shark '6': starfish '7': stingray annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: aquarium-qlnqy tags: - rf100 --- # Dataset Card for aquarium-qlnqy **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/aquarium-qlnqy - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary aquarium-qlnqy ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users. ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/aquarium-qlnqy ### Citation Information ``` @misc{ aquarium-qlnqy, title = { aquarium qlnqy Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/aquarium-qlnqy } }, url = { https://universe.roboflow.com/object-detection/aquarium-qlnqy }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
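The `bbox` values above use the COCO `[x_min, y_min, width, height]` convention linked in the field description, while much detection tooling expects corner coordinates instead. A minimal sketch of the conversion, applied to the sample boxes above (an illustration, not part of the dataset's own tooling):

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Boxes taken from the sample data instance above.
boxes = [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0]]
corners = [coco_to_corners(b) for b in boxes]
```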
PierreLepagnol/WRENCH
2023-08-17T11:18:59.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "size_categories:10K<n<100K", "region:us" ]
PierreLepagnol
null
null
null
0
127
--- task_categories: - text-classification - token-classification size_categories: - 10K<n<100K dataset_info: - config_name: yelp features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 22618599 num_examples: 30400 - name: validation num_bytes: 2824249 num_examples: 3800 - name: test num_bytes: 2709033 num_examples: 3800 download_size: 37356054 dataset_size: 28151881 - config_name: imdb features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 25515241 num_examples: 20000 - name: validation num_bytes: 3269130 num_examples: 2500 - name: test num_bytes: 3151954 num_examples: 2500 download_size: 33910706 dataset_size: 31936325 - config_name: agnews features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 20357121 num_examples: 96000 - name: validation num_bytes: 2487983 num_examples: 12000 - name: test num_bytes: 2521518 num_examples: 12000 download_size: 39149014 dataset_size: 25366622 - config_name: cdr features: - name: text dtype: string - name: label dtype: int8 - name: entity1 dtype: string - name: entity2 dtype: string - name: weak_labels sequence: int8 splits: - name: train num_bytes: 2318545 num_examples: 8430 - name: validation num_bytes: 246252 num_examples: 920 - name: test num_bytes: 1229627 num_examples: 4673 download_size: 11036213 dataset_size: 3794424 - config_name: chemprot features: - name: text dtype: string - name: label dtype: int8 - name: entity1 dtype: string - name: entity2 dtype: string - name: weak_labels sequence: int8 splits: - name: train num_bytes: 3474637 num_examples: 12861 - name: validation num_bytes: 435850 num_examples: 1607 - name: test num_bytes: 434031 num_examples: 1607 download_size: 15743249 dataset_size: 4344518 - config_name: semeval features: - name: text dtype: string - name: label dtype: int8 - name: 
entity1 dtype: string - name: entity2 dtype: string - name: weak_labels sequence: int8 splits: - name: train num_bytes: 532785 num_examples: 1749 - name: validation num_bytes: 54373 num_examples: 178 - name: test num_bytes: 184826 num_examples: 600 download_size: 2295058 dataset_size: 771984 - config_name: sms features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 741520 num_examples: 4571 - name: validation num_bytes: 81747 num_examples: 500 - name: test num_bytes: 80152 num_examples: 500 download_size: 6715435 dataset_size: 903419 - config_name: spouse features: - name: text dtype: string - name: label dtype: int8 - name: entity1 dtype: string - name: entity2 dtype: string - name: weak_labels sequence: int8 splits: - name: train num_bytes: 7550613 num_examples: 22254 - name: validation num_bytes: 952523 num_examples: 2811 - name: test num_bytes: 876804 num_examples: 2701 download_size: 22017644 dataset_size: 9379940 - config_name: trec features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 610244 num_examples: 4965 - name: validation num_bytes: 61048 num_examples: 500 - name: test num_bytes: 56479 num_examples: 500 download_size: 2277067 dataset_size: 727771 - config_name: youtube features: - name: text dtype: string - name: label dtype: int8 - name: weak_labels sequence: int8 splits: - name: train num_bytes: 180736 num_examples: 1586 - name: validation num_bytes: 14659 num_examples: 120 - name: test num_bytes: 33347 num_examples: 250 download_size: 759494 dataset_size: 228742 ---
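Each config above pairs a gold `label` with a `weak_labels` sequence (one vote per labeling function). A common baseline for such weak-supervision data is majority voting over the votes; the sketch below assumes the WRENCH convention of encoding abstentions as -1, which this card does not itself state:

```python
from collections import Counter

ABSTAIN = -1  # assumed abstention marker (WRENCH convention)

def majority_vote(weak_labels):
    """Aggregate one example's weak labels; None if every labeling function abstained."""
    votes = Counter(label for label in weak_labels if label != ABSTAIN)
    return votes.most_common(1)[0][0] if votes else None

print(majority_vote([1, -1, 1, 0]))  # 1
print(majority_vote([-1, -1]))       # None
```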
vivym/midjourney-prompts
2023-08-26T17:20:53.000Z
[ "task_categories:text-to-image", "license:apache-2.0", " midjourney", "region:us" ]
vivym
null
null
null
4
127
--- license: apache-2.0 task_categories: - text-to-image tags: - ' midjourney' --- # midjourney-prompts ## Description This dataset contains cleaned prompts collected from Midjourney. Total prompts: 9,085,397 | Version | Count | | ------- | --------- | | 5.2 | 2,272,465 | | 5.1 | 2,060,106 | | 5.0 | 3,530,770 | | 4.0 | 1,204,384 | | 3.0 | 14,991 | | 2.0 | 791 | | 1.0 | 1,239 | | Style | Count | | --------- | ----------- | | default | 8,874,181 | | raw | 177,953 | | expressive| 27,919 | | scenic | 2,146 | | cute | 2,036 | | original | 511 |
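The per-version rows above can be tallied directly (note that both tables sum to 9,084,746, just under the stated 9,085,397 total); a quick sketch computing the share of v5.x prompts:

```python
# Counts copied from the version table above.
version_counts = {
    "5.2": 2_272_465, "5.1": 2_060_106, "5.0": 3_530_770,
    "4.0": 1_204_384, "3.0": 14_991, "2.0": 791, "1.0": 1_239,
}
table_total = sum(version_counts.values())
v5_share = sum(n for v, n in version_counts.items() if v.startswith("5.")) / table_total
print(f"{v5_share:.1%}")  # roughly 86.6% of the tabulated prompts target a v5.x model
```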
vikp/pypi_clean
2023-09-06T23:50:29.000Z
[ "region:us" ]
vikp
null
null
null
36
127
--- dataset_info: features: - name: code dtype: string - name: package dtype: string - name: path dtype: string - name: filename dtype: string splits: - name: train num_bytes: 31543801750 num_examples: 2438172 download_size: 9201420527 dataset_size: 31543801750 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "pypi_clean" All of the latest package versions from pypi. The original data came from [here](https://py-code.org/datasets). I pulled the latest versions of each package, then extracted only `md`, `rst`, `ipynb`, and `py` files. I then applied some cleaning: - rendering notebooks - removing leading comments/licenses
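A minimal sketch of the extension filter described above — an illustration of the stated selection rule (`md`, `rst`, `ipynb`, `py`), not the author's actual extraction code:

```python
from pathlib import Path

KEPT_SUFFIXES = {".md", ".rst", ".ipynb", ".py"}

def keep_file(path: str) -> bool:
    """Return True for the file types retained in this dataset."""
    return Path(path).suffix.lower() in KEPT_SUFFIXES

# Hypothetical package paths for illustration.
paths = ["pkg/setup.py", "pkg/README.md", "pkg/data.bin", "docs/index.rst"]
kept = [p for p in paths if keep_file(p)]
```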
robertmyers/gigatargon
2023-09-09T01:07:01.000Z
[ "region:us" ]
robertmyers
null
null
null
0
127
Entry not found
kotzeje/lamini_docs.jsonl
2023-08-24T12:35:32.000Z
[ "region:us" ]
kotzeje
null
null
null
1
126
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 573589 num_examples: 1400 download_size: 283465 dataset_size: 573589 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "lamini_docs.jsonl" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/fwv2_squad_num_train_10000_eval_100
2023-08-29T08:06:14.000Z
[ "region:us" ]
tyzhu
null
null
null
0
126
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: train_doc2id path: data/train_doc2id-* - split: train_id2doc path: data/train_id2doc-* - split: train_find_word path: data/train_find_word-* - split: eval_find_word path: data/eval_find_word-* - split: id_context_mapping path: data/id_context_mapping-* dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: text dtype: string splits: - name: train num_bytes: 2877195 num_examples: 20100 - name: train_doc2id num_bytes: 1736997 num_examples: 10100 - name: train_id2doc num_bytes: 1767297 num_examples: 10100 - name: train_find_word num_bytes: 1109898 num_examples: 10000 - name: eval_find_word num_bytes: 10775 num_examples: 100 - name: id_context_mapping num_bytes: 1444097 num_examples: 10100 download_size: 4619144 dataset_size: 8946259 --- # Dataset Card for "fwv2_squad_num_train_10000_eval_100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ai4bharat/IN22-Gen
2023-09-12T11:13:23.000Z
[ "task_categories:translation", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:1K<n<10K", "language:as", "language:bn", "language:brx", "language:doi", "language:en", "language:gom", "language:gu", "language:hi", "language:kn", "language:ks", "language:mai", "language:ml", "language:mr", "language:mni", "language:ne", "language:or", "language:pa", "language:sa", "language:sat", "language:sd", "language:ta", "language:te", "language:ur", "license:cc-by-4.0", "arxiv:2305.16307", "region:us" ]
ai4bharat
IN-22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Gen is a general-purpose multi-domain evaluation subset of IN22. It has been created from two sources: Wikipedia and Web Sources offering diverse content spanning news, entertainment, culture, legal, and India-centric topics.
@article{ai4bharat2023indictrans2, title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages}, author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan}, year = {2023}, journal = {arXiv preprint arXiv: 2305.16307} }
null
1
126
--- language: - as - bn - brx - doi - en - gom - gu - hi - kn - ks - mai - ml - mr - mni - ne - or - pa - sa - sat - sd - ta - te - ur language_details: >- asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr, hin_Deva, kan_Knda, kas_Arab, mai_Deva, mal_Mlym, mar_Deva, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck, snd_Deva, tam_Taml, tel_Telu, urd_Arab license: cc-by-4.0 language_creators: - expert-generated multilinguality: - multilingual - translation pretty_name: in22-gen size_categories: - 1K<n<10K task_categories: - translation --- # IN22-Gen IN22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Gen is a general-purpose multi-domain evaluation subset of IN22. It has been created from two sources: Wikipedia and Web Sources offering diverse content spanning news, entertainment, culture, legal, and India-centric topics. The evaluation subset consists of 1024 sentences translated across 22 Indic languages enabling evaluation of MT systems across 506 directions. Here is the domain and source distribution of our IN22-Gen evaluation subset. 
<table style="width: 40%"> <tr> <td>domain</td> <td>web sources</td> <td>wikipedia</td> </tr> <tr> <td>culture</td> <td>40</td> <td>40</td> </tr> <tr> <td>economy</td> <td>40</td> <td>40</td> </tr> <tr> <td>education</td> <td>40</td> <td>40</td> </tr> <tr> <td>entertainment</td> <td>40</td> <td>40</td> </tr> <tr> <td>geography</td> <td>40</td> <td>40</td> </tr> <tr> <td>governments</td> <td>40</td> <td>40</td> </tr> <tr> <td>health</td> <td>40</td> <td>40</td> </tr> <tr> <td>industry</td> <td>40</td> <td>40</td> </tr> <tr> <td>legal</td> <td>40</td> <td>40</td> </tr> <tr> <td>news</td> <td>32</td> <td>32</td> </tr> <tr> <td>religion</td> <td>40</td> <td>40</td> </tr> <tr> <td>sports</td> <td>40</td> <td>40</td> </tr> <tr> <td>tourism</td> <td>40</td> <td>40</td> </tr> <tr> <td>total</td> <td>512</td> <td>512</td> </tr> </table> Please refer to `Appendix E: Dataset Card` of the [preprint](https://arxiv.org/abs/2305.16307) for a detailed description of the dataset curation, annotation, and quality control process. ### Dataset Structure #### Dataset Fields - `id`: Row number for the data entry, starting at 1. - `context`: Context window of 3 sentences, typically includes one sentence before and after the candidate sentence. - `source`: The source from which the candidate sentence is considered. - `url`: The URL for the English article from which the sentence was extracted. Only available for candidate sentences sourced from Wikipedia. - `domain`: The domain of the sentence. - `num_words`: The number of words in the candidate sentence. - `bucket`: Classification of the candidate sentence as per predefined bucket categories. - `sentence`: The full sentence in the specific language (in paired configurations this column is suffixed with the language code, e.g. `sentence_hin_Deva`) #### Data Instances A sample from the `gen` split for the English language (`eng_Latn` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits. 
```python { "id": 1, "context": "A uniform is often viewed as projecting a positive image of an organisation. Maintaining personal hygiene is also an important aspect of personal appearance and dressing. An appearance is a bunch of attributes related with the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "source": "web", "url": "", "domain": "culture", "num_words": 24, "bucket": "18 - 25", "sentence": "An appearance is a bunch of attributes related to the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc." } ``` When using a hyphenated pairing or using the `all` function, data will be presented as follows: ```python { "id": 1, "context": "A uniform is often viewed as projecting a positive image of an organisation. Maintaining personal hygiene is also an important aspect of personal appearance and dressing. An appearance is a bunch of attributes related with the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "source": "web", "url": "", "domain": "culture", "num_words": 24, "bucket": "18 - 25", "sentence_eng_Latn": "An appearance is a bunch of attributes related to the service person, like their shoes, clothes, tie, jewellery, hairstyle, make-up, watch, cosmetics, perfume, etc.", "sentence_hin_Deva": "सेवा संबंधी लोगों के लिए भेष कई गुणों का संयोजन है, जैसे कि उनके जूते, कपड़े, टाई, आभूषण, केश शैली, मेक-अप, घड़ी, कॉस्मेटिक, इत्र, आदि।" } ``` ### Usage Instructions ```python from datasets import load_dataset # download and load all the pairs dataset = load_dataset("ai4bharat/IN22-Gen", "all") # download and load specific pairs dataset = load_dataset("ai4bharat/IN22-Gen", "eng_Latn-hin_Deva") ``` ### Languages Covered <table style="width: 40%"> <tr> <td>Assamese (asm_Beng)</td> <td>Kashmiri (Arabic) (kas_Arab)</td> <td>Punjabi (pan_Guru)</td> </tr> <tr> <td>Bengali 
(ben_Beng)</td> <td>Kashmiri (Devanagari) (kas_Deva)</td> <td>Sanskrit (san_Deva)</td> </tr> <tr> <td>Bodo (brx_Deva)</td> <td>Maithili (mai_Deva)</td> <td>Santali (sat_Olck)</td> </tr> <tr> <td>Dogri (doi_Deva)</td> <td>Malayalam (mal_Mlym)</td> <td>Sindhi (Arabic) (snd_Arab)</td> </tr> <tr> <td>English (eng_Latn)</td> <td>Marathi (mar_Deva)</td> <td>Sindhi (Devanagari) (snd_Deva)</td> </tr> <tr> <td>Konkani (gom_Deva)</td> <td>Manipuri (Bengali) (mni_Beng)</td> <td>Tamil (tam_Taml)</td> </tr> <tr> <td>Gujarati (guj_Gujr)</td> <td>Manipuri (Meitei) (mni_Mtei)</td> <td>Telugu (tel_Telu)</td> </tr> <tr> <td>Hindi (hin_Deva)</td> <td>Nepali (npi_Deva)</td> <td>Urdu (urd_Arab)</td> </tr> <tr> <td>Kannada (kan_Knda)</td> <td>Odia (ory_Orya)</td> </tr> </table> ### Citation If you consider using our work then please cite using: ``` @article{ai4bharat2023indictrans2, title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages}, author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan}, year = {2023}, journal = {arXiv preprint arXiv: 2305.16307} } ```
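The 506 evaluation directions quoted above are exactly the ordered pairs over the 23 language codes in the `language_details` metadata (22 Indic languages plus English); a quick sketch enumerating them, using the `src-tgt` config naming shown in the usage example:

```python
from itertools import permutations

langs = [
    "asm_Beng", "ben_Beng", "brx_Deva", "doi_Deva", "eng_Latn", "gom_Deva",
    "guj_Gujr", "hin_Deva", "kan_Knda", "kas_Arab", "mai_Deva", "mal_Mlym",
    "mar_Deva", "mni_Mtei", "npi_Deva", "ory_Orya", "pan_Guru", "san_Deva",
    "sat_Olck", "snd_Deva", "tam_Taml", "tel_Telu", "urd_Arab",
]

directions = [f"{src}-{tgt}" for src, tgt in permutations(langs, 2)]
assert len(directions) == 23 * 22  # 506 ordered translation directions
```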
Goorm-AI-04/RCS_Image_Stratified_Train_Test
2023-09-17T10:46:02.000Z
[ "region:us" ]
Goorm-AI-04
null
null
null
0
126
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: rcs_image dtype: image - name: drone_type dtype: string - name: frequency dtype: int64 - name: label dtype: class_label: names: '0': 0 '1': 1 '2': 2 '3': 3 '4': 4 '5': 5 '6': 6 '7': 7 '8': 8 '9': 9 '10': 10 '11': 11 '12': 12 '13': 13 '14': 14 '15': 15 splits: - name: train num_bytes: 24972888.0 num_examples: 192 - name: test num_bytes: 6243222.0 num_examples: 48 download_size: 31218865 dataset_size: 31216110.0 --- # Dataset Card for "RCS_Image_Stratified_Train_Test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Alexandre-Numind/TrainIE
2023-10-04T11:22:45.000Z
[ "license:mit", "region:us" ]
Alexandre-Numind
null
null
null
0
126
--- license: mit ---
ywchoi/pubmed_abstract_5
2022-09-13T01:07:12.000Z
[ "region:us" ]
ywchoi
null
null
null
0
125
Entry not found
ywchoi/pubmed_abstract_7
2022-09-13T01:12:17.000Z
[ "region:us" ]
ywchoi
null
null
null
0
125
Entry not found
tomekkorbak/detoxify-pile-chunk3-1850000-1900000
2022-10-04T23:05:22.000Z
[ "region:us" ]
tomekkorbak
null
null
null
0
125
Entry not found
mstz/heloc
2023-04-07T13:57:28.000Z
[ "task_categories:tabular-classification", "size_categories:10K<n<100K", "language:en", "license:cc", "heloc", "fico", "tabular_classification", "binary_classification", "region:us" ]
mstz
null
null
null
0
125
--- language: - en tags: - heloc - fico - tabular_classification - binary_classification pretty_name: Heloc size_categories: - 10K<n<100K task_categories: - tabular-classification configs: - risk license: cc --- # HELOC The [HELOC dataset](https://community.fico.com/s/explainable-machine-learning-challenge?tabset-158d9=d157e) from FICO. Each entry in the dataset is a line of credit, typically offered by a bank as a percentage of home equity (the difference between the current market value of a home and its purchase price). The customers in this dataset have requested a credit line in the range of $5,000 - $150,000. The fundamental task is to use the information about the applicant in their credit report to predict whether they will repay their HELOC account within 2 years. # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-----------------------------------------------------------------| | risk | Binary classification | Will the customer default? 
| # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/heloc")["train"] ``` # Features |**Feature** |**Type**| |-------------------------------------------|--------| |`estimate_of_risk` |`int8` | |`months_since_first_trade` |`int32` | |`months_since_last_trade` |`int32` | |`average_duration_of_resolution` |`int32` | |`number_of_satisfactory_trades` |`int16` | |`nr_trades_insolvent_for_over_60_days` |`int16` | |`nr_trades_insolvent_for_over_90_days` |`int16` | |`percentage_of_legal_trades` |`int16` | |`months_since_last_illegal_trade` |`int32` | |`maximum_illegal_trades_over_last_year` |`int8` | |`maximum_illegal_trades` |`int16` | |`nr_total_trades` |`int16` | |`nr_trades_initiated_in_last_year` |`int16` | |`percentage_of_installment_trades` |`int16` | |`months_since_last_inquiry_not_recent` |`int16` | |`nr_inquiries_in_last_6_months` |`int16` | |`nr_inquiries_in_last_6_months_not_recent` |`int16` | |`net_fraction_of_revolving_burden` |`int32` | |`net_fraction_of_installment_burden` |`int32` | |`nr_revolving_trades_with_balance` |`int16` | |`nr_installment_trades_with_balance` |`int16` | |`nr_banks_with_high_ratio` |`int16` | |`percentage_trades_with_balance` |`int16` |
clarin-knext/msmarco-pl
2023-06-07T08:22:03.000Z
[ "language:pl", "arxiv:2305.19840", "region:us" ]
clarin-knext
null
null
null
0
125
--- language: - pl --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: konrad.wojtasik@pwr.edu.pl
Juzzy88/science_dict_full
2023-08-05T08:20:01.000Z
[ "region:us" ]
Juzzy88
null
null
null
0
125
--- dataset_info: features: - name: role_1 dtype: string - name: topic; dtype: string - name: sub_topic dtype: string - name: message_1 dtype: string - name: message_2 dtype: string - name: text dtype: string splits: - name: train num_bytes: 200646759 num_examples: 38400 - name: val num_bytes: 50121062 num_examples: 9600 - name: test num_bytes: 62653743 num_examples: 12000 download_size: 148930334 dataset_size: 313421564 --- # Dataset Card for "science_dict_full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
corbt/all-recipes
2023-08-24T06:27:02.000Z
[ "region:us" ]
corbt
null
null
null
22
125
--- dataset_info: features: - name: input dtype: string splits: - name: train num_bytes: 1569011376 num_examples: 2147248 download_size: 807147913 dataset_size: 1569011376 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "all-recipes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BleachNick/MIC_full
2023-09-24T04:13:18.000Z
[ "license:unknown", "arxiv:2309.07915", "region:us" ]
BleachNick
null
null
null
3
125
--- license: unknown --- ```bibtex @misc{zhao2023mmicl, title={MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning}, author={Haozhe Zhao and Zefan Cai and Shuzheng Si and Xiaojian Ma and Kaikai An and Liang Chen and Zixuan Liu and Sheng Wang and Wenjuan Han and Baobao Chang}, year={2023}, eprint={2309.07915}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Alexandre-Numind/TestIE
2023-10-04T12:50:11.000Z
[ "license:mit", "region:us" ]
Alexandre-Numind
null
null
null
0
125
--- license: mit ---
castorini/afriberta-corpus
2022-10-19T21:33:04.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "language:om", "language:am", "language:rw", "language:rn", "language:ha", "language:ig", "language:pcm", "language:so", "language:sw", "language:ti", "language:yo", "language:multilingual", "license:apache-2.0", "region:us" ]
castorini
Corpus used for training AfriBERTa models
@inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", }
null
6
124
--- language: - om - am - rw - rn - ha - ig - pcm - so - sw - ti - yo - multilingual license: apache-2.0 task_categories: - text-generation task_ids: - language-modeling --- # Dataset Card for AfriBERTa's Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Loading Dataset](#loading-dataset) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This is the corpus on which AfriBERTa was trained. The dataset is mostly from the BBC news website, but some languages also have data from Common Crawl. - **Homepage:** https://github.com/keleog/afriberta - **Models:** - https://huggingface.co/castorini/afriberta_small - https://huggingface.co/castorini/afriberta_base - https://huggingface.co/castorini/afriberta_large - **Paper:** https://aclanthology.org/2021.mrl-1.11/ - **Point of Contact:** kelechi.ogueji@uwaterloo.ca ### Supported Tasks and Leaderboards The AfriBERTa corpus was mostly intended to pre-train language models. ### Languages ``` afaanoromoo amharic gahuza hausa igbo pidgin somali swahili tigrinya yoruba ``` ### Loading Dataset An example to load the train split of the Somali corpus: ``` dataset = load_dataset("castorini/afriberta-corpus", "somali", split="train") ``` An example to load the test split of the Pidgin corpus: ``` dataset = load_dataset("castorini/afriberta-corpus", "pidgin", split="test") ``` ## Dataset Structure ### Data Instances Each data point is a line of text. 
An example from the `igbo` dataset: ``` {"id": "6", "text": "Ngwá ọrụ na-echebe ma na-ebuli gị na kọmputa."} ``` ### Data Fields The data fields are: - `id`: id of the example - `text`: content as a string ### Data Splits Each language has a train and test split, with varying sizes. ## Considerations for Using the Data ### Discussion of Biases Since the majority of the data is obtained from the BBC's news website, models trained on this dataset are likely to be biased towards the news domain. Also, since some of the data is obtained from Common Crawl, care should be taken (especially for text generation models) since personal and sensitive information might be present. ## Additional Information ### Citation Information ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ``` ### Contributions Thanks to [Kelechi Ogueji](https://github.com/keleog) for adding this dataset.
BeIR/msmarco
2022-10-23T06:02:06.000Z
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
BeIR
null
null
null
2
124
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** nandan.thakur@uwaterloo.ca ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best-performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory."
}, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
graphs-datasets/PROTEINS
2023-02-07T16:39:11.000Z
[ "task_categories:graph-ml", "license:unknown", "region:us" ]
graphs-datasets
null
null
null
0
124
--- license: unknown task_categories: - graph-ml --- # Dataset Card for PROTEINS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://academic.oup.com/bioinformatics/article/21/suppl_1/i47/202991)** - **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/PROTEINS.zip)** - **Paper:** Protein function prediction via graph kernels (see citation) - **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-proteins) ### Dataset Summary The `PROTEINS` dataset is a medium-scale molecular property prediction dataset. ### Supported Tasks and Leaderboards `PROTEINS` should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The evaluation metric is accuracy, computed with 10-fold cross-validation.
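Since evaluation averages accuracy over ten folds, fold construction can be sketched in plain Python. This helper is for illustration only (contiguous folds; published results shuffle and often stratify the 1113 graphs listed under Data Properties below):

```python
def k_fold_indices(n_examples, k=10):
    """Split range(n_examples) into k roughly equal, contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        # the first n_examples % k folds take one extra example
        size = n_examples // k + (1 if i < n_examples % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# Each fold serves once as the held-out test set; the final score is
# the mean test accuracy across the 10 runs.
folds = k_fold_indices(1113, k=10)
```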
## External Use ### PyGeometric To load this dataset in PyTorch Geometric, do the following: ```python from datasets import load_dataset import torch from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/PROTEINS") dataset_pg_list = [Data(x=torch.tensor(graph["node_feat"]), edge_index=torch.tensor(graph["edge_index"]), y=torch.tensor(graph["y"])) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | medium | | #graphs | 1113 | | average #nodes | 39.06 | | average #edges | 72.82 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): the label(s) to predict (here a single binary label, 0 or 1) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by TUDataset. This information can be recovered using ```python from torch_geometric.datasets import TUDataset dataset = TUDataset(root='', name='PROTEINS') ``` ## Additional Information ### Licensing Information The dataset has been released under an unknown license; please open an issue if you have information about it. ### Citation Information ``` @article{10.1093/bioinformatics/bti1007, author = {Borgwardt, Karsten M. and Ong, Cheng Soon and Schönauer, Stefan and Vishwanathan, S. V. N. and Smola, Alex J.
and Kriegel, Hans-Peter}, title = "{Protein function prediction via graph kernels}", journal = {Bioinformatics}, volume = {21}, number = {suppl_1}, pages = {i47-i56}, year = {2005}, month = {06}, abstract = "{Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.Availability: More information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html.Contact:borgwardt@dbs.ifi.lmu.de}", issn = {1367-4803}, doi = {10.1093/bioinformatics/bti1007}, url = {https://doi.org/10.1093/bioinformatics/bti1007}, eprint = {https://academic.oup.com/bioinformatics/article-pdf/21/suppl\_1/i47/524364/bti1007.pdf}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
philschmid/emotion
2023-01-20T14:56:20.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "emotion-classification", "region:us" ]
philschmid
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
@inproceedings{saravia-etal-2018-carer, title = "{CARER}: Contextualized Affect Representations for Emotion Recognition", author = "Saravia, Elvis and Liu, Hsien-Chi Toby and Huang, Yen-Hao and Wu, Junlin and Chen, Yi-Shin", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1404", doi = "10.18653/v1/D18-1404", pages = "3687--3697", abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.", }
null
1
124
--- pretty_name: Emotion annotations_creators: - machine-generated language_creators: - machine-generated language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification paperswithcode_id: emotion train-eval-index: - config: default task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted tags: - emotion-classification dataset_info: - config_name: split features: - name: text dtype: string - name: label dtype: class_label: names: '0': sadness '1': joy '2': love '3': anger '4': fear '5': surprise splits: - name: train num_bytes: 1741597 num_examples: 16000 - name: validation num_bytes: 214703 num_examples: 2000 - name: test num_bytes: 217181 num_examples: 2000 download_size: 740883 dataset_size: 2173481 - config_name: unsplit features: - name: text dtype: string - name: label dtype: class_label: names: '0': sadness '1': joy '2': love '3': anger '4': fear '5': surprise splits: - name: train num_bytes: 45445685 num_examples: 416809 download_size: 15388281 dataset_size: 45445685 duplicated_from: emotion --- # Dataset Card for "emotion" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - 
[Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.95 MB - **Size of the generated dataset:** 4.16 MB - **Total amount of disk used:** 8.11 MB ### Dataset Summary Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. 
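Each record pairs a tweet with an integer class label. A minimal sketch of decoding label ids — the label order is taken from the `dataset_info` block above; the helper itself is ours, not part of the dataset:

```python
# Class-label order as declared in the card's dataset_info.
EMOTION_LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def id_to_label(label_id):
    """Map an integer class label to its emotion name."""
    return EMOTION_LABELS[label_id]

example = {"text": "im feeling quite sad and sorry for myself but ill snap out of it soon", "label": 0}
print(id_to_label(example["label"]))  # sadness
```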
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances An example looks as follows. ``` { "text": "im feeling quite sad and sorry for myself but ill snap out of it soon", "label": 0 } ``` ### Data Fields The data fields are: - `text`: a `string` feature. - `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5). ### Data Splits The dataset has 2 configurations: - split: with a total of 20_000 examples split into train, validation and test - unsplit: with a total of 416_809 examples in a single train split | name | train | validation | test | |---------|-------:|-----------:|-----:| | split | 16000 | 2000 | 2000 | | unsplit | 416809 | n/a | n/a | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset should be used for educational and research purposes only. ### Citation Information If you use this dataset, please cite: ``` @inproceedings{saravia-etal-2018-carer, title = "{CARER}: Contextualized Affect Representations for Emotion Recognition", author = "Saravia, Elvis and Liu, Hsien-Chi Toby and Huang, Yen-Hao and Wu, Junlin and Chen, Yi-Shin", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1404", doi = "10.18653/v1/D18-1404", pages = "3687--3697", abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. 
Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.", } ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
shibing624/alpaca-zh
2023-05-10T06:09:06.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:zh", "license:cc-by-4.0", "gpt", "alpaca", "fine-tune", "instruct-tune", "instruction", "arxiv:2304.03277", "region:us" ]
shibing624
null
null
null
42
124
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 32150579 num_examples: 48818 download_size: 35100559 dataset_size: 32150579 license: cc-by-4.0 language: - zh pretty_name: Instruction Tuning with GPT-4 size_categories: - 10K<n<100K task_categories: - text-generation tags: - gpt - alpaca - fine-tune - instruct-tune - instruction --- # Dataset Description - **Project Page:** https://instruction-tuning-with-gpt-4.github.io - **Repo:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM - **Paper:** https://arxiv.org/abs/2304.03277 # Dataset Card for "alpaca-zh" This dataset contains roughly 50,000 self-instruct examples generated with GPT-4, following the Alpaca approach. Dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM It is the Chinese dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data_zh.json # Usage and License Notices The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. To train a model with the alpaca-zh dataset, see: https://github.com/shibing624/textgen # English Dataset [Found here](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data) # Citation ``` @article{peng2023gpt4llm, title={Instruction Tuning with GPT-4}, author={Baolin Peng and Chunyuan Li and Pengcheng He and Michel Galley and Jianfeng Gao}, journal={arXiv preprint arXiv:2304.03277}, year={2023} } ```
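Each record carries `instruction`, `input` and `output` fields (see the `dataset_info` above). A minimal sketch of assembling a training pair — the prompt template wording is ours, not prescribed by the dataset:

```python
def format_example(example):
    """Build a (prompt, target) pair from one alpaca-zh record."""
    if example["input"]:
        # Instruction plus task-specific input
        prompt = (
            f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Response: "
        )
    else:
        # Instruction-only record: omit the empty Input section
        prompt = f"Instruction: {example['instruction']}\nResponse: "
    return prompt, example["output"]
```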
JayalekshmiGopakumar/doclaynetTextClassDataset
2023-06-26T08:24:03.000Z
[ "region:us" ]
JayalekshmiGopakumar
null
null
null
0
124
--- dataset_info: features: - name: text dtype: string - name: label dtype: string splits: - name: train num_bytes: 1806160 num_examples: 691 - name: validation num_bytes: 167123 num_examples: 64 - name: test num_bytes: 125303 num_examples: 49 download_size: 1172429 dataset_size: 2098586 --- # Dataset Card for "doclaynetTextClassDataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
percins/IN-ABS
2023-08-11T12:53:05.000Z
[ "region:us" ]
percins
null
null
null
0
124
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: text dtype: string - name: summary dtype: string - name: file dtype: string splits: - name: train num_bytes: 160084476 num_examples: 5346 - name: validation num_bytes: 22684426 num_examples: 712 - name: test num_bytes: 30578218 num_examples: 1070 download_size: 103908520 dataset_size: 213347120 --- # Dataset Card for "IN-ABS" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HumanCompatibleAI/ppo-seals-HalfCheetah-v1
2023-09-27T06:57:57.000Z
[ "region:us" ]
HumanCompatibleAI
null
null
null
0
124
--- dataset_info: features: - name: obs sequence: sequence: float64 - name: acts sequence: sequence: float32 - name: infos sequence: string - name: terminal dtype: bool - name: rews sequence: float32 splits: - name: train num_bytes: 92213656 num_examples: 104 download_size: 25621245 dataset_size: 92213656 --- # Dataset Card for "ppo-seals-HalfCheetah-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
para_crawl
2023-04-05T13:36:34.000Z
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10M<n<100M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:cc0-1.0", "region:us" ]
null
null
@misc {paracrawl, title = {ParaCrawl}, year = {2018}, url = {http://paracrawl.eu/download.html.} }
null
8
123
--- annotations_creators: - no-annotation language_creators: - found language: - bg - cs - da - de - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sk - sl - sv license: - cc0-1.0 multilinguality: - translation pretty_name: ParaCrawl size_categories: - 10M<n<100M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: paracrawl dataset_info: - config_name: enbg features: - name: translation dtype: translation: languages: - en - bg splits: - name: train num_bytes: 356532771 num_examples: 1039885 download_size: 103743335 dataset_size: 356532771 - config_name: encs features: - name: translation dtype: translation: languages: - en - cs splits: - name: train num_bytes: 638068353 num_examples: 2981949 download_size: 196410022 dataset_size: 638068353 - config_name: enda features: - name: translation dtype: translation: languages: - en - da splits: - name: train num_bytes: 598624306 num_examples: 2414895 download_size: 182804827 dataset_size: 598624306 - config_name: ende features: - name: translation dtype: translation: languages: - en - de splits: - name: train num_bytes: 3997191986 num_examples: 16264448 download_size: 1307754745 dataset_size: 3997191986 - config_name: enel features: - name: translation dtype: translation: languages: - en - el splits: - name: train num_bytes: 688069020 num_examples: 1985233 download_size: 193553374 dataset_size: 688069020 - config_name: enes features: - name: translation dtype: translation: languages: - en - es splits: - name: train num_bytes: 6209466040 num_examples: 21987267 download_size: 1953839527 dataset_size: 6209466040 - config_name: enet features: - name: translation dtype: translation: languages: - en - et splits: - name: train num_bytes: 201408919 num_examples: 853422 download_size: 70158650 dataset_size: 201408919 - config_name: enfi features: - name: translation dtype: translation: languages: - en - fi splits: - name: train num_bytes: 524624150 
num_examples: 2156069 download_size: 159209242 dataset_size: 524624150 - config_name: enfr features: - name: translation dtype: translation: languages: - en - fr splits: - name: train num_bytes: 9015440258 num_examples: 31374161 download_size: 2827554088 dataset_size: 9015440258 - config_name: enga features: - name: translation dtype: translation: languages: - en - ga splits: - name: train num_bytes: 104523278 num_examples: 357399 download_size: 29394367 dataset_size: 104523278 - config_name: enhr features: - name: translation dtype: translation: languages: - en - hr splits: - name: train num_bytes: 247646552 num_examples: 1002053 download_size: 84904103 dataset_size: 247646552 - config_name: enhu features: - name: translation dtype: translation: languages: - en - hu splits: - name: train num_bytes: 403168065 num_examples: 1901342 download_size: 119784765 dataset_size: 403168065 - config_name: enit features: - name: translation dtype: translation: languages: - en - it splits: - name: train num_bytes: 3340542050 num_examples: 12162239 download_size: 1066720197 dataset_size: 3340542050 - config_name: enlt features: - name: translation dtype: translation: languages: - en - lt splits: - name: train num_bytes: 197053694 num_examples: 844643 download_size: 66358392 dataset_size: 197053694 - config_name: enlv features: - name: translation dtype: translation: languages: - en - lv splits: - name: train num_bytes: 142409870 num_examples: 553060 download_size: 47368967 dataset_size: 142409870 - config_name: enmt features: - name: translation dtype: translation: languages: - en - mt splits: - name: train num_bytes: 52786023 num_examples: 195502 download_size: 19028352 dataset_size: 52786023 - config_name: ennl features: - name: translation dtype: translation: languages: - en - nl splits: - name: train num_bytes: 1384042007 num_examples: 5659268 download_size: 420090979 dataset_size: 1384042007 - config_name: enpl features: - name: translation dtype: translation: languages: - 
en - pl splits: - name: train num_bytes: 854786500 num_examples: 3503276 download_size: 270427885 dataset_size: 854786500 - config_name: enpt features: - name: translation dtype: translation: languages: - en - pt splits: - name: train num_bytes: 2031891156 num_examples: 8141940 download_size: 638184462 dataset_size: 2031891156 - config_name: enro features: - name: translation dtype: translation: languages: - en - ro splits: - name: train num_bytes: 518359240 num_examples: 1952043 download_size: 160684751 dataset_size: 518359240 - config_name: ensk features: - name: translation dtype: translation: languages: - en - sk splits: - name: train num_bytes: 337704729 num_examples: 1591831 download_size: 101307152 dataset_size: 337704729 - config_name: ensl features: - name: translation dtype: translation: languages: - en - sl splits: - name: train num_bytes: 182399034 num_examples: 660161 download_size: 65037465 dataset_size: 182399034 - config_name: ensv features: - name: translation dtype: translation: languages: - en - sv splits: - name: train num_bytes: 875576366 num_examples: 3476729 download_size: 275528370 dataset_size: 875576366 --- # Dataset Card for "para_crawl" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://paracrawl.eu/releases.html](https://paracrawl.eu/releases.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 10.36 GB - **Size of the generated dataset:** 32.90 GB - **Total amount of disk used:** 43.26 GB ### Dataset Summary Web-Scale Parallel Corpora for Official European Languages. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### enbg - **Size of downloaded dataset files:** 103.75 MB - **Size of the generated dataset:** 356.54 MB - **Total amount of disk used:** 460.27 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"bg\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..." } ``` #### encs - **Size of downloaded dataset files:** 196.41 MB - **Size of the generated dataset:** 638.07 MB - **Total amount of disk used:** 834.48 MB An example of 'train' looks as follows. 
``` This example was too long and was cropped: { "translation": "{\"cs\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..." } ``` #### enda - **Size of downloaded dataset files:** 182.81 MB - **Size of the generated dataset:** 598.62 MB - **Total amount of disk used:** 781.43 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"da\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..." } ``` #### ende - **Size of downloaded dataset files:** 1.31 GB - **Size of the generated dataset:** 4.00 GB - **Total amount of disk used:** 5.30 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"de\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..." } ``` #### enel - **Size of downloaded dataset files:** 193.56 MB - **Size of the generated dataset:** 688.07 MB - **Total amount of disk used:** 881.62 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"el\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..." } ``` ### Data Fields The data fields are the same among all splits. #### enbg - `translation`: a multilingual `string` variable, with possible languages including `en`, `bg`. #### encs - `translation`: a multilingual `string` variable, with possible languages including `en`, `cs`. #### enda - `translation`: a multilingual `string` variable, with possible languages including `en`, `da`. #### ende - `translation`: a multilingual `string` variable, with possible languages including `en`, `de`. #### enel - `translation`: a multilingual `string` variable, with possible languages including `en`, `el`. 
### Data Splits | name | train | |------|---------:| | enbg | 1039885 | | encs | 2981949 | | enda | 2414895 | | ende | 16264448 | | enel | 1985233 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [Creative Commons CC0 license ("no rights 
reserved")](https://creativecommons.org/share-your-work/public-domain/cc0/). ### Citation Information ``` @inproceedings{banon-etal-2020-paracrawl, title = "{P}ara{C}rawl: Web-Scale Acquisition of Parallel Corpora", author = "Ba{\~n}{\'o}n, Marta and Chen, Pinzhen and Haddow, Barry and Heafield, Kenneth and Hoang, Hieu and Espl{\`a}-Gomis, Miquel and Forcada, Mikel L. and Kamran, Amir and Kirefu, Faheem and Koehn, Philipp and Ortiz Rojas, Sergio and Pla Sempere, Leopoldo and Ram{\'\i}rez-S{\'a}nchez, Gema and Sarr{\'\i}as, Elsa and Strelec, Marek and Thompson, Brian and Waites, William and Wiggins, Dion and Zaragoza, Jaume", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.417", doi = "10.18653/v1/2020.acl-main.417", pages = "4555--4567", abstract = "We report on methods to create the largest publicly available parallel corpora by crawling the web, using open source software. We empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering. We also describe the parallel corpora released and evaluate their quality and their usefulness to create machine translation systems.", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
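Each row of any config holds a single `translation` dictionary keyed by language code. A small sketch of unpacking such a row into a sentence pair; the mock record below mirrors the schema, while real rows would come from `load_dataset("para_crawl", "ende", split="train")` (network access required).

```python
# Sketch: unpack a para_crawl row into an (en, de) sentence pair. The mock
# record mirrors the `translation` schema; real rows would come from
# load_dataset("para_crawl", "ende", split="train").

def to_pair(example, src="en", tgt="de"):
    """Return (source, target) sentences from one translation record."""
    t = example["translation"]
    return t[src], t[tgt]

mock = {"translation": {"en": "Hello world.", "de": "Hallo Welt."}}
src, tgt = to_pair(mock)
```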
Recognai/sentiment-banking
2022-02-18T15:28:07.000Z
[ "region:us" ]
Recognai
null
null
null
1
123
Entry not found
ywchoi/pubmed_abstract_3
2022-09-13T01:01:39.000Z
[ "region:us" ]
ywchoi
null
null
null
0
123
Entry not found
IfanSnek/bleat
2023-09-21T13:19:48.000Z
[ "task_categories:conversational", "size_categories:n<1K", "language:en", "license:apache-2.0", "region:us" ]
IfanSnek
null
null
null
7
123
--- license: apache-2.0 task_categories: - conversational language: - en size_categories: - n<1K configs: - config_name: default data_files: clean.txt sample_by: paragraph --- # Data format: * Must start with `### System:\n` and a JSON array of available functions. The list must exist but may be empty. * A `### User:` header must come next with text from the user. * If the assistant makes a call, a `### Call:` header must come next along with a JSON object with `"name"` and `"parameters"` entries. The `"parameters"` entry is a JSON object of `"parameter": "value"` pairs. * If a call is made, a `### Return:` header must come after with a JSON object of `"return": "value"`. * The assistant will make a response with `### Assistant: Response`. * Conversations must be separated by two newlines. # Example: ``` ### System: [ { "name": "calculate_age", "description": "Calculates the age of a person.", "parameters": [ { "name": "birth_date", "type": "date", "description": "The date of birth of the person." }, { "name": "current_date", "type": "date", "description": "The current date." } ], "required": ["birth_date", "current_date"] } ] ### User: How old am I? ### Assistant: When were you born? ### User: 2000-01-01 ### Call: { "name": "calculate_age", "parameters": {"birth_date": "2000-01-01","current_date": "2020-12-30"} } ### Return: {"result": 20} ### Assistant: You are 20 years old. ### User: How old is my brother? ### Assistant: When was your brother born? ### User: 2002-01-01 ### Call: { "name": "calculate_age", "parameters": {"birth_date": "2002-01-01","current_date": "2020-12-30"} } ### Return: {"result": 18} ### Assistant: Your brother is 18 years old, 2 years younger than you. ```
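A minimal sketch of splitting one conversation in this format into (role, content) turns. This is an illustrative helper, not part of the dataset repo; it decodes System/Call/Return payloads as JSON, per the format description above.

```python
# Sketch: split one conversation in the format above into (role, content)
# turns. Illustrative only; not part of the dataset repo. System/Call/Return
# payloads are decoded as JSON, per the format description.
import json
import re

HEADER = re.compile(r"^### (System|User|Assistant|Call|Return):?", re.MULTILINE)

def parse_conversation(text):
    turns = []
    matches = list(HEADER.finditer(text))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        role = m.group(1)
        content = text[m.end():end].strip()
        if role in ("System", "Call", "Return"):
            content = json.loads(content)
        turns.append((role, content))
    return turns

demo = "### System: []\n\n### User: Hi\n\n### Assistant: Hello!"
turns = parse_conversation(demo)
```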
jonathancui/oxford-pets
2023-08-01T02:55:57.000Z
[ "license:cc-by-sa-4.0", "region:us" ]
jonathancui
null
null
null
0
123
--- license: cc-by-sa-4.0 dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Abyssinian '1': Bengal '2': Birman '3': Bombay '4': British_Shorthair '5': Egyptian_Mau '6': Maine_Coon '7': Persian '8': Ragdoll '9': Russian_Blue '10': Siamese '11': Sphynx '12': american_bulldog '13': american_pit_bull_terrier '14': basset_hound '15': beagle '16': boxer '17': chihuahua '18': english_cocker_spaniel '19': english_setter '20': german_shorthaired '21': great_pyrenees '22': havanese '23': japanese_chin '24': keeshond '25': leonberger '26': miniature_pinscher '27': newfoundland '28': pomeranian '29': pug '30': saint_bernard '31': samoyed '32': scottish_terrier '33': shiba_inu '34': staffordshire_bull_terrier '35': wheaten_terrier '36': yorkshire_terrier splits: - name: train num_bytes: 378015144.64 num_examples: 3680 - name: test num_bytes: 412951221.999 num_examples: 3669 download_size: 790031129 dataset_size: 790966366.6389999 ---
iamshnoo/geomlama
2023-09-15T23:24:53.000Z
[ "region:us" ]
iamshnoo
null
null
null
0
123
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: candidate_answers dtype: string - name: context dtype: string - name: country dtype: string splits: - name: en num_bytes: 17223 num_examples: 125 - name: fa num_bytes: 24061 num_examples: 125 - name: hi num_bytes: 34719 num_examples: 125 - name: sw num_bytes: 17593 num_examples: 125 - name: zh num_bytes: 15926 num_examples: 125 - name: el num_bytes: 37639 num_examples: 150 download_size: 45285 dataset_size: 147161 --- data from the paper GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models (along with some new data and modifications for cleaning) [GitHub](https://github.com/WadeYin9712/GeoMLAMA) # Dataset Card for "geomlama" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
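A sketch of scoring model predictions against the `answer` field with case-insensitive exact match. This is an illustrative evaluation helper, not one shipped with the dataset.

```python
# Sketch: case-insensitive exact-match scoring against the `answer` field.
# Illustrative helper; the dataset does not ship an evaluation script here.

def exact_match(prediction: str, gold: str) -> bool:
    return prediction.strip().lower() == gold.strip().lower()

def accuracy(predictions, examples):
    hits = sum(exact_match(p, ex["answer"]) for p, ex in zip(predictions, examples))
    return hits / len(examples)

examples = [{"answer": "rice"}, {"answer": "tea"}]
acc = accuracy(["Rice", "coffee"], examples)  # 0.5
```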
brando/debug1_af
2023-08-09T23:09:42.000Z
[ "license:apache-2.0", "region:us" ]
brando
null
null
null
1
123
--- license: apache-2.0 --- If you find this useful, please cite it: ``` @software{brando2021ultimateutils, author={Brando Miranda}, title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence}, url={https://github.com/brando90/ultimate-utils}, year={2021} } ``` It is not yet intended for general use. It is released under the Apache license too. The files are: ``` Topic # of theorems # Statements Selected (floor) Polynomial 515 0 Polynomial_Factorial 47 11 ```
google/trueteacher
2023-09-06T19:39:38.000Z
[ "language:en", "license:cc-by-nc-4.0", "natural-language-inference", "news-articles-summarization", "arxiv:2305.11171", "arxiv:2210.11416", "region:us" ]
google
null
null
null
7
123
--- license: cc-by-nc-4.0 language: - en tags: - natural-language-inference - news-articles-summarization --- # **TrueTeacher** ## Dataset Summary This is a large-scale synthetic dataset for training **Factual Consistency Evaluation** models, introduced in the [TrueTeacher paper (Gekhman et al., 2023)](https://arxiv.org/pdf/2305.11171.pdf). ## Dataset Details The dataset contains model-generated summaries of articles from the train split of the **CNN/DailyMail** dataset [(Hermann et al., 2015)](https://proceedings.neurips.cc/paper_files/paper/2015/file/afdec7005cc9f14302cd0474fd0f3c96-Paper.pdf) which are annotated for factual consistency using **FLAN-PaLM 540B** [(Chung et al., 2022)](https://arxiv.org/pdf/2210.11416.pdf). Summaries were generated using summarization models with different capacities, which were created by fine-tuning **T5** [(Raffel et al., 2020)](https://jmlr.org/papers/volume21/20-074/20-074.pdf) on the **XSum** dataset [(Narayan et al., 2018)](https://aclanthology.org/D18-1206.pdf). We used the following 5 capacities: T5-11B, T5-3B, T5-large, T5-base and T5-small. ## Data format The data contains JSON lines with the following keys: - `"summarization_model"` - The summarization model used to generate the summary. - `"cnndm_id"` - The original id from the CNN/DailyMail dataset; this needs to be used in order to retrieve the corresponding article from CNN/DailyMail (which was used as the grounding document). - `"summary"` - The model-generated summary. - `"label"` - A binary label ('1' - Factually Consistent, '0' - Factually Inconsistent). Here is an example of a single data item: ```json { "summarization_model": "T5-11B", "cnndm_id": "f72048a23154de8699c307e2f41157abbfcae261", "summary": "Children's brains are being damaged by prolonged internet access, a former children's television presenter has warned.", "label": "1" } ``` ## Loading the dataset To use the dataset, you need to fetch the relevant documents from the CNN/DailyMail dataset.
The following code can be used for that purpose: ```python from datasets import load_dataset from tqdm import tqdm trueteacher_data = load_dataset("google/trueteacher", split='train') cnn_dailymail_data = load_dataset("cnn_dailymail", "3.0.0", split='train') cnn_dailymail_articles_by_id = {example['id']: example['article'] for example in cnn_dailymail_data} trueteacher_data_with_documents = [] for example in tqdm(trueteacher_data): example['document'] = cnn_dailymail_articles_by_id[example['cnndm_id']] trueteacher_data_with_documents.append(example) ``` ## Intended Use This dataset is intended for research use (**non-commercial**) in English. The recommended use case is training factual consistency evaluation models for summarization. ## Out-of-scope use Any use cases which violate the **cc-by-nc-4.0** license. Usage in languages other than English. ## Citation If you use this dataset for a research publication, please cite the TrueTeacher paper (using the bibtex entry below), as well as the CNN/DailyMail, XSum, T5 and FLAN papers mentioned above. ``` @misc{gekhman2023trueteacher, title={TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models}, author={Zorik Gekhman and Jonathan Herzig and Roee Aharoni and Chen Elkind and Idan Szpektor}, year={2023}, eprint={2305.11171}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
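Once loaded, a quick sanity check on the labels can be sketched as follows; the two inline records below mirror the documented JSON-lines schema and stand in for rows of the real data.

```python
# Sketch: label balance over records shaped like the documented JSON lines.
# The two inline records are illustrative stand-ins for rows of the real data.
import json

records = [
    json.loads('{"summarization_model": "T5-11B", "cnndm_id": "abc", "summary": "s", "label": "1"}'),
    json.loads('{"summarization_model": "T5-small", "cnndm_id": "def", "summary": "t", "label": "0"}'),
]

def consistent_fraction(rows):
    """Fraction of rows labeled factually consistent ('1')."""
    return sum(r["label"] == "1" for r in rows) / len(rows)

frac = consistent_fraction(records)  # 0.5
```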
cyrilzhang/TinyStories2-ascii
2023-09-27T13:05:51.000Z
[ "license:cdla-sharing-1.0", "region:us" ]
cyrilzhang
null
null
null
0
123
--- license: cdla-sharing-1.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 2188147629.6032934 num_examples: 2708167 - name: validation num_bytes: 22108524.398827404 num_examples: 27544 download_size: 1143073704 dataset_size: 2210256154.002121 --- # Dataset Card for "TinyStories2-ascii" - `TinyStoriesV2-GPT4-{train,validation}.txt` from [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) - ad-hoc Unicode -> ASCII normalization - remove empty/incomplete stories
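The card describes the cleaning only as "ad-hoc Unicode -> ASCII normalization". A common way to approximate such a step, shown here as an illustrative stand-in rather than the exact recipe used, is NFKD decomposition followed by dropping every non-ASCII byte.

```python
# Sketch of one common Unicode -> ASCII normalization: NFKD decomposition,
# then dropping every non-ASCII byte. The card calls its own step "ad-hoc"
# without details, so this is an illustrative stand-in, not the exact recipe.
import unicodedata

def to_ascii(text: str) -> str:
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

clean = to_ascii("naïve café")  # "naive cafe"
```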
ticoAg/tiger-sft-zh
2023-09-08T13:56:58.000Z
[ "language:zh", "license:apache-2.0", "region:us" ]
ticoAg
null
null
null
1
123
--- license: apache-2.0 language: - zh --- A collection of the Chinese SFT (sft-zh) fine-tuning data from the open-source [Tigerbot](https://github.com/TigerResearch/TigerBot) project. This collection covers the other Chinese SFT datasets open-sourced under this organization, so there is no need to download them separately. ## Usage ```python import datasets ds_sft = datasets.load_dataset('TigerResearch/sft_zh') ``` ## File breakdown | Type | Language | Dataset file | Count | | ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- | | Alpaca (Chinese) | Chinese | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-alpaca-zh-0.5m.json) | 500k | | Wiki QA | Chinese | [tigerbot-wiki-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-wiki-qa-zh-1k.json) | 1k | | Book QA | Chinese | [tigerbot-book-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-book-qa-1k.json) | 1k | | Riddles | Chinese | [tigerbot-riddle-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-riddle-qa-1k.json) | 1k | | Reading comprehension | Chinese | [tigerbot-superclue-c3-zh-5k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-superclue-c3-zh-5k.json) | 5k | | QA | Chinese | [tigerbot-hc3-zh-12k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-hc3-zh-12k.json) | 12k | | Zhihu QA | Chinese | [tigerbot-zhihu-zh-10k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-zhihu-zh-10k.json) | 10k | | Firefly SFT | Chinese | [tigerbot-firefly-zh-20k](https://huggingface.co/datasets/TigerResearch/tigerbot-firefly-zh-20k) | 20k |
GonzaloA/fake_news
2022-07-04T18:09:58.000Z
[ "region:us" ]
GonzaloA
null
null
null
6
122
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 30k<n<50k source_datasets: - original task_categories: - text-classification task_ids: - fact-checking - intent-classification pretty_name: GonzaloA / Fake News --- # Dataset Card for [Fake_News_TFG] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [GonzaloA / fake_news] - **Paper:** [Título del TFG] - **Leaderboard:** - **Point of Contact:** [Gonzalo Álvarez Hervás](mailto:g.alvarez.2018@alumnos.urjc.es) ### Dataset Summary The GonzaloA / Fake_News_TFG Dataset repository is an English-language dataset containing just over 40k unique news articles. These articles are classified as true (1) or false (0).
The current version is the first produced for a study of fake-news identification using Transformer models. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language of the dataset is English as generally spoken in the United States (code: en-US). ## Dataset Structure The dataset contains 40587 news records. Each record consists of three fields: the title of the news item, the text or content of the news item, and finally its label, which states whether the news is fake (0) or true (1). ### Data Instances For each instance, there is a string for the title, a string for the article and a label marking whether it is true or false. See the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=fake_news&config=3.0.0) to explore more examples. ``` {'id': '1', 'title': 'Palestinians switch off Christmas lights in Bethlehem in anti-Trump protest' 'text': 'RAMALLAH, West Bank (Reuters) - Palestinians switched off Christmas lights at Jesus traditional birthplace in Bethlehem on Wednesday night in protest at U.S. President Donald Trump s decision to recognize Jerusalem as Israel s capital. A Christmas tree adorned with lights outside Bethlehem s Church of the Nativity, where Christians believe Jesus was born, and another in Ramallah, next to the burial site of former Palestinian leader Yasser Arafat, were plunged into darkness. The Christmas tree was switched off on the order of the mayor today in protest at Trump s decision, said Fady Ghattas, Bethlehem s municipal media officer. He said it was unclear whether the illuminations would be turned on again before the main Christmas festivities. In a speech in Washington, Trump said he had decided to recognize Jerusalem as Israel s capital and move the U.S. embassy to the city. Israeli Prime Minister Benjamin Netanyahu said Trump s move marked the beginning of a new approach to the Israeli-Palestinian conflict and said it was an historic landmark .
Arabs and Muslims across the Middle East condemned the U.S. decision, calling it an incendiary move in a volatile region and the European Union and United Nations also voiced alarm at the possible repercussions for any chances of reviving Israeli-Palestinian peacemaking.' 'label': '1'} ``` ### Data Fields - `id`: an integer value to count the rows in the dataset - `title`: a string that summarizes the article - `text`: a string that contains the article - `label`: a boolean that marks the article as true or false ### Data Splits The GonzaloA/FakeNews dataset has 3 splits: train, validation, and test. Below are the statistics for the version 1.0 of the dataset: | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 24,353 | | Validation | 8,117 | | Test | 8,117 | ## Dataset Creation This dataset was created with Python, using the pandas library for the main data processing. It is also a mix of other datasets with the same scope, fake news. The whole process is available in this repository: https://github.com/G0nz4lo-4lvarez-H3rv4s/FakeNewsDetection ### Source Data The source data is a mix of multiple fake-news datasets from Kaggle, a platform for training your skills and learning about Artificial Intelligence. The main datasets this dataset is based on are: #### Initial Data Collection and Normalization Version 1.0.0 aimed to support supervised deep-learning methodologies and the study of new Transformer models in Natural Language Processing with news from the United States. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data This dataset is composed of three splits: a training split, for training your NLP model; a validation split, to check whether training was successful or the model has overfit; and a test split, to measure the errors of the fine-tuned model. ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
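The split counts reported above can be checked for internal consistency against the 40587 records the card mentions; a quick sketch:

```python
# Sketch: sanity-check that the published split sizes sum to the 40587
# records the card reports, and compute the train share.
splits = {"train": 24353, "validation": 8117, "test": 8117}

total = sum(splits.values())            # 40587
train_share = splits["train"] / total   # about 0.60
```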
persiannlp/parsinlu_sentiment
2022-10-22T15:13:40.000Z
[ "task_ids:sentiment-analysis", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|translated|mnli", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154", "region:us" ]
persiannlp
A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment).
@article{huggingface:dataset, title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian}, author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others}, year = {2020}, journal = {arXiv e-prints}, eprint = {2012.06154}, }
null
4
122
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|translated|mnli
task_categories:
- sentiment-analysis
task_ids:
- sentiment-analysis
---

# Dataset Card for ParsiNLU (Sentiment Analysis)

## Table of Contents
- [Dataset Card for ParsiNLU (Sentiment Analysis)](#dataset-card-for-persi_sentiment)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com

### Dataset Summary

A Persian sentiment analysis dataset.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text dataset is in Persian (`fa`).

## Dataset Structure

### Data Instances

Here is an example from the dataset:

```json
{
  "review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
  "review_id": "1538",
  "example_id": "4",
  "excel_id": "food_194",
  "question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
  "category": "حلوا شکری، ارده و کنجد",
  "aspect": "بسته بندی",
  "label": "-3",
  "guid": "food-dev-r1538-e4"
}
```

### Data Fields

- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with a particular attribute being addressed about the review.
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment towards this particular subject, in the context of the mentioned aspect. Here are the definitions of the labels:

```
 '-3': 'no sentiment expressed',
 '-2': 'very negative',
 '-1': 'negative',
 '0': 'neutral',
 '1': 'positive',
 '2': 'very positive',
 '3': 'mixed',
```

### Data Splits

See the data.

## Dataset Creation

### Curation Rationale

For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC BY-NC-SA 4.0 License

### Citation Information

```bibtex
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```

### Contributions

Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
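As a minimal illustration of the label scheme above, the following sketch decodes the `label` field of the example instance shown earlier (the mapping is copied from the card; the helper function and the trimmed example dict are our own):

```python
# Label definitions copied from the "Data Fields" section above.
LABEL_MEANINGS = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

def decode_label(label: str) -> str:
    """Map a raw label string to its human-readable meaning."""
    return LABEL_MEANINGS[label]

# The example instance above carries "label": "-3".
example = {"guid": "food-dev-r1538-e4", "label": "-3"}
print(decode_label(example["label"]))  # no sentiment expressed
```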
gigant/horse2zebra
2022-10-24T17:37:53.000Z
[ "task_categories:image-to-image", "license:cc", "GAN", "unpaired-image-to-image-translation", "arxiv:1703.10593", "region:us" ]
gigant
Two unpaired sets of photos of respectively horses and zebras, designed for unpaired image-to-image translation, as seen in the paper introducing CycleGAN
@inproceedings{CycleGAN2017, title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks}, author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A}, booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on}, year={2017} }
null
1
122
---
license: cc
task_categories:
- image-to-image
task_ids: []
pretty_name: Horse2Zebra
tags:
- GAN
- unpaired-image-to-image-translation
---

## Dataset Description

- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593

### Dataset Summary

This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/). For more details about the dataset, you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).

### How to use

You can easily load the dataset with the following lines:

```python
from datasets import load_dataset

data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```

Two splits are available, `"train"` and `"test"`.

### Citation Information

```
@inproceedings{CycleGAN2017,
  title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
  year={2017}
}
```
tner/fin
2022-08-15T17:50:31.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
tner
[FIN NER dataset](https://aclanthology.org/U15-1010.pdf)
@inproceedings{salinas-alvarado-etal-2015-domain, title = "Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment", author = "Salinas Alvarado, Julio Cesar and Verspoor, Karin and Baldwin, Timothy", booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2015", month = dec, year = "2015", address = "Parramatta, Australia", url = "https://aclanthology.org/U15-1010", pages = "84--90", }
null
4
122
---
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: FIN
---

# Dataset Card for "tner/fin"

## Dataset Description

- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/U15-1010.pdf](https://aclanthology.org/U15-1010.pdf)
- **Dataset:** FIN
- **Domain:** Financial News
- **Number of Entity Types:** 4

### Dataset Summary

FIN NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project. The FIN dataset contains training (FIN5) and test (FIN3) splits only, so we randomly sampled half the number of test instances from the training set to create a validation set.

- Entity Types: `ORG`, `LOC`, `PER`, `MISC`

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    "tags": [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "tokens": ["1", ".", "1", ".", "4", "Borrower", "engages", "in", "criminal", "conduct", "or", "is", "involved", "in", "criminal", "activities", ";"]
}
```

### Label ID

The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/fin/raw/main/dataset/label.json).

```python
{
    "O": 0,
    "B-PER": 1,
    "B-LOC": 2,
    "B-ORG": 3,
    "B-MISC": 4,
    "I-PER": 5,
    "I-LOC": 6,
    "I-ORG": 7,
    "I-MISC": 8
}
```

### Data Splits

| name |train|validation|test|
|---------|----:|---------:|---:|
|fin |1014 | 303| 150|

### Citation Information

```
@inproceedings{salinas-alvarado-etal-2015-domain,
    title = "Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment",
    author = "Salinas Alvarado, Julio Cesar and Verspoor, Karin and Baldwin, Timothy",
    booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2015",
    month = dec,
    year = "2015",
    address = "Parramatta, Australia",
    url = "https://aclanthology.org/U15-1010",
    pages = "84--90",
}
```
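Using the label2id mapping above, the integer `tags` of an instance can be decoded back into IOB2 label strings. A small sketch (the mapping and the example instance are from this card; the helper function is our own):

```python
# label2id mapping from the "Label ID" section above, inverted to id2label.
label2id = {
    "O": 0, "B-PER": 1, "B-LOC": 2, "B-ORG": 3, "B-MISC": 4,
    "I-PER": 5, "I-LOC": 6, "I-ORG": 7, "I-MISC": 8,
}
id2label = {i: label for label, i in label2id.items()}

def decode_tags(tags):
    """Convert a list of integer tag ids into IOB2 label strings."""
    return [id2label[t] for t in tags]

# Example instance from the "Data Instances" section.
tokens = ["1", ".", "1", ".", "4", "Borrower", "engages", "in", "criminal",
          "conduct", "or", "is", "involved", "in", "criminal", "activities", ";"]
tags = [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Print only the entity tokens.
for token, label in zip(tokens, decode_tags(tags)):
    if label != "O":
        print(token, label)  # Borrower B-ORG
```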
tomekkorbak/detoxify-pile-chunk3-1900000-1950000
2022-10-04T23:19:29.000Z
[ "region:us" ]
tomekkorbak
null
null
null
0
122
Entry not found
Cohere/miracl-en-queries-22-12
2023-02-06T11:54:43.000Z
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:en", "license:apache-2.0", "region:us" ]
Cohere
null
null
null
0
122
---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---

# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder

We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.

The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).

For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

Dataset info:

> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.

## Embeddings

We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset

In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```

Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search

Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.

To search in the documents, you must use **dot-product**. You can then compare the query embeddings either with a vector database (recommended) or by directly computing the dot product.

A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB

from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])

# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}")  # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```

## Performance

In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based loss), as well as hit@3: is at least one relevant document in the top-3 results? We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.

Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
BioDEX/BioDEX-ICSR
2023-05-30T15:20:25.000Z
[ "region:us" ]
BioDEX
null
null
null
1
122
--- dataset_info: features: - name: title dtype: string - name: abstract dtype: string - name: fulltext dtype: string - name: target dtype: string - name: pmid dtype: string - name: fulltext_license dtype: string - name: title_normalized dtype: string - name: issue dtype: string - name: pages dtype: string - name: journal dtype: string - name: authors dtype: string - name: pubdate dtype: string - name: doi dtype: string - name: affiliations dtype: string - name: medline_ta dtype: string - name: nlm_unique_id dtype: string - name: issn_linking dtype: string - name: country dtype: string - name: mesh_terms dtype: string - name: publication_types dtype: string - name: chemical_list dtype: string - name: keywords dtype: string - name: references dtype: string - name: delete dtype: bool - name: pmc dtype: string - name: other_id dtype: string - name: safetyreportid dtype: int64 - name: fulltext_processed dtype: string splits: - name: test num_bytes: 155748936 num_examples: 3628 - name: train num_bytes: 374859364 num_examples: 9624 - name: validation num_bytes: 96385392 num_examples: 2407 download_size: 337571954 dataset_size: 626993692 --- # Dataset Card for "BioDEX-ICSR" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jondurbin/airoboros-gpt4-m2.0
2023-07-31T07:01:42.000Z
[ "license:other", "region:us" ]
jondurbin
null
null
null
18
122
---
license: other
---

## Overview

This is a merge of https://hf.co/datasets/jondurbin/airoboros-gpt4-1.4.1 and https://hf.co/datasets/jondurbin/airoboros-gpt4-2.0

### Category breakdown

![chart](merged-breakdown.png)

### Licence and usage restrictions

The data was generated by gpt-4 via OpenAI API calls. The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise unallowably licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me from any and all license related issues.

Attribution would be nice if you use some or all of the data.
roborovski/celeba-faces-captioned
2023-08-10T03:02:58.000Z
[ "region:us" ]
roborovski
null
null
null
0
122
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: pixel_values sequence: sequence: sequence: float32 - name: captions dtype: string splits: - name: train num_bytes: 17810785215.0 num_examples: 10000 download_size: 475025277 dataset_size: 17810785215.0 --- # Dataset Card for "celeba-faces-captioned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
indonlp/nusatranslation_mt
2023-09-15T09:56:14.000Z
[ "license:apache-2.0", "region:us" ]
indonlp
Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiments using existing multilingual large language models demonstrate the need to extend these models to more underrepresented languages. We introduce a novel, high-quality, human-curated corpus, NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extends language coverage to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej). For the rhetoric mode classification task, we cover 5 rhetoric modes, i.e., narrative, persuasive, argumentative, descriptive, and expository.
@unpublished{anonymous2023nusawrites, title={NusaWrites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages}, author={Anonymous}, journal={OpenReview Preprint}, year={2023}, note={anonymous preprint under review} }
null
0
122
--- license: apache-2.0 ---
hxyue1/ask_theology
2023-09-20T22:13:47.000Z
[ "region:us" ]
hxyue1
null
null
null
1
122
--- dataset_info: features: - name: title dtype: string - name: authors dtype: string - name: chunked dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 39776335 num_examples: 6833 download_size: 38769605 dataset_size: 39776335 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ask_theology" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
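Since each chunk in this dataset ships with a precomputed `embeddings` vector, retrieval over the corpus reduces to a similarity search. A hedged sketch with tiny synthetic 2-d vectors standing in for the real `embeddings` column (the embedding model and dimensionality are not documented on this card, so the vectors and helper functions here are purely illustrative):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def most_similar(query_emb, chunk_embs):
    """Index of the chunk embedding most cosine-similar to the query."""
    sims = [cosine(query_emb, emb) for emb in chunk_embs]
    return sims.index(max(sims))

# Toy stand-ins for the dataset's `embeddings` column.
embs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(most_similar([0.9, 0.1], embs))  # 0
```

In practice the query vector would have to come from the same (undocumented) embedding model that produced the `embeddings` column.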
chrisgru/chat-v2.2
2023-09-29T11:27:16.000Z
[ "region:us" ]
chrisgru
null
null
null
0
122
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: valid path: data/valid-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 6752911 num_examples: 4386 - name: test num_bytes: 19656299 num_examples: 5339 - name: valid num_bytes: 1929381 num_examples: 500 download_size: 12920714 dataset_size: 28338591 --- # Dataset Card for "chat-v2.2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arcd
2023-04-05T09:35:12.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:mit", "region:us" ]
null
Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles.
@inproceedings{mozannar-etal-2019-neural, title = {Neural {A}rabic Question Answering}, author = {Mozannar, Hussein and Maamary, Elie and El Hajal, Karl and Hajj, Hazem}, booktitle = {Proceedings of the Fourth Arabic Natural Language Processing Workshop}, month = {aug}, year = {2019}, address = {Florence, Italy}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/W19-4612}, doi = {10.18653/v1/W19-4612}, pages = {108--118}, abstract = {This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.} }
null
3
121
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar language_bcp47: - ar-SA license: - mit multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: arcd pretty_name: ARCD dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 config_name: plain_text splits: - name: train num_bytes: 811064 num_examples: 693 - name: validation num_bytes: 885648 num_examples: 702 download_size: 1942399 dataset_size: 1696712 --- # Dataset Card for "arcd" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/husseinmozannar/SOQAL/tree/master/data](https://github.com/husseinmozannar/SOQAL/tree/master/data) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.94 MB - **Size of the generated dataset:** 1.70 MB - **Total amount of disk used:** 3.64 MB ### Dataset Summary Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 1.94 MB - **Size of the generated dataset:** 1.70 MB - **Total amount of disk used:** 3.64 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": "{\"answer_start\": [34], \"text\": [\"صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر،\"]}...", "context": "\"حمزة بن عبد المطلب الهاشمي القرشي صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر، وهو خير أع...", "id": "621723207492", "question": "من هو حمزة بن عبد المطلب؟", "title": "حمزة بن عبد المطلب" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. 
### Data Splits | name | train | validation | | ---------- | ----: | ---------: | | plain_text | 693 | 702 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{mozannar-etal-2019-neural, title = "Neural {A}rabic Question Answering", author = "Mozannar, Hussein and Maamary, Elie and El Hajal, Karl and Hajj, Hazem", booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop", month = aug, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W19-4612", doi = "10.18653/v1/W19-4612", pages = "108--118", abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@tayciryahmed](https://github.com/tayciryahmed) for adding this dataset.
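Because `answers` follows the SQuAD-style extractive format described in the "Data Fields" section, each answer should be recoverable from `context` via `answer_start`. A small sketch of that invariant, using a synthetic English example for readability (the field names follow the `plain_text` schema above; the example text itself is illustrative, not from the dataset):

```python
def extract_answer(context: str, answer_start: int, text: str) -> str:
    """Slice the answer span out of the context, SQuAD-style."""
    return context[answer_start : answer_start + len(text)]

# Illustrative instance mirroring the plain_text schema above.
example = {
    "context": "The sky is blue today.",
    "answers": {"text": ["blue"], "answer_start": [11]},
}

span = extract_answer(
    example["context"],
    example["answers"]["answer_start"][0],
    example["answers"]["text"][0],
)
assert span == example["answers"]["text"][0]
print(span)  # blue
```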
web_of_science
2023-04-05T13:42:58.000Z
[ "language:en", "region:us" ]
null
The Web Of Science (WOS) dataset is a collection of data of published papers available from the Web of Science. WOS has been released in three versions: WOS-46985, WOS-11967 and WOS-5736. WOS-46985 is the full dataset. WOS-11967 and WOS-5736 are two subsets of WOS-46985.
@inproceedings{kowsari2017HDLTex, title={HDLTex: Hierarchical Deep Learning for Text Classification}, author={Kowsari, Kamran and Brown, Donald E and Heidarysafa, Mojtaba and Jafari Meimandi, Kiana and Gerber, Matthew S and Barnes, Laura E}, booktitle={Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on}, year={2017}, organization={IEEE} }
null
2
121
--- language: - en paperswithcode_id: web-of-science-dataset pretty_name: Web of Science Dataset dataset_info: - config_name: WOS5736 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 8051533 num_examples: 5736 download_size: 60222421 dataset_size: 8051533 - config_name: WOS11967 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 16248391 num_examples: 11967 download_size: 60222421 dataset_size: 16248391 - config_name: WOS46985 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 65471726 num_examples: 46985 download_size: 60222421 dataset_size: 65471726 --- # Dataset Card for "web_of_science" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset 
Description - **Homepage:** [https://data.mendeley.com/datasets/9rw3vkcfy4/6](https://data.mendeley.com/datasets/9rw3vkcfy4/6) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 180.67 MB - **Size of the generated dataset:** 89.81 MB - **Total amount of disk used:** 270.48 MB ### Dataset Summary Copyright (c) 2017 Kamran Kowsari Permission is hereby granted, free of charge, to any person obtaining a copy of this dataset and associated documentation files (the "Dataset"), to deal in the dataset without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Dataset, and to permit persons to whom the dataset is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Dataset. If you use this dataset please cite: Referenced paper: HDLTex: Hierarchical Deep Learning for Text Classification Description of Dataset: There are three datasets: WOS-11967, WOS-46985, and WOS-5736. Each folder contains: -X.txt -Y.txt -YL1.txt -YL2.txt X is the input data, which includes text sequences Y is the target value YL1 is the target value of level one (parent label) YL2 is the target value of level two (child label) Web of Science Dataset WOS-5736 - This dataset contains 5,736 documents with 11 categories which include 3 parent categories.
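The four parallel per-folder files described above can be combined into records that mirror this card's features. The following is a minimal sketch under the assumption that each file holds one value per line; the sample contents are illustrative, not taken from the dataset:

```python
# Sketch: zip the parallel files (X.txt, Y.txt, YL1.txt, YL2.txt)
# described above into records matching the dataset's features.
def build_records(texts, labels, level1, level2):
    """Combine per-line parallel lists into feature dicts."""
    return [
        {
            "input_data": x,
            "label": int(y),
            "label_level_1": int(y1),
            "label_level_2": int(y2),
        }
        for x, y, y1, y2 in zip(texts, labels, level1, level2)
    ]

# In practice each list would come from e.g. open("X.txt").read().splitlines().
records = build_records(["a sample abstract"], ["3"], ["0"], ["3"])
```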
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### WOS11967 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 16.25 MB - **Total amount of disk used:** 76.48 MB An example of 'train' looks as follows. ``` ``` #### WOS46985 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 65.50 MB - **Total amount of disk used:** 125.72 MB An example of 'train' looks as follows. ``` ``` #### WOS5736 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 8.05 MB - **Total amount of disk used:** 68.27 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### WOS11967 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. #### WOS46985 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. #### WOS5736 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. 
### Data Splits | name |train| |--------|----:| |WOS11967|11967| |WOS46985|46985| |WOS5736 | 5736| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{kowsari2017HDLTex, title={HDLTex: Hierarchical Deep Learning for Text Classification}, author={Kowsari, Kamran and Brown, Donald E and Heidarysafa, Mojtaba and Jafari Meimandi, Kiana and Gerber, Matthew S and Barnes, Laura E}, booktitle={Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on}, year={2017}, organization={IEEE} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
SLPL/syntran-fa
2022-11-03T06:34:17.000Z
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_categories:text-generation", "multilinguality:monolingual", "size_categories:30k<n<50k", "language:fa", "license:mit", "conditional-text-generation", "conversational-question-answering", "region:us" ]
SLPL
null
null
null
6
121
--- language: - fa license: mit multilinguality: - monolingual size_categories: - 30k<n<50k task_categories: - question-answering - text2text-generation - text-generation task_ids: [] pretty_name: SynTranFa tags: - conditional-text-generation - conversational-question-answering --- # SynTran-fa Syntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers. You can use this dataset with the code below: ```python import datasets data = datasets.load_dataset('SLPL/syntran-fa', split="train") ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL) - **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa) - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to enhance the size of QA datasets in Farsi. SynTran-fa is a question-answering dataset that accumulates short answers from existing Farsi QA datasets and proposes a complete fluent answer for each (question, short_answer) pair. This dataset contains nearly 50,000 question-answer entries.
The datasets used as our sources are listed in the [Source Data section](#source-data). The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), where a "parser + syntactic rules" module is used to produce different fluent answers from a question and a short answer. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser to parse each question and generate a response from it using the short answers (sentences without verbs, up to ~4 words). One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence per answer) or by training a seq2seq model that does what our rule-based system does (by defining a new text-to-text task). ### Supported Tasks and Leaderboards This dataset can be used for the question-answering task, especially when the goal is to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses, as done by [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf).
### Languages + Persian (fa) ## Dataset Structure Each row of the dataset looks like the following: ```json { 'id': 0, 'question': 'باشگاه هاکی ساوتهمپتون چه نام دارد؟', 'short_answer': 'باشگاه هاکی ساوتهمپتون', 'fluent_answer': 'باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.', 'bert_loss': 1.110097069682014 } ``` + `id` : the entry id in the dataset + `question` : the question + `short_answer` : the short answer corresponding to the `question` (the primary answer) + `fluent_answer` : the fluent (long) answer generated from both the `question` and the `short_answer` (the secondary answer) + `bert_loss` : the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) gives when the `fluent_answer` is fed to it; the higher it is, the less fluent the sentence is likely to be. Note: the dataset is sorted in increasing order of `bert_loss`, so earlier rows are more likely to be fluent. ### Data Splits Currently, the dataset provides only the `train` split. A `test` split will be added soon. ## Dataset Creation ### Source Data The source datasets that we used are as follows: + [PersianQA](https://github.com/sajjjadayobi/PersianQA) + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745) #### Initial Data Collection and Normalization We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source QA datasets in Farsi and used some rules featuring the question's parse tree to make long (fluent) answers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset is entirely a subset of known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we do not take responsibility for any of it.
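Since the rows are sorted in increasing order of `bert_loss`, one simple way to keep only the most fluent answers is a threshold filter. A minimal sketch on plain dicts follows; the 1.5 threshold is an arbitrary illustration, not a recommended value:

```python
# Sketch: keep only the most fluent rows by thresholding `bert_loss`.
# Rows are shown as plain dicts mirroring the dataset's fields; the
# threshold of 1.5 is an arbitrary illustration.
rows = [
    {"id": 0, "fluent_answer": "...", "bert_loss": 1.11},
    {"id": 1, "fluent_answer": "...", "bert_loss": 2.30},
]

def keep_fluent(rows, max_loss=1.5):
    """Return the rows whose bert_loss is below max_loss."""
    return [r for r in rows if r["bert_loss"] < max_loss]

fluent = keep_fluent(rows)
```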
## Additional Information ### Dataset Curators The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This was Farhan Farsi's first internship project. ### Licensing Information MIT ### Citation Information [More Information Needed] ### Contributions Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
jakartaresearch/indoqa
2022-12-17T06:07:27.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:id", "license:cc-by-nd-4.0", "indoqa", "qa", "question-answering", "indonesian", "region:us" ]
jakartaresearch
This dataset is built for the question answering task.
null
null
1
121
--- annotations_creators: - expert-generated language: - id language_creators: - found license: - cc-by-nd-4.0 multilinguality: - monolingual pretty_name: Indonesian Question Answering Dataset size_categories: - 1K<n<10K source_datasets: - original tags: - indoqa - qa - question-answering - indonesian task_categories: - question-answering task_ids: - extractive-qa --- # Dataset Card for Indonesian Question Answering Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial 
Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fhrzn](https://github.com/fhrzn), [@Kalzaik](https://github.com/Kalzaik), [@ibamibrahim](https://github.com/ibamibrahim), and [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
bigbio/drugprot
2023-01-06T03:30:02.000Z
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
bigbio
The DrugProt corpus consists of (a) expert-labelled chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types.
@inproceedings{miranda2021overview, title={Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations}, author={Miranda, Antonio and Mehryary, Farrokh and Luoma, Jouni and Pyysalo, Sampo and Valencia, Alfonso and Krallinger, Martin}, booktitle={Proceedings of the seventh BioCreative challenge evaluation workshop}, year={2021} }
null
0
121
--- language: - en bigbio_language: - English license: cc-by-4.0 multilinguality: monolingual bigbio_license_shortname: CC_BY_4p0 pretty_name: DrugProt homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/ bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - RELATION_EXTRACTION --- # Dataset Card for DrugProt ## Dataset Description - **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/ - **Pubmed:** True - **Public:** True - **Tasks:** NER,RE The DrugProt corpus consists of (a) expert-labelled chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types. The corpus was introduced in the context of BioCreative VII Track 1 (Text mining drug and chemical-protein interactions). ## Citation Information ``` @inproceedings{miranda2021overview, title={Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations}, author={Miranda, Antonio and Mehryary, Farrokh and Luoma, Jouni and Pyysalo, Sampo and Valencia, Alfonso and Krallinger, Martin}, booktitle={Proceedings of the seventh BioCreative challenge evaluation workshop}, year={2021} } ```
urialon/gov_report_validation
2023-02-28T15:40:57.000Z
[ "region:us" ]
urialon
null
null
null
0
121
Entry not found
diffusers/dog-example
2023-04-18T15:53:56.000Z
[ "region:us" ]
diffusers
null
null
null
2
121
Entry not found
JoffreyMa/BGDIA704_faces
2023-05-15T12:35:49.000Z
[ "region:us" ]
JoffreyMa
null
null
null
0
121
--- dataset_info: features: - name: image dtype: image - name: label dtype: int64 - name: genre dtype: int64 splits: - name: train num_bytes: 942521828.16 num_examples: 192576 download_size: 900725876 dataset_size: 942521828.16 --- # Dataset Card for "BGDIA704_faces" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
neovalle/H4rmony
2023-09-25T15:23:13.000Z
[ "task_categories:reinforcement-learning", "task_categories:text-classification", "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "Ecolinguistics", "Sustainability", "ecolinguistic", "environment", "doi:10.57967/hf/1148", "region:us" ]
neovalle
null
null
null
2
121
--- license: cc-by-4.0 task_categories: - reinforcement-learning - text-classification - question-answering language: - en tags: - Ecolinguistics - Sustainability - ecolinguistic - environment size_categories: - 1K<n<10K --- # Dataset Card for Dataset H4rmony ### Dataset Summary The H4rmony dataset is a collection of prompts and completions aimed at integrating ecolinguistic principles into AI Large Language Models (LLMs). Developed with collaborative efforts from ecolinguistics enthusiasts and experts, it offers a series of prompts and corresponding pairwise responses ranked in terms of environmental awareness and alignment. This ranking provides a clear metric for the desired alignment and establishes a framework for fine-tuning LLMs, particularly in reinforcement learning via a reward model. This dataset aims to bridge the gap between AI and ecolinguistic values, pushing the envelope for creating generative AI models that are environmentally and sustainability aware by design. H4rmony is not just a dataset; it's a project towards harmonising AI with nature by means of fine-tuning. We believe in the potential of using ecolinguistics to fine-tune and influence LLMs towards more eco-aware outputs. This dataset is currently a work in progress. ### Languages Currently only English, but the dataset will be extended to be multilingual. ## Dataset Structure ### Data Fields ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/tF_DPhg_R8jAyNRvVAuDz.png) ### Data Splits There are no splits in the dataset. Splits can be created when loading it: `dataset = load_dataset('neovalle/H4rmony', split='train').train_test_split(test_size=0.2)` ## Dataset Creation ### Curation Rationale Given the multidisciplinary nature of the challenge, the H4rmony dataset is being enriched by contributions from environmentalists, AI specialists, and ecolinguistics enthusiasts. This collective effort ensures the data is both technically sound and ecologically meaningful.
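The `train_test_split` call mentioned above relies on the `datasets` library. As an illustration of the same idea, here is a standard-library-only sketch of an 80/20 split; the seed and sizes are arbitrary assumptions:

```python
import random

# Stdlib sketch of the 80/20 split described above, applied to an
# in-memory list of rows instead of a datasets.Dataset object.
def train_test_split(rows, test_size=0.2, seed=42):
    """Shuffle a copy of rows and split it into train/test lists."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return {"train": shuffled[n_test:], "test": shuffled[:n_test]}

split = train_test_split(list(range(10)))
```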
### Source Data #### Initial Data Collection and Normalization The core of the H4rmony dataset originated from active collaborations within the ecolinguistics community. Contributors were asked to submit prompts that would help uncover AI models' alignment with ecolinguistic values. A number of prompts and completions were AI-generated using prompt engineering. To this initial group, human-crafted prompts were added. ### Personal and Sensitive Information This dataset doesn't contain sensitive information. ## Considerations for Using the Data This dataset is under construction and hasn't been fully tested yet. The dataset might contain offensive language. ### Social Impact of Dataset The H4rmony project aims to help AI LLMs give priority to the crucial importance of environmental consciousness. By serving as the fourth "H", "Harmony with nature", it complements the existing triad of Helpfulness, Honesty, and Harmlessness already well known in ethical AI development. ### Discussion of Biases No known biases. ### Other Known Limitations The dataset is still under construction, and the current number of rows might not be enough for some use cases. ## Additional Information ### Dataset Curators Jorge Vallego - airesearch@neovalle.co.uk ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information dataset neovalle/H4rmony - airesearch@neovalle.co.uk This project has its roots in the article "Ecolinguistics and AI: Integrating eco-awareness in natural language processing" https://www.ecoling.net/_files/ugd/ae088a_13cc4828a28e4955804d38e8721056cf.pdf
liweili/c4_200m
2022-10-23T11:00:46.000Z
[ "task_categories:text-generation", "source_datasets:allenai/c4", "language:en", "grammatical-error-correction", "region:us" ]
liweili
GEC Dataset Generated from C4
@InProceedings{huggingface:c4_200m_dataset, title = {c4_200m}, author={Li Liwei}, year={2021} }
null
23
120
--- language: - en source_datasets: - allenai/c4 task_categories: - text-generation pretty_name: C4 200M Grammatical Error Correction Dataset tags: - grammatical-error-correction --- # C4 200M # Dataset Summary c4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks. The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction) # Description As discussed before, this dataset contains 185 million sentence pairs. Each example has two attributes: `input` and `output`. Here is a sample from the dataset: ``` { "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.", "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk." } ```
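As an illustration of how such a pair might be consumed, here is a sketch that formats one example for seq2seq GEC fine-tuning. The `"gec: "` task prefix is an assumed convention for illustration, not part of the dataset:

```python
# Sketch: format an (input, output) pair from the dataset for seq2seq
# grammatical error correction fine-tuning. The "gec: " prefix is an
# illustrative convention, not part of the dataset itself.
def to_seq2seq(example, prefix="gec: "):
    """Map a dataset example to source/target strings for a seq2seq model."""
    return {
        "source": prefix + example["input"],
        "target": example["output"],
    }

pair = to_seq2seq({
    "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
    "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk.",
})
```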
transformersbook/codeparrot-valid
2022-02-05T16:23:18.000Z
[ "region:us" ]
transformersbook
null
null
null
0
120
# CodeParrot Dataset This is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information.
jordanparker6/publaynet
2022-07-19T04:20:00.000Z
[ "task_categories:image-to-text", "size_categories:100B<n<1T", "language:en", "license:other", "arxiv:1908.07836", "region:us" ]
jordanparker6
null
null
null
8
120
--- title: PubLayNet license: other annotations_creators: [] language: - en size_categories: - 100B<n<1T source_datasets: [] task_categories: - image-to-text task_ids: [] --- # PubLayNet PubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is the [PubMed Central Open Access Subset (commercial use collection)](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper ["PubLayNet: largest dataset ever for document layout analysis."](https://arxiv.org/abs/1908.07836). The public dataset is distributed in tar.gz format, which doesn't fit nicely with Hugging Face streaming. Modifications have been made to optimise delivery of the dataset for the Hugging Face datasets API. The original files can be found [here](https://developer.ibm.com/exchanges/data/all/publaynet/). Licence: [Community Data License Agreement – Permissive – Version 1.0 License](https://cdla.dev/permissive-1-0/) Author: IBM GitHub: https://github.com/ibm-aur-nlp/PubLayNet @article{ zhong2019publaynet, title = { PubLayNet: largest dataset ever for document layout analysis }, author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno }, journal = { arXiv preprint arXiv:1908.07836}, year = { 2019 } }
CM/codexglue_code2text_java
2023-04-22T01:51:28.000Z
[ "region:us" ]
CM
null
null
null
0
120
--- dataset_info: features: - name: id dtype: int32 - name: repo dtype: string - name: path dtype: string - name: func_name dtype: string - name: original_string dtype: string - name: language dtype: string - name: code dtype: string - name: code_tokens sequence: string - name: docstring dtype: string - name: docstring_tokens sequence: string - name: sha dtype: string - name: url dtype: string splits: - name: train num_bytes: 452553835 num_examples: 164923 - name: validation num_bytes: 13366344 num_examples: 5183 - name: test num_bytes: 29080753 num_examples: 10955 download_size: 154860056 dataset_size: 495000932 --- # Dataset Card for "codexglue_code2text_java" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jamescalam/langchain-docs-23-06-27
2023-06-27T15:51:24.000Z
[ "region:us" ]
jamescalam
null
null
null
4
120
Entry not found
quocanh34/soict_test_dataset
2023-08-28T07:28:50.000Z
[ "region:us" ]
quocanh34
null
null
null
0
120
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: id dtype: string splits: - name: train num_bytes: 174203109.625 num_examples: 1299 download_size: 164141076 dataset_size: 174203109.625 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "soict_test_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Arabic-Clip/ImageCaptions-7M-Translations-Arabic-subset-150000
2023-09-21T09:53:24.000Z
[ "region:us" ]
Arabic-Clip
null
null
null
0
120
Entry not found
Rianknow/sarcastic_headline
2023-09-30T04:43:27.000Z
[ "region:us" ]
Rianknow
null
null
null
0
120
Entry not found
LanceaKing/asvspoof2019
2022-11-11T08:41:54.000Z
[ "task_categories:audio-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|vctk", "language:en", "license:odc-by", "voice-anti-spoofing", "arxiv:1911.01601", "region:us" ]
LanceaKing
This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasures Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org), organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch in 2019.
@InProceedings{Todisco2019, Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection}, Author = {Todisco, Massimiliano and Wang, Xin and Sahidullah, Md and Delgado, Héctor and Nautsch, Andreas and Yamagishi, Junichi and Evans, Nicholas and Kinnunen, Tomi and Lee, Kong Aik}, booktitle = {Proc. of Interspeech 2019}, Year = {2019} }
null
0
119
--- annotations_creators: - other language_creators: - other language: - en license: - odc-by multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|vctk task_categories: - audio-classification task_ids: [] pretty_name: asvspoof2019 tags: - voice-anti-spoofing --- # Dataset Card for asvspoof2019 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336 - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/1911.01601 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasures Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org), organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch in 2019.
### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances ``` {'speaker_id': 'LA_0091', 'audio_file_name': 'LA_T_8529430', 'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac', 'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204, 0.01339722, 0.01461792], dtype=float32), 'sampling_rate': 16000}, 'system_id': 'A01', 'key': 1} ``` ### Data Fields Logical access (LA): - `speaker_id`: `LA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `system_id`: ID of the speech spoofing system (A01 - A19), or, for bonafide speech SYSTEM-ID is left blank ('-') - `key`: 'bonafide' for genuine speech, or, 'spoof' for spoofing speech Physical access (PA): - `speaker_id`: `PA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `environment_id`: a triplet (S,R,D_s), where each element takes one letter in the set {a,b,c} as its categorical value, defined as | | a | b | c | | -------------------------------- | ------ | ------- | -------- | | S: Room size (square meters) | 2-5 | 5-10 | 10-20 | | R: T60 (ms) | 50-200 | 200-600 | 600-1000 | | D_s: Talker-to-ASV distance (cm) | 10-50 | 50-100 | 100-150 | - `attack_id`: a duple (D_a,Q), where each element takes one letter in the set {A,B,C} as its categorical value, defined as | | A | B | C | | ----------------------------------- | ------- | ------ | ----- | | D_a: Attacker-to-talker distance (cm) | 10-50 | 50-100 | > 100 | | Q: Replay device quality | perfect | high | low | for bonafide speech, `attack_id` is left blank ('-') - `key`: 'bonafide' for genuine speech, or, 'spoof' for spoofing speech ### Data Splits | | Training set | Development set | Evaluation set | | -------- | ------------ | --------------- | -------------- | | Bonafide | 2580 | 2548 | 7355 | | Spoof | 22800 | 22296 | 63882 | | Total | 25380 | 24844 | 71237 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/ ### Citation Information ``` @InProceedings{Todisco2019, Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection}, Author = {Todisco, Massimiliano and Wang, Xin and Sahidullah, Md and Delgado, Héctor and Nautsch, Andreas and Yamagishi, Junichi and Evans, Nicholas and Kinnunen, Tomi and Lee, Kong Aik}, booktitle = {Proc. of Interspeech 2019}, Year = {2019} } ```
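The PA metadata codes described above can be decoded mechanically. Below is a minimal, hypothetical helper (it is not part of any official ASVspoof tooling) that maps an `environment_id` triplet and an `attack_id` duple onto the value ranges from the tables in this card:

```python
# Hypothetical helper that decodes the PA `environment_id` triplet (S, R, D_s)
# and `attack_id` duple (D_a, Q) into the ranges listed in the card's tables.

ENV_TABLE = {
    "S: room size (m^2)":           {"a": "2-5",    "b": "5-10",    "c": "10-20"},
    "R: T60 (ms)":                  {"a": "50-200", "b": "200-600", "c": "600-1000"},
    "D_s: talker-to-ASV dist (cm)": {"a": "10-50",  "b": "50-100",  "c": "100-150"},
}

ATTACK_TABLE = {
    "D_a: attacker-to-talker dist (cm)": {"A": "10-50",   "B": "50-100", "C": "> 100"},
    "Q: replay device quality":          {"A": "perfect", "B": "high",   "C": "low"},
}

def decode_environment(env_id):
    """Decode e.g. 'abc' into the (S, R, D_s) ranges."""
    return {name: table[letter]
            for (name, table), letter in zip(ENV_TABLE.items(), env_id)}

def decode_attack(attack_id):
    """Decode e.g. 'AC' into the (D_a, Q) values; '-' marks bonafide speech."""
    if attack_id == "-":
        return {"bonafide": True}
    return {name: table[letter]
            for (name, table), letter in zip(ATTACK_TABLE.items(), attack_id)}

print(decode_environment("abc"))  # S: 2-5, R: 200-600, D_s: 100-150
print(decode_attack("AC"))        # D_a: 10-50, Q: low
```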
AhmedSSoliman/CodeXGLUE-CONCODE
2022-09-13T14:47:15.000Z
[ "region:us" ]
AhmedSSoliman
null
null
null
1
119
## Concode dataset A large dataset with over 100,000 examples consisting of Java classes from online code repositories; the accompanying paper develops a new encoder-decoder architecture that models the interaction between the method documentation and the class environment. The Concode dataset is a widely used code generation dataset from Iyer's EMNLP 2018 paper [Mapping Language to Code in Programmatic Context](https://www.aclweb.org/anthology/D18-1192.pdf). Data statistics of the Concode dataset are shown in the table below: | | #Examples | | --------- | :---------: | | Train | 100,000 | | Validation | 2,000 | | Test | 2,000 | ## Data Format The code corpus is saved in JSON Lines format; each line is a JSON object: ``` { "nl": "Increment this vector in this place. con_elem_sep double[] vecElement con_elem_sep double[] weights con_func_sep void add(double)", "code": "public void inc ( ) { this . add ( 1 ) ; }" } ``` `nl` combines the natural language description and the class environment. Elements in the class environment are separated by special tokens like `con_elem_sep` and `con_func_sep`. ## Task Definition Generate source code of class member functions in Java, given a natural language description and the class environment. The class environment is the programmatic context provided by the rest of the class, including other member variables and member functions in the class. Models are evaluated by exact match and BLEU. It's a challenging task because the desired code can vary greatly depending on the functionality the class provides. Models must (a) have a deep understanding of the NL description and map the NL to environment variables, library API calls and user-defined methods in the class, and (b) decide on the structure of the resulting code. 
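The format above can be parsed with the standard library alone. The following sketch splits the `nl` field of the sample record into its description, member variables, and member functions; the separator-handling logic is an illustration, not official Concode tooling:

```python
import json

# Parse one Concode-style JSON Lines record (the sample from this card) and
# split its "nl" field into the NL description and the class environment.
line = ('{"nl": "Increment this vector in this place. con_elem_sep '
        'double[] vecElement con_elem_sep double[] weights con_func_sep '
        'void add(double)", '
        '"code": "public void inc ( ) { this . add ( 1 ) ; }"}')

record = json.loads(line)
nl = record["nl"]

# Everything before the first separator token is the NL description; the
# remainder is the class environment (member variables and functions).
first_sep = min(i for i in (nl.find("con_elem_sep"), nl.find("con_func_sep"))
                if i >= 0)
description = nl[:first_sep].strip()
environment = nl[first_sep:]

member_vars = [v.strip()
               for v in environment.split("con_func_sep")[0].split("con_elem_sep")
               if v.strip()]
member_funcs = [f.strip() for f in environment.split("con_func_sep")[1:]]

print(description)   # Increment this vector in this place.
print(member_vars)   # ['double[] vecElement', 'double[] weights']
print(member_funcs)  # ['void add(double)']
```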
## Reference Concode dataset: <pre><code>@article{iyer2018mapping, title={Mapping language to code in programmatic context}, author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke}, journal={arXiv preprint arXiv:1808.09588}, year={2018} }</code></pre>
martinsinnona/visdecode_ocr
2023-09-13T16:39:01.000Z
[ "region:us" ]
martinsinnona
null
null
null
0
119
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 4430219.0 num_examples: 400 - name: test num_bytes: 680418.0 num_examples: 60 download_size: 5118908 dataset_size: 5110637.0 --- # Dataset Card for "visdecode_ocr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yashnbx/iamgroot
2023-09-25T13:24:19.000Z
[ "license:mit", "region:us" ]
yashnbx
null
null
null
0
119
--- license: mit ---
LDJnr/Verified-Camel
2023-09-26T02:33:30.000Z
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "region:us" ]
LDJnr
null
null
null
6
119
--- license: apache-2.0 task_categories: - conversational - question-answering - text-generation language: - en tags: - Physics - Biology - Math - Chemistry - Culture - Logic pretty_name: Verified-Camel size_categories: - n<1K --- ## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon! - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI STEM datasets. - These examples are verified to be true by experts in the specific related field, with at least a bachelor's degree in the subject. - Roughly 30-40% of the originally curated data from CamelAI was found to have at least minor errors and/or incoherent questions (as determined by experts in said field) ## Purpose? - This dataset is not intended to be trained on by itself (besides perhaps for interesting research purposes); however, the size and quality of this dataset can work wonderfully as a supplementary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credit given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to make sure there are no possible instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" - This was done for the initial curation due to the responses being originally created by GPT-4. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!
codeparrot/codeparrot-clean-train
2022-10-10T15:27:50.000Z
[ "region:us" ]
codeparrot
null
null
null
10
118
# CodeParrot 🦜 Dataset Cleaned (train) Train split of [CodeParrot 🦜 Dataset Cleaned](https://huggingface.co/datasets/lvwerra/codeparrot-clean). ## Dataset structure ```python DatasetDict({ train: Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 5300000 }) }) ```
ywchoi/pubmed_abstract_8
2022-09-13T01:14:30.000Z
[ "region:us" ]
ywchoi
null
null
null
0
118
Entry not found
BelleGroup/school_math_0.25M
2023-04-08T03:55:03.000Z
[ "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:zh", "license:gpl-3.0", "region:us" ]
BelleGroup
null
null
null
62
118
--- license: gpl-3.0 task_categories: - text2text-generation language: - zh size_categories: - 100K<n<1M --- # School Math 0.25M ## 内容 包含约25万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的中文数学题数据,包含解题过程。 注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。 ## 样例 ``` { "instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?", "input": "", "output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。" } ``` ### 字段 ``` instruction: 指令 input: 输入(本数据集均为空) output: 输出 ``` ## 局限性和使用限制 我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。 *** # School Math 0.25M ## Contents Includes approx. 250k Chinese school math quizzes and answers generated by BELLE, with steps to solve the quizzes. Note: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please keep this in mind when using this subset. ## Sample ``` { "instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?", "input": "", "output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。" } ``` ### Schema ``` instruction: the instruction input: the input (empty throughout this dataset) output: the output ``` ## Limitation and Usage Limits We require that developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed. 
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
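Records in the schema above can be read line by line with the standard library. The sketch below inlines a single stand-in record (the real file name is not given here; field names follow the schema) and shows how `instruction` and the always-empty `input` combine into a prompt:

```python
import io
import json

# Stand-in for the dataset's JSON Lines file: one record per line, with the
# instruction/input/output fields described in the schema above.
jsonl_file = io.StringIO(
    '{"instruction": "题目:1+1等于几?", "input": "", '
    '"output": "1+1=2,所以答案是2。"}\n'
)

records = [json.loads(line) for line in jsonl_file if line.strip()]

# `input` is empty for every record in this dataset, so the prompt reduces
# to the instruction text.
prompts = [r["instruction"] + ("\n" + r["input"] if r["input"] else "")
           for r in records]

print(len(records))
print(prompts[0])
```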
taesiri/arxiv_qa
2023-10-11T01:22:17.000Z
[ "task_categories:question-answering", "language:en", "license:mit", "arxiv:2310.04418", "arxiv:2310.04408", "arxiv:2310.04406", "arxiv:2310.04378", "arxiv:2310.04292", "arxiv:2310.03744", "arxiv:2310.03739", "arxiv:2310.03734", "arxiv:2310.03731", "arxiv:2310.03720", "arxiv:2310.03716", "arxiv:2310.03714", "arxiv:2310.03710", "arxiv:2310.03704", "arxiv:2310.03502", "arxiv:2310.03214", "arxiv:2310.03094", "arxiv:2310.03025", "arxiv:2310.03051", "arxiv:2310.03046", "arxiv:2310.02255", "arxiv:2310.02239", "arxiv:2310.02226", "arxiv:2310.02219", "arxiv:2310.02207", "arxiv:2310.01798", "arxiv:2310.01783", "arxiv:2310.01755", "arxiv:2310.01714", "arxiv:2310.01596", "arxiv:2310.01557", "arxiv:2310.01510", "arxiv:2310.01412", "arxiv:2310.01407", "arxiv:2310.01405", "arxiv:2310.01352", "arxiv:2310.01208", "arxiv:2310.00898", "arxiv:2310.00867", "arxiv:2310.00754", "arxiv:2310.00746", "arxiv:2310.00741", "arxiv:2310.00704", "arxiv:2310.00426", "arxiv:2310.00367", "arxiv:2309.17453", "arxiv:2309.17421", "arxiv:2309.17400", "arxiv:2309.17277", "arxiv:2309.17102", "arxiv:2309.17080", "arxiv:2309.16671", "arxiv:2309.16668", "arxiv:2309.16653", "arxiv:2309.16650", "arxiv:2309.16643", "arxiv:2309.16609", "arxiv:2309.16588", "arxiv:2309.16585", "arxiv:2309.16583", "arxiv:2309.16534", "arxiv:2309.16496", "arxiv:2309.16429", "arxiv:2309.16414", "arxiv:2309.16351", "arxiv:2309.16235", "arxiv:2309.16058", "arxiv:2309.16039", "arxiv:2309.15818", "arxiv:2309.15807", "arxiv:2309.16372", "arxiv:2310.01425", "arxiv:2309.15564", "arxiv:2309.15505", "arxiv:2309.15427", "arxiv:2309.15426", "arxiv:2309.15273", "arxiv:2309.15251", "arxiv:2309.15223", "arxiv:2309.15103", "arxiv:2309.15098", "arxiv:2309.15091", "arxiv:2309.15082", "arxiv:2309.15025", "arxiv:2309.14786", "arxiv:2309.14717", "arxiv:2309.14616", "arxiv:2309.14592", "arxiv:2309.14580", "arxiv:2309.14525", "arxiv:2309.14509", "arxiv:2309.14341", "arxiv:2309.14340", "arxiv:2309.14327", "arxiv:2309.14402", "arxiv:2309.14322", 
"arxiv:2309.14291", "arxiv:2309.14174", "arxiv:2309.13965", "arxiv:2309.13952", "arxiv:2309.13858", "arxiv:2309.15129", "arxiv:2309.13638", "arxiv:2309.13556", "arxiv:2309.13539", "arxiv:2309.13504", "arxiv:2309.13377", "arxiv:2309.13363", "arxiv:2309.13356", "arxiv:2309.13339", "arxiv:2309.13308", "arxiv:2309.13256", "arxiv:2309.13190", "arxiv:2309.13043", "arxiv:2309.13042", "arxiv:2309.13041", "arxiv:2309.13039", "arxiv:2309.13038", "arxiv:2309.13037", "arxiv:2309.13035", "arxiv:2309.13029", "arxiv:2309.13022", "arxiv:2309.13021", "arxiv:2309.13018", "arxiv:2309.13016", "arxiv:2309.13015", "arxiv:2309.13013", "arxiv:2309.13007", "arxiv:2309.13006", "arxiv:2309.13005", "arxiv:2309.13002", "arxiv:2309.12998", "arxiv:2309.12996", "arxiv:2309.12991", "arxiv:2309.12972", "arxiv:2309.12971", "arxiv:2309.12970", "arxiv:2309.12969", "arxiv:2309.12960", "arxiv:2309.12955", "arxiv:2309.12943", "arxiv:2309.12941", "arxiv:2309.12940", "arxiv:2309.12938", "arxiv:2309.12937", "arxiv:2309.12934", "arxiv:2309.12499", "arxiv:2309.12424", "arxiv:2309.12311", "arxiv:2309.12307", "arxiv:2309.12303", "arxiv:2309.12288", "arxiv:2309.12284", "arxiv:2309.12207", "arxiv:2309.11998", "arxiv:2309.11847", "arxiv:2309.11674", "arxiv:2309.11568", "arxiv:2309.11500", "arxiv:2309.11499", "arxiv:2309.11497", "arxiv:2309.11495", "arxiv:2309.13075", "arxiv:2309.11419", "arxiv:2309.11235", "arxiv:2309.11197", "arxiv:2309.11170", "arxiv:2309.11160", "arxiv:2309.11139", "arxiv:2309.11132", "arxiv:2309.11081", "arxiv:2309.11077", "arxiv:2309.11028", "arxiv:2309.11009", "arxiv:2309.11523", "arxiv:2309.10952", "arxiv:2309.10917", "arxiv:2309.10818", "arxiv:2309.10724", "arxiv:2309.10706", "arxiv:2309.10668", "arxiv:2309.10592", "arxiv:2309.10537", "arxiv:2309.10438", "arxiv:2309.10400", "arxiv:2309.10305", "arxiv:2309.10279", "arxiv:2309.10202", "arxiv:2309.10150", "arxiv:2309.10091", "arxiv:2309.10020", "arxiv:2309.09971", "arxiv:2309.09968", "arxiv:2309.09958", "arxiv:2309.09724", "arxiv:2309.09709", 
"arxiv:2309.09530", "arxiv:2309.09506", "arxiv:2309.09501", "arxiv:2309.09400", "arxiv:2309.09390", "arxiv:2309.09369", "arxiv:2309.09298", "arxiv:2309.09294", "arxiv:2309.09117", "arxiv:2309.08968", "arxiv:2309.08963", "arxiv:2309.08958", "arxiv:2309.08872", "arxiv:2309.08827", "arxiv:2309.08804", "arxiv:2309.08773", "arxiv:2309.08690", "arxiv:2309.08600", "arxiv:2309.08588", "arxiv:2309.08587", "arxiv:2309.08586", "arxiv:2309.08532", "arxiv:2309.08520", "arxiv:2309.08491", "arxiv:2309.08416", "arxiv:2309.08646", "arxiv:2309.08210", "arxiv:2309.08172", "arxiv:2309.08168", "arxiv:2309.08156", "arxiv:2309.08051", "arxiv:2309.07990", "arxiv:2309.07986", "arxiv:2309.07974", "arxiv:2309.07970", "arxiv:2309.07914", "arxiv:2309.07911", "arxiv:2309.07910", "arxiv:2309.07906", "arxiv:2309.07900", "arxiv:2309.07870", "arxiv:2309.07864", "arxiv:2309.08637", "arxiv:2309.07749", "arxiv:2309.07499", "arxiv:2309.07471", "arxiv:2309.07462", "arxiv:2309.07430", "arxiv:2309.07403", "arxiv:2309.07314", "arxiv:2309.08632", "arxiv:2309.07250", "arxiv:2309.07125", "arxiv:2309.07124", "arxiv:2309.07122", "arxiv:2309.07084", "arxiv:2309.07038", "arxiv:2309.06933", "arxiv:2309.06895", "arxiv:2309.06891", "arxiv:2309.06810", "arxiv:2309.06802", "arxiv:2309.06794", "arxiv:2309.06714", "arxiv:2309.06703", "arxiv:2309.06660", "arxiv:2309.06657", "arxiv:2309.06497", "arxiv:2309.06441", "arxiv:2309.06440", "arxiv:2309.06380", "arxiv:2309.08628", "arxiv:2309.06262", "arxiv:2309.06180", "arxiv:2309.06126", "arxiv:2309.05858", "arxiv:2309.07062", "arxiv:2309.05810", "arxiv:2309.05793", "arxiv:2309.05663", "arxiv:2309.05689", "arxiv:2309.05573", "arxiv:2309.05569", "arxiv:2309.05519", "arxiv:2309.05516", "arxiv:2309.05463", "arxiv:2309.05444", "arxiv:2309.05281", "arxiv:2309.05251", "arxiv:2309.05196", "arxiv:2309.05192", "arxiv:2309.05148", "arxiv:2309.05098", "arxiv:2309.05049", "arxiv:2309.04992", "arxiv:2309.04946", "arxiv:2309.04907", "arxiv:2309.04842", "arxiv:2309.04827", "arxiv:2309.04803", 
"arxiv:2309.04800", "arxiv:2309.04777", "arxiv:2309.04766", "arxiv:2309.04747", "arxiv:2309.04663", "arxiv:2309.04662", "arxiv:2309.04658", "arxiv:2309.04581", "arxiv:2309.04564", "arxiv:2309.04542", "arxiv:2309.04422", "arxiv:2309.04410", "arxiv:2309.04354", "arxiv:2309.04509", "arxiv:2309.04269", "arxiv:2309.04247", "arxiv:2309.04172", "arxiv:2309.04146", "arxiv:2309.04041", "arxiv:2309.03989", "arxiv:2309.03978", "arxiv:2309.03905", "arxiv:2309.03903", "arxiv:2309.03900", "arxiv:2309.03899", "arxiv:2309.03897", "arxiv:2309.03895", "arxiv:2309.03886", "arxiv:2309.03883", "arxiv:2309.03882", "arxiv:2309.03852", "arxiv:2309.03811", "arxiv:2309.03809", "arxiv:2309.03729", "arxiv:2309.03696", "arxiv:2309.03926", "arxiv:2309.03613", "arxiv:2309.03598", "arxiv:2309.03550", "arxiv:2309.03549", "arxiv:2309.03473", "arxiv:2309.03453", "arxiv:2309.03409", "arxiv:2309.03406", "arxiv:2309.03315", "arxiv:2309.03199", "arxiv:2309.03185", "arxiv:2309.03179", "arxiv:2309.03160", "arxiv:2309.03130", "arxiv:2309.02965", "arxiv:2309.03241", "arxiv:2309.02654", "arxiv:2309.02591", "arxiv:2309.02561", "arxiv:2309.02527", "arxiv:2309.02436", "arxiv:2309.02429", "arxiv:2309.02427", "arxiv:2309.02423", "arxiv:2309.02420", "arxiv:2309.02405", "arxiv:2309.02401", "arxiv:2309.02390", "arxiv:2309.02301", "arxiv:2309.02144", "arxiv:2309.02120", "arxiv:2309.02102", "arxiv:2309.02041", "arxiv:2309.02033", "arxiv:2309.02020", "arxiv:2309.01961", "arxiv:2309.01958", "arxiv:2309.01858", "arxiv:2309.01826", "arxiv:2309.01809", "arxiv:2309.01740", "arxiv:2309.01692", "arxiv:2309.01624", "arxiv:2309.01574", "arxiv:2309.01523", "arxiv:2309.01420", "arxiv:2309.01377", "arxiv:2309.01296", "arxiv:2309.01265", "arxiv:2309.01246", "arxiv:2309.01155", "arxiv:2309.01151", "arxiv:2309.01131", "arxiv:2309.01093", "arxiv:2309.01083", "arxiv:2309.00986", "arxiv:2309.00964", "arxiv:2309.00933", "arxiv:2309.00844", "arxiv:2309.00827", "arxiv:2309.0791", "arxiv:2309.00775", "arxiv:2309.00711", "arxiv:2309.0681", 
"arxiv:2309.00615", "arxiv:2309.00610", "arxiv:2309.0666", "arxiv:2309.00398", "arxiv:2309.00363", "arxiv:2309.00359", "arxiv:2309.00267", "arxiv:2309.0242", "arxiv:2309.00233", "arxiv:2309.00216", "arxiv:2309.0212", "arxiv:2309.0202", "arxiv:2309.0142", "arxiv:2309.00071", "arxiv:2309.00066", "arxiv:2309.00035", "arxiv:2308.16911", "arxiv:2308.16909", "arxiv:2308.16905", "arxiv:2308.16898", "arxiv:2308.16894", "arxiv:2308.16891", "arxiv:2308.16890", "arxiv:2308.16884", "arxiv:2308.16876", "arxiv:2308.16825", "arxiv:2308.16824", "arxiv:2308.16777", "arxiv:2308.16758", "arxiv:2308.16739", "arxiv:2308.16689", "arxiv:2308.16582", "arxiv:2308.16512", "arxiv:2308.16505", "arxiv:2308.16477", "arxiv:2308.16463", "arxiv:2308.16460", "arxiv:2308.16458", "arxiv:2308.16454", "arxiv:2308.16271", "arxiv:2308.16246", "arxiv:2308.16185", "arxiv:2308.16184", "arxiv:2308.16154", "arxiv:2308.16137", "arxiv:2308.16118", "arxiv:2308.16083", "arxiv:2308.15975", "arxiv:2308.15962", "arxiv:2308.15930", "arxiv:2308.15844", "arxiv:2308.15827", "arxiv:2308.15560", "arxiv:2308.15461", "arxiv:2308.15512", "arxiv:2308.15367", "arxiv:2308.15226", "arxiv:2308.15126", "arxiv:2308.15085", "arxiv:2308.15081", "arxiv:2308.15074", "arxiv:2308.15049", "arxiv:2308.15022", "arxiv:2308.14978", "arxiv:2308.14972", "arxiv:2308.14963", "arxiv:2308.14960", "arxiv:2308.14847", "arxiv:2308.14816", "arxiv:2308.14753", "arxiv:2308.14752", "arxiv:2308.14713", "arxiv:2308.14598", "arxiv:2308.14575", "arxiv:2308.14500", "arxiv:2308.14480", "arxiv:2308.14383", "arxiv:2308.14353", "arxiv:2308.14346", "arxiv:2308.14286", "arxiv:2308.14244", "arxiv:2308.14221", "arxiv:2308.14153", "arxiv:2308.14152", "arxiv:2308.14119", "arxiv:2308.14089", "arxiv:2308.14083", "arxiv:2308.14082", "arxiv:2308.14074", "arxiv:2308.14061", "arxiv:2308.14023", "arxiv:2308.14005", "arxiv:2308.13989", "arxiv:2308.13954", "arxiv:2308.13862", "arxiv:2308.13853", "arxiv:2308.13814", "arxiv:2308.13785", "arxiv:2308.13783", "arxiv:2308.13735", 
"arxiv:2308.13724", "arxiv:2308.13505", "arxiv:2308.13504", "arxiv:2308.13494", "arxiv:2308.13467", "arxiv:2308.13437", "arxiv:2308.13418", "arxiv:2308.13416", "arxiv:2308.13411", "arxiv:2308.13404", "arxiv:2308.13387", "arxiv:2308.13369", "arxiv:2308.13324", "arxiv:2308.13323", "arxiv:2308.13270", "arxiv:2308.13266", "arxiv:2308.13245", "arxiv:2308.13236", "arxiv:2308.13229", "arxiv:2308.13218", "arxiv:2308.13168", "arxiv:2308.13164", "arxiv:2308.13149", "arxiv:2308.13137", "arxiv:2308.13566", "arxiv:2308.13077", "arxiv:2308.12967", "arxiv:2308.12968", "arxiv:2308.12966", "arxiv:2308.12964", "arxiv:2308.12963", "arxiv:2308.12962", "arxiv:2308.12960", "arxiv:2308.12950", "arxiv:2308.12898", "arxiv:2308.12894", "arxiv:2308.12866", "arxiv:2308.12857", "arxiv:2308.12779", "arxiv:2308.12774", "arxiv:2308.12714", "arxiv:2308.12700", "arxiv:2308.12604", "arxiv:2308.12595", "arxiv:2308.12590", "arxiv:2308.12587", "arxiv:2308.12558", "arxiv:2308.12549", "arxiv:2308.12539", "arxiv:2308.12510", "arxiv:2308.12503", "arxiv:2308.12383", "arxiv:2308.12372", "arxiv:2308.12370", "arxiv:2308.12366", "arxiv:2308.12350", "arxiv:2308.12351", "arxiv:2308.12288", "arxiv:2308.12272", "arxiv:2308.12219", "arxiv:2308.12216", "arxiv:2308.12213", "arxiv:2308.12202", "arxiv:2308.12191", "arxiv:2308.12064", "arxiv:2308.12058", "arxiv:2308.12050", "arxiv:2308.12038", "arxiv:2308.12035", "arxiv:2308.12014", "arxiv:2308.11990", "arxiv:2308.11974", "arxiv:2308.11971", "arxiv:2308.11928", "arxiv:2308.11927", "arxiv:2308.11917", "arxiv:2308.11916", "arxiv:2308.11911", "arxiv:2308.11901", "arxiv:2308.11894", "arxiv:2308.11880", "arxiv:2308.11874", "arxiv:2308.11796", "arxiv:2308.11793", "arxiv:2308.11778", "arxiv:2308.11744", "arxiv:2308.11737", "arxiv:2308.11696", "arxiv:2308.11607", "arxiv:2308.11606", "arxiv:2308.11568", "arxiv:2308.11551", "arxiv:2308.11513", "arxiv:2308.11489", "arxiv:2308.11441", "arxiv:2308.11432", "arxiv:2308.11417", "arxiv:2308.11358", "arxiv:2308.11357", "arxiv:2308.11339", 
"arxiv:2308.11331", "arxiv:2308.11322", "arxiv:2308.11277", "arxiv:2308.11261", "arxiv:2308.11236", "arxiv:2308.11223", "arxiv:2308.11206", "arxiv:2308.11194", "arxiv:2308.11186", "arxiv:2308.11185", "arxiv:2308.11184", "arxiv:2308.11166", "arxiv:2308.11158", "arxiv:2308.11130", "arxiv:2308.11116", "arxiv:2308.11111", "arxiv:2308.11106", "arxiv:2308.11096", "arxiv:2308.11093", "arxiv:2308.11073", "arxiv:2308.11072", "arxiv:2308.11071", "arxiv:2308.11063", "arxiv:2308.11062", "arxiv:2308.11025", "arxiv:2308.11015", "arxiv:2308.10898", "arxiv:2308.10897", "arxiv:2308.10848", "arxiv:2308.10832", "arxiv:2308.10820", "arxiv:2308.10814", "arxiv:2308.10809", "arxiv:2308.10794", "arxiv:2308.10792", "arxiv:2308.10755", "arxiv:2308.10741", "arxiv:2308.10729", "arxiv:2308.10694", "arxiv:2308.10658", "arxiv:2308.10623", "arxiv:2308.10603", "arxiv:2308.10599", "arxiv:2308.10574", "arxiv:2308.10570", "arxiv:2308.10554", "arxiv:2308.10515", "arxiv:2308.11534", "arxiv:2308.10490", "arxiv:2308.10481", "arxiv:2308.10468", "arxiv:2308.10462", "arxiv:2308.10461", "arxiv:2308.10447", "arxiv:2308.10445", "arxiv:2308.10441", "arxiv:2308.10435", "arxiv:2308.10916", "arxiv:2308.10402", "arxiv:2308.10397", "arxiv:2308.10379", "arxiv:2308.11462", "arxiv:2308.10337", "arxiv:2308.10334", "arxiv:2308.10315", "arxiv:2308.10310", "arxiv:2308.10308", "arxiv:2308.10306", "arxiv:2308.10305", "arxiv:2308.10297", "arxiv:2308.10285", "arxiv:2308.10279", "arxiv:2308.10278", "arxiv:2308.10276", "arxiv:2308.10253", "arxiv:2308.10205", "arxiv:2308.10204", "arxiv:2308.10185", "arxiv:2308.10174", "arxiv:2308.10172", "arxiv:2308.10170", "arxiv:2308.10155", "arxiv:2308.10149", "arxiv:2308.10147", "arxiv:2308.10146", "arxiv:2308.10144", "arxiv:2308.10141", "arxiv:2308.10134", "arxiv:2308.10133", "arxiv:2308.10123", "arxiv:2308.10122", "arxiv:2308.10110", "arxiv:2308.10089", "arxiv:2308.10032", "arxiv:2308.10027", "arxiv:2308.10016", "arxiv:2308.10005", "arxiv:2308.09987", "arxiv:2308.09975", "arxiv:2308.09971", 
"arxiv:2308.09970", "arxiv:2308.09951", "arxiv:2308.09949", "arxiv:2308.09946", "arxiv:2308.09942", "arxiv:2308.09939", "arxiv:2308.09936", "arxiv:2308.09922", "arxiv:2308.09916", "arxiv:2308.09903", "arxiv:2308.09891", "arxiv:2308.09887", "arxiv:2308.09882", "arxiv:2308.09873", "arxiv:2308.09865", "arxiv:2308.09853", "arxiv:2308.09830", "arxiv:2308.09829", "arxiv:2308.09804", "arxiv:2308.09775", "arxiv:2308.09717", "arxiv:2308.09711", "arxiv:2308.09662", "arxiv:2308.09622", "arxiv:2308.09618", "arxiv:2308.09597", "arxiv:2308.09592", "arxiv:2308.09583", "arxiv:2308.09568", "arxiv:2308.09565", "arxiv:2308.09564", "arxiv:2308.09540", "arxiv:2308.09534", "arxiv:2308.09519", "arxiv:2308.09511", "arxiv:2308.09472", "arxiv:2308.09440", "arxiv:2308.09421", "arxiv:2308.09391", "arxiv:2308.09386", "arxiv:2308.09383", "arxiv:2308.09363", "arxiv:2308.09351", "arxiv:2308.09346", "arxiv:2308.09322", "arxiv:2308.09318", "arxiv:2308.09314", "arxiv:2308.09311", "arxiv:2308.09306", "arxiv:2308.09305", "arxiv:2308.09297", "arxiv:2308.09294", "arxiv:2308.09281", "arxiv:2308.09247", "arxiv:2308.09245", "arxiv:2308.09244", "arxiv:2308.09242", "arxiv:2308.09228", "arxiv:2308.09160", "arxiv:2308.09139", "arxiv:2308.09098", "arxiv:2308.09040", "arxiv:2308.08998", "arxiv:2308.08942", "arxiv:2308.08887", "arxiv:2308.08885", "arxiv:2308.08872", "arxiv:2308.08871", "arxiv:2308.08857", "arxiv:2308.08855", "arxiv:2308.08833", "arxiv:2308.08824", "arxiv:2308.08822", "arxiv:2308.08810", "arxiv:2308.08724", "arxiv:2308.08715", "arxiv:2308.08709", "arxiv:2308.08545", "arxiv:2308.08544", "arxiv:2308.08463", "arxiv:2308.08428", "arxiv:2308.08414", "arxiv:2308.08393", "arxiv:2308.08370", "arxiv:2308.08359", "arxiv:2308.08321", "arxiv:2308.08316", "arxiv:2308.08258", "arxiv:2308.08239", "arxiv:2308.08227", "arxiv:2308.08220", "arxiv:2308.08182", "arxiv:2308.08157", "arxiv:2308.08155", "arxiv:2308.08140", "arxiv:2308.08114", "arxiv:2308.08110", "arxiv:2308.08090", "arxiv:2308.08089", "arxiv:2308.08011", 
"arxiv:2308.07968", "arxiv:2308.07926", "arxiv:2308.07922", "arxiv:2308.07921", "arxiv:2308.07918", "arxiv:2308.07903", "arxiv:2308.07902", "arxiv:2308.07893", "arxiv:2308.07891", "arxiv:2308.07868", "arxiv:2308.07863", "arxiv:2308.07847", "arxiv:2308.07815", "arxiv:2308.07795", "arxiv:2308.07787", "arxiv:2308.07737", "arxiv:2308.07732", "arxiv:2308.07687", "arxiv:2308.07686", "arxiv:2308.07661", "arxiv:2308.07655", "arxiv:2308.07650", "arxiv:2308.07648", "arxiv:2308.07625", "arxiv:2308.07575", "arxiv:2308.07546", "arxiv:2308.07498", "arxiv:2308.07411", "arxiv:2308.07395", "arxiv:2308.07391", "arxiv:2308.07317", "arxiv:2308.07316", "arxiv:2308.07313", "arxiv:2308.07286", "arxiv:2308.07241", "arxiv:2308.07228", "arxiv:2308.07209", "arxiv:2308.07201", "arxiv:2308.07124", "arxiv:2308.07146", "arxiv:2308.16160", "arxiv:2308.07120", "arxiv:2308.07107", "arxiv:2308.07092", "arxiv:2308.07032", "arxiv:2308.07009", "arxiv:2308.06954", "arxiv:2308.06947", "arxiv:2308.06925", "arxiv:2308.06912", "arxiv:2308.06904", "arxiv:2308.06879", "arxiv:2308.06873", "arxiv:2308.06787", "arxiv:2308.06777", "arxiv:2308.06776", "arxiv:2308.06774", "arxiv:2308.06735", "arxiv:2308.06721", "arxiv:2308.06712", "arxiv:2308.06701", "arxiv:2308.06693", "arxiv:2308.06689", "arxiv:2308.06635", "arxiv:2308.06595", "arxiv:2308.06554", "arxiv:2308.06548", "arxiv:2308.06531", "arxiv:2308.06530", "arxiv:2308.07934", "arxiv:2308.06468", "arxiv:2308.06453", "arxiv:2308.06394", "arxiv:2308.06383", "arxiv:2308.06261", "arxiv:2308.06259", "arxiv:2308.06248", "arxiv:2308.06202", "arxiv:2308.06125", "arxiv:2308.06103", "arxiv:2308.06072", "arxiv:2308.06038", "arxiv:2308.06015", "arxiv:2308.05991", "arxiv:2308.05985", "arxiv:2308.05960", "arxiv:2308.05932", "arxiv:2308.05911", "arxiv:2308.05884", "arxiv:2308.05744", "arxiv:2308.05737", "arxiv:2308.05734", "arxiv:2308.05733", "arxiv:2308.05732", "arxiv:2308.05681", "arxiv:2308.05667", "arxiv:2308.05605", "arxiv:2308.05550", "arxiv:2308.05493", "arxiv:2308.05481", 
"arxiv:2308.05441", "arxiv:2308.05438", "arxiv:2308.05410", "arxiv:2308.05396", "arxiv:2308.05391", "arxiv:2308.05382", "arxiv:2308.05374", "arxiv:2308.05371", "arxiv:2308.05359", "arxiv:2308.05326", "arxiv:2308.05318", "arxiv:2308.05221", "arxiv:2308.05095", "arxiv:2308.05022", "arxiv:2308.05140", "arxiv:2308.04995", "arxiv:2308.04990", "arxiv:2308.04952", "arxiv:2308.04946", "arxiv:2308.04945", "arxiv:2308.04912", "arxiv:2308.04829", "arxiv:2308.04826", "arxiv:2308.04808", "arxiv:2308.04782", "arxiv:2308.04770", "arxiv:2308.04758", "arxiv:2308.04729", "arxiv:2308.04699", "arxiv:2308.04682", "arxiv:2308.04623", "arxiv:2308.04622", "arxiv:2308.04592", "arxiv:2308.04583", "arxiv:2308.04556", "arxiv:2308.04549", "arxiv:2308.04430", "arxiv:2308.04383", "arxiv:2308.04371", "arxiv:2308.04352", "arxiv:2308.04477", "arxiv:2308.04269", "arxiv:2308.04265", "arxiv:2308.04206", "arxiv:2308.04197", "arxiv:2308.04163", "arxiv:2308.04162", "arxiv:2308.04152", "arxiv:2308.04126", "arxiv:2308.04079", "arxiv:2308.04061", "arxiv:2308.04030", "arxiv:2308.04026", "arxiv:2308.04016", "arxiv:2308.04014", "arxiv:2308.04008", "arxiv:2308.03982", "arxiv:2308.03958", "arxiv:2308.03906", "arxiv:2308.03867", "arxiv:2308.03757", "arxiv:2308.03729", "arxiv:2308.03712", "arxiv:2308.03688", "arxiv:2308.03685", "arxiv:2308.03610", "arxiv:2308.03594", "arxiv:2308.03526", "arxiv:2308.03448", "arxiv:2308.03427", "arxiv:2308.03421", "arxiv:2308.03413", "arxiv:2308.03374", "arxiv:2308.03364", "arxiv:2308.03357", "arxiv:2308.03349", "arxiv:2308.03322", "arxiv:2308.03296", "arxiv:2308.03291", "arxiv:2308.03290", "arxiv:2308.03286", "arxiv:2308.03282", "arxiv:2308.03280", "arxiv:2308.03279", "arxiv:2308.03262", "arxiv:2308.03202", "arxiv:2308.03176", "arxiv:2308.03163", "arxiv:2308.03047", "arxiv:2308.03040", "arxiv:2308.03028", "arxiv:2308.03022", "arxiv:2308.02983", "arxiv:2308.02897", "arxiv:2308.02874", "arxiv:2308.02840", "arxiv:2308.02773", "arxiv:2308.02752", "arxiv:2308.03793", "arxiv:2308.02490", 
"arxiv:2308.02487", "arxiv:2308.02453", "arxiv:2308.02357", "arxiv:2308.02236", "arxiv:2308.02223", "arxiv:2308.02180", "arxiv:2308.02162", "arxiv:2308.02151", "arxiv:2308.02097", "arxiv:2308.01907", "arxiv:2308.01904", "arxiv:2308.02669", "arxiv:2308.01861", "arxiv:2308.01825", "arxiv:2308.02299", "arxiv:2308.01779", "arxiv:2308.01734", "arxiv:2308.01686", "arxiv:2308.01948", "arxiv:2308.01552", "arxiv:2308.01547", "arxiv:2308.01546", "arxiv:2308.01544", "arxiv:2308.01499", "arxiv:2308.1496", "arxiv:2308.01483", "arxiv:2308.01477", "arxiv:2308.01937", "arxiv:2308.01469", "arxiv:2308.02560", "arxiv:2308.1448", "arxiv:2308.01390", "arxiv:2308.01320", "arxiv:2308.01379", "arxiv:2308.01313", "arxiv:2308.01300", "arxiv:2308.01263", "arxiv:2308.1251", "arxiv:2308.01240", "arxiv:2308.1237", "arxiv:2308.01236", "arxiv:2308.1235", "arxiv:2308.1199", "arxiv:2308.01194", "arxiv:2308.01191", "arxiv:2308.1188", "arxiv:2308.1113", "arxiv:2308.1082", "arxiv:2308.1057", "arxiv:2308.1049", "arxiv:2308.01045", "arxiv:2308.1031", "arxiv:2308.1017", "arxiv:2308.1011", "arxiv:2308.00951", "arxiv:2308.0916", "arxiv:2308.00906", "arxiv:2308.0904", "arxiv:2308.0822", "arxiv:2308.0811", "arxiv:2308.00799", "arxiv:2308.00692", "arxiv:2308.00675", "arxiv:2308.0653", "arxiv:2308.00728", "arxiv:2308.0541", "arxiv:2308.0514", "arxiv:2308.0499", "arxiv:2308.0477", "arxiv:2308.00442", "arxiv:2308.00436", "arxiv:2308.00398", "arxiv:2308.02533", "arxiv:2308.00376", "arxiv:2308.00356", "arxiv:2308.00352", "arxiv:2308.0316", "arxiv:2308.00304", "arxiv:2308.0304", "arxiv:2308.00301", "arxiv:2308.0284", "arxiv:2308.00261", "arxiv:2308.0209", "arxiv:2308.00566", "arxiv:2308.01399", "arxiv:2307.16890", "arxiv:2307.16888", "arxiv:2308.0555", "arxiv:2307.16867", "arxiv:2307.16825", "arxiv:2307.16789", "arxiv:2307.16715", "arxiv:2307.16687", "arxiv:2307.16686", "arxiv:2307.16662", "arxiv:2307.16634", "arxiv:2307.16617", "arxiv:2307.16525", "arxiv:2307.16508", "arxiv:2307.16449", "arxiv:2307.16418", 
"arxiv:2307.16415", "arxiv:2307.16395", "arxiv:2307.16377", "arxiv:2307.16372", "arxiv:2307.16368", "arxiv:2307.16361", "arxiv:2308.02522", "arxiv:2307.16184", "arxiv:2307.16180", "arxiv:2307.16125", "arxiv:2307.15991", "arxiv:2307.15958", "arxiv:2307.15942", "arxiv:2307.15860", "arxiv:2307.15818", "arxiv:2307.15771", "arxiv:2307.15700", "arxiv:2307.15644", "arxiv:2307.15593", "arxiv:2307.15539", "arxiv:2307.15504", "arxiv:2307.15475", "arxiv:2307.15422", "arxiv:2307.15409", "arxiv:2307.15353", "arxiv:2307.15337", "arxiv:2307.15333", "arxiv:2307.15324", "arxiv:2307.15254", "arxiv:2307.15217", "arxiv:2307.15199", "arxiv:2307.15189", "arxiv:2307.15131", "arxiv:2307.15063", "arxiv:2308.07931", "arxiv:2307.15052", "arxiv:2307.15049", "arxiv:2307.15043", "arxiv:2307.15042", "arxiv:2307.15033", "arxiv:2307.15020", "arxiv:2307.15016", "arxiv:2307.14995", "arxiv:2307.14984", "arxiv:2307.14971", "arxiv:2307.14936", "arxiv:2308.02510", "arxiv:2307.14786", "arxiv:2307.14768", "arxiv:2307.14735", "arxiv:2307.14726", "arxiv:2307.14710", "arxiv:2307.14709", "arxiv:2307.14630", "arxiv:2307.14620", "arxiv:2307.14611", "arxiv:2307.14605", "arxiv:2307.14535", "arxiv:2307.14460", "arxiv:2308.00113", "arxiv:2307.14336", "arxiv:2307.14335", "arxiv:2307.14334", "arxiv:2307.14277", "arxiv:2307.14225", "arxiv:2307.14187", "arxiv:2307.14127", "arxiv:2307.14117", "arxiv:2307.14071", "arxiv:2307.14061", "arxiv:2307.14051", "arxiv:2307.14039", "arxiv:2307.14008", "arxiv:2307.13974", "arxiv:2307.13933", "arxiv:2307.13929", "arxiv:2307.13924", "arxiv:2307.13908", "arxiv:2307.13854", "arxiv:2307.13813", "arxiv:2307.13770", "arxiv:2307.13756", "arxiv:2307.13721", "arxiv:2307.13720", "arxiv:2307.13692", "arxiv:2307.13619", "arxiv:2307.13539", "arxiv:2307.13537", "arxiv:2307.13529", "arxiv:2307.13528", "arxiv:2307.13459", "arxiv:2307.13383", "arxiv:2307.13316", "arxiv:2307.13269", "arxiv:2307.13254", "arxiv:2307.13251", "arxiv:2307.13226", "arxiv:2307.13204", "arxiv:2307.13101", "arxiv:2307.15780", 
"arxiv:2307.12981", "arxiv:2307.12980", "arxiv:2307.12976", "arxiv:2307.12972", "arxiv:2307.12966", "arxiv:2307.12950", "arxiv:2307.12907", "arxiv:2307.12856", "arxiv:2307.12854", "arxiv:2307.12774", "arxiv:2307.12730", "arxiv:2307.12729", "arxiv:2307.12698", "arxiv:2307.12637", "arxiv:2307.12616", "arxiv:2307.12612", "arxiv:2307.12577", "arxiv:2307.12574", "arxiv:2307.12560", "arxiv:2307.12533", "arxiv:2307.12502", "arxiv:2307.12493", "arxiv:2307.12463", "arxiv:2307.12450", "arxiv:2307.12427", "arxiv:2307.12418", "arxiv:2307.12335", "arxiv:2307.12291", "arxiv:2307.12280", "arxiv:2307.12226", "arxiv:2307.12217", "arxiv:2307.12194", "arxiv:2307.12169", "arxiv:2307.12168", "arxiv:2307.12158", "arxiv:2307.12101", "arxiv:2307.12058", "arxiv:2307.12027", "arxiv:2307.11984", "arxiv:2307.11978", "arxiv:2307.11865", "arxiv:2307.11823", "arxiv:2307.11661", "arxiv:2307.11636", "arxiv:2307.11545", "arxiv:2307.11526", "arxiv:2307.11514", "arxiv:2307.11477", "arxiv:2307.11469", "arxiv:2307.11458", "arxiv:2307.11795", "arxiv:2307.11418", "arxiv:2307.11411", "arxiv:2307.11410", "arxiv:2307.11404", "arxiv:2307.11386", "arxiv:2307.11353", "arxiv:2307.11342", "arxiv:2307.11335", "arxiv:2307.11308", "arxiv:2307.11299", "arxiv:2307.11285", "arxiv:2307.11078", "arxiv:2307.11077", "arxiv:2307.11035", "arxiv:2307.14352", "arxiv:2307.10984", "arxiv:2307.10947", "arxiv:2307.10943", "arxiv:2307.10936", "arxiv:2307.10928", "arxiv:2307.11118", "arxiv:2307.10907", "arxiv:2307.10854", "arxiv:2307.10816", "arxiv:2307.10802", "arxiv:2307.10797", "arxiv:2307.10782", "arxiv:2307.10776", "arxiv:2307.10713", "arxiv:2307.10664", "arxiv:2307.10635", "arxiv:2307.10603", "arxiv:2307.11108", "arxiv:2307.10558", "arxiv:2307.10554", "arxiv:2307.10373", "arxiv:2307.10173", "arxiv:2307.10172", "arxiv:2307.10169", "arxiv:2307.10168", "arxiv:2307.10350", "arxiv:2307.10159", "arxiv:2307.10088", "arxiv:2307.10062", "arxiv:2307.10008", "arxiv:2307.09856", "arxiv:2307.09829", "arxiv:2307.09788", "arxiv:2307.09781", 
"arxiv:2307.09763", "arxiv:2307.09756", "arxiv:2307.09755", "arxiv:2307.10316", "arxiv:2307.09724", "arxiv:2307.09696", "arxiv:2307.09693", "arxiv:2307.09668", "arxiv:2307.09662", "arxiv:2307.09638", "arxiv:2307.09621", "arxiv:2307.09474", "arxiv:2307.09458", "arxiv:2307.09356", "arxiv:2307.09323", "arxiv:2307.09320", "arxiv:2307.09306", "arxiv:2307.09288", "arxiv:2307.09267", "arxiv:2307.09233", "arxiv:2307.09112", "arxiv:2307.09023", "arxiv:2307.09009", "arxiv:2307.09004", "arxiv:2307.08996", "arxiv:2307.08809", "arxiv:2307.08779", "arxiv:2307.08775", "arxiv:2307.08702", "arxiv:2307.08701", "arxiv:2307.08695", "arxiv:2307.08674", "arxiv:2307.08621", "arxiv:2307.08581", "arxiv:2307.08579", "arxiv:2307.08506", "arxiv:2307.08504", "arxiv:2307.08500", "arxiv:2307.08483", "arxiv:2307.08476", "arxiv:2307.08448", "arxiv:2307.08436", "arxiv:2307.08393", "arxiv:2307.08388", "arxiv:2307.08357", "arxiv:2307.08300", "arxiv:2307.08249", "arxiv:2307.08197", "arxiv:2307.13702", "arxiv:2307.11768", "arxiv:2307.08168", "arxiv:2307.08041", "arxiv:2307.08027", "arxiv:2307.08012", "arxiv:2307.10235", "arxiv:2307.07947", "arxiv:2307.07944", "arxiv:2307.07938", "arxiv:2307.07924", "arxiv:2307.07909", "arxiv:2307.07887", "arxiv:2307.07859", "arxiv:2307.07790", "arxiv:2307.07754", "arxiv:2307.07663", "arxiv:2307.07653", "arxiv:2307.07635", "arxiv:2307.07511", "arxiv:2307.07487", "arxiv:2307.07483", "arxiv:2307.07397", "arxiv:2307.07250", "arxiv:2307.07245", "arxiv:2307.07218", "arxiv:2307.07205", "arxiv:2307.07164", "arxiv:2307.07063", "arxiv:2307.07047", "arxiv:2307.06949", "arxiv:2307.06945", "arxiv:2307.06942", "arxiv:2307.06940", "arxiv:2307.06930", "arxiv:2307.06925", "arxiv:2307.06908", "arxiv:2307.06962", "arxiv:2307.06464", "arxiv:2307.06439", "arxiv:2307.06350", "arxiv:2307.06304", "arxiv:2307.06290", "arxiv:2307.06281", "arxiv:2307.06135", "arxiv:2307.06082", "arxiv:2307.06018", "arxiv:2307.05973", "arxiv:2307.05959", "arxiv:2307.05853", "arxiv:2307.05741", "arxiv:2307.05695", 
"arxiv:2307.05473", "arxiv:2307.06857", "arxiv:2307.05463", "arxiv:2307.05462", "arxiv:2307.05454", "arxiv:2307.05432", "arxiv:2307.05300", "arxiv:2307.05222", "arxiv:2307.05628", "arxiv:2307.05014", "arxiv:2307.04990", "arxiv:2307.04964", "arxiv:2307.04767", "arxiv:2307.05591", "arxiv:2307.04751", "arxiv:2307.04738", "arxiv:2307.04725", "arxiv:2307.04721", "arxiv:2307.04699", "arxiv:2307.04686", "arxiv:2307.04577", "arxiv:2307.04507", "arxiv:2307.04349", "arxiv:2307.04246", "arxiv:2307.04106", "arxiv:2307.04087", "arxiv:2307.04008", "arxiv:2307.03917", "arxiv:2307.03875", "arxiv:2307.03869", "arxiv:2307.05445", "arxiv:2307.03704", "arxiv:2307.03659", "arxiv:2307.03601", "arxiv:2307.03576", "arxiv:2307.03486", "arxiv:2307.04603", "arxiv:2307.03406", "arxiv:2307.03381", "arxiv:2307.03322", "arxiv:2307.03183", "arxiv:2307.03172", "arxiv:2307.03170", "arxiv:2307.03166", "arxiv:2307.03135", "arxiv:2307.03718", "arxiv:2307.03109", "arxiv:2307.03027", "arxiv:2307.03025", "arxiv:2307.02882", "arxiv:2307.02792", "arxiv:2307.02768", "arxiv:2307.02757", "arxiv:2307.02628", "arxiv:2307.02486", "arxiv:2307.02485", "arxiv:2307.02484", "arxiv:2307.02483", "arxiv:2307.02477", "arxiv:2307.02469", "arxiv:2307.02447", "arxiv:2307.02421", "arxiv:2307.02321", "arxiv:2307.02249", "arxiv:2307.02179", "arxiv:2307.03692", "arxiv:2307.02053", "arxiv:2307.01952", "arxiv:2307.01938", "arxiv:2307.01928", "arxiv:2307.01849", "arxiv:2307.01848", "arxiv:2307.04787", "arxiv:2307.01831", "arxiv:2307.01759", "arxiv:2307.01701", "arxiv:2307.02499", "arxiv:2307.01630", "arxiv:2307.1471", "arxiv:2307.1463", "arxiv:2307.1462", "arxiv:2307.1401", "arxiv:2307.1273", "arxiv:2307.01200", "arxiv:2307.01197", "arxiv:2307.1191", "arxiv:2307.1245", "arxiv:2307.01163", "arxiv:2307.01139", "arxiv:2307.01097", "arxiv:2307.1017", "arxiv:2307.01003", "arxiv:2307.0990", "arxiv:2307.00818", "arxiv:2307.00804", "arxiv:2307.01229", "arxiv:2307.00716", "arxiv:2307.00522", "arxiv:2307.0499", "arxiv:2307.00438", 
"arxiv:2307.00398", "arxiv:2307.00329", "arxiv:2307.00184", "arxiv:2307.00119", "arxiv:2307.00117", "arxiv:2306.17848", "arxiv:2306.17843", "arxiv:2306.17842", "arxiv:2306.17840", "arxiv:2307.00040", "arxiv:2306.17806", "arxiv:2306.17805", "arxiv:2306.17759", "arxiv:2306.17563", "arxiv:2306.17492", "arxiv:2306.17319", "arxiv:2306.17253", "arxiv:2306.17156", "arxiv:2306.17154", "arxiv:2306.17115", "arxiv:2306.17107", "arxiv:2306.16927", "arxiv:2306.16940", "arxiv:2306.16934", "arxiv:2306.16928", "arxiv:2306.16869", "arxiv:2306.16857", "arxiv:2306.16793", "arxiv:2306.16700", "arxiv:2306.16605", "arxiv:2306.16601", "arxiv:2306.16564", "arxiv:2306.16410", "arxiv:2306.17194", "arxiv:2306.16388", "arxiv:2306.16207", "arxiv:2306.16052", "arxiv:2306.16016", "arxiv:2306.16009", "arxiv:2307.1206", "arxiv:2306.15925", "arxiv:2306.15895", "arxiv:2306.15794", "arxiv:2306.15724", "arxiv:2306.15667", "arxiv:2306.15658", "arxiv:2306.15632", "arxiv:2306.15626", "arxiv:2306.15595", "arxiv:2306.15401", "arxiv:2306.15400", "arxiv:2306.15354", "arxiv:2306.15253", "arxiv:2306.15195", "arxiv:2306.15128", "arxiv:2306.15091", "arxiv:2306.14898", "arxiv:2306.14896", "arxiv:2306.14892", "arxiv:2306.14878", "arxiv:2306.15447", "arxiv:2306.14846", "arxiv:2306.14824", "arxiv:2306.14795", "arxiv:2306.14610", "arxiv:2306.14565", "arxiv:2306.14544", "arxiv:2306.14534", "arxiv:2306.14525", "arxiv:2306.14447", "arxiv:2306.14435", "arxiv:2306.14289", "arxiv:2306.14161", "arxiv:2306.14153", "arxiv:2306.14101", "arxiv:2306.14066", "arxiv:2306.14060", "arxiv:2306.14048", "arxiv:2306.14035", "arxiv:2306.13990", "arxiv:2306.13840", "arxiv:2306.14924", "arxiv:2306.13776", "arxiv:2306.13754", "arxiv:2306.13651", "arxiv:2306.13649", "arxiv:2306.13631", "arxiv:2306.13588", "arxiv:2306.13575", "arxiv:2306.13549", "arxiv:2306.13455", "arxiv:2306.13421", "arxiv:2306.13394", "arxiv:2306.13091", "arxiv:2306.13078", "arxiv:2306.12929", "arxiv:2306.12925", "arxiv:2306.12860", "arxiv:2306.12760", "arxiv:2306.12677", 
"arxiv:2306.12672", "arxiv:2306.12609", "arxiv:2306.12570", "arxiv:2306.12517", "arxiv:2306.12509", "arxiv:2306.12422", "arxiv:2306.16527", "arxiv:2306.12156", "arxiv:2306.12105", "arxiv:2306.12085", "arxiv:2306.12059", "arxiv:2306.11987", "arxiv:2306.11932", "arxiv:2306.11913", "arxiv:2306.11886", "arxiv:2306.11719", "arxiv:2306.11706", "arxiv:2306.11644", "arxiv:2306.11593", "arxiv:2306.11565", "arxiv:2306.11536", "arxiv:2306.11335", "arxiv:2306.11248", "arxiv:2306.11207", "arxiv:2306.11167", "arxiv:2306.11046", "arxiv:2306.10998", "arxiv:2306.10968", "arxiv:2306.10900", "arxiv:2306.10830", "arxiv:2306.10785", "arxiv:2306.10763", "arxiv:2306.10543", "arxiv:2306.10533", "arxiv:2306.10376", "arxiv:2306.10231", "arxiv:2306.10209", "arxiv:2306.10169", "arxiv:2306.10012", "arxiv:2306.10008", "arxiv:2306.10007", "arxiv:2306.09996", "arxiv:2306.09983", "arxiv:2306.09910", "arxiv:2306.09896", "arxiv:2306.09864", "arxiv:2306.09782", "arxiv:2306.09778", "arxiv:2306.09683", "arxiv:2306.09682", "arxiv:2306.09635", "arxiv:2306.09557", "arxiv:2306.09539", "arxiv:2306.11732", "arxiv:2306.09479", "arxiv:2306.09442", "arxiv:2306.09348", "arxiv:2306.09349", "arxiv:2306.09346", "arxiv:2306.09345", "arxiv:2306.09341", "arxiv:2306.09329", "arxiv:2306.09327", "arxiv:2306.09322", "arxiv:2306.09316", "arxiv:2306.09299", "arxiv:2306.09296", "arxiv:2306.09266", "arxiv:2306.09265", "arxiv:2306.09224", "arxiv:2306.09212", "arxiv:2306.09109", "arxiv:2306.09093", "arxiv:2306.09082", "arxiv:2306.08954", "arxiv:2306.08893", "arxiv:2306.08731", "arxiv:2306.08707", "arxiv:2306.08651", "arxiv:2306.08647", "arxiv:2306.08641", "arxiv:2306.08640", "arxiv:2306.08637", "arxiv:2306.08620", "arxiv:2306.08568", "arxiv:2306.08543", "arxiv:2306.08276", "arxiv:2306.08221", "arxiv:2306.08205", "arxiv:2306.08161", "arxiv:2306.08133", "arxiv:2306.08129", "arxiv:2306.08068", "arxiv:2306.08055", "arxiv:2306.08045", "arxiv:2306.07970", "arxiv:2306.07969", "arxiv:2306.07968", "arxiv:2306.07967", "arxiv:2306.07957", 
"arxiv:2306.07954", "arxiv:2306.07951", "arxiv:2306.07915", "arxiv:2306.07906", "arxiv:2306.07881", "arxiv:2306.07768", "arxiv:2306.07703", "arxiv:2306.07580", "arxiv:2306.07552", "arxiv:2306.07536", "arxiv:2306.07510", "arxiv:2306.07476", "arxiv:2306.07473", "arxiv:2306.07437", "arxiv:2306.07280", "arxiv:2306.07279", "arxiv:2306.07272", "arxiv:2306.07196", "arxiv:2306.07179", "arxiv:2306.07174", "arxiv:2306.07042", "arxiv:2306.06855", "arxiv:2306.06823", "arxiv:2306.06755", "arxiv:2306.06687", "arxiv:2306.06638", "arxiv:2306.06624", "arxiv:2306.06546", "arxiv:2306.06513", "arxiv:2306.06212", "arxiv:2306.06189", "arxiv:2306.07290", "arxiv:2306.06092", "arxiv:2306.06070", "arxiv:2306.06044", "arxiv:2306.06023", "arxiv:2306.06000", "arxiv:2306.07941", "arxiv:2306.05949", "arxiv:2306.05836", "arxiv:2306.05783", "arxiv:2306.05696", "arxiv:2306.05685", "arxiv:2306.05637", "arxiv:2306.07944", "arxiv:2306.05544", "arxiv:2306.05493", "arxiv:2306.05427", "arxiv:2306.05428", "arxiv:2306.05425", "arxiv:2306.05424", "arxiv:2306.05422", "arxiv:2306.05420", "arxiv:2306.05411", "arxiv:2306.05410", "arxiv:2306.05399", "arxiv:2306.05392", "arxiv:2306.05357", "arxiv:2306.05301", "arxiv:2306.05284", "arxiv:2306.05179", "arxiv:2306.05178", "arxiv:2306.05087", "arxiv:2306.04849", "arxiv:2306.04845", "arxiv:2306.04822", "arxiv:2306.04757", "arxiv:2306.04751", "arxiv:2306.04707", "arxiv:2306.04634", "arxiv:2306.04632", "arxiv:2306.04619", "arxiv:2306.04528", "arxiv:2306.04507", "arxiv:2306.04488", "arxiv:2306.04387", "arxiv:2306.04362", "arxiv:2306.04235", "arxiv:2306.04181", "arxiv:2306.04140", "arxiv:2306.04076", "arxiv:2306.04064", "arxiv:2306.04050", "arxiv:2306.04031", "arxiv:2306.04009", "arxiv:2306.07349", "arxiv:2306.03901", "arxiv:2306.03881", "arxiv:2306.03872", "arxiv:2306.03819", "arxiv:2306.03802", "arxiv:2306.03604", "arxiv:2306.03536", "arxiv:2306.03514", "arxiv:2306.03509", "arxiv:2306.03504", "arxiv:2306.03484", "arxiv:2306.03460", "arxiv:2306.03438", "arxiv:2306.03428", 
"arxiv:2306.03413", "arxiv:2306.03360", "arxiv:2306.03346", "arxiv:2306.03314", "arxiv:2306.03203", "arxiv:2306.03092", "arxiv:2306.03083", "arxiv:2306.03082", "arxiv:2306.03038", "arxiv:2306.03024", "arxiv:2306.03000", "arxiv:2306.02982", "arxiv:2306.02858", "arxiv:2306.02851", "arxiv:2306.02707", "arxiv:2306.02561", "arxiv:2306.02552", "arxiv:2306.02531", "arxiv:2306.02254", "arxiv:2306.02245", "arxiv:2306.02080", "arxiv:2306.01987", "arxiv:2306.01941", "arxiv:2306.01926", "arxiv:2306.01923", "arxiv:2306.01879", "arxiv:2306.01872", "arxiv:2306.01841", "arxiv:2306.01736", "arxiv:2306.01694", "arxiv:2306.01693", "arxiv:2306.01684", "arxiv:2306.07946", "arxiv:2306.01567", "arxiv:2306.01438", "arxiv:2306.01337", "arxiv:2306.01293", "arxiv:2306.01242", "arxiv:2306.01160", "arxiv:2306.01116", "arxiv:2306.01061", "arxiv:2306.00986", "arxiv:2306.00983", "arxiv:2306.00984", "arxiv:2306.00980", "arxiv:2306.00978", "arxiv:2306.00971", "arxiv:2306.00966", "arxiv:2306.00964", "arxiv:2306.00956", "arxiv:2306.00943", "arxiv:2306.00937", "arxiv:2306.00926", "arxiv:2306.00905", "arxiv:2306.00890", "arxiv:2306.00802", "arxiv:2306.0797", "arxiv:2306.00693", "arxiv:2306.00637", "arxiv:2306.00622", "arxiv:2306.00450", "arxiv:2306.00378", "arxiv:2306.00323", "arxiv:2306.03100", "arxiv:2306.00245", "arxiv:2306.00238", "arxiv:2306.00148", "arxiv:2306.00110", "arxiv:2306.00107", "arxiv:2305.20091", "arxiv:2305.20088", "arxiv:2305.20087", "arxiv:2305.20086", "arxiv:2305.20082", "arxiv:2305.20081", "arxiv:2305.20030", "arxiv:2305.20019", "arxiv:2305.20010", "arxiv:2305.19835", "arxiv:2305.19595", "arxiv:2305.19590", "arxiv:2306.00029", "arxiv:2305.19472", "arxiv:2305.19466", "arxiv:2305.19452", "arxiv:2305.19370", "arxiv:2305.19245", "arxiv:2305.19234", "arxiv:2305.19164", "arxiv:2305.19118", "arxiv:2305.19066", "arxiv:2305.19059", "arxiv:2305.19012", "arxiv:2305.19000", "arxiv:2305.18802", "arxiv:2305.18766", "arxiv:2305.18756", "arxiv:2305.18752", "arxiv:2305.18729", "arxiv:2305.18654", 
"arxiv:2305.18583", "arxiv:2305.18565", "arxiv:2306.00008", "arxiv:2305.18295", "arxiv:2305.18292", "arxiv:2305.18290", "arxiv:2305.18286", "arxiv:2305.18279", "arxiv:2305.18274", "arxiv:2305.18264", "arxiv:2305.18259", "arxiv:2305.18247", "arxiv:2305.18189", "arxiv:2305.18507", "arxiv:2305.18499", "arxiv:2305.18098", "arxiv:2305.18474", "arxiv:2305.17997", "arxiv:2305.18425", "arxiv:2305.18415", "arxiv:2305.18373", "arxiv:2305.17593", "arxiv:2305.17560", "arxiv:2305.17493", "arxiv:2305.18365", "arxiv:2305.17390", "arxiv:2305.17388", "arxiv:2305.17367", "arxiv:2305.17359", "arxiv:2305.17333", "arxiv:2305.17311", "arxiv:2305.17306", "arxiv:2306.00739", "arxiv:2305.17216", "arxiv:2305.17126", "arxiv:2305.17118", "arxiv:2305.18231", "arxiv:2305.17098", "arxiv:2305.17066", "arxiv:2305.17011", "arxiv:2305.17010", "arxiv:2305.16999", "arxiv:2305.16985", "arxiv:2305.16960", "arxiv:2305.16958", "arxiv:2305.16934", "arxiv:2305.16896", "arxiv:2305.16867", "arxiv:2305.16843", "arxiv:2305.16817", "arxiv:2305.16806", "arxiv:2305.18212", "arxiv:2305.16704", "arxiv:2305.16653", "arxiv:2305.16635", "arxiv:2305.16582", "arxiv:2305.16504", "arxiv:2305.16411", "arxiv:2305.17144", "arxiv:2305.16311", "arxiv:2305.16300", "arxiv:2305.16291", "arxiv:2305.16381", "arxiv:2305.16213", "arxiv:2305.16380", "arxiv:2305.16103", "arxiv:2305.16367", "arxiv:2305.15798", "arxiv:2305.15779", "arxiv:2305.15771", "arxiv:2305.15719", "arxiv:2305.15717", "arxiv:2305.16355", "arxiv:2305.15586", "arxiv:2305.15581", "arxiv:2305.16349", "arxiv:2305.15486", "arxiv:2305.15393", "arxiv:2305.15372", "arxiv:2305.15334", "arxiv:2305.15328", "arxiv:2305.15294", "arxiv:2305.15272", "arxiv:2305.15262", "arxiv:2305.15193", "arxiv:2305.15188", "arxiv:2305.15115", "arxiv:2305.15096", "arxiv:2305.15080", "arxiv:2305.15038", "arxiv:2305.15023", "arxiv:2305.15021", "arxiv:2305.14992", "arxiv:2305.14985", "arxiv:2305.14982", "arxiv:2305.14979", "arxiv:2305.14934", "arxiv:2305.14878", "arxiv:2305.14839", "arxiv:2305.14788", 
"arxiv:2305.14710", "arxiv:2305.14688", "arxiv:2305.14654", "arxiv:2305.14627", "arxiv:2305.16338", "arxiv:2305.14596", "arxiv:2305.14564", "arxiv:2305.14540", "arxiv:2305.14497", "arxiv:2305.14343", "arxiv:2305.14333", "arxiv:2305.14325", "arxiv:2305.14323", "arxiv:2305.14322", "arxiv:2305.14318", "arxiv:2305.14314", "arxiv:2305.14239", "arxiv:2305.14236", "arxiv:2305.14233", "arxiv:2305.14201", "arxiv:2305.14167", "arxiv:2305.13903", "arxiv:2305.13888", "arxiv:2305.16334", "arxiv:2305.13840", "arxiv:2305.13788", "arxiv:2305.13786", "arxiv:2305.13782", "arxiv:2305.13735", "arxiv:2305.13731", "arxiv:2305.13626", "arxiv:2306.01754", "arxiv:2305.13579", "arxiv:2305.13534", "arxiv:2305.13304", "arxiv:2305.13301", "arxiv:2305.13246", "arxiv:2305.13168", "arxiv:2305.13077", "arxiv:2305.13068", "arxiv:2305.13050", "arxiv:2305.13048", "arxiv:2305.13009", "arxiv:2305.12833", "arxiv:2305.12620", "arxiv:2305.12577", "arxiv:2305.12569", "arxiv:2305.12487", "arxiv:2305.12452", "arxiv:2305.12219", "arxiv:2305.12050", "arxiv:2305.12001", "arxiv:2305.11944", "arxiv:2305.11938", "arxiv:2305.11870", "arxiv:2305.11863", "arxiv:2305.11854", "arxiv:2305.11846", "arxiv:2305.11841", "arxiv:2305.11840", "arxiv:2305.11837", "arxiv:2305.11834", "arxiv:2305.11778", "arxiv:2305.11772", "arxiv:2305.11759", "arxiv:2305.11738", "arxiv:2305.11694", "arxiv:2305.11676", "arxiv:2305.11675", "arxiv:2305.11598", "arxiv:2305.11588", "arxiv:2305.11554", "arxiv:2305.11541", "arxiv:2305.11499", "arxiv:2305.11482", "arxiv:2305.11418", "arxiv:2305.11364", "arxiv:2305.11337", "arxiv:2305.11308", "arxiv:2305.11271", "arxiv:2305.11176", "arxiv:2305.11175", "arxiv:2305.11173", "arxiv:2305.11171", "arxiv:2305.11169", "arxiv:2305.11161", "arxiv:2305.11206", "arxiv:2305.11147", "arxiv:2305.11130", "arxiv:2305.11129", "arxiv:2305.11116", "arxiv:2305.11203", "arxiv:2309.03907", "arxiv:2305.11080", "arxiv:2305.11012", "arxiv:2305.11000", "arxiv:2305.10973", "arxiv:2305.10945", "arxiv:2305.10912", "arxiv:2305.10874", 
"arxiv:2305.10855", "arxiv:2305.10854", "arxiv:2305.10853", "arxiv:2305.10841", "arxiv:2305.10790", "arxiv:2305.10764", "arxiv:2305.10763", "arxiv:2305.10727", "arxiv:2305.10722", "arxiv:2305.10701", "arxiv:2305.10688", "arxiv:2305.10626", "arxiv:2305.10601", "arxiv:2306.05539", "arxiv:2305.10512", "arxiv:2305.10431", "arxiv:2305.10474", "arxiv:2305.10429", "arxiv:2305.10425", "arxiv:2305.10415", "arxiv:2305.10403", "arxiv:2305.10400", "arxiv:2305.10387", "arxiv:2305.10355", "arxiv:2305.10320", "arxiv:2305.10266", "arxiv:2305.10250", "arxiv:2305.10172", "arxiv:2305.10142", "arxiv:2305.10018", "arxiv:2305.10005", "arxiv:2305.09990", "arxiv:2305.09975", "arxiv:2305.10455", "arxiv:2305.09863", "arxiv:2305.09857", "arxiv:2305.09770", "arxiv:2305.09764", "arxiv:2305.09761", "arxiv:2305.09758", "arxiv:2305.09664", "arxiv:2305.09662", "arxiv:2305.09641", "arxiv:2305.09636", "arxiv:2305.09617", "arxiv:2305.09612", "arxiv:2305.10449", "arxiv:2305.09515", "arxiv:2305.09253", "arxiv:2305.09148", "arxiv:2305.09137", "arxiv:2305.09062", "arxiv:2305.08851", "arxiv:2305.08850", "arxiv:2305.08848", "arxiv:2305.08844", "arxiv:2305.08842", "arxiv:2305.08840", "arxiv:2305.08810", "arxiv:2305.08809", "arxiv:2305.08694", "arxiv:2305.08677", "arxiv:2305.08596", "arxiv:2305.08891", "arxiv:2305.08379", "arxiv:2305.08298", "arxiv:2305.08275", "arxiv:2305.08200", "arxiv:2305.07969", "arxiv:2305.07961", "arxiv:2305.07922", "arxiv:2305.07804", "arxiv:2305.07797", "arxiv:2305.07759", "arxiv:2305.07722", "arxiv:2305.07615", "arxiv:2305.07609", "arxiv:2305.07558", "arxiv:2305.07514", "arxiv:2305.07490", "arxiv:2305.07378", "arxiv:2305.07243", "arxiv:2305.07214", "arxiv:2305.07185", "arxiv:2305.07677", "arxiv:2305.07153", "arxiv:2305.07027", "arxiv:2305.07021", "arxiv:2305.07017", "arxiv:2305.07015", "arxiv:2305.07011", "arxiv:2305.10434", "arxiv:2305.07004", "arxiv:2305.07447", "arxiv:2305.06908", "arxiv:2305.06886", "arxiv:2305.07440", "arxiv:2305.06716", "arxiv:2305.06594", "arxiv:2305.06575", 
"arxiv:2305.06569", "arxiv:2305.06558", "arxiv:2305.06555", "arxiv:2305.06500", "arxiv:2305.06474", "arxiv:2305.06456", "arxiv:2305.06424", "arxiv:2305.06404", "arxiv:2305.06356", "arxiv:2305.06355", "arxiv:2305.06351", "arxiv:2305.06324", "arxiv:2305.06299", "arxiv:2305.06131", "arxiv:2305.06112", "arxiv:2305.06077", "arxiv:2306.01741", "arxiv:2305.05973", "arxiv:2305.05920", "arxiv:2305.05873", "arxiv:2305.05862", "arxiv:2305.05845", "arxiv:2305.05706", "arxiv:2305.05662", "arxiv:2305.05658", "arxiv:2305.05644", "arxiv:2305.05591", "arxiv:2305.05505", "arxiv:2305.05432", "arxiv:2305.05364", "arxiv:2305.05290", "arxiv:2305.05252", "arxiv:2305.05189", "arxiv:2305.05176", "arxiv:2305.05091", "arxiv:2305.06218", "arxiv:2305.05065", "arxiv:2305.04966", "arxiv:2305.04812", "arxiv:2305.04790", "arxiv:2305.04789", "arxiv:2305.04745", "arxiv:2305.05383", "arxiv:2305.04536", "arxiv:2305.04470", "arxiv:2305.04461", "arxiv:2305.04441", "arxiv:2305.04391", "arxiv:2305.04388", "arxiv:2305.04320", "arxiv:2305.04268", "arxiv:2305.04241", "arxiv:2305.04175", "arxiv:2305.04160", "arxiv:2305.04118", "arxiv:2305.04091", "arxiv:2305.03981", "arxiv:2305.03937", "arxiv:2305.03726", "arxiv:2305.03713", "arxiv:2305.03701", "arxiv:2305.03695", "arxiv:2305.03691", "arxiv:2305.03689", "arxiv:2305.03668", "arxiv:2305.03653", "arxiv:2305.03453", "arxiv:2305.03296", "arxiv:2305.03286", "arxiv:2305.03268", "arxiv:2305.03210", "arxiv:2305.03111", "arxiv:2305.03053", "arxiv:2305.03052", "arxiv:2305.03051", "arxiv:2305.03049", "arxiv:2305.03048", "arxiv:2305.03047", "arxiv:2305.03043", "arxiv:2305.03040", "arxiv:2305.03027", "arxiv:2305.03509", "arxiv:2305.02968", "arxiv:2305.02790", "arxiv:2305.02750", "arxiv:2305.02678", "arxiv:2305.02677", "arxiv:2305.02665", "arxiv:2305.02586", "arxiv:2305.02555", "arxiv:2305.02549", "arxiv:2305.02499", "arxiv:2305.02483", "arxiv:2305.02463", "arxiv:2305.02440", "arxiv:2305.02412", "arxiv:2305.02320", "arxiv:2305.02317", "arxiv:2305.02182", "arxiv:2305.02156", 
"arxiv:2305.01795", "arxiv:2305.02783", "arxiv:2305.01710", "arxiv:2305.00976", "arxiv:2305.01625", "arxiv:2305.01278", "arxiv:2305.01253", "arxiv:2305.00970", "arxiv:2305.00955", "arxiv:2305.0884", "arxiv:2305.00833", "arxiv:2305.00673", "arxiv:2305.00633", "arxiv:2305.00447", "arxiv:2304.15010", "arxiv:2304.14867", "arxiv:2304.14856", "arxiv:2304.14837", "arxiv:2304.14522", "arxiv:2304.14456", "arxiv:2304.14407", "arxiv:2304.14233", "arxiv:2304.14178", "arxiv:2304.14144", "arxiv:2304.14106", "arxiv:2304.14104", "arxiv:2304.13861", "arxiv:2304.13835", "arxiv:2304.13712", "arxiv:2304.13676", "arxiv:2304.13343", "arxiv:2304.13207", "arxiv:2304.13169", "arxiv:2304.13157", "arxiv:2304.13007", "arxiv:2304.12670", "arxiv:2304.12666", "arxiv:2304.12567", "arxiv:2304.12519", "arxiv:2304.12397", "arxiv:2304.12372", "arxiv:2304.12244", "arxiv:2304.11968", "arxiv:2304.12998", "arxiv:2304.11862", "arxiv:2304.11823", "arxiv:2304.11751", "arxiv:2304.11744", "arxiv:2304.11705", "arxiv:2304.11619", "arxiv:2304.11477", "arxiv:2304.11267", "arxiv:2304.11158", "arxiv:2304.11107", "arxiv:2304.10970", "arxiv:2304.10819", "arxiv:2304.10700", "arxiv:2304.10628", "arxiv:2304.10539", "arxiv:2304.10592", "arxiv:2304.10528", "arxiv:2304.10453", "arxiv:2304.10417", "arxiv:2304.10179", "arxiv:2304.10131", "arxiv:2304.10066", "arxiv:2304.09913", "arxiv:2304.09848", "arxiv:2304.09842", "arxiv:2304.09801", "arxiv:2304.09691", "arxiv:2304.09542", "arxiv:2304.09453", "arxiv:2304.09423", "arxiv:2304.09355", "arxiv:2304.09121", "arxiv:2304.11063", "arxiv:2304.08979", "arxiv:2304.08971", "arxiv:2304.08706", "arxiv:2304.08637", "arxiv:2304.08485", "arxiv:2304.08479", "arxiv:2304.08467", "arxiv:2304.08451", "arxiv:2304.08354", "arxiv:2304.08334", "arxiv:2304.07919", "arxiv:2304.07854", "arxiv:2304.07803", "arxiv:2304.07590", "arxiv:2304.07438", "arxiv:2304.07193", "arxiv:2304.07313", "arxiv:2304.07090", "arxiv:2304.07060", "arxiv:2304.06977", "arxiv:2304.06939", "arxiv:2304.06813", "arxiv:2304.06767", 
"arxiv:2304.06720", "arxiv:2304.06714", "arxiv:2304.06712", "arxiv:2304.06668", "arxiv:2304.06648", "arxiv:2304.06556", "arxiv:2304.09161", "arxiv:2304.06461", "arxiv:2304.06364", "arxiv:2304.06287", "arxiv:2304.07297", "arxiv:2304.06244", "arxiv:2304.06212", "arxiv:2304.06024", "arxiv:2304.06020", "arxiv:2305.03514", "arxiv:2304.05919", "arxiv:2304.05889", "arxiv:2304.05887", "arxiv:2304.05868", "arxiv:2304.05669", "arxiv:2304.05640", "arxiv:2304.13728", "arxiv:2304.05376", "arxiv:2304.05335", "arxiv:2304.05316", "arxiv:2304.05170", "arxiv:2304.05128", "arxiv:2304.04978", "arxiv:2304.04824", "arxiv:2304.04746", "arxiv:2304.04745", "arxiv:2304.04742", "arxiv:2304.04704", "arxiv:2304.04494", "arxiv:2304.04395", "arxiv:2304.11116", "arxiv:2304.04370", "arxiv:2304.04227", "arxiv:2304.04205", "arxiv:2304.04175", "arxiv:2304.11093", "arxiv:2304.04052", "arxiv:2304.03893", "arxiv:2304.03869", "arxiv:2304.03843", "arxiv:2304.03709", "arxiv:2304.03693", "arxiv:2304.04556", "arxiv:2304.09097", "arxiv:2304.03483", "arxiv:2304.03442", "arxiv:2304.03372", "arxiv:2304.03284", "arxiv:2304.03282", "arxiv:2304.03103", "arxiv:2304.02859", "arxiv:2304.02841", "arxiv:2304.02643", "arxiv:2304.02639", "arxiv:2304.02633", "arxiv:2304.02626", "arxiv:2304.02602", "arxiv:2304.02556", "arxiv:2304.02419", "arxiv:2304.02370", "arxiv:2304.02330", "arxiv:2304.02321", "arxiv:2304.02312", "arxiv:2304.02251", "arxiv:2304.02163", "arxiv:2304.02135", "arxiv:2304.02051", "arxiv:2304.02008", "arxiv:2304.02001", "arxiv:2304.01933", "arxiv:2304.01893", "arxiv:2304.01830", "arxiv:2304.01816", "arxiv:2304.01804", "arxiv:2304.01715", "arxiv:2304.01686", "arxiv:2304.01663", "arxiv:2304.01662", "arxiv:2304.01529", "arxiv:2304.01514", "arxiv:2304.01482", "arxiv:2304.01464", "arxiv:2304.01436", "arxiv:2304.01434", "arxiv:2304.01373", "arxiv:2304.01289", "arxiv:2304.01279", "arxiv:2304.1267", "arxiv:2304.01201", "arxiv:2304.01196", "arxiv:2304.01195", "arxiv:2304.01194", "arxiv:2304.01192", "arxiv:2304.01172", 
"arxiv:2304.01247", "arxiv:2304.01116", "arxiv:2304.01042", "arxiv:2304.00967", "arxiv:2304.05350", "arxiv:2304.00792", "arxiv:2304.00719", "arxiv:2304.00690", "arxiv:2304.0564", "arxiv:2304.00464", "arxiv:2304.00451", "arxiv:2304.00350", "arxiv:2304.00212", "arxiv:2304.00152", "arxiv:2304.0069", "arxiv:2304.00058", "arxiv:2304.00054", "arxiv:2304.00046", "arxiv:2304.06035", "arxiv:2303.18223", "arxiv:2303.18190", "arxiv:2303.18125", "arxiv:2303.17968", "arxiv:2303.17959", "arxiv:2303.17905", "arxiv:2303.17842", "arxiv:2303.17839", "arxiv:2303.17806", "arxiv:2303.17778", "arxiv:2303.17760", "arxiv:2303.17732", "arxiv:2303.17712", "arxiv:2303.17651", "arxiv:2303.17605", "arxiv:2303.17583", "arxiv:2303.17580", "arxiv:2303.17569", "arxiv:2303.17548", "arxiv:2303.17491", "arxiv:2303.17441", "arxiv:2303.17395", "arxiv:2304.06034", "arxiv:2303.17152", "arxiv:2303.17147", "arxiv:2303.17096", "arxiv:2303.16899", "arxiv:2303.16894", "arxiv:2303.16817", "arxiv:2303.16730", "arxiv:2303.16634", "arxiv:2303.16624", "arxiv:2303.16618", "arxiv:2303.16563", "arxiv:2303.16520", "arxiv:2303.16447", "arxiv:2303.16434", "arxiv:2303.16421", "arxiv:2303.16390", "arxiv:2303.16270", "arxiv:2303.16203", "arxiv:2303.16201", "arxiv:2303.16199", "arxiv:2303.16196", "arxiv:2303.16186", "arxiv:2303.16058", "arxiv:2303.15951", "arxiv:2303.15932", "arxiv:2303.15904", "arxiv:2303.15764", "arxiv:2303.15715", "arxiv:2303.15678", "arxiv:2304.00008", "arxiv:2303.15446", "arxiv:2303.15444", "arxiv:2303.15443", "arxiv:2303.15441", "arxiv:2303.15433", "arxiv:2303.15390", "arxiv:2303.15256", "arxiv:2303.15247", "arxiv:2303.15140", "arxiv:2303.15083", "arxiv:2303.15065", "arxiv:2303.14953", "arxiv:2303.14885", "arxiv:2303.14869", "arxiv:2303.14863", "arxiv:2303.14817", "arxiv:2303.14773", "arxiv:2303.14736", "arxiv:2303.14725", "arxiv:2303.14717", "arxiv:2303.14700", "arxiv:2303.14672", "arxiv:2303.14584", "arxiv:2303.14536", "arxiv:2303.14470", "arxiv:2303.14435", "arxiv:2303.14420", "arxiv:2303.14408", 
"arxiv:2303.14373", "arxiv:2303.14242", "arxiv:2303.14189", "arxiv:2303.14167", "arxiv:2303.14100", "arxiv:2303.14027", "arxiv:2303.13953", "arxiv:2303.13899", "arxiv:2303.13865", "arxiv:2303.13845", "arxiv:2303.13796", "arxiv:2303.13791", "arxiv:2303.13755", "arxiv:2303.13744", "arxiv:2303.13731", "arxiv:2303.13703", "arxiv:2303.13662", "arxiv:2303.13593", "arxiv:2303.13512", "arxiv:2303.13508", "arxiv:2303.13501", "arxiv:2303.13496", "arxiv:2303.13399", "arxiv:2303.13396", "arxiv:2303.13290", "arxiv:2303.13190", "arxiv:2303.13132", "arxiv:2303.13090", "arxiv:2303.13022", "arxiv:2303.13005", "arxiv:2303.12791", "arxiv:2303.12786", "arxiv:2303.12766", "arxiv:2303.12670", "arxiv:2303.12528", "arxiv:2303.12400", "arxiv:2303.12384", "arxiv:2303.12343", "arxiv:2303.12326", "arxiv:2303.12280", "arxiv:2303.12247", "arxiv:2303.12246", "arxiv:2303.12078", "arxiv:2303.12077", "arxiv:2303.12074", "arxiv:2303.12048", "arxiv:2303.12027", "arxiv:2303.12023", "arxiv:2303.12012", "arxiv:2303.11989", "arxiv:2303.11963", "arxiv:2303.11932", "arxiv:2303.11926", "arxiv:2303.11917", "arxiv:2303.11906", "arxiv:2303.11726", "arxiv:2303.11722", "arxiv:2303.11717", "arxiv:2303.11681", "arxiv:2303.11630", "arxiv:2303.11629", "arxiv:2303.11589", "arxiv:2303.11570", "arxiv:2303.11546", "arxiv:2303.11526", "arxiv:2303.11502", "arxiv:2303.11424", "arxiv:2303.11419", "arxiv:2303.11408", "arxiv:2303.11403", "arxiv:2303.11381", "arxiv:2303.11366", "arxiv:2303.11329", "arxiv:2303.11328", "arxiv:2303.11324", "arxiv:2303.11305", "arxiv:2303.11225", "arxiv:2303.11203", "arxiv:2303.11126", "arxiv:2303.11114", "arxiv:2303.11089", "arxiv:2303.11057", "arxiv:2303.11011", "arxiv:2303.10876", "arxiv:2303.10854", "arxiv:2303.10840", "arxiv:2303.10709", "arxiv:2303.10658", "arxiv:2303.10571", "arxiv:2303.10475", "arxiv:2303.10438", "arxiv:2303.10437", "arxiv:2303.10073", "arxiv:2303.10058", "arxiv:2303.09962", "arxiv:2303.09941", "arxiv:2303.09870", "arxiv:2303.09867", "arxiv:2303.09833", "arxiv:2303.09758", 
"arxiv:2303.09756", "arxiv:2303.09752", "arxiv:2303.09730", "arxiv:2303.09713", "arxiv:2303.09706", "arxiv:2303.09663", "arxiv:2303.09650", "arxiv:2303.09604", "arxiv:2303.09556", "arxiv:2303.09535", "arxiv:2303.09495", "arxiv:2303.09375", "arxiv:2303.09268", "arxiv:2303.09914", "arxiv:2303.09224", "arxiv:2303.09181", "arxiv:2303.09152", "arxiv:2303.09051", "arxiv:2303.09043", "arxiv:2303.09036", "arxiv:2303.09014", "arxiv:2303.08998", "arxiv:2303.08983", "arxiv:2303.08914", "arxiv:2303.08888", "arxiv:2303.08810", "arxiv:2303.08789", "arxiv:2303.08717", "arxiv:2303.08698", "arxiv:2303.08685", "arxiv:2303.15324", "arxiv:2303.08440", "arxiv:2303.08340", "arxiv:2303.08331", "arxiv:2303.08316", "arxiv:2303.08308", "arxiv:2303.08268", "arxiv:2303.08231", "arxiv:2303.08138", "arxiv:2303.08135", "arxiv:2303.08134", "arxiv:2303.08131", "arxiv:2303.08128", "arxiv:2303.08084", "arxiv:2303.08010", "arxiv:2303.07820", "arxiv:2303.07815", "arxiv:2303.07811", "arxiv:2303.07806", "arxiv:2303.07726", "arxiv:2303.07678", "arxiv:2303.07662", "arxiv:2303.07653", "arxiv:2303.07622", "arxiv:2303.07616", "arxiv:2303.07543", "arxiv:2303.07522", "arxiv:2303.07347", "arxiv:2303.07345", "arxiv:2303.07338", "arxiv:2303.07337", "arxiv:2303.07284", "arxiv:2303.07274", "arxiv:2303.07224", "arxiv:2303.07123", "arxiv:2303.07110", "arxiv:2303.07096", "arxiv:2303.07065", "arxiv:2303.06937", "arxiv:2303.06919", "arxiv:2303.06885", "arxiv:2303.06877", "arxiv:2303.06869", "arxiv:2303.06865", "arxiv:2303.06859", "arxiv:2303.06856", "arxiv:2303.06840", "arxiv:2303.06833", "arxiv:2303.06697", "arxiv:2303.06689", "arxiv:2303.06615", "arxiv:2303.06601", "arxiv:2303.06594", "arxiv:2303.06573", "arxiv:2303.06571", "arxiv:2303.07839", "arxiv:2303.06388", "arxiv:2303.06285", "arxiv:2303.06247", "arxiv:2303.06223", "arxiv:2303.06209", "arxiv:2303.06167", "arxiv:2303.06146", "arxiv:2303.06042", "arxiv:2303.05983", "arxiv:2303.05937", "arxiv:2303.05916", "arxiv:2303.05892", "arxiv:2303.05762", "arxiv:2303.05760", 
"arxiv:2303.05735", "arxiv:2303.05724", "arxiv:2303.05715", "arxiv:2303.05689", "arxiv:2303.05475", "arxiv:2303.05367", "arxiv:2303.05312", "arxiv:2303.05266", "arxiv:2303.05251", "arxiv:2303.05195", "arxiv:2303.05078", "arxiv:2303.05072", "arxiv:2303.05071", "arxiv:2303.05063", "arxiv:2303.04935", "arxiv:2303.04869", "arxiv:2303.04803", "arxiv:2303.04673", "arxiv:2303.04671", "arxiv:2303.04616", "arxiv:2303.04278", "arxiv:2303.04249", "arxiv:2303.04129", "arxiv:2303.04077", "arxiv:2303.03755", "arxiv:2303.03684", "arxiv:2303.03667", "arxiv:2303.03595", "arxiv:2303.03480", "arxiv:2303.03378", "arxiv:2303.03323", "arxiv:2303.03315", "arxiv:2303.03202", "arxiv:2303.03052", "arxiv:2303.02936", "arxiv:2303.02927", "arxiv:2303.02165", "arxiv:2303.02760", "arxiv:2303.02700", "arxiv:2303.02584", "arxiv:2303.02506", "arxiv:2303.02489", "arxiv:2303.02483", "arxiv:2303.02437", "arxiv:2303.02375", "arxiv:2303.02328", "arxiv:2303.05398", "arxiv:2303.02280", "arxiv:2303.02153", "arxiv:2303.02151", "arxiv:2303.03108", "arxiv:2303.01904", "arxiv:2303.01903", "arxiv:2303.01748", "arxiv:2303.1673", "arxiv:2303.1639", "arxiv:2303.01598", "arxiv:2303.01559", "arxiv:2303.1447", "arxiv:2303.1329", "arxiv:2303.1267", "arxiv:2303.01239", "arxiv:2303.01052", "arxiv:2303.0987", "arxiv:2303.00973", "arxiv:2303.00971", "arxiv:2303.00938", "arxiv:2303.00914", "arxiv:2303.00905", "arxiv:2303.00885", "arxiv:2303.00855", "arxiv:2303.00807", "arxiv:2303.00747", "arxiv:2303.00575", "arxiv:2303.00502", "arxiv:2303.00462", "arxiv:2303.00404", "arxiv:2303.00340", "arxiv:2303.01248", "arxiv:2303.00215", "arxiv:2303.00176", "arxiv:2303.00040", "arxiv:2303.0034", "arxiv:2302.14772", "arxiv:2302.14771", "arxiv:2302.14746", "arxiv:2302.14680", "arxiv:2302.14677", "arxiv:2302.14673", "arxiv:2302.14578", "arxiv:2302.14574", "arxiv:2302.14434", "arxiv:2302.14416", "arxiv:2302.14401", "arxiv:2302.14340", "arxiv:2302.14325", "arxiv:2302.14311", "arxiv:2302.14290", "arxiv:2302.14115", "arxiv:2302.14051", 
"arxiv:2302.14045", "arxiv:2302.13971", "arxiv:2302.13562", "arxiv:2302.13439", "arxiv:2302.12986", "arxiv:2303.1514", "arxiv:2302.12915", "arxiv:2302.12828", "arxiv:2302.12822", "arxiv:2302.12251", "arxiv:2302.12246", "arxiv:2302.12192", "arxiv:2302.11713", "arxiv:2302.11603", "arxiv:2302.11566", "arxiv:2302.11211", "arxiv:2302.11154", "arxiv:2302.10866", "arxiv:2302.09923", "arxiv:2302.10202", "arxiv:2306.17582", "arxiv:2302.09649", "arxiv:2302.09311", "arxiv:2302.09270", "arxiv:2302.09051", "arxiv:2302.08923", "arxiv:2302.08893", "arxiv:2302.10915", "arxiv:2302.08509", "arxiv:2302.08504", "arxiv:2302.08242", "arxiv:2302.08215", "arxiv:2302.08207", "arxiv:2302.07994", "arxiv:2302.07848", "arxiv:2302.07842", "arxiv:2302.07685", "arxiv:2302.07672", "arxiv:2302.07459", "arxiv:2302.07387", "arxiv:2302.07224", "arxiv:2302.06833", "arxiv:2302.06784", "arxiv:2302.06586", "arxiv:2302.05737", "arxiv:2302.05698", "arxiv:2302.05543", "arxiv:2302.05206", "arxiv:2302.04866", "arxiv:2302.04761", "arxiv:2302.04304", "arxiv:2302.04218", "arxiv:2302.04166", "arxiv:2302.04023", "arxiv:2302.06476", "arxiv:2302.03900", "arxiv:2302.03848", "arxiv:2302.03686", "arxiv:2302.03665", "arxiv:2302.10893", "arxiv:2302.03011", "arxiv:2302.02908", "arxiv:2302.02676", "arxiv:2302.02615", "arxiv:2302.02410", "arxiv:2302.01928", "arxiv:2302.01872", "arxiv:2302.01571", "arxiv:2302.01560", "arxiv:2302.1434", "arxiv:2302.01399", "arxiv:2302.01318", "arxiv:2302.01316", "arxiv:2302.01162", "arxiv:2302.00988", "arxiv:2302.00923", "arxiv:2302.00763", "arxiv:2302.00618", "arxiv:2302.00268", "arxiv:2302.00111", "arxiv:2302.00093", "arxiv:2302.00070", "arxiv:2301.13823", "arxiv:2301.13808", "arxiv:2301.13688", "arxiv:2301.13616", "arxiv:2301.13379", "arxiv:2301.13331", "arxiv:2301.12959", "arxiv:2301.12900", "arxiv:2301.12867", "arxiv:2301.12726", "arxiv:2301.12643", "arxiv:2301.12597", "arxiv:2301.12050", "arxiv:2301.12004", "arxiv:2301.11476", "arxiv:2301.11320", "arxiv:2301.11270", "arxiv:2301.10972", 
"arxiv:2301.10896", "arxiv:2301.10521", "arxiv:2301.10460", "arxiv:2301.10100", "arxiv:2301.09632", "arxiv:2301.09231", "arxiv:2301.09091", "arxiv:2301.08739", "arxiv:2301.08727", "arxiv:2301.08556", "arxiv:2301.07525", "arxiv:2301.07255", "arxiv:2301.06782", "arxiv:2301.06719", "arxiv:2301.06627", "arxiv:2301.06281", "arxiv:2301.06267", "arxiv:2301.06051", "arxiv:2301.05880", "arxiv:2301.05843", "arxiv:2301.05221", "arxiv:2301.04871", "arxiv:2301.04604", "arxiv:2301.04655", "arxiv:2301.04502", "arxiv:2301.04104", "arxiv:2301.04467", "arxiv:2301.02667", "arxiv:2301.03046", "arxiv:2301.03044", "arxiv:2302.05294", "arxiv:2301.04011", "arxiv:2301.02998", "arxiv:2301.02401", "arxiv:2301.02371", "arxiv:2301.02364", "arxiv:2301.02239", "arxiv:2301.02111", "arxiv:2301.01953", "arxiv:2301.01949", "arxiv:2301.01928", "arxiv:2301.01893", "arxiv:2301.01820", "arxiv:2301.01805", "arxiv:2301.01795", "arxiv:2301.01767", "arxiv:2301.01751", "arxiv:2301.01661", "arxiv:2301.01380", "arxiv:2301.01283", "arxiv:2301.01146", "arxiv:2301.1132", "arxiv:2301.00794", "arxiv:2301.00785", "arxiv:2301.00330", "arxiv:2301.00303", "arxiv:2301.00234", "arxiv:2301.01217", "arxiv:2301.01218", "arxiv:2301.00023", "arxiv:2212.14593", "arxiv:2212.14532", "arxiv:2301.00008", "arxiv:2212.14306", "arxiv:2212.14169", "arxiv:2212.14034", "arxiv:2212.14024", "arxiv:2212.14704", "arxiv:2212.13563", "arxiv:2212.13326", "arxiv:2212.13138", "arxiv:2212.12977", "arxiv:2212.12712", "arxiv:2212.12669", "arxiv:2212.12395", "arxiv:2212.12393", "arxiv:2212.12249", "arxiv:2212.12017", "arxiv:2212.12324", "arxiv:2212.11719", "arxiv:2212.11685", "arxiv:2212.11613", "arxiv:2212.11565", "arxiv:2212.11270", "arxiv:2212.11263", "arxiv:2212.11042", "arxiv:2212.10957", "arxiv:2212.10938", "arxiv:2212.10881", "arxiv:2212.10773", "arxiv:2212.10720", "arxiv:2212.10696", "arxiv:2212.10621", "arxiv:2212.10618", "arxiv:2212.10559", "arxiv:2212.10556", "arxiv:2212.10535", "arxiv:2212.10509", "arxiv:2212.10496", "arxiv:2212.13894", 
"arxiv:2212.10466", "arxiv:2212.10403", "arxiv:2212.10325", "arxiv:2212.10264", "arxiv:2212.10229", "arxiv:2212.10170", "arxiv:2212.10071", "arxiv:2212.10060", "arxiv:2212.10001", "arxiv:2212.09802", "arxiv:2212.09801", "arxiv:2212.09748", "arxiv:2212.09741", "arxiv:2212.09737", "arxiv:2212.09736", "arxiv:2212.09713", "arxiv:2212.09597", "arxiv:2212.09561", "arxiv:2212.09478", "arxiv:2212.09462", "arxiv:2212.09412", "arxiv:2212.09262", "arxiv:2212.09251", "arxiv:2212.09235", "arxiv:2212.09196", "arxiv:2212.09180", "arxiv:2212.09146", "arxiv:2212.09072", "arxiv:2212.09067", "arxiv:2212.08686", "arxiv:2212.08653", "arxiv:2212.08641", "arxiv:2212.08541", "arxiv:2212.08410", "arxiv:2212.08254", "arxiv:2212.08120", "arxiv:2212.08061", "arxiv:2212.08059", "arxiv:2212.08057", "arxiv:2212.08073", "arxiv:2212.07593", "arxiv:2212.07422", "arxiv:2212.07388", "arxiv:2212.07242", "arxiv:2212.07048", "arxiv:2212.07016", "arxiv:2212.06969", "arxiv:2212.06820", "arxiv:2212.06785", "arxiv:2212.06615", "arxiv:2212.06493", "arxiv:2212.06331", "arxiv:2212.06202", "arxiv:2212.05946", "arxiv:2212.05698", "arxiv:2212.05680", "arxiv:2212.05638", "arxiv:2212.05370", "arxiv:2212.05262", "arxiv:2212.05231", "arxiv:2212.04825", "arxiv:2212.05032", "arxiv:2212.04780", "arxiv:2212.04745", "arxiv:2212.04655", "arxiv:2212.05058", "arxiv:2212.04488", "arxiv:2212.04385", "arxiv:2212.04319", "arxiv:2212.04145", "arxiv:2212.04129", "arxiv:2212.04092", "arxiv:2212.04088", "arxiv:2212.03862", "arxiv:2212.04248", "arxiv:2212.03741", "arxiv:2212.04247", "arxiv:2212.03293", "arxiv:2212.03191", "arxiv:2212.02774", "arxiv:2212.02761", "arxiv:2212.02501", "arxiv:2212.02499", "arxiv:2212.02497", "arxiv:2212.02469", "arxiv:2212.02175", "arxiv:2212.01789", "arxiv:2212.01747", "arxiv:2212.01602", "arxiv:2212.01448", "arxiv:2212.01340", "arxiv:2212.01206", "arxiv:2212.01120", "arxiv:2212.01076", "arxiv:2212.00786", "arxiv:2212.00776", "arxiv:2212.00767", "arxiv:2212.00542", "arxiv:2212.0478", "arxiv:2212.00460", 
"arxiv:2212.00334", "arxiv:2212.00325", "arxiv:2212.00193", "arxiv:2212.0046", "arxiv:2211.17256", "arxiv:2211.17192", "arxiv:2211.17042", "arxiv:2211.16431", "arxiv:2211.16312", "arxiv:2211.16158", "arxiv:2211.16056", "arxiv:2211.16991", "arxiv:2211.15774", "arxiv:2211.15736", "arxiv:2211.15657", "arxiv:2211.16198", "arxiv:2211.16193", "arxiv:2211.15029", "arxiv:2211.14876", "arxiv:2211.14710", "arxiv:2211.14646", "arxiv:2211.14512", "arxiv:2211.14306", "arxiv:2211.14304", "arxiv:2211.14293", "arxiv:2211.14173", "arxiv:2211.14086", "arxiv:2211.14020", "arxiv:2211.13955", "arxiv:2211.13874", "arxiv:2211.13775", "arxiv:2211.13757", "arxiv:2211.13551", "arxiv:2211.15393", "arxiv:2211.13515", "arxiv:2211.13287", "arxiv:2211.13226", "arxiv:2211.13202", "arxiv:2211.13081", "arxiv:2211.12886", "arxiv:2211.12870", "arxiv:2211.12853", "arxiv:2211.12739", "arxiv:2211.12735", "arxiv:2211.12634", "arxiv:2211.12588", "arxiv:2211.12542", "arxiv:2211.12486", "arxiv:2211.12380", "arxiv:2211.12860", "arxiv:2211.12352", "arxiv:2211.12340", "arxiv:2211.12254", "arxiv:2211.12194", "arxiv:2211.12131", "arxiv:2211.12857", "arxiv:2211.12018", "arxiv:2211.11936", "arxiv:2211.11825", "arxiv:2211.11733", "arxiv:2211.11720", "arxiv:2211.11682", "arxiv:2211.11446", "arxiv:2211.11432", "arxiv:2211.11296", "arxiv:2211.15462", "arxiv:2211.11177", "arxiv:2211.11167", "arxiv:2211.11158", "arxiv:2211.11152", "arxiv:2211.11082", "arxiv:2211.15533", "arxiv:2211.11004", "arxiv:2211.10772", "arxiv:2211.10656", "arxiv:2211.10655", "arxiv:2211.10598", "arxiv:2211.10593", "arxiv:2211.10579", "arxiv:2211.10439", "arxiv:2211.10435", "arxiv:2211.11559", "arxiv:2211.09809", "arxiv:2211.09808", "arxiv:2211.09800", "arxiv:2211.09778", "arxiv:2211.09703", "arxiv:2211.10154", "arxiv:2211.09552", "arxiv:2211.10156", "arxiv:2211.10157", "arxiv:2211.09260", "arxiv:2211.09120", "arxiv:2211.09119", "arxiv:2211.09117", "arxiv:2211.09110", "arxiv:2211.09085", "arxiv:2211.08887", "arxiv:2211.08658", "arxiv:2211.08609", 
"arxiv:2211.09699", "arxiv:2211.08332", "arxiv:2211.08213", "arxiv:2211.09066", "arxiv:2211.07636", "arxiv:2211.07634", "arxiv:2211.07600", "arxiv:2211.07591", "arxiv:2211.07521", "arxiv:2211.07198", "arxiv:2211.07157", "arxiv:2211.07091", "arxiv:2211.06892", "arxiv:2211.06885", "arxiv:2211.06869", "arxiv:2211.06627", "arxiv:2211.06597", "arxiv:2211.06012", "arxiv:2211.05776", "arxiv:2211.06220", "arxiv:2211.05719", "arxiv:2211.06324", "arxiv:2211.05100", "arxiv:2211.04700", "arxiv:2211.04346", "arxiv:2211.04236", "arxiv:2211.03989", "arxiv:2211.02408", "arxiv:2211.01910", "arxiv:2211.01786", "arxiv:2211.01562", "arxiv:2211.1402", "arxiv:2211.01324", "arxiv:2211.1238", "arxiv:2211.0912", "arxiv:2211.00453", "arxiv:2211.00255", "arxiv:2211.1287", "arxiv:2210.17432", "arxiv:2210.17323", "arxiv:2210.16886", "arxiv:2210.16776", "arxiv:2210.15418", "arxiv:2210.15097", "arxiv:2210.14831", "arxiv:2210.14140", "arxiv:2210.13832", "arxiv:2210.13435", "arxiv:2210.13382", "arxiv:2210.13067", "arxiv:2210.12485", "arxiv:2210.12461", "arxiv:2210.12460", "arxiv:2210.12459", "arxiv:2210.12152", "arxiv:2210.11610", "arxiv:2210.11416", "arxiv:2210.10763", "arxiv:2210.10760", "arxiv:2210.10090", "arxiv:2210.09997", "arxiv:2210.09996", "arxiv:2210.09887", "arxiv:2210.09551", "arxiv:2210.09276", "arxiv:2210.09261", "arxiv:2210.08933", "arxiv:2210.08750", "arxiv:2210.08457", "arxiv:2210.08363", "arxiv:2210.07574", "arxiv:2210.07558", "arxiv:2210.07241", "arxiv:2210.07225", "arxiv:2210.07183", "arxiv:2210.07128", "arxiv:2210.06849", "arxiv:2210.06742", "arxiv:2210.06726", "arxiv:2210.06710", "arxiv:2210.10634", "arxiv:2210.06455", "arxiv:2210.06284", "arxiv:2210.06096", "arxiv:2210.06006", "arxiv:2210.05557", "arxiv:2210.05359", "arxiv:2210.05129", "arxiv:2210.04871", "arxiv:2210.04242", "arxiv:2210.05361", "arxiv:2210.04185", "arxiv:2210.04150", "arxiv:2210.03884", "arxiv:2210.03822", "arxiv:2210.03821", "arxiv:2210.03628", "arxiv:2210.03493", "arxiv:2210.03350", "arxiv:2210.03180", 
"arxiv:2210.03167", "arxiv:2210.03117", "arxiv:2210.03114", "arxiv:2210.03094", "arxiv:2210.03078", "arxiv:2210.03057", "arxiv:2210.03052", "arxiv:2210.02889", "arxiv:2210.03629", "arxiv:2210.02493", "arxiv:2210.02414", "arxiv:2210.02406", "arxiv:2210.02303", "arxiv:2210.02223", "arxiv:2210.01887", "arxiv:2210.01863", "arxiv:2210.01857", "arxiv:2210.01781", "arxiv:2210.01504", "arxiv:2210.01379", "arxiv:2210.01296", "arxiv:2210.01244", "arxiv:2210.01241", "arxiv:2210.01240", "arxiv:2210.01208", "arxiv:2210.00968", "arxiv:2210.00939", "arxiv:2210.00828", "arxiv:2210.00720", "arxiv:2210.00647", "arxiv:2210.03735", "arxiv:2210.00379", "arxiv:2210.00312", "arxiv:2210.00893", "arxiv:2209.15304", "arxiv:2209.15280", "arxiv:2209.15003", "arxiv:2209.14988", "arxiv:2209.14941", "arxiv:2209.14792", "arxiv:2209.14728", "arxiv:2209.14610", "arxiv:2209.14375", "arxiv:2209.13583", "arxiv:2209.13487", "arxiv:2209.12711", "arxiv:2209.12343", "arxiv:2209.11755", "arxiv:2209.11302", "arxiv:2209.10901", "arxiv:2209.10063", "arxiv:2209.09746", "arxiv:2209.09513", "arxiv:2209.09351", "arxiv:2209.09050", "arxiv:2209.08524", "arxiv:2209.08141", "arxiv:2209.07511", "arxiv:2209.07370", "arxiv:2209.06794", "arxiv:2209.06691", "arxiv:2209.06506", "arxiv:2209.05072", "arxiv:2209.03320", "arxiv:2209.02341", "arxiv:2209.01975", "arxiv:2209.01540", "arxiv:2209.00647", "arxiv:2209.00349", "arxiv:2208.14614", "arxiv:2209.0737", "arxiv:2208.14271", "arxiv:2208.13029", "arxiv:2208.12256", "arxiv:2208.12242", "arxiv:2208.11970", "arxiv:2209.07858", "arxiv:2208.10816", "arxiv:2208.10791", "arxiv:2208.10741", "arxiv:2208.10497", "arxiv:2208.09847", "arxiv:2208.09846", "arxiv:2208.09833", "arxiv:2208.09708", "arxiv:2208.09418", "arxiv:2208.09243", "arxiv:2208.08845", "arxiv:2208.08165", "arxiv:2208.07652", "arxiv:2208.10273", "arxiv:2208.06163", "arxiv:2208.05775", "arxiv:2208.05719", "arxiv:2208.05318", "arxiv:2208.03516", "arxiv:2208.03299", "arxiv:2208.03188", "arxiv:2208.02532", "arxiv:2208.02012", 
"arxiv:2208.01626", "arxiv:2208.01618", "arxiv:2208.01119", "arxiv:2208.00809", "arxiv:2208.00638", "arxiv:2208.00553", "arxiv:2208.00277", "arxiv:2207.13085", "arxiv:2207.13080", "arxiv:2208.00780", "arxiv:2208.02294", "arxiv:2207.12101", "arxiv:2207.12021", "arxiv:2207.11477", "arxiv:2207.11209", "arxiv:2207.11163", "arxiv:2207.10660", "arxiv:2207.10342", "arxiv:2207.10023", "arxiv:2207.09615", "arxiv:2207.09158", "arxiv:2207.08536", "arxiv:2207.08455", "arxiv:2207.07051", "arxiv:2207.06324", "arxiv:2207.05808", "arxiv:2207.05739", "arxiv:2207.05608", "arxiv:2207.05223", "arxiv:2207.04429", "arxiv:2207.03442", "arxiv:2207.02696", "arxiv:2207.02094", "arxiv:2207.01614", "arxiv:2207.01610", "arxiv:2207.01405", "arxiv:2207.01206", "arxiv:2207.01203", "arxiv:2207.00774", "arxiv:2207.00747", "arxiv:2206.14858", "arxiv:2206.12571", "arxiv:2206.12338", "arxiv:2206.12100", "arxiv:2206.11804", "arxiv:2206.11646", "arxiv:2206.11309", "arxiv:2206.10885", "arxiv:2206.10658", "arxiv:2206.10311", "arxiv:2206.09541", "arxiv:2206.09457", "arxiv:2206.09191", "arxiv:2206.09027", "arxiv:2206.08853", "arxiv:2206.08702", "arxiv:2206.08464", "arxiv:2206.08155", "arxiv:2206.08009", "arxiv:2206.07682", "arxiv:2206.07048", "arxiv:2206.07255", "arxiv:2206.07160", "arxiv:2206.06640", "arxiv:2206.06614", "arxiv:2206.06336", "arxiv:2206.05764", "arxiv:2206.05399", "arxiv:2206.04673", "arxiv:2206.04656", "arxiv:2206.04615", "arxiv:2206.05275", "arxiv:2206.04105", "arxiv:2206.04040", "arxiv:2206.03281", "arxiv:2206.03012", "arxiv:2206.03009", "arxiv:2206.02909", "arxiv:2206.02743", "arxiv:2206.02721", "arxiv:2206.02409", "arxiv:2206.02369", "arxiv:2206.02336", "arxiv:2206.01986", "arxiv:2206.01720", "arxiv:2206.01640", "arxiv:2206.01520", "arxiv:2206.01256", "arxiv:2206.01153", "arxiv:2206.01009", "arxiv:2206.00820", "arxiv:2206.0664", "arxiv:2206.0404", "arxiv:2206.01186", "arxiv:2206.0164", "arxiv:2206.0082", "arxiv:2205.15531", "arxiv:2205.15367", "arxiv:2205.15288", "arxiv:2205.14986", 
"arxiv:2205.14900", "arxiv:2205.14826", "arxiv:2205.14756", "arxiv:2205.14727", "arxiv:2205.14690", "arxiv:2205.14230", "arxiv:2205.14219", "arxiv:2205.14217", "arxiv:2205.14100", "arxiv:2205.13997", "arxiv:2205.13803", "arxiv:2205.13636", "arxiv:2205.13412", "arxiv:2205.12952", "arxiv:2205.12590", "arxiv:2205.12255", "arxiv:2205.11916", "arxiv:2205.11822", "arxiv:2205.11423", "arxiv:2205.11098", "arxiv:2205.10803", "arxiv:2205.10692", "arxiv:2205.10625", "arxiv:2205.10330", "arxiv:2205.10249", "arxiv:2205.09911", "arxiv:2205.09726", "arxiv:2205.09712", "arxiv:2205.09613", "arxiv:2205.09575", "arxiv:2205.09314", "arxiv:2205.08851", "arxiv:2205.08645", "arxiv:2205.08508", "arxiv:2205.08464", "arxiv:2205.08232", "arxiv:2205.07460", "arxiv:2205.07352", "arxiv:2205.07882", "arxiv:2205.06175", "arxiv:2205.05076", "arxiv:2205.05019", "arxiv:2205.04596", "arxiv:2205.04550", "arxiv:2205.03892", "arxiv:2205.03692", "arxiv:2205.03401", "arxiv:2205.02767", "arxiv:2205.02517", "arxiv:2205.01941", "arxiv:2205.01694", "arxiv:2205.01543", "arxiv:2205.01068", "arxiv:2205.00176", "arxiv:2204.14198", "arxiv:2204.12749", "arxiv:2204.12667", "arxiv:2204.12511", "arxiv:2204.11545", "arxiv:2204.11447", "arxiv:2204.11335", "arxiv:2204.10825", "arxiv:2204.10641", "arxiv:2204.10628", "arxiv:2204.10536", "arxiv:2204.10509", "arxiv:2204.09453", "arxiv:2204.08261", "arxiv:2204.07496", "arxiv:2204.07439", "arxiv:2204.07372", "arxiv:2204.07341", "arxiv:2204.07190", "arxiv:2204.07141", "arxiv:2204.06745", "arxiv:2204.06331", "arxiv:2204.05862", "arxiv:2204.05859", "arxiv:2204.05610", "arxiv:2204.05488", "arxiv:2204.05239", "arxiv:2204.04799", "arxiv:2204.04216", "arxiv:2204.03809", "arxiv:2204.03649", "arxiv:2204.07433", "arxiv:2204.03162", "arxiv:2204.03084", "arxiv:2204.02964", "arxiv:2204.02663", "arxiv:2204.02624", "arxiv:2204.02610", "arxiv:2204.02549", "arxiv:2204.02329", "arxiv:2204.02311", "arxiv:2204.02235", "arxiv:2204.02030", "arxiv:2204.00628", "arxiv:2204.01691", "arxiv:2204.01464", 
"arxiv:2204.01321", "arxiv:2204.01264", "arxiv:2204.00718", "arxiv:2204.00598", "arxiv:2204.00185", "arxiv:2204.00097", "arxiv:2203.17274", "arxiv:2203.17265", "arxiv:2203.17261", "arxiv:2203.17234", "arxiv:2203.16513", "arxiv:2203.15756", "arxiv:2203.15556", "arxiv:2203.15544", "arxiv:2203.15270", "arxiv:2203.16329", "arxiv:2203.15207", "arxiv:2203.14941", "arxiv:2203.14940", "arxiv:2203.14465", "arxiv:2203.14221", "arxiv:2203.13922", "arxiv:2203.13591", "arxiv:2203.13560", "arxiv:2203.13450", "arxiv:2203.13344", "arxiv:2203.13310", "arxiv:2203.12533", "arxiv:2203.12247", "arxiv:2203.12119", "arxiv:2203.11370", "arxiv:2203.11171", "arxiv:2203.11147", "arxiv:2203.10610", "arxiv:2203.10378", "arxiv:2203.09693", "arxiv:2203.09630", "arxiv:2203.09410", "arxiv:2203.09183", "arxiv:2203.09100", "arxiv:2203.08913", "arxiv:2203.08745", "arxiv:2203.08509", "arxiv:2203.08383", "arxiv:2203.08372", "arxiv:2203.08243", "arxiv:2203.12677", "arxiv:2203.07735", "arxiv:2203.07706", "arxiv:2203.07593", "arxiv:2203.07540", "arxiv:2203.07404", "arxiv:2203.07303", "arxiv:2203.07171", "arxiv:2203.06566", "arxiv:2203.06430", "arxiv:2203.06390", "arxiv:2203.06304", "arxiv:2203.06169", "arxiv:2203.05843", "arxiv:2203.05797", "arxiv:2203.05557", "arxiv:2203.05550", "arxiv:2203.05115", "arxiv:2203.04566", "arxiv:2203.04559", "arxiv:2203.03570", "arxiv:2203.03619", "arxiv:2203.02549", "arxiv:2203.02201", "arxiv:2203.02155", "arxiv:2203.01937", "arxiv:2203.1137", "arxiv:2203.0963", "arxiv:2203.00928", "arxiv:2203.00867", "arxiv:2203.00537", "arxiv:2203.0357", "arxiv:2203.00077", "arxiv:2202.13587", "arxiv:2202.13556", "arxiv:2202.13473", "arxiv:2202.13047", "arxiv:2202.12719", "arxiv:2202.11960", "arxiv:2202.11921", "arxiv:2202.11705", "arxiv:2202.10870", "arxiv:2202.10156", "arxiv:2202.09778", "arxiv:2202.09571", "arxiv:2202.09153", "arxiv:2202.08791", "arxiv:2202.08335", "arxiv:2202.07800", "arxiv:2202.07728", "arxiv:2202.07432", "arxiv:2202.06991", "arxiv:2202.06687", "arxiv:2202.06633", 
"arxiv:2202.06417", "arxiv:2202.05458", "arxiv:2202.05352", "arxiv:2202.05144", "arxiv:2202.04579", "arxiv:2202.04256", "arxiv:2202.04241", "arxiv:2202.04053", "arxiv:2202.03629", "arxiv:2202.04772", "arxiv:2202.03609", "arxiv:2202.03376", "arxiv:2202.03052", "arxiv:2202.02912", "arxiv:2202.02464", "arxiv:2202.01374", "arxiv:2202.01110", "arxiv:2202.00758", "arxiv:2202.00181", "arxiv:2201.13361", "arxiv:2201.13178", "arxiv:2201.12594", "arxiv:2201.12519", "arxiv:2201.12487", "arxiv:2201.12329", "arxiv:2201.11990", "arxiv:2201.11903", "arxiv:2201.11176", "arxiv:2201.11114", "arxiv:2201.10222", "arxiv:2201.09865", "arxiv:2201.09830", "arxiv:2201.09792", "arxiv:2201.09636", "arxiv:2201.08903", "arxiv:2201.08663", "arxiv:2201.08264", "arxiv:2201.08239", "arxiv:2201.08102", "arxiv:2201.07207", "arxiv:2201.05718", "arxiv:2201.05409", "arxiv:2201.05337", "arxiv:2201.05242", "arxiv:2201.04620", "arxiv:2201.03720", "arxiv:2201.02767", "arxiv:2201.02756", "arxiv:2201.02177", "arxiv:2201.01609", "arxiv:2201.02732", "arxiv:2201.0983", "arxiv:2201.00057", "arxiv:2112.15121", "arxiv:2112.15034", "arxiv:2112.13410", "arxiv:2112.12345", "arxiv:2112.12251", "arxiv:2112.11685", "arxiv:2112.10982", "arxiv:2112.10752", "arxiv:2112.10510", "arxiv:2112.09583", "arxiv:2112.09332", "arxiv:2112.09245", "arxiv:2112.09219", "arxiv:2112.09118", "arxiv:2112.08726", "arxiv:2112.08674", "arxiv:2112.08654", "arxiv:2112.08633", "arxiv:2112.08619", "arxiv:2112.08459", "arxiv:2112.08140", "arxiv:2112.07660", "arxiv:2112.07194", "arxiv:2112.07076", "arxiv:2112.06905", "arxiv:2112.06825", "arxiv:2112.06749", "arxiv:2112.06592", "arxiv:2112.05197", "arxiv:2112.05120", "arxiv:2112.11446", "arxiv:2112.04478", "arxiv:2112.04426", "arxiv:2112.04137", "arxiv:2112.04016", "arxiv:2112.03865", "arxiv:2112.03518", "arxiv:2112.02424", "arxiv:2112.02418", "arxiv:2112.01898", "arxiv:2112.01518", "arxiv:2112.1341", "arxiv:2112.1051", "arxiv:2112.00874", "arxiv:2112.00861", "arxiv:2112.00775", "arxiv:2112.00431", 
"arxiv:2112.00114", "arxiv:2111.14820", "arxiv:2111.14592", "arxiv:2111.14293", "arxiv:2111.13853", "arxiv:2111.13495", "arxiv:2111.13196", "arxiv:2111.13139", "arxiv:2111.12982", "arxiv:2111.12681", "arxiv:2111.12476", "arxiv:2111.15521", "arxiv:2111.11840", "arxiv:2111.11828", "arxiv:2111.11704", "arxiv:2111.11632", "arxiv:2111.11215", "arxiv:2111.11153", "arxiv:2111.10734", "arxiv:2111.10701", "arxiv:2111.10659", "arxiv:2111.10601", "arxiv:2111.10007", "arxiv:2111.09876", "arxiv:2111.09733", "arxiv:2111.08230", "arxiv:2111.07058", "arxiv:2111.05803", "arxiv:2111.05297", "arxiv:2111.04746", "arxiv:2111.03017", "arxiv:2111.02552", "arxiv:2111.02114", "arxiv:2111.1482", "arxiv:2111.01353", "arxiv:2111.01297", "arxiv:2111.0823", "arxiv:2111.00743", "arxiv:2111.00539", "arxiv:2110.15900", "arxiv:2110.15545", "arxiv:2110.15188", "arxiv:2110.14943", "arxiv:2110.14168", "arxiv:2110.13640", "arxiv:2110.12894", "arxiv:2110.12654", "arxiv:2110.12612", "arxiv:2110.11929", "arxiv:2110.11314", "arxiv:2110.11309", "arxiv:2110.10661", "arxiv:2110.15802", "arxiv:2110.09455", "arxiv:2110.09276", "arxiv:2110.11405", "arxiv:2110.08515", "arxiv:2110.08396", "arxiv:2110.08387", "arxiv:2110.08323", "arxiv:2110.08207", "arxiv:2110.08188", "arxiv:2110.08118", "arxiv:2110.08094", "arxiv:2110.07985", "arxiv:2110.07803", "arxiv:2110.07477", "arxiv:2110.07367", "arxiv:2110.06615", "arxiv:2110.06482", "arxiv:2110.06257", "arxiv:2110.06241", "arxiv:2110.06207", "arxiv:2110.06051", "arxiv:2110.05999", "arxiv:2110.05841", "arxiv:2110.05789", "arxiv:2110.07719", "arxiv:2110.04869", "arxiv:2110.04744", "arxiv:2110.04495", "arxiv:2110.04375", "arxiv:2110.04350", "arxiv:2110.04121", "arxiv:2110.03860", "arxiv:2110.03804", "arxiv:2110.03620", "arxiv:2110.03611", "arxiv:2110.03544", "arxiv:2110.03485", "arxiv:2110.03262", "arxiv:2110.03187", "arxiv:2110.03183", "arxiv:2110.02999", "arxiv:2110.02998", "arxiv:2110.02879", "arxiv:2110.02782", "arxiv:2110.02639", "arxiv:2110.04065", "arxiv:2110.02393", 
"arxiv:2110.02369", "arxiv:2110.02037", "arxiv:2110.01691", "arxiv:2110.01593", "arxiv:2110.05929", "arxiv:2110.00641", "arxiv:2110.0362", "arxiv:2110.00168", "arxiv:2109.14739", "arxiv:2109.14119", "arxiv:2109.13995", "arxiv:2109.13916", "arxiv:2109.12761", "arxiv:2109.12302", "arxiv:2109.11797", "arxiv:2109.11301", "arxiv:2109.10862", "arxiv:2109.10465", "arxiv:2109.10086", "arxiv:2109.09948", "arxiv:2109.09519", "arxiv:2109.08877", "arxiv:2109.08828", "arxiv:2109.08668", "arxiv:2109.08603", "arxiv:2109.08203", "arxiv:2109.07740", "arxiv:2109.07713", "arxiv:2109.07043", "arxiv:2109.06780", "arxiv:2109.06513", "arxiv:2109.06474", "arxiv:2109.06379", "arxiv:2109.05892", "arxiv:2109.05739", "arxiv:2109.04504", "arxiv:2109.04096", "arxiv:2109.04084", "arxiv:2109.03856", "arxiv:2109.03839", "arxiv:2109.03426", "arxiv:2109.02429", "arxiv:2109.01134", "arxiv:2109.0774", "arxiv:2109.0678", "arxiv:2109.00642", "arxiv:2109.00527", "arxiv:2108.13454", "arxiv:2108.13320", "arxiv:2108.12113", "arxiv:2108.11714", "arxiv:2108.10904", "arxiv:2108.10510", "arxiv:2108.10447", "arxiv:2110.0386", "arxiv:2108.09875", "arxiv:2108.09346", "arxiv:2108.09084", "arxiv:2108.08398", "arxiv:2108.07258", "arxiv:2108.07154", "arxiv:2108.06098", "arxiv:2108.06027", "arxiv:2108.05839", "arxiv:2108.05540", "arxiv:2108.05018", "arxiv:2108.04392", "arxiv:2108.04186", "arxiv:2108.03702", "arxiv:2108.03152", "arxiv:2108.02982", "arxiv:2108.01912", "arxiv:2108.01843", "arxiv:2108.01832", "arxiv:2108.01513", "arxiv:2108.01369", "arxiv:2108.01073", "arxiv:2108.00955", "arxiv:2108.00644", "arxiv:2107.13639", "arxiv:2107.13586", "arxiv:2107.13433", "arxiv:2107.13132", "arxiv:2107.11357", "arxiv:2107.10064", "arxiv:2107.10060", "arxiv:2107.09846", "arxiv:2107.09609", "arxiv:2107.09028", "arxiv:2107.08031", "arxiv:2107.09448", "arxiv:2107.07651", "arxiv:2107.07436", "arxiv:2107.06383", "arxiv:2107.06278", "arxiv:2107.05876", "arxiv:2107.05720", "arxiv:2107.07394", "arxiv:2107.04589", "arxiv:2107.04388", 
"arxiv:2107.04163", "arxiv:2107.03374", "arxiv:2107.03250", "arxiv:2107.03144", "arxiv:2107.03006", "arxiv:2107.02791", "arxiv:2107.02681", "arxiv:2107.02561", "arxiv:2107.01943", "arxiv:2107.07343", "arxiv:2107.01408", "arxiv:2107.00649", "arxiv:2107.0325", "arxiv:2106.15846", "arxiv:2106.15776", "arxiv:2106.15078", "arxiv:2106.13884", "arxiv:2106.13629", "arxiv:2106.12871", "arxiv:2106.12720", "arxiv:2106.12620", "arxiv:2106.12570", "arxiv:2106.12506", "arxiv:2106.12497", "arxiv:2106.12314", "arxiv:2106.12124", "arxiv:2106.12052", "arxiv:2106.11420", "arxiv:2106.11310", "arxiv:2106.11299", "arxiv:2106.11251", "arxiv:2106.11097", "arxiv:2106.11086", "arxiv:2106.10689", "arxiv:2106.09805", "arxiv:2106.09777", "arxiv:2106.09711", "arxiv:2106.09667", "arxiv:2106.09614", "arxiv:2106.08942", "arxiv:2106.09516", "arxiv:2106.08364", "arxiv:2106.08166", "arxiv:2106.07889", "arxiv:2106.07447", "arxiv:2106.07306", "arxiv:2106.07139", "arxiv:2106.07032", "arxiv:2106.06927", "arxiv:2106.06823", "arxiv:2106.06210", "arxiv:2106.06134", "arxiv:2106.06125", "arxiv:2106.05937", "arxiv:2106.05566", "arxiv:2106.05237", "arxiv:2106.05234", "arxiv:2106.05194", "arxiv:2106.05139", "arxiv:2106.04990", "arxiv:2106.04823", "arxiv:2106.03844", "arxiv:2106.03374", "arxiv:2106.03253", "arxiv:2106.03157", "arxiv:2106.03027", "arxiv:2106.02886", "arxiv:2106.02713", "arxiv:2106.02695", "arxiv:2106.02654", "arxiv:2106.02636", "arxiv:2106.02227", "arxiv:2106.01793", "arxiv:2106.01680", "arxiv:2106.07345", "arxiv:2106.13863", "arxiv:2106.1257", "arxiv:2106.01216", "arxiv:2106.01144", "arxiv:2106.1142", "arxiv:2106.00967", "arxiv:2106.00957", "arxiv:2106.00920", "arxiv:2106.00882", "arxiv:2106.00827", "arxiv:2106.00791", "arxiv:2106.00507", "arxiv:2106.00221", "arxiv:2106.0168", "arxiv:2105.15183", "arxiv:2105.14669", "arxiv:2105.14491", "arxiv:2105.14103", "arxiv:2105.14080", "arxiv:2105.13913", "arxiv:2105.13710", "arxiv:2105.13650", "arxiv:2105.11108", "arxiv:2105.11084", "arxiv:2105.09821", 
"arxiv:2105.09816", "arxiv:2105.09710", "arxiv:2105.08791", "arxiv:2105.08261", "arxiv:2105.07583", "arxiv:2105.06597", "arxiv:2105.06041", "arxiv:2105.05563", "arxiv:2105.05332", "arxiv:2105.05233", "arxiv:2105.04906", "arxiv:2105.04779", "arxiv:2105.04651", "arxiv:2105.04387", "arxiv:2105.03933", "arxiv:2105.03748", "arxiv:2105.03599", "arxiv:2105.03075", "arxiv:2105.02274", "arxiv:2105.02201", "arxiv:2105.1408", "arxiv:2105.00666", "arxiv:2104.14294", "arxiv:2104.13921", "arxiv:2104.12928", "arxiv:2104.12671", "arxiv:2104.12470", "arxiv:2104.12465", "arxiv:2104.12369", "arxiv:2104.12016", "arxiv:2104.10078", "arxiv:2104.09937", "arxiv:2104.09791", "arxiv:2104.09068", "arxiv:2104.08771", "arxiv:2104.08691", "arxiv:2104.08661", "arxiv:2104.08253", "arxiv:2104.08202", "arxiv:2104.08051", "arxiv:2104.07858", "arxiv:2104.07198", "arxiv:2104.07186", "arxiv:2104.07012", "arxiv:2104.06967", "arxiv:2104.06917", "arxiv:2104.06599", "arxiv:2104.06394", "arxiv:2104.06083", "arxiv:2104.04692", "arxiv:2104.03507", "arxiv:2104.05557", "arxiv:2104.01027", "arxiv:2104.00990", "arxiv:2104.00650", "arxiv:2104.00587", "arxiv:2104.00464", "arxiv:2104.00323", "arxiv:2104.00322", "arxiv:2103.17269", "arxiv:2103.16173", "arxiv:2103.16002", "arxiv:2103.15982", "arxiv:2103.15679", "arxiv:2103.15606", "arxiv:2103.15429", "arxiv:2103.14770", "arxiv:2103.13990", "arxiv:2103.13744", "arxiv:2103.13322", "arxiv:2103.13262", "arxiv:2103.11251", "arxiv:2103.10484", "arxiv:2103.10380", "arxiv:2103.10360", "arxiv:2103.10022", "arxiv:2103.09548", "arxiv:2103.06695", "arxiv:2103.08313", "arxiv:2103.05236", "arxiv:2103.04831", "arxiv:2103.03568", "arxiv:2103.03404", "arxiv:2103.03230", "arxiv:2103.02014", "arxiv:2103.01954", "arxiv:2103.01931", "arxiv:2103.01189", "arxiv:2103.00742", "arxiv:2103.00020", "arxiv:2102.13582", "arxiv:2102.12781", "arxiv:2102.12060", "arxiv:2102.10492", "arxiv:2102.10271", "arxiv:2102.09206", "arxiv:2102.08860", "arxiv:2102.08501", "arxiv:2102.08325", "arxiv:2102.07835", 
"arxiv:2102.05095", "arxiv:2102.04208", "arxiv:2102.03243", "arxiv:2102.02779", "arxiv:2102.01987", "arxiv:2101.12059", "arxiv:2101.11351", "arxiv:2101.10545", "arxiv:2101.10488", "arxiv:2101.09459", "arxiv:2101.09336", "arxiv:2101.06757", "arxiv:2101.05667", "arxiv:2101.04041", "arxiv:2101.03961", "arxiv:2101.03958", "arxiv:2101.02338", "arxiv:2101.00529", "arxiv:2101.00190", "arxiv:2101.00117", "arxiv:2012.15416", "arxiv:2012.15375", "arxiv:2012.13628", "arxiv:2012.13475", "arxiv:2012.13048", "arxiv:2012.12624", "arxiv:2012.11635", "arxiv:2012.11552", "arxiv:2012.09838", "arxiv:2101.05204", "arxiv:2012.08383", "arxiv:2012.08377", "arxiv:2012.09740", "arxiv:2012.08054", "arxiv:2012.07280", "arxiv:2012.06400", "arxiv:2012.06166", "arxiv:2012.04584", "arxiv:2012.04225", "arxiv:2012.06333", "arxiv:2012.02748", "arxiv:2012.02525", "arxiv:2012.02469", "arxiv:2012.02190", "arxiv:2012.02189", "arxiv:2012.02046", "arxiv:2012.00451", "arxiv:2012.00413", "arxiv:2011.14522", "arxiv:2011.13377", "arxiv:2011.13046", "arxiv:2011.12829", "arxiv:2011.12490", "arxiv:2011.12100", "arxiv:2011.11261", "arxiv:2011.11233", "arxiv:2011.10566", "arxiv:2011.10464", "arxiv:2011.10043", "arxiv:2011.09899", "arxiv:2011.09533", "arxiv:2011.09526", "arxiv:2011.07435", "arxiv:2011.06037", "arxiv:2011.11760", "arxiv:2011.04267", "arxiv:2011.03623", "arxiv:2011.03023", "arxiv:2011.02863", "arxiv:2011.00406", "arxiv:2011.00362", "arxiv:2010.15464", "arxiv:2010.15382", "arxiv:2010.14810", "arxiv:2010.14672", "arxiv:2010.14484", "arxiv:2011.07949", "arxiv:2010.13984", "arxiv:2010.13154", "arxiv:2010.12016", "arxiv:2010.11982", "arxiv:2010.11929", "arxiv:2010.11904", "arxiv:2010.11386", "arxiv:2010.10596", "arxiv:2010.10333", "arxiv:2010.10137", "arxiv:2010.09954", "arxiv:2010.09709", "arxiv:2010.08508", "arxiv:2010.08191", "arxiv:2010.07999", "arxiv:2010.07492", "arxiv:2010.07416", "arxiv:2010.06467", "arxiv:2010.05646", "arxiv:2010.05057", "arxiv:2010.05006", "arxiv:2010.04595", "arxiv:2010.03509", 
"arxiv:2010.11694", "arxiv:2010.03205", "arxiv:2010.03073", "arxiv:2010.02803", "arxiv:2010.02666", "arxiv:2010.01454", "arxiv:2010.01412", "arxiv:2010.00843", "arxiv:2010.00768", "arxiv:2010.00578", "arxiv:2009.12756", "arxiv:2009.12506", "arxiv:2009.12232", "arxiv:2009.09232", "arxiv:2009.08553", "arxiv:2009.07724", "arxiv:2009.07531", "arxiv:2009.07258", "arxiv:2009.06732", "arxiv:2009.05757", "arxiv:2009.05041", "arxiv:2009.03300", "arxiv:2009.03301", "arxiv:2009.01555", "arxiv:2009.01325", "arxiv:2009.00713", "arxiv:2009.00165", "arxiv:2008.13426", "arxiv:2009.00236", "arxiv:2008.13578", "arxiv:2008.12009", "arxiv:2008.10292", "arxiv:2008.10150", "arxiv:2008.10010", "arxiv:2008.09093", "arxiv:2008.08516", "arxiv:2008.07267", "arxiv:2008.06893", "arxiv:2008.05437", "arxiv:2008.03802", "arxiv:2008.03800", "arxiv:2008.04872", "arxiv:2008.02531", "arxiv:2008.01064", "arxiv:2008.00916", "arxiv:2007.13916", "arxiv:2007.12865", "arxiv:2007.12368", "arxiv:2007.11088", "arxiv:2007.10730", "arxiv:2007.10247", "arxiv:2007.09880", "arxiv:2007.09549", "arxiv:2007.09296", "arxiv:2007.08854", "arxiv:2008.02217", "arxiv:2007.08349", "arxiv:2007.07843", "arxiv:2007.06929", "arxiv:2007.06028", "arxiv:2007.04785", "arxiv:2007.03331", "arxiv:2007.02442", "arxiv:2007.0988", "arxiv:2007.00808", "arxiv:2007.00194", "arxiv:2007.00072", "arxiv:2006.16934", "arxiv:2006.16779", "arxiv:2006.16785", "arxiv:2006.15498", "arxiv:2006.15489", "arxiv:2006.15437", "arxiv:2006.14613", "arxiv:2006.14063", "arxiv:2006.13913", "arxiv:2006.12971", "arxiv:2006.12868", "arxiv:2006.12719", "arxiv:2006.12703", "arxiv:2006.12156", "arxiv:2006.11653", "arxiv:2006.11477", "arxiv:2006.11325", "arxiv:2006.11239", "arxiv:2006.10598", "arxiv:2006.10511", "arxiv:2006.10141", "arxiv:2006.10029", "arxiv:2006.09963", "arxiv:2006.09882", "arxiv:2006.09526", "arxiv:2006.09136", "arxiv:2006.08812", "arxiv:2006.08218", "arxiv:2006.07739", "arxiv:2006.07556", "arxiv:2006.07529", "arxiv:2006.05849", "arxiv:2006.05582", 
"arxiv:2006.04558", "arxiv:2006.04466", "arxiv:2006.03535", "arxiv:2006.02635", "arxiv:2006.01460", "arxiv:2006.01631", "arxiv:2006.00093", "arxiv:2005.14165", "arxiv:2005.12320", "arxiv:2005.11742", "arxiv:2005.11401", "arxiv:2005.11129", "arxiv:2005.10696", "arxiv:2005.10283", "arxiv:2005.09814", "arxiv:2005.09561", "arxiv:2005.08392", "arxiv:2005.07727", "arxiv:2005.07362", "arxiv:2005.05106", "arxiv:2005.04908", "arxiv:2005.04735", "arxiv:2005.04208", "arxiv:2005.03356", "arxiv:2005.02534", "arxiv:2005.01831", "arxiv:2005.01400", "arxiv:2005.00583", "arxiv:2005.00558", "arxiv:2005.00200", "arxiv:2005.00181", "arxiv:2004.15021", "arxiv:2004.14269", "arxiv:2004.14255", "arxiv:2004.14074", "arxiv:2004.13969", "arxiv:2004.13649", "arxiv:2004.13637", "arxiv:2004.13313", "arxiv:2004.12832", "arxiv:2004.12436", "arxiv:2004.12363", "arxiv:2004.12297", "arxiv:2004.11759", "arxiv:2004.11362", "arxiv:2004.11231", "arxiv:2004.11045", "arxiv:2004.09890", "arxiv:2004.06165", "arxiv:2004.05816", "arxiv:2004.04906", "arxiv:2004.13796", "arxiv:2004.02866", "arxiv:2004.00646", "arxiv:2003.14323", "arxiv:2003.13198", "arxiv:2003.11618", "arxiv:2003.11539", "arxiv:2003.11038", "arxiv:2003.10903", "arxiv:2003.10865", "arxiv:2003.08271", "arxiv:2003.07833", "arxiv:2003.07568", "arxiv:2003.06877", "arxiv:2003.06816", "arxiv:2003.06713", "arxiv:2003.05431", "arxiv:2003.05383", "arxiv:2003.05162", "arxiv:2003.04887", "arxiv:2003.04297", "arxiv:2003.04276", "arxiv:2003.05311", "arxiv:2003.01964", "arxiv:2003.01950", "arxiv:2003.01473", "arxiv:2003.00982", "arxiv:2002.12591", "arxiv:2002.09102", "arxiv:2002.08822", "arxiv:2002.08742", "arxiv:2002.07971", "arxiv:2002.07916", "arxiv:2002.06353", "arxiv:2002.05709", "arxiv:2002.08909", "arxiv:2002.03932", "arxiv:2002.02848", "arxiv:2002.01196", "arxiv:2002.00585", "arxiv:2001.11128", "arxiv:2001.10382", "arxiv:2001.09239", "arxiv:2001.09099", "arxiv:2001.08361", "arxiv:2001.08001", "arxiv:2001.06392", "arxiv:2001.05887", "arxiv:2001.05786", 
"arxiv:2001.04316", "arxiv:2001.04063", "arxiv:2001.02209", "arxiv:2001.01431", "arxiv:1912.12814", "arxiv:1912.12522", "arxiv:1912.07491", "arxiv:1912.07116", "arxiv:1912.03330", "arxiv:1912.03227", "arxiv:1912.02769", "arxiv:1912.02323", "arxiv:1912.02315", "arxiv:1912.05906", "arxiv:1912.01991", "arxiv:1912.00497", "arxiv:1911.12747", "arxiv:1911.11423", "arxiv:1911.10862", "arxiv:1911.10538", "arxiv:1911.10375", "arxiv:1911.09550", "arxiv:1911.09304", "arxiv:1911.05889", "arxiv:1911.05722", "arxiv:1911.04700", "arxiv:1911.03912", "arxiv:1911.02896", "arxiv:1911.02150", "arxiv:1910.14537", "arxiv:1910.14424", "arxiv:1910.12729", "arxiv:1910.11831", "arxiv:1910.12638", "arxiv:1910.11480", "arxiv:1910.10683", "arxiv:1910.12607", "arxiv:1910.10706", "arxiv:1910.10288", "arxiv:1910.09430", "arxiv:1910.08485", "arxiv:1910.08293", "arxiv:1910.07065", "arxiv:1910.06222", "arxiv:1910.05872", "arxiv:1910.05453", "arxiv:1910.04760", "arxiv:1910.04256", "arxiv:1910.03752", "arxiv:1910.03656", "arxiv:1910.06711", "arxiv:1910.02718", "arxiv:1910.02054", "arxiv:1910.01705", "arxiv:1909.13584", "arxiv:1909.12977", "arxiv:1909.11895", "arxiv:1909.11740", "arxiv:1909.11646", "arxiv:1909.10470", "arxiv:1909.10447", "arxiv:1909.09656", "arxiv:1909.09116", "arxiv:1909.08053", "arxiv:1909.07957", "arxiv:1909.07913", "arxiv:1909.05855", "arxiv:1909.05858", "arxiv:1909.03087", "arxiv:1909.00968", "arxiv:1909.00876", "arxiv:1909.00687", "arxiv:1908.11587", "arxiv:1908.08718", "arxiv:1908.08530", "arxiv:1908.08926", "arxiv:1908.07021", "arxiv:1908.06427", "arxiv:1908.06066", "arxiv:1908.03852", "arxiv:1908.02692", "arxiv:1908.02265", "arxiv:1908.01755", "arxiv:1908.01224", "arxiv:1908.00211", "arxiv:1907.13625", "arxiv:1907.09701", "arxiv:1907.09006", "arxiv:1907.08292", "arxiv:1907.05737", "arxiv:1907.05570", "arxiv:1907.03077", "arxiv:1907.02544", "arxiv:1907.01475", "arxiv:1907.01131", "arxiv:1907.00448", "arxiv:1906.12340", "arxiv:1906.10670", "arxiv:1906.08226", "arxiv:1906.06832", 
"arxiv:1906.06669", "arxiv:1906.05849", "arxiv:1906.05807", "arxiv:1906.05460", "arxiv:1906.04466", "arxiv:1906.03402", "arxiv:1906.03327", "arxiv:1906.02940", "arxiv:1906.02825", "arxiv:1906.02792", "arxiv:1906.09453", "arxiv:1906.02287", "arxiv:1906.00910", "arxiv:1906.00817", "arxiv:1906.0546", "arxiv:1906.00300", "arxiv:1906.00283", "arxiv:1905.13497", "arxiv:1905.13066", "arxiv:1905.10854", "arxiv:1905.10797", "arxiv:1905.13305", "arxiv:1905.09272", "arxiv:1905.09263", "arxiv:1905.09217", "arxiv:1905.09010", "arxiv:1905.07628", "arxiv:1905.04970", "arxiv:1905.02884", "arxiv:1905.02882", "arxiv:1905.02072", "arxiv:1905.01639", "arxiv:1905.01278", "arxiv:1905.01235", "arxiv:1905.00954", "arxiv:1905.00780", "arxiv:1904.13300", "arxiv:1904.13132", "arxiv:1904.11574", "arxiv:1904.11407", "arxiv:1904.10247", "arxiv:1904.09981", "arxiv:1905.01969", "arxiv:1904.09117", "arxiv:1904.09013", "arxiv:1904.08939", "arxiv:1904.08375", "arxiv:1904.08060", "arxiv:1904.07659", "arxiv:1904.07633", "arxiv:1904.07531", "arxiv:1904.07475", "arxiv:1904.07451", "arxiv:1904.07425", "arxiv:1904.07235", "arxiv:1904.07094", "arxiv:1904.06726", "arxiv:1904.05862", "arxiv:1904.05034", "arxiv:1904.04957", "arxiv:1904.04092", "arxiv:1904.03597", "arxiv:1904.02868", "arxiv:1904.01766", "arxiv:1904.01109", "arxiv:1904.00045", "arxiv:1903.11780", "arxiv:1903.11412", "arxiv:1903.10972", "arxiv:1903.10677", "arxiv:1903.09900", "arxiv:1903.07593", "arxiv:1903.06902", "arxiv:1903.05285", "arxiv:1903.04227", "arxiv:1903.04019", "arxiv:1903.03166", "arxiv:1903.02330", "arxiv:1903.02271", "arxiv:1903.01973", "arxiv:1903.03671", "arxiv:1903.01611", "arxiv:1903.01093", "arxiv:1903.00502", "arxiv:1902.11038", "arxiv:1902.09229", "arxiv:1902.08666", "arxiv:1902.08142", "arxiv:1902.06871", "arxiv:1902.06838", "arxiv:1902.06423", "arxiv:1902.06162", "arxiv:1902.04187", "arxiv:1902.03129", "arxiv:1902.00927", "arxiv:1902.00813", "arxiv:1902.00751", "arxiv:1901.10723", "arxiv:1901.09672", "arxiv:1901.09392", 
"arxiv:1901.09005", "arxiv:1901.08991", "arxiv:1901.08933", "arxiv:1901.06283", "arxiv:1901.05945", "arxiv:1901.04085", "arxiv:1901.03729", "arxiv:1901.03396", "arxiv:1901.00596", "arxiv:1901.00536", "arxiv:1901.00212", "arxiv:1901.00137", "arxiv:1812.10924", "arxiv:1812.09903", "arxiv:1812.03252", "arxiv:1812.01784", "arxiv:1812.01458", "arxiv:1811.11553", "arxiv:1811.11459", "arxiv:1811.10597", "arxiv:1811.09795", "arxiv:1811.09021", "arxiv:1811.08338", "arxiv:1811.07993", "arxiv:1811.07871", "arxiv:1811.07104", "arxiv:1811.06964", "arxiv:1811.06521", "arxiv:1811.10996", "arxiv:1811.12808", "arxiv:1811.04857", "arxiv:1811.03115", "arxiv:1811.00945", "arxiv:1811.00252", "arxiv:1810.13306", "arxiv:1810.11875", "arxiv:1810.08771", "arxiv:1810.05749", "arxiv:1810.04805", "arxiv:1810.03043", "arxiv:1809.07207", "arxiv:1809.01696", "arxiv:1808.10583", "arxiv:1808.09442", "arxiv:1808.09115", "arxiv:1808.07371", "arxiv:1808.03867", "arxiv:1808.03833", "arxiv:1808.00928", "arxiv:1808.00391", "arxiv:1807.11720", "arxiv:1807.11293", "arxiv:1807.10589", "arxiv:1807.08998", "arxiv:1807.08024", "arxiv:1807.06906", "arxiv:1807.04742", "arxiv:1807.03748", "arxiv:1807.03367", "arxiv:1807.02811", "arxiv:1806.10758", "arxiv:1806.10574", "arxiv:1806.08482", "arxiv:1806.07421", "arxiv:1806.05024", "arxiv:1806.06850", "arxiv:1806.07912", "arxiv:1806.06927", "arxiv:1806.03589", "arxiv:1806.04510", "arxiv:1806.02012", "arxiv:1806.00890", "arxiv:1806.00069", "arxiv:1805.12233", "arxiv:1805.11724", "arxiv:1805.09501", "arxiv:1805.08969", "arxiv:1805.08352", "arxiv:1805.07473", "arxiv:1805.07468", "arxiv:1805.07039", "arxiv:1805.04096", "arxiv:1805.03356", "arxiv:1805.01978", "arxiv:1805.01109", "arxiv:1805.00833", "arxiv:1804.11191", "arxiv:1804.08972", "arxiv:1804.07855", "arxiv:1804.07723", "arxiv:1804.03641", "arxiv:1804.03527", "arxiv:1804.03189", "arxiv:1804.03160", "arxiv:1804.01665", "arxiv:1804.01552", "arxiv:1804.00746", "arxiv:1804.00222", "arxiv:1803.11203", "arxiv:1803.10348", 
"arxiv:1803.09956", "arxiv:1803.09655", "arxiv:1803.08035", "arxiv:1803.05457", "arxiv:1803.03049", "arxiv:1802.09787", "arxiv:1802.07740", "arxiv:1802.06070", "arxiv:1802.08129", "arxiv:1802.04443", "arxiv:1802.01933", "arxiv:1802.01880", "arxiv:1802.01528", "arxiv:1802.0774", "arxiv:1802.00121", "arxiv:1801.09392", "arxiv:1801.09103", "arxiv:1801.07892", "arxiv:1801.03454", "arxiv:1712.08268", "arxiv:1712.06651", "arxiv:1712.06302", "arxiv:1712.05884", "arxiv:1712.05363", "arxiv:1712.03342", "arxiv:1712.01337", "arxiv:1711.11279", "arxiv:1711.10925", "arxiv:1711.10455", "arxiv:1711.09846", "arxiv:1711.09784", "arxiv:1711.09082", "arxiv:1711.08590", "arxiv:1711.06379", "arxiv:1711.03219", "arxiv:1711.00436", "arxiv:1710.11063", "arxiv:1710.10467", "arxiv:1710.09829", "arxiv:1710.08969", "arxiv:1710.06169", "arxiv:1710.05941", "arxiv:1710.05468", "arxiv:1710.05381", "arxiv:1710.04837", "arxiv:1710.04806", "arxiv:1709.10163", "arxiv:1709.07807", "arxiv:1709.06030", "arxiv:1709.05522", "arxiv:1709.00505", "arxiv:1709.00322", "arxiv:1708.07860", "arxiv:1708.06742", "arxiv:1708.06734", "arxiv:1708.05344", "arxiv:1708.02901", "arxiv:1708.01641", "arxiv:1708.01354", "arxiv:1708.01246", "arxiv:1707.07998", "arxiv:1707.00600", "arxiv:1707.00269", "arxiv:1706.07979", "arxiv:1706.05806", "arxiv:1706.03825", "arxiv:1706.03762", "arxiv:1706.03741", "arxiv:1705.08631", "arxiv:1705.08168", "arxiv:1705.05598", "arxiv:1705.02101", "arxiv:1705.00754", "arxiv:1704.08792", "arxiv:1704.07813", "arxiv:1704.05588", "arxiv:1704.05310", "arxiv:1704.04497", "arxiv:1704.02729", "arxiv:1704.00200", "arxiv:1703.10135", "arxiv:1703.09788", "arxiv:1703.09137", "arxiv:1703.06870", "arxiv:1703.02018", "arxiv:1703.01827", "arxiv:1702.08608", "arxiv:1702.07826", "arxiv:1702.04595", "arxiv:1702.01983", "arxiv:1701.06049", "arxiv:1701.02547", "arxiv:1701.01821", "arxiv:1612.06699", "arxiv:1611.09969", "arxiv:1611.09842", "arxiv:1611.09309", "arxiv:1611.08669", "arxiv:1611.08663", "arxiv:1611.06646", 
"arxiv:1611.05088", "arxiv:1610.06940", "arxiv:1610.02391", "arxiv:1610.01685", "arxiv:1609.09475", "arxiv:1607.08085", "arxiv:1606.07757", "arxiv:1606.07419", "arxiv:1606.06565", "arxiv:1606.03490", "arxiv:1605.09782", "arxiv:1605.04253", "arxiv:1605.03705", "arxiv:1605.01713", "arxiv:1604.07379", "arxiv:1604.03628", "arxiv:1604.01360", "arxiv:1603.08511", "arxiv:1603.08047", "arxiv:1603.04992", "arxiv:1603.02754", "arxiv:1602.06420", "arxiv:1602.04938", "arxiv:1512.06293", "arxiv:1512.03385", "arxiv:1512.02902", "arxiv:1512.02479", "arxiv:1511.09230", "arxiv:1511.06422", "arxiv:1511.06335", "arxiv:1510.00726", "arxiv:1509.06825", "arxiv:1506.02753", "arxiv:1506.02640", "arxiv:1506.02078", "arxiv:1506.00019", "arxiv:1505.05192", "arxiv:1504.08083", "arxiv:1503.08677", "arxiv:1503.04069", "arxiv:1502.04681", "arxiv:1502.03044", "arxiv:1501.04560", "arxiv:1501.02530", "arxiv:1412.6980", "arxiv:1412.6856", "arxiv:1412.3555", "arxiv:1412.0035", "arxiv:1411.5018", "arxiv:1411.4555", "arxiv:1409.4667", "arxiv:1409.1259", "arxiv:1405.0312", "arxiv:1404.1100", "arxiv:1312.6034", "arxiv:1312.1445", "arxiv:1311.2901", "arxiv:1311.2524", "arxiv:1309.6392", "arxiv:1206.5538", "arxiv:1011.5270", "arxiv:0909.0910", "arxiv:0908.2724", "region:us" ]
taesiri
null
null
null
109
118
--- license: mit task_categories: - question-answering language: - en pretty_name: ArXiv QA --- # ArXiv QA (TBD) Automated ArXiv question answering via large language models [Github](https://github.com/taesiri/ArXivQA) | [Homepage](https://arxiv.taesiri.xyz/) | [Simple QA - Hugging Face Space](https://huggingface.co/spaces/taesiri/ClaudeReadsArxiv) --- # List of Papers <details open><summary><strong>2023</strong></summary><div> ### October 2023 - Functional Interpolation for Relative Positions Improves Long Context Transformers - [[2310.04418](https://arxiv.org/abs/2310.04418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.04418.md)]. - RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation - [[2310.04408](https://arxiv.org/abs/2310.04408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.04408.md)]. - Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models - [[2310.04406](https://arxiv.org/abs/2310.04406)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.04406.md)]. - Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference - [[2310.04378](https://arxiv.org/abs/2310.04378)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.04378.md)]. - Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets - [[2310.04292](https://arxiv.org/abs/2310.04292)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.04292.md)]. - Improved Baselines with Visual Instruction Tuning - [[2310.03744](https://arxiv.org/abs/2310.03744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03744.md)]. - Aligning Text-to-Image Diffusion Models with Reward Backpropagation - [[2310.03739](https://arxiv.org/abs/2310.03739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03739.md)]. 
- Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency - [[2310.03734](https://arxiv.org/abs/2310.03734)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03734.md)]. - MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning - [[2310.03731](https://arxiv.org/abs/2310.03731)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03731.md)]. - HeaP: Hierarchical Policies for Web Actions using LLMs - [[2310.03720](https://arxiv.org/abs/2310.03720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03720.md)]. - A Long Way to Go: Investigating Length Correlations in RLHF - [[2310.03716](https://arxiv.org/abs/2310.03716)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03716.md)]. - DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines - [[2310.03714](https://arxiv.org/abs/2310.03714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03714.md)]. - Agent Instructs Large Language Models to be General Zero-Shot Reasoners - [[2310.03710](https://arxiv.org/abs/2310.03710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03710.md)]. - Drag View: Generalizable Novel View Synthesis with Unposed Imagery - [[2310.03704](https://arxiv.org/abs/2310.03704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03704.md)]. - Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion - [[2310.03502](https://arxiv.org/abs/2310.03502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03502.md)]. - FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation - [[2310.03214](https://arxiv.org/abs/2310.03214)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03214.md)]. 
- Large Language Model Cascades with Mixture of Thoughts Representations for Cost-efficient Reasoning - [[2310.03094](https://arxiv.org/abs/2310.03094)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03094.md)]. - Retrieval meets Long Context Large Language Models - [[2310.03025](https://arxiv.org/abs/2310.03025)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03025.md)]. - How FaR Are Large Language Models From Agents with Theory-of-Mind? - [[2310.03051](https://arxiv.org/abs/2310.03051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03051.md)]. - EcoAssistant: Using LLM Assistant More Affordably and Accurately - [[2310.03046](https://arxiv.org/abs/2310.03046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.03046.md)]. - MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts - [[2310.02255](https://arxiv.org/abs/2310.02255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.02255.md)]. - MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens - [[2310.02239](https://arxiv.org/abs/2310.02239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.02239.md)]. - Think before you speak: Training Language Models With Pause Tokens - [[2310.02226](https://arxiv.org/abs/2310.02226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.02226.md)]. - What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? - [[2310.02219](https://arxiv.org/abs/2310.02219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.02219.md)]. - Language Models Represent Space and Time - [[2310.02207](https://arxiv.org/abs/2310.02207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.02207.md)]. - Large Language Models Cannot Self-Correct Reasoning Yet - [[2310.01798](https://arxiv.org/abs/2310.01798)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01798.md)]. 
- Can large language models provide useful feedback on research papers? A large-scale empirical analysis - [[2310.01783](https://arxiv.org/abs/2310.01783)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01783.md)]. - ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms - [[2310.01755](https://arxiv.org/abs/2310.01755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01755.md)]. - Large Language Models as Analogical Reasoners - [[2310.01714](https://arxiv.org/abs/2310.01714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01714.md)]. - ImagenHub: Standardizing the evaluation of conditional image generation models - [[2310.01596](https://arxiv.org/abs/2310.01596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01596.md)]. - SmartPlay : A Benchmark for LLMs as Intelligent Agents - [[2310.01557](https://arxiv.org/abs/2310.01557)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01557.md)]. - Neutrinos from muon-rich ultra high energy electromagnetic cascades: The MUNHECA code - [[2310.01510](https://arxiv.org/abs/2310.01510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01510.md)]. - DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model - [[2310.01412](https://arxiv.org/abs/2310.01412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01412.md)]. - Conditional Diffusion Distillation - [[2310.01407](https://arxiv.org/abs/2310.01407)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01407.md)]. - Representation Engineering: A Top-Down Approach to AI Transparency - [[2310.01405](https://arxiv.org/abs/2310.01405)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01405.md)]. - RA-DIT: Retrieval-Augmented Dual Instruction Tuning - [[2310.01352](https://arxiv.org/abs/2310.01352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01352.md)]. 
- Label Supervised LLaMA Finetuning - [[2310.01208](https://arxiv.org/abs/2310.01208)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01208.md)]. - Enable Language Models to Implicitly Learn Self-Improvement From Data - [[2310.00898](https://arxiv.org/abs/2310.00898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00898.md)]. - (Dynamic) Prompting might be all you need to repair Compressed LLMs - [[2310.00867](https://arxiv.org/abs/2310.00867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00867.md)]. - Analyzing and Mitigating Object Hallucination in Large Vision-Language Models - [[2310.00754](https://arxiv.org/abs/2310.00754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00754.md)]. - RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models - [[2310.00746](https://arxiv.org/abs/2310.00746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00746.md)]. - FELM: Benchmarking Factuality Evaluation of Large Language Models - [[2310.00741](https://arxiv.org/abs/2310.00741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00741.md)]. - UniAudio: An Audio Foundation Model Toward Universal Audio Generation - [[2310.00704](https://arxiv.org/abs/2310.00704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00704.md)]. ### September 2023 - PixArt-$α$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis - [[2310.00426](https://arxiv.org/abs/2310.00426)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00426.md)]. - AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ - [[2310.00367](https://arxiv.org/abs/2310.00367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.00367.md)]. 
- Efficient Streaming Language Models with Attention Sinks - [[2309.17453](https://arxiv.org/abs/2309.17453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17453.md)]. - The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision) - [[2309.17421](https://arxiv.org/abs/2309.17421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17421.md)]. - Directly Fine-Tuning Diffusion Models on Differentiable Rewards - [[2309.17400](https://arxiv.org/abs/2309.17400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17400.md)]. - Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4 - [[2309.17277](https://arxiv.org/abs/2309.17277)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17277.md)]. - Guiding Instruction-based Image Editing via Multimodal Large Language Models - [[2309.17102](https://arxiv.org/abs/2309.17102)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17102.md)]. - GAIA-1: A Generative World Model for Autonomous Driving - [[2309.17080](https://arxiv.org/abs/2309.17080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.17080.md)]. - Demystifying CLIP Data - [[2309.16671](https://arxiv.org/abs/2309.16671)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16671.md)]. - RealFill: Reference-Driven Generation for Authentic Image Completion - [[2309.16668](https://arxiv.org/abs/2309.16668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16668.md)]. - DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation - [[2309.16653](https://arxiv.org/abs/2309.16653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16653.md)]. - ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning - [[2309.16650](https://arxiv.org/abs/2309.16650)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16650.md)]. 
- Deep Geometrized Cartoon Line Inbetweening - [[2309.16643](https://arxiv.org/abs/2309.16643)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16643.md)]. - Qwen Technical Report - [[2309.16609](https://arxiv.org/abs/2309.16609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16609.md)]. - Vision Transformers Need Registers - [[2309.16588](https://arxiv.org/abs/2309.16588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16588.md)]. - Text-to-3D using Gaussian Splatting - [[2309.16585](https://arxiv.org/abs/2309.16585)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16585.md)]. - GPT-Fathom: Benchmarking Large Language Models to Decipher the Evolutionary Path towards GPT-4 and Beyond - [[2309.16583](https://arxiv.org/abs/2309.16583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16583.md)]. - MotionLM: Multi-Agent Motion Forecasting as Language Modeling - [[2309.16534](https://arxiv.org/abs/2309.16534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16534.md)]. - CCEdit: Creative and Controllable Video Editing via Diffusion Models - [[2309.16496](https://arxiv.org/abs/2309.16496)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16496.md)]. - Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation - [[2309.16429](https://arxiv.org/abs/2309.16429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16429.md)]. - AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models - [[2309.16414](https://arxiv.org/abs/2309.16414)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16414.md)]. - Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning - [[2309.16351](https://arxiv.org/abs/2309.16351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16351.md)]. 
- Language models in molecular discovery - [[2309.16235](https://arxiv.org/abs/2309.16235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16235.md)]. - AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model - [[2309.16058](https://arxiv.org/abs/2309.16058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16058.md)]. - Effective Long-Context Scaling of Foundation Models - [[2309.16039](https://arxiv.org/abs/2309.16039)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16039.md)]. - Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation - [[2309.15818](https://arxiv.org/abs/2309.15818)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15818.md)]. - Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack - [[2309.15807](https://arxiv.org/abs/2309.15807)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15807.md)]. - Aperture Diffraction for Compact Snapshot Spectral Imaging - [[2309.16372](https://arxiv.org/abs/2309.16372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.16372.md)]. - Borges and AI - [[2310.01425](https://arxiv.org/abs/2310.01425)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.01425.md)]. - Jointly Training Large Autoregressive Multimodal Models - [[2309.15564](https://arxiv.org/abs/2309.15564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15564.md)]. - Finite Scalar Quantization: VQ-VAE Made Simple - [[2309.15505](https://arxiv.org/abs/2309.15505)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15505.md)]. - Graph Neural Prompting with Large Language Models - [[2309.15427](https://arxiv.org/abs/2309.15427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15427.md)]. 
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions - [[2309.15426](https://arxiv.org/abs/2309.15426)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15426.md)]. - DECO: Dense Estimation of 3D Human-Scene Contact In The Wild - [[2309.15273](https://arxiv.org/abs/2309.15273)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15273.md)]. - VPA: Fully Test-Time Visual Prompt Adaptation - [[2309.15251](https://arxiv.org/abs/2309.15251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15251.md)]. - Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition - [[2309.15223](https://arxiv.org/abs/2309.15223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15223.md)]. - LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models - [[2309.15103](https://arxiv.org/abs/2309.15103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15103.md)]. - Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models - [[2309.15098](https://arxiv.org/abs/2309.15098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15098.md)]. - VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning - [[2309.15091](https://arxiv.org/abs/2309.15091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15091.md)]. - RPEFlow: Multimodal Fusion of RGB-PointCloud-Event for Joint Optical Flow and Scene Flow Estimation - [[2309.15082](https://arxiv.org/abs/2309.15082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15082.md)]. - Large Language Model Alignment: A Survey - [[2309.15025](https://arxiv.org/abs/2309.15025)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15025.md)]. 
- Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation - [[2309.14786](https://arxiv.org/abs/2309.14786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14786.md)]. - QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models - [[2309.14717](https://arxiv.org/abs/2309.14717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14717.md)]. - NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space - [[2309.14616](https://arxiv.org/abs/2309.14616)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14616.md)]. - Efficient Post-training Quantization with FP8 Formats - [[2309.14592](https://arxiv.org/abs/2309.14592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14592.md)]. - CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss - [[2309.14580](https://arxiv.org/abs/2309.14580)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14580.md)]. - Aligning Large Multimodal Models with Factually Augmented RLHF - [[2309.14525](https://arxiv.org/abs/2309.14525)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14525.md)]. - DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models - [[2309.14509](https://arxiv.org/abs/2309.14509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14509.md)]. - Extreme Parkour with Legged Robots - [[2309.14341](https://arxiv.org/abs/2309.14341)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14341.md)]. - Electronic properties, correlated topology and Green's function zeros - [[2309.14340](https://arxiv.org/abs/2309.14340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14340.md)]. 
- DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention - [[2309.14327](https://arxiv.org/abs/2309.14327)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14327.md)].
- Physics of Language Models: Part 3.2, Knowledge Manipulation - [[2309.14402](https://arxiv.org/abs/2309.14402)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14402.md)].
- Small-scale proxies for large-scale Transformer training instabilities - [[2309.14322](https://arxiv.org/abs/2309.14322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14322.md)].
- Tiled Multiplane Images for Practical 3D Photography - [[2309.14291](https://arxiv.org/abs/2309.14291)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14291.md)].
- Only 5\% Attention Is All You Need: Efficient Long-range Document-level Neural Machine Translation - [[2309.14174](https://arxiv.org/abs/2309.14174)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.14174.md)].
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability - [[2309.13965](https://arxiv.org/abs/2309.13965)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13965.md)].
- VidChapters-7M: Video Chapters at Scale - [[2309.13952](https://arxiv.org/abs/2309.13952)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13952.md)].
- Impact of Human-AI Interaction on User Trust and Reliance in AI-Assisted Qualitative Coding - [[2309.13858](https://arxiv.org/abs/2309.13858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13858.md)].
- Evaluating Cognitive Maps and Planning in Large Language Models with CogEval - [[2309.15129](https://arxiv.org/abs/2309.15129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.15129.md)].
- Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve - [[2309.13638](https://arxiv.org/abs/2309.13638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13638.md)].
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning - [[2309.13556](https://arxiv.org/abs/2309.13556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13556.md)].
- MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation - [[2309.13539](https://arxiv.org/abs/2309.13539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13539.md)].
- Attention Is All You Need For Blind Room Volume Estimation - [[2309.13504](https://arxiv.org/abs/2309.13504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13504.md)].
- Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head - [[2309.13377](https://arxiv.org/abs/2309.13377)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13377.md)].
- MLPST: MLP is All You Need for Spatio-Temporal Prediction - [[2309.13363](https://arxiv.org/abs/2309.13363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13363.md)].
- Exploring Large Language Models' Cognitive Moral Development through Defining Issues Test - [[2309.13356](https://arxiv.org/abs/2309.13356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13356.md)].
- Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic - [[2309.13339](https://arxiv.org/abs/2309.13339)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13339.md)].
- Calibrating LLM-Based Evaluator - [[2309.13308](https://arxiv.org/abs/2309.13308)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13308.md)].
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks - [[2309.13256](https://arxiv.org/abs/2309.13256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13256.md)].
- Spatial-frequency channels, shape bias, and adversarial robustness - [[2309.13190](https://arxiv.org/abs/2309.13190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13190.md)].
- E(2)-Equivariant Graph Planning for Navigation - [[2309.13043](https://arxiv.org/abs/2309.13043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13043.md)].
- MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation - [[2309.13042](https://arxiv.org/abs/2309.13042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13042.md)].
- Robotic Offline RL from Internet Videos via Value-Function Pre-Training - [[2309.13041](https://arxiv.org/abs/2309.13041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13041.md)].
- NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields - [[2309.13039](https://arxiv.org/abs/2309.13039)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13039.md)].
- Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception? - [[2309.13038](https://arxiv.org/abs/2309.13038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13038.md)].
- GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators - [[2309.13037](https://arxiv.org/abs/2309.13037)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13037.md)].
- PyPose v0.6: The Imperative Programming Interface for Robotics - [[2309.13035](https://arxiv.org/abs/2309.13035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13035.md)].
- Memory-augmented conformer for improved end-to-end long-form ASR - [[2309.13029](https://arxiv.org/abs/2309.13029)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13029.md)].
- Graph Neural Network for Stress Predictions in Stiffened Panels Under Uniform Loading - [[2309.13022](https://arxiv.org/abs/2309.13022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13022.md)].
- A Hybrid Deep Learning-based Approach for Optimal Genotype by Environment Selection - [[2309.13021](https://arxiv.org/abs/2309.13021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13021.md)].
- Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of A Multilingual ASR Model - [[2309.13018](https://arxiv.org/abs/2309.13018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13018.md)].
- Understanding Deep Gradient Leakage via Inversion Influence Functions - [[2309.13016](https://arxiv.org/abs/2309.13016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13016.md)].
- Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design - [[2309.13015](https://arxiv.org/abs/2309.13015)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13015.md)].
- Performance Analysis of UNet and Variants for Medical Image Segmentation - [[2309.13013](https://arxiv.org/abs/2309.13013)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13013.md)].
- ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs - [[2309.13007](https://arxiv.org/abs/2309.13007)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13007.md)].
- Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches - [[2309.13006](https://arxiv.org/abs/2309.13006)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13006.md)].
- Pursuing Counterfactual Fairness via Sequential Autoencoder Across Domains - [[2309.13005](https://arxiv.org/abs/2309.13005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13005.md)].
- Expressive variational quantum circuits provide inherent privacy in federated learning - [[2309.13002](https://arxiv.org/abs/2309.13002)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13002.md)].
- Audience-specific Explanations for Machine Translation - [[2309.12998](https://arxiv.org/abs/2309.12998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12998.md)].
- Point Cloud Network: An Order of Magnitude Improvement in Linear Layer Parameter Count - [[2309.12996](https://arxiv.org/abs/2309.12996)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12996.md)].
- Deep learning probability flows and entropy production rates in active matter - [[2309.12991](https://arxiv.org/abs/2309.12991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12991.md)].
- License Plate Recognition Based On Multi-Angle View Model - [[2309.12972](https://arxiv.org/abs/2309.12972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12972.md)].
- Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes - [[2309.12971](https://arxiv.org/abs/2309.12971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12971.md)].
- PI-RADS v2 Compliant Automated Segmentation of Prostate Zones Using co-training Motivated Multi-task Dual-Path CNN - [[2309.12970](https://arxiv.org/abs/2309.12970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12970.md)].
- Detect Every Thing with Few Examples - [[2309.12969](https://arxiv.org/abs/2309.12969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12969.md)].
- Nested Event Extraction upon Pivot Element Recogniton - [[2309.12960](https://arxiv.org/abs/2309.12960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12960.md)].
- On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures - [[2309.12955](https://arxiv.org/abs/2309.12955)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12955.md)].
- Background Activation Suppression for Weakly Supervised Object Localization and Semantic Segmentation - [[2309.12943](https://arxiv.org/abs/2309.12943)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12943.md)].
- Trusta: Reasoning about Assurance Cases with Formal Methods and Large Language Models - [[2309.12941](https://arxiv.org/abs/2309.12941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12941.md)].
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models - [[2309.12940](https://arxiv.org/abs/2309.12940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12940.md)].
- Frustrated with Code Quality Issues? LLMs can Help! - [[2309.12938](https://arxiv.org/abs/2309.12938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12938.md)].
- Evolving Spiking Neural Networks to Mimic PID Control for Autonomous Blimps - [[2309.12937](https://arxiv.org/abs/2309.12937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12937.md)].
- TopRoBERTa: Topology-Aware Authorship Attribution of Deepfake Texts - [[2309.12934](https://arxiv.org/abs/2309.12934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12934.md)].
- CodePlan: Repository-level Coding using LLMs and Planning - [[2309.12499](https://arxiv.org/abs/2309.12499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12499.md)].
- DualToken-ViT: Position-aware Efficient Vision Transformer with Dual Token Fusion - [[2309.12424](https://arxiv.org/abs/2309.12424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12424.md)].
- LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent - [[2309.12311](https://arxiv.org/abs/2309.12311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12311.md)].
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models - [[2309.12307](https://arxiv.org/abs/2309.12307)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12307.md)].
- PanoVOS: Bridging Non-panoramic and Panoramic Views with Transformer for Video Segmentation - [[2309.12303](https://arxiv.org/abs/2309.12303)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12303.md)].
- The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" - [[2309.12288](https://arxiv.org/abs/2309.12288)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12288.md)].
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models - [[2309.12284](https://arxiv.org/abs/2309.12284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12284.md)].
- Boolformer: Symbolic Regression of Logic Functions with Transformers - [[2309.12207](https://arxiv.org/abs/2309.12207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.12207.md)].
- LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset - [[2309.11998](https://arxiv.org/abs/2309.11998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11998.md)].
- MEFLUT: Unsupervised 1D Lookup Tables for Multi-exposure Image Fusion - [[2309.11847](https://arxiv.org/abs/2309.11847)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11847.md)].
- A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models - [[2309.11674](https://arxiv.org/abs/2309.11674)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11674.md)].
- BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model - [[2309.11568](https://arxiv.org/abs/2309.11568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11568.md)].
- A Large-scale Dataset for Audio-Language Representation Learning - [[2309.11500](https://arxiv.org/abs/2309.11500)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11500.md)].
- DreamLLM: Synergistic Multimodal Comprehension and Creation - [[2309.11499](https://arxiv.org/abs/2309.11499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11499.md)].
- FreeU: Free Lunch in Diffusion U-Net - [[2309.11497](https://arxiv.org/abs/2309.11497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11497.md)].
- Chain-of-Verification Reduces Hallucination in Large Language Models - [[2309.11495](https://arxiv.org/abs/2309.11495)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11495.md)].
- SCREWS: A Modular Framework for Reasoning with Revisions - [[2309.13075](https://arxiv.org/abs/2309.13075)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.13075.md)].
- Kosmos-2.5: A Multimodal Literate Model - [[2309.11419](https://arxiv.org/abs/2309.11419)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11419.md)].
- OpenChat: Advancing Open-source Language Models with Mixed-Quality Data - [[2309.11235](https://arxiv.org/abs/2309.11235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11235.md)].
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute - [[2309.11197](https://arxiv.org/abs/2309.11197)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11197.md)].
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration - [[2309.11170](https://arxiv.org/abs/2309.11170)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11170.md)].
- Multi-grained Temporal Prototype Learning for Few-shot Video Object Segmentation - [[2309.11160](https://arxiv.org/abs/2309.11160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11160.md)].
- More complex encoder is not all you need - [[2309.11139](https://arxiv.org/abs/2309.11139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11139.md)].
- Contrastive Pseudo Learning for Open-World DeepFake Attribution - [[2309.11132](https://arxiv.org/abs/2309.11132)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11132.md)].
- Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation - [[2309.11081](https://arxiv.org/abs/2309.11081)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11081.md)].
- Weak Supervision for Label Efficient Visual Bug Detection - [[2309.11077](https://arxiv.org/abs/2309.11077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11077.md)].
- The Topology and Geometry of Neural Representations - [[2309.11028](https://arxiv.org/abs/2309.11028)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11028.md)].
- Controllable Dynamic Appearance for Neural 3D Portraits - [[2309.11009](https://arxiv.org/abs/2309.11009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11009.md)].
- RMT: Retentive Networks Meet Vision Transformers - [[2309.11523](https://arxiv.org/abs/2309.11523)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.11523.md)].
- LMDX: Language Model-based Document Information Extraction and Localization - [[2309.10952](https://arxiv.org/abs/2309.10952)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10952.md)].
- End-to-End Speech Recognition Contextualization with Large Language Models - [[2309.10917](https://arxiv.org/abs/2309.10917)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10917.md)].
- SlimPajama-DC: Understanding Data Combinations for LLM Training - [[2309.10818](https://arxiv.org/abs/2309.10818)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10818.md)].
- Sound Source Localization is All about Cross-Modal Alignment - [[2309.10724](https://arxiv.org/abs/2309.10724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10724.md)].
- OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch - [[2309.10706](https://arxiv.org/abs/2309.10706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10706.md)].
- Language Modeling Is Compression - [[2309.10668](https://arxiv.org/abs/2309.10668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10668.md)].
- NDDepth: Normal-Distance Assisted Monocular Depth Estimation - [[2309.10592](https://arxiv.org/abs/2309.10592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10592.md)].
- FoleyGen: Visually-Guided Audio Generation - [[2309.10537](https://arxiv.org/abs/2309.10537)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10537.md)].
- AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration - [[2309.10438](https://arxiv.org/abs/2309.10438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10438.md)].
- PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training - [[2309.10400](https://arxiv.org/abs/2309.10400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10400.md)].
- Baichuan 2: Open Large-scale Language Models - [[2309.10305](https://arxiv.org/abs/2309.10305)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10305.md)].
- 360$^\circ$ Reconstruction From a Single Image Using Space Carved Outpainting - [[2309.10279](https://arxiv.org/abs/2309.10279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10279.md)].
- Stabilizing RLHF through Advantage Model and Selective Rehearsal - [[2309.10202](https://arxiv.org/abs/2309.10202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10202.md)].
- Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions - [[2309.10150](https://arxiv.org/abs/2309.10150)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10150.md)].
- Unified Coarse-to-Fine Alignment for Video-Text Retrieval - [[2309.10091](https://arxiv.org/abs/2309.10091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10091.md)].
- Multimodal Foundation Models: From Specialists to General-Purpose Assistants - [[2309.10020](https://arxiv.org/abs/2309.10020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.10020.md)].
- MindAgent: Emergent Gaming Interaction - [[2309.09971](https://arxiv.org/abs/2309.09971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09971.md)].
- Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees - [[2309.09968](https://arxiv.org/abs/2309.09968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09968.md)].
- An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models - [[2309.09958](https://arxiv.org/abs/2309.09958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09958.md)].
- Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering - [[2309.09724](https://arxiv.org/abs/2309.09724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09724.md)].
- CATR: Combinatorial-Dependence Audio-Queried Transformer for Audio-Visual Video Segmentation - [[2309.09709](https://arxiv.org/abs/2309.09709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09709.md)].
- Adapting Large Language Models via Reading Comprehension - [[2309.09530](https://arxiv.org/abs/2309.09530)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09530.md)].
- LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models - [[2309.09506](https://arxiv.org/abs/2309.09506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09506.md)].
- Discovering Sounding Objects by Audio Queries for Audio Visual Segmentation - [[2309.09501](https://arxiv.org/abs/2309.09501)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09501.md)].
- CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages - [[2309.09400](https://arxiv.org/abs/2309.09400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09400.md)].
- Augmenting text for spoken language understanding with Large Language Models - [[2309.09390](https://arxiv.org/abs/2309.09390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09390.md)].
- Embrace Divergence for Richer Insights: A Multi-document Summarization Benchmark and a Case Study on Summarizing Diverse Information from News Articles - [[2309.09369](https://arxiv.org/abs/2309.09369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09369.md)].
- OWL: A Large Language Model for IT Operations - [[2309.09298](https://arxiv.org/abs/2309.09298)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09298.md)].
- LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation - [[2309.09294](https://arxiv.org/abs/2309.09294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09294.md)].
- Contrastive Decoding Improves Reasoning in Large Language Models - [[2309.09117](https://arxiv.org/abs/2309.09117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.09117.md)].
- Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT) - [[2309.08968](https://arxiv.org/abs/2309.08968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08968.md)].
- Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data? - [[2309.08963](https://arxiv.org/abs/2309.08963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08963.md)].
- Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca - [[2309.08958](https://arxiv.org/abs/2309.08958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08958.md)].
- PDFTriage: Question Answering over Long, Structured Documents - [[2309.08872](https://arxiv.org/abs/2309.08872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08872.md)].
- S3-DST: Structured Open-Domain Dialogue Segmentation and State Tracking in the Era of LLMs - [[2309.08827](https://arxiv.org/abs/2309.08827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08827.md)].
- Stack-and-Delay: a new codebook pattern for music generation - [[2309.08804](https://arxiv.org/abs/2309.08804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08804.md)].
- Enhance audio generation controllability through representation similarity regularization - [[2309.08773](https://arxiv.org/abs/2309.08773)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08773.md)].
- BANSAC: A dynamic BAyesian Network for adaptive SAmple Consensus - [[2309.08690](https://arxiv.org/abs/2309.08690)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08690.md)].
- Sparse Autoencoders Find Highly Interpretable Features in Language Models - [[2309.08600](https://arxiv.org/abs/2309.08600)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08600.md)].
- Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes - [[2309.08588](https://arxiv.org/abs/2309.08588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08588.md)].
- Compositional Foundation Models for Hierarchical Planning - [[2309.08587](https://arxiv.org/abs/2309.08587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08587.md)].
- Replacing softmax with ReLU in Vision Transformers - [[2309.08586](https://arxiv.org/abs/2309.08586)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08586.md)].
- Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers - [[2309.08532](https://arxiv.org/abs/2309.08532)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08532.md)].
- Scaling Laws for Sparsely-Connected Foundation Models - [[2309.08520](https://arxiv.org/abs/2309.08520)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08520.md)].
- Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata - [[2309.08491](https://arxiv.org/abs/2309.08491)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08491.md)].
- Deformable Neural Radiance Fields using RGB and Event Cameras - [[2309.08416](https://arxiv.org/abs/2309.08416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08416.md)].
- Cure the headache of Transformers via Collinear Constrained Attention - [[2309.08646](https://arxiv.org/abs/2309.08646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08646.md)].
- Investigating Answerability of LLMs for Long-Form Question Answering - [[2309.08210](https://arxiv.org/abs/2309.08210)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08210.md)].
- LASER: LLM Agent with State-Space Exploration for Web Navigation - [[2309.08172](https://arxiv.org/abs/2309.08172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08172.md)].
- Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding - [[2309.08168](https://arxiv.org/abs/2309.08168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08168.md)].
- RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue - [[2309.08156](https://arxiv.org/abs/2309.08156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08156.md)].
- Retrieval-Augmented Text-to-Audio Generation - [[2309.08051](https://arxiv.org/abs/2309.08051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08051.md)].
- Leveraging Contextual Information for Effective Entity Salience Detection - [[2309.07990](https://arxiv.org/abs/2309.07990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07990.md)].
- Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models - [[2309.07986](https://arxiv.org/abs/2309.07986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07986.md)].
- A Data Source for Reasoning Embodied Agents - [[2309.07974](https://arxiv.org/abs/2309.07974)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07974.md)].
- Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping - [[2309.07970](https://arxiv.org/abs/2309.07970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07970.md)].
- ALWOD: Active Learning for Weakly-Supervised Object Detection - [[2309.07914](https://arxiv.org/abs/2309.07914)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07914.md)].
- Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning - [[2309.07911](https://arxiv.org/abs/2309.07911)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07911.md)].
- TEMPO: Efficient Multi-View Pose Estimation, Tracking, and Forecasting - [[2309.07910](https://arxiv.org/abs/2309.07910)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07910.md)].
- Generative Image Dynamics - [[2309.07906](https://arxiv.org/abs/2309.07906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07906.md)].
- Ambiguity-Aware In-Context Learning with Large Language Models - [[2309.07900](https://arxiv.org/abs/2309.07900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07900.md)].
- Agents: An Open-source Framework for Autonomous Language Agents - [[2309.07870](https://arxiv.org/abs/2309.07870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07870.md)].
- The Rise and Potential of Large Language Model Based Agents: A Survey - [[2309.07864](https://arxiv.org/abs/2309.07864)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07864.md)].
- TextBind: Multi-turn Interleaved Multimodal Instruction-following - [[2309.08637](https://arxiv.org/abs/2309.08637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08637.md)].
- OmnimatteRF: Robust Omnimatte with 3D Background Modeling - [[2309.07749](https://arxiv.org/abs/2309.07749)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07749.md)].
- Efficiently Robustify Pre-trained Models - [[2309.07499](https://arxiv.org/abs/2309.07499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07499.md)].
- EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale Visual Localization - [[2309.07471](https://arxiv.org/abs/2309.07471)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07471.md)].
- Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation? - [[2309.07462](https://arxiv.org/abs/2309.07462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07462.md)].
- Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts - [[2309.07430](https://arxiv.org/abs/2309.07430)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07430.md)].
- Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance - [[2309.07403](https://arxiv.org/abs/2309.07403)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07403.md)].
- AudioSR: Versatile Audio Super-resolution at Scale - [[2309.07314](https://arxiv.org/abs/2309.07314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07314.md)].
- Pretraining on the Test Set Is All You Need - [[2309.08632](https://arxiv.org/abs/2309.08632)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08632.md)].
- All you need is spin: SU(2) equivariant variational quantum circuits based on spin networks - [[2309.07250](https://arxiv.org/abs/2309.07250)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07250.md)].
- Text-Guided Generation and Editing of Compositional 3D Avatars - [[2309.07125](https://arxiv.org/abs/2309.07125)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07125.md)].
- RAIN: Your Language Models Can Align Themselves without Finetuning - [[2309.07124](https://arxiv.org/abs/2309.07124)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07124.md)].
- Tree-Structured Shading Decomposition - [[2309.07122](https://arxiv.org/abs/2309.07122)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07122.md)].
- SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection - [[2309.07084](https://arxiv.org/abs/2309.07084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07084.md)].
- Efficient Reinforcement Learning for Jumping Monopods - [[2309.07038](https://arxiv.org/abs/2309.07038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07038.md)].
- DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models - [[2309.06933](https://arxiv.org/abs/2309.06933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06933.md)]. - MagiCapture: High-Resolution Multi-Concept Portrait Customization - [[2309.06895](https://arxiv.org/abs/2309.06895)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06895.md)]. - Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit? - [[2309.06891](https://arxiv.org/abs/2309.06891)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06891.md)]. - Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly - [[2309.06810](https://arxiv.org/abs/2309.06810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06810.md)]. - Dynamic NeRFs for Soccer Scenes - [[2309.06802](https://arxiv.org/abs/2309.06802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06802.md)]. - Cognitive Mirage: A Review of Hallucinations in Large Language Models - [[2309.06794](https://arxiv.org/abs/2309.06794)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06794.md)]. - MPI-Flow: Learning Realistic Optical Flow with Multiplane Images - [[2309.06714](https://arxiv.org/abs/2309.06714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06714.md)]. - VLSlice: Interactive Vision-and-Language Slice Discovery - [[2309.06703](https://arxiv.org/abs/2309.06703)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06703.md)]. - Generalizable Neural Fields as Partially Observed Neural Processes - [[2309.06660](https://arxiv.org/abs/2309.06660)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06660.md)]. - Statistical Rejection Sampling Improves Preference Optimization - [[2309.06657](https://arxiv.org/abs/2309.06657)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06657.md)]. 
- A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale - [[2309.06497](https://arxiv.org/abs/2309.06497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06497.md)]. - Learning Disentangled Avatars with Hybrid 3D Representations - [[2309.06441](https://arxiv.org/abs/2309.06441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06441.md)]. - LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning - [[2309.06440](https://arxiv.org/abs/2309.06440)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06440.md)]. - InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation - [[2309.06380](https://arxiv.org/abs/2309.06380)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06380.md)]. - Recovering from Privacy-Preserving Masking with Large Language Models - [[2309.08628](https://arxiv.org/abs/2309.08628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.08628.md)]. - Modality Unifying Network for Visible-Infrared Person Re-Identification - [[2309.06262](https://arxiv.org/abs/2309.06262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06262.md)]. - Efficient Memory Management for Large Language Model Serving with PagedAttention - [[2309.06180](https://arxiv.org/abs/2309.06180)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06180.md)]. - AstroLLaMA: Towards Specialized Foundation Models in Astronomy - [[2309.06126](https://arxiv.org/abs/2309.06126)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.06126.md)]. - Uncovering mesa-optimization algorithms in Transformers - [[2309.05858](https://arxiv.org/abs/2309.05858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05858.md)]. 
- Large Language Models for Compiler Optimization - [[2309.07062](https://arxiv.org/abs/2309.07062)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.07062.md)]. - SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors - [[2309.05810](https://arxiv.org/abs/2309.05810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05810.md)]. - PhotoVerse: Tuning-Free Image Customization with Text-to-Image Diffusion Models - [[2309.05793](https://arxiv.org/abs/2309.05793)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05793.md)]. - Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips - [[2309.05663](https://arxiv.org/abs/2309.05663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05663.md)]. - Large Language Model for Science: A Study on P vs. NP - [[2309.05689](https://arxiv.org/abs/2309.05689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05689.md)]. - UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase - [[2309.05573](https://arxiv.org/abs/2309.05573)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05573.md)]. - ITI-GEN: Inclusive Text-to-Image Generation - [[2309.05569](https://arxiv.org/abs/2309.05569)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05569.md)]. - NExT-GPT: Any-to-Any Multimodal LLM - [[2309.05519](https://arxiv.org/abs/2309.05519)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05519.md)]. - Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs - [[2309.05516](https://arxiv.org/abs/2309.05516)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05516.md)]. - Textbooks Are All You Need II: phi-1.5 technical report - [[2309.05463](https://arxiv.org/abs/2309.05463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05463.md)]. 
- Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning - [[2309.05444](https://arxiv.org/abs/2309.05444)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05444.md)]. - Class-Incremental Grouping Network for Continual Audio-Visual Learning - [[2309.05281](https://arxiv.org/abs/2309.05281)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05281.md)]. - Multi3DRefer: Grounding Text Description to Multiple 3D Objects - [[2309.05251](https://arxiv.org/abs/2309.05251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05251.md)]. - Does Writing with Language Models Reduce Content Diversity? - [[2309.05196](https://arxiv.org/abs/2309.05196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05196.md)]. - Towards Viewpoint Robustness in Bird's Eye View Segmentation - [[2309.05192](https://arxiv.org/abs/2309.05192)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05192.md)]. - Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color - [[2309.05148](https://arxiv.org/abs/2309.05148)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05148.md)]. - 3D Implicit Transporter for Temporally Consistent Keypoint Discovery - [[2309.05098](https://arxiv.org/abs/2309.05098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05098.md)]. - Multi-view Self-supervised Disentanglement for General Image Denoising - [[2309.05049](https://arxiv.org/abs/2309.05049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.05049.md)]. - Mitigating Word Bias in Zero-shot Prompt-based Classifiers - [[2309.04992](https://arxiv.org/abs/2309.04992)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04992.md)]. - Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation - [[2309.04946](https://arxiv.org/abs/2309.04946)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04946.md)]. 
- Effective Real Image Editing with Accelerated Iterative Diffusion Inversion - [[2309.04907](https://arxiv.org/abs/2309.04907)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04907.md)]. - Leveraging Large Language Models for Exploiting ASR Uncertainty - [[2309.04842](https://arxiv.org/abs/2309.04842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04842.md)]. - Neurons in Large Language Models: Dead, N-gram, Positional - [[2309.04827](https://arxiv.org/abs/2309.04827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04827.md)]. - Towards Real-World Burst Image Super-Resolution: Benchmark and Method - [[2309.04803](https://arxiv.org/abs/2309.04803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04803.md)]. - VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis - [[2309.04800](https://arxiv.org/abs/2309.04800)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04800.md)]. - Towards Robust Model Watermark via Reducing Parametric Vulnerability - [[2309.04777](https://arxiv.org/abs/2309.04777)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04777.md)]. - SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning - [[2309.04766](https://arxiv.org/abs/2309.04766)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04766.md)]. - When to Learn What: Model-Adaptive Data Augmentation Curriculum - [[2309.04747](https://arxiv.org/abs/2309.04747)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04747.md)]. - FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning - [[2309.04663](https://arxiv.org/abs/2309.04663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04663.md)]. 
- MADLAD-400: A Multilingual And Document-Level Large Audited Dataset - [[2309.04662](https://arxiv.org/abs/2309.04662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04662.md)]. - Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf - [[2309.04658](https://arxiv.org/abs/2309.04658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04658.md)]. - Dynamic Mesh-Aware Radiance Fields - [[2309.04581](https://arxiv.org/abs/2309.04581)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04581.md)]. - When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale - [[2309.04564](https://arxiv.org/abs/2309.04564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04564.md)]. - Examining Autoexposure for Challenging Scenes - [[2309.04542](https://arxiv.org/abs/2309.04542)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04542.md)]. - Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving - [[2309.04422](https://arxiv.org/abs/2309.04422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04422.md)]. - DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields - [[2309.04410](https://arxiv.org/abs/2309.04410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04410.md)]. - Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts - [[2309.04354](https://arxiv.org/abs/2309.04354)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04354.md)]. - The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion - [[2309.04509](https://arxiv.org/abs/2309.04509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04509.md)]. - From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting - [[2309.04269](https://arxiv.org/abs/2309.04269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04269.md)]. 
- Towards Practical Capture of High-Fidelity Relightable Avatars - [[2309.04247](https://arxiv.org/abs/2309.04247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04247.md)]. - Unsupervised Object Localization with Representer Point Selection - [[2309.04172](https://arxiv.org/abs/2309.04172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04172.md)]. - NESTLE: a No-Code Tool for Statistical Analysis of Legal Corpus - [[2309.04146](https://arxiv.org/abs/2309.04146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04146.md)]. - Evaluation and Mitigation of Agnosia in Multimodal Large Language Models - [[2309.04041](https://arxiv.org/abs/2309.04041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.04041.md)]. - CDFSL-V: Cross-Domain Few-Shot Learning for Videos - [[2309.03989](https://arxiv.org/abs/2309.03989)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03989.md)]. - LanSER: Language-Model Supported Speech Emotion Recognition - [[2309.03978](https://arxiv.org/abs/2309.03978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03978.md)]. - ImageBind-LLM: Multi-modality Instruction Tuning - [[2309.03905](https://arxiv.org/abs/2309.03905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03905.md)]. - Tracking Anything with Decoupled Video Segmentation - [[2309.03903](https://arxiv.org/abs/2309.03903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03903.md)]. - Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction - [[2309.03900](https://arxiv.org/abs/2309.03900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03900.md)]. - The Making and Breaking of Camouflage - [[2309.03899](https://arxiv.org/abs/2309.03899)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03899.md)]. 
- ProPainter: Improving Propagation and Transformer for Video Inpainting - [[2309.03897](https://arxiv.org/abs/2309.03897)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03897.md)]. - InstructDiffusion: A Generalist Modeling Interface for Vision Tasks - [[2309.03895](https://arxiv.org/abs/2309.03895)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03895.md)]. - A Function Interpretation Benchmark for Evaluating Interpretability Methods - [[2309.03886](https://arxiv.org/abs/2309.03886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03886.md)]. - DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models - [[2309.03883](https://arxiv.org/abs/2309.03883)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03883.md)]. - On Large Language Models' Selection Bias in Multi-Choice Questions - [[2309.03882](https://arxiv.org/abs/2309.03882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03882.md)]. - FLM-101B: An Open LLM and How to Train It with $100K Budget - [[2309.03852](https://arxiv.org/abs/2309.03852)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03852.md)]. - Panoramas from Photons - [[2309.03811](https://arxiv.org/abs/2309.03811)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03811.md)]. - SimNP: Learning Self-Similarity Priors Between Neural Points - [[2309.03809](https://arxiv.org/abs/2309.03809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03809.md)]. - Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption - [[2309.03729](https://arxiv.org/abs/2309.03729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03729.md)]. - Efficient Adaptive Human-Object Interaction Detection with Concept-guided Memory - [[2309.03696](https://arxiv.org/abs/2309.03696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03696.md)]. 
- Large-Scale Automatic Audiobook Creation - [[2309.03926](https://arxiv.org/abs/2309.03926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03926.md)]. - Evaluating ChatGPT as a Recommender System: A Rigorous Approach - [[2309.03613](https://arxiv.org/abs/2309.03613)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03613.md)]. - Enhancing Sample Utilization through Sample Adaptive Augmentation in Semi-Supervised Learning - [[2309.03598](https://arxiv.org/abs/2309.03598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03598.md)]. - Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model - [[2309.03550](https://arxiv.org/abs/2309.03550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03550.md)]. - Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation - [[2309.03549](https://arxiv.org/abs/2309.03549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03549.md)]. - Temporal Collection and Distribution for Referring Video Object Segmentation - [[2309.03473](https://arxiv.org/abs/2309.03473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03473.md)]. - SyncDreamer: Generating Multiview-consistent Images from a Single-view Image - [[2309.03453](https://arxiv.org/abs/2309.03453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03453.md)]. - Large Language Models as Optimizers - [[2309.03409](https://arxiv.org/abs/2309.03409)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03409.md)]. - Distribution-Aware Prompt Tuning for Vision-Language Models - [[2309.03406](https://arxiv.org/abs/2309.03406)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03406.md)]. - Robotic Table Tennis: A Case Study into a High Speed Learning System - [[2309.03315](https://arxiv.org/abs/2309.03315)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03315.md)]. 
- Matcha-TTS: A fast TTS architecture with conditional flow matching - [[2309.03199](https://arxiv.org/abs/2309.03199)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03199.md)]. - Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields - [[2309.03185](https://arxiv.org/abs/2309.03185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03185.md)]. - SLiMe: Segment Like Me - [[2309.03179](https://arxiv.org/abs/2309.03179)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03179.md)]. - ResFields: Residual Neural Fields for Spatiotemporal Signals - [[2309.03160](https://arxiv.org/abs/2309.03160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03160.md)]. - MyoDex: A Generalizable Prior for Dexterous Manipulation - [[2309.03130](https://arxiv.org/abs/2309.03130)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03130.md)]. - Dynamic Hyperbolic Attention Network for Fine Hand-object Reconstruction - [[2309.02965](https://arxiv.org/abs/2309.02965)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02965.md)]. - GPT Can Solve Mathematical Problems Without a Calculator - [[2309.03241](https://arxiv.org/abs/2309.03241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03241.md)]. - Zero-Resource Hallucination Prevention for Large Language Models - [[2309.02654](https://arxiv.org/abs/2309.02654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02654.md)]. - Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning - [[2309.02591](https://arxiv.org/abs/2309.02591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02591.md)]. - Physically Grounded Vision-Language Models for Robotic Manipulation - [[2309.02561](https://arxiv.org/abs/2309.02561)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02561.md)]. 
- A skeletonization algorithm for gradient-based optimization - [[2309.02527](https://arxiv.org/abs/2309.02527)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02527.md)]. - GO-SLAM: Global Optimization for Consistent 3D Instant Reconstruction - [[2309.02436](https://arxiv.org/abs/2309.02436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02436.md)]. - Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach - [[2309.02429](https://arxiv.org/abs/2309.02429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02429.md)]. - Cognitive Architectures for Language Agents - [[2309.02427](https://arxiv.org/abs/2309.02427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02427.md)]. - EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding - [[2309.02423](https://arxiv.org/abs/2309.02423)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02423.md)]. - Doppelgangers: Learning to Disambiguate Images of Similar Structures - [[2309.02420](https://arxiv.org/abs/2309.02420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02420.md)]. - Generating Realistic Images from In-the-wild Sounds - [[2309.02405](https://arxiv.org/abs/2309.02405)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02405.md)]. - Prototype-based Dataset Comparison - [[2309.02401](https://arxiv.org/abs/2309.02401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02401.md)]. - Explaining grokking through circuit efficiency - [[2309.02390](https://arxiv.org/abs/2309.02390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02390.md)]. - CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning - [[2309.02301](https://arxiv.org/abs/2309.02301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02301.md)]. 
- Making Large Language Models Better Reasoners with Alignment - [[2309.02144](https://arxiv.org/abs/2309.02144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02144.md)]. - Multi-label affordance mapping from egocentric vision - [[2309.02120](https://arxiv.org/abs/2309.02120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02120.md)]. - Iterative Superquadric Recomposition of 3D Objects from Multiple Views - [[2309.02102](https://arxiv.org/abs/2309.02102)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02102.md)]. - Learning Cross-Modal Affinity for Referring Video Object Segmentation Targeting Limited Samples - [[2309.02041](https://arxiv.org/abs/2309.02041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02041.md)]. - Data-Juicer: A One-Stop Data Processing System for Large Language Models - [[2309.02033](https://arxiv.org/abs/2309.02033)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02033.md)]. - RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image - [[2309.02020](https://arxiv.org/abs/2309.02020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.02020.md)]. - NICE: CVPR 2023 Challenge on Zero-shot Image Captioning - [[2309.01961](https://arxiv.org/abs/2309.01961)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01961.md)]. - Empowering Low-Light Image Enhancer through Customized Learnable Priors - [[2309.01958](https://arxiv.org/abs/2309.01958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01958.md)]. - Towards Universal Image Embeddings: A Large-Scale Dataset and Challenge for Generic Image Representations - [[2309.01858](https://arxiv.org/abs/2309.01858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01858.md)]. - One Wide Feedforward is All You Need - [[2309.01826](https://arxiv.org/abs/2309.01826)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01826.md)]. 
- Are Emergent Abilities in Large Language Models just In-Context Learning? - [[2309.01809](https://arxiv.org/abs/2309.01809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01809.md)]. - An Empirical Analysis for Zero-Shot Multi-Label Classification on COVID-19 CT Scans and Uncurated Reports - [[2309.01740](https://arxiv.org/abs/2309.01740)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01740.md)]. - Mask-Attention-Free Transformer for 3D Instance Segmentation - [[2309.01692](https://arxiv.org/abs/2309.01692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01692.md)]. - AGG-Net: Attention Guided Gated-convolutional Network for Depth Image Completion - [[2309.01624](https://arxiv.org/abs/2309.01624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01624.md)]. - Raw Data Is All You Need: Virtual Axle Detector with Enhanced Receptive Field - [[2309.01574](https://arxiv.org/abs/2309.01574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01574.md)]. - A Blackbox Model Is All You Need to Breach Privacy: Smart Grid Forecasting Models as a Use Case - [[2309.01523](https://arxiv.org/abs/2309.01523)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01523.md)]. - Unified Pre-training with Pseudo Texts for Text-To-Image Person Re-identification - [[2309.01420](https://arxiv.org/abs/2309.01420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01420.md)]. - Memory augment is All You Need for image restoration - [[2309.01377](https://arxiv.org/abs/2309.01377)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01377.md)]. - EMR-MSF: Self-Supervised Recurrent Monocular Scene Flow Exploiting Ego-Motion Rigidity - [[2309.01296](https://arxiv.org/abs/2309.01296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01296.md)]. 
- SOAR: Scene-debiasing Open-set Action Recognition - [[2309.01265](https://arxiv.org/abs/2309.01265)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01265.md)]. - Towards Generic Image Manipulation Detection with Weakly-Supervised Self-Consistency Learning - [[2309.01246](https://arxiv.org/abs/2309.01246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01246.md)]. - LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for Vision-Language Models - [[2309.01155](https://arxiv.org/abs/2309.01155)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01155.md)]. - EdaDet: Open-Vocabulary Object Detection Using Early Dense Alignment - [[2309.01151](https://arxiv.org/abs/2309.01151)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01151.md)]. - Attention Where It Matters: Rethinking Visual Document Understanding with Selective Region Concentration - [[2309.01131](https://arxiv.org/abs/2309.01131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01131.md)]. - CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection - [[2309.01093](https://arxiv.org/abs/2309.01093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01093.md)]. - Chinese Text Recognition with A Pre-Trained CLIP-Like Model Through Image-IDS Aligning - [[2309.01083](https://arxiv.org/abs/2309.01083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.01083.md)]. - ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models - [[2309.00986](https://arxiv.org/abs/2309.00986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00986.md)]. - eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models - [[2309.00964](https://arxiv.org/abs/2309.00964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00964.md)]. 
- Two-in-One Depth: Bridging the Gap Between Monocular and Binocular Self-supervised Depth Estimation - [[2309.00933](https://arxiv.org/abs/2309.00933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00933.md)].
- Domain Generalization via Balancing Training Difficulty and Model Capability - [[2309.00844](https://arxiv.org/abs/2309.00844)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00844.md)].
- Few shot font generation via transferring similarity guided global style and quantization local style - [[2309.00827](https://arxiv.org/abs/2309.00827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00827.md)].
- Instability of the solitary waves for the Generalized Benjamin-Bona-Mahony Equation - [[2309.0791](https://arxiv.org/abs/2309.0791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0791.md)].
- Contrastive Feature Masking Open-Vocabulary Vision Transformer - [[2309.00775](https://arxiv.org/abs/2309.00775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00775.md)].
- Learning Shared Safety Constraints from Multi-task Demonstrations - [[2309.00711](https://arxiv.org/abs/2309.00711)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00711.md)].
- Searching for a Leptophilic Z' and a 3-3-1 symmetry at CLIC - [[2309.0681](https://arxiv.org/abs/2309.0681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0681.md)].
- Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following - [[2309.00615](https://arxiv.org/abs/2309.00615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00615.md)].
- CityDreamer: Compositional Generative Model of Unbounded 3D Cities - [[2309.00610](https://arxiv.org/abs/2309.00610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00610.md)].
- Rieger, Schwabe, Suess-de Vries: The Sunny Beats of Resonance - [[2309.0666](https://arxiv.org/abs/2309.0666)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0666.md)].
- VideoGen: A Reference-Guided Latent Diffusion Approach for High Definition Text-to-Video Generation - [[2309.00398](https://arxiv.org/abs/2309.00398)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00398.md)].
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning - [[2309.00363](https://arxiv.org/abs/2309.00363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00363.md)].
- Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior - [[2309.00359](https://arxiv.org/abs/2309.00359)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00359.md)].
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback - [[2309.00267](https://arxiv.org/abs/2309.00267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00267.md)].
- A Massively Parallel Dynamic Programming for Approximate Rectangle Escape Problem - [[2309.0242](https://arxiv.org/abs/2309.0242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0242.md)].
- Object-Centric Multiple Object Tracking - [[2309.00233](https://arxiv.org/abs/2309.00233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00233.md)].
- Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation - [[2309.00216](https://arxiv.org/abs/2309.00216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00216.md)].
- Pseudo-magnetic fields in square lattices - [[2309.0212](https://arxiv.org/abs/2309.0212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0212.md)].
- Empirical Modeling of Variance in Medium Frequency R-Mode Time-of-Arrival Measurements - [[2309.0202](https://arxiv.org/abs/2309.0202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0202.md)].

### August 2023

- Block occurrences in the binary expansion - [[2309.0142](https://arxiv.org/abs/2309.0142)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.0142.md)].
- YaRN: Efficient Context Window Extension of Large Language Models - [[2309.00071](https://arxiv.org/abs/2309.00071)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00071.md)].
- SoDaCam: Software-defined Cameras via Single-Photon Imaging - [[2309.00066](https://arxiv.org/abs/2309.00066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00066.md)].
- FACET: Fairness in Computer Vision Evaluation Benchmark - [[2309.00035](https://arxiv.org/abs/2309.00035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.00035.md)].
- PointLLM: Empowering Large Language Models to Understand Point Clouds - [[2308.16911](https://arxiv.org/abs/2308.16911)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16911.md)].
- StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation - [[2308.16909](https://arxiv.org/abs/2308.16909)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16909.md)].
- InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion - [[2308.16905](https://arxiv.org/abs/2308.16905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16905.md)].
- Transformers as Support Vector Machines - [[2308.16898](https://arxiv.org/abs/2308.16898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16898.md)].
- EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in the Wild - [[2308.16894](https://arxiv.org/abs/2308.16894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16894.md)].
- GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields - [[2308.16891](https://arxiv.org/abs/2308.16891)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16891.md)].
- TouchStone: Evaluating Vision-Language Models by Language Models - [[2308.16890](https://arxiv.org/abs/2308.16890)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16890.md)].
- The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants - [[2308.16884](https://arxiv.org/abs/2308.16884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16884.md)].
- SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame Interpolation - [[2308.16876](https://arxiv.org/abs/2308.16876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16876.md)].
- Coarse-to-Fine Amodal Segmentation with Shape Prior - [[2308.16825](https://arxiv.org/abs/2308.16825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16825.md)].
- Can Programming Languages Boost Each Other via Instruction Tuning? - [[2308.16824](https://arxiv.org/abs/2308.16824)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16824.md)].
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models - [[2308.16777](https://arxiv.org/abs/2308.16777)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16777.md)].
- Towards High-Fidelity Text-Guided 3D Face Generation and Manipulation Using only Images - [[2308.16758](https://arxiv.org/abs/2308.16758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16758.md)].
- Parsing is All You Need for Accurate Gait Recognition in the Wild - [[2308.16739](https://arxiv.org/abs/2308.16739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16739.md)].
- ViLTA: Enhancing Vision-Language Pre-training through Textual Augmentation - [[2308.16689](https://arxiv.org/abs/2308.16689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16689.md)].
- Any-Size-Diffusion: Toward Efficient Text-Driven Synthesis for Any-Size HD Images - [[2308.16582](https://arxiv.org/abs/2308.16582)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16582.md)].
- MVDream: Multi-view Diffusion for 3D Generation - [[2308.16512](https://arxiv.org/abs/2308.16512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16512.md)].
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations - [[2308.16505](https://arxiv.org/abs/2308.16505)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16505.md)].
- PivotNet: Vectorized Pivot Learning for End-to-end HD Map Construction - [[2308.16477](https://arxiv.org/abs/2308.16477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16477.md)].
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models - [[2308.16463](https://arxiv.org/abs/2308.16463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16463.md)].
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery - [[2308.16460](https://arxiv.org/abs/2308.16460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16460.md)].
- BioCoder: A Benchmark for Bioinformatics Code Generation with Contextual Pragmatic Knowledge - [[2308.16458](https://arxiv.org/abs/2308.16458)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16458.md)].
- Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff - [[2308.16454](https://arxiv.org/abs/2308.16454)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16454.md)].
- Emergence of Segmentation with Minimalistic White-Box Transformers - [[2308.16271](https://arxiv.org/abs/2308.16271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16271.md)].
- Active Neural Mapping - [[2308.16246](https://arxiv.org/abs/2308.16246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16246.md)].
- Learning Vision-based Pursuit-Evasion Robot Policies - [[2308.16185](https://arxiv.org/abs/2308.16185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16185.md)].
- SAM-Med2D - [[2308.16184](https://arxiv.org/abs/2308.16184)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16184.md)].
- MMVP: Motion-Matrix-based Video Prediction - [[2308.16154](https://arxiv.org/abs/2308.16154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16154.md)].
- LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models - [[2308.16137](https://arxiv.org/abs/2308.16137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16137.md)].
- Response: Emergent analogical reasoning in large language models - [[2308.16118](https://arxiv.org/abs/2308.16118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16118.md)].
- Learned Image Reasoning Prior Penetrates Deep Unfolding Network for Panchromatic and Multi-Spectral Image Fusion - [[2308.16083](https://arxiv.org/abs/2308.16083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16083.md)].
- RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation - [[2308.15975](https://arxiv.org/abs/2308.15975)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15975.md)].
- WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model - [[2308.15962](https://arxiv.org/abs/2308.15962)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15962.md)].
- LLaSM: Large Language and Speech Model - [[2308.15930](https://arxiv.org/abs/2308.15930)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15930.md)].
- Reconstructing Groups of People with Hypergraph Relational Reasoning - [[2308.15844](https://arxiv.org/abs/2308.15844)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15844.md)].
- Introducing Language Guidance in Prompt-based Continual Learning - [[2308.15827](https://arxiv.org/abs/2308.15827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15827.md)].
- WeatherBench 2: A benchmark for the next generation of data-driven global weather models - [[2308.15560](https://arxiv.org/abs/2308.15560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15560.md)].
- Canonical Factors for Hybrid Neural Fields - [[2308.15461](https://arxiv.org/abs/2308.15461)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15461.md)].
- Shatter and Gather: Learning Referring Image Segmentation with Text Supervision - [[2308.15512](https://arxiv.org/abs/2308.15512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15512.md)].
- Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation - [[2308.15367](https://arxiv.org/abs/2308.15367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15367.md)].
- CLIPTrans: Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation - [[2308.15226](https://arxiv.org/abs/2308.15226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15226.md)].
- Evaluation and Analysis of Hallucination in Large Vision-Language Models - [[2308.15126](https://arxiv.org/abs/2308.15126)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15126.md)].
- Learning to Upsample by Learning to Sample - [[2308.15085](https://arxiv.org/abs/2308.15085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15085.md)].
- Class Prior-Free Positive-Unlabeled Learning with Taylor Variational Loss for Hyperspectral Remote Sensing Imagery - [[2308.15081](https://arxiv.org/abs/2308.15081)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15081.md)].
- Exploring Model Transferability through the Lens of Potential Energy - [[2308.15074](https://arxiv.org/abs/2308.15074)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15074.md)].
- Pose-Free Neural Radiance Fields via Implicit Pose Regularization - [[2308.15049](https://arxiv.org/abs/2308.15049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15049.md)].
- Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models - [[2308.15022](https://arxiv.org/abs/2308.15022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.15022.md)].
- Vision Grid Transformer for Document Layout Analysis - [[2308.14978](https://arxiv.org/abs/2308.14978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14978.md)].
- LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks - [[2308.14972](https://arxiv.org/abs/2308.14972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14972.md)].
- Vector Search with OpenAI Embeddings: Lucene Is All You Need - [[2308.14963](https://arxiv.org/abs/2308.14963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14963.md)].
- Read-only Prompt Optimization for Vision-Language Few-shot Learning - [[2308.14960](https://arxiv.org/abs/2308.14960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14960.md)].
- NSF: Neural Surface Fields for Human Modeling from Monocular Depth - [[2308.14847](https://arxiv.org/abs/2308.14847)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14847.md)].
- CLNeRF: Continual Learning Meets NeRF - [[2308.14816](https://arxiv.org/abs/2308.14816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14816.md)].
- Efficient Discovery and Effective Evaluation of Visual Perceptual Similarity: A Benchmark and Beyond - [[2308.14753](https://arxiv.org/abs/2308.14753)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14753.md)].
- AI Deception: A Survey of Examples, Risks, and Potential Solutions - [[2308.14752](https://arxiv.org/abs/2308.14752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14752.md)].
- R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras - [[2308.14713](https://arxiv.org/abs/2308.14713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14713.md)].
- S-TREK: Sequential Translation and Rotation Equivariant Keypoints for local feature extraction - [[2308.14598](https://arxiv.org/abs/2308.14598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14598.md)].
- Referring Image Segmentation Using Text Supervision - [[2308.14575](https://arxiv.org/abs/2308.14575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14575.md)].
- LAC: Latent Action Composition for Skeleton-based Action Segmentation - [[2308.14500](https://arxiv.org/abs/2308.14500)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14500.md)].
- Priority-Centric Human Motion Generation in Discrete Latent Space - [[2308.14480](https://arxiv.org/abs/2308.14480)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14480.md)].
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor - [[2308.14383](https://arxiv.org/abs/2308.14383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14383.md)].
- ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large Language Models - [[2308.14353](https://arxiv.org/abs/2308.14353)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14353.md)].
- DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation - [[2308.14346](https://arxiv.org/abs/2308.14346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14346.md)].
- Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection - [[2308.14286](https://arxiv.org/abs/2308.14286)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14286.md)].
- HoloFusion: Towards Photo-realistic 3D Generative Modeling - [[2308.14244](https://arxiv.org/abs/2308.14244)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14244.md)].
- High-Resolution Document Shadow Removal via A Large-Scale Real-World Dataset and A Frequency-Aware Shadow Erasing Net - [[2308.14221](https://arxiv.org/abs/2308.14221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14221.md)].
- Sparse Sampling Transformer with Uncertainty-Driven Ranking for Unified Removal of Raindrops and Rain Streaks - [[2308.14153](https://arxiv.org/abs/2308.14153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14153.md)].
- Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers - [[2308.14152](https://arxiv.org/abs/2308.14152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14152.md)].
- Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario - [[2308.14119](https://arxiv.org/abs/2308.14119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14119.md)].
- MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records - [[2308.14089](https://arxiv.org/abs/2308.14089)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14089.md)].
- 4D Myocardium Reconstruction with Decoupled Motion and Shape Model - [[2308.14083](https://arxiv.org/abs/2308.14083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14083.md)].
- Reconstructing Interacting Hands with Interaction Prior from Monocular Images - [[2308.14082](https://arxiv.org/abs/2308.14082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14082.md)].
- Nonrigid Object Contact Estimation With Regional Unwrapping Transformer - [[2308.14074](https://arxiv.org/abs/2308.14074)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14074.md)].
- Hierarchical Contrastive Learning for Pattern-Generalizable Image Corruption Detection - [[2308.14061](https://arxiv.org/abs/2308.14061)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14061.md)].
- Domain-Specificity Inducing Transformers for Source-Free Domain Adaptation - [[2308.14023](https://arxiv.org/abs/2308.14023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14023.md)].
- Calibrating Panoramic Depth Estimation for Practical Localization and Mapping - [[2308.14005](https://arxiv.org/abs/2308.14005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.14005.md)].
- LDL: Line Distance Functions for Panoramic Localization - [[2308.13989](https://arxiv.org/abs/2308.13989)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13989.md)].
- Prior-guided Source-free Domain Adaptation for Human Pose Estimation - [[2308.13954](https://arxiv.org/abs/2308.13954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13954.md)].
- Late Stopping: Avoiding Confidently Learning from Mislabeled Examples - [[2308.13862](https://arxiv.org/abs/2308.13862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13862.md)].
- Beyond One-to-One: Rethinking the Referring Image Segmentation - [[2308.13853](https://arxiv.org/abs/2308.13853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13853.md)].
- Point-Query Quadtree for Crowd Counting, Localization, and More - [[2308.13814](https://arxiv.org/abs/2308.13814)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13814.md)].
- ORES: Open-vocabulary Responsible Visual Synthesis - [[2308.13785](https://arxiv.org/abs/2308.13785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13785.md)].
- Generalized Lightness Adaptation with Channel Selective Normalization - [[2308.13783](https://arxiv.org/abs/2308.13783)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13783.md)].
- MST-compression: Compressing and Accelerating Binary Neural Networks with Minimum Spanning Tree - [[2308.13735](https://arxiv.org/abs/2308.13735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13735.md)].
- ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning - [[2308.13724](https://arxiv.org/abs/2308.13724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13724.md)].
- Joint Modeling of Feature, Correspondence, and a Compressed Memory for Video Object Segmentation - [[2308.13505](https://arxiv.org/abs/2308.13505)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13505.md)].
- A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance - [[2308.13504](https://arxiv.org/abs/2308.13504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13504.md)].
- Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers - [[2308.13494](https://arxiv.org/abs/2308.13494)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13494.md)].
- Leveraging Knowledge and Reinforcement Learning for Enhanced Reliability of Language Models - [[2308.13467](https://arxiv.org/abs/2308.13467)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13467.md)].
- Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models - [[2308.13437](https://arxiv.org/abs/2308.13437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13437.md)].
- Nougat: Neural Optical Understanding for Academic Documents - [[2308.13418](https://arxiv.org/abs/2308.13418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13418.md)].
- SoTaNa: The Open-Source Software Development Assistant - [[2308.13416](https://arxiv.org/abs/2308.13416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13416.md)].
- Harvard Glaucoma Detection and Progression: A Multimodal Multitask Dataset and Generalization-Reinforced Semi-Supervised Learning - [[2308.13411](https://arxiv.org/abs/2308.13411)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13411.md)].
- Relighting Neural Radiance Fields with Shadow and Highlight Hints - [[2308.13404](https://arxiv.org/abs/2308.13404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13404.md)].
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs - [[2308.13387](https://arxiv.org/abs/2308.13387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13387.md)].
- Distribution-Aligned Diffusion for Human Mesh Recovery - [[2308.13369](https://arxiv.org/abs/2308.13369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13369.md)].
- ConSlide: Asynchronous Hierarchical Interaction Transformer with Breakup-Reorganize Rehearsal for Continual Whole Slide Image Analysis - [[2308.13324](https://arxiv.org/abs/2308.13324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13324.md)].
- SVQNet: Sparse Voxel-Adjacent Query Network for 4D Spatio-Temporal LiDAR Semantic Segmentation - [[2308.13323](https://arxiv.org/abs/2308.13323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13323.md)].
- A Game of Bundle Adjustment -- Learning Efficient Convergence - [[2308.13270](https://arxiv.org/abs/2308.13270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13270.md)].
- Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation - [[2308.13266](https://arxiv.org/abs/2308.13266)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13266.md)].
- Unpaired Multi-domain Attribute Translation of 3D Facial Shapes with a Square and Symmetric Geometric Map - [[2308.13245](https://arxiv.org/abs/2308.13245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13245.md)].
- Black-box Unsupervised Domain Adaptation with Bi-directional Atkinson-Shiffrin Memory - [[2308.13236](https://arxiv.org/abs/2308.13236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13236.md)].
- ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking - [[2308.13229](https://arxiv.org/abs/2308.13229)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13229.md)].
- MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning - [[2308.13218](https://arxiv.org/abs/2308.13218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13218.md)].
- IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization - [[2308.13168](https://arxiv.org/abs/2308.13168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13168.md)].
- Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model - [[2308.13164](https://arxiv.org/abs/2308.13164)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13164.md)].
- SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research - [[2308.13149](https://arxiv.org/abs/2308.13149)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13149.md)].
- OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models - [[2308.13137](https://arxiv.org/abs/2308.13137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13137.md)].
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM - [[2308.13566](https://arxiv.org/abs/2308.13566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13566.md)].
- Preserving Modality Structure Improves Multi-Modal Learning - [[2308.13077](https://arxiv.org/abs/2308.13077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.13077.md)].
- NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes - [[2308.12967](https://arxiv.org/abs/2308.12967)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12967.md)].
- Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation - [[2308.12968](https://arxiv.org/abs/2308.12968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12968.md)].
- Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities - [[2308.12966](https://arxiv.org/abs/2308.12966)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12966.md)].
- Dense Text-to-Image Generation with Attention Modulation - [[2308.12964](https://arxiv.org/abs/2308.12964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12964.md)].
- MapPrior: Bird's-Eye View Map Layout Estimation with Generative Models - [[2308.12963](https://arxiv.org/abs/2308.12963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12963.md)].
- Motion-Guided Masking for Spatiotemporal Representation Learning - [[2308.12962](https://arxiv.org/abs/2308.12962)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12962.md)].
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment - [[2308.12960](https://arxiv.org/abs/2308.12960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12960.md)].
- Code Llama: Open Foundation Models for Code - [[2308.12950](https://arxiv.org/abs/2308.12950)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12950.md)].
- Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining? - [[2308.12898](https://arxiv.org/abs/2308.12898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12898.md)].
- Boosting Semantic Segmentation from the Perspective of Explicit Class Embeddings - [[2308.12894](https://arxiv.org/abs/2308.12894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12894.md)].
- ToonTalker: Cross-Domain Face Reenactment - [[2308.12866](https://arxiv.org/abs/2308.12866)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12866.md)].
- Fast Adversarial Training with Smooth Convergence - [[2308.12857](https://arxiv.org/abs/2308.12857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12857.md)].
- On Offline Evaluation of 3D Object Detection for Autonomous Driving - [[2308.12779](https://arxiv.org/abs/2308.12779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12779.md)].
- LISTER: Neighbor Decoding for Length-Insensitive Scene Text Recognition - [[2308.12774](https://arxiv.org/abs/2308.12774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12774.md)].
- VIGC: Visual Instruction Generation and Correction - [[2308.12714](https://arxiv.org/abs/2308.12714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12714.md)].
- A Parse-Then-Place Approach for Generating Graphic Layouts from Textual Descriptions - [[2308.12700](https://arxiv.org/abs/2308.12700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12700.md)].
- PromptMRG: Diagnosis-Driven Prompts for Medical Report Generation - [[2308.12604](https://arxiv.org/abs/2308.12604)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12604.md)].
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation - [[2308.12595](https://arxiv.org/abs/2308.12595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12595.md)].
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects - [[2308.12590](https://arxiv.org/abs/2308.12590)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12590.md)].
- Grounded Entity-Landmark Adaptive Pre-training for Vision-and-Language Navigation - [[2308.12587](https://arxiv.org/abs/2308.12587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12587.md)].
- Hyperbolic Audio-visual Zero-shot Learning - [[2308.12558](https://arxiv.org/abs/2308.12558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12558.md)].
- Synchronize Feature Extracting and Matching: A Single Branch Framework for 3D Object Tracking - [[2308.12549](https://arxiv.org/abs/2308.12549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12549.md)].
- CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias - [[2308.12539](https://arxiv.org/abs/2308.12539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12539.md)].
- Masked Autoencoders are Efficient Class Incremental Learners - [[2308.12510](https://arxiv.org/abs/2308.12510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12510.md)].
- CGMI: Configurable General Multi-Agent Interaction Framework - [[2308.12503](https://arxiv.org/abs/2308.12503)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12503.md)].
- With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning - [[2308.12383](https://arxiv.org/abs/2308.12383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12383.md)].
- Vision Transformer Adapters for Generalizable Multitask Learning - [[2308.12372](https://arxiv.org/abs/2308.12372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12372.md)].
- AdVerb: Visually Guided Audio Dereverberation - [[2308.12370](https://arxiv.org/abs/2308.12370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12370.md)].
- Continual Zero-Shot Learning through Semantically Guided Generative Random Walks - [[2308.12366](https://arxiv.org/abs/2308.12366)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12366.md)].
- Diffusion-based Image Translation with Label Guidance for Domain Adaptive Semantic Segmentation - [[2308.12350](https://arxiv.org/abs/2308.12350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12350.md)].
- Improving Generative Model-based Unfolding with Schrödinger Bridges - [[2308.12351](https://arxiv.org/abs/2308.12351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12351.md)].
- CHORUS: Learning Canonicalized 3D Human-Object Spatial Relations from Unbounded Synthesized Images - [[2308.12288](https://arxiv.org/abs/2308.12288)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12288.md)].
- Simple is Better and Large is Not Enough: Towards Ensembling of Foundational Language Models - [[2308.12272](https://arxiv.org/abs/2308.12272)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12272.md)].
- Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning - [[2308.12219](https://arxiv.org/abs/2308.12219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12219.md)].
- SG-Former: Self-guided Transformer with Evolving Token Reallocation - [[2308.12216](https://arxiv.org/abs/2308.12216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12216.md)].
- CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No - [[2308.12213](https://arxiv.org/abs/2308.12213)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12213.md)].
- Curriculum Learning with Adam: The Devil Is in the Wrong Details - [[2308.12202](https://arxiv.org/abs/2308.12202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12202.md)].
- Sign Language Translation with Iterative Prototype - [[2308.12191](https://arxiv.org/abs/2308.12191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12191.md)].
- SILT: Shadow-aware Iterative Label Tuning for Learning to Detect Shadows from Noisy Labels - [[2308.12064](https://arxiv.org/abs/2308.12064)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12064.md)].
- DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration - [[2308.12058](https://arxiv.org/abs/2308.12058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12058.md)].
- Aligning Language Models with Offline Reinforcement Learning from Human Feedback - [[2308.12050](https://arxiv.org/abs/2308.12050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12050.md)].
- Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages - [[2308.12038](https://arxiv.org/abs/2308.12038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12038.md)].
- RefEgo: Referring Expression Comprehension Dataset from First-Person Perception of Ego4D - [[2308.12035](https://arxiv.org/abs/2308.12035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12035.md)].
- From Instructions to Intrinsic Human Values -- A Survey of Alignment Goals for Big Models - [[2308.12014](https://arxiv.org/abs/2308.12014)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.12014.md)].
- RankMixup: Ranking-Based Mixup Training for Network Calibration - [[2308.11990](https://arxiv.org/abs/2308.11990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11990.md)].
- Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields - [[2308.11974](https://arxiv.org/abs/2308.11974)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11974.md)].
- EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE - [[2308.11971](https://arxiv.org/abs/2308.11971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11971.md)].
- OFVL-MS: Once for Visual Localization across Multiple Indoor Scenes - [[2308.11928](https://arxiv.org/abs/2308.11928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11928.md)].
- Recovering a Molecule's 3D Dynamics from Liquid-phase Electron Microscopy Movies - [[2308.11927](https://arxiv.org/abs/2308.11927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11927.md)].
- LFS-GAN: Lifelong Few-Shot Image Generation - [[2308.11917](https://arxiv.org/abs/2308.11917)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11917.md)].
- Semantic-Aware Implicit Template Learning via Part Deformation Consistency - [[2308.11916](https://arxiv.org/abs/2308.11916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11916.md)].
- ACLS: Adaptive and Conditional Label Smoothing for Network Calibration - [[2308.11911](https://arxiv.org/abs/2308.11911)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11911.md)].
- Camera-Driven Representation Learning for Unsupervised Domain Adaptive Person Re-identification - [[2308.11901](https://arxiv.org/abs/2308.11901)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11901.md)].
- Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack - [[2308.11894](https://arxiv.org/abs/2308.11894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11894.md)].
- SUMMIT: Source-Free Adaptation of Uni-Modal Models to Multi-Modal Targets - [[2308.11880](https://arxiv.org/abs/2308.11880)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11880.md)].
- Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch - [[2308.11874](https://arxiv.org/abs/2308.11874)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11874.md)].
- Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations - [[2308.11796](https://arxiv.org/abs/2308.11796)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11796.md)].
- Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts - [[2308.11793](https://arxiv.org/abs/2308.11793)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11793.md)].
- Understanding Hessian Alignment for Domain Generalization - [[2308.11778](https://arxiv.org/abs/2308.11778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11778.md)].
- Efficient Controllable Multi-Task Architectures - [[2308.11744](https://arxiv.org/abs/2308.11744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11744.md)].
- Animal3D: A Comprehensive Dataset of 3D Animal Pose and Shape - [[2308.11737](https://arxiv.org/abs/2308.11737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11737.md)].
- Efficient Benchmarking (of Language Models) - [[2308.11696](https://arxiv.org/abs/2308.11696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11696.md)].
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking - [[2308.11607](https://arxiv.org/abs/2308.11607)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11607.md)].
- StoryBench: A Multifaceted Benchmark for Continuous Story Visualization - [[2308.11606](https://arxiv.org/abs/2308.11606)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11606.md)].
- SPANet: Frequency-balancing Token Mixer using Spectral Pooling Aggregation Modulation - [[2308.11568](https://arxiv.org/abs/2308.11568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11568.md)].
- Multi-event Video-Text Retrieval - [[2308.11551](https://arxiv.org/abs/2308.11551)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11551.md)].
- TrackFlow: Multi-Object Tracking with Normalizing Flows - [[2308.11513](https://arxiv.org/abs/2308.11513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11513.md)].
- Learning from Semantic Alignment between Unpaired Multiviews for Egocentric Video Recognition - [[2308.11489](https://arxiv.org/abs/2308.11489)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11489.md)].
- Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection - [[2308.11441](https://arxiv.org/abs/2308.11441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11441.md)].
- A Survey on Large Language Model based Autonomous Agents - [[2308.11432](https://arxiv.org/abs/2308.11432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11432.md)].
- ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes - [[2308.11417](https://arxiv.org/abs/2308.11417)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11417.md)].
- How Much Temporal Long-Term Context is Needed for Action Segmentation? - [[2308.11358](https://arxiv.org/abs/2308.11358)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11358.md)].
- Exemplar-Free Continual Transformer with Convolutions - [[2308.11357](https://arxiv.org/abs/2308.11357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11357.md)].
- ProAgent: Building Proactive Cooperative AI with Large Language Models - [[2308.11339](https://arxiv.org/abs/2308.11339)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11339.md)].
- GrowCLIP: Data-aware Automatic Model Growing for Large-scale Contrastive Language-Image Pre-training - [[2308.11331](https://arxiv.org/abs/2308.11331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11331.md)].
- CiteTracker: Correlating Image and Text for Visual Tracking - [[2308.11322](https://arxiv.org/abs/2308.11322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11322.md)].
- CNN based Cuneiform Sign Detection Learned from Annotated 3D Renderings and Mapped Photographs with Illumination Augmentation - [[2308.11277](https://arxiv.org/abs/2308.11277)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11277.md)].
- HMD-NeMo: Online 3D Avatar Motion Generation From Sparse Observations - [[2308.11261](https://arxiv.org/abs/2308.11261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11261.md)].
- ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts - [[2308.11236](https://arxiv.org/abs/2308.11236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11236.md)].
- LDP-Feat: Image Features with Local Differential Privacy - [[2308.11223](https://arxiv.org/abs/2308.11223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11223.md)].
- DiffCloth: Diffusion Based Garment Synthesis and Manipulation via Structural Cross-modal Semantic Alignment - [[2308.11206](https://arxiv.org/abs/2308.11206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11206.md)].
- ViLLA: Fine-Grained Vision-Language Representation Learning from Real-World Data - [[2308.11194](https://arxiv.org/abs/2308.11194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11194.md)].
- Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models - [[2308.11186](https://arxiv.org/abs/2308.11186)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11186.md)].
- MEGA: Multimodal Alignment Aggregation and Distillation For Cinematic Video Segmentation - [[2308.11185](https://arxiv.org/abs/2308.11185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11185.md)].
- ReFit: Recurrent Fitting Network for 3D Human Recovery - [[2308.11184](https://arxiv.org/abs/2308.11184)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11184.md)].
- Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation - [[2308.11166](https://arxiv.org/abs/2308.11166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11166.md)].
- Domain Generalization via Rationale Invariance - [[2308.11158](https://arxiv.org/abs/2308.11158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11158.md)].
- Efficient View Synthesis with Neural Radiance Distribution Field - [[2308.11130](https://arxiv.org/abs/2308.11130)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11130.md)].
- LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video Reconstruction - [[2308.11116](https://arxiv.org/abs/2308.11116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11116.md)].
- CAME: Contrastive Automated Model Evaluation - [[2308.11111](https://arxiv.org/abs/2308.11111)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11111.md)].
- Recursive Video Lane Detection - [[2308.11106](https://arxiv.org/abs/2308.11106)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11106.md)].
- MosaiQ: Quantum Generative Adversarial Networks for Image Generation on NISQ Computers - [[2308.11096](https://arxiv.org/abs/2308.11096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11096.md)].
- Video OWL-ViT: Temporally-consistent open-world localization in video - [[2308.11093](https://arxiv.org/abs/2308.11093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11093.md)].
- Audio-Visual Class-Incremental Learning - [[2308.11073](https://arxiv.org/abs/2308.11073)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11073.md)].
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection - [[2308.11072](https://arxiv.org/abs/2308.11072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11072.md)].
- Neural Amortized Inference for Nested Multi-agent Reasoning - [[2308.11071](https://arxiv.org/abs/2308.11071)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11071.md)].
- MetaGCD: Learning to Continually Learn in Generalized Category Discovery - [[2308.11063](https://arxiv.org/abs/2308.11063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11063.md)].
- UnLoc: A Unified Framework for Video Localization Tasks - [[2308.11062](https://arxiv.org/abs/2308.11062)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11062.md)].
- Coordinate Quantized Neural Implicit Representations for Multi-view Reconstruction - [[2308.11025](https://arxiv.org/abs/2308.11025)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11025.md)].
- Spectral Graphormer: Spectral Graph-based Transformer for Egocentric Two-Hand Reconstruction using Multi-View Color Images - [[2308.11015](https://arxiv.org/abs/2308.11015)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11015.md)].
- Few-Shot Physically-Aware Articulated Mesh Generation via Hierarchical Deformation - [[2308.10898](https://arxiv.org/abs/2308.10898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10898.md)].
- Can Language Models Learn to Listen? - [[2308.10897](https://arxiv.org/abs/2308.10897)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10897.md)].
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents - [[2308.10848](https://arxiv.org/abs/2308.10848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10848.md)].
- EigenPlaces: Training Viewpoint Robust Models for Visual Place Recognition - [[2308.10832](https://arxiv.org/abs/2308.10832)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10832.md)].
- Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image Reconstruction - [[2308.10820](https://arxiv.org/abs/2308.10820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10820.md)].
- Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers - [[2308.10814](https://arxiv.org/abs/2308.10814)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10814.md)].
- Improving Continuous Sign Language Recognition with Cross-Lingual Signs - [[2308.10809](https://arxiv.org/abs/2308.10809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10809.md)].
- MGMAE: Motion Guided Masking for Video Masked Autoencoding - [[2308.10794](https://arxiv.org/abs/2308.10794)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10794.md)].
- Instruction Tuning for Large Language Models: A Survey - [[2308.10792](https://arxiv.org/abs/2308.10792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10792.md)].
- WanJuan: A Comprehensive Multimodal Dataset for Advancing English and Chinese Large Models - [[2308.10755](https://arxiv.org/abs/2308.10755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10755.md)].
- On the Adversarial Robustness of Multi-Modal Foundation Models - [[2308.10741](https://arxiv.org/abs/2308.10741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10741.md)].
- Patch Is Not All You Need - [[2308.10729](https://arxiv.org/abs/2308.10729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10729.md)].
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity Direction - [[2308.10694](https://arxiv.org/abs/2308.10694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10694.md)].
- Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification - [[2308.10658](https://arxiv.org/abs/2308.10658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10658.md)].
- GaitPT: Skeletons Are All You Need For Gait Recognition - [[2308.10623](https://arxiv.org/abs/2308.10623)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10623.md)].
- A step towards understanding why classification helps regression - [[2308.10603](https://arxiv.org/abs/2308.10603)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10603.md)].
- Image-free Classifier Injection for Zero-Shot Classification - [[2308.10599](https://arxiv.org/abs/2308.10599)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10599.md)].
- CHORD: Category-level Hand-held Object Reconstruction via Shape Deformation - [[2308.10574](https://arxiv.org/abs/2308.10574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10574.md)].
- Self-Feedback DETR for Temporal Action Detection - [[2308.10570](https://arxiv.org/abs/2308.10570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10570.md)].
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations - [[2308.10554](https://arxiv.org/abs/2308.10554)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10554.md)].
- QD-BEV : Quantization-aware View-guided Distillation for Multi-view 3D Object Detection - [[2308.10515](https://arxiv.org/abs/2308.10515)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10515.md)].
- Large Language Model as a User Simulator - [[2308.11534](https://arxiv.org/abs/2308.11534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11534.md)].
- Texture Generation on 3D Meshes with Point-UV Diffusion - [[2308.10490](https://arxiv.org/abs/2308.10490)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10490.md)].
- ADNet: Lane Shape Prediction via Anchor Decomposition - [[2308.10481](https://arxiv.org/abs/2308.10481)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10481.md)].
- STEERER: Resolving Scale Variations for Counting and Localization via Selective Inheritance Learning - [[2308.10468](https://arxiv.org/abs/2308.10468)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10468.md)].
- Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models - [[2308.10462](https://arxiv.org/abs/2308.10462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10462.md)].
- Privacy-Preserving Face Recognition Using Random Frequency Components - [[2308.10461](https://arxiv.org/abs/2308.10461)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10461.md)].
- Explore and Tell: Embodied Visual Captioning in 3D Environments - [[2308.10447](https://arxiv.org/abs/2308.10447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10447.md)].
- When Prompt-based Incremental Learning Does Not Meet Strong Pretraining - [[2308.10445](https://arxiv.org/abs/2308.10445)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10445.md)].
- X-VoE: Measuring eXplanatory Violation of Expectation in Physical Events - [[2308.10441](https://arxiv.org/abs/2308.10441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10441.md)].
- GPT-in-the-Loop: Adaptive Decision-Making for Multiagent Systems - [[2308.10435](https://arxiv.org/abs/2308.10435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10435.md)].
- Diffusion Model as Representation Learner - [[2308.10916](https://arxiv.org/abs/2308.10916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10916.md)].
- Simple Baselines for Interactive Video Retrieval with Questions and Answers - [[2308.10402](https://arxiv.org/abs/2308.10402)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10402.md)].
- FairBench: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models - [[2308.10397](https://arxiv.org/abs/2308.10397)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10397.md)].
- Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models - [[2308.10379](https://arxiv.org/abs/2308.10379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10379.md)].
- LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models - [[2308.11462](https://arxiv.org/abs/2308.11462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.11462.md)].
- Strata-NeRF : Neural Radiance Fields for Stratified Scenes - [[2308.10337](https://arxiv.org/abs/2308.10337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10337.md)].
- Coordinate Transformer: Achieving Single-stage Multi-person Mesh Recovery from Videos - [[2308.10334](https://arxiv.org/abs/2308.10334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10334.md)].
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting - [[2308.10315](https://arxiv.org/abs/2308.10315)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10315.md)].
- DVGaze: Dual-View Gaze Estimation - [[2308.10310](https://arxiv.org/abs/2308.10310)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10310.md)].
- Representation Disparity-aware Distillation for 3D Object Detection - [[2308.10308](https://arxiv.org/abs/2308.10308)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10308.md)].
- Omnidirectional Information Gathering for Knowledge Transfer-based Audio-Visual Navigation - [[2308.10306](https://arxiv.org/abs/2308.10306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10306.md)].
- Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video - [[2308.10305](https://arxiv.org/abs/2308.10305)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10305.md)].
- DomainAdaptor: A Novel Approach to Test-time Adaptation - [[2308.10297](https://arxiv.org/abs/2308.10297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10297.md)].
- DomainDrop: Suppressing Domain-Sensitive Channels for Domain Generalization - [[2308.10285](https://arxiv.org/abs/2308.10285)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10285.md)].
- GPFL: Simultaneously Learning Global and Personalized Feature Information for Personalized Federated Learning - [[2308.10279](https://arxiv.org/abs/2308.10279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10279.md)].
- CharacterChat: Learning towards Conversational AI with Personalized Social Support - [[2308.10278](https://arxiv.org/abs/2308.10278)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10278.md)].
- Minimalist Traffic Prediction: Linear Layer Is All You Need - [[2308.10276](https://arxiv.org/abs/2308.10276)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10276.md)].
- StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data - [[2308.10253](https://arxiv.org/abs/2308.10253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10253.md)].
- GeT: Generative Target Structure Debiasing for Domain Adaptation - [[2308.10205](https://arxiv.org/abs/2308.10205)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10205.md)].
- ChatEDA: A Large Language Model Powered Autonomous Agent for EDA - [[2308.10204](https://arxiv.org/abs/2308.10204)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10204.md)].
- ViT-Lens: Towards Omni-modal Representations - [[2308.10185](https://arxiv.org/abs/2308.10185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10185.md)].
- Neural Interactive Keypoint Detection - [[2308.10174](https://arxiv.org/abs/2308.10174)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10174.md)].
- VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation - [[2308.10172](https://arxiv.org/abs/2308.10172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10172.md)].
- FashionNTM: Multi-turn Fashion Image Retrieval via Cascaded Memory - [[2308.10170](https://arxiv.org/abs/2308.10170)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10170.md)].
- Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation for Anomaly Detection - [[2308.10155](https://arxiv.org/abs/2308.10155)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10155.md)].
- A Survey on Fairness in Large Language Models - [[2308.10149](https://arxiv.org/abs/2308.10149)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10149.md)].
- ESTextSpotter: Towards Better Scene Text Spotting with Explicit Synergy in Transformer - [[2308.10147](https://arxiv.org/abs/2308.10147)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10147.md)].
- OCHID-Fi: Occlusion-Robust Hand Pose Estimation in 3D via RF-Vision - [[2308.10146](https://arxiv.org/abs/2308.10146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10146.md)].
- ExpeL: LLM Agents Are Experiential Learners - [[2308.10144](https://arxiv.org/abs/2308.10144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10144.md)].
- March in Chat: Interactive Prompting for Remote Embodied Referring Expression - [[2308.10141](https://arxiv.org/abs/2308.10141)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10141.md)].
- AutoReP: Automatic ReLU Replacement for Fast Private Network Inference - [[2308.10134](https://arxiv.org/abs/2308.10134)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10134.md)].
- TransFace: Calibrating Transformer Training for Face Recognition from a Data-Centric Perspective - [[2308.10133](https://arxiv.org/abs/2308.10133)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10133.md)].
- 3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation - [[2308.10123](https://arxiv.org/abs/2308.10123)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10123.md)].
- HollowNeRF: Pruning Hashgrid-Based NeRFs with Trainable Collision Mitigation - [[2308.10122](https://arxiv.org/abs/2308.10122)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10122.md)].
- Robust Mixture-of-Expert Training for Convolutional Neural Networks - [[2308.10110](https://arxiv.org/abs/2308.10110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10110.md)].
- Root Pose Decomposition Towards Generic Non-rigid 3D Reconstruction with Monocular Videos - [[2308.10089](https://arxiv.org/abs/2308.10089)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10089.md)].
- GameEval: Evaluating LLMs on Conversational Games - [[2308.10032](https://arxiv.org/abs/2308.10032)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10032.md)].
- Single Image Reflection Separation via Component Synergy - [[2308.10027](https://arxiv.org/abs/2308.10027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10027.md)].
- Pseudo Flow Consistency for Self-Supervised 6D Object Pose Estimation - [[2308.10016](https://arxiv.org/abs/2308.10016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10016.md)].
- Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts - [[2308.10005](https://arxiv.org/abs/2308.10005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.10005.md)].
- ClothesNet: An Information-Rich 3D Garment Model Repository with Simulated Clothes Environment - [[2308.09987](https://arxiv.org/abs/2308.09987)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09987.md)].
- FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for Large Language Models - [[2308.09975](https://arxiv.org/abs/2308.09975)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09975.md)].
- Disposable Transfer Learning for Selective Source Task Unlearning - [[2308.09971](https://arxiv.org/abs/2308.09971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09971.md)].
- Tackling Vision Language Tasks Through Learning Inner Monologues - [[2308.09970](https://arxiv.org/abs/2308.09970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09970.md)].
- Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos - [[2308.09951](https://arxiv.org/abs/2308.09951)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09951.md)].
- Scene-Aware Feature Matching - [[2308.09949](https://arxiv.org/abs/2308.09949)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09949.md)].
- Weakly-Supervised Action Localization by Hierarchically-structured Latent Attention Modeling - [[2308.09946](https://arxiv.org/abs/2308.09946)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09946.md)].
- On the Robustness of Open-World Test-Time Training: Self-Training with Dynamic Prototype Expansion - [[2308.09942](https://arxiv.org/abs/2308.09942)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09942.md)].
- Understanding Self-attention Mechanism via Dynamical System Perspective - [[2308.09939](https://arxiv.org/abs/2308.09939)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09939.md)].
- BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions - [[2308.09936](https://arxiv.org/abs/2308.09936)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09936.md)].
- MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition - [[2308.09922](https://arxiv.org/abs/2308.09922)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09922.md)].
- VI-Net: Boosting Category-level 6D Object Pose Estimation via Learning Decoupled Rotations on the Spherical Representations - [[2308.09916](https://arxiv.org/abs/2308.09916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09916.md)].
- Scalable Video Object Segmentation with Simplified Framework - [[2308.09903](https://arxiv.org/abs/2308.09903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09903.md)].
- SwinLSTM:Improving Spatiotemporal Prediction Accuracy using Swin Transformer and LSTM - [[2308.09891](https://arxiv.org/abs/2308.09891)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09891.md)].
- Calibrating Uncertainty for Semi-Supervised Crowd Counting - [[2308.09887](https://arxiv.org/abs/2308.09887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09887.md)].
- Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders - [[2308.09882](https://arxiv.org/abs/2308.09882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09882.md)].
- Skill Transformer: A Monolithic Policy for Mobile Manipulation - [[2308.09873](https://arxiv.org/abs/2308.09873)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09873.md)].
- A Theory of Topological Derivatives for Inverse Rendering of Geometry - [[2308.09865](https://arxiv.org/abs/2308.09865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09865.md)].
- How susceptible are LLMs to Logical Fallacies? - [[2308.09853](https://arxiv.org/abs/2308.09853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09853.md)].
- Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis - [[2308.09830](https://arxiv.org/abs/2308.09830)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09830.md)].
- Learning from A Single Graph is All You Need for Near-Shortest Path Routing in Wireless Networks - [[2308.09829](https://arxiv.org/abs/2308.09829)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09829.md)].
- VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control - [[2308.09804](https://arxiv.org/abs/2308.09804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09804.md)].
- Long-range Multimodal Pretraining for Movie Understanding - [[2308.09775](https://arxiv.org/abs/2308.09775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09775.md)].
- Smoothness Similarity Regularization for Few-Shot GAN Adaptation - [[2308.09717](https://arxiv.org/abs/2308.09717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09717.md)].
- Robust Monocular Depth Estimation under Challenging Conditions - [[2308.09711](https://arxiv.org/abs/2308.09711)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09711.md)].
- Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment - [[2308.09662](https://arxiv.org/abs/2308.09662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09662.md)].
- Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse - [[2308.09622](https://arxiv.org/abs/2308.09622)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09622.md)].
- LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and Benchmark - [[2308.09618](https://arxiv.org/abs/2308.09618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09618.md)].
- ChatHaruhi: Reviving Anime Character in Reality via Large Language Model - [[2308.09597](https://arxiv.org/abs/2308.09597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09597.md)].
- StableVideo: Text-driven Consistency-aware Diffusion Video Editing - [[2308.09592](https://arxiv.org/abs/2308.09592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09592.md)].
- WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct - [[2308.09583](https://arxiv.org/abs/2308.09583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09583.md)].
- PUMGPT: A Large Vision-Language Model for Product Understanding - [[2308.09568](https://arxiv.org/abs/2308.09568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09568.md)].
- Normalization Is All You Need: Understanding Layer-Normalized Federated Learning under Extreme Label Shift - [[2308.09565](https://arxiv.org/abs/2308.09565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09565.md)].
- Deep Equilibrium Object Detection - [[2308.09564](https://arxiv.org/abs/2308.09564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09564.md)].
- Meta-ZSDETR: Zero-shot DETR with Meta-learning - [[2308.09540](https://arxiv.org/abs/2308.09540)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09540.md)].
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning - [[2308.09534](https://arxiv.org/abs/2308.09534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09534.md)].
- Leveraging Intrinsic Properties for Non-Rigid Garment Alignment - [[2308.09519](https://arxiv.org/abs/2308.09519)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09519.md)].
- ResQ: Residual Quantization for Video Perception - [[2308.09511](https://arxiv.org/abs/2308.09511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09511.md)].
- Vision Relation Transformer for Unbiased Scene Graph Generation - [[2308.09472](https://arxiv.org/abs/2308.09472)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09472.md)].
- Scope is all you need: Transforming LLMs for HPC Code - [[2308.09440](https://arxiv.org/abs/2308.09440)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09440.md)].
- MonoNeRD: NeRF-like Representations for Monocular 3D Object Detection - [[2308.09421](https://arxiv.org/abs/2308.09421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09421.md)].
- Generalizable Decision Boundaries: Dualistic Meta-Learning for Open Set Domain Generalization - [[2308.09391](https://arxiv.org/abs/2308.09391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09391.md)].
- DReg-NeRF: Deep Registration for Neural Radiance Fields - [[2308.09386](https://arxiv.org/abs/2308.09386)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09386.md)].
- Label-Free Event-based Object Recognition via Joint Learning with Image Reconstruction from Events - [[2308.09383](https://arxiv.org/abs/2308.09383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09383.md)].
- Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models - [[2308.09363](https://arxiv.org/abs/2308.09363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09363.md)].
- RLIPv2: Fast Scaling of Relational Language-Image Pre-training - [[2308.09351](https://arxiv.org/abs/2308.09351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09351.md)].
- Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching - [[2308.09346](https://arxiv.org/abs/2308.09346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09346.md)].
- Audio-Visual Glance Network for Efficient Video Recognition - [[2308.09322](https://arxiv.org/abs/2308.09322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09322.md)].
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis - [[2308.09318](https://arxiv.org/abs/2308.09318)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09318.md)].
- Retro-FPN: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation - [[2308.09314](https://arxiv.org/abs/2308.09314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09314.md)].
- Lip Reading for Low-resource Languages by Learning and Combining General Speech Knowledge and Language-specific Knowledge - [[2308.09311](https://arxiv.org/abs/2308.09311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09311.md)].
- DiffDis: Empowering Generative Diffusion Model with Cross-Modal Discrimination Capability - [[2308.09306](https://arxiv.org/abs/2308.09306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09306.md)].
- Human Part-wise 3D Motion Context Learning for Sign Language Recognition - [[2308.09305](https://arxiv.org/abs/2308.09305)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09305.md)].
- NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning - [[2308.09297](https://arxiv.org/abs/2308.09297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09297.md)].
- Self-Calibrated Cross Attention Network for Few-Shot Segmentation - [[2308.09294](https://arxiv.org/abs/2308.09294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09294.md)].
- Diverse Cotraining Makes Strong Semi-Supervised Segmentor - [[2308.09281](https://arxiv.org/abs/2308.09281)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09281.md)].
- Point Contrastive Prediction with Semantic Clustering for Self-Supervised Learning on Point Cloud Videos - [[2308.09247](https://arxiv.org/abs/2308.09247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09247.md)].
- Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos - [[2308.09245](https://arxiv.org/abs/2308.09245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09245.md)].
- SparseBEV: High-Performance Sparse 3D Object Detection from Multi-Camera Videos - [[2308.09244](https://arxiv.org/abs/2308.09244)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09244.md)].
- ASAG: Building Strong One-Decoder-Layer Sparse Detectors via Adaptive Sparse Anchor Generation - [[2308.09242](https://arxiv.org/abs/2308.09242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09242.md)].
- Generalized Sum Pooling for Metric Learning - [[2308.09228](https://arxiv.org/abs/2308.09228)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09228.md)].
- FedPerfix: Towards Partial Model Personalization of Vision Transformers in Federated Learning - [[2308.09160](https://arxiv.org/abs/2308.09160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09160.md)].
- The Unreasonable Effectiveness of Large Language-Vision Models for Source-free Video Domain Adaptation - [[2308.09139](https://arxiv.org/abs/2308.09139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09139.md)].
- ImGeoNet: Image-induced Geometry-aware Voxel Representation for Multi-view 3D Object Detection - [[2308.09098](https://arxiv.org/abs/2308.09098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09098.md)].
- SimFIR: A Simple Framework for Fisheye Image Rectification with Self-supervised Representation Learning - [[2308.09040](https://arxiv.org/abs/2308.09040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.09040.md)].
- Reinforced Self-Training (ReST) for Language Modeling - [[2308.08998](https://arxiv.org/abs/2308.08998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08998.md)].
- Auxiliary Tasks Benefit 3D Skeleton-based Human Motion Prediction - [[2308.08942](https://arxiv.org/abs/2308.08942)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08942.md)].
- Identity-Seeking Self-Supervised Representation Learning for Generalizable Person Re-identification - [[2308.08887](https://arxiv.org/abs/2308.08887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08887.md)].
- Event-Guided Procedure Planning from Instructional Videos with Text Supervision - [[2308.08885](https://arxiv.org/abs/2308.08885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08885.md)].
- Towards Semi-supervised Learning with Non-random Missing Labels - [[2308.08872](https://arxiv.org/abs/2308.08872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08872.md)].
- Spatially and Spectrally Consistent Deep Functional Maps - [[2308.08871](https://arxiv.org/abs/2308.08871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08871.md)].
- D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field - [[2308.08857](https://arxiv.org/abs/2308.08857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08857.md)].
- Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling - [[2308.08855](https://arxiv.org/abs/2308.08855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08855.md)].
- CMB: A Comprehensive Medical Benchmark in Chinese - [[2308.08833](https://arxiv.org/abs/2308.08833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08833.md)].
- Fast Inference and Update of Probabilistic Density Estimation on Trajectory Prediction - [[2308.08824](https://arxiv.org/abs/2308.08824)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08824.md)].
- MixBag: Bag-Level Data Augmentation for Learning from Label Proportions - [[2308.08822](https://arxiv.org/abs/2308.08822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08822.md)].
- Label Shift Adapter for Test-Time Adaptation under Covariate and Label Shifts - [[2308.08810](https://arxiv.org/abs/2308.08810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08810.md)].
- Long-Range Grouping Transformer for Multi-View 3D Reconstruction - [[2308.08724](https://arxiv.org/abs/2308.08724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08724.md)].
- V-FUSE: Volumetric Depth Map Fusion with Long-Range Constraints - [[2308.08715](https://arxiv.org/abs/2308.08715)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08715.md)].
- Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks - [[2308.08709](https://arxiv.org/abs/2308.08709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08709.md)].
- TeCH: Text-guided Reconstruction of Lifelike Clothed Humans - [[2308.08545](https://arxiv.org/abs/2308.08545)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08545.md)].
- MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions - [[2308.08544](https://arxiv.org/abs/2308.08544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08544.md)].
- Learning to Distill Global Representation for Sparse-View CT - [[2308.08463](https://arxiv.org/abs/2308.08463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08463.md)].
- ALIP: Adaptive Language-Image Pre-training with Synthetic Caption - [[2308.08428](https://arxiv.org/abs/2308.08428)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08428.md)].
- Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer - [[2308.08414](https://arxiv.org/abs/2308.08414)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08414.md)].
- SIGMA: Scale-Invariant Global Sparse Shape Matching - [[2308.08393](https://arxiv.org/abs/2308.08393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08393.md)].
- Agglomerative Transformer for Human-Object Interaction Detection - [[2308.08370](https://arxiv.org/abs/2308.08370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08370.md)].
- Membrane Potential Batch Normalization for Spiking Neural Networks - [[2308.08359](https://arxiv.org/abs/2308.08359)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08359.md)].
- Stable and Causal Inference for Discriminative Self-supervised Deep Visual Representations - [[2308.08321](https://arxiv.org/abs/2308.08321)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08321.md)].
- Dual-Stream Diffusion Net for Text-to-Video Generation - [[2308.08316](https://arxiv.org/abs/2308.08316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08316.md)].
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes - [[2308.08258](https://arxiv.org/abs/2308.08258)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08258.md)].
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation - [[2308.08239](https://arxiv.org/abs/2308.08239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08239.md)].
- Inherent Redundancy in Spiking Neural Networks - [[2308.08227](https://arxiv.org/abs/2308.08227)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08227.md)].
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network - [[2308.08220](https://arxiv.org/abs/2308.08220)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08220.md)].
- Unsupervised Domain Adaptive Detection with Network Stability Analysis - [[2308.08182](https://arxiv.org/abs/2308.08182)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08182.md)].
- Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis - [[2308.08157](https://arxiv.org/abs/2308.08157)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08157.md)].
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework - [[2308.08155](https://arxiv.org/abs/2308.08155)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08155.md)].
- GPA-3D: Geometry-aware Prototype Alignment for Unsupervised Domain Adaptive 3D Object Detection from Point Clouds - [[2308.08140](https://arxiv.org/abs/2308.08140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08140.md)].
- OmniZoomer: Learning to Move and Zoom in on Sphere at High-Resolution - [[2308.08114](https://arxiv.org/abs/2308.08114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08114.md)].
- View Consistent Purification for Accurate Cross-View Localization - [[2308.08110](https://arxiv.org/abs/2308.08110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08110.md)].
- Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation - [[2308.08090](https://arxiv.org/abs/2308.08090)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08090.md)].
- DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory - [[2308.08089](https://arxiv.org/abs/2308.08089)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08089.md)].
- Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction - [[2308.08011](https://arxiv.org/abs/2308.08011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.08011.md)].
- Teach LLMs to Personalize -- An Approach inspired by Writing Education - [[2308.07968](https://arxiv.org/abs/2308.07968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07968.md)].
- CoDeF: Content Deformation Fields for Temporally Consistent Video Processing - [[2308.07926](https://arxiv.org/abs/2308.07926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07926.md)].
- RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models - [[2308.07922](https://arxiv.org/abs/2308.07922)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07922.md)].
- Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification - [[2308.07921](https://arxiv.org/abs/2308.07921)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07921.md)].
- Helping Hands: An Object-Aware Ego-Centric Video Recognition Model - [[2308.07918](https://arxiv.org/abs/2308.07918)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07918.md)].
- Relightable and Animatable Neural Avatar from Sparse-View Video - [[2308.07903](https://arxiv.org/abs/2308.07903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07903.md)].
- Through the Lens of Core Competency: Survey on Evaluation of Large Language Models - [[2308.07902](https://arxiv.org/abs/2308.07902)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07902.md)].
- Memory-and-Anticipation Transformer for Online Action Understanding - [[2308.07893](https://arxiv.org/abs/2308.07893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07893.md)].
- Link-Context Learning for Multimodal LLMs - [[2308.07891](https://arxiv.org/abs/2308.07891)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07891.md)].
- ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces - [[2308.07868](https://arxiv.org/abs/2308.07868)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07868.md)].
- StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models - [[2308.07863](https://arxiv.org/abs/2308.07863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07863.md)].
- Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models - [[2308.07847](https://arxiv.org/abs/2308.07847)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07847.md)].
- ImbSAM: A Closer Look at Sharpness-Aware Minimization in Class-Imbalanced Recognition - [[2308.07815](https://arxiv.org/abs/2308.07815)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07815.md)].
- Learning to Identify Critical States for Reinforcement Learning from Videos - [[2308.07795](https://arxiv.org/abs/2308.07795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07795.md)].
- DiffV2S: Diffusion-based Video-to-Speech Synthesis with Vision-guided Speaker Embedding - [[2308.07787](https://arxiv.org/abs/2308.07787)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07787.md)].
- Identity-Consistent Aggregation for Video Object Detection - [[2308.07737](https://arxiv.org/abs/2308.07737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07737.md)].
- UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation - [[2308.07732](https://arxiv.org/abs/2308.07732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07732.md)].
- DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using Pre-trained Diffusion Models - [[2308.07687](https://arxiv.org/abs/2308.07687)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07687.md)].
- Boosting Multi-modal Model Performance with Adaptive Gradient Modulation - [[2308.07686](https://arxiv.org/abs/2308.07686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07686.md)].
- Attention Is Not All You Need Anymore - [[2308.07661](https://arxiv.org/abs/2308.07661)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07661.md)].
- From Commit Message Generation to History-Aware Commit Message Completion - [[2308.07655](https://arxiv.org/abs/2308.07655)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07655.md)].
- EQ-Net: Elastic Quantization Neural Networks - [[2308.07650](https://arxiv.org/abs/2308.07650)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07650.md)].
- Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval - [[2308.07648](https://arxiv.org/abs/2308.07648)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07648.md)].
- Backpropagation Path Search On Adversarial Transferability - [[2308.07625](https://arxiv.org/abs/2308.07625)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07625.md)].
- Story Visualization by Online Text Augmentation with Context Memory - [[2308.07575](https://arxiv.org/abs/2308.07575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07575.md)].
- 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack - [[2308.07546](https://arxiv.org/abs/2308.07546)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07546.md)].
- DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation - [[2308.07498](https://arxiv.org/abs/2308.07498)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07498.md)].
- Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering - [[2308.07411](https://arxiv.org/abs/2308.07411)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07411.md)].
- Text Injection for Capitalization and Turn-Taking Prediction in Speech Models - [[2308.07395](https://arxiv.org/abs/2308.07395)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07395.md)].
- PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects - [[2308.07391](https://arxiv.org/abs/2308.07391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07391.md)].
- Platypus: Quick, Cheap, and Powerful Refinement of LLMs - [[2308.07317](https://arxiv.org/abs/2308.07317)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07317.md)].
- Jurassic World Remake: Bringing Ancient Fossils Back to Life via Zero-Shot Long Image-to-Image Translation - [[2308.07316](https://arxiv.org/abs/2308.07316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07316.md)].
- Group Pose: A Simple Baseline for End-to-End Multi-person Pose Estimation - [[2308.07313](https://arxiv.org/abs/2308.07313)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07313.md)].
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation - [[2308.07286](https://arxiv.org/abs/2308.07286)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07286.md)].
- Context-Aware Planning and Environment-Aware Memory for Instruction Following Embodied Agents - [[2308.07241](https://arxiv.org/abs/2308.07241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07241.md)].
- RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs - [[2308.07228](https://arxiv.org/abs/2308.07228)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07228.md)].
- Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning - [[2308.07209](https://arxiv.org/abs/2308.07209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07209.md)].
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate - [[2308.07201](https://arxiv.org/abs/2308.07201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07201.md)].
- OctoPack: Instruction Tuning Code Large Language Models - [[2308.07124](https://arxiv.org/abs/2308.07124)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07124.md)].
- CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation - [[2308.07146](https://arxiv.org/abs/2308.07146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07146.md)].
- Occ$^2$Net: Robust Image Matching Based on 3D Occupancy Estimation for Occluded Regions - [[2308.16160](https://arxiv.org/abs/2308.16160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.16160.md)].
- Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice - [[2308.07120](https://arxiv.org/abs/2308.07120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07120.md)].
- Large Language Models for Information Retrieval: A Survey - [[2308.07107](https://arxiv.org/abs/2308.07107)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07107.md)].
- Masked Motion Predictors are Strong 3D Action Representation Learners - [[2308.07092](https://arxiv.org/abs/2308.07092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07092.md)].
- S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields - [[2308.07032](https://arxiv.org/abs/2308.07032)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07032.md)].
- ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion - [[2308.07009](https://arxiv.org/abs/2308.07009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07009.md)].
- Global Features are All You Need for Image Retrieval and Reranking - [[2308.06954](https://arxiv.org/abs/2308.06954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06954.md)].
- Knowing Where to Focus: Event-aware Transformer for Video Grounding - [[2308.06947](https://arxiv.org/abs/2308.06947)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06947.md)].
- CBA: Improving Online Continual Learning via Continual Bias Adaptor - [[2308.06925](https://arxiv.org/abs/2308.06925)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06925.md)].
- CausalLM is not optimal for in-context learning - [[2308.06912](https://arxiv.org/abs/2308.06912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06912.md)].
- Exploring Lightweight Hierarchical Vision Transformers for Efficient Visual Tracking - [[2308.06904](https://arxiv.org/abs/2308.06904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06904.md)].
- Towards Open-Set Test-Time Adaptation Utilizing the Wisdom of Crowds in Entropy Minimization - [[2308.06879](https://arxiv.org/abs/2308.06879)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06879.md)].
- SpeechX: Neural Codec Language Model as a Versatile Speech Transformer - [[2308.06873](https://arxiv.org/abs/2308.06873)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06873.md)].
- RMP-Loss: Regularizing Membrane Potential Distribution for Spiking Neural Networks - [[2308.06787](https://arxiv.org/abs/2308.06787)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06787.md)].
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning - [[2308.06777](https://arxiv.org/abs/2308.06777)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06777.md)].
- Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches - [[2308.06776](https://arxiv.org/abs/2308.06776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06776.md)].
- Dual Meta-Learning with Longitudinally Generalized Regularization for One-Shot Brain Tissue Segmentation Across the Human Lifespan - [[2308.06774](https://arxiv.org/abs/2308.06774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06774.md)].
- AerialVLN: Vision-and-Language Navigation for UAVs - [[2308.06735](https://arxiv.org/abs/2308.06735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06735.md)].
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models - [[2308.06721](https://arxiv.org/abs/2308.06721)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06721.md)].
- Compositional Feature Augmentation for Unbiased Scene Graph Generation - [[2308.06712](https://arxiv.org/abs/2308.06712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06712.md)].
- Camouflaged Image Synthesis Is All You Need to Boost Camouflaged Detection - [[2308.06701](https://arxiv.org/abs/2308.06701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06701.md)].
- Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation - [[2308.06693](https://arxiv.org/abs/2308.06693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06693.md)].
- Estimator Meets Equilibrium Perspective: A Rectified Straight Through Estimator for Binary Neural Networks Training - [[2308.06689](https://arxiv.org/abs/2308.06689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06689.md)].
- 3DMOTFormer: Graph Transformer for Online 3D Multi-Object Tracking - [[2308.06635](https://arxiv.org/abs/2308.06635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06635.md)].
- VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use - [[2308.06595](https://arxiv.org/abs/2308.06595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06595.md)].
- Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction - [[2308.06554](https://arxiv.org/abs/2308.06554)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06554.md)].
- Revisiting Vision Transformer from the View of Path Ensemble - [[2308.06548](https://arxiv.org/abs/2308.06548)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06548.md)].
- SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning - [[2308.06531](https://arxiv.org/abs/2308.06531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06531.md)].
- BEV-DG: Cross-Modal Learning under Bird's-Eye View for Domain Generalization of 3D Semantic Segmentation - [[2308.06530](https://arxiv.org/abs/2308.06530)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06530.md)].
- One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training - [[2308.07934](https://arxiv.org/abs/2308.07934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07934.md)].
- Tiny and Efficient Model for the Edge Detection Generalization - [[2308.06468](https://arxiv.org/abs/2308.06468)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06468.md)].
- Multi-Label Knowledge Distillation - [[2308.06453](https://arxiv.org/abs/2308.06453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06453.md)].
- Detecting and Preventing Hallucinations in Large Vision Language Models - [[2308.06394](https://arxiv.org/abs/2308.06394)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06394.md)].
- U-RED: Unsupervised 3D Shape Retrieval and Deformation for Partial Point Clouds - [[2308.06383](https://arxiv.org/abs/2308.06383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06383.md)].
- Enhancing Network Management Using Code Generated by Large Language Models - [[2308.06261](https://arxiv.org/abs/2308.06261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06261.md)].
- Self-Alignment with Instruction Backtranslation - [[2308.06259](https://arxiv.org/abs/2308.06259)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06259.md)].
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods - [[2308.06248](https://arxiv.org/abs/2308.06248)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06248.md)].
- Exploring Predicate Visual Context in Detecting of Human-Object Interactions - [[2308.06202](https://arxiv.org/abs/2308.06202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06202.md)].
- Improving Joint Speech-Text Representations Without Alignment - [[2308.06125](https://arxiv.org/abs/2308.06125)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06125.md)].
- Composable Function-preserving Expansions for Transformer Architectures - [[2308.06103](https://arxiv.org/abs/2308.06103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06103.md)].
- Out-of-Distribution Detection for Monocular Depth Estimation - [[2308.06072](https://arxiv.org/abs/2308.06072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06072.md)].
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning - [[2308.06038](https://arxiv.org/abs/2308.06038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06038.md)].
- Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation - [[2308.06015](https://arxiv.org/abs/2308.06015)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.06015.md)].
- Cyclic-Bootstrap Labeling for Weakly Supervised Object Detection - [[2308.05991](https://arxiv.org/abs/2308.05991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05991.md)].
- TrajPAC: Towards Robustness Verification of Pedestrian Trajectory Prediction Models - [[2308.05985](https://arxiv.org/abs/2308.05985)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05985.md)].
- BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents - [[2308.05960](https://arxiv.org/abs/2308.05960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05960.md)].
- Generalizing Event-Based Motion Deblurring in Real-World Scenarios - [[2308.05932](https://arxiv.org/abs/2308.05932)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05932.md)].
- Collaborative Tracking Learning for Frame-Rate-Insensitive Multi-Object Tracking - [[2308.05911](https://arxiv.org/abs/2308.05911)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05911.md)].
- PIPPA: A Partially Synthetic Conversational Dataset - [[2308.05884](https://arxiv.org/abs/2308.05884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05884.md)].
- PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs - [[2308.05744](https://arxiv.org/abs/2308.05744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05744.md)].
- Follow Anything: Open-set detection, tracking, and following in real-time - [[2308.05737](https://arxiv.org/abs/2308.05737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05737.md)].
- AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining - [[2308.05734](https://arxiv.org/abs/2308.05734)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05734.md)].
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models - [[2308.05733](https://arxiv.org/abs/2308.05733)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05733.md)].
- PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers - [[2308.05732](https://arxiv.org/abs/2308.05732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05732.md)].
- Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient - [[2308.05681](https://arxiv.org/abs/2308.05681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05681.md)].
- 2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration between Images and Point Clouds - [[2308.05667](https://arxiv.org/abs/2308.05667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05667.md)].
- Self-Supervised Monocular Depth Estimation by Direction-aware Cumulative Convolution Network - [[2308.05605](https://arxiv.org/abs/2308.05605)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05605.md)].
- Cross-Domain Product Representation Learning for Rich-Content E-Commerce - [[2308.05550](https://arxiv.org/abs/2308.05550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05550.md)].
- Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation for Panoramic Semantic Segmentation - [[2308.05493](https://arxiv.org/abs/2308.05493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05493.md)].
- LLM As DBA - [[2308.05481](https://arxiv.org/abs/2308.05481)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05481.md)].
- Benchmarking Algorithmic Bias in Face Recognition: An Experimental Approach Using Synthetic Faces and Human Evaluation - [[2308.05441](https://arxiv.org/abs/2308.05441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05441.md)].
- Deep Fusion Transformer Network with Weighted Vector-Wise Keypoints Voting for Robust 6D Object Pose Estimation - [[2308.05438](https://arxiv.org/abs/2308.05438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05438.md)].
- SC3K: Self-supervised and Coherent 3D Keypoints Estimation from Rotated, Noisy, and Decimated Point Cloud Data - [[2308.05410](https://arxiv.org/abs/2308.05410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05410.md)].
- Learning Gabor Texture Features for Fine-Grained Recognition - [[2308.05396](https://arxiv.org/abs/2308.05396)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05396.md)].
- Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges - [[2308.05391](https://arxiv.org/abs/2308.05391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05391.md)].
- Interaction-aware Joint Attention Estimation Using People Attributes - [[2308.05382](https://arxiv.org/abs/2308.05382)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05382.md)].
- Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment - [[2308.05374](https://arxiv.org/abs/2308.05374)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05374.md)].
- Flexible Isosurface Extraction for Gradient-Based Mesh Optimization - [[2308.05371](https://arxiv.org/abs/2308.05371)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05371.md)].
- Pseudo-label Alignment for Semi-supervised Instance Segmentation - [[2308.05359](https://arxiv.org/abs/2308.05359)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05359.md)].
- OpenProteinSet: Training data for structural biology at scale - [[2308.05326](https://arxiv.org/abs/2308.05326)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05326.md)].
- RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation - [[2308.05318](https://arxiv.org/abs/2308.05318)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05318.md)].
- Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI - [[2308.05221](https://arxiv.org/abs/2308.05221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05221.md)].
- LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation - [[2308.05095](https://arxiv.org/abs/2308.05095)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05095.md)].
- Feature Modulation Transformer: Cross-Refinement of Global Representation via High-Frequency Prior for Image Super-Resolution - [[2308.05022](https://arxiv.org/abs/2308.05022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05022.md)].
- Robust Object Modeling for Visual Tracking - [[2308.05140](https://arxiv.org/abs/2308.05140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.05140.md)].
- IDiff-Face: Synthetic-based Face Recognition through Fizzy Identity-Conditioned Diffusion Models - [[2308.04995](https://arxiv.org/abs/2308.04995)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04995.md)].
- Foreground Object Search by Distilling Composite Image Feature - [[2308.04990](https://arxiv.org/abs/2308.04990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04990.md)].
- Prototypical Kernel Learning and Open-set Foreground Perception for Generalized Few-shot Semantic Segmentation - [[2308.04952](https://arxiv.org/abs/2308.04952)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04952.md)].
- SelectNAdapt: Support Set Selection for Few-Shot Domain Adaptation - [[2308.04946](https://arxiv.org/abs/2308.04946)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04946.md)].
- LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking - [[2308.04945](https://arxiv.org/abs/2308.04945)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04945.md)].
- Cross-view Semantic Alignment for Livestreaming Product Recognition - [[2308.04912](https://arxiv.org/abs/2308.04912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04912.md)].
- MixReorg: Cross-Modal Mixed Patch Reorganization is a Good Mask Learner for Open-World Semantic Segmentation - [[2308.04829](https://arxiv.org/abs/2308.04829)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04829.md)].
- WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields - [[2308.04826](https://arxiv.org/abs/2308.04826)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04826.md)].
- Joint-Relation Transformer for Multi-Person Motion Prediction - [[2308.04808](https://arxiv.org/abs/2308.04808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04808.md)].
- PointMBF: A Multi-scale Bidirectional Fusion Network for Unsupervised RGB-D Point Cloud Registration - [[2308.04782](https://arxiv.org/abs/2308.04782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04782.md)].
- Objects do not disappear: Video object detection by single-frame object location anticipation - [[2308.04770](https://arxiv.org/abs/2308.04770)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04770.md)].
- Bird's-Eye-View Scene Graph for Vision-Language Navigation - [[2308.04758](https://arxiv.org/abs/2308.04758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04758.md)].
- JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models - [[2308.04729](https://arxiv.org/abs/2308.04729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04729.md)].
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization - [[2308.04699](https://arxiv.org/abs/2308.04699)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04699.md)].
- Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising - [[2308.04682](https://arxiv.org/abs/2308.04682)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04682.md)].
- Accelerating LLM Inference with Staged Speculative Decoding - [[2308.04623](https://arxiv.org/abs/2308.04623)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04623.md)].
- Rendering Humans from Object-Occluded Monocular Videos - [[2308.04622](https://arxiv.org/abs/2308.04622)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04622.md)].
- Shepherd: A Critic for Language Model Generation - [[2308.04592](https://arxiv.org/abs/2308.04592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04592.md)].
- LATR: 3D Lane Detection from Monocular Images with Transformer - [[2308.04583](https://arxiv.org/abs/2308.04583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04583.md)].
- FocalFormer3D: Focusing on Hard Instance for 3D Object Detection - [[2308.04556](https://arxiv.org/abs/2308.04556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04556.md)].
- Prune Spatio-temporal Tokens by Semantic-aware Temporal Accumulation - [[2308.04549](https://arxiv.org/abs/2308.04549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04549.md)].
- SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore - [[2308.04430](https://arxiv.org/abs/2308.04430)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04430.md)].
- DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds - [[2308.04383](https://arxiv.org/abs/2308.04383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04383.md)].
- Cumulative Reasoning with Large Language Models - [[2308.04371](https://arxiv.org/abs/2308.04371)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04371.md)].
- 3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment - [[2308.04352](https://arxiv.org/abs/2308.04352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04352.md)].
- A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages - [[2308.04477](https://arxiv.org/abs/2308.04477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04477.md)].
- Lossy and Lossless (L$^2$) Post-training Model Size Compression - [[2308.04269](https://arxiv.org/abs/2308.04269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04269.md)].
- FLIRT: Feedback Loop In-context Red Teaming - [[2308.04265](https://arxiv.org/abs/2308.04265)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04265.md)].
- Exploring Transformers for Open-world Instance Segmentation - [[2308.04206](https://arxiv.org/abs/2308.04206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04206.md)].
- D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation - [[2308.04197](https://arxiv.org/abs/2308.04197)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04197.md)].
- Under-Display Camera Image Restoration with Scattering Effect - [[2308.04163](https://arxiv.org/abs/2308.04163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04163.md)].
- EPCFormer: Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation - [[2308.04162](https://arxiv.org/abs/2308.04162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04162.md)].
- Empowering Vision-Language Models to Follow Interleaved Vision-Language Instructions - [[2308.04152](https://arxiv.org/abs/2308.04152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04152.md)].
- OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation - [[2308.04126](https://arxiv.org/abs/2308.04126)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04126.md)].
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering - [[2308.04079](https://arxiv.org/abs/2308.04079)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04079.md)].
- Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation - [[2308.04061](https://arxiv.org/abs/2308.04061)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04061.md)].
- Gentopia: A Collaborative Platform for Tool-Augmented LLMs - [[2308.04030](https://arxiv.org/abs/2308.04030)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04030.md)].
- AgentSims: An Open-Source Sandbox for Large Language Model Evaluation - [[2308.04026](https://arxiv.org/abs/2308.04026)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04026.md)].
- Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning - [[2308.04016](https://arxiv.org/abs/2308.04016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04016.md)].
- Continual Pre-Training of Large Language Models: How to (re)warm your model? - [[2308.04014](https://arxiv.org/abs/2308.04014)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04014.md)].
- Coarse-to-Fine: Learning Compact Discriminative Representation for Single-Stage Image Retrieval - [[2308.04008](https://arxiv.org/abs/2308.04008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.04008.md)].
- PARTNER: Level up the Polar Representation for LiDAR 3D Object Detection - [[2308.03982](https://arxiv.org/abs/2308.03982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03982.md)].
- Simple synthetic data reduces sycophancy in large language models - [[2308.03958](https://arxiv.org/abs/2308.03958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03958.md)].
- TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models - [[2308.03906](https://arxiv.org/abs/2308.03906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03906.md)].
- From Sky to the Ground: A Large-scale Benchmark and Simple Baseline Towards Real Rain Removal - [[2308.03867](https://arxiv.org/abs/2308.03867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03867.md)].
- 3D Motion Magnification: Visualizing Subtle Motions with Time Varying Radiance Fields - [[2308.03757](https://arxiv.org/abs/2308.03757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03757.md)].
- Tiny LVLM-eHub: Early Multimodal Experiments with Bard - [[2308.03729](https://arxiv.org/abs/2308.03729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03729.md)].
- Scaling may be all you need for achieving human-level object recognition capacity with human-like visual experience - [[2308.03712](https://arxiv.org/abs/2308.03712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03712.md)].
- AgentBench: Evaluating LLMs as Agents - [[2308.03688](https://arxiv.org/abs/2308.03688)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03688.md)].
- Learning Concise and Descriptive Attributes for Visual Recognition - [[2308.03685](https://arxiv.org/abs/2308.03685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03685.md)].
- AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose - [[2308.03610](https://arxiv.org/abs/2308.03610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03610.md)].
- FeatEnHancer: Enhancing Hierarchical Features for Object Detection and Beyond Under Low-Light Vision - [[2308.03594](https://arxiv.org/abs/2308.03594)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03594.md)].
- AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning - [[2308.03526](https://arxiv.org/abs/2308.03526)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03526.md)].
- Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising - [[2308.03448](https://arxiv.org/abs/2308.03448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03448.md)].
- TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents - [[2308.03427](https://arxiv.org/abs/2308.03427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03427.md)].
- RecycleGPT: An Autoregressive Language Model with Recyclable Module - [[2308.03421](https://arxiv.org/abs/2308.03421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03421.md)].
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images - [[2308.03413](https://arxiv.org/abs/2308.03413)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03413.md)].
- Heterogeneous Forgetting Compensation for Class-Incremental Learning - [[2308.03374](https://arxiv.org/abs/2308.03374)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03374.md)].
- Dual Aggregation Transformer for Image Super-Resolution - [[2308.03364](https://arxiv.org/abs/2308.03364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03364.md)].
- Foundation Model based Open Vocabulary Task Planning and Executive System for General Purpose Service Robots - [[2308.03357](https://arxiv.org/abs/2308.03357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03357.md)].
- SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs - [[2308.03349](https://arxiv.org/abs/2308.03349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03349.md)].
- Part-Aware Transformer for Generalizable Person Re-identification - [[2308.03322](https://arxiv.org/abs/2308.03322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03322.md)].
- Studying Large Language Model Generalization with Influence Functions - [[2308.03296](https://arxiv.org/abs/2308.03296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03296.md)].
- SynJax: Structured Probability Distributions for JAX - [[2308.03291](https://arxiv.org/abs/2308.03291)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03291.md)].
- FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search - [[2308.03290](https://arxiv.org/abs/2308.03290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03290.md)].
- Multi-Label Self-Supervised Learning with Scene Images - [[2308.03286](https://arxiv.org/abs/2308.03286)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03286.md)].
- Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation - [[2308.03282](https://arxiv.org/abs/2308.03282)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03282.md)].
- Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing - [[2308.03280](https://arxiv.org/abs/2308.03280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03280.md)].
- UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition - [[2308.03279](https://arxiv.org/abs/2308.03279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03279.md)].
- A Benchmark for Chinese-English Scene Text Image Super-resolution - [[2308.03262](https://arxiv.org/abs/2308.03262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03262.md)].
- Source-free Domain Adaptive Human Pose Estimation - [[2308.03202](https://arxiv.org/abs/2308.03202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03202.md)].
- Building Safe and Reliable AI systems for Safety Critical Tasks with Vision-Language Processing - [[2308.03176](https://arxiv.org/abs/2308.03176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03176.md)].
- CGBA: Curvature-aware Geometric Black-box Attack - [[2308.03163](https://arxiv.org/abs/2308.03163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03163.md)].
- Prototypes-oriented Transductive Few-shot Learning with Conditional Transport - [[2308.03047](https://arxiv.org/abs/2308.03047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03047.md)].
- Learning Fine-Grained Features for Pixel-wise Video Correspondences - [[2308.03040](https://arxiv.org/abs/2308.03040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03040.md)].
- Pre-Trained Large Language Models for Industrial Control - [[2308.03028](https://arxiv.org/abs/2308.03028)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03028.md)].
- SAPIEN: Affective Virtual Agents Powered by Large Language Models - [[2308.03022](https://arxiv.org/abs/2308.03022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03022.md)].
- Focus the Discrepancy: Intra- and Inter-Correlation Learning for Image Anomaly Detection - [[2308.02983](https://arxiv.org/abs/2308.02983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02983.md)].
- An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability - [[2308.02897](https://arxiv.org/abs/2308.02897)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02897.md)].
- Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation - [[2308.02874](https://arxiv.org/abs/2308.02874)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02874.md)].
- Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis - [[2308.02840](https://arxiv.org/abs/2308.02840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02840.md)].
- EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education - [[2308.02773](https://arxiv.org/abs/2308.02773)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02773.md)].
- DeDrift: Robust Similarity Search under Content Drift - [[2308.02752](https://arxiv.org/abs/2308.02752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02752.md)].
- ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation - [[2308.03793](https://arxiv.org/abs/2308.03793)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.03793.md)].
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities - [[2308.02490](https://arxiv.org/abs/2308.02490)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02490.md)].
- Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP - [[2308.02487](https://arxiv.org/abs/2308.02487)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02487.md)].
- Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints - [[2308.02453](https://arxiv.org/abs/2308.02453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02453.md)].
- Text2KGBench: A Benchmark for Ontology-Driven Knowledge Graph Generation from Text - [[2308.02357](https://arxiv.org/abs/2308.02357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02357.md)].
- FB-BEV: BEV Representation from Forward-Backward View Transformations - [[2308.02236](https://arxiv.org/abs/2308.02236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02236.md)].
- ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation - [[2308.02223](https://arxiv.org/abs/2308.02223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02223.md)].
- Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology - [[2308.02180](https://arxiv.org/abs/2308.02180)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02180.md)].
- Learning Referring Video Object Segmentation from Weak Annotation - [[2308.02162](https://arxiv.org/abs/2308.02162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02162.md)].
- Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization - [[2308.02151](https://arxiv.org/abs/2308.02151)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02151.md)].
- Multi-interactive Feature Learning and a Full-time Multi-modality Benchmark for Image Fusion and Segmentation - [[2308.02097](https://arxiv.org/abs/2308.02097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02097.md)].
- The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World - [[2308.01907](https://arxiv.org/abs/2308.01907)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01907.md)].
- DETR Doesn't Need Multi-Scale or Locality Design - [[2308.01904](https://arxiv.org/abs/2308.01904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01904.md)].
- ConceptLab: Creative Generation using Diffusion Prior Constraints - [[2308.02669](https://arxiv.org/abs/2308.02669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02669.md)].
- ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation - [[2308.01861](https://arxiv.org/abs/2308.01861)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01861.md)].
- Scaling Relationship on Learning Mathematical Reasoning with Large Language Models - [[2308.01825](https://arxiv.org/abs/2308.01825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01825.md)].
- RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic and Regional Comprehension - [[2308.02299](https://arxiv.org/abs/2308.02299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02299.md)].
- Point2Mask: Point-supervised Panoptic Segmentation via Optimal Transport - [[2308.01779](https://arxiv.org/abs/2308.01779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01779.md)].
- Ambient Adventures: Teaching ChatGPT on Developing Complex Stories - [[2308.01734](https://arxiv.org/abs/2308.01734)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01734.md)].
- LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and Semantic-Aware Alignment - [[2308.01686](https://arxiv.org/abs/2308.01686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01686.md)].
- A Multidimensional Analysis of Social Biases in Vision Transformers - [[2308.01948](https://arxiv.org/abs/2308.01948)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01948.md)].
- InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent - [[2308.01552](https://arxiv.org/abs/2308.01552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01552.md)].
- Get the Best of Both Worlds: Improving Accuracy and Transferability by Grassmann Class Representation - [[2308.01547](https://arxiv.org/abs/2308.01547)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01547.md)].
- MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies - [[2308.01546](https://arxiv.org/abs/2308.01546)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01546.md)].
- Multimodal Neurons in Pretrained Text-Only Transformers - [[2308.01544](https://arxiv.org/abs/2308.01544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01544.md)].
- TDMD: A Database for Dynamic Color Mesh Subjective and Objective Quality Explorations - [[2308.01499](https://arxiv.org/abs/2308.01499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01499.md)].
- Target-point Attention Transformer: A novel trajectory predict network for end-to-end autonomous driving - [[2308.01496](https://arxiv.org/abs/2308.01496)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01496.md)].
- Efficient neural supersampling on a novel gaming dataset - [[2308.01483](https://arxiv.org/abs/2308.01483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01483.md)].
- HANDAL: A Dataset of Real-World Manipulable Object Categories with Pose Annotations, Affordances, and Reconstructions - [[2308.01477](https://arxiv.org/abs/2308.01477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01477.md)].
- Training Data Protection with Compositional Diffusion Models - [[2308.01937](https://arxiv.org/abs/2308.01937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01937.md)].
- VertexSerum: Poisoning Graph Neural Networks for Link Inference - [[2308.01469](https://arxiv.org/abs/2308.01469)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01469.md)].
- From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion - [[2308.02560](https://arxiv.org/abs/2308.02560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02560.md)].
- On $\kappa$-solutions and canonical neighborhoods in 4d Ricci flow - [[2308.01448](https://arxiv.org/abs/2308.01448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01448.md)].
- OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models - [[2308.01390](https://arxiv.org/abs/2308.01390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01390.md)].
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales - [[2308.01320](https://arxiv.org/abs/2308.01320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01320.md)].
- Computational Long Exposure Mobile Photography - [[2308.01379](https://arxiv.org/abs/2308.01379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01379.md)].
- More Context, Less Distraction: Visual Classification by Inferring and Conditioning on Contextual Attributes - [[2308.01313](https://arxiv.org/abs/2308.01313)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01313.md)].
- Revisiting DETR Pre-training for Object Detection - [[2308.01300](https://arxiv.org/abs/2308.01300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01300.md)].
- XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models - [[2308.01263](https://arxiv.org/abs/2308.01263)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01263.md)].
- A Hyper-pixel-wise Contrastive Learning Augmented Segmentation Network for Old Landslide Detection Using High-Resolution Remote Sensing Images and Digital Elevation Model Data - [[2308.01251](https://arxiv.org/abs/2308.01251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01251.md)].
- Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation - [[2308.01240](https://arxiv.org/abs/2308.01240)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01240.md)].
- LSF-IDM: Automotive Intrusion Detection Model with Lightweight Attribution and Semantic Fusion - [[2308.01237](https://arxiv.org/abs/2308.01237)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01237.md)].
- Grounded Image Text Matching with Mismatched Relation Reasoning - [[2308.01236](https://arxiv.org/abs/2308.01236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01236.md)].
- Geometric wakes in collimators and step transitions of arbitrary cross-sections: conformal mapping approach - [[2308.01235](https://arxiv.org/abs/2308.01235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01235.md)].
- One Tree to Rule Them All: Poly-Logarithmic Universal Steiner Tree - [[2308.01199](https://arxiv.org/abs/2308.01199)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01199.md)].
- Improving Generalization in Visual Reinforcement Learning via Conflict-aware Gradient Agreement Augmentation - [[2308.01194](https://arxiv.org/abs/2308.01194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01194.md)].
- Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey - [[2308.01191](https://arxiv.org/abs/2308.01191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01191.md)].
- Three-level Dicke quantum battery - [[2308.01188](https://arxiv.org/abs/2308.01188)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01188.md)].
- Multiobjective Optimization of Non-Smooth PDE-Constrained Problems - [[2308.01113](https://arxiv.org/abs/2308.01113)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01113.md)].
- Black hole thermodynamics in Horndeski theories - [[2308.01082](https://arxiv.org/abs/2308.01082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01082.md)].
- MammoDG: Generalisable Deep Learning Breaks the Limits of Cross-Domain Multi-Center Breast Cancer Screening - [[2308.01057](https://arxiv.org/abs/2308.01057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01057.md)].
- Stability Analysis for a Class of Heterogeneous Catalysis Models - [[2308.01049](https://arxiv.org/abs/2308.01049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01049.md)].
- Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation - [[2308.01045](https://arxiv.org/abs/2308.01045)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01045.md)].
- An improved infrastructure for the IceCube realtime system - [[2308.01031](https://arxiv.org/abs/2308.01031)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01031.md)].
- Model-agnostic search for the quasinormal modes of gravitational wave echoes - [[2308.01017](https://arxiv.org/abs/2308.01017)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01017.md)].
- Enhancing Representation Learning for Periodic Time Series with Floss: A Frequency Domain Regularization Approach - [[2308.01011](https://arxiv.org/abs/2308.01011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01011.md)].
- From Sparse to Soft Mixtures of Experts - [[2308.00951](https://arxiv.org/abs/2308.00951)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00951.md)].
- Cosmological Distance Measurement of 12 Nearby Supernovae IIP with ROTSE-IIIB - [[2308.00916](https://arxiv.org/abs/2308.00916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00916.md)].
- ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation - [[2308.00906](https://arxiv.org/abs/2308.00906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00906.md)].
- VLUCI: Variational Learning of Unobserved Confounders for Counterfactual Inference - [[2308.00904](https://arxiv.org/abs/2308.00904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00904.md)].
- Weak localization in radiative transfer of acoustic waves in a randomly-fluctuating slab - [[2308.00822](https://arxiv.org/abs/2308.00822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00822.md)].
- Optimal design of plane elastic membranes using the convexified Föppl's model - [[2308.00811](https://arxiv.org/abs/2308.00811)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00811.md)].
- Body Knowledge and Uncertainty Modeling for Monocular 3D Human Body Reconstruction - [[2308.00799](https://arxiv.org/abs/2308.00799)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00799.md)].
- LISA: Reasoning Segmentation via Large Language Model - [[2308.00692](https://arxiv.org/abs/2308.00692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00692.md)].
- Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models - [[2308.00675](https://arxiv.org/abs/2308.00675)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00675.md)].
- Note: Stokes-Einstein relation without hydrodynamic diameter in the TIP4P/Ice water model - [[2308.00653](https://arxiv.org/abs/2308.00653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00653.md)].
- ELFNet: Evidential Local-global Fusion for Stereo Matching - [[2308.00728](https://arxiv.org/abs/2308.00728)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00728.md)].
- Detecting Cloud Presence in Satellite Images Using the RGB-based CLIP Vision-Language Model - [[2308.00541](https://arxiv.org/abs/2308.00541)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00541.md)].
- Understanding URDF: A Dataset and Analysis - [[2308.00514](https://arxiv.org/abs/2308.00514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00514.md)].
- Stochastic Geometry Based Modeling and Analysis on Network NOMA in Downlink CoMP Systems - [[2308.00499](https://arxiv.org/abs/2308.00499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00499.md)].
- A many-sorted epistemic logic for chromatic hypergraphs - [[2308.00477](https://arxiv.org/abs/2308.00477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00477.md)].
- FLatten Transformer: Vision Transformer using Focused Linear Attention - [[2308.00442](https://arxiv.org/abs/2308.00442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00442.md)].
- SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning - [[2308.00436](https://arxiv.org/abs/2308.00436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00436.md)].
- DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving - [[2308.00398](https://arxiv.org/abs/2308.00398)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00398.md)].
- Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning - [[2308.02533](https://arxiv.org/abs/2308.02533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02533.md)].
- Deep Image Harmonization with Learnable Augmentation - [[2308.00376](https://arxiv.org/abs/2308.00376)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00376.md)].
- Deep Image Harmonization with Globally Guided Feature Transformation and Relation Distillation - [[2308.00356](https://arxiv.org/abs/2308.00356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00356.md)].
- MetaGPT: Meta Programming for Multi-Agent Collaborative Framework - [[2308.00352](https://arxiv.org/abs/2308.00352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00352.md)].
- Artifact: Measuring and Mitigating Gaps in Structural Testing - [[2308.00316](https://arxiv.org/abs/2308.00316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00316.md)].
- Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models - [[2308.00304](https://arxiv.org/abs/2308.00304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00304.md)].
- Online Prototype Learning for Online Continual Learning - [[2308.00301](https://arxiv.org/abs/2308.00301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00301.md)].
- CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering - [[2308.00284](https://arxiv.org/abs/2308.00284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00284.md)].
- Improving Pixel-based MIM by Reducing Wasted Modeling Capability - [[2308.00261](https://arxiv.org/abs/2308.00261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00261.md)].
- GOALS-JWST: Gas Dynamics and Excitation in NGC7469 revealed by NIRSpec - [[2308.00209](https://arxiv.org/abs/2308.00209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00209.md)].

### July 2023

- Predicting masked tokens in stochastic locations improves masked image modeling - [[2308.00566](https://arxiv.org/abs/2308.00566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00566.md)].
- Learning to Model the World with Language - [[2308.01399](https://arxiv.org/abs/2308.01399)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.01399.md)].
- Discovering Adaptable Symbolic Algorithms from Scratch - [[2307.16890](https://arxiv.org/abs/2307.16890)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16890.md)].
- Virtual Prompt Injection for Instruction-Tuned Large Language Models - [[2307.16888](https://arxiv.org/abs/2307.16888)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16888.md)].
- Shortcut Partitions in Minor-Free Graphs: Steiner Point Removal, Distance Oracles, Tree Covers, and More - [[2308.00555](https://arxiv.org/abs/2308.00555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00555.md)].
- Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy - [[2307.16867](https://arxiv.org/abs/2307.16867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16867.md)].
- Random Sub-Samples Generation for Self-Supervised Real Image Denoising - [[2307.16825](https://arxiv.org/abs/2307.16825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16825.md)].
- ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs - [[2307.16789](https://arxiv.org/abs/2307.16789)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16789.md)].
- UniVTG: Towards Unified Video-Language Temporal Grounding - [[2307.16715](https://arxiv.org/abs/2307.16715)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16715.md)].
- DiffPose: SpatioTemporal Diffusion Model for Video-Based Human Pose Estimation - [[2307.16687](https://arxiv.org/abs/2307.16687)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16687.md)].
- Guiding Image Captioning Models Toward More Specific Captions - [[2307.16686](https://arxiv.org/abs/2307.16686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16686.md)].
- Graph Structure from Point Clouds: Geometric Attention is All You Need - [[2307.16662](https://arxiv.org/abs/2307.16662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16662.md)].
- CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification - [[2307.16634](https://arxiv.org/abs/2307.16634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16634.md)].
- FULLER: Unified Multi-modality Multi-task 3D Perception via Multi-level Gradient Calibration - [[2307.16617](https://arxiv.org/abs/2307.16617)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16617.md)].
- Transferable Decoding with Visual Entities for Zero-Shot Image Captioning - [[2307.16525](https://arxiv.org/abs/2307.16525)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16525.md)].
- Towards General Low-Light Raw Noise Synthesis and Modeling - [[2307.16508](https://arxiv.org/abs/2307.16508)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16508.md)].
- MovieChat: From Dense Token to Sparse Memory for Long Video Understanding - [[2307.16449](https://arxiv.org/abs/2307.16449)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16449.md)].
- DRAW: Defending Camera-shooted RAW against Image Manipulation - [[2307.16418](https://arxiv.org/abs/2307.16418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16418.md)].
- DDG-Net: Discriminability-Driven Graph Network for Weakly-supervised Temporal Action Localization - [[2307.16415](https://arxiv.org/abs/2307.16415)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16415.md)].
- Bridging the Gap: Exploring the Capabilities of Bridge-Architectures for Complex Visual Reasoning Tasks - [[2307.16395](https://arxiv.org/abs/2307.16395)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16395.md)].
- JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human Mesh Recovery - [[2307.16377](https://arxiv.org/abs/2307.16377)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16377.md)].
- LP-MusicCaps: LLM-Based Pseudo Music Captioning - [[2307.16372](https://arxiv.org/abs/2307.16372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16372.md)].
- AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? - [[2307.16368](https://arxiv.org/abs/2307.16368)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16368.md)].
- Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples - [[2307.16361](https://arxiv.org/abs/2307.16361)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16361.md)].
- Evaluating ChatGPT and GPT-4 for Visual Programming - [[2308.02522](https://arxiv.org/abs/2308.02522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02522.md)].
- Unified Model for Image, Video, Audio and Language Tasks - [[2307.16184](https://arxiv.org/abs/2307.16184)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16184.md)].
- Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models - [[2307.16180](https://arxiv.org/abs/2307.16180)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16180.md)].
- SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension - [[2307.16125](https://arxiv.org/abs/2307.16125)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.16125.md)].
- Separate Scene Text Detector for Unseen Scripts is Not All You Need - [[2307.15991](https://arxiv.org/abs/2307.15991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15991.md)].
- XMem++: Production-level Video Segmentation From Few Annotated Frames - [[2307.15958](https://arxiv.org/abs/2307.15958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15958.md)].
- CMDA: Cross-Modality Domain Adaptation for Nighttime Semantic Segmentation - [[2307.15942](https://arxiv.org/abs/2307.15942)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15942.md)].
- What can Discriminator do? Towards Box-free Ownership Verification of Generative Adversarial Network - [[2307.15860](https://arxiv.org/abs/2307.15860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15860.md)].
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control - [[2307.15818](https://arxiv.org/abs/2307.15818)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15818.md)].
- The Hydra Effect: Emergent Self-repair in Language Model Computations - [[2307.15771](https://arxiv.org/abs/2307.15771)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15771.md)].
- MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking - [[2307.15700](https://arxiv.org/abs/2307.15700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15700.md)].
- Scaling Data Generation in Vision-and-Language Navigation - [[2307.15644](https://arxiv.org/abs/2307.15644)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15644.md)].
- Robust Distortion-free Watermarks for Language Models - [[2307.15593](https://arxiv.org/abs/2307.15593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15593.md)].
- Beating Backdoor Attack at Its Own Game - [[2307.15539](https://arxiv.org/abs/2307.15539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15539.md)].
- Exploring Format Consistency for Instruction Tuning - [[2307.15504](https://arxiv.org/abs/2307.15504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15504.md)].
- FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines - [[2307.15475](https://arxiv.org/abs/2307.15475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15475.md)].
- Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization? - [[2307.15422](https://arxiv.org/abs/2307.15422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15422.md)].
- Uncertainty-aware Unsupervised Multi-Object Tracking - [[2307.15409](https://arxiv.org/abs/2307.15409)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15409.md)].
- Supervised Homography Learning with Realistic Dataset Generation - [[2307.15353](https://arxiv.org/abs/2307.15353)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15353.md)].
- Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding - [[2307.15337](https://arxiv.org/abs/2307.15337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15337.md)].
- Dynamic PlenOctree for Adaptive Sampling Refinement in Explicit NeRF - [[2307.15333](https://arxiv.org/abs/2307.15333)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15333.md)].
- TaskExpert: Dynamically Assembling Multi-Task Representations with Memorial Mixture-of-Experts - [[2307.15324](https://arxiv.org/abs/2307.15324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15324.md)].
- Multiple Instance Learning Framework with Masked Hard Instance Mining for Whole Slide Image Classification - [[2307.15254](https://arxiv.org/abs/2307.15254)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15254.md)].
- Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback - [[2307.15217](https://arxiv.org/abs/2307.15217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15217.md)].
- PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization - [[2307.15199](https://arxiv.org/abs/2307.15199)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15199.md)].
- Med-Flamingo: a Multimodal Medical Few-shot Learner - [[2307.15189](https://arxiv.org/abs/2307.15189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15189.md)].
- Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields - [[2307.15131](https://arxiv.org/abs/2307.15131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15131.md)].
- To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation - [[2307.15063](https://arxiv.org/abs/2307.15063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15063.md)].
- Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation - [[2308.07931](https://arxiv.org/abs/2308.07931)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.07931.md)].
- Learning Depth Estimation for Transparent and Mirror Surfaces - [[2307.15052](https://arxiv.org/abs/2307.15052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15052.md)].
- Regularized Mask Tuning: Uncovering Hidden Knowledge in Pre-trained Vision-Language Models - [[2307.15049](https://arxiv.org/abs/2307.15049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15049.md)].
- Universal and Transferable Adversarial Attacks on Aligned Language Models - [[2307.15043](https://arxiv.org/abs/2307.15043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15043.md)].
- TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis - [[2307.15042](https://arxiv.org/abs/2307.15042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15042.md)].
- Diverse Inpainting and Editing with GAN Inversion - [[2307.15033](https://arxiv.org/abs/2307.15033)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15033.md)].
- SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark - [[2307.15020](https://arxiv.org/abs/2307.15020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15020.md)].
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges - [[2307.15016](https://arxiv.org/abs/2307.15016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15016.md)].
- Scaling TransNormer to 175 Billion Parameters - [[2307.14995](https://arxiv.org/abs/2307.14995)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14995.md)].
- S$^3$: Social-network Simulation System with Large Language Model-Empowered Agents - [[2307.14984](https://arxiv.org/abs/2307.14984)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14984.md)].
- Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models - [[2307.14971](https://arxiv.org/abs/2307.14971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14971.md)].
- PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback - [[2307.14936](https://arxiv.org/abs/2307.14936)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14936.md)].
- Seeing through the Brain: Image Reconstruction of Visual Perception from Human Brain Signals - [[2308.02510](https://arxiv.org/abs/2308.02510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.02510.md)].
- Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning - [[2307.14786](https://arxiv.org/abs/2307.14786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14786.md)].
- Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining - [[2307.14768](https://arxiv.org/abs/2307.14768)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14768.md)].
- Test Time Adaptation for Blind Image Quality Assessment - [[2307.14735](https://arxiv.org/abs/2307.14735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14735.md)].
- P2C: Self-Supervised Point Cloud Completion from Single Partial Clouds - [[2307.14726](https://arxiv.org/abs/2307.14726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14726.md)].
- Pre-training Vision Transformers with Very Limited Synthesized Images - [[2307.14710](https://arxiv.org/abs/2307.14710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14710.md)].
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation - [[2307.14709](https://arxiv.org/abs/2307.14709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14709.md)].
- 360VOT: A New Benchmark Dataset for Omnidirectional Visual Object Tracking - [[2307.14630](https://arxiv.org/abs/2307.14630)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14630.md)].
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection - [[2307.14620](https://arxiv.org/abs/2307.14620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14620.md)].
- TextManiA: Enriching Visual Feature by Text-driven Manifold Augmentation - [[2307.14611](https://arxiv.org/abs/2307.14611)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14611.md)].
- Clustering based Point Cloud Representation Learning for 3D Analysis - [[2307.14605](https://arxiv.org/abs/2307.14605)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14605.md)].
- Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition - [[2307.14535](https://arxiv.org/abs/2307.14535)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14535.md)].
- MiDaS v3.1 -- A Model Zoo for Robust Monocular Relative Depth Estimation - [[2307.14460](https://arxiv.org/abs/2307.14460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14460.md)].
- Three Bricks to Consolidate Watermarks for Large Language Models - [[2308.00113](https://arxiv.org/abs/2308.00113)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2308.00113.md)].
- MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation - [[2307.14336](https://arxiv.org/abs/2307.14336)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14336.md)].
- WavJourney: Compositional Audio Creation with Large Language Models - [[2307.14335](https://arxiv.org/abs/2307.14335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14335.md)].
- Towards Generalist Biomedical AI - [[2307.14334](https://arxiv.org/abs/2307.14334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14334.md)].
- G2L: Semantically Aligned and Uniform Video Grounding via Geodesic and Game Theory - [[2307.14277](https://arxiv.org/abs/2307.14277)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14277.md)].
- Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences - [[2307.14225](https://arxiv.org/abs/2307.14225)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14225.md)].
- ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation - [[2307.14187](https://arxiv.org/abs/2307.14187)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14187.md)].
- Creative Birds: Self-Supervised Single-View 3D Style Transfer - [[2307.14127](https://arxiv.org/abs/2307.14127)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14127.md)].
- Leveraging Implicit Feedback from Deployment Data in Dialogue - [[2307.14117](https://arxiv.org/abs/2307.14117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14117.md)].
- Uncertainty Guided Adaptive Warping for Robust and Efficient Stereo Matching - [[2307.14071](https://arxiv.org/abs/2307.14071)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14071.md)].
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models - [[2307.14061](https://arxiv.org/abs/2307.14061)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14061.md)].
- 3D Semantic Subspace Traverser: Empowering 3D Generative Model with Shape Editing Capability - [[2307.14051](https://arxiv.org/abs/2307.14051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14051.md)].
- Controllable Guide-Space for Generalizable Face Forgery Detection - [[2307.14039](https://arxiv.org/abs/2307.14039)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14039.md)].
- Adaptive Frequency Filters As Efficient Global Token Mixers - [[2307.14008](https://arxiv.org/abs/2307.14008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14008.md)].
- Tracking Anything in High Quality - [[2307.13974](https://arxiv.org/abs/2307.13974)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13974.md)].
- AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for Assistive Driving Perception - [[2307.13933](https://arxiv.org/abs/2307.13933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13933.md)].
- Spatio-Temporal Domain Awareness for Multi-Agent Collaborative Perception - [[2307.13929](https://arxiv.org/abs/2307.13929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13929.md)].
- trajdata: A Unified Interface to Multiple Human Trajectory Datasets - [[2307.13924](https://arxiv.org/abs/2307.13924)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13924.md)].
- Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation - [[2307.13908](https://arxiv.org/abs/2307.13908)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13908.md)].
- WebArena: A Realistic Web Environment for Building Autonomous Agents - [[2307.13854](https://arxiv.org/abs/2307.13854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13854.md)].
- How to Scale Your EMA - [[2307.13813](https://arxiv.org/abs/2307.13813)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13813.md)].
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning - [[2307.13770](https://arxiv.org/abs/2307.13770)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13770.md)].
- PlaneRecTR: Unified Query Learning for 3D Plane Recovery from a Single View - [[2307.13756](https://arxiv.org/abs/2307.13756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13756.md)].
- Foundational Models Defining a New Era in Vision: A Survey and Outlook - [[2307.13721](https://arxiv.org/abs/2307.13721)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13721.md)].
- Composite Diffusion | whole >= Σparts - [[2307.13720](https://arxiv.org/abs/2307.13720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13720.md)].
- ARB: Advanced Reasoning Benchmark for Large Language Models - [[2307.13692](https://arxiv.org/abs/2307.13692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13692.md)].
- RecursiveDet: End-to-End Region-based Recursive Object Detection - [[2307.13619](https://arxiv.org/abs/2307.13619)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13619.md)].
- Model Calibration in Dense Classification with Adaptive Label Perturbation - [[2307.13539](https://arxiv.org/abs/2307.13539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13539.md)].
- Spectrum-guided Multi-granularity Referring Video Object Segmentation - [[2307.13537](https://arxiv.org/abs/2307.13537)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13537.md)].
- Re-mine, Learn and Reason: Exploring the Cross-modal Semantic Correlations for Language-guided HOI detection - [[2307.13529](https://arxiv.org/abs/2307.13529)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13529.md)].
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios - [[2307.13528](https://arxiv.org/abs/2307.13528)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13528.md)].
- Weakly-supervised 3D Pose Transfer with Keypoints - [[2307.13459](https://arxiv.org/abs/2307.13459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13459.md)].
- Predicting Code Coverage without Execution - [[2307.13383](https://arxiv.org/abs/2307.13383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13383.md)].
- Unmasking Anomalies in Road-Scene Segmentation - [[2307.13316](https://arxiv.org/abs/2307.13316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13316.md)].
- LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition - [[2307.13269](https://arxiv.org/abs/2307.13269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13269.md)].
- Conditional Cross Attention Network for Multi-Space Embedding without Entanglement in Only a SINGLE Network - [[2307.13254](https://arxiv.org/abs/2307.13254)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13254.md)].
- GaPro: Box-Supervised 3D Point Cloud Instance Segmentation Using Gaussian Processes as Pseudo Labelers - [[2307.13251](https://arxiv.org/abs/2307.13251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13251.md)].
- Strivec: Sparse Tri-Vector Radiance Fields - [[2307.13226](https://arxiv.org/abs/2307.13226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13226.md)].
- GraspGPT: Leveraging Semantic Knowledge from a Large Language Model for Task-Oriented Grasping - [[2307.13204](https://arxiv.org/abs/2307.13204)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13204.md)].
- Contrastive Example-Based Control - [[2307.13101](https://arxiv.org/abs/2307.13101)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13101.md)].
- LLM-Rec: Personalized Recommendation via Prompting Large Language Models - [[2307.15780](https://arxiv.org/abs/2307.15780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.15780.md)].
- 3D-LLM: Injecting the 3D World into Large Language Models - [[2307.12981](https://arxiv.org/abs/2307.12981)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12981.md)].
- A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models - [[2307.12980](https://arxiv.org/abs/2307.12980)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12980.md)].
- Evaluating the Ripple Effects of Knowledge Editing in Language Models - [[2307.12976](https://arxiv.org/abs/2307.12976)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12976.md)].
- DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting - [[2307.12972](https://arxiv.org/abs/2307.12972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12972.md)].
- Aligning Large Language Models with Human: A Survey - [[2307.12966](https://arxiv.org/abs/2307.12966)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12966.md)].
- RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment - [[2307.12950](https://arxiv.org/abs/2307.12950)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12950.md)].
- GridMM: Grid Memory Map for Vision-and-Language Navigation - [[2307.12907](https://arxiv.org/abs/2307.12907)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12907.md)].
- A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis - [[2307.12856](https://arxiv.org/abs/2307.12856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12856.md)].
- Multiscale Video Pretraining for Long-Term Activity Forecasting - [[2307.12854](https://arxiv.org/abs/2307.12854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12854.md)].
- Fast Full-frame Video Stabilization with Iterative Optimization - [[2307.12774](https://arxiv.org/abs/2307.12774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12774.md)].
- COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts - [[2307.12730](https://arxiv.org/abs/2307.12730)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12730.md)].
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction - [[2307.12729](https://arxiv.org/abs/2307.12729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12729.md)].
- MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised Learning of Motion and Content Features - [[2307.12698](https://arxiv.org/abs/2307.12698)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12698.md)].
- PG-RCNN: Semantic Surface Point Generation for 3D Object Detection - [[2307.12637](https://arxiv.org/abs/2307.12637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12637.md)].
- CTVIS: Consistent Training for Online Video Instance Segmentation - [[2307.12616](https://arxiv.org/abs/2307.12616)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12616.md)].
- Less is More: Focus Attention for Efficient DETR - [[2307.12612](https://arxiv.org/abs/2307.12612)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12612.md)].
- PRIOR: Prototype Representation Joint Learning from Medical Images and Reports - [[2307.12577](https://arxiv.org/abs/2307.12577)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12577.md)].
- A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation - [[2307.12574](https://arxiv.org/abs/2307.12574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12574.md)].
- Interpolating between Images with Diffusion Models - [[2307.12560](https://arxiv.org/abs/2307.12560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12560.md)].
- PUMA: Secure Inference of LLaMA-7B in Five Minutes - [[2307.12533](https://arxiv.org/abs/2307.12533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12533.md)].
- Cross Contrasting Feature Perturbation for Domain Generalization - [[2307.12502](https://arxiv.org/abs/2307.12502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12502.md)].
- TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition - [[2307.12493](https://arxiv.org/abs/2307.12493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12493.md)].
- Rethinking Data Distillation: Do Not Overlook Calibration - [[2307.12463](https://arxiv.org/abs/2307.12463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12463.md)].
- ProtoFL: Unsupervised Federated Learning via Prototypical Distillation - [[2307.12450](https://arxiv.org/abs/2307.12450)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12450.md)].
- Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection - [[2307.12427](https://arxiv.org/abs/2307.12427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12427.md)].
- Testing Hateful Speeches against Policies - [[2307.12418](https://arxiv.org/abs/2307.12418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12418.md)].
- Learning Navigational Visual Representations with Semantic Map Supervision - [[2307.12335](https://arxiv.org/abs/2307.12335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12335.md)].
- TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering - [[2307.12291](https://arxiv.org/abs/2307.12291)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12291.md)].
- Downstream-agnostic Adversarial Examples - [[2307.12280](https://arxiv.org/abs/2307.12280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12280.md)].
- Geometry-Aware Adaptation for Pretrained Models - [[2307.12226](https://arxiv.org/abs/2307.12226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12226.md)].
- LoLep: Single-View View Synthesis with Locally-Learned Planes and Self-Attention Occlusion Inference - [[2307.12217](https://arxiv.org/abs/2307.12217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12217.md)].
- LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction - [[2307.12194](https://arxiv.org/abs/2307.12194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12194.md)].
- Optimized Network Architectures for Large Language Model Training with Billions of Parameters - [[2307.12169](https://arxiv.org/abs/2307.12169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12169.md)].
- Hallucination Improves the Performance of Unsupervised Visual Representation Learning - [[2307.12168](https://arxiv.org/abs/2307.12168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12168.md)].
- DIP-RL: Demonstration-Inferred Preference Learning in Minecraft - [[2307.12158](https://arxiv.org/abs/2307.12158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12158.md)].
- Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes - [[2307.12101](https://arxiv.org/abs/2307.12101)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12101.md)].
- Discovering Spatio-Temporal Rationales for Video Question Answering - [[2307.12058](https://arxiv.org/abs/2307.12058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12058.md)].
- On the Effectiveness of Spectral Discriminators for Perceptual Quality Improvement - [[2307.12027](https://arxiv.org/abs/2307.12027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.12027.md)].
- Learning Vision-and-Language Navigation from YouTube Videos - [[2307.11984](https://arxiv.org/abs/2307.11984)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11984.md)].
- Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels? - [[2307.11978](https://arxiv.org/abs/2307.11978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11978.md)].
- CARTIER: Cartographic lAnguage Reasoning Targeted at Instruction Execution for Robots - [[2307.11865](https://arxiv.org/abs/2307.11865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11865.md)].
- HybridAugment++: Unified Frequency Spectra Perturbations for Model Robustness - [[2307.11823](https://arxiv.org/abs/2307.11823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11823.md)].
- Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts - [[2307.11661](https://arxiv.org/abs/2307.11661)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11661.md)].
- OxfordTVG-HIC: Can Machine Make Humorous Captions from Images? - [[2307.11636](https://arxiv.org/abs/2307.11636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11636.md)].
- Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation - [[2307.11545](https://arxiv.org/abs/2307.11545)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11545.md)].
- CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields - [[2307.11526](https://arxiv.org/abs/2307.11526)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11526.md)].
- CORE: Cooperative Reconstruction for Multi-Agent Perception - [[2307.11514](https://arxiv.org/abs/2307.11514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11514.md)].
- SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection - [[2307.11477](https://arxiv.org/abs/2307.11477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11477.md)].
- Distribution Shift Matters for Knowledge Distillation with Webly Collected Images - [[2307.11469](https://arxiv.org/abs/2307.11469)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11469.md)].
- Strip-MLP: Efficient Token Interaction for Vision MLP - [[2307.11458](https://arxiv.org/abs/2307.11458)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11458.md)].
- Prompting Large Language Models with Speech Recognition Abilities - [[2307.11795](https://arxiv.org/abs/2307.11795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11795.md)].
- FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields - [[2307.11418](https://arxiv.org/abs/2307.11418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11418.md)].
- Deep Directly-Trained Spiking Neural Networks for Object Detection - [[2307.11411](https://arxiv.org/abs/2307.11411)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11411.md)].
- Subject-Diffusion:Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning - [[2307.11410](https://arxiv.org/abs/2307.11410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11410.md)].
- Latent-OFER: Detect, Mask, and Reconstruct with Latent Vectors for Occluded Facial Expression Recognition - [[2307.11404](https://arxiv.org/abs/2307.11404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11404.md)].
- CLR: Channel-wise Lightweight Reprogramming for Continual Learning - [[2307.11386](https://arxiv.org/abs/2307.11386)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11386.md)].
- What can a Single Attention Layer Learn? A Study Through the Random Features Lens - [[2307.11353](https://arxiv.org/abs/2307.11353)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11353.md)].
- Tuning Pre-trained Model via Moment Probing - [[2307.11342](https://arxiv.org/abs/2307.11342)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11342.md)].
- Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields - [[2307.11335](https://arxiv.org/abs/2307.11335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11335.md)].
- DPM-OT: A New Diffusion Probabilistic Model Based on Optimal Transport - [[2307.11308](https://arxiv.org/abs/2307.11308)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11308.md)].
- PourIt!: Weakly-supervised Liquid Perception from a Single Image for Visual Closed-Loop Robotic Pouring - [[2307.11299](https://arxiv.org/abs/2307.11299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11299.md)].
- MAS: Towards Resource-Efficient Federated Multiple-Task Learning - [[2307.11285](https://arxiv.org/abs/2307.11285)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11285.md)].
- Brain2Music: Reconstructing Music from Human Brain Activity - [[2307.11078](https://arxiv.org/abs/2307.11078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11078.md)].
- AlignDet: Aligning Pre-training and Fine-tuning in Object Detection - [[2307.11077](https://arxiv.org/abs/2307.11077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11077.md)].
- Cascade-DETR: Delving into High-Quality Universal Object Detection - [[2307.11035](https://arxiv.org/abs/2307.11035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11035.md)].
- General Image-to-Image Translation with One-Shot Image Guidance - [[2307.14352](https://arxiv.org/abs/2307.14352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.14352.md)].
- Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image - [[2307.10984](https://arxiv.org/abs/2307.10984)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10984.md)].
- Improving Online Lane Graph Extraction by Object-Lane Clustering - [[2307.10947](https://arxiv.org/abs/2307.10947)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10947.md)].
- Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery - [[2307.10943](https://arxiv.org/abs/2307.10943)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10943.md)].
- PASTA: Pretrained Action-State Transformer Agents - [[2307.10936](https://arxiv.org/abs/2307.10936)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10936.md)].
- FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets - [[2307.10928](https://arxiv.org/abs/2307.10928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10928.md)].
- Diffusion Sampling with Momentum for Mitigating Divergence Artifacts - [[2307.11118](https://arxiv.org/abs/2307.11118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11118.md)].
- The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning - [[2307.10907](https://arxiv.org/abs/2307.10907)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10907.md)].
- BlendFace: Re-designing Identity Encoders for Face-Swapping - [[2307.10854](https://arxiv.org/abs/2307.10854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10854.md)].
- BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion - [[2307.10816](https://arxiv.org/abs/2307.10816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10816.md)].
- Meta-Transformer: A Unified Framework for Multimodal Learning - [[2307.10802](https://arxiv.org/abs/2307.10802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10802.md)].
- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces - [[2307.10797](https://arxiv.org/abs/2307.10797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10797.md)].
- See More and Know More: Zero-shot Point Cloud Segmentation via Multi-modal Visual Data - [[2307.10782](https://arxiv.org/abs/2307.10782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10782.md)].
- Urban Radiance Field Representation with Deformable Neural Mesh Primitives - [[2307.10776](https://arxiv.org/abs/2307.10776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10776.md)].
- Kick Back & Relax: Learning to Reconstruct the World by Watching SlowTV - [[2307.10713](https://arxiv.org/abs/2307.10713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10713.md)].
- Lighting up NeRF via Unsupervised Decomposition and Enhancement - [[2307.10664](https://arxiv.org/abs/2307.10664)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10664.md)].
- SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models - [[2307.10635](https://arxiv.org/abs/2307.10635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10635.md)].
- Physics-Driven Turbulence Image Restoration with Stochastic Refinement - [[2307.10603](https://arxiv.org/abs/2307.10603)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10603.md)].
- Flatness-Aware Minimization for Domain Generalization - [[2307.11108](https://arxiv.org/abs/2307.11108)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11108.md)].
- Instruction-following Evaluation through Verbalizer Manipulation - [[2307.10558](https://arxiv.org/abs/2307.10558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10558.md)].
- EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization - [[2307.10554](https://arxiv.org/abs/2307.10554)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10554.md)].
- TokenFlow: Consistent Diffusion Features for Consistent Video Editing - [[2307.10373](https://arxiv.org/abs/2307.10373)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10373.md)].
- DNA-Rendering: A Diverse Neural Actor Repository for High-Fidelity Human-centric Rendering - [[2307.10173](https://arxiv.org/abs/2307.10173)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10173.md)].
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI - [[2307.10172](https://arxiv.org/abs/2307.10172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10172.md)].
- Challenges and Applications of Large Language Models - [[2307.10169](https://arxiv.org/abs/2307.10169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10169.md)].
- LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs - [[2307.10168](https://arxiv.org/abs/2307.10168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10168.md)].
- Improving Multimodal Datasets with Image Captioning - [[2307.10350](https://arxiv.org/abs/2307.10350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10350.md)].
- FABRIC: Personalizing Diffusion Models with Iterative Feedback - [[2307.10159](https://arxiv.org/abs/2307.10159)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10159.md)].
- Android in the Wild: A Large-Scale Dataset for Android Device Control - [[2307.10088](https://arxiv.org/abs/2307.10088)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10088.md)].
- Unsupervised Accuracy Estimation of Deep Visual Models using Domain-Adaptive Adversarial Perturbation without Source Samples - [[2307.10062](https://arxiv.org/abs/2307.10062)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10062.md)].
- MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions - [[2307.10008](https://arxiv.org/abs/2307.10008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10008.md)].
- Hierarchical Spatio-Temporal Representation Learning for Gait Recognition - [[2307.09856](https://arxiv.org/abs/2307.09856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09856.md)].
- What do neural networks learn in image classification? A frequency shortcut perspective - [[2307.09829](https://arxiv.org/abs/2307.09829)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09829.md)].
- Density-invariant Features for Distant Point Cloud Registration - [[2307.09788](https://arxiv.org/abs/2307.09788)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09788.md)].
- Text2Layer: Layered Image Generation using Latent Diffusion Model - [[2307.09781](https://arxiv.org/abs/2307.09781)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09781.md)].
- Towards Building More Robust Models with Frequency Bias - [[2307.09763](https://arxiv.org/abs/2307.09763)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09763.md)].
- Generative Prompt Model for Weakly Supervised Object Localization - [[2307.09756](https://arxiv.org/abs/2307.09756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09756.md)].
- Space Engage: Collaborative Space Supervision for Contrastive-based Semi-Supervised Semantic Segmentation - [[2307.09755](https://arxiv.org/abs/2307.09755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09755.md)].
- CPCM: Contextual Point Cloud Modeling for Weakly-supervised Point Cloud Semantic Segmentation - [[2307.10316](https://arxiv.org/abs/2307.10316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10316.md)].
- AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks - [[2307.09724](https://arxiv.org/abs/2307.09724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09724.md)].
- Towards Saner Deep Image Registration - [[2307.09696](https://arxiv.org/abs/2307.09696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09696.md)].
- GlobalMapper: Arbitrary-Shaped Urban Layout Generation - [[2307.09693](https://arxiv.org/abs/2307.09693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09693.md)].
- Towards A Unified Agent with Foundation Models - [[2307.09668](https://arxiv.org/abs/2307.09668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09668.md)].
- Object-aware Gaze Target Detection - [[2307.09662](https://arxiv.org/abs/2307.09662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09662.md)].
- Promoting Exploration in Memory-Augmented Adam using Critical Momenta - [[2307.09638](https://arxiv.org/abs/2307.09638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09638.md)].
- Conditional 360-degree Image Synthesis for Immersive Indoor Scene Decoration - [[2307.09621](https://arxiv.org/abs/2307.09621)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09621.md)].
- ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning - [[2307.09474](https://arxiv.org/abs/2307.09474)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09474.md)].
- Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla - [[2307.09458](https://arxiv.org/abs/2307.09458)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09458.md)].
- OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation - [[2307.09356](https://arxiv.org/abs/2307.09356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09356.md)].
- Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis - [[2307.09323](https://arxiv.org/abs/2307.09323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09323.md)].
- Biomaker CA: a Biome Maker project using Cellular Automata - [[2307.09320](https://arxiv.org/abs/2307.09320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09320.md)].
- EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting - [[2307.09306](https://arxiv.org/abs/2307.09306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09306.md)].
- Llama 2: Open Foundation and Fine-Tuned Chat Models - [[2307.09288](https://arxiv.org/abs/2307.09288)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09288.md)].
- Distilling Coarse-to-Fine Semantic Matching Knowledge for Weakly Supervised 3D Visual Grounding - [[2307.09267](https://arxiv.org/abs/2307.09267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09267.md)].
- Augmenting CLIP with Improved Visio-Linguistic Reasoning - [[2307.09233](https://arxiv.org/abs/2307.09233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09233.md)].
- NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF - [[2307.09112](https://arxiv.org/abs/2307.09112)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09112.md)].
- LA-Net: Landmark-Aware Learning for Reliable Facial Expression Recognition under Label Noise - [[2307.09023](https://arxiv.org/abs/2307.09023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09023.md)].
- How is ChatGPT's behavior changing over time? - [[2307.09009](https://arxiv.org/abs/2307.09009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09009.md)].
- Ord2Seq: Regarding Ordinal Regression as Label Sequence Prediction - [[2307.09004](https://arxiv.org/abs/2307.09004)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.09004.md)].
- Towards Authentic Face Restoration with Iterative Diffusion Models and Beyond - [[2307.08996](https://arxiv.org/abs/2307.08996)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08996.md)].
- Local or Global: Selective Knowledge Assimilation for Federated Learning with Limited Labels - [[2307.08809](https://arxiv.org/abs/2307.08809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08809.md)].
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation - [[2307.08779](https://arxiv.org/abs/2307.08779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08779.md)].
- GEAR: Augmenting Language Models with Generalizable and Efficient Tool Resolution - [[2307.08775](https://arxiv.org/abs/2307.08775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08775.md)].
- Diffusion Models Beat GANs on Image Classification - [[2307.08702](https://arxiv.org/abs/2307.08702)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08702.md)].
- AlpaGasus: Training A Better Alpaca with Fewer Data - [[2307.08701](https://arxiv.org/abs/2307.08701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08701.md)].
- Neural Video Depth Stabilizer - [[2307.08695](https://arxiv.org/abs/2307.08695)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08695.md)].
- TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT - [[2307.08674](https://arxiv.org/abs/2307.08674)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08674.md)].
- Retentive Network: A Successor to Transformer for Large Language Models - [[2307.08621](https://arxiv.org/abs/2307.08621)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08621.md)].
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs - [[2307.08581](https://arxiv.org/abs/2307.08581)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08581.md)].
- Scale-Aware Modulation Meet Transformer - [[2307.08579](https://arxiv.org/abs/2307.08579)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08579.md)].
- Does Visual Pretraining Help End-to-End Reasoning? - [[2307.08506](https://arxiv.org/abs/2307.08506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08506.md)].
- BUS:Efficient and Effective Vision-language Pre-training with Bottom-Up Patch Summarization - [[2307.08504](https://arxiv.org/abs/2307.08504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08504.md)].
- Cumulative Spatial Knowledge Distillation for Vision Transformers - [[2307.08500](https://arxiv.org/abs/2307.08500)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08500.md)].
- Differentiable Transportation Pruning - [[2307.08483](https://arxiv.org/abs/2307.08483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08483.md)].
- SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence Pre-training - [[2307.08476](https://arxiv.org/abs/2307.08476)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08476.md)].
- Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation - [[2307.08448](https://arxiv.org/abs/2307.08448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08448.md)].
- DOT: A Distillation-Oriented Trainer - [[2307.08436](https://arxiv.org/abs/2307.08436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08436.md)].
- On the application of Large Language Models for language teaching and assessment technology - [[2307.08393](https://arxiv.org/abs/2307.08393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08393.md)].
- Dynamic Snake Convolution based on Topological Geometric Constraints for Tubular Structure Segmentation - [[2307.08388](https://arxiv.org/abs/2307.08388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08388.md)].
- Self-supervised Monocular Depth Estimation: Let's Talk About The Weather - [[2307.08357](https://arxiv.org/abs/2307.08357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08357.md)].
- ShiftNAS: Improving One-shot NAS via Probability Shift - [[2307.08300](https://arxiv.org/abs/2307.08300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08300.md)].
- Random Boxes Are Open-world Object Detectors - [[2307.08249](https://arxiv.org/abs/2307.08249)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08249.md)].
- Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs - [[2307.08197](https://arxiv.org/abs/2307.08197)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08197.md)].
- Measuring Faithfulness in Chain-of-Thought Reasoning - [[2307.13702](https://arxiv.org/abs/2307.13702)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.13702.md)].
- Question Decomposition Improves the Faithfulness of Model-Generated Reasoning - [[2307.11768](https://arxiv.org/abs/2307.11768)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.11768.md)].
- Feedback is All You Need: Real-World Reinforcement Learning with Approximate Physics-Based Models - [[2307.08168](https://arxiv.org/abs/2307.08168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08168.md)].
- Planting a SEED of Vision in Large Language Model - [[2307.08041](https://arxiv.org/abs/2307.08041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08041.md)].
- Multi-Object Discovery by Low-Dimensional Object Motion - [[2307.08027](https://arxiv.org/abs/2307.08027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08027.md)].
- Householder Projector for Unsupervised Latent Semantics Discovery - [[2307.08012](https://arxiv.org/abs/2307.08012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.08012.md)].
- Towards Viewpoint-Invariant Visual Recognition via Adversarial Training - [[2307.10235](https://arxiv.org/abs/2307.10235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.10235.md)].
- Language Conditioned Traffic Generation - [[2307.07947](https://arxiv.org/abs/2307.07947)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07947.md)].
- Revisiting Domain-Adaptive 3D Object Detection by Reliable, Diverse and Class-balanced Pseudo-Labeling - [[2307.07944](https://arxiv.org/abs/2307.07944)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07944.md)].
- CVSformer: Cross-View Synthesis Transformer for Semantic Scene Completion - [[2307.07938](https://arxiv.org/abs/2307.07938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07938.md)].
- Communicative Agents for Software Development - [[2307.07924](https://arxiv.org/abs/2307.07924)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07924.md)].
- Is Imitation All You Need? Generalized Decision-Making with Dual-Phase Training - [[2307.07909](https://arxiv.org/abs/2307.07909)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07909.md)].
- Handwritten and Printed Text Segmentation: A Signature Case Study - [[2307.07887](https://arxiv.org/abs/2307.07887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07887.md)].
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World - [[2307.07859](https://arxiv.org/abs/2307.07859)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07859.md)].
- Adaptive Nonlinear Latent Transformation for Conditional Face Editing - [[2307.07790](https://arxiv.org/abs/2307.07790)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07790.md)].
- Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer - [[2307.07754](https://arxiv.org/abs/2307.07754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07754.md)].
- INVE: Interactive Neural Video Editing - [[2307.07663](https://arxiv.org/abs/2307.07663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07663.md)].
- RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World - [[2307.07653](https://arxiv.org/abs/2307.07653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07653.md)].
- CoTracker: It is Better to Track Together - [[2307.07635](https://arxiv.org/abs/2307.07635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07635.md)].
- NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis - [[2307.07511](https://arxiv.org/abs/2307.07511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07511.md)].
- DreamTeacher: Pretraining Image Backbones with Deep Generative Models - [[2307.07487](https://arxiv.org/abs/2307.07487)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07487.md)].
- Multimodal Distillation for Egocentric Action Recognition - [[2307.07483](https://arxiv.org/abs/2307.07483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07483.md)].
- Improving Zero-Shot Generalization for CLIP with Synthesized Prompts - [[2307.07397](https://arxiv.org/abs/2307.07397)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07397.md)].
- Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning - [[2307.07250](https://arxiv.org/abs/2307.07250)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07250.md)].
- FreeCOS: Self-Supervised Learning from Fractals and Unlabeled Images for Curvilinear Object Segmentation - [[2307.07245](https://arxiv.org/abs/2307.07245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07245.md)].
- Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts - [[2307.07218](https://arxiv.org/abs/2307.07218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07218.md)].
- Multimodal Motion Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection - [[2307.07205](https://arxiv.org/abs/2307.07205)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07205.md)].
- Learning to Retrieve In-Context Examples for Large Language Models - [[2307.07164](https://arxiv.org/abs/2307.07164)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07164.md)].
- Bootstrapping Vision-Language Learning with Decoupled Language Pre-training - [[2307.07063](https://arxiv.org/abs/2307.07063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07063.md)].
- DIALGEN: Collaborative Human-LM Generated Dialogues for Improved Understanding of Human-Human Conversations - [[2307.07047](https://arxiv.org/abs/2307.07047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.07047.md)].
- HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models - [[2307.06949](https://arxiv.org/abs/2307.06949)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06949.md)].
- In-context Autoencoder for Context Compression in a Large Language Model - [[2307.06945](https://arxiv.org/abs/2307.06945)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06945.md)].
- InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation - [[2307.06942](https://arxiv.org/abs/2307.06942)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06942.md)].
- Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation - [[2307.06940](https://arxiv.org/abs/2307.06940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06940.md)].
- mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs - [[2307.06930](https://arxiv.org/abs/2307.06930)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06930.md)].
- Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models - [[2307.06925](https://arxiv.org/abs/2307.06925)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06925.md)].
- Generating Benchmarks for Factuality Evaluation of Language Models - [[2307.06908](https://arxiv.org/abs/2307.06908)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06908.md)].
- Copy Is All You Need - [[2307.06962](https://arxiv.org/abs/2307.06962)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06962.md)].
- Assessing the Ability of ChatGPT to Screen Articles for Systematic Reviews - [[2307.06464](https://arxiv.org/abs/2307.06464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06464.md)].
- Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events - [[2307.06439](https://arxiv.org/abs/2307.06439)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06439.md)].
- T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation - [[2307.06350](https://arxiv.org/abs/2307.06350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06350.md)].
- Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution - [[2307.06304](https://arxiv.org/abs/2307.06304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06304.md)].
- Instruction Mining: High-Quality Instruction Data Selection for Large Language Models - [[2307.06290](https://arxiv.org/abs/2307.06290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06290.md)].
- MMBench: Is Your Multi-modal Model an All-around Player? - [[2307.06281](https://arxiv.org/abs/2307.06281)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06281.md)].
- SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Task Planning - [[2307.06135](https://arxiv.org/abs/2307.06135)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06135.md)].
- VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View - [[2307.06082](https://arxiv.org/abs/2307.06082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06082.md)].
- PolyLM: An Open Source Polyglot Large Language Model - [[2307.06018](https://arxiv.org/abs/2307.06018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06018.md)].
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models - [[2307.05973](https://arxiv.org/abs/2307.05973)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05973.md)].
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations - [[2307.05959](https://arxiv.org/abs/2307.05959)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05959.md)].
- GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video - [[2307.05853](https://arxiv.org/abs/2307.05853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05853.md)].
- Towards Robust and Efficient Continual Language Learning - [[2307.05741](https://arxiv.org/abs/2307.05741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05741.md)].
- Stack More Layers Differently: High-Rank Training Through Low-Rank Updates - [[2307.05695](https://arxiv.org/abs/2307.05695)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05695.md)].
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives - [[2307.05473](https://arxiv.org/abs/2307.05473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05473.md)].
- Self-consistency for open-ended generations - [[2307.06857](https://arxiv.org/abs/2307.06857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.06857.md)].
- EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone - [[2307.05463](https://arxiv.org/abs/2307.05463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05463.md)].
- Efficient 3D Articulated Human Generation with Layered Surface Volumes - [[2307.05462](https://arxiv.org/abs/2307.05462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05462.md)].
- Empowering Cross-lingual Behavioral Testing of NLP Models with Typological Features - [[2307.05454](https://arxiv.org/abs/2307.05454)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05454.md)].
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations - [[2307.05432](https://arxiv.org/abs/2307.05432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05432.md)].
- Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration - [[2307.05300](https://arxiv.org/abs/2307.05300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05300.md)].
- Generative Pretraining in Multimodality - [[2307.05222](https://arxiv.org/abs/2307.05222)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05222.md)].
- DNAGPT: A Generalized Pre-trained Tool for Versatile DNA Sequence Analysis Tasks - [[2307.05628](https://arxiv.org/abs/2307.05628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05628.md)].
- Test-Time Training on Video Streams - [[2307.05014](https://arxiv.org/abs/2307.05014)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05014.md)].
- Monotone deep Boltzmann machines - [[2307.04990](https://arxiv.org/abs/2307.04990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04990.md)].
- Secrets of RLHF in Large Language Models Part I: PPO - [[2307.04964](https://arxiv.org/abs/2307.04964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04964.md)].
- Semantic-SAM: Segment and Recognize Anything at Any Granularity - [[2307.04767](https://arxiv.org/abs/2307.04767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04767.md)].
- SITTA: A Semantic Image-Text Alignment for Image Captioning - [[2307.05591](https://arxiv.org/abs/2307.05591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05591.md)].
- Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal Rearrangement - [[2307.04751](https://arxiv.org/abs/2307.04751)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04751.md)].
- RoCo: Dialectic Multi-Robot Collaboration with Large Language Models - [[2307.04738](https://arxiv.org/abs/2307.04738)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04738.md)].
- AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning - [[2307.04725](https://arxiv.org/abs/2307.04725)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04725.md)].
- Large Language Models as General Pattern Machines - [[2307.04721](https://arxiv.org/abs/2307.04721)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04721.md)].
- International Institutions for Advanced AI - [[2307.04699](https://arxiv.org/abs/2307.04699)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04699.md)].
- VampNet: Music Generation via Masked Acoustic Token Modeling - [[2307.04686](https://arxiv.org/abs/2307.04686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04686.md)].
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System - [[2307.04577](https://arxiv.org/abs/2307.04577)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04577.md)].
- Improving Factuality of Abstractive Summarization via Contrastive Reward Learning - [[2307.04507](https://arxiv.org/abs/2307.04507)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04507.md)].
- RLTF: Reinforcement Learning from Unit Test Feedback - [[2307.04349](https://arxiv.org/abs/2307.04349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04349.md)].
- Convex Decomposition of Indoor Scenes - [[2307.04246](https://arxiv.org/abs/2307.04246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04246.md)].
- Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View - [[2307.04106](https://arxiv.org/abs/2307.04106)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04106.md)].
- SVIT: Scaling up Visual Instruction Tuning - [[2307.04087](https://arxiv.org/abs/2307.04087)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04087.md)].
- Toward Interactive Dictation - [[2307.04008](https://arxiv.org/abs/2307.04008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04008.md)].
- On decoder-only architecture for speech-to-text and large language model integration - [[2307.03917](https://arxiv.org/abs/2307.03917)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03917.md)].
- Large Language Models for Supply Chain Optimization - [[2307.03875](https://arxiv.org/abs/2307.03875)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03875.md)].
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation - [[2307.03869](https://arxiv.org/abs/2307.03869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03869.md)].
- AutoDecoding Latent 3D Diffusion Models - [[2307.05445](https://arxiv.org/abs/2307.05445)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.05445.md)].
- Equivariant Single View Pose Prediction Via Induced and Restricted Representations - [[2307.03704](https://arxiv.org/abs/2307.03704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03704.md)].
- Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation - [[2307.03659](https://arxiv.org/abs/2307.03659)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03659.md)].
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest - [[2307.03601](https://arxiv.org/abs/2307.03601)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03601.md)].
- One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention - [[2307.03576](https://arxiv.org/abs/2307.03576)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03576.md)].
- Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning - [[2307.03486](https://arxiv.org/abs/2307.03486)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03486.md)].
- Solvent: A Framework for Protein Folding - [[2307.04603](https://arxiv.org/abs/2307.04603)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04603.md)].
- Goal-Conditioned Predictive Coding as an Implicit Planner for Offline Reinforcement Learning - [[2307.03406](https://arxiv.org/abs/2307.03406)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03406.md)].
- Teaching Arithmetic to Small Transformers - [[2307.03381](https://arxiv.org/abs/2307.03381)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03381.md)].
- BiPhone: Modeling Inter Language Phonetic Influences in Text - [[2307.03322](https://arxiv.org/abs/2307.03322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03322.md)].
- Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers - [[2307.03183](https://arxiv.org/abs/2307.03183)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03183.md)].
- Lost in the Middle: How Language Models Use Long Contexts - [[2307.03172](https://arxiv.org/abs/2307.03172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03172.md)].
- Focused Transformer: Contrastive Training for Context Scaling - [[2307.03170](https://arxiv.org/abs/2307.03170)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03170.md)].
- VideoGLUE: Video General Understanding Evaluation of Foundation Models - [[2307.03166](https://arxiv.org/abs/2307.03166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03166.md)].
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability - [[2307.03135](https://arxiv.org/abs/2307.03135)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03135.md)].
- Frontier AI Regulation: Managing Emerging Risks to Public Safety - [[2307.03718](https://arxiv.org/abs/2307.03718)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03718.md)].
- A Survey on Evaluation of Large Language Models - [[2307.03109](https://arxiv.org/abs/2307.03109)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03109.md)].
- Improving Retrieval-Augmented Large Language Models via Data Importance Learning - [[2307.03027](https://arxiv.org/abs/2307.03027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03027.md)].
- Style Over Substance: Evaluation Biases for Large Language Models - [[2307.03025](https://arxiv.org/abs/2307.03025)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03025.md)].
- Contrast Is All You Need - [[2307.02882](https://arxiv.org/abs/2307.02882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02882.md)].
- What Should Data Science Education Do with Large Language Models? - [[2307.02792](https://arxiv.org/abs/2307.02792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02792.md)].
- Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts - [[2307.02768](https://arxiv.org/abs/2307.02768)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02768.md)].
- Wireless Multi-Agent Generative AI: From Connected Intelligence to Collective Intelligence - [[2307.02757](https://arxiv.org/abs/2307.02757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02757.md)].
- SkipDecode: Autoregressive Skip Decoding with Batching and Caching for Efficient LLM Inference - [[2307.02628](https://arxiv.org/abs/2307.02628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02628.md)].
- LongNet: Scaling Transformers to 1,000,000,000 Tokens - [[2307.02486](https://arxiv.org/abs/2307.02486)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02486.md)].
- Building Cooperative Embodied Agents Modularly with Large Language Models - [[2307.02485](https://arxiv.org/abs/2307.02485)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02485.md)].
- Elastic Decision Transformer - [[2307.02484](https://arxiv.org/abs/2307.02484)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02484.md)].
- Jailbroken: How Does LLM Safety Training Fail? - [[2307.02483](https://arxiv.org/abs/2307.02483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02483.md)].
- Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks - [[2307.02477](https://arxiv.org/abs/2307.02477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02477.md)].
- What Matters in Training a GPT4-Style Language Model with Multimodal Inputs? - [[2307.02469](https://arxiv.org/abs/2307.02469)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02469.md)].
- Using Rewrite Strategies for Efficient Functional Automatic Differentiation - [[2307.02447](https://arxiv.org/abs/2307.02447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02447.md)].
- DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models - [[2307.02421](https://arxiv.org/abs/2307.02421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02421.md)].
- MSViT: Dynamic Mixed-Scale Tokenization for Vision Transformers - [[2307.02321](https://arxiv.org/abs/2307.02321)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02321.md)].
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need - [[2307.02249](https://arxiv.org/abs/2307.02249)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02249.md)].
- Open-Source Large Language Models Outperform Crowd Workers and Approach ChatGPT in Text-Annotation Tasks - [[2307.02179](https://arxiv.org/abs/2307.02179)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02179.md)].
- Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning - [[2307.03692](https://arxiv.org/abs/2307.03692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.03692.md)].
- Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning - [[2307.02053](https://arxiv.org/abs/2307.02053)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02053.md)].
- SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis - [[2307.01952](https://arxiv.org/abs/2307.01952)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01952.md)].
- Physics-based Motion Retargeting from Sparse Inputs - [[2307.01938](https://arxiv.org/abs/2307.01938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01938.md)].
- Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners - [[2307.01928](https://arxiv.org/abs/2307.01928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01928.md)].
- Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning - [[2307.01849](https://arxiv.org/abs/2307.01849)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01849.md)].
- Embodied Task Planning with Large Language Models - [[2307.01848](https://arxiv.org/abs/2307.01848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01848.md)].
- Collaborative Score Distillation for Consistent Visual Synthesis - [[2307.04787](https://arxiv.org/abs/2307.04787)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.04787.md)].
- DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation - [[2307.01831](https://arxiv.org/abs/2307.01831)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01831.md)].
- Pretraining is All You Need: A Multi-Atlas Enhanced Transformer Framework for Autism Spectrum Disorder Classification - [[2307.01759](https://arxiv.org/abs/2307.01759)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01759.md)].
- Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data - [[2307.01701](https://arxiv.org/abs/2307.01701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01701.md)].
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding - [[2307.02499](https://arxiv.org/abs/2307.02499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.02499.md)].
- ChildPlay: A New Benchmark for Understanding Children's Gaze Behaviour - [[2307.01630](https://arxiv.org/abs/2307.01630)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01630.md)].
- On Hofstadter's G-sequence - [[2307.01471](https://arxiv.org/abs/2307.01471)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01471.md)].
- Hybrid two-level MCMC for Bayesian Inverse Problems - [[2307.01463](https://arxiv.org/abs/2307.01463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01463.md)].
- Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection - [[2307.01462](https://arxiv.org/abs/2307.01462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01462.md)].
- Multi-Task Learning Improves Performance In Deep Argument Mining Models - [[2307.01401](https://arxiv.org/abs/2307.01401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01401.md)].
- EIGER IV: The cool 10$^4$K circumgalactic environment of high-$z$ galaxies reveals remarkably efficient IGM enrichment - [[2307.01273](https://arxiv.org/abs/2307.01273)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01273.md)].
- Real-time Monocular Full-body Capture in World Space via Sequential Proxy-to-Motion Learning - [[2307.01200](https://arxiv.org/abs/2307.01200)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01200.md)].
- Segment Anything Meets Point Tracking - [[2307.01197](https://arxiv.org/abs/2307.01197)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01197.md)].
- Variational integrals on Hessian spaces: partial regularity for critical points - [[2307.01191](https://arxiv.org/abs/2307.01191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01191.md)].
- Characterisation of three-body loss in ${}^{166}$Er and optimised production of large Bose-Einstein condensates - [[2307.01245](https://arxiv.org/abs/2307.01245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01245.md)].
- Improving Language Plasticity via Pretraining with Active Forgetting - [[2307.01163](https://arxiv.org/abs/2307.01163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01163.md)].
- SCITUNE: Aligning Large Language Models with Scientific Multimodal Instructions - [[2307.01139](https://arxiv.org/abs/2307.01139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01139.md)].
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion - [[2307.01097](https://arxiv.org/abs/2307.01097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01097.md)].
- Scalable quantum neural networks by few quantum resources - [[2307.01017](https://arxiv.org/abs/2307.01017)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01017.md)].
- Visual Instruction Tuning with Polite Flamingo - [[2307.01003](https://arxiv.org/abs/2307.01003)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01003.md)].
- NOMA-Assisted Grant-Free Transmission: How to Design Pre-Configured SNR Levels? - [[2307.00990](https://arxiv.org/abs/2307.00990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00990.md)].
- Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset - [[2307.00818](https://arxiv.org/abs/2307.00818)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00818.md)].
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling - [[2307.00804](https://arxiv.org/abs/2307.00804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00804.md)].
- EmoGen: Eliminating Subjective Bias in Emotional Music Generation - [[2307.01229](https://arxiv.org/abs/2307.01229)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01229.md)].
- JourneyDB: A Benchmark for Generative Image Understanding - [[2307.00716](https://arxiv.org/abs/2307.00716)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00716.md)].
- LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance - [[2307.00522](https://arxiv.org/abs/2307.00522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00522.md)].
- Almost sure bounds for a weighted Steinhaus random multiplicative function - [[2307.00499](https://arxiv.org/abs/2307.00499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00499.md)].
- One Copy Is All You Need: Resource-Efficient Streaming of Medical Imaging Data at Scale - [[2307.00438](https://arxiv.org/abs/2307.00438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00438.md)].
- ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models - [[2307.00398](https://arxiv.org/abs/2307.00398)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00398.md)].
- DoReMi: Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment - [[2307.00329](https://arxiv.org/abs/2307.00329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00329.md)].
- Personality Traits in Large Language Models - [[2307.00184](https://arxiv.org/abs/2307.00184)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00184.md)].

### June 2023

- Meta-training with Demonstration Retrieval for Efficient Few-shot Learning - [[2307.00119](https://arxiv.org/abs/2307.00119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00119.md)].
- Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control - [[2307.00117](https://arxiv.org/abs/2307.00117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00117.md)].
- Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing - [[2306.17848](https://arxiv.org/abs/2306.17848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17848.md)].
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors - [[2306.17843](https://arxiv.org/abs/2306.17843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17843.md)].
- SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs - [[2306.17842](https://arxiv.org/abs/2306.17842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17842.md)].
- Statler: State-Maintaining Language Models for Embodied Reasoning - [[2306.17840](https://arxiv.org/abs/2306.17840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17840.md)].
- DisCo: Disentangled Control for Referring Human Dance Generation in Real World - [[2307.00040](https://arxiv.org/abs/2307.00040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.00040.md)].
- Stay on topic with Classifier-Free Guidance - [[2306.17806](https://arxiv.org/abs/2306.17806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17806.md)].
- Topologically Attributed Graphs for Shape Discrimination - [[2306.17805](https://arxiv.org/abs/2306.17805)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17805.md)].
- The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit - [[2306.17759](https://arxiv.org/abs/2306.17759)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17759.md)].
- Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting - [[2306.17563](https://arxiv.org/abs/2306.17563)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17563.md)].
- Preference Ranking Optimization for Human Alignment - [[2306.17492](https://arxiv.org/abs/2306.17492)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17492.md)].
- ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation - [[2306.17319](https://arxiv.org/abs/2306.17319)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17319.md)].
- Towards Zero-Shot Scale-Aware Monocular Depth Estimation - [[2306.17253](https://arxiv.org/abs/2306.17253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17253.md)].
- Generative AI for Programming Education: Benchmarking ChatGPT, GPT-4, and Human Tutors - [[2306.17156](https://arxiv.org/abs/2306.17156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17156.md)].
- Generate Anything Anywhere in Any Scene - [[2306.17154](https://arxiv.org/abs/2306.17154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17154.md)].
- Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation - [[2306.17115](https://arxiv.org/abs/2306.17115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17115.md)].
- LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding - [[2306.17107](https://arxiv.org/abs/2306.17107)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17107.md)].
- End-to-end Autonomous Driving: Challenges and Frontiers - [[2306.16927](https://arxiv.org/abs/2306.16927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16927.md)].
- BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion - [[2306.16940](https://arxiv.org/abs/2306.16940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16940.md)].
- DreamDiffusion: Generating High-Quality Images from Brain EEG Signals - [[2306.16934](https://arxiv.org/abs/2306.16934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16934.md)].
- One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization - [[2306.16928](https://arxiv.org/abs/2306.16928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16928.md)].
- NeuralFuse: Learning to Improve the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes - [[2306.16869](https://arxiv.org/abs/2306.16869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16869.md)].
- ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch - [[2306.16857](https://arxiv.org/abs/2306.16857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16857.md)].
- Benchmarking Large Language Model Capabilities for Conditional Generation - [[2306.16793](https://arxiv.org/abs/2306.16793)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16793.md)].
- Dynamic-Resolution Model Learning for Object Pile Manipulation - [[2306.16700](https://arxiv.org/abs/2306.16700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16700.md)].
- KITE: Keypoint-Conditioned Policies for Semantic Manipulation - [[2306.16605](https://arxiv.org/abs/2306.16605)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16605.md)].
- An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs - [[2306.16601](https://arxiv.org/abs/2306.16601)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16601.md)].
- LLM Calibration and Automatic Hallucination Detection via Pareto Optimal Self-supervision - [[2306.16564](https://arxiv.org/abs/2306.16564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16564.md)].
- Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language - [[2306.16410](https://arxiv.org/abs/2306.16410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16410.md)].
- On the Exploitability of Instruction Tuning - [[2306.17194](https://arxiv.org/abs/2306.17194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17194.md)].
- Towards Measuring the Representation of Subjective Global Opinions in Language Models - [[2306.16388](https://arxiv.org/abs/2306.16388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16388.md)].
- Inferring the Goals of Communicating Agents from Actions and Instructions - [[2306.16207](https://arxiv.org/abs/2306.16207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16207.md)].
- SVNR: Spatially-variant Noise Removal with Denoising Diffusion - [[2306.16052](https://arxiv.org/abs/2306.16052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16052.md)].
- Positive Label Is All You Need for Multi-Label Classification - [[2306.16016](https://arxiv.org/abs/2306.16016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16016.md)].
- Accelerating Transducers through Adjacent Token Merging - [[2306.16009](https://arxiv.org/abs/2306.16009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16009.md)].
- Confidence Ranking for CTR Prediction - [[2307.01206](https://arxiv.org/abs/2307.01206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2307.01206.md)].
- Subclass-balancing Contrastive Learning for Long-tailed Recognition - [[2306.15925](https://arxiv.org/abs/2306.15925)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15925.md)].
- Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias - [[2306.15895](https://arxiv.org/abs/2306.15895)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15895.md)].
- HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution - [[2306.15794](https://arxiv.org/abs/2306.15794)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15794.md)].
- REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction - [[2306.15724](https://arxiv.org/abs/2306.15724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15724.md)].
- PoseDiffusion: Solving Pose Estimation via Diffusion-aided Bundle Adjustment - [[2306.15667](https://arxiv.org/abs/2306.15667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15667.md)].
- CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a \$10,000 Budget; An Extra \$4,000 Unlocks 81.8% Accuracy - [[2306.15658](https://arxiv.org/abs/2306.15658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15658.md)].
- Asynchronous Algorithmic Alignment with Cocycles - [[2306.15632](https://arxiv.org/abs/2306.15632)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15632.md)].
- LeanDojo: Theorem Proving with Retrieval-Augmented Language Models - [[2306.15626](https://arxiv.org/abs/2306.15626)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15626.md)].
- Extending Context Window of Large Language Models via Positional Interpolation - [[2306.15595](https://arxiv.org/abs/2306.15595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15595.md)].
- Explainable Multimodal Emotion Reasoning - [[2306.15401](https://arxiv.org/abs/2306.15401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15401.md)].
- Length Generalization in Arithmetic Transformers - [[2306.15400](https://arxiv.org/abs/2306.15400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15400.md)].
- 3D-Speaker: A Large-Scale Multi-Device, Multi-Distance, and Multi-Dialect Corpus for Speech Representation Disentanglement - [[2306.15354](https://arxiv.org/abs/2306.15354)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15354.md)].
- MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation - [[2306.15253](https://arxiv.org/abs/2306.15253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15253.md)].
- Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic - [[2306.15195](https://arxiv.org/abs/2306.15195)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15195.md)].
- MIMIC: Masked Image Modeling with Image Correspondences - [[2306.15128](https://arxiv.org/abs/2306.15128)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15128.md)].
- Understanding In-Context Learning via Supportive Pretraining Data - [[2306.15091](https://arxiv.org/abs/2306.15091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15091.md)].
- InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback - [[2306.14898](https://arxiv.org/abs/2306.14898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14898.md)].
- RVT: Robotic View Transformer for 3D Object Manipulation - [[2306.14896](https://arxiv.org/abs/2306.14896)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14896.md)].
- Supervised Pretraining Can Learn In-Context Reinforcement Learning - [[2306.14892](https://arxiv.org/abs/2306.14892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14892.md)].
- Restart Sampling for Improving Generative Processes - [[2306.14878](https://arxiv.org/abs/2306.14878)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14878.md)].
- Are aligned neural networks adversarially aligned? - [[2306.15447](https://arxiv.org/abs/2306.15447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.15447.md)].
- ViNT: A Foundation Model for Visual Navigation - [[2306.14846](https://arxiv.org/abs/2306.14846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14846.md)].
- Kosmos-2: Grounding Multimodal Large Language Models to the World - [[2306.14824](https://arxiv.org/abs/2306.14824)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14824.md)].
- MotionGPT: Human Motion as a Foreign Language - [[2306.14795](https://arxiv.org/abs/2306.14795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14795.md)].
- SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality - [[2306.14610](https://arxiv.org/abs/2306.14610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14610.md)].
- Aligning Large Multi-Modal Model with Robust Instruction Tuning - [[2306.14565](https://arxiv.org/abs/2306.14565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14565.md)].
- A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis - [[2306.14544](https://arxiv.org/abs/2306.14544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14544.md)].
- CEIL: Generalized Contextual Imitation Learning - [[2306.14534](https://arxiv.org/abs/2306.14534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14534.md)].
- ParameterNet: Parameters Are All You Need for Large-scale Visual Pretraining of Mobile Networks - [[2306.14525](https://arxiv.org/abs/2306.14525)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14525.md)].
- RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools - [[2306.14447](https://arxiv.org/abs/2306.14447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14447.md)].
- DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing - [[2306.14435](https://arxiv.org/abs/2306.14435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14435.md)].
- Faster Segment Anything: Towards Lightweight SAM for Mobile Applications - [[2306.14289](https://arxiv.org/abs/2306.14289)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14289.md)].
- BiFF: Bi-level Future Fusion with Polyline-based Coordinate for Interactive Trajectory Prediction - [[2306.14161](https://arxiv.org/abs/2306.14161)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14161.md)].
- DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data - [[2306.14153](https://arxiv.org/abs/2306.14153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14153.md)].
- Language models are weak learners - [[2306.14101](https://arxiv.org/abs/2306.14101)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14101.md)].
- SEEDS: Emulation of Weather Forecast Ensembles with Diffusion Models - [[2306.14066](https://arxiv.org/abs/2306.14066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14066.md)].
- DesCo: Learning Object Recognition with Rich Language Descriptions - [[2306.14060](https://arxiv.org/abs/2306.14060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14060.md)].
- H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models - [[2306.14048](https://arxiv.org/abs/2306.14048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14048.md)].
- Thinking Like an Annotator: Generation of Dataset Labeling Instructions - [[2306.14035](https://arxiv.org/abs/2306.14035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14035.md)].
- Cross-Validation Is All You Need: A Statistical Approach To Label Noise Estimation - [[2306.13990](https://arxiv.org/abs/2306.13990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13990.md)].
- Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data - [[2306.13840](https://arxiv.org/abs/2306.13840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13840.md)].
- LLM-Assisted Content Analysis: Using Large Language Models to Support Deductive Coding - [[2306.14924](https://arxiv.org/abs/2306.14924)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.14924.md)].
- Swin-Free: Achieving Better Cross-Window Attention and Efficiency with Size-varying Window - [[2306.13776](https://arxiv.org/abs/2306.13776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13776.md)].
- Zero-shot spatial layout conditioning for text-to-image diffusion models - [[2306.13754](https://arxiv.org/abs/2306.13754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13754.md)].
- Bring Your Own Data! Self-Supervised Evaluation for Large Language Models - [[2306.13651](https://arxiv.org/abs/2306.13651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13651.md)].
- GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models - [[2306.13649](https://arxiv.org/abs/2306.13649)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13649.md)].
- OpenMask3D: Open-Vocabulary 3D Instance Segmentation - [[2306.13631](https://arxiv.org/abs/2306.13631)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13631.md)].
- System-Level Natural Language Feedback - [[2306.13588](https://arxiv.org/abs/2306.13588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13588.md)].
- Scaling MLPs: A Tale of Inductive Bias - [[2306.13575](https://arxiv.org/abs/2306.13575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13575.md)].
- A Survey on Multimodal Large Language Models - [[2306.13549](https://arxiv.org/abs/2306.13549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13549.md)].
- DreamEditor: Text-Driven 3D Scene Editing with Neural Fields - [[2306.13455](https://arxiv.org/abs/2306.13455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13455.md)].
- Long-range Language Modeling with Self-retrieval - [[2306.13421](https://arxiv.org/abs/2306.13421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13421.md)].
- MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models - [[2306.13394](https://arxiv.org/abs/2306.13394)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13394.md)].
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces - [[2306.13091](https://arxiv.org/abs/2306.13091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13091.md)].
- Continuous Layout Editing of Single Images with Diffusion Models - [[2306.13078](https://arxiv.org/abs/2306.13078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.13078.md)].
- Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing - [[2306.12929](https://arxiv.org/abs/2306.12929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12929.md)].
- AudioPaLM: A Large Language Model That Can Speak and Listen - [[2306.12925](https://arxiv.org/abs/2306.12925)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12925.md)].
- Learning from Visual Observation via Offline Pretrained State-to-Go Transformer - [[2306.12860](https://arxiv.org/abs/2306.12860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12860.md)].
- Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields - [[2306.12760](https://arxiv.org/abs/2306.12760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12760.md)].
- SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer - [[2306.12677](https://arxiv.org/abs/2306.12677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12677.md)].
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought - [[2306.12672](https://arxiv.org/abs/2306.12672)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12672.md)].
- Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities - [[2306.12609](https://arxiv.org/abs/2306.12609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12609.md)].
- Local 3D Editing via 3D Distillation of CLIP Knowledge - [[2306.12570](https://arxiv.org/abs/2306.12570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12570.md)].
- FFCV: Accelerating Training by Removing Data Bottlenecks - [[2306.12517](https://arxiv.org/abs/2306.12517)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12517.md)].
- Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference - [[2306.12509](https://arxiv.org/abs/2306.12509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12509.md)].
- DreamTime: An Improved Optimization Strategy for Text-to-3D Content Creation - [[2306.12422](https://arxiv.org/abs/2306.12422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12422.md)].
- OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents - [[2306.16527](https://arxiv.org/abs/2306.16527)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.16527.md)].
- Fast Segment Anything - [[2306.12156](https://arxiv.org/abs/2306.12156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12156.md)].
- Mass-Producing Failures of Multimodal Systems with Language Models - [[2306.12105](https://arxiv.org/abs/2306.12105)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12105.md)].
- HSR-Diff: Hyperspectral Image Super-Resolution via Conditional Diffusion Models - [[2306.12085](https://arxiv.org/abs/2306.12085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12085.md)].
- EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations - [[2306.12059](https://arxiv.org/abs/2306.12059)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.12059.md)].
- Training Transformers with 4-bit Integers - [[2306.11987](https://arxiv.org/abs/2306.11987)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11987.md)].
- Opportunities and Risks of LLMs for Scalable Deliberation with Polis - [[2306.11932](https://arxiv.org/abs/2306.11932)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11932.md)].
- Randomized Quantization is All You Need for Differential Privacy in Federated Learning - [[2306.11913](https://arxiv.org/abs/2306.11913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11913.md)].
- SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling - [[2306.11886](https://arxiv.org/abs/2306.11886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11886.md)].
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision - [[2306.11719](https://arxiv.org/abs/2306.11719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11719.md)].
- RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation - [[2306.11706](https://arxiv.org/abs/2306.11706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11706.md)].
- Textbooks Are All You Need - [[2306.11644](https://arxiv.org/abs/2306.11644)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11644.md)].
- Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion - [[2306.11593](https://arxiv.org/abs/2306.11593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11593.md)].
- HomeRobot: Open-Vocabulary Mobile Manipulation - [[2306.11565](https://arxiv.org/abs/2306.11565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11565.md)].
- Improving visual image reconstruction from human brain activity using latent diffusion models via multiple decoded inputs - [[2306.11536](https://arxiv.org/abs/2306.11536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11536.md)].
- RM-PRT: Realistic Robotic Manipulation Simulator and Benchmark with Progressive Reasoning Tasks - [[2306.11335](https://arxiv.org/abs/2306.11335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11335.md)].
- Dynamic Perceiver for Efficient Visual Recognition - [[2306.11248](https://arxiv.org/abs/2306.11248)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11248.md)].
- Quilt-1M: One Million Image-Text Pairs for Histopathology - [[2306.11207](https://arxiv.org/abs/2306.11207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11207.md)].
- Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset - [[2306.11167](https://arxiv.org/abs/2306.11167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11167.md)].
- FSAR: Federated Skeleton-based Action Recognition with Adaptive Topology Structure and Knowledge Distillation - [[2306.11046](https://arxiv.org/abs/2306.11046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11046.md)].
- RepoFusion: Training Code Models to Understand Your Repository - [[2306.10998](https://arxiv.org/abs/2306.10998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10998.md)].
- BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models - [[2306.10968](https://arxiv.org/abs/2306.10968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10968.md)].
- MotionGPT: Finetuned LLMs are General-Purpose Motion Generators - [[2306.10900](https://arxiv.org/abs/2306.10900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10900.md)].
- 3D VR Sketch Guided 3D Shape Prototyping and Exploration - [[2306.10830](https://arxiv.org/abs/2306.10830)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10830.md)].
- Multitrack Music Transcription with a Time-Frequency Perceiver - [[2306.10785](https://arxiv.org/abs/2306.10785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10785.md)].
- Guiding Language Models of Code with Global Context using Monitors - [[2306.10763](https://arxiv.org/abs/2306.10763)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10763.md)].
- UniMC: A Unified Framework for Long-Term Memory Conversation via Relevance Representation Learning - [[2306.10543](https://arxiv.org/abs/2306.10543)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10543.md)].
- Point-Cloud Completion with Pretrained Text-to-image Diffusion Models - [[2306.10533](https://arxiv.org/abs/2306.10533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10533.md)].
- CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents - [[2306.10376](https://arxiv.org/abs/2306.10376)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10376.md)].
- GLIMMER: generalized late-interaction memory reranker - [[2306.10231](https://arxiv.org/abs/2306.10231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10231.md)].
- ZeRO++: Extremely Efficient Collective Communication for Giant Model Training - [[2306.10209](https://arxiv.org/abs/2306.10209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10209.md)].
- Meta-Personalizing Vision-Language Models to Find Named Instances in Video - [[2306.10169](https://arxiv.org/abs/2306.10169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10169.md)].
- MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing - [[2306.10012](https://arxiv.org/abs/2306.10012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10012.md)].
- CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search - [[2306.10008](https://arxiv.org/abs/2306.10008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10008.md)].
- Robot Learning with Sensorimotor Pre-training - [[2306.10007](https://arxiv.org/abs/2306.10007)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.10007.md)].
- Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering - [[2306.09996](https://arxiv.org/abs/2306.09996)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09996.md)].
- Evaluating Superhuman Models with Consistency Checks - [[2306.09983](https://arxiv.org/abs/2306.09983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09983.md)].
- LabelBench: A Comprehensive Framework for Benchmarking Label-Efficient Learning - [[2306.09910](https://arxiv.org/abs/2306.09910)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09910.md)].
- Demystifying GPT Self-Repair for Code Generation - [[2306.09896](https://arxiv.org/abs/2306.09896)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09896.md)].
- AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation - [[2306.09864](https://arxiv.org/abs/2306.09864)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09864.md)].
- Full Parameter Fine-tuning for Large Language Models with Limited Resources - [[2306.09782](https://arxiv.org/abs/2306.09782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09782.md)].
- Gradient is All You Need? - [[2306.09778](https://arxiv.org/abs/2306.09778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09778.md)].
- Scaling Open-Vocabulary Object Detection - [[2306.09683](https://arxiv.org/abs/2306.09683)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09683.md)].
- OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning - [[2306.09682](https://arxiv.org/abs/2306.09682)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09682.md)].
- CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models - [[2306.09635](https://arxiv.org/abs/2306.09635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09635.md)].
- CAJun: Continuous Adaptive Jumping using a Learned Centroidal Controller - [[2306.09557](https://arxiv.org/abs/2306.09557)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09557.md)].
- Block-State Transformer - [[2306.09539](https://arxiv.org/abs/2306.09539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09539.md)].
- Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models - [[2306.11732](https://arxiv.org/abs/2306.11732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.11732.md)].
- Inverse Scaling: When Bigger Isn't Better - [[2306.09479](https://arxiv.org/abs/2306.09479)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09479.md)].
- Explore, Establish, Exploit: Red Teaming Language Models from Scratch - [[2306.09442](https://arxiv.org/abs/2306.09442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09442.md)].
- Seeing the World through Your Eyes - [[2306.09348](https://arxiv.org/abs/2306.09348)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09348.md)].
- UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video - [[2306.09349](https://arxiv.org/abs/2306.09349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09349.md)].
- Rosetta Neurons: Mining the Common Units in a Model Zoo - [[2306.09346](https://arxiv.org/abs/2306.09346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09346.md)].
- Evaluating Data Attribution for Text-to-Image Models - [[2306.09345](https://arxiv.org/abs/2306.09345)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09345.md)].
- Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis - [[2306.09341](https://arxiv.org/abs/2306.09341)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09341.md)].
- DreamHuman: Animatable 3D Avatars from Text - [[2306.09329](https://arxiv.org/abs/2306.09329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09329.md)].
- Language-Guided Music Recommendation for Video via Prompt Analogies - [[2306.09327](https://arxiv.org/abs/2306.09327)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09327.md)].
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient - [[2306.09322](https://arxiv.org/abs/2306.09322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09322.md)].
- Diffusion Models for Zero-Shot Open-Vocabulary Segmentation - [[2306.09316](https://arxiv.org/abs/2306.09316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09316.md)].
- Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Theory of Mind - [[2306.09299](https://arxiv.org/abs/2306.09299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09299.md)].
- KoLA: Carefully Benchmarking World Knowledge of Large Language Models - [[2306.09296](https://arxiv.org/abs/2306.09296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09296.md)].
- A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception - [[2306.09266](https://arxiv.org/abs/2306.09266)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09266.md)].
- LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models - [[2306.09265](https://arxiv.org/abs/2306.09265)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09265.md)].
- Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories - [[2306.09224](https://arxiv.org/abs/2306.09224)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09224.md)].
- CMMLU: Measuring massive multitask language understanding in Chinese - [[2306.09212](https://arxiv.org/abs/2306.09212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09212.md)].
- NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations - [[2306.09109](https://arxiv.org/abs/2306.09109)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09109.md)].
- Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration - [[2306.09093](https://arxiv.org/abs/2306.09093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09093.md)].
- Behavioral Cloning via Search in Embedded Demonstration Dataset - [[2306.09082](https://arxiv.org/abs/2306.09082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.09082.md)].
- Re-Benchmarking Pool-Based Active Learning for Binary Classification - [[2306.08954](https://arxiv.org/abs/2306.08954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08954.md)].
- LOVM: Language-Only Vision Model Selection - [[2306.08893](https://arxiv.org/abs/2306.08893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08893.md)].
- EPIC Fields: Marrying 3D Geometry and Video Understanding - [[2306.08731](https://arxiv.org/abs/2306.08731)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08731.md)].
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing - [[2306.08707](https://arxiv.org/abs/2306.08707)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08707.md)].
- Toward Grounded Social Reasoning - [[2306.08651](https://arxiv.org/abs/2306.08651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08651.md)].
- Language to Rewards for Robotic Skill Synthesis - [[2306.08647](https://arxiv.org/abs/2306.08647)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08647.md)].
- Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models - [[2306.08641](https://arxiv.org/abs/2306.08641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08641.md)].
- AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn - [[2306.08640](https://arxiv.org/abs/2306.08640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08640.md)].
- TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement - [[2306.08637](https://arxiv.org/abs/2306.08637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08637.md)].
- Anticipatory Music Transformer - [[2306.08620](https://arxiv.org/abs/2306.08620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08620.md)].
- WizardCoder: Empowering Code Large Language Models with Evol-Instruct - [[2306.08568](https://arxiv.org/abs/2306.08568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08568.md)].
- Knowledge Distillation of Large Language Models - [[2306.08543](https://arxiv.org/abs/2306.08543)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08543.md)].
- TryOnDiffusion: A Tale of Two UNets - [[2306.08276](https://arxiv.org/abs/2306.08276)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08276.md)].
- Contrastive Loss is All You Need to Recover Analogies as Parallel Lines - [[2306.08221](https://arxiv.org/abs/2306.08221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08221.md)].
- Agile Catching with Whole-Body MPC and Blackbox Policy Learning - [[2306.08205](https://arxiv.org/abs/2306.08205)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08205.md)].
- h2oGPT: Democratizing Large Language Models - [[2306.08161](https://arxiv.org/abs/2306.08161)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08161.md)].
- Large-scale Language Model Rescoring on Long-form Data - [[2306.08133](https://arxiv.org/abs/2306.08133)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08133.md)].
- AVIS: Autonomous Visual Information Seeking with Large Language Models - [[2306.08129](https://arxiv.org/abs/2306.08129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08129.md)].
- DORSal: Diffusion for Object-centric Representations of Scenes $\textit{et al.}$ - [[2306.08068](https://arxiv.org/abs/2306.08068)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08068.md)].
- Tune As You Scale: Hyperparameter Optimization For Compute Efficient Training - [[2306.08055](https://arxiv.org/abs/2306.08055)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08055.md)].
- Efficient 3D Semantic Segmentation with Superpoint Transformer - [[2306.08045](https://arxiv.org/abs/2306.08045)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.08045.md)].
- Neural Scene Chronology - [[2306.07970](https://arxiv.org/abs/2306.07970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07970.md)].
- GeneCIS: A Benchmark for General Conditional Image Similarity - [[2306.07969](https://arxiv.org/abs/2306.07969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07969.md)].
- arXiVeri: Automatic table verification with GPT - [[2306.07968](https://arxiv.org/abs/2306.07968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07968.md)].
- One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning - [[2306.07967](https://arxiv.org/abs/2306.07967)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07967.md)].
- Hidden Biases of End-to-End Driving Models - [[2306.07957](https://arxiv.org/abs/2306.07957)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07957.md)].
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation - [[2306.07954](https://arxiv.org/abs/2306.07954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07954.md)].
- Questioning the Survey Responses of Large Language Models - [[2306.07951](https://arxiv.org/abs/2306.07951)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07951.md)].
- Image Captioners Are Scalable Vision Learners Too - [[2306.07915](https://arxiv.org/abs/2306.07915)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07915.md)].
- WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences - [[2306.07906](https://arxiv.org/abs/2306.07906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07906.md)].
- Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data - [[2306.07881](https://arxiv.org/abs/2306.07881)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07881.md)].
- Area is all you need: repeatable elements make stronger adversarial attacks - [[2306.07768](https://arxiv.org/abs/2306.07768)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07768.md)].
- E2E-LOAD: End-to-End Long-form Online Action Detection - [[2306.07703](https://arxiv.org/abs/2306.07703)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07703.md)].
- SayTap: Language to Quadrupedal Locomotion - [[2306.07580](https://arxiv.org/abs/2306.07580)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07580.md)].
- Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second - [[2306.07552](https://arxiv.org/abs/2306.07552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07552.md)].
- TART: A plug-and-play Transformer module for task-agnostic reasoning - [[2306.07536](https://arxiv.org/abs/2306.07536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07536.md)].
- Require Process Control? LSTMc is all you need! - [[2306.07510](https://arxiv.org/abs/2306.07510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07510.md)].
- AniFaceDrawing: Anime Portrait Exploration during Your Sketching - [[2306.07476](https://arxiv.org/abs/2306.07476)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07476.md)].
- 3D molecule generation by denoising voxel grids - [[2306.07473](https://arxiv.org/abs/2306.07473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07473.md)].
- Instant Multi-View Head Capture through Learnable Registration - [[2306.07437](https://arxiv.org/abs/2306.07437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07437.md)].
- Controlling Text-to-Image Diffusion by Orthogonal Finetuning - [[2306.07280](https://arxiv.org/abs/2306.07280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07280.md)].
- Scalable 3D Captioning with Pretrained Models - [[2306.07279](https://arxiv.org/abs/2306.07279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07279.md)].
- Zero-shot Composed Text-Image Retrieval - [[2306.07272](https://arxiv.org/abs/2306.07272)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07272.md)].
- Retrieval-Enhanced Contrastive Vision-Text Models - [[2306.07196](https://arxiv.org/abs/2306.07196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07196.md)].
- Benchmarking Neural Network Training Algorithms - [[2306.07179](https://arxiv.org/abs/2306.07179)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07179.md)].
- Augmenting Language Models with Long-Term Memory - [[2306.07174](https://arxiv.org/abs/2306.07174)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07174.md)].
- Transformers learn through gradual rank increase - [[2306.07042](https://arxiv.org/abs/2306.07042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07042.md)].
- Small Temperature is All You Need for Differentiable Architecture Search - [[2306.06855](https://arxiv.org/abs/2306.06855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06855.md)].
- Weakly supervised information extraction from inscrutable handwritten document images - [[2306.06823](https://arxiv.org/abs/2306.06823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06823.md)].
- Attention, Compilation, and Solver-based Symbolic Analysis are All You Need - [[2306.06755](https://arxiv.org/abs/2306.06755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06755.md)].
- LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark - [[2306.06687](https://arxiv.org/abs/2306.06687)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06687.md)].
- Face0: Instantaneously Conditioning a Text-to-Image Model on a Face - [[2306.06638](https://arxiv.org/abs/2306.06638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06638.md)].
- RestGPT: Connecting Large Language Models with Real-World RESTful APIs - [[2306.06624](https://arxiv.org/abs/2306.06624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06624.md)].
- High-Fidelity Audio Compression with Improved RVQGAN - [[2306.06546](https://arxiv.org/abs/2306.06546)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06546.md)].
- Learning Image-Adaptive Codebooks for Class-Agnostic Image Restoration - [[2306.06513](https://arxiv.org/abs/2306.06513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06513.md)].
- Aladdin: Zero-Shot Hallucination of Stylized 3D Assets from Abstract Scene Descriptions - [[2306.06212](https://arxiv.org/abs/2306.06212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06212.md)].
- FasterViT: Fast Vision Transformers with Hierarchical Attention - [[2306.06189](https://arxiv.org/abs/2306.06189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06189.md)].
- Value function estimation using conditional diffusion models for control - [[2306.07290](https://arxiv.org/abs/2306.07290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07290.md)].
- Realistic Saliency Guided Image Enhancement - [[2306.06092](https://arxiv.org/abs/2306.06092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06092.md)].
- Mind2Web: Towards a Generalist Agent for the Web - [[2306.06070](https://arxiv.org/abs/2306.06070)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06070.md)].
- GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields - [[2306.06044](https://arxiv.org/abs/2306.06044)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06044.md)].
- DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds - [[2306.06023](https://arxiv.org/abs/2306.06023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06023.md)].
- S$^{3}$: Increasing GPU Utilization during Generative Inference for Higher Throughput - [[2306.06000](https://arxiv.org/abs/2306.06000)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.06000.md)].
- GPT-Calls: Enhancing Call Segmentation and Tagging by Generating Synthetic Conversations via Large Language Models - [[2306.07941](https://arxiv.org/abs/2306.07941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07941.md)].
- Evaluating the Social Impact of Generative AI Systems in Systems and Society - [[2306.05949](https://arxiv.org/abs/2306.05949)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05949.md)].
- Can Large Language Models Infer Causation from Correlation? - [[2306.05836](https://arxiv.org/abs/2306.05836)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05836.md)].
- Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation - [[2306.05783](https://arxiv.org/abs/2306.05783)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05783.md)].
- Embodied Executable Policy Learning with Language-based Scene Summarization - [[2306.05696](https://arxiv.org/abs/2306.05696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05696.md)].
- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena - [[2306.05685](https://arxiv.org/abs/2306.05685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05685.md)].
- On the Importance of Feature Decorrelation for Unsupervised Representation Learning in Reinforcement Learning - [[2306.05637](https://arxiv.org/abs/2306.05637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05637.md)].
- Speech-to-Text Adapter and Speech-to-Entity Retriever Augmented LLMs for Speech Understanding - [[2306.07944](https://arxiv.org/abs/2306.07944)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07944.md)].
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping - [[2306.05544](https://arxiv.org/abs/2306.05544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05544.md)].
- Multi-Modal Classifiers for Open-Vocabulary Object Detection - [[2306.05493](https://arxiv.org/abs/2306.05493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05493.md)].
- Grounded Text-to-Image Synthesis with Attention Refocusing - [[2306.05427](https://arxiv.org/abs/2306.05427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05427.md)].
- Background Prompting for Improved Object Depth - [[2306.05428](https://arxiv.org/abs/2306.05428)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05428.md)].
- MIMIC-IT: Multi-Modal In-Context Instruction Tuning - [[2306.05425](https://arxiv.org/abs/2306.05425)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05425.md)].
- Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models - [[2306.05424](https://arxiv.org/abs/2306.05424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05424.md)].
- Tracking Everything Everywhere All at Once - [[2306.05422](https://arxiv.org/abs/2306.05422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05422.md)].
- Scaling Spherical CNNs - [[2306.05420](https://arxiv.org/abs/2306.05420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05420.md)].
- R-MAE: Regions Meet Masked Autoencoders - [[2306.05411](https://arxiv.org/abs/2306.05411)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05411.md)].
- LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs - [[2306.05410](https://arxiv.org/abs/2306.05410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05410.md)].
- Matting Anything - [[2306.05399](https://arxiv.org/abs/2306.05399)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05399.md)].
- Modular Visual Question Answering via Code Generation - [[2306.05392](https://arxiv.org/abs/2306.05392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05392.md)].
- Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models - [[2306.05357](https://arxiv.org/abs/2306.05357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05357.md)].
- ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases - [[2306.05301](https://arxiv.org/abs/2306.05301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05301.md)].
- Simple and Controllable Music Generation - [[2306.05284](https://arxiv.org/abs/2306.05284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05284.md)].
- M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models - [[2306.05179](https://arxiv.org/abs/2306.05179)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05179.md)].
- SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions - [[2306.05178](https://arxiv.org/abs/2306.05178)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05178.md)].
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization - [[2306.05087](https://arxiv.org/abs/2306.05087)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05087.md)].
- ScaleDet: A Scalable Multi-Dataset Object Detector - [[2306.04849](https://arxiv.org/abs/2306.04849)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04849.md)].
- Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts - [[2306.04845](https://arxiv.org/abs/2306.04845)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04845.md)].
- Optimizing ViViT Training: Time and Memory Reduction for Action Recognition - [[2306.04822](https://arxiv.org/abs/2306.04822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04822.md)].
- INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models - [[2306.04757](https://arxiv.org/abs/2306.04757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04757.md)].
- How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources - [[2306.04751](https://arxiv.org/abs/2306.04751)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04751.md)].
- Improving Open Language Models by Learning from Organic Interactions - [[2306.04707](https://arxiv.org/abs/2306.04707)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04707.md)].
- On the Reliability of Watermarks for Large Language Models - [[2306.04634](https://arxiv.org/abs/2306.04634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04634.md)].
- Designing a Better Asymmetric VQGAN for StableDiffusion - [[2306.04632](https://arxiv.org/abs/2306.04632)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04632.md)].
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections - [[2306.04619](https://arxiv.org/abs/2306.04619)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04619.md)].
- PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts - [[2306.04528](https://arxiv.org/abs/2306.04528)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04528.md)].
- Improving neural network representations using human similarity judgments - [[2306.04507](https://arxiv.org/abs/2306.04507)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04507.md)].
- Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards - [[2306.04488](https://arxiv.org/abs/2306.04488)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04488.md)].
- M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning - [[2306.04387](https://arxiv.org/abs/2306.04387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04387.md)].
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks - [[2306.04362](https://arxiv.org/abs/2306.04362)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04362.md)].
- MobileNMT: Enabling Translation in 15MB and 30ms - [[2306.04235](https://arxiv.org/abs/2306.04235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04235.md)].
- Benchmarking Foundation Models with Language-Model-as-an-Examiner - [[2306.04181](https://arxiv.org/abs/2306.04181)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04181.md)].
- Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions - [[2306.04140](https://arxiv.org/abs/2306.04140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04140.md)].
- Text-only Domain Adaptation using Unified Speech-Text Representation in Transducer - [[2306.04076](https://arxiv.org/abs/2306.04076)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04076.md)].
- Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings - [[2306.04064](https://arxiv.org/abs/2306.04064)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04064.md)].
- LLMZip: Lossless Text Compression using Large Language Models - [[2306.04050](https://arxiv.org/abs/2306.04050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04050.md)].
- Certified Reasoning with Language Models - [[2306.04031](https://arxiv.org/abs/2306.04031)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04031.md)].
- Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks - [[2306.04009](https://arxiv.org/abs/2306.04009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.04009.md)].
- ATT3D: Amortized Text-to-3D Object Synthesis - [[2306.07349](https://arxiv.org/abs/2306.07349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07349.md)].
- ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory - [[2306.03901](https://arxiv.org/abs/2306.03901)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03901.md)].
- Emergent Correspondence from Image Diffusion - [[2306.03881](https://arxiv.org/abs/2306.03881)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03881.md)].
- Deductive Verification of Chain-of-Thought Reasoning - [[2306.03872](https://arxiv.org/abs/2306.03872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03872.md)].
- LEACE: Perfect linear concept erasure in closed form - [[2306.03819](https://arxiv.org/abs/2306.03819)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03819.md)].
- Learning to Ground Instructional Articles in Videos through Narrations - [[2306.03802](https://arxiv.org/abs/2306.03802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03802.md)].
- Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach - [[2306.03604](https://arxiv.org/abs/2306.03604)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03604.md)].
- On Pitfalls of Test-Time Adaptation - [[2306.03536](https://arxiv.org/abs/2306.03536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03536.md)].
- Recognize Anything: A Strong Image Tagging Model - [[2306.03514](https://arxiv.org/abs/2306.03514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03514.md)].
- Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias - [[2306.03509](https://arxiv.org/abs/2306.03509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03509.md)].
- Ada-TTA: Towards Adaptive High-Quality Text-to-Talking Avatar Synthesis - [[2306.03504](https://arxiv.org/abs/2306.03504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03504.md)].
- A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep Reinforcement Learning from Vision and Touch - [[2306.03484](https://arxiv.org/abs/2306.03484)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03484.md)].
- Natural Language Commanding via Program Synthesis - [[2306.03460](https://arxiv.org/abs/2306.03460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03460.md)].
- Large Language Models of Code Fail at Completing Code with Potential Bugs - [[2306.03438](https://arxiv.org/abs/2306.03438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03438.md)].
- GaitGCI: Generative Counterfactual Intervention for Gait Recognition - [[2306.03428](https://arxiv.org/abs/2306.03428)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03428.md)].
- DVIS: Decoupled Video Instance Segmentation Framework - [[2306.03413](https://arxiv.org/abs/2306.03413)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03413.md)].
- Vid2Act: Activate Offline Videos for Visual RL - [[2306.03360](https://arxiv.org/abs/2306.03360)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03360.md)].
- Stabilizing Contrastive RL: Techniques for Offline Goal Reaching - [[2306.03346](https://arxiv.org/abs/2306.03346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03346.md)].
- Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents - [[2306.03314](https://arxiv.org/abs/2306.03314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03314.md)].
- A Static Evaluation of Code Completion by Large Language Models - [[2306.03203](https://arxiv.org/abs/2306.03203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03203.md)].
- Neuralangelo: High-Fidelity Neural Surface Reconstruction - [[2306.03092](https://arxiv.org/abs/2306.03092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03092.md)].
- MotionDiffuser: Controllable Multi-Agent Motion Prediction using Diffusion - [[2306.03083](https://arxiv.org/abs/2306.03083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03083.md)].
- InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models - [[2306.03082](https://arxiv.org/abs/2306.03082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03082.md)].
- HeadSculpt: Crafting 3D Head Avatars with Text - [[2306.03038](https://arxiv.org/abs/2306.03038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03038.md)].
- PokemonChat: Auditing ChatGPT for Pokémon Universe Knowledge - [[2306.03024](https://arxiv.org/abs/2306.03024)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03024.md)].
- BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields - [[2306.03000](https://arxiv.org/abs/2306.03000)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03000.md)].
- PolyVoice: Language Models for Speech to Speech Translation - [[2306.02982](https://arxiv.org/abs/2306.02982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02982.md)].
- Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding - [[2306.02858](https://arxiv.org/abs/2306.02858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02858.md)].
- Scene as Occupancy - [[2306.02851](https://arxiv.org/abs/2306.02851)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02851.md)].
- Orca: Progressive Learning from Complex Explanation Traces of GPT-4 - [[2306.02707](https://arxiv.org/abs/2306.02707)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02707.md)].
- LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion - [[2306.02561](https://arxiv.org/abs/2306.02561)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02561.md)].
- RecAgent: A Novel Simulation Paradigm for Recommender Systems - [[2306.02552](https://arxiv.org/abs/2306.02552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02552.md)].
- PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model - [[2306.02531](https://arxiv.org/abs/2306.02531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02531.md)].
- A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models - [[2306.02254](https://arxiv.org/abs/2306.02254)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02254.md)].
- SAM3D: Zero-Shot 3D Object Detection via Segment Anything Model - [[2306.02245](https://arxiv.org/abs/2306.02245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02245.md)].
- Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models - [[2306.02080](https://arxiv.org/abs/2306.02080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.02080.md)].
- Prompting Is All You Need: Automated Android Bug Replay with Large Language Models - [[2306.01987](https://arxiv.org/abs/2306.01987)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01987.md)].
- AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap - [[2306.01941](https://arxiv.org/abs/2306.01941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01941.md)].
- RITA: Group Attention is All You Need for Timeseries Analytics - [[2306.01926](https://arxiv.org/abs/2306.01926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01926.md)].
- The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation - [[2306.01923](https://arxiv.org/abs/2306.01923)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01923.md)].
- VisualGPTScore: Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores - [[2306.01879](https://arxiv.org/abs/2306.01879)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01879.md)].
- Probabilistic Adaptation of Text-to-Video Models - [[2306.01872](https://arxiv.org/abs/2306.01872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01872.md)].
- Binary and Ternary Natural Language Generation - [[2306.01841](https://arxiv.org/abs/2306.01841)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01841.md)].
- DaTaSeg: Taming a Universal Multi-Dataset Multi-Task Segmentation Model - [[2306.01736](https://arxiv.org/abs/2306.01736)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01736.md)].
- Evaluating Language Models for Mathematics through Interactions - [[2306.01694](https://arxiv.org/abs/2306.01694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01694.md)].
- Fine-Grained Human Feedback Gives Better Rewards for Language Model Training - [[2306.01693](https://arxiv.org/abs/2306.01693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01693.md)].
- Harnessing large-language models to generate private synthetic text - [[2306.01684](https://arxiv.org/abs/2306.01684)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01684.md)].
- STUDY: Socially Aware Temporally Causal Decoder Recommender Systems - [[2306.07946](https://arxiv.org/abs/2306.07946)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.07946.md)].
- Segment Anything in High Quality - [[2306.01567](https://arxiv.org/abs/2306.01567)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01567.md)].
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection - [[2306.01438](https://arxiv.org/abs/2306.01438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01438.md)].
- An Empirical Study on Challenging Math Problem Solving with GPT-4 - [[2306.01337](https://arxiv.org/abs/2306.01337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01337.md)].
- LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning - [[2306.01293](https://arxiv.org/abs/2306.01293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01293.md)].
- Responsible Task Automation: Empowering Large Language Models as Responsible Task Automators - [[2306.01242](https://arxiv.org/abs/2306.01242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01242.md)].
- Faster Causal Attention Over Large Sequences Through Sparse Flash Attention - [[2306.01160](https://arxiv.org/abs/2306.01160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01160.md)].
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only - [[2306.01116](https://arxiv.org/abs/2306.01116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01116.md)].
- Reimagining Retrieval Augmented Language Models for Answering Queries - [[2306.01061](https://arxiv.org/abs/2306.01061)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01061.md)].
- Diffusion Self-Guidance for Controllable Image Generation - [[2306.00986](https://arxiv.org/abs/2306.00986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00986.md)].
- StyleDrop: Text-to-Image Generation in Any Style - [[2306.00983](https://arxiv.org/abs/2306.00983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00983.md)].
- StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners - [[2306.00984](https://arxiv.org/abs/2306.00984)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00984.md)].
- SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds - [[2306.00980](https://arxiv.org/abs/2306.00980)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00980.md)].
- AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration - [[2306.00978](https://arxiv.org/abs/2306.00978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00978.md)].
- ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation - [[2306.00971](https://arxiv.org/abs/2306.00971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00971.md)].
- The Hidden Language of Diffusion Models - [[2306.00966](https://arxiv.org/abs/2306.00966)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00966.md)].
- Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image Generation - [[2306.00964](https://arxiv.org/abs/2306.00964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00964.md)].
- The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects - [[2306.00956](https://arxiv.org/abs/2306.00956)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00956.md)].
- Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance - [[2306.00943](https://arxiv.org/abs/2306.00943)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00943.md)].
- STEVE-1: A Generative Model for Text-to-Behavior in Minecraft - [[2306.00937](https://arxiv.org/abs/2306.00937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00937.md)].
- Inserting Anybody in Diffusion Models via Celeb Basis - [[2306.00926](https://arxiv.org/abs/2306.00926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00926.md)].
- T2IAT: Measuring Valence and Stereotypical Biases in Text-to-Image Generation - [[2306.00905](https://arxiv.org/abs/2306.00905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00905.md)].
- LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day - [[2306.00890](https://arxiv.org/abs/2306.00890)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00890.md)].
- Birth of a Transformer: A Memory Viewpoint - [[2306.00802](https://arxiv.org/abs/2306.00802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00802.md)].
- Microstructure quality control of steels using deep learning - [[2306.0797](https://arxiv.org/abs/2306.0797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.0797.md)].
- GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks? - [[2306.00693](https://arxiv.org/abs/2306.00693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00693.md)].
- Wuerstchen: Efficient Pretraining of Text-to-Image Models - [[2306.00637](https://arxiv.org/abs/2306.00637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00637.md)].
- ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing - [[2306.00622](https://arxiv.org/abs/2306.00622)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00622.md)].
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels - [[2306.00450](https://arxiv.org/abs/2306.00450)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00450.md)].
- Example-based Motion Synthesis via Generative Motion Matching - [[2306.00378](https://arxiv.org/abs/2306.00378)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00378.md)].
- Thought Cloning: Learning to Think while Acting by Imitating Human Thinking - [[2306.00323](https://arxiv.org/abs/2306.00323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00323.md)].
- Rethinking Model Evaluation as Narrowing the Socio-Technical Gap - [[2306.03100](https://arxiv.org/abs/2306.03100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.03100.md)].

### May 2023

- From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces - [[2306.00245](https://arxiv.org/abs/2306.00245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00245.md)].
- Bytes Are All You Need: Transformers Operating Directly On File Bytes - [[2306.00238](https://arxiv.org/abs/2306.00238)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00238.md)].
- SafeDiffuser: Safe Planning with Diffusion Probabilistic Models - [[2306.00148](https://arxiv.org/abs/2306.00148)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00148.md)].
- MuseCoco: Generating Symbolic Music from Text - [[2306.00110](https://arxiv.org/abs/2306.00110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00110.md)].
- MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training - [[2306.00107](https://arxiv.org/abs/2306.00107)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00107.md)].
- Humans in 4D: Reconstructing and Tracking Humans with Transformers - [[2305.20091](https://arxiv.org/abs/2305.20091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20091.md)].
- Improving CLIP Training with Language Rewrites - [[2305.20088](https://arxiv.org/abs/2305.20088)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20088.md)].
- Too Large; Data Reduction for Vision-Language Pre-Training - [[2305.20087](https://arxiv.org/abs/2305.20087)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20087.md)].
- Understanding and Mitigating Copying in Diffusion Models - [[2305.20086](https://arxiv.org/abs/2305.20086)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20086.md)]. - Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor - [[2305.20082](https://arxiv.org/abs/2305.20082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20082.md)]. - Efficient Diffusion Policies for Offline Reinforcement Learning - [[2305.20081](https://arxiv.org/abs/2305.20081)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20081.md)]. - Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust - [[2305.20030](https://arxiv.org/abs/2305.20030)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20030.md)]. - Monotonic Location Attention for Length Generalization - [[2305.20019](https://arxiv.org/abs/2305.20019)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20019.md)]. - Human or Not? A Gamified Approach to the Turing Test - [[2305.20010](https://arxiv.org/abs/2305.20010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.20010.md)]. - Deliberate then Generate: Enhanced Prompting Framework for Text Generation - [[2305.19835](https://arxiv.org/abs/2305.19835)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19835.md)]. - Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models - [[2305.19595](https://arxiv.org/abs/2305.19595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19595.md)]. - Neural Kernel Surface Reconstruction - [[2305.19590](https://arxiv.org/abs/2305.19590)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19590.md)]. - CodeTF: One-stop Transformer Library for State-of-the-art Code LLM - [[2306.00029](https://arxiv.org/abs/2306.00029)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00029.md)]. 
- PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning - [[2305.19472](https://arxiv.org/abs/2305.19472)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19472.md)]. - The Impact of Positional Encoding on Length Generalization in Transformers - [[2305.19466](https://arxiv.org/abs/2305.19466)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19466.md)]. - Bigger, Better, Faster: Human-level Atari with human-level efficiency - [[2305.19452](https://arxiv.org/abs/2305.19452)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19452.md)]. - Blockwise Parallel Transformer for Large Context Models - [[2305.19370](https://arxiv.org/abs/2305.19370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19370.md)]. - AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation - [[2305.19245](https://arxiv.org/abs/2305.19245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19245.md)]. - Grammar Prompting for Domain-Specific Language Generation with Large Language Models - [[2305.19234](https://arxiv.org/abs/2305.19234)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19234.md)]. - LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images - [[2305.19164](https://arxiv.org/abs/2305.19164)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19164.md)]. - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate - [[2305.19118](https://arxiv.org/abs/2305.19118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19118.md)]. - Nested Diffusion Processes for Anytime Image Generation - [[2305.19066](https://arxiv.org/abs/2305.19066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19066.md)]. 
- Rank-adaptive spectral pruning of convolutional layers during training - [[2305.19059](https://arxiv.org/abs/2305.19059)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19059.md)]. - StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation - [[2305.19012](https://arxiv.org/abs/2305.19012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19012.md)]. - Independent Component Alignment for Multi-Task Learning - [[2305.19000](https://arxiv.org/abs/2305.19000)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.19000.md)]. - LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus - [[2305.18802](https://arxiv.org/abs/2305.18802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18802.md)]. - HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance - [[2305.18766](https://arxiv.org/abs/2305.18766)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18766.md)]. - VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions - [[2305.18756](https://arxiv.org/abs/2305.18756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18756.md)]. - GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction - [[2305.18752](https://arxiv.org/abs/2305.18752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18752.md)]. - Real-World Image Variation by Aligning Diffusion Inversion Chain - [[2305.18729](https://arxiv.org/abs/2305.18729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18729.md)]. - Faith and Fate: Limits of Transformers on Compositionality - [[2305.18654](https://arxiv.org/abs/2305.18654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18654.md)]. - Controllable Text-to-Image Generation with GPT-4 - [[2305.18583](https://arxiv.org/abs/2305.18583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18583.md)]. 
- PaLI-X: On Scaling up a Multilingual Vision and Language Model - [[2305.18565](https://arxiv.org/abs/2305.18565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18565.md)]. - Brainformers: Trading Simplicity for Efficiency - [[2306.00008](https://arxiv.org/abs/2306.00008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00008.md)]. - RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths - [[2305.18295](https://arxiv.org/abs/2305.18295)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18295.md)]. - Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models - [[2305.18292](https://arxiv.org/abs/2305.18292)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18292.md)]. - Direct Preference Optimization: Your Language Model is Secretly a Reward Model - [[2305.18290](https://arxiv.org/abs/2305.18290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18290.md)]. - Photoswap: Personalized Subject Swapping in Images - [[2305.18286](https://arxiv.org/abs/2305.18286)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18286.md)]. - Contextual Object Detection with Multimodal Large Language Models - [[2305.18279](https://arxiv.org/abs/2305.18279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18279.md)]. - Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors - [[2305.18274](https://arxiv.org/abs/2305.18274)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18274.md)]. - Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising - [[2305.18264](https://arxiv.org/abs/2305.18264)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18264.md)]. - GlyphControl: Glyph Conditional Control for Visual Text Generation - [[2305.18259](https://arxiv.org/abs/2305.18259)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18259.md)]. 
- TaleCrafter: Interactive Story Visualization with Multiple Characters - [[2305.18247](https://arxiv.org/abs/2305.18247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18247.md)]. - Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models - [[2305.18189](https://arxiv.org/abs/2305.18189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18189.md)]. - Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models - [[2305.18507](https://arxiv.org/abs/2305.18507)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18507.md)]. - Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning - [[2305.18499](https://arxiv.org/abs/2305.18499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18499.md)]. - BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages - [[2305.18098](https://arxiv.org/abs/2305.18098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18098.md)]. - Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation - [[2305.18474](https://arxiv.org/abs/2305.18474)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18474.md)]. - DiffRate : Differentiable Compression Rate for Efficient Vision Transformers - [[2305.17997](https://arxiv.org/abs/2305.17997)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17997.md)]. - Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals - [[2305.18425](https://arxiv.org/abs/2305.18425)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18425.md)]. - Geometric Algebra Transformers - [[2305.18415](https://arxiv.org/abs/2305.18415)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18415.md)]. 
- KAFA: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models - [[2305.18373](https://arxiv.org/abs/2305.18373)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18373.md)]. - Data Minimization at Inference Time - [[2305.17593](https://arxiv.org/abs/2305.17593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17593.md)]. - Scalable Transformer for PDE Surrogate Modeling - [[2305.17560](https://arxiv.org/abs/2305.17560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17560.md)]. - The Curse of Recursion: Training on Generated Data Makes Models Forget - [[2305.17493](https://arxiv.org/abs/2305.17493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17493.md)]. - What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks - [[2305.18365](https://arxiv.org/abs/2305.18365)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18365.md)]. - SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks - [[2305.17390](https://arxiv.org/abs/2305.17390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17390.md)]. - MPCHAT: Towards Multimodal Persona-Grounded Conversation - [[2305.17388](https://arxiv.org/abs/2305.17388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17388.md)]. - Augmenting Large Language Model Translators via Translation Memories - [[2305.17367](https://arxiv.org/abs/2305.17367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17367.md)]. - DNA-GPT: Divergent N-Gram Analysis for Training-Free Detection of GPT-Generated Text - [[2305.17359](https://arxiv.org/abs/2305.17359)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17359.md)]. - Fine-Tuning Language Models with Just Forward Passes - [[2305.17333](https://arxiv.org/abs/2305.17333)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17333.md)]. 
- Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models - [[2305.17311](https://arxiv.org/abs/2305.17311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17311.md)]. - Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance - [[2305.17306](https://arxiv.org/abs/2305.17306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17306.md)]. - SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL - [[2306.00739](https://arxiv.org/abs/2306.00739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.00739.md)]. - Generating Images with Multimodal Language Models - [[2305.17216](https://arxiv.org/abs/2305.17216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17216.md)]. - Large Language Models as Tool Makers - [[2305.17126](https://arxiv.org/abs/2305.17126)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17126.md)]. - Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time - [[2305.17118](https://arxiv.org/abs/2305.17118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17118.md)]. - High-Fidelity Image Compression with Score-based Generative Models - [[2305.18231](https://arxiv.org/abs/2305.18231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18231.md)]. - ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing - [[2305.17098](https://arxiv.org/abs/2305.17098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17098.md)]. - Mindstorms in Natural Language-Based Societies of Mind - [[2305.17066](https://arxiv.org/abs/2305.17066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17066.md)]. - SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation - [[2305.17011](https://arxiv.org/abs/2305.17011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17011.md)]. 
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets - [[2305.17010](https://arxiv.org/abs/2305.17010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17010.md)]. - Three Towers: Flexible Contrastive Learning with Pretrained Image Models - [[2305.16999](https://arxiv.org/abs/2305.16999)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16999.md)]. - Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation - [[2305.16985](https://arxiv.org/abs/2305.16985)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16985.md)]. - Training Socially Aligned Language Models in Simulated Human Society - [[2305.16960](https://arxiv.org/abs/2305.16960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16960.md)]. - MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies - [[2305.16958](https://arxiv.org/abs/2305.16958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16958.md)]. - On Evaluating Adversarial Robustness of Large Vision-Language Models - [[2305.16934](https://arxiv.org/abs/2305.16934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16934.md)]. - MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting - [[2305.16896](https://arxiv.org/abs/2305.16896)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16896.md)]. - Playing repeated games with Large Language Models - [[2305.16867](https://arxiv.org/abs/2305.16867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16867.md)]. - Randomized Positional Encodings Boost Length Generalization of Transformers - [[2305.16843](https://arxiv.org/abs/2305.16843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16843.md)]. 
- Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup - [[2305.16817](https://arxiv.org/abs/2305.16817)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16817.md)]. - Do GPTs Produce Less Literal Translations? - [[2305.16806](https://arxiv.org/abs/2305.16806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16806.md)]. - Multimodal Recommendation Dialog with Subjective Preference: A New Challenge and Benchmark - [[2305.18212](https://arxiv.org/abs/2305.18212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.18212.md)]. - A Closer Look at In-Context Learning under Distribution Shifts - [[2305.16704](https://arxiv.org/abs/2305.16704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16704.md)]. - AdaPlanner: Adaptive Planning from Feedback with Language Models - [[2305.16653](https://arxiv.org/abs/2305.16653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16653.md)]. - Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing - [[2305.16635](https://arxiv.org/abs/2305.16635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16635.md)]. - Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models - [[2305.16582](https://arxiv.org/abs/2305.16582)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16582.md)]. - On the Tool Manipulation Capability of Open-source Large Language Models - [[2305.16504](https://arxiv.org/abs/2305.16504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16504.md)]. - ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image - [[2305.16411](https://arxiv.org/abs/2305.16411)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16411.md)]. 
- Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory - [[2305.17144](https://arxiv.org/abs/2305.17144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.17144.md)]. - Break-A-Scene: Extracting Multiple Concepts from a Single Image - [[2305.16311](https://arxiv.org/abs/2305.16311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16311.md)]. - Landmark Attention: Random-Access Infinite Context Length for Transformers - [[2305.16300](https://arxiv.org/abs/2305.16300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16300.md)]. - Voyager: An Open-Ended Embodied Agent with Large Language Models - [[2305.16291](https://arxiv.org/abs/2305.16291)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16291.md)]. - DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models - [[2305.16381](https://arxiv.org/abs/2305.16381)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16381.md)]. - ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation - [[2305.16213](https://arxiv.org/abs/2305.16213)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16213.md)]. - Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer - [[2305.16380](https://arxiv.org/abs/2305.16380)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16380.md)]. - ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst - [[2305.16103](https://arxiv.org/abs/2305.16103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16103.md)]. - Role-Play with Large Language Models - [[2305.16367](https://arxiv.org/abs/2305.16367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16367.md)]. 
- On Architectural Compression of Text-to-Image Diffusion Models - [[2305.15798](https://arxiv.org/abs/2305.15798)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15798.md)]. - Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models - [[2305.15779](https://arxiv.org/abs/2305.15779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15779.md)]. - On the Planning Abilities of Large Language Models -- A Critical Investigation - [[2305.15771](https://arxiv.org/abs/2305.15771)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15771.md)]. - Efficient Neural Music Generation - [[2305.15719](https://arxiv.org/abs/2305.15719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15719.md)]. - The False Promise of Imitating Proprietary LLMs - [[2305.15717](https://arxiv.org/abs/2305.15717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15717.md)]. - PandaGPT: One Model To Instruction-Follow Them All - [[2305.16355](https://arxiv.org/abs/2305.16355)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16355.md)]. - Manifold Diffusion Fields - [[2305.15586](https://arxiv.org/abs/2305.15586)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15586.md)]. - Unsupervised Semantic Correspondence Using Stable Diffusion - [[2305.15581](https://arxiv.org/abs/2305.15581)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15581.md)]. - Lexinvariant Language Models - [[2305.16349](https://arxiv.org/abs/2305.16349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16349.md)]. - SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning - [[2305.15486](https://arxiv.org/abs/2305.15486)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15486.md)]. 
- LayoutGPT: Compositional Visual Planning and Generation with Large Language Models - [[2305.15393](https://arxiv.org/abs/2305.15393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15393.md)]. - Learning high-level visual representations from a child's perspective without strong inductive biases - [[2305.15372](https://arxiv.org/abs/2305.15372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15372.md)]. - Gorilla: Large Language Model Connected with Massive APIs - [[2305.15334](https://arxiv.org/abs/2305.15334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15334.md)]. - Visual Programming for Text-to-Image Generation and Evaluation - [[2305.15328](https://arxiv.org/abs/2305.15328)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15328.md)]. - Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy - [[2305.15294](https://arxiv.org/abs/2305.15294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15294.md)]. - ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers - [[2305.15272](https://arxiv.org/abs/2305.15272)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15272.md)]. - Revisiting Parallel Context Windows: A Frustratingly Simple Alternative and Chain-of-Thought Deterioration - [[2305.15262](https://arxiv.org/abs/2305.15262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15262.md)]. - Adaptive Policy Learning to Additional Tasks - [[2305.15193](https://arxiv.org/abs/2305.15193)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15193.md)]. - Policy Learning based on Deep Koopman Representation - [[2305.15188](https://arxiv.org/abs/2305.15188)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15188.md)]. 
- Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies - [[2305.15115](https://arxiv.org/abs/2305.15115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15115.md)]. - Dynamic Masking Rate Schedules for MLM Pretraining - [[2305.15096](https://arxiv.org/abs/2305.15096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15096.md)]. - Cream: Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models - [[2305.15080](https://arxiv.org/abs/2305.15080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15080.md)]. - Is GPT-4 a Good Data Analyst? - [[2305.15038](https://arxiv.org/abs/2305.15038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15038.md)]. - Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models - [[2305.15023](https://arxiv.org/abs/2305.15023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15023.md)]. - EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought - [[2305.15021](https://arxiv.org/abs/2305.15021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.15021.md)]. - Reasoning with Language Model is Planning with World Model - [[2305.14992](https://arxiv.org/abs/2305.14992)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14992.md)]. - IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models - [[2305.14985](https://arxiv.org/abs/2305.14985)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14985.md)]. - Benchmarking Arabic AI with Large Language Models - [[2305.14982](https://arxiv.org/abs/2305.14982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14982.md)]. 
- Assessment of the Reliablity of a Model's Decision by Generalizing Attribution to the Wavelet Domain - [[2305.14979](https://arxiv.org/abs/2305.14979)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14979.md)]. - Discriminator-Guided Multi-step Reasoning with Language Models - [[2305.14934](https://arxiv.org/abs/2305.14934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14934.md)]. - Leveraging GPT-4 for Automatic Translation Post-Editing - [[2305.14878](https://arxiv.org/abs/2305.14878)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14878.md)]. - PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts - [[2305.14839](https://arxiv.org/abs/2305.14839)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14839.md)]. - Adapting Language Models to Compress Contexts - [[2305.14788](https://arxiv.org/abs/2305.14788)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14788.md)]. - Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models - [[2305.14710](https://arxiv.org/abs/2305.14710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14710.md)]. - ExpertPrompting: Instructing Large Language Models to be Distinguished Experts - [[2305.14688](https://arxiv.org/abs/2305.14688)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14688.md)]. - Barkour: Benchmarking Animal-level Agility with Quadruped Robots - [[2305.14654](https://arxiv.org/abs/2305.14654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14654.md)]. - Enabling Large Language Models to Generate Text with Citations - [[2305.14627](https://arxiv.org/abs/2305.14627)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14627.md)]. 
- Think Before You Act: Decision Transformers with Internal Working Memory - [[2305.16338](https://arxiv.org/abs/2305.16338)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16338.md)]. - Attentiveness to Answer Choices Doesn't Always Entail High QA Accuracy - [[2305.14596](https://arxiv.org/abs/2305.14596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14596.md)]. - PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents - [[2305.14564](https://arxiv.org/abs/2305.14564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14564.md)]. - LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond - [[2305.14540](https://arxiv.org/abs/2305.14540)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14540.md)]. - Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement - [[2305.14497](https://arxiv.org/abs/2305.14497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14497.md)]. - Video Prediction Models as Rewards for Reinforcement Learning - [[2305.14343](https://arxiv.org/abs/2305.14343)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14343.md)]. - Automatic Model Selection with Large Language Models for Reasoning - [[2305.14333](https://arxiv.org/abs/2305.14333)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14333.md)]. - Improving Factuality and Reasoning in Language Models through Multiagent Debate - [[2305.14325](https://arxiv.org/abs/2305.14325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14325.md)]. - ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models - [[2305.14323](https://arxiv.org/abs/2305.14323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14323.md)]. 
- RET-LLM: Towards a General Read-Write Memory for Large Language Models - [[2305.14322](https://arxiv.org/abs/2305.14322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14322.md)]. - CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation - [[2305.14318](https://arxiv.org/abs/2305.14318)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14318.md)]. - QLoRA: Efficient Finetuning of Quantized LLMs - [[2305.14314](https://arxiv.org/abs/2305.14314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14314.md)]. - On Learning to Summarize with Large Language Models as References - [[2305.14239](https://arxiv.org/abs/2305.14239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14239.md)]. - REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos - [[2305.14236](https://arxiv.org/abs/2305.14236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14236.md)]. - Enhancing Chat Language Models by Scaling High-quality Instructional Conversations - [[2305.14233](https://arxiv.org/abs/2305.14233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14233.md)]. - Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks - [[2305.14201](https://arxiv.org/abs/2305.14201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14201.md)]. - DetGPT: Detect What You Need via Reasoning - [[2305.14167](https://arxiv.org/abs/2305.14167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.14167.md)]. - Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction - [[2305.13903](https://arxiv.org/abs/2305.13903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13903.md)]. - PaD: Program-aided Distillation Specializes Large Models in Reasoning - [[2305.13888](https://arxiv.org/abs/2305.13888)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13888.md)]. 
- OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities - [[2305.16334](https://arxiv.org/abs/2305.16334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.16334.md)]. - Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models - [[2305.13840](https://arxiv.org/abs/2305.13840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13840.md)]. - Can Large Language Models Infer and Disagree Like Humans? - [[2305.13788](https://arxiv.org/abs/2305.13788)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13788.md)]. - Perception Test: A Diagnostic Benchmark for Multimodal Video Models - [[2305.13786](https://arxiv.org/abs/2305.13786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13786.md)]. - Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks - [[2305.13782](https://arxiv.org/abs/2305.13782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13782.md)]. - Aligning Large Language Models through Synthetic Feedback - [[2305.13735](https://arxiv.org/abs/2305.13735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13735.md)]. - Text Is All You Need: Learning Language Representations for Sequential Recommendation - [[2305.13731](https://arxiv.org/abs/2305.13731)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13731.md)]. - Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration - [[2305.13626](https://arxiv.org/abs/2305.13626)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13626.md)]. - Transformer-based Vulnerability Detection in Code at EditTime: Zero-shot, Few-shot, or Fine-tuning? - [[2306.01754](https://arxiv.org/abs/2306.01754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01754.md)]. 
- Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach - [[2305.13579](https://arxiv.org/abs/2305.13579)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13579.md)].
- How Language Model Hallucinations Can Snowball - [[2305.13534](https://arxiv.org/abs/2305.13534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13534.md)].
- RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text - [[2305.13304](https://arxiv.org/abs/2305.13304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13304.md)].
- Training Diffusion Models with Reinforcement Learning - [[2305.13301](https://arxiv.org/abs/2305.13301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13301.md)].
- Interactive Natural Language Processing - [[2305.13246](https://arxiv.org/abs/2305.13246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13246.md)].
- LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities - [[2305.13168](https://arxiv.org/abs/2305.13168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13168.md)].
- ControlVideo: Training-free Controllable Text-to-Video Generation - [[2305.13077](https://arxiv.org/abs/2305.13077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13077.md)].
- Making Language Models Better Tool Learners with Execution Feedback - [[2305.13068](https://arxiv.org/abs/2305.13068)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13068.md)].
- AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation - [[2305.13050](https://arxiv.org/abs/2305.13050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13050.md)].
- RWKV: Reinventing RNNs for the Transformer Era - [[2305.13048](https://arxiv.org/abs/2305.13048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13048.md)].
- Textually Pretrained Speech Language Models - [[2305.13009](https://arxiv.org/abs/2305.13009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.13009.md)].
- Boosting Long-tailed Object Detection via Step-wise Learning on Smooth-tail Data - [[2305.12833](https://arxiv.org/abs/2305.12833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12833.md)].
- Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models - [[2305.12620](https://arxiv.org/abs/2305.12620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12620.md)].
- GMD: Controllable Human Motion Synthesis via Guided Diffusion Models - [[2305.12577](https://arxiv.org/abs/2305.12577)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12577.md)].
- Conditional Generative Modeling is All You Need for Marked Temporal Point Processes - [[2305.12569](https://arxiv.org/abs/2305.12569)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12569.md)].
- Augmenting Autotelic Agents with Large Language Models - [[2305.12487](https://arxiv.org/abs/2305.12487)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12487.md)].
- Advancing Referring Expression Segmentation Beyond Single Image - [[2305.12452](https://arxiv.org/abs/2305.12452)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12452.md)].
- Collaborative Development of NLP models - [[2305.12219](https://arxiv.org/abs/2305.12219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12219.md)].
- CodeCompose: A Large-Scale Industrial Deployment of AI-assisted Code Authoring - [[2305.12050](https://arxiv.org/abs/2305.12050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12050.md)].
- OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models - [[2305.12001](https://arxiv.org/abs/2305.12001)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.12001.md)].
- Exploring the Viability of Synthetic Query Generation for Relevance Prediction - [[2305.11944](https://arxiv.org/abs/2305.11944)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11944.md)].
- XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages - [[2305.11938](https://arxiv.org/abs/2305.11938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11938.md)].
- Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models - [[2305.11870](https://arxiv.org/abs/2305.11870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11870.md)].
- Scaling laws for language encoding models in fMRI - [[2305.11863](https://arxiv.org/abs/2305.11863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11863.md)].
- Multimodal Web Navigation with Instruction-Finetuned Foundation Models - [[2305.11854](https://arxiv.org/abs/2305.11854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11854.md)].
- Any-to-Any Generation via Composable Diffusion - [[2305.11846](https://arxiv.org/abs/2305.11846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11846.md)].
- How Does Generative Retrieval Scale to Millions of Passages? - [[2305.11841](https://arxiv.org/abs/2305.11841)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11841.md)].
- SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models - [[2305.11840](https://arxiv.org/abs/2305.11840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11840.md)].
- Comparing Software Developers with ChatGPT: An Empirical Investigation - [[2305.11837](https://arxiv.org/abs/2305.11837)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11837.md)].
- Pengi: An Audio Language Model for Audio Tasks - [[2305.11834](https://arxiv.org/abs/2305.11834)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11834.md)].
- Cross-Lingual Supervision improves Large Language Models Pre-training - [[2305.11778](https://arxiv.org/abs/2305.11778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11778.md)].
- Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes - [[2305.11772](https://arxiv.org/abs/2305.11772)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11772.md)].
- Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning - [[2305.11759](https://arxiv.org/abs/2305.11759)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11759.md)].
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing - [[2305.11738](https://arxiv.org/abs/2305.11738)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11738.md)].
- QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations - [[2305.11694](https://arxiv.org/abs/2305.11694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11694.md)].
- Learning Global-aware Kernel for Image Harmonization - [[2305.11676](https://arxiv.org/abs/2305.11676)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11676.md)].
- Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity - [[2305.11675](https://arxiv.org/abs/2305.11675)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11675.md)].
- Introspective Tips: Large Language Model for In-Context Decision Making - [[2305.11598](https://arxiv.org/abs/2305.11598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11598.md)].
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields - [[2305.11588](https://arxiv.org/abs/2305.11588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11588.md)].
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - [[2305.11554](https://arxiv.org/abs/2305.11554)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11554.md)].
- Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering - [[2305.11541](https://arxiv.org/abs/2305.11541)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11541.md)].
- RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought - [[2305.11499](https://arxiv.org/abs/2305.11499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11499.md)].
- Enhancing Personalized Dialogue Generation with Contrastive Latent Variables: Combining Sparse and Dense Persona - [[2305.11482](https://arxiv.org/abs/2305.11482)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11482.md)].
- Towards Human-AI Collaborative Urban Science Research Enabled by Pre-trained Large Language Models - [[2305.11418](https://arxiv.org/abs/2305.11418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11418.md)].
- Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models - [[2305.11364](https://arxiv.org/abs/2305.11364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11364.md)].
- RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent Geometry and Texture - [[2305.11337](https://arxiv.org/abs/2305.11337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11337.md)].
- Counterfactuals for Design: A Model-Agnostic Method For Design Recommendations - [[2305.11308](https://arxiv.org/abs/2305.11308)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11308.md)].
- Towards Collaborative Plan Acquisition through Theory of Mind Modeling in Situated Dialogue - [[2305.11271](https://arxiv.org/abs/2305.11271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11271.md)].
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model - [[2305.11176](https://arxiv.org/abs/2305.11176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11176.md)].
- VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks - [[2305.11175](https://arxiv.org/abs/2305.11175)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11175.md)].
- Going Denser with Open-Vocabulary Part Segmentation - [[2305.11173](https://arxiv.org/abs/2305.11173)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11173.md)].
- TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models - [[2305.11171](https://arxiv.org/abs/2305.11171)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11171.md)].
- Evidence of Meaning in Language Models Trained on Programs - [[2305.11169](https://arxiv.org/abs/2305.11169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11169.md)].
- TOME: A Two-stage Approach for Model-based Retrieval - [[2305.11161](https://arxiv.org/abs/2305.11161)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11161.md)].
- LIMA: Less Is More for Alignment - [[2305.11206](https://arxiv.org/abs/2305.11206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11206.md)].
- UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild - [[2305.11147](https://arxiv.org/abs/2305.11147)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11147.md)].
- SimOAP: Improve Coherence and Consistency in Persona-based Dialogue Generation via Over-sampling and Post-evaluation - [[2305.11130](https://arxiv.org/abs/2305.11130)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11130.md)].
- mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences - [[2305.11129](https://arxiv.org/abs/2305.11129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11129.md)].
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation - [[2305.11116](https://arxiv.org/abs/2305.11116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11116.md)].
- PDP: Parameter-free Differentiable Pruning is All You Need - [[2305.11203](https://arxiv.org/abs/2305.11203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11203.md)].
- DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs - [[2309.03907](https://arxiv.org/abs/2309.03907)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2309.03907.md)].
- Inspecting the Geographical Representativeness of Images from Text-to-Image Models - [[2305.11080](https://arxiv.org/abs/2305.11080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11080.md)].
- SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for Slice-Direction Continuous Cross-Modality Medical Image Segmentation - [[2305.11012](https://arxiv.org/abs/2305.11012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11012.md)].
- SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities - [[2305.11000](https://arxiv.org/abs/2305.11000)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.11000.md)].
- Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold - [[2305.10973](https://arxiv.org/abs/2305.10973)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10973.md)].
- An Android Robot Head as Embodied Conversational Agent - [[2305.10945](https://arxiv.org/abs/2305.10945)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10945.md)].
- A Generalist Dynamics Model for Control - [[2305.10912](https://arxiv.org/abs/2305.10912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10912.md)].
- VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation - [[2305.10874](https://arxiv.org/abs/2305.10874)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10874.md)].
- TextDiffuser: Diffusion Models as Text Painters - [[2305.10855](https://arxiv.org/abs/2305.10855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10855.md)].
- 3D Registration with Maximal Cliques - [[2305.10854](https://arxiv.org/abs/2305.10854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10854.md)].
- LDM3D: Latent Diffusion Model for 3D - [[2305.10853](https://arxiv.org/abs/2305.10853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10853.md)].
- GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework - [[2305.10841](https://arxiv.org/abs/2305.10841)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10841.md)].
- Listen, Think, and Understand - [[2305.10790](https://arxiv.org/abs/2305.10790)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10790.md)].
- OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding - [[2305.10764](https://arxiv.org/abs/2305.10764)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10764.md)].
- CLAPSpeech: Learning Prosody from Text Context with Contrastive Language-Audio Pre-training - [[2305.10763](https://arxiv.org/abs/2305.10763)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10763.md)].
- Boost Vision Transformer with GPU-Friendly Sparsity and Quantization - [[2305.10727](https://arxiv.org/abs/2305.10727)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10727.md)].
- Discriminative Diffusion Models as Few-shot Vision and Language Learners - [[2305.10722](https://arxiv.org/abs/2305.10722)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10722.md)].
- Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization - [[2305.10701](https://arxiv.org/abs/2305.10701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10701.md)].
- MolXPT: Wrapping Molecules with Text for Generative Pre-training - [[2305.10688](https://arxiv.org/abs/2305.10688)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10688.md)].
- Language Models Meet World Models: Embodied Experiences Enhance Language Models - [[2305.10626](https://arxiv.org/abs/2305.10626)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10626.md)].
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models - [[2305.10601](https://arxiv.org/abs/2305.10601)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10601.md)].
- Instruction Tuned Models are Quick Learners - [[2306.05539](https://arxiv.org/abs/2306.05539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.05539.md)].
- IMAD: IMage-Augmented multi-modal Dialogue - [[2305.10512](https://arxiv.org/abs/2305.10512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10512.md)].
- FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention - [[2305.10431](https://arxiv.org/abs/2305.10431)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10431.md)].
- Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models - [[2305.10474](https://arxiv.org/abs/2305.10474)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10474.md)].
- DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining - [[2305.10429](https://arxiv.org/abs/2305.10429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10429.md)].
- SLiC-HF: Sequence Likelihood Calibration with Human Feedback - [[2305.10425](https://arxiv.org/abs/2305.10425)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10425.md)].
- PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering - [[2305.10415](https://arxiv.org/abs/2305.10415)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10415.md)].
- PaLM 2 Technical Report - [[2305.10403](https://arxiv.org/abs/2305.10403)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10403.md)].
- What You See is What You Read? Improving Text-Image Alignment Evaluation - [[2305.10400](https://arxiv.org/abs/2305.10400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10400.md)].
- Elaborative Simplification as Implicit Questions Under Discussion - [[2305.10387](https://arxiv.org/abs/2305.10387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10387.md)].
- Evaluating Object Hallucination in Large Vision-Language Models - [[2305.10355](https://arxiv.org/abs/2305.10355)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10355.md)].
- CostFormer: Cost Transformer for Cost Aggregation in Multi-view Stereo - [[2305.10320](https://arxiv.org/abs/2305.10320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10320.md)].
- Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability - [[2305.10266](https://arxiv.org/abs/2305.10266)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10266.md)].
- MemoryBank: Enhancing Large Language Models with Long-Term Memory - [[2305.10250](https://arxiv.org/abs/2305.10250)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10250.md)].
- Knowledge-enhanced Mixed-initiative Dialogue System for Emotional Support Conversations - [[2305.10172](https://arxiv.org/abs/2305.10172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10172.md)].
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback - [[2305.10142](https://arxiv.org/abs/2305.10142)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10142.md)].
- Transfer Learning for Fine-grained Classification Using Semi-supervised Learning and Visual Transformers - [[2305.10018](https://arxiv.org/abs/2305.10018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10018.md)].
- DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning - [[2305.10005](https://arxiv.org/abs/2305.10005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10005.md)].
- Dual Semantic Knowledge Composed Multimodal Dialog Systems - [[2305.09990](https://arxiv.org/abs/2305.09990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09990.md)].
- Smart Word Suggestions for Writing Assistance - [[2305.09975](https://arxiv.org/abs/2305.09975)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09975.md)].
- Towards Generalist Robots: A Promising Paradigm via Generative Simulation - [[2305.10455](https://arxiv.org/abs/2305.10455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10455.md)].
- Explaining black box text modules in natural language with language models - [[2305.09863](https://arxiv.org/abs/2305.09863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09863.md)].
- CoEdIT: Text Editing by Task-Specific Instruction Tuning - [[2305.09857](https://arxiv.org/abs/2305.09857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09857.md)].
- ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing - [[2305.09770](https://arxiv.org/abs/2305.09770)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09770.md)].
- Application-Agnostic Language Modeling for On-Device ASR - [[2305.09764](https://arxiv.org/abs/2305.09764)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09764.md)].
- NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to Robotics - [[2305.09761](https://arxiv.org/abs/2305.09761)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09761.md)].
- A Video Is Worth 4096 Tokens: Verbalize Story Videos To Understand Them In Zero Shot - [[2305.09758](https://arxiv.org/abs/2305.09758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09758.md)].
- Understanding 3D Object Interaction from a Single Image - [[2305.09664](https://arxiv.org/abs/2305.09664)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09664.md)].
- Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation - [[2305.09662](https://arxiv.org/abs/2305.09662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09662.md)].
- FitMe: Deep Photorealistic 3D Morphable Model Avatars - [[2305.09641](https://arxiv.org/abs/2305.09641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09641.md)].
- SoundStorm: Efficient Parallel Audio Generation - [[2305.09636](https://arxiv.org/abs/2305.09636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09636.md)].
- Towards Expert-Level Medical Question Answering with Large Language Models - [[2305.09617](https://arxiv.org/abs/2305.09617)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09617.md)].
- Large Language Models are Built-in Autoregressive Search Engines - [[2305.09612](https://arxiv.org/abs/2305.09612)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09612.md)].
- Cooperation Is All You Need - [[2305.10449](https://arxiv.org/abs/2305.10449)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10449.md)].
- AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation - [[2305.09515](https://arxiv.org/abs/2305.09515)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09515.md)].
- Online Continual Learning Without the Storage Constraint - [[2305.09253](https://arxiv.org/abs/2305.09253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09253.md)].
- Dual-Alignment Pre-training for Cross-lingual Sentence Embedding - [[2305.09148](https://arxiv.org/abs/2305.09148)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09148.md)].
- Pre-Training to Learn in Context - [[2305.09137](https://arxiv.org/abs/2305.09137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09137.md)].
- SuSana Distancia is all you need: Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification - [[2305.09062](https://arxiv.org/abs/2305.09062)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.09062.md)].
- MV-Map: Offboard HD-Map Generation with Multi-view Consistency - [[2305.08851](https://arxiv.org/abs/2305.08851)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08851.md)].
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts - [[2305.08850](https://arxiv.org/abs/2305.08850)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08850.md)].
- Small Models are Valuable Plug-ins for Large Language Models - [[2305.08848](https://arxiv.org/abs/2305.08848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08848.md)].
- RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs - [[2305.08844](https://arxiv.org/abs/2305.08844)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08844.md)].
- Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks - [[2305.08842](https://arxiv.org/abs/2305.08842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08842.md)].
- Attacking Perceptual Similarity Metrics - [[2305.08840](https://arxiv.org/abs/2305.08840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08840.md)].
- AutoRecon: Automated 3D Object Discovery and Reconstruction - [[2305.08810](https://arxiv.org/abs/2305.08810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08810.md)].
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca - [[2305.08809](https://arxiv.org/abs/2305.08809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08809.md)].
- A Reproducible Extraction of Training Images from Diffusion Models - [[2305.08694](https://arxiv.org/abs/2305.08694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08694.md)].
- Natural Language Decomposition and Interpretation of Complex Utterances - [[2305.08677](https://arxiv.org/abs/2305.08677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08677.md)].
- DarkBERT: A Language Model for the Dark Side of the Internet - [[2305.08596](https://arxiv.org/abs/2305.08596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08596.md)].
- Common Diffusion Noise Schedules and Sample Steps are Flawed - [[2305.08891](https://arxiv.org/abs/2305.08891)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08891.md)].
- TESS: Text-to-Text Self-Conditioned Simplex Diffusion - [[2305.08379](https://arxiv.org/abs/2305.08379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08379.md)].
- Symbol tuning improves in-context learning in language models - [[2305.08298](https://arxiv.org/abs/2305.08298)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08298.md)].
- ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding - [[2305.08275](https://arxiv.org/abs/2305.08275)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08275.md)].
- A Cognitive Stimulation Dialogue System with Multi-source Knowledge Fusion for Elders with Cognitive Impairment - [[2305.08200](https://arxiv.org/abs/2305.08200)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.08200.md)].
- GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content - [[2305.07969](https://arxiv.org/abs/2305.07969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07969.md)].
- Leveraging Large Language Models in Conversational Recommender Systems - [[2305.07961](https://arxiv.org/abs/2305.07961)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07961.md)].
- CodeT5+: Open Code Large Language Models for Code Understanding and Generation - [[2305.07922](https://arxiv.org/abs/2305.07922)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07922.md)].
- Improving Small Language Models on PubMedQA via Generative Data Augmentation - [[2305.07804](https://arxiv.org/abs/2305.07804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07804.md)].
- ACCENT: An Automatic Event Commonsense Evaluation Metric for Open-Domain Dialogue Systems - [[2305.07797](https://arxiv.org/abs/2305.07797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07797.md)].
- TinyStories: How Small Can Language Models Be and Still Speak Coherent English? - [[2305.07759](https://arxiv.org/abs/2305.07759)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07759.md)].
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making - [[2305.07722](https://arxiv.org/abs/2305.07722)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07722.md)].
- What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization - [[2305.07615](https://arxiv.org/abs/2305.07615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07615.md)].
- Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation - [[2305.07609](https://arxiv.org/abs/2305.07609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07609.md)].
- Measuring Progress in Fine-grained Vision-and-Language Understanding - [[2305.07558](https://arxiv.org/abs/2305.07558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07558.md)].
- BlendFields: Few-Shot Example-Driven Facial Modeling - [[2305.07514](https://arxiv.org/abs/2305.07514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07514.md)].
- ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4 - [[2305.07490](https://arxiv.org/abs/2305.07490)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07490.md)].
- Surfacing Biases in Large Language Models using Contrastive Input Decoding - [[2305.07378](https://arxiv.org/abs/2305.07378)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07378.md)].
- Better speech synthesis through scaling - [[2305.07243](https://arxiv.org/abs/2305.07243)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07243.md)].
- MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition - [[2305.07214](https://arxiv.org/abs/2305.07214)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07214.md)].
- MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers - [[2305.07185](https://arxiv.org/abs/2305.07185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07185.md)].
- Masked Audio Text Encoders are Effective Multi-Modal Rescorers - [[2305.07677](https://arxiv.org/abs/2305.07677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07677.md)].
- Towards best practices in AGI safety and governance: A survey of expert opinion - [[2305.07153](https://arxiv.org/abs/2305.07153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07153.md)].
- EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention - [[2305.07027](https://arxiv.org/abs/2305.07027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07027.md)].
- Simple Token-Level Confidence Improves Caption Correctness - [[2305.07021](https://arxiv.org/abs/2305.07021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07021.md)].
- An Inverse Scaling Law for CLIP Training - [[2305.07017](https://arxiv.org/abs/2305.07017)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07017.md)].
- Exploiting Diffusion Prior for Real-World Image Super-Resolution - [[2305.07015](https://arxiv.org/abs/2305.07015)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07015.md)].
- Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers - [[2305.07011](https://arxiv.org/abs/2305.07011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07011.md)].
- Learning the Visualness of Text Using Large Vision-Language Models - [[2305.10434](https://arxiv.org/abs/2305.10434)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.10434.md)].
- Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting - [[2305.07004](https://arxiv.org/abs/2305.07004)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07004.md)].
- Universal Source Separation with Weakly Labelled Data - [[2305.07447](https://arxiv.org/abs/2305.07447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07447.md)].
- CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model - [[2305.06908](https://arxiv.org/abs/2305.06908)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06908.md)].
- A Category-theoretical Meta-analysis of Definitions of Disentanglement - [[2305.06886](https://arxiv.org/abs/2305.06886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06886.md)].
- Optimizing Memory Mapping Using Deep Reinforcement Learning - [[2305.07440](https://arxiv.org/abs/2305.07440)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.07440.md)].
- Distracting Downpour: Adversarial Weather Attacks for Motion Estimation - [[2305.06716](https://arxiv.org/abs/2305.06716)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06716.md)].
- V2Meow: Meowing to the Visual Beat via Music Generation - [[2305.06594](https://arxiv.org/abs/2305.06594)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06594.md)].
- Chain-of-Dictionary Prompting Elicits Translation in Large Language Models - [[2305.06575](https://arxiv.org/abs/2305.06575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06575.md)].
- How to Index Item IDs for Recommendation Foundation Models - [[2305.06569](https://arxiv.org/abs/2305.06569)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06569.md)].
- Segment and Track Anything - [[2305.06558](https://arxiv.org/abs/2305.06558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06558.md)].
- Domain Incremental Lifelong Learning in an Open World - [[2305.06555](https://arxiv.org/abs/2305.06555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06555.md)].
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning - [[2305.06500](https://arxiv.org/abs/2305.06500)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06500.md)].
- Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction - [[2305.06474](https://arxiv.org/abs/2305.06474)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06474.md)].
- Perpetual Humanoid Control for Real-time Simulated Avatars - [[2305.06456](https://arxiv.org/abs/2305.06456)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06456.md)].
- Bot or Human? Detecting ChatGPT Imposters with A Single Question - [[2305.06424](https://arxiv.org/abs/2305.06424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06424.md)].
- LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM - [[2305.06404](https://arxiv.org/abs/2305.06404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06404.md)].
- HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion - [[2305.06356](https://arxiv.org/abs/2305.06356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06356.md)].
- VideoChat: Chat-Centric Video Understanding - [[2305.06355](https://arxiv.org/abs/2305.06355)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06355.md)].
- Reconstructing Animatable Categories from Videos - [[2305.06351](https://arxiv.org/abs/2305.06351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06351.md)].
- Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception - [[2305.06324](https://arxiv.org/abs/2305.06324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06324.md)].
- Summarizing, Simplifying, and Synthesizing Medical Evidence Using GPT-3 (with Varying Success) - [[2305.06299](https://arxiv.org/abs/2305.06299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06299.md)].
- Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era - [[2305.06131](https://arxiv.org/abs/2305.06131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06131.md)].
- The Compositional Structure of Bayesian Inference - [[2305.06112](https://arxiv.org/abs/2305.06112)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06112.md)].
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models - [[2305.06077](https://arxiv.org/abs/2305.06077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06077.md)].
- GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System - [[2306.01741](https://arxiv.org/abs/2306.01741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.01741.md)].
- Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models - [[2305.05973](https://arxiv.org/abs/2305.05973)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05973.md)].
- Fast Distributed Inference Serving for Large Language Models - [[2305.05920](https://arxiv.org/abs/2305.05920)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05920.md)].
- SHS-Net: Learning Signed Hyper Surfaces for Oriented Normal Estimation of Point Clouds - [[2305.05873](https://arxiv.org/abs/2305.05873)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05873.md)].
- Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? An Examination on Several Typical Tasks - [[2305.05862](https://arxiv.org/abs/2305.05862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05862.md)].
- Sketching the Future (STF): Applying Conditional Control Techniques to Text-to-Video Models - [[2305.05845](https://arxiv.org/abs/2305.05845)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05845.md)].
- DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects - [[2305.05706](https://arxiv.org/abs/2305.05706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05706.md)].
- InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language - [[2305.05662](https://arxiv.org/abs/2305.05662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05662.md)].
- TidyBot: Personalized Robot Assistance with Large Language Models - [[2305.05658](https://arxiv.org/abs/2305.05658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05658.md)].
- Towards Building the Federated GPT: Federated Instruction Tuning - [[2305.05644](https://arxiv.org/abs/2305.05644)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05644.md)].
- AudioSlots: A slot-centric generative model for audio separation - [[2305.05591](https://arxiv.org/abs/2305.05591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05591.md)].
- Recursions Are All You Need: Towards Efficient Deep Unfolding Networks - [[2305.05505](https://arxiv.org/abs/2305.05505)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05505.md)].
- WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset - [[2305.05432](https://arxiv.org/abs/2305.05432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05432.md)].
- Large Language Model Programs - [[2305.05364](https://arxiv.org/abs/2305.05364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05364.md)].
- Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue - [[2305.05290](https://arxiv.org/abs/2305.05290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05290.md)].
- Distilling Script Knowledge from Large Language Models for Constrained Language Planning - [[2305.05252](https://arxiv.org/abs/2305.05252)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05252.md)].
- SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models - [[2305.05189](https://arxiv.org/abs/2305.05189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05189.md)].
- FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance - [[2305.05176](https://arxiv.org/abs/2305.05176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05176.md)].
- Knowledge-enhanced Agents for Interactive Text Games - [[2305.05091](https://arxiv.org/abs/2305.05091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05091.md)].
- Multi-Task End-to-End Training Improves Conversational Recommendation - [[2305.06218](https://arxiv.org/abs/2305.06218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.06218.md)].
- Recommender Systems with Generative Retrieval - [[2305.05065](https://arxiv.org/abs/2305.05065)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05065.md)].
- NerfAcc: Efficient Sampling Accelerates NeRFs - [[2305.04966](https://arxiv.org/abs/2305.04966)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04966.md)].
- A Drop of Ink Makes a Million Think: The Spread of False Information in Large Language Models - [[2305.04812](https://arxiv.org/abs/2305.04812)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04812.md)].
- MultiModal-GPT: A Vision and Language Model for Dialogue with Humans - [[2305.04790](https://arxiv.org/abs/2305.04790)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04790.md)].
- AvatarReX: Real-time Expressive Full-body Avatars - [[2305.04789](https://arxiv.org/abs/2305.04789)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04789.md)].
- Controllable Light Diffusion for Portraits - [[2305.04745](https://arxiv.org/abs/2305.04745)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04745.md)].
- Code Execution with Pre-trained Language Models - [[2305.05383](https://arxiv.org/abs/2305.05383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.05383.md)].
- LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition - [[2305.04536](https://arxiv.org/abs/2305.04536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04536.md)].
- Video Object Segmentation in Panoptic Wild Scenes - [[2305.04470](https://arxiv.org/abs/2305.04470)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04470.md)].
- Locally Attentional SDF Diffusion for Controllable 3D Shape Generation - [[2305.04461](https://arxiv.org/abs/2305.04461)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04461.md)].
- Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models - [[2305.04441](https://arxiv.org/abs/2305.04441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04441.md)].
- A Variational Perspective on Solving Inverse Problems with Diffusion Models - [[2305.04391](https://arxiv.org/abs/2305.04391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04391.md)].
- Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting - [[2305.04388](https://arxiv.org/abs/2305.04388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04388.md)].
- Unified Demonstration Retriever for In-Context Learning - [[2305.04320](https://arxiv.org/abs/2305.04320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04320.md)].
- Multi-Space Neural Radiance Fields - [[2305.04268](https://arxiv.org/abs/2305.04268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04268.md)].
- Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens - [[2305.04241](https://arxiv.org/abs/2305.04241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04241.md)].
- Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning - [[2305.04175](https://arxiv.org/abs/2305.04175)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04175.md)].
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages - [[2305.04160](https://arxiv.org/abs/2305.04160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04160.md)].
- Exploring Human-Like Translation Strategy with Large Language Models - [[2305.04118](https://arxiv.org/abs/2305.04118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04118.md)].
- Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models - [[2305.04091](https://arxiv.org/abs/2305.04091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.04091.md)].
- Pre-training Language Model as a Multi-perspective Course Learner - [[2305.03981](https://arxiv.org/abs/2305.03981)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03981.md)].
- Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization - [[2305.03937](https://arxiv.org/abs/2305.03937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03937.md)].
- Otter: A Multi-Modal Model with In-Context Instruction Tuning - [[2305.03726](https://arxiv.org/abs/2305.03726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03726.md)].
- Avatar Fingerprinting for Authorized Use of Synthetic Talking-Head Videos - [[2305.03713](https://arxiv.org/abs/2305.03713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03713.md)].
- LMEye: An Interactive Perception Network for Large Language Models - [[2305.03701](https://arxiv.org/abs/2305.03701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03701.md)].
- Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements - [[2305.03695](https://arxiv.org/abs/2305.03695)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03695.md)].
- Mining bias-target Alignment from Voronoi Cells - [[2305.03691](https://arxiv.org/abs/2305.03691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03691.md)].
- COLA: A Benchmark for Compositional Text-to-image Retrieval - [[2305.03689](https://arxiv.org/abs/2305.03689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03689.md)].
- A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding - [[2305.03668](https://arxiv.org/abs/2305.03668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03668.md)].
- Query Expansion by Prompting Large Language Models - [[2305.03653](https://arxiv.org/abs/2305.03653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03653.md)].
- T-SciQ: Teaching Multimodal Chain-of-Thought Reasoning via Large Language Model Signals for Science Question Answering - [[2305.03453](https://arxiv.org/abs/2305.03453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03453.md)].
- TransESC: Smoothing Emotional Support Conversation via Turn-Level State Transition - [[2305.03296](https://arxiv.org/abs/2305.03296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03296.md)].
- Composite Motion Learning with Task Control - [[2305.03286](https://arxiv.org/abs/2305.03286)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03286.md)].
- Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework - [[2305.03268](https://arxiv.org/abs/2305.03268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03268.md)].
- AttentionViz: A Global View of Transformer Attention - [[2305.03210](https://arxiv.org/abs/2305.03210)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03210.md)].
- Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs - [[2305.03111](https://arxiv.org/abs/2305.03111)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03111.md)].
- ZipIt! Merging Models from Different Tasks without Training - [[2305.03053](https://arxiv.org/abs/2305.03053)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03053.md)].
- Tracking through Containers and Occluders in the Wild - [[2305.03052](https://arxiv.org/abs/2305.03052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03052.md)].
- Controllable Visual-Tactile Synthesis - [[2305.03051](https://arxiv.org/abs/2305.03051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03051.md)].
- NeuralEditor: Editing Neural Radiance Fields via Manipulating Point Clouds - [[2305.03049](https://arxiv.org/abs/2305.03049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03049.md)].
- Personalize Segment Anything Model with One Shot - [[2305.03048](https://arxiv.org/abs/2305.03048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03048.md)].
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision - [[2305.03047](https://arxiv.org/abs/2305.03047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03047.md)].
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization - [[2305.03043](https://arxiv.org/abs/2305.03043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03043.md)].
- TUVF: Learning Generalizable Texture UV Radiance Fields - [[2305.03040](https://arxiv.org/abs/2305.03040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03040.md)].
- NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads - [[2305.03027](https://arxiv.org/abs/2305.03027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03027.md)].
- Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion - [[2305.03509](https://arxiv.org/abs/2305.03509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03509.md)].
- Masked Trajectory Models for Prediction, Representation, and Control - [[2305.02968](https://arxiv.org/abs/2305.02968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02968.md)].
- BranchNorm: Robustly Scaling Extremely Deep Transformers - [[2305.02790](https://arxiv.org/abs/2305.02790)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02790.md)].
- A Survey on Proactive Dialogue Systems: Problems, Methods, and Prospects - [[2305.02750](https://arxiv.org/abs/2305.02750)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02750.md)].
- Real-Time Neural Appearance Models - [[2305.02678](https://arxiv.org/abs/2305.02678)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02678.md)].
- Caption Anything: Interactive Image Description with Diverse Multimodal Controls - [[2305.02677](https://arxiv.org/abs/2305.02677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02677.md)].
- Learning Language-Specific Layers for Multilingual Machine Translation - [[2305.02665](https://arxiv.org/abs/2305.02665)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02665.md)].
- Semantically Structured Image Compression via Irregular Group-Based Decoupling - [[2305.02586](https://arxiv.org/abs/2305.02586)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02586.md)].
- Should ChatGPT and Bard Share Revenue with Their Data Providers? A New Business Model for the AI Era - [[2305.02555](https://arxiv.org/abs/2305.02555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02555.md)].
- FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction - [[2305.02549](https://arxiv.org/abs/2305.02549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02549.md)].
- AutoML-GPT: Automatic Machine Learning with GPT - [[2305.02499](https://arxiv.org/abs/2305.02499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02499.md)].
- ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization - [[2305.02483](https://arxiv.org/abs/2305.02483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02483.md)].
- Shap-E: Generating Conditional 3D Implicit Functions - [[2305.02463](https://arxiv.org/abs/2305.02463)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02463.md)].
- Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs - [[2305.02440](https://arxiv.org/abs/2305.02440)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02440.md)].
- Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents - [[2305.02412](https://arxiv.org/abs/2305.02412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02412.md)].
- Generating Synthetic Documents for Cross-Encoder Re-Rankers: A Comparative Study of ChatGPT and Human Experts - [[2305.02320](https://arxiv.org/abs/2305.02320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02320.md)].
- Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings - [[2305.02317](https://arxiv.org/abs/2305.02317)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02317.md)].
- Uncovering ChatGPT's Capabilities in Recommender Systems - [[2305.02182](https://arxiv.org/abs/2305.02182)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02182.md)].
- Zero-Shot Listwise Document Reranking with a Large Language Model - [[2305.02156](https://arxiv.org/abs/2305.02156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02156.md)].
- Multimodal Procedural Planning via Dual Text-Image Prompting - [[2305.01795](https://arxiv.org/abs/2305.01795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.01795.md)].
- Automated Code generation for Information Technology Tasks in YAML through Large Language Models - [[2305.02783](https://arxiv.org/abs/2305.02783)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.02783.md)].
- Stars Are All You Need: A Distantly Supervised Pyramid Network for Document-Level End-to-End Sentiment Analysis - [[2305.01710](https://arxiv.org/abs/2305.01710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.01710.md)].
- TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis - [[2305.00976](https://arxiv.org/abs/2305.00976)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00976.md)].
- Unlimiformer: Long-Range Transformers with Unlimited Length Input - [[2305.01625](https://arxiv.org/abs/2305.01625)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.01625.md)].
- Transfer Visual Prompt Generator across LLMs - [[2305.01278](https://arxiv.org/abs/2305.01278)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.01278.md)].
- The Role of Summarization in Generative Agents: A Preliminary Perspective - [[2305.01253](https://arxiv.org/abs/2305.01253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.01253.md)].
- ArK: Augmented Reality with Knowledge Interactive Emergent Ability - [[2305.00970](https://arxiv.org/abs/2305.00970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00970.md)].
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation - [[2305.00955](https://arxiv.org/abs/2305.00955)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00955.md)].
- Hypernuclear event detection in the nuclear emulsion with Monte Carlo simulation and machine learning - [[2305.0884](https://arxiv.org/abs/2305.0884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.0884.md)].
- Learning to Reason and Memorize with Self-Notes - [[2305.00833](https://arxiv.org/abs/2305.00833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00833.md)].
- Bidirectional Copy-Paste for Semi-Supervised Medical Image Segmentation - [[2305.00673](https://arxiv.org/abs/2305.00673)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00673.md)].
- Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding - [[2305.00633](https://arxiv.org/abs/2305.00633)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00633.md)].

### April 2023

- TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation - [[2305.00447](https://arxiv.org/abs/2305.00447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.00447.md)].
- LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model - [[2304.15010](https://arxiv.org/abs/2304.15010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.15010.md)].
- Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models - [[2304.14867](https://arxiv.org/abs/2304.14867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14867.md)].
- A Unified Generative Retriever for Knowledge-Intensive Language Tasks via Prompt Learning - [[2304.14856](https://arxiv.org/abs/2304.14856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14856.md)].
- IMP: Iterative Matching and Pose Estimation with Adaptive Pooling - [[2304.14837](https://arxiv.org/abs/2304.14837)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14837.md)].
- Multivariate Representation Learning for Information Retrieval - [[2304.14522](https://arxiv.org/abs/2304.14522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14522.md)].
- Framing the News: From Human Perception to Large Language Model Inferences - [[2304.14456](https://arxiv.org/abs/2304.14456)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14456.md)].
- ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System - [[2304.14407](https://arxiv.org/abs/2304.14407)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14407.md)].
- Large Language Models are Strong Zero-Shot Retriever - [[2304.14233](https://arxiv.org/abs/2304.14233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14233.md)].
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality - [[2304.14178](https://arxiv.org/abs/2304.14178)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14178.md)].
- Categorification of Group Equivariant Neural Networks - [[2304.14144](https://arxiv.org/abs/2304.14144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14144.md)].
- ChatLog: Recording and Analyzing ChatGPT Across Time - [[2304.14106](https://arxiv.org/abs/2304.14106)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14106.md)].
- Learning Human-Human Interactions in Images from Weak Textual Supervision - [[2304.14104](https://arxiv.org/abs/2304.14104)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.14104.md)].
- Is a prompt and a few samples all you need? Using GPT-4 for data augmentation in low-resource classification tasks - [[2304.13861](https://arxiv.org/abs/2304.13861)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13861.md)].
- Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models - [[2304.13835](https://arxiv.org/abs/2304.13835)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13835.md)].
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond - [[2304.13712](https://arxiv.org/abs/2304.13712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13712.md)].
- Multimodal Grounding for Embodied AI via Augmented Reality Headsets for Natural Language Driven Task Planning - [[2304.13676](https://arxiv.org/abs/2304.13676)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13676.md)].
- Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System - [[2304.13343](https://arxiv.org/abs/2304.13343)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13343.md)].
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation - [[2304.13207](https://arxiv.org/abs/2304.13207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13207.md)].
- SAFE: Machine Unlearning With Shard Graphs - [[2304.13169](https://arxiv.org/abs/2304.13169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13169.md)].
- Generative Relevance Feedback with Large Language Models - [[2304.13157](https://arxiv.org/abs/2304.13157)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13157.md)].
- Answering Questions by Meta-Reasoning over Multiple Chains of Thought - [[2304.13007](https://arxiv.org/abs/2304.13007)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13007.md)].
- Patch-based 3D Natural Scene Generation from a Single Example - [[2304.12670](https://arxiv.org/abs/2304.12670)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12670.md)].
- Bayesian Optimization Meets Self-Distillation - [[2304.12666](https://arxiv.org/abs/2304.12666)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12666.md)].
- Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks - [[2304.12567](https://arxiv.org/abs/2304.12567)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12567.md)].
- GlyphDiffusion: Text Generation as Image Generation - [[2304.12519](https://arxiv.org/abs/2304.12519)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12519.md)].
- On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research - [[2304.12397](https://arxiv.org/abs/2304.12397)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12397.md)].
- Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance and Color Prediction - [[2304.12372](https://arxiv.org/abs/2304.12372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12372.md)].
- WizardLM: Empowering Large Language Models to Follow Complex Instructions - [[2304.12244](https://arxiv.org/abs/2304.12244)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12244.md)].
- Track Anything: Segment Anything Meets Videos - [[2304.11968](https://arxiv.org/abs/2304.11968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11968.md)].
- ChatLLM Network: More brains, More intelligence - [[2304.12998](https://arxiv.org/abs/2304.12998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.12998.md)].
- Universal Domain Adaptation via Compressive Attention Matching - [[2304.11862](https://arxiv.org/abs/2304.11862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11862.md)].
- Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization - [[2304.11823](https://arxiv.org/abs/2304.11823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11823.md)].
- Score-Based Diffusion Models as Principled Priors for Inverse Imaging - [[2304.11751](https://arxiv.org/abs/2304.11751)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11751.md)].
- SketchXAI: A First Look at Explainability for Human Sketches - [[2304.11744](https://arxiv.org/abs/2304.11744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11744.md)].
- Walking Your LiDOG: A Journey Through Multiple Domains for LiDAR Semantic Segmentation - [[2304.11705](https://arxiv.org/abs/2304.11705)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11705.md)].
- SATIN: A Multi-Task Metadataset for Classifying Satellite Imagery using Vision-Language Models - [[2304.11619](https://arxiv.org/abs/2304.11619)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11619.md)].
- LLM+P: Empowering Large Language Models with Optimal Planning Proficiency - [[2304.11477](https://arxiv.org/abs/2304.11477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11477.md)].
- Speed Is All You Need: On-Device Acceleration of Large Diffusion Models via GPU-Aware Optimizations - [[2304.11267](https://arxiv.org/abs/2304.11267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11267.md)].
- Emergent and Predictable Memorization in Large Language Models - [[2304.11158](https://arxiv.org/abs/2304.11158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11158.md)].
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT - [[2304.11107](https://arxiv.org/abs/2304.11107)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11107.md)].
- Can GPT-4 Perform Neural Architecture Search? - [[2304.10970](https://arxiv.org/abs/2304.10970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10970.md)].
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs - [[2304.10819](https://arxiv.org/abs/2304.10819)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10819.md)].
- Long-Term Photometric Consistent Novel View Synthesis with Diffusion Models - [[2304.10700](https://arxiv.org/abs/2304.10700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10700.md)].
- HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative perception with vision transformer - [[2304.10628](https://arxiv.org/abs/2304.10628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10628.md)].
- Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels - [[2304.10539](https://arxiv.org/abs/2304.10539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10539.md)].
- MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models - [[2304.10592](https://arxiv.org/abs/2304.10592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10592.md)].
- Generalizing Neural Human Fitting to Unseen Poses With Articulated SE(3) Equivariance - [[2304.10528](https://arxiv.org/abs/2304.10528)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10528.md)].
- Phoenix: Democratizing ChatGPT across Languages - [[2304.10453](https://arxiv.org/abs/2304.10453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10453.md)].
- SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation - [[2304.10417](https://arxiv.org/abs/2304.10417)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10417.md)].
- SCoDA: Domain Adaptive Shape Completion for Real Scans - [[2304.10179](https://arxiv.org/abs/2304.10179)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10179.md)].
- Learning Bottleneck Concepts in Image Classification - [[2304.10131](https://arxiv.org/abs/2304.10131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10131.md)].
- Recognizability Embedding Enhancement for Very Low-Resolution Face Recognition and Quality Estimation - [[2304.10066](https://arxiv.org/abs/2304.10066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.10066.md)].
- MARS: Model-agnostic Biased Object Removal without Additional Supervision for Weakly-Supervised Semantic Segmentation - [[2304.09913](https://arxiv.org/abs/2304.09913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09913.md)].
- Evaluating Verifiability in Generative Search Engines - [[2304.09848](https://arxiv.org/abs/2304.09848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09848.md)].
- Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models - [[2304.09842](https://arxiv.org/abs/2304.09842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09842.md)].
- MetaBEV: Solving Sensor Failures for BEV Detection and Map Segmentation - [[2304.09801](https://arxiv.org/abs/2304.09801)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09801.md)].
- DarSwin: Distortion Aware Radial Swin Transformer - [[2304.09691](https://arxiv.org/abs/2304.09691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09691.md)].
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent - [[2304.09542](https://arxiv.org/abs/2304.09542)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09542.md)].
- Network Pruning Spaces - [[2304.09453](https://arxiv.org/abs/2304.09453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09453.md)].
- ASM: Adaptive Skinning Model for High-Quality 3D Face Modeling - [[2304.09423](https://arxiv.org/abs/2304.09423)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09423.md)].
- To Compress or Not to Compress- Self-Supervised Learning and Information Theory: A Review - [[2304.09355](https://arxiv.org/abs/2304.09355)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09355.md)].
- Fast Neural Scene Flow - [[2304.09121](https://arxiv.org/abs/2304.09121)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09121.md)].
- Think Before You Act: Unified Policy for Interleaving Language Reasoning with Actions - [[2304.11063](https://arxiv.org/abs/2304.11063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11063.md)].
- In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT - [[2304.08979](https://arxiv.org/abs/2304.08979)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08979.md)].
- SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes - [[2304.08971](https://arxiv.org/abs/2304.08971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08971.md)].
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections - [[2304.08706](https://arxiv.org/abs/2304.08706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08706.md)].
- An Evaluation on Large Language Model Outputs: Discourse and Memorization - [[2304.08637](https://arxiv.org/abs/2304.08637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08637.md)].
- Visual Instruction Tuning - [[2304.08485](https://arxiv.org/abs/2304.08485)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08485.md)].
- Towards Robust Prompts on Vision-Language Models - [[2304.08479](https://arxiv.org/abs/2304.08479)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08479.md)].
- Learning to Compress Prompts with Gist Tokens - [[2304.08467](https://arxiv.org/abs/2304.08467)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08467.md)].
- Efficient Video Action Detection with Token Dropout and Context Refinement - [[2304.08451](https://arxiv.org/abs/2304.08451)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08451.md)].
- Tool Learning with Foundation Models - [[2304.08354](https://arxiv.org/abs/2304.08354)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08354.md)].
- Magnitude of arithmetic scalar and matrix categories - [[2304.08334](https://arxiv.org/abs/2304.08334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.08334.md)].
- Chain of Thought Prompt Tuning in Vision Language Models - [[2304.07919](https://arxiv.org/abs/2304.07919)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07919.md)].
- Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation - [[2304.07854](https://arxiv.org/abs/2304.07854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07854.md)].
- EGformer: Equirectangular Geometry-biased Transformer for 360 Depth Estimation - [[2304.07803](https://arxiv.org/abs/2304.07803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07803.md)].
- Self-collaboration Code Generation via ChatGPT - [[2304.07590](https://arxiv.org/abs/2304.07590)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07590.md)].
- Tractable Control for Autoregressive Language Generation - [[2304.07438](https://arxiv.org/abs/2304.07438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07438.md)].
- DINOv2: Learning Robust Visual Features without Supervision - [[2304.07193](https://arxiv.org/abs/2304.07193)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07193.md)].
- M2T: Masking Transformers Twice for Faster Decoding - [[2304.07313](https://arxiv.org/abs/2304.07313)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07313.md)].
- Delta Denoising Score - [[2304.07090](https://arxiv.org/abs/2304.07090)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07090.md)].
- DCFace: Synthetic Face Generation with Dual Condition Diffusion Model - [[2304.07060](https://arxiv.org/abs/2304.07060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07060.md)].
- DeePoint: Visual Pointing Recognition and Direction Estimation - [[2304.06977](https://arxiv.org/abs/2304.06977)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06977.md)].
- Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text - [[2304.06939](https://arxiv.org/abs/2304.06939)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06939.md)].
- Unified Out-Of-Distribution Detection: A Model-Specific Perspective - [[2304.06813](https://arxiv.org/abs/2304.06813)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06813.md)].
- RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment - [[2304.06767](https://arxiv.org/abs/2304.06767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06767.md)].
- Expressive Text-to-Image Generation with Rich Text - [[2304.06720](https://arxiv.org/abs/2304.06720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06720.md)].
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction - [[2304.06714](https://arxiv.org/abs/2304.06714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06714.md)].
- What does CLIP know about a red circle? Visual prompt engineering for VLMs - [[2304.06712](https://arxiv.org/abs/2304.06712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06712.md)].
- DynaMITe: Dynamic Query Bootstrapping for Multi-object Interactive Segmentation Transformer - [[2304.06668](https://arxiv.org/abs/2304.06668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06668.md)].
- DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning - [[2304.06648](https://arxiv.org/abs/2304.06648)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06648.md)].
- Are LLMs All You Need for Task-Oriented Dialogue? - [[2304.06556](https://arxiv.org/abs/2304.06556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06556.md)].
- Perspectives on Large Language Models for Relevance Judgment - [[2304.09161](https://arxiv.org/abs/2304.09161)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09161.md)].
- Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning - [[2304.06461](https://arxiv.org/abs/2304.06461)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06461.md)].
- AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models - [[2304.06364](https://arxiv.org/abs/2304.06364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06364.md)].
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds - [[2304.06287](https://arxiv.org/abs/2304.06287)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06287.md)].
- Language Instructed Reinforcement Learning for Human-AI Coordination - [[2304.07297](https://arxiv.org/abs/2304.07297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.07297.md)].
- Asymmetrically-powered Neural Image Compression with Shallow Decoders - [[2304.06244](https://arxiv.org/abs/2304.06244)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06244.md)].
- [CLS] Token is All You Need for Zero-Shot Semantic Segmentation - [[2304.06212](https://arxiv.org/abs/2304.06212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06212.md)].
- Probabilistic Human Mesh Recovery in 3D Scenes from Egocentric Views - [[2304.06024](https://arxiv.org/abs/2304.06024)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06024.md)].
- VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs - [[2304.06020](https://arxiv.org/abs/2304.06020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06020.md)].
- Can Large Language Models Transform Computational Social Science? - [[2305.03514](https://arxiv.org/abs/2305.03514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2305.03514.md)].
- Hard Patches Mining for Masked Image Modeling - [[2304.05919](https://arxiv.org/abs/2304.05919)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05919.md)].
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL - [[2304.05889](https://arxiv.org/abs/2304.05889)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05889.md)].
- Are Local Features All You Need for Cross-Domain Visual Place Recognition? - [[2304.05887](https://arxiv.org/abs/2304.05887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05887.md)].
- Mesh2Tex: Generating Mesh Textures from Image Queries - [[2304.05868](https://arxiv.org/abs/2304.05868)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05868.md)].
- Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation - [[2304.05669](https://arxiv.org/abs/2304.05669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05669.md)].
- Instance-Aware Domain Generalization for Face Anti-Spoofing - [[2304.05640](https://arxiv.org/abs/2304.05640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05640.md)].
- ChatGPT is all you need to decolonize sub-Saharan Vocational Education - [[2304.13728](https://arxiv.org/abs/2304.13728)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.13728.md)].
- ChemCrow: Augmenting large-language models with chemistry tools - [[2304.05376](https://arxiv.org/abs/2304.05376)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05376.md)].
- Toxicity in ChatGPT: Analyzing Persona-assigned Language Models - [[2304.05335](https://arxiv.org/abs/2304.05335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05335.md)].
- OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction - [[2304.05316](https://arxiv.org/abs/2304.05316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05316.md)].
- SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes - [[2304.05170](https://arxiv.org/abs/2304.05170)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05170.md)].
- Teaching Large Language Models to Self-Debug - [[2304.05128](https://arxiv.org/abs/2304.05128)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05128.md)].
- StageInteractor: Query-based Object Detector with Cross-stage Interaction - [[2304.04978](https://arxiv.org/abs/2304.04978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04978.md)].
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning - [[2304.04824](https://arxiv.org/abs/2304.04824)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04824.md)].
- A Cheaper and Better Diffusion Language Model with Soft-Masked Noise - [[2304.04746](https://arxiv.org/abs/2304.04746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04746.md)].
- Ambiguous Medical Image Segmentation using Diffusion Models - [[2304.04745](https://arxiv.org/abs/2304.04745)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04745.md)].
- Detection Transformer with Stable Matching - [[2304.04742](https://arxiv.org/abs/2304.04742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04742.md)].
- Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition - [[2304.04704](https://arxiv.org/abs/2304.04704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04704.md)].
- Improved Test-Time Adaptation for Domain Generalization - [[2304.04494](https://arxiv.org/abs/2304.04494)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04494.md)].
- Instance Neural Radiance Field - [[2304.04395](https://arxiv.org/abs/2304.04395)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04395.md)].
- Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT - [[2304.11116](https://arxiv.org/abs/2304.11116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11116.md)].
- OpenAGI: When LLM Meets Domain Experts - [[2304.04370](https://arxiv.org/abs/2304.04370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04370.md)].
- Video ChatCaptioner: Towards Enriched Spatiotemporal Descriptions - [[2304.04227](https://arxiv.org/abs/2304.04227)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04227.md)].
- Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification - [[2304.04205](https://arxiv.org/abs/2304.04205)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04205.md)].
- Token Boosting for Robust Self-Supervised Visual Transformer Pre-training - [[2304.04175](https://arxiv.org/abs/2304.04175)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04175.md)].
- Hi Sheldon! Creating Deep Personalized Characters from TV Shows - [[2304.11093](https://arxiv.org/abs/2304.11093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.11093.md)].
- Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder - [[2304.04052](https://arxiv.org/abs/2304.04052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04052.md)].
- ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application - [[2304.03893](https://arxiv.org/abs/2304.03893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03893.md)].
- Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis - [[2304.03869](https://arxiv.org/abs/2304.03869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03869.md)].
- Why think step by step? Reasoning emerges from the locality of experience - [[2304.03843](https://arxiv.org/abs/2304.03843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03843.md)].
- Meta-causal Learning for Single Domain Generalization - [[2304.03709](https://arxiv.org/abs/2304.03709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03709.md)].
- Model-Agnostic Gender Debiased Image Captioning - [[2304.03693](https://arxiv.org/abs/2304.03693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03693.md)].
- Attention: Marginal Probability is All You Need? - [[2304.04556](https://arxiv.org/abs/2304.04556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.04556.md)].
- Sheaf Neural Networks for Graph-based Recommender Systems - [[2304.09097](https://arxiv.org/abs/2304.09097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.09097.md)].
- RED-PSM: Regularization by Denoising of Partially Separable Models for Dynamic Imaging - [[2304.03483](https://arxiv.org/abs/2304.03483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03483.md)].
- Generative Agents: Interactive Simulacra of Human Behavior - [[2304.03442](https://arxiv.org/abs/2304.03442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03442.md)].
- TopNet: Transformer-based Object Placement Network for Image Compositing - [[2304.03372](https://arxiv.org/abs/2304.03372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03372.md)].
- SegGPT: Segmenting Everything In Context - [[2304.03284](https://arxiv.org/abs/2304.03284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03284.md)].
- Visual Dependency Transformers: Dependency Tree Emerges from Reversed Attention - [[2304.03282](https://arxiv.org/abs/2304.03282)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03282.md)].
- Retention Is All You Need - [[2304.03103](https://arxiv.org/abs/2304.03103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.03103.md)].
- MULLER: Multilayer Laplacian Resizer for Vision - [[2304.02859](https://arxiv.org/abs/2304.02859)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02859.md)].
- Learning Neural Eigenfunctions for Unsupervised Semantic Segmentation - [[2304.02841](https://arxiv.org/abs/2304.02841)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02841.md)].
- Segment Anything - [[2304.02643](https://arxiv.org/abs/2304.02643)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02643.md)].
- ENTL: Embodied Navigation Trajectory Learner - [[2304.02639](https://arxiv.org/abs/2304.02639)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02639.md)].
- HNeRV: A Hybrid Neural Representation for Videos - [[2304.02633](https://arxiv.org/abs/2304.02633)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02633.md)].
- Dynamic Point Fields - [[2304.02626](https://arxiv.org/abs/2304.02626)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02626.md)].
- Generative Novel View Synthesis with 3D-Aware Diffusion Models - [[2304.02602](https://arxiv.org/abs/2304.02602)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02602.md)].
- Detecting and Grounding Multi-Modal Media Manipulation - [[2304.02556](https://arxiv.org/abs/2304.02556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02556.md)].
- TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration - [[2304.02419](https://arxiv.org/abs/2304.02419)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02419.md)].
- Effective control of two-dimensional Rayleigh--Bénard convection: invariant multi-agent reinforcement learning is all you need - [[2304.02370](https://arxiv.org/abs/2304.02370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02370.md)].
- SMPConv: Self-moving Point Representations for Continuous Convolution - [[2304.02330](https://arxiv.org/abs/2304.02330)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02330.md)].
- Few-shot Semantic Image Synthesis with Class Affinity Transfer - [[2304.02321](https://arxiv.org/abs/2304.02321)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02321.md)].
- How to choose your best allies for a transferable attack? - [[2304.02312](https://arxiv.org/abs/2304.02312)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02312.md)].
- ERRA: An Embodied Representation and Reasoning Architecture for Long-horizon Language-conditioned Manipulation Tasks - [[2304.02251](https://arxiv.org/abs/2304.02251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02251.md)].
- GINA-3D: Learning to Generate Implicit Neural Assets in the Wild - [[2304.02163](https://arxiv.org/abs/2304.02163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02163.md)].
- FREDOM: Fairness Domain Adaptation Approach to Semantic Scene Understanding - [[2304.02135](https://arxiv.org/abs/2304.02135)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02135.md)].
- Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing - [[2304.02051](https://arxiv.org/abs/2304.02051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02051.md)].
- GlueStick: Robust Image Matching by Sticking Points and Lines Together - [[2304.02008](https://arxiv.org/abs/2304.02008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02008.md)].
- MonoHuman: Animatable Human Neural Field from Monocular Video - [[2304.02001](https://arxiv.org/abs/2304.02001)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.02001.md)].
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models - [[2304.01933](https://arxiv.org/abs/2304.01933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01933.md)].
- Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion - [[2304.01893](https://arxiv.org/abs/2304.01893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01893.md)].
- Learning to Name Classes for Vision and Language Models - [[2304.01830](https://arxiv.org/abs/2304.01830)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01830.md)].
- Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation - [[2304.01816](https://arxiv.org/abs/2304.01816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01816.md)].
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification - [[2304.01804](https://arxiv.org/abs/2304.01804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01804.md)].
- Towards Open-Vocabulary Video Instance Segmentation - [[2304.01715](https://arxiv.org/abs/2304.01715)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01715.md)].
- HyperCUT: Video Sequence from a Single Blurry Image using Unsupervised Ordering - [[2304.01686](https://arxiv.org/abs/2304.01686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01686.md)].
- On the Stability-Plasticity Dilemma of Class-Incremental Learning - [[2304.01663](https://arxiv.org/abs/2304.01663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01663.md)].
- Cross-Domain Image Captioning with Discriminative Finetuning - [[2304.01662](https://arxiv.org/abs/2304.01662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01662.md)].
- IterativePFN: True Iterative Point Cloud Filtering - [[2304.01529](https://arxiv.org/abs/2304.01529)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01529.md)].
- Robust Outlier Rejection for 3D Registration with Variational Bayes - [[2304.01514](https://arxiv.org/abs/2304.01514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01514.md)].
- Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning - [[2304.01482](https://arxiv.org/abs/2304.01482)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01482.md)].
- Hierarchical Supervision and Shuffle Data Augmentation for 3D Semi-Supervised Object Detection - [[2304.01464](https://arxiv.org/abs/2304.01464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01464.md)].
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos - [[2304.01436](https://arxiv.org/abs/2304.01436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01436.md)].
- VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution - [[2304.01434](https://arxiv.org/abs/2304.01434)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01434.md)].
- Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling - [[2304.01373](https://arxiv.org/abs/2304.01373)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01373.md)].
- Monocular 3D Object Detection with Bounding Box Denoising in 3D by Perceiver - [[2304.01289](https://arxiv.org/abs/2304.01289)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01289.md)].
- Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation - [[2304.01279](https://arxiv.org/abs/2304.01279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01279.md)].
- Asymptotic expansions for the maximum likelihood estimation errors of the rotating parameter of the gravitational wave from core-collapse supernovae - [[2304.1267](https://arxiv.org/abs/2304.1267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.1267.md)].
- Neural Volumetric Memory for Visual Locomotion Control - [[2304.01201](https://arxiv.org/abs/2304.01201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01201.md)].
- Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data - [[2304.01196](https://arxiv.org/abs/2304.01196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01196.md)].
- Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement - [[2304.01195](https://arxiv.org/abs/2304.01195)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01195.md)].
- Burstormer: Burst Image Restoration and Enhancement Transformer - [[2304.01194](https://arxiv.org/abs/2304.01194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01194.md)].
- Navigating to Objects Specified by Images - [[2304.01192](https://arxiv.org/abs/2304.01192)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01192.md)].
- Generative Multiplane Neural Radiance for 3D-Aware Image Generation - [[2304.01172](https://arxiv.org/abs/2304.01172)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01172.md)].
- Generative Diffusion Prior for Unified Image Restoration and Enhancement - [[2304.01247](https://arxiv.org/abs/2304.01247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01247.md)].
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model - [[2304.01116](https://arxiv.org/abs/2304.01116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01116.md)].
- DivClust: Controlling Diversity in Deep Clustering - [[2304.01042](https://arxiv.org/abs/2304.01042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.01042.md)].
- Temporal Enhanced Training of Multi-view 3D Object Detector via Historical Object Prediction - [[2304.00967](https://arxiv.org/abs/2304.00967)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00967.md)].
- Astroformer: More Data Might not be all you need for Classification - [[2304.05350](https://arxiv.org/abs/2304.05350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.05350.md)].
- Few-shot Fine-tuning is All You Need for Source-free Domain Adaptation - [[2304.00792](https://arxiv.org/abs/2304.00792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00792.md)].
- Multi-Modal Representation Learning with Text-Driven Soft Masks - [[2304.00719](https://arxiv.org/abs/2304.00719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00719.md)].
- 3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds - [[2304.00690](https://arxiv.org/abs/2304.00690)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00690.md)].
- Metrological detection of multipartite entanglement through dynamical symmetries - [[2304.0564](https://arxiv.org/abs/2304.0564)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.0564.md)].
- UniDexGrasp++: Improving Dexterous Grasping Policy Learning via Geometry-aware Curriculum and Iterative Generalist-Specialist Learning - [[2304.00464](https://arxiv.org/abs/2304.00464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00464.md)].
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild - [[2304.00451](https://arxiv.org/abs/2304.00451)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00451.md)].
- When Crowd Meets Persona: Creating a Large-Scale Open-Domain Persona Dialogue Corpus - [[2304.00350](https://arxiv.org/abs/2304.00350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00350.md)].
- Devil is in the Queries: Advancing Mask Transformers for Real-world Medical Image Segmentation and Out-of-Distribution Localization - [[2304.00212](https://arxiv.org/abs/2304.00212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00212.md)].

### March 2023

- Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation - [[2304.00152](https://arxiv.org/abs/2304.00152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00152.md)].
- On stochastic MPC formulations with closed-loop guarantees: Analysis and a unifying framework - [[2304.0069](https://arxiv.org/abs/2304.0069)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.0069.md)].
- Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding - [[2304.00058](https://arxiv.org/abs/2304.00058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00058.md)].
- LivePose: Online 3D Reconstruction from Monocular Video with Dynamic Camera Poses - [[2304.00054](https://arxiv.org/abs/2304.00054)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00054.md)].
- Accelerating exploration and representation learning with offline pre-training - [[2304.00046](https://arxiv.org/abs/2304.00046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00046.md)].
- Choose Your Weapon: Survival Strategies for Depressed AI Academics - [[2304.06035](https://arxiv.org/abs/2304.06035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06035.md)].
- A Survey of Large Language Models - [[2303.18223](https://arxiv.org/abs/2303.18223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.18223.md)].
- Assessing Language Model Deployment with Risk Cards - [[2303.18190](https://arxiv.org/abs/2303.18190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.18190.md)].
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction - [[2303.18125](https://arxiv.org/abs/2303.18125)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.18125.md)].
- VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization - [[2303.17968](https://arxiv.org/abs/2303.17968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17968.md)].
- Diffusion Action Segmentation - [[2303.17959](https://arxiv.org/abs/2303.17959)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17959.md)].
- 3D-aware Image Generation using 2D Diffusion Models - [[2303.17905](https://arxiv.org/abs/2303.17905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17905.md)].
- Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning - [[2303.17842](https://arxiv.org/abs/2303.17842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17842.md)].
- Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations - [[2303.17839](https://arxiv.org/abs/2303.17839)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17839.md)].
- Neural Microfacet Fields for Inverse Rendering - [[2303.17806](https://arxiv.org/abs/2303.17806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17806.md)].
- CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition - [[2303.17778](https://arxiv.org/abs/2303.17778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17778.md)].
- CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society - [[2303.17760](https://arxiv.org/abs/2303.17760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17760.md)].
- Optimal Input Gain: All You Need to Supercharge a Feed-Forward Neural Network - [[2303.17732](https://arxiv.org/abs/2303.17732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17732.md)].
- S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit Surfaces - [[2303.17712](https://arxiv.org/abs/2303.17712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17712.md)].
- Self-Refine: Iterative Refinement with Self-Feedback - [[2303.17651](https://arxiv.org/abs/2303.17651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17651.md)].
- SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer - [[2303.17605](https://arxiv.org/abs/2303.17605)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17605.md)].
- TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions - [[2303.17583](https://arxiv.org/abs/2303.17583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17583.md)].
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face - [[2303.17580](https://arxiv.org/abs/2303.17580)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17580.md)].
- Iterative Prompt Learning for Unsupervised Backlit Image Enhancement - [[2303.17569](https://arxiv.org/abs/2303.17569)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17569.md)].
- Whose Opinions Do Language Models Reflect? - [[2303.17548](https://arxiv.org/abs/2303.17548)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17548.md)].
- Language Models can Solve Computer Tasks - [[2303.17491](https://arxiv.org/abs/2303.17491)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17491.md)].
- All You Need Is Sex for Diversity - [[2303.17441](https://arxiv.org/abs/2303.17441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17441.md)].
- WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research - [[2303.17395](https://arxiv.org/abs/2303.17395)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17395.md)].
- Social Biases through the Text-to-Image Generation Lens - [[2304.06034](https://arxiv.org/abs/2304.06034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.06034.md)].
- Mixed Autoencoder for Self-supervised Visual Representation Learning - [[2303.17152](https://arxiv.org/abs/2303.17152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17152.md)].
- NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation - [[2303.17147](https://arxiv.org/abs/2303.17147)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17147.md)].
- ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing - [[2303.17096](https://arxiv.org/abs/2303.17096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.17096.md)].
- AutoAD: Movie Description in Context - [[2303.16899](https://arxiv.org/abs/2303.16899)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16899.md)].
- ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance - [[2303.16894](https://arxiv.org/abs/2303.16894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16894.md)].
- Adaptive Superpixel for Active Learning in Semantic Segmentation - [[2303.16817](https://arxiv.org/abs/2303.16817)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16817.md)].
- TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation - [[2303.16730](https://arxiv.org/abs/2303.16730)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16730.md)].
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment - [[2303.16634](https://arxiv.org/abs/2303.16634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16634.md)].
- Adaptive Spot-Guided Transformer for Consistent Local Feature Matching - [[2303.16624](https://arxiv.org/abs/2303.16624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16624.md)].
- Personalised Language Modelling of Screen Characters Using Rich Metadata Annotations - [[2303.16618](https://arxiv.org/abs/2303.16618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16618.md)].
- Plan4MC: Skill Reinforcement Learning and Planning for Open-World Minecraft Tasks - [[2303.16563](https://arxiv.org/abs/2303.16563)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16563.md)].
- Fair Federated Medical Image Segmentation via Client Contribution Estimation - [[2303.16520](https://arxiv.org/abs/2303.16520)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16520.md)].
- Multi-View Azimuth Stereo via Tangent Space Consistency - [[2303.16447](https://arxiv.org/abs/2303.16447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16447.md)].
- TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs - [[2303.16434](https://arxiv.org/abs/2303.16434)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16434.md)].
- ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models - [[2303.16421](https://arxiv.org/abs/2303.16421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16421.md)].
- Are Data-driven Explanations Robust against Out-of-distribution Data? - [[2303.16390](https://arxiv.org/abs/2303.16390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16390.md)].
- Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples - [[2303.16270](https://arxiv.org/abs/2303.16270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16270.md)].
- Your Diffusion Model is Secretly a Zero-Shot Classifier - [[2303.16203](https://arxiv.org/abs/2303.16203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16203.md)].
- ASIC: Aligning Sparse in-the-wild Image Collections - [[2303.16201](https://arxiv.org/abs/2303.16201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16201.md)].
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention - [[2303.16199](https://arxiv.org/abs/2303.16199)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16199.md)].
- SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis - [[2303.16196](https://arxiv.org/abs/2303.16196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16196.md)].
- Large-scale Training Data Search for Object Re-identification - [[2303.16186](https://arxiv.org/abs/2303.16186)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16186.md)].
- Unmasked Teacher: Towards Training-Efficient Video Foundation Models - [[2303.16058](https://arxiv.org/abs/2303.16058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.16058.md)].
- F$^{2}$-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories - [[2303.15951](https://arxiv.org/abs/2303.15951)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15951.md)].
- Unify, Align and Refine: Multi-Level Semantic Alignment for Radiology Report Generation - [[2303.15932](https://arxiv.org/abs/2303.15932)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15932.md)].
- Mask-Free Video Instance Segmentation - [[2303.15904](https://arxiv.org/abs/2303.15904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15904.md)].
- X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance - [[2303.15764](https://arxiv.org/abs/2303.15764)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15764.md)].
- Foundation Models and Fair Use - [[2303.15715](https://arxiv.org/abs/2303.15715)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15715.md)].
- DisWOT: Student Architecture Search for Distillation WithOut Training - [[2303.15678](https://arxiv.org/abs/2303.15678)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15678.md)].
- On the Creativity of Large Language Models - [[2304.00008](https://arxiv.org/abs/2304.00008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2304.00008.md)].
- SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications - [[2303.15446](https://arxiv.org/abs/2303.15446)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15446.md)].
- Quantum Multi-Model Fitting - [[2303.15444](https://arxiv.org/abs/2303.15444)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15444.md)].
- GeoNet: Benchmarking Unsupervised Adaptation across Geographies - [[2303.15443](https://arxiv.org/abs/2303.15443)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15443.md)].
- Zero-shot Model Diagnosis - [[2303.15441](https://arxiv.org/abs/2303.15441)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15441.md)].
- Anti-DreamBooth: Protecting users from personalized text-to-image synthesis - [[2303.15433](https://arxiv.org/abs/2303.15433)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15433.md)].
- Learning to Zoom and Unzoom - [[2303.15390](https://arxiv.org/abs/2303.15390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15390.md)].
- Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need - [[2303.15256](https://arxiv.org/abs/2303.15256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15256.md)].
- Zero-Shot Composed Image Retrieval with Textual Inversion - [[2303.15247](https://arxiv.org/abs/2303.15247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15247.md)].
- SimpleNet: A Simple Network for Image Anomaly Detection and Localization - [[2303.15140](https://arxiv.org/abs/2303.15140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15140.md)].
- UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View - [[2303.15083](https://arxiv.org/abs/2303.15083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15083.md)].
- Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations - [[2303.15065](https://arxiv.org/abs/2303.15065)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15065.md)].
- DyGait: Exploiting Dynamic Representations for High-performance Gait Recognition - [[2303.14953](https://arxiv.org/abs/2303.14953)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14953.md)].
- Text is All You Need: Personalizing ASR Models using Controllable Speech Synthesis - [[2303.14885](https://arxiv.org/abs/2303.14885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14885.md)].
- Label-Free Liver Tumor Segmentation - [[2303.14869](https://arxiv.org/abs/2303.14869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14869.md)].
- DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion - [[2303.14863](https://arxiv.org/abs/2303.14863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14863.md)].
- Frame Flexible Network - [[2303.14817](https://arxiv.org/abs/2303.14817)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14817.md)].
- BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning - [[2303.14773](https://arxiv.org/abs/2303.14773)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14773.md)].
- Disentangling Writer and Character Styles for Handwriting Generation - [[2303.14736](https://arxiv.org/abs/2303.14736)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14736.md)].
- Natural Language Reasoning, A Survey - [[2303.14725](https://arxiv.org/abs/2303.14725)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14725.md)].
- CelebV-Text: A Large-Scale Facial Text-Video Dataset - [[2303.14717](https://arxiv.org/abs/2303.14717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14717.md)].
- Learning Versatile 3D Shape Generation with Improved AR Models - [[2303.14700](https://arxiv.org/abs/2303.14700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14700.md)].
- Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs - [[2303.14672](https://arxiv.org/abs/2303.14672)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14672.md)].
- Learning video embedding space with Natural Language Supervision - [[2303.14584](https://arxiv.org/abs/2303.14584)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14584.md)].
- SUDS: Scalable Urban Dynamic Scenes - [[2303.14536](https://arxiv.org/abs/2303.14536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14536.md)].
- Compacting Binary Neural Networks by Sparse Kernel Selection - [[2303.14470](https://arxiv.org/abs/2303.14470)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14470.md)].
- NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects - [[2303.14435](https://arxiv.org/abs/2303.14435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14435.md)].
- Human Preference Score: Better Aligning Text-to-Image Models with Human Preference - [[2303.14420](https://arxiv.org/abs/2303.14420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14420.md)].
- VL-SAT: Visual-Linguistic Semantics Assisted Training for 3D Semantic Scene Graph Prediction in Point Cloud - [[2303.14408](https://arxiv.org/abs/2303.14408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14408.md)].
- DoNet: Deep De-overlapping Network for Cytology Instance Segmentation - [[2303.14373](https://arxiv.org/abs/2303.14373)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14373.md)].
- IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients - [[2303.14242](https://arxiv.org/abs/2303.14242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14242.md)].
- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization - [[2303.14189](https://arxiv.org/abs/2303.14189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14189.md)].
- UrbanGIRAFFE: Representing Urban Scenes as Compositional Generative Neural Feature Fields - [[2303.14167](https://arxiv.org/abs/2303.14167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14167.md)].
- Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting - [[2303.14100](https://arxiv.org/abs/2303.14100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14100.md)].
- Poincaré ResNet - [[2303.14027](https://arxiv.org/abs/2303.14027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.14027.md)].
- AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation - [[2303.13953](https://arxiv.org/abs/2303.13953)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13953.md)].
- Robust Test-Time Adaptation in Dynamic Scenarios - [[2303.13899](https://arxiv.org/abs/2303.13899)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13899.md)].
- Compositionality in algorithms for smoothing - [[2303.13865](https://arxiv.org/abs/2303.13865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13865.md)].
- Anomaly Detection under Distribution Shift - [[2303.13845](https://arxiv.org/abs/2303.13845)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13845.md)].
- Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction - [[2303.13796](https://arxiv.org/abs/2303.13796)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13796.md)].
- Progressively Optimized Local Radiance Fields for Robust View Synthesis - [[2303.13791](https://arxiv.org/abs/2303.13791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13791.md)].
- Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers - [[2303.13755](https://arxiv.org/abs/2303.13755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13755.md)].
- Conditional Image-to-Video Generation with Latent Flow Diffusion Models - [[2303.13744](https://arxiv.org/abs/2303.13744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13744.md)].
- How Does Attention Work in Vision Transformers? A Visual Analytics Attempt - [[2303.13731](https://arxiv.org/abs/2303.13731)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13731.md)].
- End-to-End Diffusion Latent Optimization Improves Classifier Guidance - [[2303.13703](https://arxiv.org/abs/2303.13703)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13703.md)].
- Rethinking Domain Generalization for Face Anti-spoofing: Separability and Alignment - [[2303.13662](https://arxiv.org/abs/2303.13662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13662.md)].
- Theoretical and Numerical Analysis of 3D Reconstruction Using Point and Line Incidences - [[2303.13593](https://arxiv.org/abs/2303.13593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13593.md)].
- Towards Solving Fuzzy Tasks with Human Feedback: A Retrospective of the MineRL BASALT 2022 Competition - [[2303.13512](https://arxiv.org/abs/2303.13512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13512.md)].
- DreamBooth3D: Subject-Driven Text-to-3D Generation - [[2303.13508](https://arxiv.org/abs/2303.13508)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13508.md)].
- Chordal Averaging on Flag Manifolds and Its Applications - [[2303.13501](https://arxiv.org/abs/2303.13501)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13501.md)].
- The effectiveness of MAE pre-pretraining for billion-scale pretraining - [[2303.13496](https://arxiv.org/abs/2303.13496)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13496.md)].
- Multi-granularity Interaction Simulation for Unsupervised Interactive Segmentation - [[2303.13399](https://arxiv.org/abs/2303.13399)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13399.md)].
- Zero-guidance Segmentation Using Zero Segment Labels - [[2303.13396](https://arxiv.org/abs/2303.13396)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13396.md)].
- Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration - [[2303.13290](https://arxiv.org/abs/2303.13290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13290.md)].
- Marching-Primitives: Shape Abstraction from Signed Distance Function - [[2303.13190](https://arxiv.org/abs/2303.13190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13190.md)].
- Masked Image Training for Generalizable Deep Image Denoising - [[2303.13132](https://arxiv.org/abs/2303.13132)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13132.md)].
- Orthogonal Annotation Benefits Barely-supervised Medical Image Segmentation - [[2303.13090](https://arxiv.org/abs/2303.13090)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13090.md)].
- ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting - [[2303.13022](https://arxiv.org/abs/2303.13022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13022.md)].
- From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels - [[2303.13005](https://arxiv.org/abs/2303.13005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.13005.md)].
- SHERF: Generalizable Human NeRF from a Single Image - [[2303.12791](https://arxiv.org/abs/2303.12791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12791.md)].
- FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models - [[2303.12786](https://arxiv.org/abs/2303.12786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12786.md)].
- Spherical Transformer for LiDAR-based 3D Recognition - [[2303.12766](https://arxiv.org/abs/2303.12766)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12766.md)].
- Correlational Image Modeling for Self-Supervised Visual Pre-Training - [[2303.12670](https://arxiv.org/abs/2303.12670)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12670.md)].
- MEGA: Multilingual Evaluation of Generative AI - [[2303.12528](https://arxiv.org/abs/2303.12528)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12528.md)].
- UMC: A Unified Bandwidth-efficient and Multi-resolution based Collaborative Perception Framework - [[2303.12400](https://arxiv.org/abs/2303.12400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12400.md)].
- RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration - [[2303.12384](https://arxiv.org/abs/2303.12384)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12384.md)].
- LD-ZNet: A Latent Diffusion Approach for Text-Based Image Segmentation - [[2303.12343](https://arxiv.org/abs/2303.12343)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12343.md)].
- Make Encoder Great Again in 3D GAN Inversion through Geometry and Occlusion-Aware Encoding - [[2303.12326](https://arxiv.org/abs/2303.12326)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12326.md)].
- NLOS-NeuS: Non-line-of-sight Neural Implicit Surface - [[2303.12280](https://arxiv.org/abs/2303.12280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12280.md)].
- Exploring the Benefits of Visual Prompting in Differential Privacy - [[2303.12247](https://arxiv.org/abs/2303.12247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12247.md)].
- Object Pose Estimation with Statistical Guarantees: Conformal Keypoint Detection and Geometric Uncertainty Propagation - [[2303.12246](https://arxiv.org/abs/2303.12246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12246.md)].
- Two-shot Video Object Segmentation - [[2303.12078](https://arxiv.org/abs/2303.12078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12078.md)].
- VAD: Vectorized Scene Representation for Efficient Autonomous Driving - [[2303.12077](https://arxiv.org/abs/2303.12077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12077.md)].
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes - [[2303.12074](https://arxiv.org/abs/2303.12074)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12074.md)].
- Vox-E: Text-guided Voxel Editing of 3D Objects - [[2303.12048](https://arxiv.org/abs/2303.12048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12048.md)].
- Joint Visual Grounding and Tracking with Natural Language Specification - [[2303.12027](https://arxiv.org/abs/2303.12027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12027.md)].
- Logical Reasoning over Natural Language as Knowledge Representation: A Survey - [[2303.12023](https://arxiv.org/abs/2303.12023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12023.md)].
- NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images - [[2303.12012](https://arxiv.org/abs/2303.12012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.12012.md)].
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models - [[2303.11989](https://arxiv.org/abs/2303.11989)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11989.md)].
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects - [[2303.11963](https://arxiv.org/abs/2303.11963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11963.md)].
- Using Explanations to Guide Models - [[2303.11932](https://arxiv.org/abs/2303.11932)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11932.md)].
- Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection - [[2303.11926](https://arxiv.org/abs/2303.11926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11926.md)].
- Efficient Decision-based Black-box Patch Attacks on Video Recognition - [[2303.11917](https://arxiv.org/abs/2303.11917)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11917.md)].
- Solving Oscillation Problem in Post-Training Quantization Through a Theoretical Perspective - [[2303.11906](https://arxiv.org/abs/2303.11906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11906.md)].
- 3D Human Mesh Estimation from Virtual Markers - [[2303.11726](https://arxiv.org/abs/2303.11726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11726.md)].
- Implicit Neural Representation for Cooperative Low-light Image Enhancement - [[2303.11722](https://arxiv.org/abs/2303.11722)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11722.md)].
- A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need? - [[2303.11717](https://arxiv.org/abs/2303.11717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11717.md)].
- DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models - [[2303.11681](https://arxiv.org/abs/2303.11681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11681.md)].
- BoxSnake: Polygonal Instance Segmentation with Box Supervision - [[2303.11630](https://arxiv.org/abs/2303.11630)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11630.md)].
- TMA: Temporal Motion Aggregation for Event-based Optical Flow - [[2303.11629](https://arxiv.org/abs/2303.11629)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11629.md)].
- LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models - [[2303.11589](https://arxiv.org/abs/2303.11589)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11589.md)].
- Boundary Unlearning - [[2303.11570](https://arxiv.org/abs/2303.11570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11570.md)].
- Texture Learning Domain Randomization for Domain Generalized Segmentation - [[2303.11546](https://arxiv.org/abs/2303.11546)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11546.md)].
- PRISE: Demystifying Deep Lucas-Kanade with Strongly Star-Convex Constraints for Multimodel Image Alignment - [[2303.11526](https://arxiv.org/abs/2303.11526)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11526.md)].
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings - [[2303.11502](https://arxiv.org/abs/2303.11502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11502.md)].
- Polynomial Implicit Neural Representations For Large Diverse Datasets - [[2303.11424](https://arxiv.org/abs/2303.11424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11424.md)].
- EPiC: Ensemble of Partial Point Clouds for Robust Classification - [[2303.11419](https://arxiv.org/abs/2303.11419)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11419.md)].
- Stable Bias: Analyzing Societal Representations in Diffusion Models - [[2303.11408](https://arxiv.org/abs/2303.11408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11408.md)].
- eP-ALM: Efficient Perceptual Augmentation of Language Models - [[2303.11403](https://arxiv.org/abs/2303.11403)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11403.md)].
- MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action - [[2303.11381](https://arxiv.org/abs/2303.11381)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11381.md)].
- Reflexion: Language Agents with Verbal Reinforcement Learning - [[2303.11366](https://arxiv.org/abs/2303.11366)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11366.md)].
- Sound Localization from Motion: Jointly Learning Sound Direction and Camera Rotation - [[2303.11329](https://arxiv.org/abs/2303.11329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11329.md)].
- Zero-1-to-3: Zero-shot One Image to 3D Object - [[2303.11328](https://arxiv.org/abs/2303.11328)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11328.md)].
- Open-vocabulary Panoptic Segmentation with Embedding Modulation - [[2303.11324](https://arxiv.org/abs/2303.11324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11324.md)].
- SVDiff: Compact Parameter Space for Diffusion Fine-Tuning - [[2303.11305](https://arxiv.org/abs/2303.11305)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11305.md)].
- HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details - [[2303.11225](https://arxiv.org/abs/2303.11225)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11225.md)].
- Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation - [[2303.11203](https://arxiv.org/abs/2303.11203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11203.md)].
- Robustifying Token Attention for Vision Transformers - [[2303.11126](https://arxiv.org/abs/2303.11126)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11126.md)].
- SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage - [[2303.11114](https://arxiv.org/abs/2303.11114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11114.md)].
- EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation - [[2303.11089](https://arxiv.org/abs/2303.11089)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11089.md)].
- Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation - [[2303.11057](https://arxiv.org/abs/2303.11057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11057.md)].
- Learning Optical Flow from Event Camera with Rendered Dataset - [[2303.11011](https://arxiv.org/abs/2303.11011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.11011.md)].
- EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning - [[2303.10876](https://arxiv.org/abs/2303.10876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10876.md)].
- Dynamic Documentation for AI Systems - [[2303.10854](https://arxiv.org/abs/2303.10854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10854.md)].
- Ref-NeuS: Ambiguity-Reduced Neural Implicit Surface Learning for Multi-View Reconstruction with Reflection - [[2303.10840](https://arxiv.org/abs/2303.10840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10840.md)].
- NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping - [[2303.10709](https://arxiv.org/abs/2303.10709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10709.md)].
- Compatibility of Fundamental Matrices for Complete Viewing Graphs - [[2303.10658](https://arxiv.org/abs/2303.10658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10658.md)].
- CLIP4MC: An RL-Friendly Vision-Language Model for Minecraft - [[2303.10571](https://arxiv.org/abs/2303.10571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10571.md)].
- Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning - [[2303.10475](https://arxiv.org/abs/2303.10475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10475.md)].
- Spatial-Aware Token for Weakly Supervised Object Localization - [[2303.10438](https://arxiv.org/abs/2303.10438)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10438.md)].
- Grounding 3D Object Affordance from 2D Interactions in Images - [[2303.10437](https://arxiv.org/abs/2303.10437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10437.md)].
- DialogPaint: A Dialog-based Image Editing Model - [[2303.10073](https://arxiv.org/abs/2303.10073)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10073.md)].
- No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier - [[2303.10058](https://arxiv.org/abs/2303.10058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.10058.md)].
- Adversarial Counterfactual Visual Explanations - [[2303.09962](https://arxiv.org/abs/2303.09962)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09962.md)].
- Leaping Into Memories: Space-Time Deep Feature Synthesis - [[2303.09941](https://arxiv.org/abs/2303.09941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09941.md)].
- TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation - [[2303.09870](https://arxiv.org/abs/2303.09870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09870.md)].
- DiffusionRet: Generative Text-Video Retrieval with Diffusion Model - [[2303.09867](https://arxiv.org/abs/2303.09867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09867.md)].
- FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model - [[2303.09833](https://arxiv.org/abs/2303.09833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09833.md)].
- Hierarchical Prior Mining for Non-local Multi-View Stereo - [[2303.09758](https://arxiv.org/abs/2303.09758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09758.md)].
- Video Action Recognition with Attentive Semantic Units - [[2303.09756](https://arxiv.org/abs/2303.09756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09756.md)].
- CoLT5: Faster Long-Range Transformers with Conditional Computation - [[2303.09752](https://arxiv.org/abs/2303.09752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09752.md)].
- ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices - [[2303.09730](https://arxiv.org/abs/2303.09730)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09730.md)].
- CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos - [[2303.09713](https://arxiv.org/abs/2303.09713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09713.md)].
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding - [[2303.09706](https://arxiv.org/abs/2303.09706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09706.md)].
- Efficient Computation Sharing for Multi-Task Visual Scene Understanding - [[2303.09663](https://arxiv.org/abs/2303.09663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09663.md)].
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution - [[2303.09650](https://arxiv.org/abs/2303.09650)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09650.md)].
- DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion - [[2303.09604](https://arxiv.org/abs/2303.09604)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09604.md)].
- Efficient Diffusion Training via Min-SNR Weighting Strategy - [[2303.09556](https://arxiv.org/abs/2303.09556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09556.md)].
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - [[2303.09535](https://arxiv.org/abs/2303.09535)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09535.md)].
- Among Us: Adversarially Robust Collaborative Perception by Consensus - [[2303.09495](https://arxiv.org/abs/2303.09495)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09495.md)].
- DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars - [[2303.09375](https://arxiv.org/abs/2303.09375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09375.md)].
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model - [[2303.09268](https://arxiv.org/abs/2303.09268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09268.md)].
- Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less - [[2303.09914](https://arxiv.org/abs/2303.09914)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09914.md)].
- Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction - [[2303.09224](https://arxiv.org/abs/2303.09224)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09224.md)].
- Global Knowledge Calibration for Fast Open-Vocabulary Segmentation - [[2303.09181](https://arxiv.org/abs/2303.09181)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09181.md)].
- Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation - [[2303.09152](https://arxiv.org/abs/2303.09152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09152.md)].
- Robust Evaluation of Diffusion-Based Adversarial Purification - [[2303.09051](https://arxiv.org/abs/2303.09051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09051.md)].
- HE is all you need: Compressing FHE Ciphertexts using Additive HE - [[2303.09043](https://arxiv.org/abs/2303.09043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09043.md)].
- Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D Imitation - [[2303.09036](https://arxiv.org/abs/2303.09036)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09036.md)].
- ART: Automatic multi-step reasoning and tool-use for large language models - [[2303.09014](https://arxiv.org/abs/2303.09014)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.09014.md)].
- Unified Visual Relationship Detection with Vision and Language Models - [[2303.08998](https://arxiv.org/abs/2303.08998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08998.md)].
- Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement - [[2303.08983](https://arxiv.org/abs/2303.08983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08983.md)].
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge - [[2303.08914](https://arxiv.org/abs/2303.08914)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08914.md)].
- Stochastic Segmentation with Conditional Categorical Diffusion Models - [[2303.08888](https://arxiv.org/abs/2303.08888)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08888.md)].
- BiFormer: Vision Transformer with Bi-Level Routing Attention - [[2303.08810](https://arxiv.org/abs/2303.08810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08810.md)].
- PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining - [[2303.08789](https://arxiv.org/abs/2303.08789)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08789.md)].
- Re-ReND: Real-time Rendering of NeRFs across Devices - [[2303.08717](https://arxiv.org/abs/2303.08717)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08717.md)].
- Bi-directional Distribution Alignment for Transductive Zero-Shot Learning - [[2303.08698](https://arxiv.org/abs/2303.08698)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08698.md)].
- Making Vision Transformers Efficient from A Token Sparsification View - [[2303.08685](https://arxiv.org/abs/2303.08685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08685.md)].
- Can Large Language Models design a Robot? - [[2303.15324](https://arxiv.org/abs/2303.15324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.15324.md)].
- Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models - [[2303.08440](https://arxiv.org/abs/2303.08440)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08440.md)].
- VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation - [[2303.08340](https://arxiv.org/abs/2303.08340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08340.md)].
- Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting - [[2303.08331](https://arxiv.org/abs/2303.08331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08331.md)].
- MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection from Point Cloud Sequences - [[2303.08316](https://arxiv.org/abs/2303.08316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08316.md)].
- SpaceEvo: Hardware-Friendly Search Space Design for Efficient INT8 Inference - [[2303.08308](https://arxiv.org/abs/2303.08308)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08308.md)].
- Chat with the Environment: Interactive Multimodal Perception Using Large Language Models - [[2303.08268](https://arxiv.org/abs/2303.08268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08268.md)].
- Rotation-Invariant Transformer for Point Cloud Matching - [[2303.08231](https://arxiv.org/abs/2303.08231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08231.md)].
- Diversity-Aware Meta Visual Prompting - [[2303.08138](https://arxiv.org/abs/2303.08138)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08138.md)].
- Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations - [[2303.08135](https://arxiv.org/abs/2303.08135)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08135.md)].
- Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis - [[2303.08134](https://arxiv.org/abs/2303.08134)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08134.md)].
- A Simple Framework for Open-Vocabulary Segmentation and Detection - [[2303.08131](https://arxiv.org/abs/2303.08131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08131.md)].
- ViperGPT: Visual Inference via Python Execution for Reasoning - [[2303.08128](https://arxiv.org/abs/2303.08128)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08128.md)].
- Editing Implicit Assumptions in Text-to-Image Diffusion Models - [[2303.08084](https://arxiv.org/abs/2303.08084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08084.md)].
- Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models - [[2303.08010](https://arxiv.org/abs/2303.08010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.08010.md)].
- Adaptive Rotated Convolution for Rotated Object Detection - [[2303.07820](https://arxiv.org/abs/2303.07820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07820.md)].
- MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation - [[2303.07815](https://arxiv.org/abs/2303.07815)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07815.md)].
- ICICLE: Interpretable Class Incremental Continual Learning - [[2303.07811](https://arxiv.org/abs/2303.07811)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07811.md)].
- USAGE: A Unified Seed Area Generation Paradigm for Weakly Supervised Semantic Segmentation - [[2303.07806](https://arxiv.org/abs/2303.07806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07806.md)].
- Good Neighbors Are All You Need for Chinese Grapheme-to-Phoneme Conversion - [[2303.07726](https://arxiv.org/abs/2303.07726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07726.md)].
- Query2doc: Query Expansion with Large Language Models - [[2303.07678](https://arxiv.org/abs/2303.07678)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07678.md)].
- One scalar is all you need -- absolute depth estimation using monocular self-supervision - [[2303.07662](https://arxiv.org/abs/2303.07662)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07662.md)].
- NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from Multi-view Images - [[2303.07653](https://arxiv.org/abs/2303.07653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07653.md)].
- RE-MOVE: An Adaptive Policy Design Approach for Dynamic Environments via Language-Based Feedback - [[2303.07622](https://arxiv.org/abs/2303.07622)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07622.md)].
- The Life Cycle of Knowledge in Big Language Models: A Survey - [[2303.07616](https://arxiv.org/abs/2303.07616)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07616.md)].
- WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis - [[2303.07543](https://arxiv.org/abs/2303.07543)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07543.md)].
- Audio Visual Language Maps for Robot Navigation - [[2303.07522](https://arxiv.org/abs/2303.07522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07522.md)].
- TriDet: Temporal Action Detection with Relative Boundary Modeling - [[2303.07347](https://arxiv.org/abs/2303.07347)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07347.md)].
- Erasing Concepts from Diffusion Models - [[2303.07345](https://arxiv.org/abs/2303.07345)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07345.md)].
- Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need - [[2303.07338](https://arxiv.org/abs/2303.07338)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07338.md)].
- PoseExaminer: Automated Testing of Out-of-Distribution Robustness in Human Pose and Shape Estimation - [[2303.07337](https://arxiv.org/abs/2303.07337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07337.md)].
- Align and Attend: Multimodal Summarization with Dual Contrastive Losses - [[2303.07284](https://arxiv.org/abs/2303.07284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07284.md)].
- Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images - [[2303.07274](https://arxiv.org/abs/2303.07274)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07274.md)].
- Efficient Semantic Segmentation by Altering Resolutions for Compressed Videos - [[2303.07224](https://arxiv.org/abs/2303.07224)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07224.md)].
- Modality-Agnostic Debiasing for Single Domain Generalization - [[2303.07123](https://arxiv.org/abs/2303.07123)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07123.md)].
- Upcycling Models under Domain and Category Shift - [[2303.07110](https://arxiv.org/abs/2303.07110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07110.md)].
- Prototype-based Embedding Network for Scene Graph Generation - [[2303.07096](https://arxiv.org/abs/2303.07096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07096.md)].
- MSINet: Twins Contrastive Search of Multi-Scale Interaction for Object ReID - [[2303.07065](https://arxiv.org/abs/2303.07065)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07065.md)].
- TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation - [[2303.06937](https://arxiv.org/abs/2303.06937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06937.md)].
- NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer - [[2303.06919](https://arxiv.org/abs/2303.06919)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06919.md)].
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration - [[2303.06885](https://arxiv.org/abs/2303.06885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06885.md)].
- Progressive Open Space Expansion for Open-Set Model Attribution - [[2303.06877](https://arxiv.org/abs/2303.06877)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06877.md)].
- Adaptive Data-Free Quantization - [[2303.06869](https://arxiv.org/abs/2303.06869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06869.md)].
- FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU - [[2303.06865](https://arxiv.org/abs/2303.06865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06865.md)].
- Learning Distortion Invariant Representation for Image Restoration from A Causality Perspective - [[2303.06859](https://arxiv.org/abs/2303.06859)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06859.md)].
- Dynamic Neural Network for Multi-Task Learning Searching across Diverse Network Topologies - [[2303.06856](https://arxiv.org/abs/2303.06856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06856.md)].
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion - [[2303.06840](https://arxiv.org/abs/2303.06840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06840.md)].
- Transformer-based Planning for Symbolic Regression - [[2303.06833](https://arxiv.org/abs/2303.06833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06833.md)].
- Traj-MAE: Masked Autoencoders for Trajectory Prediction - [[2303.06697](https://arxiv.org/abs/2303.06697)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06697.md)].
- Self-planning Code Generation with Large Language Models - [[2303.06689](https://arxiv.org/abs/2303.06689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06689.md)].
- Iterative Geometry Encoding Volume for Stereo Matching - [[2303.06615](https://arxiv.org/abs/2303.06615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06615.md)].
- Multi-metrics adaptively identifies backdoors in Federated learning - [[2303.06601](https://arxiv.org/abs/2303.06601)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06601.md)].
- ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched Visual Descriptions - [[2303.06594](https://arxiv.org/abs/2303.06594)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06594.md)].
- Large Language Models Know Your Contextual Search Intent: A Prompting Framework for Conversational Search - [[2303.06573](https://arxiv.org/abs/2303.06573)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06573.md)].
- Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models - [[2303.06571](https://arxiv.org/abs/2303.06571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06571.md)].
- ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design - [[2303.07839](https://arxiv.org/abs/2303.07839)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.07839.md)].
- FAC: 3D Representation Learning via Foreground Aware Feature Contrast - [[2303.06388](https://arxiv.org/abs/2303.06388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06388.md)].
- DeltaEdit: Exploring Text-free Training for Text-Driven Image Manipulation - [[2303.06285](https://arxiv.org/abs/2303.06285)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06285.md)].
- Task and Motion Planning with Large Language Models for Object Rearrangement - [[2303.06247](https://arxiv.org/abs/2303.06247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06247.md)].
- Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook - [[2303.06223](https://arxiv.org/abs/2303.06223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06223.md)].
- SemARFlow: Injecting Semantics into Unsupervised Optical Flow Estimation for Autonomous Driving - [[2303.06209](https://arxiv.org/abs/2303.06209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06209.md)].
- Overwriting Pretrained Bias with Finetuning Data - [[2303.06167](https://arxiv.org/abs/2303.06167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06167.md)].
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces - [[2303.06146](https://arxiv.org/abs/2303.06146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06146.md)].
- MVImgNet: A Large-scale Dataset of Multi-view Images - [[2303.06042](https://arxiv.org/abs/2303.06042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.06042.md)].
- Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation - [[2303.05983](https://arxiv.org/abs/2303.05983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05983.md)].
- Structural Multiplane Image: Bridging Neural View Synthesis and 3D Reconstruction - [[2303.05937](https://arxiv.org/abs/2303.05937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05937.md)].
- GECCO: Geometrically-Conditioned Point Diffusion Models - [[2303.05916](https://arxiv.org/abs/2303.05916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05916.md)].
- Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection - [[2303.05892](https://arxiv.org/abs/2303.05892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05892.md)].
- TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets - [[2303.05762](https://arxiv.org/abs/2303.05762)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05762.md)].
- GameFormer: Game-theoretic Modeling and Learning of Transformer-based Interactive Prediction and Planning for Autonomous Driving - [[2303.05760](https://arxiv.org/abs/2303.05760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05760.md)].
- Hardware Acceleration of Neural Graphics - [[2303.05735](https://arxiv.org/abs/2303.05735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05735.md)].
- 3D Cinemagraphy from a Single Image - [[2303.05724](https://arxiv.org/abs/2303.05724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05724.md)].
- Context-Based Trit-Plane Coding for Progressive Image Compression - [[2303.05715](https://arxiv.org/abs/2303.05715)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05715.md)].
- Inducing Neural Collapse to a Fixed Hierarchy-Aware Frame for Reducing Mistake Severity - [[2303.05689](https://arxiv.org/abs/2303.05689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05689.md)].
- Mimic before Reconstruct: Enhancing Masked Autoencoders with Feature Mimicking - [[2303.05475](https://arxiv.org/abs/2303.05475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05475.md)].
- Rethinking Range View Representation for LiDAR Segmentation - [[2303.05367](https://arxiv.org/abs/2303.05367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05367.md)].
- 3D Video Loops from Asynchronous Input - [[2303.05312](https://arxiv.org/abs/2303.05312)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05312.md)].
- From Visual Prompt Learning to Zero-Shot Transfer: Mapping Is All You Need - [[2303.05266](https://arxiv.org/abs/2303.05266)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05266.md)].
- Masked Image Modeling with Local Multi-Scale Reconstruction - [[2303.05251](https://arxiv.org/abs/2303.05251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05251.md)].
- Revisiting Rotation Averaging: Uncertainties and Robust Losses - [[2303.05195](https://arxiv.org/abs/2303.05195)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05195.md)].
- Efficient Transformer-based 3D Object Detection with Dynamic Token Halting - [[2303.05078](https://arxiv.org/abs/2303.05078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05078.md)].
- Identification of Systematic Errors of Image Classifiers on Rare Subgroups - [[2303.05072](https://arxiv.org/abs/2303.05072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05072.md)].
- MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors - [[2303.05071](https://arxiv.org/abs/2303.05071)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05071.md)].
- ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction - [[2303.05063](https://arxiv.org/abs/2303.05063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05063.md)].
- X-Pruner: eXplainable Pruning for Vision Transformers - [[2303.04935](https://arxiv.org/abs/2303.04935)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04935.md)].
- CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation - [[2303.04869](https://arxiv.org/abs/2303.04869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04869.md)].
- Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models - [[2303.04803](https://arxiv.org/abs/2303.04803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04803.md)].
- Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference - [[2303.04673](https://arxiv.org/abs/2303.04673)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04673.md)].
- Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models - [[2303.04671](https://arxiv.org/abs/2303.04671)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04671.md)].
- DNBP: Differentiable Nonparametric Belief Propagation - [[2303.04616](https://arxiv.org/abs/2303.04616)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04616.md)].
- CUDA: Convolution-based Unlearnable Datasets - [[2303.04278](https://arxiv.org/abs/2303.04278)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04278.md)].
- Where We Are and What We're Looking At: Query Based Worldwide Image Geo-localization Using Hierarchies and Scenes - [[2303.04249](https://arxiv.org/abs/2303.04249)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04249.md)].
- Foundation Models for Decision Making: Problems, Methods, and Opportunities - [[2303.04129](https://arxiv.org/abs/2303.04129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04129.md)].
- Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding - [[2303.04077](https://arxiv.org/abs/2303.04077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.04077.md)].
- DLT: Conditioned layout generation with Joint Discrete-Continuous Diffusion Layout Transformer - [[2303.03755](https://arxiv.org/abs/2303.03755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03755.md)].
- MOSO: Decomposing MOtion, Scene and Object for Video Prediction - [[2303.03684](https://arxiv.org/abs/2303.03684)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03684.md)].
- Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks - [[2303.03667](https://arxiv.org/abs/2303.03667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03667.md)].
- LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion - [[2303.03595](https://arxiv.org/abs/2303.03595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03595.md)].
- Can an Embodied Agent Find Your "Cat-shaped Mug"? LLM-Based Zero-Shot Object Navigation - [[2303.03480](https://arxiv.org/abs/2303.03480)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03480.md)].
- PaLM-E: An Embodied Multimodal Language Model - [[2303.03378](https://arxiv.org/abs/2303.03378)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03378.md)].
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning - [[2303.03323](https://arxiv.org/abs/2303.03323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03323.md)].
- MACARONS: Mapping And Coverage Anticipation with RGB Online Self-Supervision - [[2303.03315](https://arxiv.org/abs/2303.03315)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03315.md)].
- Continuous Sign Language Recognition with Correlation Network - [[2303.03202](https://arxiv.org/abs/2303.03202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03202.md)].
- Masked Images Are Counterfactual Samples for Robust Fine-tuning - [[2303.03052](https://arxiv.org/abs/2303.03052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03052.md)].
- UniHCP: A Unified Model for Human-Centric Perceptions - [[2303.02936](https://arxiv.org/abs/2303.02936)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02936.md)].
- LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models - [[2303.02927](https://arxiv.org/abs/2303.02927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02927.md)].
- DeepMAD: Mathematical Architecture Design for Deep Convolutional Neural Network - [[2303.02165](https://arxiv.org/abs/2303.02165)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02165.md)].
- Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes - [[2303.02760](https://arxiv.org/abs/2303.02760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02760.md)].
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling - [[2303.02700](https://arxiv.org/abs/2303.02700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02700.md)].
- Super-Resolution Neural Operator - [[2303.02584](https://arxiv.org/abs/2303.02584)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02584.md)].
- Prismer: A Vision-Language Model with An Ensemble of Experts - [[2303.02506](https://arxiv.org/abs/2303.02506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02506.md)].
- CapDet: Unifying Dense Captioning and Open-World Detection Pretraining - [[2303.02489](https://arxiv.org/abs/2303.02489)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02489.md)].
- FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks - [[2303.02483](https://arxiv.org/abs/2303.02483)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02483.md)].
- ConZIC: Controllable Zero-shot Image Captioning by Sampling-Based Polishing - [[2303.02437](https://arxiv.org/abs/2303.02437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02437.md)].
- NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction - [[2303.02375](https://arxiv.org/abs/2303.02375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02375.md)].
- Decompose, Adjust, Compose: Effective Normalization by Playing with Frequency for Domain Generalization - [[2303.02328](https://arxiv.org/abs/2303.02328)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02328.md)].
- MathPrompter: Mathematical Reasoning using Large Language Models - [[2303.05398](https://arxiv.org/abs/2303.05398)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.05398.md)].
- Technical Report on: Tripedal Dynamic Gaits for a Quadruped Robot - [[2303.02280](https://arxiv.org/abs/2303.02280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02280.md)].
- Unleashing Text-to-Image Diffusion Models for Visual Perception - [[2303.02153](https://arxiv.org/abs/2303.02153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02153.md)].
- Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners - [[2303.02151](https://arxiv.org/abs/2303.02151)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.02151.md)].
- Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization - [[2303.03108](https://arxiv.org/abs/2303.03108)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.03108.md)].
- EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization - [[2303.01904](https://arxiv.org/abs/2303.01904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01904.md)].
- Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering - [[2303.01903](https://arxiv.org/abs/2303.01903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01903.md)].
- Generative Diffusions in Augmented Spaces: A Complete Recipe - [[2303.01748](https://arxiv.org/abs/2303.01748)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01748.md)].
- Near Optimal Memory-Regret Tradeoff for Online Learning - [[2303.01673](https://arxiv.org/abs/2303.01673)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01673.md)].
- WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for Whisper-based Speech Interactions - [[2303.01639](https://arxiv.org/abs/2303.01639)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01639.md)].
- A Meta-Learning Approach to Predicting Performance and Data Requirements - [[2303.01598](https://arxiv.org/abs/2303.01598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01598.md)].
- Improving GAN Training via Feature Space Shrinkage - [[2303.01559](https://arxiv.org/abs/2303.01559)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01559.md)].
- First Order Quantum Phase Transition in the Hybrid Metal-Mott Insulator Transition Metal Dichalcogenide 4Hb-TaS2 - [[2303.01447](https://arxiv.org/abs/2303.01447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01447.md)].
- Isotopic effects in molecular attosecond photoelectron interferometry - [[2303.01329](https://arxiv.org/abs/2303.01329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01329.md)].
- Token Contrast for Weakly-Supervised Semantic Segmentation - [[2303.01267](https://arxiv.org/abs/2303.01267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01267.md)].
- MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering - [[2303.01239](https://arxiv.org/abs/2303.01239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01239.md)].
- Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression - [[2303.01052](https://arxiv.org/abs/2303.01052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01052.md)].
- Eulerian-Lagrangian particle-based model for diffusional growth for the better parameterization of ISM clouds: A road map for improving climate model through small-scale model using observations - [[2303.00987](https://arxiv.org/abs/2303.00987)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00987.md)].
- Image Labels Are All You Need for Coarse Seagrass Segmentation - [[2303.00973](https://arxiv.org/abs/2303.00973)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00973.md)].
- Disentangling Orthogonal Planes for Indoor Panoramic Room Layout Estimation with Cross-Scale Distortion Awareness - [[2303.00971](https://arxiv.org/abs/2303.00971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00971.md)].
- UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse Proposal Generation and Goal-Conditioned Policy - [[2303.00938](https://arxiv.org/abs/2303.00938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00938.md)].
- Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation - [[2303.00914](https://arxiv.org/abs/2303.00914)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00914.md)].
- Open-World Object Manipulation using Pre-trained Vision-Language Models - [[2303.00905](https://arxiv.org/abs/2303.00905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00905.md)].
- Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision - [[2303.00885](https://arxiv.org/abs/2303.00885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00885.md)].
- Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control - [[2303.00855](https://arxiv.org/abs/2303.00855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00855.md)].
- UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers - [[2303.00807](https://arxiv.org/abs/2303.00807)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00807.md)].
- WhisperX: Time-Accurate Speech Transcription of Long-Form Audio - [[2303.00747](https://arxiv.org/abs/2303.00747)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00747.md)].
- IPCC-TP: Utilizing Incremental Pearson Correlation Coefficient for Joint Multi-Agent Trajectory Prediction - [[2303.00575](https://arxiv.org/abs/2303.00575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00575.md)].
- On the Audio-visual Synchronization for Lip-to-Speech Synthesis - [[2303.00502](https://arxiv.org/abs/2303.00502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00502.md)].
- Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision - [[2303.00462](https://arxiv.org/abs/2303.00462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00462.md)].
- Distilled Reverse Attention Network for Open-world Compositional Zero-Shot Learning - [[2303.00404](https://arxiv.org/abs/2303.00404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00404.md)].
- A Practical Upper Bound for the Worst-Case Attribution Deviations - [[2303.00340](https://arxiv.org/abs/2303.00340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00340.md)].
- Can ChatGPT Assess Human Personalities? A General Evaluation Framework - [[2303.01248](https://arxiv.org/abs/2303.01248)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01248.md)].
- Single Image Backdoor Inversion via Robust Smoothed Classifiers - [[2303.00215](https://arxiv.org/abs/2303.00215)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00215.md)].
- Zyxin is all you need: machine learning adherent cell mechanics - [[2303.00176](https://arxiv.org/abs/2303.00176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00176.md)].

### February 2023

- Towards Generalisable Video Moment Retrieval: Visual-Dynamic Injection to Image-Text Pre-Training - [[2303.00040](https://arxiv.org/abs/2303.00040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00040.md)].
- A Comprehensive Perturbative Formalism for Phase Mixing in Perturbed Disks. II. Phase Spirals in an Inhomogeneous Disk Galaxy with a Non-responsive Dark Matter Halo - [[2303.00034](https://arxiv.org/abs/2303.00034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.00034.md)].
- PA&DA: Jointly Sampling PAth and DAta for Consistent NAS - [[2302.14772](https://arxiv.org/abs/2302.14772)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14772.md)].
- Generic-to-Specific Distillation of Masked Autoencoders - [[2302.14771](https://arxiv.org/abs/2302.14771)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14771.md)].
- Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors - [[2302.14746](https://arxiv.org/abs/2302.14746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14746.md)].
- Which One Are You Referring To? Multimodal Object Identification in Situated Dialogue - [[2302.14680](https://arxiv.org/abs/2302.14680)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14680.md)].
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger - [[2302.14677](https://arxiv.org/abs/2302.14677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14677.md)].
- Attention-based Point Cloud Edge Sampling - [[2302.14673](https://arxiv.org/abs/2302.14673)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14673.md)].
- Interactive Segmentation as Gaussian Process Classification - [[2302.14578](https://arxiv.org/abs/2302.14578)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14578.md)].
- A Little Bit Attention Is All You Need for Person Re-Identification - [[2302.14574](https://arxiv.org/abs/2302.14574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14574.md)].
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images - [[2302.14434](https://arxiv.org/abs/2302.14434)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14434.md)].
- DREAM: Efficient Dataset Distillation by Representative Matching - [[2302.14416](https://arxiv.org/abs/2302.14416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14416.md)].
- GLM-Dialog: Noise-tolerant Pre-training for Knowledge-grounded Dialogue Generation - [[2302.14401](https://arxiv.org/abs/2302.14401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14401.md)].
- HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization - [[2302.14340](https://arxiv.org/abs/2302.14340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14340.md)].
- BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View Images - [[2302.14325](https://arxiv.org/abs/2302.14325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14325.md)].
- Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks - [[2302.14311](https://arxiv.org/abs/2302.14311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14311.md)].
- Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation - [[2302.14290](https://arxiv.org/abs/2302.14290)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14290.md)].
- Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning - [[2302.14115](https://arxiv.org/abs/2302.14115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14115.md)].
- Internet Explorer: Targeted Representation Learning on the Open Web - [[2302.14051](https://arxiv.org/abs/2302.14051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14051.md)].
- Language Is Not All You Need: Aligning Perception with Language Models - [[2302.14045](https://arxiv.org/abs/2302.14045)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.14045.md)].
- LLaMA: Open and Efficient Foundation Language Models - [[2302.13971](https://arxiv.org/abs/2302.13971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.13971.md)].
- Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence - [[2302.13562](https://arxiv.org/abs/2302.13562)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.13562.md)].
- Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models - [[2302.13439](https://arxiv.org/abs/2302.13439)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.13439.md)].
- Self-similarity Driven Scale-invariant Learning for Weakly Supervised Person Search - [[2302.12986](https://arxiv.org/abs/2302.12986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12986.md)].
- Control flow in active inference systems - [[2303.01514](https://arxiv.org/abs/2303.01514)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2303.01514.md)].
- From Occlusion to Insight: Object Search in Semantic Shelves using Large Language Models - [[2302.12915](https://arxiv.org/abs/2302.12915)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12915.md)].
- SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries - [[2302.12828](https://arxiv.org/abs/2302.12828)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12828.md)].
- Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data - [[2302.12822](https://arxiv.org/abs/2302.12822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12822.md)].
- VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion - [[2302.12251](https://arxiv.org/abs/2302.12251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12251.md)].
- Active Prompting with Chain-of-Thought for Large Language Models - [[2302.12246](https://arxiv.org/abs/2302.12246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12246.md)].
- Aligning Text-to-Image Models using Human Feedback - [[2302.12192](https://arxiv.org/abs/2302.12192)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.12192.md)].
- Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? - [[2302.11713](https://arxiv.org/abs/2302.11713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.11713.md)].
- Some Might Say All You Need Is Sum - [[2302.11603](https://arxiv.org/abs/2302.11603)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.11603.md)].
- Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition - [[2302.11566](https://arxiv.org/abs/2302.11566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.11566.md)].
- Distributionally Robust Recourse Action - [[2302.11211](https://arxiv.org/abs/2302.11211)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.11211.md)].
- Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities - [[2302.11154](https://arxiv.org/abs/2302.11154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.11154.md)].
- Hyena Hierarchy: Towards Larger Convolutional Language Models - [[2302.10866](https://arxiv.org/abs/2302.10866)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.10866.md)].
- Prompt Stealing Attacks Against Text-to-Image Generation Models - [[2302.09923](https://arxiv.org/abs/2302.09923)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.09923.md)].
- EuroCrops: All you need to know about the Largest Harmonised Open Crop Dataset Across the European Union - [[2302.10202](https://arxiv.org/abs/2302.10202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.10202.md)].
- ChatGPT for Robotics: Design Principles and Model Abilities - [[2306.17582](https://arxiv.org/abs/2306.17582)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2306.17582.md)].
- Weakly Supervised Label Learning Flows - [[2302.09649](https://arxiv.org/abs/2302.09649)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.09649.md)].
- Temporal Interpolation Is All You Need for Dynamic Neural Radiance Fields - [[2302.09311](https://arxiv.org/abs/2302.09311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.09311.md)].
- Recent Advances towards Safe, Responsible, and Moral Dialogue Systems: A Survey - [[2302.09270](https://arxiv.org/abs/2302.09270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.09270.md)].
- Complex QA and language models hybrid architectures, Survey - [[2302.09051](https://arxiv.org/abs/2302.09051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.09051.md)].
- Are Gaussian data all you need? Extents and limits of universality in high-dimensional generalized linear estimation - [[2302.08923](https://arxiv.org/abs/2302.08923)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08923.md)].
- A survey on online active learning - [[2302.08893](https://arxiv.org/abs/2302.08893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08893.md)].
- Conformers are All You Need for Visual Speech Recogntion - [[2302.10915](https://arxiv.org/abs/2302.10915)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.10915.md)].
- 3D-aware Conditional Image Synthesis - [[2302.08509](https://arxiv.org/abs/2302.08509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08509.md)].
- PersonNeRF: Personalized Reconstruction from Photo Collections - [[2302.08504](https://arxiv.org/abs/2302.08504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08504.md)].
- Tuning computer vision models with task rewards - [[2302.08242](https://arxiv.org/abs/2302.08242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08242.md)].
- Aligning Language Models with Preferences through f-divergence Minimization - [[2302.08215](https://arxiv.org/abs/2302.08215)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08215.md)].
- Parallax-Tolerant Unsupervised Deep Image Stitching - [[2302.08207](https://arxiv.org/abs/2302.08207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.08207.md)].
- À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting - [[2302.07994](https://arxiv.org/abs/2302.07994)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07994.md)].
- One-Shot Face Video Re-enactment using Hybrid Latent Spaces of StyleGAN2 - [[2302.07848](https://arxiv.org/abs/2302.07848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07848.md)].
- Augmented Language Models: a Survey - [[2302.07842](https://arxiv.org/abs/2302.07842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07842.md)].
- Video Probabilistic Diffusion Models in Projected Latent Space - [[2302.07685](https://arxiv.org/abs/2302.07685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07685.md)].
- LiveHand: Real-time and Photorealistic Neural Hand Rendering - [[2302.07672](https://arxiv.org/abs/2302.07672)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07672.md)].
- The Capacity for Moral Self-Correction in Large Language Models - [[2302.07459](https://arxiv.org/abs/2302.07459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07459.md)].
- PolyFormer: Referring Image Segmentation as Sequential Polygon Generation - [[2302.07387](https://arxiv.org/abs/2302.07387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07387.md)].
- Painting 3D Nature in 2D: View Synthesis of Natural Scenes from a Single Semantic Mask - [[2302.07224](https://arxiv.org/abs/2302.07224)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.07224.md)].
- VQ3D: Learning a 3D-Aware Generative Model on ImageNet - [[2302.06833](https://arxiv.org/abs/2302.06833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.06833.md)].
- The Stable Entropy Hypothesis and Entropy-Aware Decoding: An Analysis and Algorithm for Robust Natural Language Generation - [[2302.06784](https://arxiv.org/abs/2302.06784)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.06784.md)].
- Stitchable Neural Networks - [[2302.06586](https://arxiv.org/abs/2302.06586)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.06586.md)].
- A Reparameterized Discrete Diffusion Model for Text Generation - [[2302.05737](https://arxiv.org/abs/2302.05737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.05737.md)].
- Compositional Exemplars for In-context Learning - [[2302.05698](https://arxiv.org/abs/2302.05698)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.05698.md)].
- Adding Conditional Control to Text-to-Image Diffusion Models - [[2302.05543](https://arxiv.org/abs/2302.05543)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.05543.md)].
- The Wisdom of Hindsight Makes Language Models Better Instruction Followers - [[2302.05206](https://arxiv.org/abs/2302.05206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.05206.md)].
- RelightableHands: Efficient Neural Relighting of Articulated Hand Models - [[2302.04866](https://arxiv.org/abs/2302.04866)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04866.md)].
- Toolformer: Language Models Can Teach Themselves to Use Tools - [[2302.04761](https://arxiv.org/abs/2302.04761)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04761.md)].
- Q-Diffusion: Quantizing Diffusion Models - [[2302.04304](https://arxiv.org/abs/2302.04304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04304.md)].
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines - [[2302.04218](https://arxiv.org/abs/2302.04218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04218.md)].
- GPTScore: Evaluate as You Desire - [[2302.04166](https://arxiv.org/abs/2302.04166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04166.md)].
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity - [[2302.04023](https://arxiv.org/abs/2302.04023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.04023.md)].
- Is ChatGPT a General-Purpose Natural Language Processing Task Solver? - [[2302.06476](https://arxiv.org/abs/2302.06476)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.06476.md)].
- Zero-shot Generation of Coherent Storybook from Plain Text Story using Diffusion Models - [[2302.03900](https://arxiv.org/abs/2302.03900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.03900.md)].
- Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based Learning - [[2302.03848](https://arxiv.org/abs/2302.03848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.03848.md)].
- Long Horizon Temperature Scaling - [[2302.03686](https://arxiv.org/abs/2302.03686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.03686.md)].
- HumanMAC: Masked Motion Completion for Human Motion Prediction - [[2302.03665](https://arxiv.org/abs/2302.03665)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.03665.md)].
- Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness - [[2302.10893](https://arxiv.org/abs/2302.10893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.10893.md)].
- Structure and Content-Guided Video Synthesis with Diffusion Models - [[2302.03011](https://arxiv.org/abs/2302.03011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.03011.md)].
- LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval - [[2302.02908](https://arxiv.org/abs/2302.02908)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.02908.md)].
- Chain of Hindsight Aligns Language Models with Feedback - [[2302.02676](https://arxiv.org/abs/2302.02676)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.02676.md)].
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need - [[2302.02615](https://arxiv.org/abs/2302.02615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.02615.md)].
- Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image - [[2302.02410](https://arxiv.org/abs/2302.02410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.02410.md)].
- Aligning Robot and Human Representations - [[2302.01928](https://arxiv.org/abs/2302.01928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01928.md)].
- MOSE: A New Dataset for Video Object Segmentation in Complex Scenes - [[2302.01872](https://arxiv.org/abs/2302.01872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01872.md)].
- Robust Camera Pose Refinement for Multi-Resolution Hash Encoding - [[2302.01571](https://arxiv.org/abs/2302.01571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01571.md)].
- Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents - [[2302.01560](https://arxiv.org/abs/2302.01560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01560.md)].
- Inference in Non-stationary High-Dimensional VARs - [[2302.01434](https://arxiv.org/abs/2302.01434)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01434.md)].
- Accelerating Policy Gradient by Estimating Value Function from Prior Computation in Deep Reinforcement Learning - [[2302.01399](https://arxiv.org/abs/2302.01399)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01399.md)].
- Accelerating Large Language Model Decoding with Speculative Sampling - [[2302.01318](https://arxiv.org/abs/2302.01318)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01318.md)].
- Are Diffusion Models Vulnerable to Membership Inference Attacks? - [[2302.01316](https://arxiv.org/abs/2302.01316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01316.md)].
- Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors - [[2302.01162](https://arxiv.org/abs/2302.01162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.01162.md)].
- HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised Learning - [[2302.00988](https://arxiv.org/abs/2302.00988)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00988.md)].
- Multimodal Chain-of-Thought Reasoning in Language Models - [[2302.00923](https://arxiv.org/abs/2302.00923)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00923.md)].
- Collaborating with language models for embodied reasoning - [[2302.00763](https://arxiv.org/abs/2302.00763)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00763.md)].
- Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models - [[2302.00618](https://arxiv.org/abs/2302.00618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00618.md)].
- Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection - [[2302.00268](https://arxiv.org/abs/2302.00268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00268.md)].

### January 2023

- Learning Universal Policies via Text-Guided Video Generation - [[2302.00111](https://arxiv.org/abs/2302.00111)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00111.md)].
- Large Language Models Can Be Easily Distracted by Irrelevant Context - [[2302.00093](https://arxiv.org/abs/2302.00093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00093.md)].
- Debiasing Vision-Language Models via Biased Prompts - [[2302.00070](https://arxiv.org/abs/2302.00070)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.00070.md)].
- Grounding Language Models to Images for Multimodal Inputs and Outputs - [[2301.13823](https://arxiv.org/abs/2301.13823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13823.md)].
- Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning - [[2301.13808](https://arxiv.org/abs/2301.13808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13808.md)].
- The Flan Collection: Designing Data and Methods for Effective Instruction Tuning - [[2301.13688](https://arxiv.org/abs/2301.13688)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13688.md)].
- Anti-Exploration by Random Network Distillation - [[2301.13616](https://arxiv.org/abs/2301.13616)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13616.md)].
- Faithful Chain-of-Thought Reasoning - [[2301.13379](https://arxiv.org/abs/2301.13379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13379.md)].
- Neural Operator: Is data all you need to model the world? An insight into the impact of Physics Informed Machine Learning - [[2301.13331](https://arxiv.org/abs/2301.13331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.13331.md)].
- GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis - [[2301.12959](https://arxiv.org/abs/2301.12959)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12959.md)].
- DepGraph: Towards Any Structural Pruning - [[2301.12900](https://arxiv.org/abs/2301.12900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12900.md)].
- Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity - [[2301.12867](https://arxiv.org/abs/2301.12867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12867.md)].
- Specializing Smaller Language Models towards Multi-Step Reasoning - [[2301.12726](https://arxiv.org/abs/2301.12726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12726.md)].
- Adversarial Style Augmentation for Domain Generalization - [[2301.12643](https://arxiv.org/abs/2301.12643)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12643.md)].
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models - [[2301.12597](https://arxiv.org/abs/2301.12597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12597.md)].
- Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling - [[2301.12050](https://arxiv.org/abs/2301.12050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12050.md)].
- Understanding the Effectiveness of Very Large Language Models on Dialog Evaluation - [[2301.12004](https://arxiv.org/abs/2301.12004)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.12004.md)].
- Generalized Munchausen Reinforcement Learning using Tsallis KL Divergence - [[2301.11476](https://arxiv.org/abs/2301.11476)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.11476.md)].
- Cut and Learn for Unsupervised Object Detection and Instance Segmentation - [[2301.11320](https://arxiv.org/abs/2301.11320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.11320.md)].
- Principled Reinforcement Learning with Human Feedback from Pairwise or $K$-wise Comparisons - [[2301.11270](https://arxiv.org/abs/2301.11270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.11270.md)].
- On the Importance of Noise Scheduling for Diffusion Models - [[2301.10972](https://arxiv.org/abs/2301.10972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.10972.md)].
- Causal Reasoning of Entities and Events in Procedural Texts - [[2301.10896](https://arxiv.org/abs/2301.10896)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.10896.md)].
- ExaRanker: Explanation-Augmented Neural Ranker - [[2301.10521](https://arxiv.org/abs/2301.10521)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.10521.md)].
- HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling - [[2301.10460](https://arxiv.org/abs/2301.10460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.10460.md)].
- Using a Waffle Iron for Automotive Point Cloud Semantic Segmentation - [[2301.10100](https://arxiv.org/abs/2301.10100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.10100.md)].
- HexPlane: A Fast Representation for Dynamic Scenes - [[2301.09632](https://arxiv.org/abs/2301.09632)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.09632.md)].
- GP-NAS-ensemble: a model for NAS Performance Prediction - [[2301.09231](https://arxiv.org/abs/2301.09231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.09231.md)].
- BallGAN: 3D-aware Image Synthesis with a Spherical Background - [[2301.09091](https://arxiv.org/abs/2301.09091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.09091.md)].
- FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer - [[2301.08739](https://arxiv.org/abs/2301.08739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.08739.md)].
- Neural Architecture Search: Insights from 1000 Papers - [[2301.08727](https://arxiv.org/abs/2301.08727)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.08727.md)].
- NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis - [[2301.08556](https://arxiv.org/abs/2301.08556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.08556.md)].
- OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation - [[2301.07525](https://arxiv.org/abs/2301.07525)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.07525.md)].
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations - [[2301.07255](https://arxiv.org/abs/2301.07255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.07255.md)].
- A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View Synthesis and Implicit Scene Reconstruction - [[2301.06782](https://arxiv.org/abs/2301.06782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06782.md)].
- FemtoDet: An Object Detection Baseline for Energy Versus Performance Tradeoffs - [[2301.06719](https://arxiv.org/abs/2301.06719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06719.md)].
- Dissociating language and thought in large language models: a cognitive perspective - [[2301.06627](https://arxiv.org/abs/2301.06627)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06627.md)].
- DPE: Disentanglement of Pose and Expression for General Video Portrait Editing - [[2301.06281](https://arxiv.org/abs/2301.06281)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06281.md)].
- Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models - [[2301.06267](https://arxiv.org/abs/2301.06267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06267.md)].
- DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets - [[2301.06051](https://arxiv.org/abs/2301.06051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.06051.md)].
- TikTalk: A Video-Based Dialogue Dataset for Multi-Modal Chitchat in Real World - [[2301.05880](https://arxiv.org/abs/2301.05880)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.05880.md)].
- Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data - [[2301.05843](https://arxiv.org/abs/2301.05843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.05843.md)].
- Open-vocabulary Object Segmentation with Diffusion Models - [[2301.05221](https://arxiv.org/abs/2301.05221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.05221.md)].
- Learning to Memorize Entailment and Discourse Relations for Persona-Consistent Dialogues - [[2301.04871](https://arxiv.org/abs/2301.04871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04871.md)].
- LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis - [[2301.04604](https://arxiv.org/abs/2301.04604)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04604.md)].
- ChatGPT is not all you need. A State of the Art Review of large Generative AI models - [[2301.04655](https://arxiv.org/abs/2301.04655)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04655.md)].
- Pruning Compact ConvNets for Efficient Inference - [[2301.04502](https://arxiv.org/abs/2301.04502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04502.md)].
- Mastering Diverse Domains through World Models - [[2301.04104](https://arxiv.org/abs/2301.04104)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04104.md)].
- FrustumFormer: Adaptive Instance-aware Resampling for Multi-view 3D Detection - [[2301.04467](https://arxiv.org/abs/2301.04467)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04467.md)].
- Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments - [[2301.02667](https://arxiv.org/abs/2301.02667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02667.md)].
- STPrivacy: Spatio-Temporal Privacy-Preserving Action Recognition - [[2301.03046](https://arxiv.org/abs/2301.03046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.03046.md)].
- A Survey on Transformers in Reinforcement Learning - [[2301.03044](https://arxiv.org/abs/2301.03044)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.03044.md)].
- MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope - [[2302.05294](https://arxiv.org/abs/2302.05294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2302.05294.md)].
- Learning Support and Trivial Prototypes for Interpretable Image Classification - [[2301.04011](https://arxiv.org/abs/2301.04011)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.04011.md)].
- InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers - [[2301.02998](https://arxiv.org/abs/2301.02998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02998.md)].
- You Truly Understand What I Need: Intellectual and Friendly Dialogue Agents grounding Knowledge and Persona - [[2301.02401](https://arxiv.org/abs/2301.02401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02401.md)].
- Anchor3DLane: Learning to Regress 3D Anchors for Monocular 3D Lane Detection - [[2301.02371](https://arxiv.org/abs/2301.02371)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02371.md)].
- Object as Query: Lifting any 2D Object Detector to 3D Detection - [[2301.02364](https://arxiv.org/abs/2301.02364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02364.md)].
- Robust Dynamic Radiance Fields - [[2301.02239](https://arxiv.org/abs/2301.02239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02239.md)].
- Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers - [[2301.02111](https://arxiv.org/abs/2301.02111)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.02111.md)].
- Learning Trajectory-Word Alignments for Video-Language Tasks - [[2301.01953](https://arxiv.org/abs/2301.01953)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01953.md)].
- SPRING: Situated Conversation Agent Pretrained with Multimodal Questions from Incremental Layout Graph - [[2301.01949](https://arxiv.org/abs/2301.01949)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01949.md)].
- Event Camera Data Pre-training - [[2301.01928](https://arxiv.org/abs/2301.01928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01928.md)].
- GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods - [[2301.01893](https://arxiv.org/abs/2301.01893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01893.md)].
- InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval - [[2301.01820](https://arxiv.org/abs/2301.01820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01820.md)].
- Unsupervised Manifold Linearizing and Clustering - [[2301.01805](https://arxiv.org/abs/2301.01805)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01805.md)].
- PACO: Parts and Attributes of Common Objects - [[2301.01795](https://arxiv.org/abs/2301.01795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01795.md)].
- Self-Supervised Video Forensics by Audio-Visual Anomaly Detection - [[2301.01767](https://arxiv.org/abs/2301.01767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01767.md)].
- Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes - [[2301.01751](https://arxiv.org/abs/2301.01751)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01751.md)].
- RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning - [[2301.01661](https://arxiv.org/abs/2301.01661)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01661.md)].
- Ego-Only: Egocentric Action Detection without Exocentric Transferring - [[2301.01380](https://arxiv.org/abs/2301.01380)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01380.md)].
- Cross Modal Transformer: Towards Fast and Robust 3D Object Detection - [[2301.01283](https://arxiv.org/abs/2301.01283)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01283.md)].
- Rethinking Mobile Block for Efficient Attention-based Models - [[2301.01146](https://arxiv.org/abs/2301.01146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01146.md)].
- One-Time Universal Hashing Quantum Digital Signatures without Perfect Keys - [[2301.01132](https://arxiv.org/abs/2301.01132)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01132.md)].
- STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos - [[2301.00794](https://arxiv.org/abs/2301.00794)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00794.md)].
- CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection - [[2301.00785](https://arxiv.org/abs/2301.00785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00785.md)].
- Efficient On-device Training via Gradient Filtering - [[2301.00330](https://arxiv.org/abs/2301.00330)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00330.md)].

</div></details>

<details open><summary><strong>2022</strong></summary><div>

### December 2022

- Rethinking with Retrieval: Faithful Large Language Model Inference - [[2301.00303](https://arxiv.org/abs/2301.00303)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00303.md)].
- A Survey on In-context Learning - [[2301.00234](https://arxiv.org/abs/2301.00234)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00234.md)].
- Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples - [[2301.01217](https://arxiv.org/abs/2301.01217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01217.md)].
- Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence - [[2301.01218](https://arxiv.org/abs/2301.01218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.01218.md)].
- Imitator: Personalized Speech-driven 3D Facial Animation - [[2301.00023](https://arxiv.org/abs/2301.00023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00023.md)].
- NIRVANA: Neural Implicit Representations of Videos with Adaptive Networks and Autoregressive Patch-wise Modeling - [[2212.14593](https://arxiv.org/abs/2212.14593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14593.md)].
- Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning - [[2212.14532](https://arxiv.org/abs/2212.14532)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14532.md)].
- Effects of Data Geometry in Early Deep Learning - [[2301.00008](https://arxiv.org/abs/2301.00008)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2301.00008.md)].
- Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models - [[2212.14306](https://arxiv.org/abs/2212.14306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14306.md)].
- Discriminator-Cooperated Feature Map Distillation for GAN Compression - [[2212.14169](https://arxiv.org/abs/2212.14169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14169.md)].
- Cramming: Training a Language Model on a Single GPU in One Day - [[2212.14034](https://arxiv.org/abs/2212.14034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14034.md)].
- Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP - [[2212.14024](https://arxiv.org/abs/2212.14024)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14024.md)].
- Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models - [[2212.14704](https://arxiv.org/abs/2212.14704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.14704.md)].
- Noise-aware Learning from Web-crawled Image-Text Data for Image Captioning - [[2212.13563](https://arxiv.org/abs/2212.13563)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.13563.md)].
- Behavioral Cloning via Search in Video PreTraining Latent Space - [[2212.13326](https://arxiv.org/abs/2212.13326)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.13326.md)].
- Large Language Models Encode Clinical Knowledge - [[2212.13138](https://arxiv.org/abs/2212.13138)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.13138.md)].
- SMMix: Self-Motivated Image Mixing for Vision Transformers - [[2212.12977](https://arxiv.org/abs/2212.12977)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12977.md)].
- When Do Curricula Work in Federated Learning? - [[2212.12712](https://arxiv.org/abs/2212.12712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12712.md)].
- On Realization of Intelligent Decision-Making in the Real World: A Foundation Decision Model Perspective - [[2212.12669](https://arxiv.org/abs/2212.12669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12669.md)].
- Detecting Objects with Context-Likelihood Graphs and Graph Refinement - [[2212.12395](https://arxiv.org/abs/2212.12395)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12395.md)].
- A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference - [[2212.12393](https://arxiv.org/abs/2212.12393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12393.md)].
- Do DALL-E and Flamingo Understand Each Other? - [[2212.12249](https://arxiv.org/abs/2212.12249)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12249.md)].
- OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization - [[2212.12017](https://arxiv.org/abs/2212.12017)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12017.md)].
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography - [[2212.12324](https://arxiv.org/abs/2212.12324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.12324.md)].
- Markov Categories and Entropy - [[2212.11719](https://arxiv.org/abs/2212.11719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11719.md)].
- Text Generation with Diffusion Language Models: A Pre-training Approach with Continuous Paragraph Denoise - [[2212.11685](https://arxiv.org/abs/2212.11685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11685.md)].
- DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders - [[2212.11613](https://arxiv.org/abs/2212.11613)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11613.md)].
- Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation - [[2212.11565](https://arxiv.org/abs/2212.11565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11565.md)].
- Generalized Decoding for Pixel, Image, and Language - [[2212.11270](https://arxiv.org/abs/2212.11270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11270.md)].
- 3D Highlighter: Localizing Regions on 3D Shapes via Text Descriptions - [[2212.11263](https://arxiv.org/abs/2212.11263)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11263.md)].
- Hi-LASSIE: High-Fidelity Articulated Shape and Skeleton Discovery from Sparse Image Ensemble - [[2212.11042](https://arxiv.org/abs/2212.11042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.11042.md)].
- TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization - [[2212.10957](https://arxiv.org/abs/2212.10957)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10957.md)].
- Critic-Guided Decoding for Controlled Text Generation - [[2212.10938](https://arxiv.org/abs/2212.10938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10938.md)].
- In-Sensor & Neuromorphic Computing are all you need for Energy Efficient Computer Vision - [[2212.10881](https://arxiv.org/abs/2212.10881)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10881.md)].
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning - [[2212.10773](https://arxiv.org/abs/2212.10773)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10773.md)].
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions - [[2212.10720](https://arxiv.org/abs/2212.10720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10720.md)].
- Analyzing Semantic Faithfulness of Language Models via Input Intervention on Conversational Question Answering - [[2212.10696](https://arxiv.org/abs/2212.10696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10696.md)].
- Full-Body Articulated Human-Object Interaction - [[2212.10621](https://arxiv.org/abs/2212.10621)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10621.md)].
- Ontologically Faithful Generation of Non-Player Character Dialogues - [[2212.10618](https://arxiv.org/abs/2212.10618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10618.md)].
- Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers - [[2212.10559](https://arxiv.org/abs/2212.10559)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10559.md)].
- Unleashing the Power of Visual Prompting At the Pixel Level - [[2212.10556](https://arxiv.org/abs/2212.10556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10556.md)].
- A Survey of Deep Learning for Mathematical Reasoning - [[2212.10535](https://arxiv.org/abs/2212.10535)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10535.md)].
- Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions - [[2212.10509](https://arxiv.org/abs/2212.10509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10509.md)].
- Precise Zero-Shot Dense Retrieval without Relevance Labels - [[2212.10496](https://arxiv.org/abs/2212.10496)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10496.md)].
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language - [[2212.13894](https://arxiv.org/abs/2212.13894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.13894.md)].
- Controllable Text Generation with Language Constraints - [[2212.10466](https://arxiv.org/abs/2212.10466)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10466.md)].
- Towards Reasoning in Large Language Models: A Survey - [[2212.10403](https://arxiv.org/abs/2212.10403)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10403.md)].
- SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers - [[2212.10325](https://arxiv.org/abs/2212.10325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10325.md)].
- ReCode: Robustness Evaluation of Code Generation Models - [[2212.10264](https://arxiv.org/abs/2212.10264)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10264.md)].
- StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation - [[2212.10229](https://arxiv.org/abs/2212.10229)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10229.md)].
- Hoyer regularizer is all you need for ultra low-latency spiking neural networks - [[2212.10170](https://arxiv.org/abs/2212.10170)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10170.md)].
- Large Language Models Are Reasoning Teachers - [[2212.10071](https://arxiv.org/abs/2212.10071)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10071.md)].
- I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons - [[2212.10060](https://arxiv.org/abs/2212.10060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10060.md)].
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters - [[2212.10001](https://arxiv.org/abs/2212.10001)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.10001.md)].
- Panoptic Lifting for 3D Scene Understanding with Neural Fields - [[2212.09802](https://arxiv.org/abs/2212.09802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09802.md)].
- Denotationally Correct, Purely Functional, Efficient Reverse-mode Automatic Differentiation - [[2212.09801](https://arxiv.org/abs/2212.09801)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09801.md)].
- Scalable Diffusion Models with Transformers - [[2212.09748](https://arxiv.org/abs/2212.09748)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09748.md)].
- One Embedder, Any Task: Instruction-Finetuned Text Embeddings - [[2212.09741](https://arxiv.org/abs/2212.09741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09741.md)].
- Position-guided Text Prompt for Vision-Language Pre-training - [[2212.09737](https://arxiv.org/abs/2212.09737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09737.md)].
- Don't Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments - [[2212.09736](https://arxiv.org/abs/2212.09736)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09736.md)].
- A Probabilistic Framework for Lifelong Test-Time Adaptation - [[2212.09713](https://arxiv.org/abs/2212.09713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09713.md)].
- Reasoning with Language Model Prompting: A Survey - [[2212.09597](https://arxiv.org/abs/2212.09597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09597.md)].
- Large Language Models are Better Reasoners with Self-Verification - [[2212.09561](https://arxiv.org/abs/2212.09561)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09561.md)].
- MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation - [[2212.09478](https://arxiv.org/abs/2212.09478)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09478.md)].
- Latent Diffusion for Language Generation - [[2212.09462](https://arxiv.org/abs/2212.09462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09462.md)].
- Difformer: Empowering Diffusion Models on the Embedding Space for Text Generation - [[2212.09412](https://arxiv.org/abs/2212.09412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09412.md)].
- Out-of-domain GAN inversion via Invertibility Decomposition for Photo-Realistic Human Face Manipulation - [[2212.09262](https://arxiv.org/abs/2212.09262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09262.md)].
- Discovering Language Model Behaviors with Model-Written Evaluations - [[2212.09251](https://arxiv.org/abs/2212.09251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09251.md)].
- PAL: Persona-Augmented Emotional Support Conversation Generation - [[2212.09235](https://arxiv.org/abs/2212.09235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09235.md)].
- Emergent Analogical Reasoning in Large Language Models - [[2212.09196](https://arxiv.org/abs/2212.09196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09196.md)].
- Don't Forget Your ABC's: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems - [[2212.09180](https://arxiv.org/abs/2212.09180)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09180.md)].
- Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model - [[2212.09146](https://arxiv.org/abs/2212.09146)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09146.md)].
- Let's Negotiate! A Survey of Negotiation Dialogue Systems - [[2212.09072](https://arxiv.org/abs/2212.09072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09072.md)].
- Fine-Tuning Is All You Need to Mitigate Backdoor Attacks - [[2212.09067](https://arxiv.org/abs/2212.09067)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.09067.md)].
- The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning - [[2212.08686](https://arxiv.org/abs/2212.08686)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08686.md)].
- Attentive Mask CLIP - [[2212.08653](https://arxiv.org/abs/2212.08653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08653.md)].
- GFPose: Learning 3D Human Pose Prior with Gradient Fields - [[2212.08641](https://arxiv.org/abs/2212.08641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08641.md)].
- Learnable Commutative Monoids for Graph Neural Networks - [[2212.08541](https://arxiv.org/abs/2212.08541)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08541.md)].
- Teaching Small Language Models to Reason - [[2212.08410](https://arxiv.org/abs/2212.08410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08410.md)].
- RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers - [[2212.08254](https://arxiv.org/abs/2212.08254)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08254.md)].
- Injecting Domain Knowledge in Language Models for Task-Oriented Dialogue Systems - [[2212.08120](https://arxiv.org/abs/2212.08120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08120.md)].
- On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning - [[2212.08061](https://arxiv.org/abs/2212.08061)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08061.md)].
- Rethinking Vision Transformers for MobileNet Size and Speed - [[2212.08059](https://arxiv.org/abs/2212.08059)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08059.md)].
- Real-Time Neural Light Field on Mobile Devices - [[2212.08057](https://arxiv.org/abs/2212.08057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08057.md)].
- Constitutional AI: Harmlessness from AI Feedback - [[2212.08073](https://arxiv.org/abs/2212.08073)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.08073.md)].
- Enhanced Training of Query-Based Object Detection via Selective Query Recollection - [[2212.07593](https://arxiv.org/abs/2212.07593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07593.md)].
- ECON: Explicit Clothed humans Optimized via Normal integration - [[2212.07422](https://arxiv.org/abs/2212.07422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07422.md)].
- NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior - [[2212.07388](https://arxiv.org/abs/2212.07388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07388.md)].
- HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics - [[2212.07242](https://arxiv.org/abs/2212.07242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07242.md)].
- PD-Quant: Post-Training Quantization based on Prediction Difference Metric - [[2212.07048](https://arxiv.org/abs/2212.07048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07048.md)].
- Understanding Zero-Shot Adversarial Robustness for Large-Scale Models - [[2212.07016](https://arxiv.org/abs/2212.07016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.07016.md)].
- EgoLoc: Revisiting 3D Object Localization from Egocentric Videos with Visual Queries - [[2212.06969](https://arxiv.org/abs/2212.06969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06969.md)].
- Structured 3D Features for Reconstructing Controllable Avatars - [[2212.06820](https://arxiv.org/abs/2212.06820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06820.md)].
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders - [[2212.06785](https://arxiv.org/abs/2212.06785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06785.md)].
- Category Theory for Quantum Natural Language Processing - [[2212.06615](https://arxiv.org/abs/2212.06615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06615.md)].
- Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection - [[2212.06493](https://arxiv.org/abs/2212.06493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06493.md)].
- DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization - [[2212.06331](https://arxiv.org/abs/2212.06331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06331.md)].
- Doubly Right Object Recognition: A Why Prompt for Visual Rationales - [[2212.06202](https://arxiv.org/abs/2212.06202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.06202.md)].
- Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks - [[2212.05946](https://arxiv.org/abs/2212.05946)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05946.md)].
- MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations - [[2212.05698](https://arxiv.org/abs/2212.05698)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05698.md)].
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark - [[2212.05680](https://arxiv.org/abs/2212.05680)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05680.md)].
- Cross-Modal Learning with 3D Deformable Attention for Action Recognition - [[2212.05638](https://arxiv.org/abs/2212.05638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05638.md)].
- Source-free Depth for Object Pop-out - [[2212.05370](https://arxiv.org/abs/2212.05370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05370.md)].
- Position Embedding Needs an Independent Layer Normalization - [[2212.05262](https://arxiv.org/abs/2212.05262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05262.md)].
- NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction - [[2212.05231](https://arxiv.org/abs/2212.05231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05231.md)].
- A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others - [[2212.04825](https://arxiv.org/abs/2212.04825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04825.md)].
- Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis - [[2212.05032](https://arxiv.org/abs/2212.05032)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05032.md)].
- Genie: Show Me the Data for Quantization - [[2212.04780](https://arxiv.org/abs/2212.04780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04780.md)].
- SLAM for Visually Impaired Navigation: A Systematic Literature Review of the Current State of Research - [[2212.04745](https://arxiv.org/abs/2212.04745)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04745.md)].
- MIMO Is All You Need : A Strong Multi-In-Multi-Out Baseline for Video Prediction - [[2212.04655](https://arxiv.org/abs/2212.04655)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04655.md)].
- Structured Like a Language Model: Analysing AI as an Automated Subject - [[2212.05058](https://arxiv.org/abs/2212.05058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.05058.md)].
- Multi-Concept Customization of Text-to-Image Diffusion - [[2212.04488](https://arxiv.org/abs/2212.04488)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04488.md)].
- BEVBert: Multimodal Map Pre-training for Language-guided Navigation - [[2212.04385](https://arxiv.org/abs/2212.04385)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04385.md)].
- On the Robustness of Normalizing Flows for Inverse Problems in Imaging - [[2212.04319](https://arxiv.org/abs/2212.04319)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04319.md)].
- Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation - [[2212.04145](https://arxiv.org/abs/2212.04145)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04145.md)].
- Deep Incubation: Training Large Models by Divide-and-Conquering - [[2212.04129](https://arxiv.org/abs/2212.04129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04129.md)].
- Successive Prompting for Decomposing Complex Questions - [[2212.04092](https://arxiv.org/abs/2212.04092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04092.md)].
- LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models - [[2212.04088](https://arxiv.org/abs/2212.04088)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04088.md)].
- Teaching Matters: Investigating the Role of Supervision in Vision Transformers - [[2212.03862](https://arxiv.org/abs/2212.03862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.03862.md)].
- Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors - [[2212.04248](https://arxiv.org/abs/2212.04248)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04248.md)].
- FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation - [[2212.03741](https://arxiv.org/abs/2212.03741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.03741.md)].
- EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points - [[2212.04247](https://arxiv.org/abs/2212.04247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.04247.md)].
- Diffusion-SDF: Text-to-Shape via Voxelized Diffusion - [[2212.03293](https://arxiv.org/abs/2212.03293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.03293.md)].
- InternVideo: General Video Foundation Models via Generative and Discriminative Learning - [[2212.03191](https://arxiv.org/abs/2212.03191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.03191.md)].
- Adaptive Testing of Computer Vision Models - [[2212.02774](https://arxiv.org/abs/2212.02774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02774.md)].
- Learning Neural Parametric Head Models - [[2212.02761](https://arxiv.org/abs/2212.02761)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02761.md)].
- SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields - [[2212.02501](https://arxiv.org/abs/2212.02501)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02501.md)].
- Images Speak in Images: A Generalist Painter for In-Context Visual Learning - [[2212.02499](https://arxiv.org/abs/2212.02499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02499.md)].
- PEANUT: Predicting and Navigating to Unseen Targets - [[2212.02497](https://arxiv.org/abs/2212.02497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02497.md)].
- One-shot Implicit Animatable Avatars with Model-based Priors - [[2212.02469](https://arxiv.org/abs/2212.02469)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02469.md)].
- Momentum Decoding: Open-ended Text Generation As Graph Exploration - [[2212.02175](https://arxiv.org/abs/2212.02175)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.02175.md)].
- Multiscale Structure Guided Diffusion for Image Deblurring - [[2212.01789](https://arxiv.org/abs/2212.01789)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01789.md)].
- Fast Point Cloud Generation with Straight Flows - [[2212.01747](https://arxiv.org/abs/2212.01747)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01747.md)].
- StegaNeRF: Embedding Invisible Information within Neural Radiance Fields - [[2212.01602](https://arxiv.org/abs/2212.01602)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01602.md)].
- PGFed: Personalize Each Client's Global Objective for Federated Learning - [[2212.01448](https://arxiv.org/abs/2212.01448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01448.md)].
- Moving Beyond Downstream Task Accuracy for Information Retrieval Benchmarking - [[2212.01340](https://arxiv.org/abs/2212.01340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01340.md)].
- DiffRF: Rendering-Guided 3D Radiance Field Diffusion - [[2212.01206](https://arxiv.org/abs/2212.01206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01206.md)].
- RT-NeRF: Real-Time On-Device Neural Radiance Fields Towards Immersive AR/VR Rendering - [[2212.01120](https://arxiv.org/abs/2212.01120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01120.md)].
- Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training? - [[2212.01076](https://arxiv.org/abs/2212.01076)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.01076.md)].
- 3D Segmentation of Humans in Point Clouds with Synthetic Data - [[2212.00786](https://arxiv.org/abs/2212.00786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00786.md)].
- ResFormer: Scaling ViTs with Multi-Resolution Training - [[2212.00776](https://arxiv.org/abs/2212.00776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00776.md)].
- Exploiting Proximity-Aware Tasks for Embodied Social Navigation - [[2212.00767](https://arxiv.org/abs/2212.00767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00767.md)].
- Graph Convolutional Neural Networks as Parametric CoKleisli morphisms - [[2212.00542](https://arxiv.org/abs/2212.00542)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00542.md)].
- Safe Learning-Based Control of Elastic Joint Robots via Control Barrier Functions - [[2212.0478](https://arxiv.org/abs/2212.0478)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.0478.md)].
- Language Model Pre-training on True Negatives - [[2212.00460](https://arxiv.org/abs/2212.00460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00460.md)].
- Parametric Information Maximization for Generalized Category Discovery - [[2212.00334](https://arxiv.org/abs/2212.00334)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00334.md)].
- All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning - [[2212.00325](https://arxiv.org/abs/2212.00325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00325.md)].
- Distilling Reasoning Capabilities into Smaller Language Models - [[2212.00193](https://arxiv.org/abs/2212.00193)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.00193.md)].

### November 2022

- Feature Selection with Distance Correlation - [[2212.0046](https://arxiv.org/abs/2212.0046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2212.0046.md)].
- CLIPascene: Scene Sketching with Different Types and Levels of Abstraction - [[2211.17256](https://arxiv.org/abs/2211.17256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.17256.md)].
- Fast Inference from Transformers via Speculative Decoding - [[2211.17192](https://arxiv.org/abs/2211.17192)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.17192.md)].
- Spatio-Temporal Crop Aggregation for Video Representation Learning - [[2211.17042](https://arxiv.org/abs/2211.17042)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.17042.md)].
- NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views - [[2211.16431](https://arxiv.org/abs/2211.16431)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16431.md)].
- PLA: Language-Driven Open-Vocabulary 3D Scene Understanding - [[2211.16312](https://arxiv.org/abs/2211.16312)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16312.md)].
- Out-Of-Distribution Detection Is Not All You Need - [[2211.16158](https://arxiv.org/abs/2211.16158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16158.md)].
- NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers - [[2211.16056](https://arxiv.org/abs/2211.16056)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16056.md)].
- SparsePose: Sparse-View Camera Pose Regression and Refinement - [[2211.16991](https://arxiv.org/abs/2211.16991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16991.md)].
- Decentralized Learning with Multi-Headed Distillation - [[2211.15774](https://arxiv.org/abs/2211.15774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15774.md)].
- Post-training Quantization on Diffusion Models - [[2211.15736](https://arxiv.org/abs/2211.15736)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15736.md)].
- Is Conditional Generative Modeling all you need for Decision-Making? - [[2211.15657](https://arxiv.org/abs/2211.15657)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15657.md)].
- SuS-X: Training-Free Name-Only Transfer of Vision-Language Models - [[2211.16198](https://arxiv.org/abs/2211.16198)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16198.md)].
- In-Hand 3D Object Scanning from an RGB Sequence - [[2211.16193](https://arxiv.org/abs/2211.16193)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.16193.md)].
- DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models - [[2211.15029](https://arxiv.org/abs/2211.15029)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15029.md)].
- Dense Text Retrieval based on Pretrained Language Models: A Survey - [[2211.14876](https://arxiv.org/abs/2211.14876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14876.md)].
- 3DPPE: 3D Point Positional Encoding for Multi-Camera 3D Object Detection Transformers - [[2211.14710](https://arxiv.org/abs/2211.14710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14710.md)].
- Towards Improved Input Masking for Convolutional Neural Networks - [[2211.14646](https://arxiv.org/abs/2211.14646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14646.md)].
- Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation - [[2211.14512](https://arxiv.org/abs/2211.14512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14512.md)].
- RUST: Latent Neural Scene Representations from Unposed Imagery - [[2211.14306](https://arxiv.org/abs/2211.14306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14306.md)].
- BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction - [[2211.14304](https://arxiv.org/abs/2211.14304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14304.md)].
- RbA: Segmenting Unknown Regions Rejected by All - [[2211.14293](https://arxiv.org/abs/2211.14293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14293.md)].
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies - [[2211.14173](https://arxiv.org/abs/2211.14173)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14173.md)].
- ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision - [[2211.14086](https://arxiv.org/abs/2211.14086)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14086.md)].
- SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow - [[2211.14020](https://arxiv.org/abs/2211.14020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.14020.md)].
- MPCViT: Searching for Accurate and Efficient MPC-Friendly Vision Transformer with Heterogeneous Attention - [[2211.13955](https://arxiv.org/abs/2211.13955)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13955.md)].
- FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction - [[2211.13874](https://arxiv.org/abs/2211.13874)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13874.md)].
- SAGA: Spectral Adversarial Geometric Attack on 3D Meshes - [[2211.13775](https://arxiv.org/abs/2211.13775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13775.md)].
- Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions - [[2211.13757](https://arxiv.org/abs/2211.13757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13757.md)].
- SfM-TTR: Using Structure from Motion for Test-Time Refinement of Single-View Depth Networks - [[2211.13551](https://arxiv.org/abs/2211.13551)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13551.md)].
- Video Test-Time Adaptation for Action Recognition - [[2211.15393](https://arxiv.org/abs/2211.15393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15393.md)].
- TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering - [[2211.13515](https://arxiv.org/abs/2211.13515)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13515.md)].
- HouseDiffusion: Vector Floorplan Generation via a Diffusion Model with Discrete and Continuous Denoising - [[2211.13287](https://arxiv.org/abs/2211.13287)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13287.md)].
- ClimateNeRF: Extreme Weather Synthesis in Neural Radiance Field - [[2211.13226](https://arxiv.org/abs/2211.13226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13226.md)].
- Lite-Mono: A Lightweight CNN and Transformer Architecture for Self-Supervised Monocular Depth Estimation - [[2211.13202](https://arxiv.org/abs/2211.13202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13202.md)].
- Robust Mean Teacher for Continual and Gradual Test-Time Adaptation - [[2211.13081](https://arxiv.org/abs/2211.13081)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.13081.md)].
- OReX: Object Reconstruction from Planar Cross-sections Using Neural Fields - [[2211.12886](https://arxiv.org/abs/2211.12886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12886.md)].
- ActMAD: Activation Matching to Align Distributions for Test-Time-Training - [[2211.12870](https://arxiv.org/abs/2211.12870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12870.md)].
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields - [[2211.12853](https://arxiv.org/abs/2211.12853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12853.md)].
- Texts as Images in Prompt Tuning for Multi-Label Image Recognition - [[2211.12739](https://arxiv.org/abs/2211.12739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12739.md)].
- Integrally Pre-Trained Transformer Pyramid Networks - [[2211.12735](https://arxiv.org/abs/2211.12735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12735.md)].
- PNI : Industrial Anomaly Detection using Position and Neighborhood Information - [[2211.12634](https://arxiv.org/abs/2211.12634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12634.md)].
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks - [[2211.12588](https://arxiv.org/abs/2211.12588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12588.md)].
- CASSPR: Cross Attention Single Scan Place Recognition - [[2211.12542](https://arxiv.org/abs/2211.12542)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12542.md)].
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations - [[2211.12486](https://arxiv.org/abs/2211.12486)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12486.md)].
- OCTET: Object-aware Counterfactual Explanations - [[2211.12380](https://arxiv.org/abs/2211.12380)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12380.md)].
- DETRs with Collaborative Hybrid Assignments Training - [[2211.12860](https://arxiv.org/abs/2211.12860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12860.md)].
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild - [[2211.12352](https://arxiv.org/abs/2211.12352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12352.md)].
- DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle CT Reconstruction - [[2211.12340](https://arxiv.org/abs/2211.12340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12340.md)].
- SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields - [[2211.12254](https://arxiv.org/abs/2211.12254)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12254.md)].
- SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation - [[2211.12194](https://arxiv.org/abs/2211.12194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12194.md)].
- DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models - [[2211.12131](https://arxiv.org/abs/2211.12131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12131.md)].
- Explaining Image Classifiers with Multiscale Directional Image Representation - [[2211.12857](https://arxiv.org/abs/2211.12857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12857.md)].
- Level-S$^2$fM: Structure from Motion on Neural Level Set of Implicit Surfaces - [[2211.12018](https://arxiv.org/abs/2211.12018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.12018.md)].
- One Eye is All You Need: Lightweight Ensembles for Gaze Estimation with Single Encoders - [[2211.11936](https://arxiv.org/abs/2211.11936)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11936.md)].
- Multi-Directional Subspace Editing in Style-Space - [[2211.11825](https://arxiv.org/abs/2211.11825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11825.md)].
- Teaching Structured Vision&Language Concepts to Vision&Language Models - [[2211.11733](https://arxiv.org/abs/2211.11733)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11733.md)].
- Multitask Vision-Language Prompt Tuning - [[2211.11720](https://arxiv.org/abs/2211.11720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11720.md)].
- PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning - [[2211.11682](https://arxiv.org/abs/2211.11682)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11682.md)].
- SMAUG: Sparse Masked Autoencoder for Efficient Video-Language Pre-training - [[2211.11446](https://arxiv.org/abs/2211.11446)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11446.md)].
- MATE: Masked Autoencoders are Online 3D Test-Time Learners - [[2211.11432](https://arxiv.org/abs/2211.11432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11432.md)].
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes - [[2211.11296](https://arxiv.org/abs/2211.11296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11296.md)].
- Investigating Prompt Engineering in Diffusion Models - [[2211.15462](https://arxiv.org/abs/2211.15462)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15462.md)].
- NeuMap: Neural Coordinate Mapping by Auto-Transdecoder for Camera Localization - [[2211.11177](https://arxiv.org/abs/2211.11177)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11177.md)].
- Vision Transformer with Super Token Sampling - [[2211.11167](https://arxiv.org/abs/2211.11167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11167.md)].
- Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification - [[2211.11158](https://arxiv.org/abs/2211.11158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11158.md)].
- You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model - [[2211.11152](https://arxiv.org/abs/2211.11152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11152.md)].
- DynIBaR: Neural Dynamic Image-Based Rendering - [[2211.11082](https://arxiv.org/abs/2211.11082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11082.md)].
- The Stack: 3 TB of permissively licensed source code - [[2211.15533](https://arxiv.org/abs/2211.15533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.15533.md)].
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation - [[2211.11004](https://arxiv.org/abs/2211.11004)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11004.md)].
- DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting - [[2211.10772](https://arxiv.org/abs/2211.10772)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10772.md)].
- Parallel Diffusion Models of Operator and Image for Blind Inverse Problems - [[2211.10656](https://arxiv.org/abs/2211.10656)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10656.md)].
- Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models - [[2211.10655](https://arxiv.org/abs/2211.10655)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10655.md)].
- LidarGait: Benchmarking 3D Gait Recognition with Point Clouds - [[2211.10598](https://arxiv.org/abs/2211.10598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10598.md)].
- MatrixVT: Efficient Multi-Camera to BEV Transformation for 3D Perception - [[2211.10593](https://arxiv.org/abs/2211.10593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10593.md)].
- Tired of Over-smoothing? Stress Graph Drawing Is All You Need! - [[2211.10579](https://arxiv.org/abs/2211.10579)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10579.md)].
- BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision - [[2211.10439](https://arxiv.org/abs/2211.10439)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10439.md)].
- PAL: Program-aided Language Models - [[2211.10435](https://arxiv.org/abs/2211.10435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10435.md)].
- Visual Programming: Compositional visual reasoning without training - [[2211.11559](https://arxiv.org/abs/2211.11559)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.11559.md)].
- SPACE: Speech-driven Portrait Animation with Controllable Expression - [[2211.09809](https://arxiv.org/abs/2211.09809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09809.md)].
- Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and Vision-Language Tasks - [[2211.09808](https://arxiv.org/abs/2211.09808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09808.md)].
- InstructPix2Pix: Learning to Follow Image Editing Instructions - [[2211.09800](https://arxiv.org/abs/2211.09800)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09800.md)].
- I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision - [[2211.09778](https://arxiv.org/abs/2211.09778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09778.md)].
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones - [[2211.09703](https://arxiv.org/abs/2211.09703)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09703.md)].
- CRAFT: Concept Recursive Activation FacTorization for Explainability - [[2211.10154](https://arxiv.org/abs/2211.10154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10154.md)].
- UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer - [[2211.09552](https://arxiv.org/abs/2211.09552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09552.md)].
- DETRDistill: A Universal Knowledge Distillation Framework for DETR-families - [[2211.10156](https://arxiv.org/abs/2211.10156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10156.md)].
- UMFuse: Unified Multi View Fusion for Human Editing applications - [[2211.10157](https://arxiv.org/abs/2211.10157)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.10157.md)].
- Task-aware Retrieval with Instructions - [[2211.09260](https://arxiv.org/abs/2211.09260)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09260.md)].
- AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders - [[2211.09120](https://arxiv.org/abs/2211.09120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09120.md)].
- Token Turing Machines - [[2211.09119](https://arxiv.org/abs/2211.09119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09119.md)].
- MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis - [[2211.09117](https://arxiv.org/abs/2211.09117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09117.md)].
- Holistic Evaluation of Language Models - [[2211.09110](https://arxiv.org/abs/2211.09110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09110.md)].
- Galactica: A Large Language Model for Science - [[2211.09085](https://arxiv.org/abs/2211.09085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09085.md)].
- Stare at What You See: Masked Image Modeling without Reconstruction - [[2211.08887](https://arxiv.org/abs/2211.08887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.08887.md)].
- Consistent Direct Time-of-Flight Video Depth Super-Resolution - [[2211.08658](https://arxiv.org/abs/2211.08658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.08658.md)].
- R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based Trajectory Refinement - [[2211.08609](https://arxiv.org/abs/2211.08609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.08609.md)].
- PromptCap: Prompt-Guided Task-Aware Image Captioning - [[2211.09699](https://arxiv.org/abs/2211.09699)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09699.md)].
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model - [[2211.08332](https://arxiv.org/abs/2211.08332)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.08332.md)].
- Is Style All You Need? Dependencies Between Emotion and GST-based Speaker Recognition - [[2211.08213](https://arxiv.org/abs/2211.08213)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.08213.md)].
- Teaching Algorithmic Reasoning via In-context Learning - [[2211.09066](https://arxiv.org/abs/2211.09066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.09066.md)].
- EVA: Exploring the Limits of Masked Visual Representation Learning at Scale - [[2211.07636](https://arxiv.org/abs/2211.07636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07636.md)].
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding - [[2211.07634](https://arxiv.org/abs/2211.07634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07634.md)].
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures - [[2211.07600](https://arxiv.org/abs/2211.07600)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07600.md)].
- Imagination is All You Need! Curved Contrastive Learning for Abstract Sequence Modeling Utilized on Long Short-Term Dialogue Planning - [[2211.07591](https://arxiv.org/abs/2211.07591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07591.md)].
- PKCAM: Previous Knowledge Channel Attention Module - [[2211.07521](https://arxiv.org/abs/2211.07521)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07521.md)].
- Fcaformer: Forward Cross Attention in Hybrid Vision Transformer - [[2211.07198](https://arxiv.org/abs/2211.07198)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07198.md)].
- ParCNetV2: Oversized Kernel with Enhanced Attention - [[2211.07157](https://arxiv.org/abs/2211.07157)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07157.md)].
- BiViT: Extremely Compressed Binary Vision Transformer - [[2211.07091](https://arxiv.org/abs/2211.07091)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.07091.md)].
- OverFlow: Putting flows on top of neural transducers for better TTS - [[2211.06892](https://arxiv.org/abs/2211.06892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06892.md)].
- SCOTCH and SODA: A Transformer Video Shadow Detection Framework - [[2211.06885](https://arxiv.org/abs/2211.06885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06885.md)].
- What would Harry say? Building Dialogue Agents for Characters in a Story - [[2211.06869](https://arxiv.org/abs/2211.06869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06869.md)].
- MARLIN: Masked Autoencoder for facial video Representation LearnINg - [[2211.06627](https://arxiv.org/abs/2211.06627)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06627.md)].
- OpenGait: Revisiting Gait Recognition Toward Better Practicality - [[2211.06597](https://arxiv.org/abs/2211.06597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06597.md)].
- Masked Contrastive Representation Learning - [[2211.06012](https://arxiv.org/abs/2211.06012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06012.md)].
- High-Quality Entity Segmentation - [[2211.05776](https://arxiv.org/abs/2211.05776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.05776.md)].
- OneFormer: One Transformer to Rule Universal Image Segmentation - [[2211.06220](https://arxiv.org/abs/2211.06220)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06220.md)].
- MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation - [[2211.05719](https://arxiv.org/abs/2211.05719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.05719.md)].
- Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning - [[2211.06324](https://arxiv.org/abs/2211.06324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.06324.md)].
- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model - [[2211.05100](https://arxiv.org/abs/2211.05100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.05100.md)].
- NoiSER: Noise is All You Need for Low-Light Image Enhancement - [[2211.04700](https://arxiv.org/abs/2211.04700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.04700.md)].
- Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement - [[2211.04346](https://arxiv.org/abs/2211.04346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.04346.md)].
- Self-conditioned Embedding Diffusion for Text Generation - [[2211.04236](https://arxiv.org/abs/2211.04236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.04236.md)].
- $BT^2$: Backward-compatible Training with Basis Transformation - [[2211.03989](https://arxiv.org/abs/2211.03989)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.03989.md)].
- Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis - [[2211.02408](https://arxiv.org/abs/2211.02408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.02408.md)].
- Large Language Models Are Human-Level Prompt Engineers - [[2211.01910](https://arxiv.org/abs/2211.01910)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.01910.md)].
- Crosslingual Generalization through Multitask Finetuning - [[2211.01786](https://arxiv.org/abs/2211.01786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.01786.md)].
- PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales - [[2211.01562](https://arxiv.org/abs/2211.01562)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.01562.md)].
- Flashlights: An Off-Caustic Lensed Star at Redshift $z$ = 1.26 in Abell 370 - [[2211.1402](https://arxiv.org/abs/2211.1402)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.1402.md)].
- eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers - [[2211.01324](https://arxiv.org/abs/2211.01324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.01324.md)].
- Late lumping of transformation-based feedback laws for boundary control systems - [[2211.1238](https://arxiv.org/abs/2211.1238)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.1238.md)].
- Bipartite Mixed Membership Distribution-Free Model. A novel model for community detection in overlapping bipartite weighted networks - [[2211.0912](https://arxiv.org/abs/2211.0912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.0912.md)].
- The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning - [[2211.00453](https://arxiv.org/abs/2211.00453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.00453.md)].
- CARE: Causality Reasoning for Empathetic Responses by Conditional Graph Generation - [[2211.00255](https://arxiv.org/abs/2211.00255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.00255.md)].
- Evaluating Impact of Social Media Posts by Executives on Stock Prices - [[2211.1287](https://arxiv.org/abs/2211.1287)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2211.1287.md)].

### October 2022

- SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control - [[2210.17432](https://arxiv.org/abs/2210.17432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.17432.md)].
- GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers - [[2210.17323](https://arxiv.org/abs/2210.17323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.17323.md)].
- DiffusER: Discrete Diffusion via Edit-based Reconstruction - [[2210.16886](https://arxiv.org/abs/2210.16886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.16886.md)].
- Saliency Can Be All You Need In Contrastive Self-Supervised Learning - [[2210.16776](https://arxiv.org/abs/2210.16776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.16776.md)].
- FreeVC: Towards High-Quality Text-Free One-Shot Voice Conversion - [[2210.15418](https://arxiv.org/abs/2210.15418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.15418.md)].
- Contrastive Decoding: Open-ended Text Generation as Optimization - [[2210.15097](https://arxiv.org/abs/2210.15097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.15097.md)].
- Streaming Radiance Fields for 3D Video Synthesis - [[2210.14831](https://arxiv.org/abs/2210.14831)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.14831.md)].
- Contrastive Search Is What You Need For Neural Text Generation - [[2210.14140](https://arxiv.org/abs/2210.14140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.14140.md)].
- FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation - [[2210.13832](https://arxiv.org/abs/2210.13832)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.13832.md)].
- Dichotomy of Control: Separating What You Can Control from What You Cannot - [[2210.13435](https://arxiv.org/abs/2210.13435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.13435.md)].
- Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task - [[2210.13382](https://arxiv.org/abs/2210.13382)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.13382.md)].
- 10 hours data is all you need - [[2210.13067](https://arxiv.org/abs/2210.13067)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.13067.md)].
- DANLI: Deliberative Agent for Following Natural Language Instructions - [[2210.12485](https://arxiv.org/abs/2210.12485)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.12485.md)].
- Towards Efficient Dialogue Pre-training with Transferable and Interpretable Latent Structure - [[2210.12461](https://arxiv.org/abs/2210.12461)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.12461.md)].
- Collaborative Reasoning on Multi-Modal Semantic Graphs for Video-Grounded Dialogue Generation - [[2210.12460](https://arxiv.org/abs/2210.12460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.12460.md)].
- There Is No Standard Answer: Knowledge-Grounded Dialogue Generation with Adversarial Activated Multi-Reference Learning - [[2210.12459](https://arxiv.org/abs/2210.12459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.12459.md)].
- WikiWhy: Answering and Explaining Cause-and-Effect Questions - [[2210.12152](https://arxiv.org/abs/2210.12152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.12152.md)].
- Large Language Models Can Self-Improve - [[2210.11610](https://arxiv.org/abs/2210.11610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.11610.md)].
- Scaling Instruction-Finetuned Language Models - [[2210.11416](https://arxiv.org/abs/2210.11416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.11416.md)].
- On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning - [[2210.10763](https://arxiv.org/abs/2210.10763)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.10763.md)].
- Scaling Laws for Reward Model Overoptimization - [[2210.10760](https://arxiv.org/abs/2210.10760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.10760.md)].
- How to Boost Face Recognition with StyleGAN? - [[2210.10090](https://arxiv.org/abs/2210.10090)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.10090.md)].
- Bag All You Need: Learning a Generalizable Bagging Strategy for Heterogeneous Objects - [[2210.09997](https://arxiv.org/abs/2210.09997)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09997.md)].
- Perceptual Grouping in Contrastive Vision-Language Models - [[2210.09996](https://arxiv.org/abs/2210.09996)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09996.md)].
- MotionDeltaCNN: Sparse CNN Inference of Frame Differences in Moving Camera Videos - [[2210.09887](https://arxiv.org/abs/2210.09887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09887.md)].
- DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation - [[2210.09551](https://arxiv.org/abs/2210.09551)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09551.md)].
- Imagic: Text-Based Real Image Editing with Diffusion Models - [[2210.09276](https://arxiv.org/abs/2210.09276)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09276.md)].
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them - [[2210.09261](https://arxiv.org/abs/2210.09261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.09261.md)].
- DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models - [[2210.08933](https://arxiv.org/abs/2210.08933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.08933.md)].
- Keep Me Updated! Memory Management in Long-term Conversations - [[2210.08750](https://arxiv.org/abs/2210.08750)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.08750.md)].
- Scratching Visual Transformer's Back with Uniform Attention - [[2210.08457](https://arxiv.org/abs/2210.08457)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.08457.md)].
- Data-Efficient Augmentation for Training Neural Networks - [[2210.08363](https://arxiv.org/abs/2210.08363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.08363.md)].
- Is synthetic data from generative models ready for image recognition? - [[2210.07574](https://arxiv.org/abs/2210.07574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07574.md)].
- DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation - [[2210.07558](https://arxiv.org/abs/2210.07558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07558.md)].
- Visual Reinforcement Learning with Self-Supervised 3D Representations - [[2210.07241](https://arxiv.org/abs/2210.07241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07241.md)].
- Unified Vision and Language Prompt Learning - [[2210.07225](https://arxiv.org/abs/2210.07225)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07225.md)].
- Visual Classification via Description from Large Language Models - [[2210.07183](https://arxiv.org/abs/2210.07183)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07183.md)].
- Language Models of Code are Few-Shot Commonsense Learners - [[2210.07128](https://arxiv.org/abs/2210.07128)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.07128.md)].
- Retrospectives on the Embodied AI Workshop - [[2210.06849](https://arxiv.org/abs/2210.06849)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06849.md)].
- H2RBox: Horizontal Box Annotation is All You Need for Oriented Object Detection - [[2210.06742](https://arxiv.org/abs/2210.06742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06742.md)].
- Explanations from Large Language Models Make Small Reasoners Better - [[2210.06726](https://arxiv.org/abs/2210.06726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06726.md)].
- Large Language Models are few(1)-shot Table Reasoners - [[2210.06710](https://arxiv.org/abs/2210.06710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06710.md)].
- RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses - [[2210.10634](https://arxiv.org/abs/2210.10634)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.10634.md)].
- Token-Label Alignment for Vision Transformers - [[2210.06455](https://arxiv.org/abs/2210.06455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06455.md)].
- Visual Prompting for Adversarial Robustness - [[2210.06284](https://arxiv.org/abs/2210.06284)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06284.md)].
- Masked Motion Encoding for Self-Supervised Video Representation Learning - [[2210.06096](https://arxiv.org/abs/2210.06096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06096.md)].
- BEV-LaneDet: a Simple and Effective 3D Lane Detection Baseline - [[2210.06006](https://arxiv.org/abs/2210.06006)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.06006.md)].
- OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions - [[2210.05557](https://arxiv.org/abs/2210.05557)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.05557.md)].
- Mind's Eye: Grounded Language Model Reasoning through Simulation - [[2210.05359](https://arxiv.org/abs/2210.05359)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.05359.md)].
- Multi-Object Navigation with dynamically learned neural implicit representations - [[2210.05129](https://arxiv.org/abs/2210.05129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.05129.md)].
- Certified Training: Small Boxes are All You Need - [[2210.04871](https://arxiv.org/abs/2210.04871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.04871.md)].
- Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning - [[2210.04242](https://arxiv.org/abs/2210.04242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.04242.md)].
- Uncertainty-Aware Unsupervised Image Deblurring with Deep Residual Prior - [[2210.05361](https://arxiv.org/abs/2210.05361)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.05361.md)].
- Controllable Dialogue Simulation with In-Context Learning - [[2210.04185](https://arxiv.org/abs/2210.04185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.04185.md)].
- Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP - [[2210.04150](https://arxiv.org/abs/2210.04150)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.04150.md)].
- Don't Lose Yourself! Empathetic Response Generation via Explicit Self-Other Awareness - [[2210.03884](https://arxiv.org/abs/2210.03884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03884.md)].
- Is margin all you need? An extensive empirical study of active learning on tabular data - [[2210.03822](https://arxiv.org/abs/2210.03822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03822.md)].
- Large Language Models can Implement Policy Iteration - [[2210.03821](https://arxiv.org/abs/2210.03821)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03821.md)].
- GraspCaps: Capsule Networks Are All You Need for Grasping Familiar Objects - [[2210.03628](https://arxiv.org/abs/2210.03628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03628.md)].
- Automatic Chain of Thought Prompting in Large Language Models - [[2210.03493](https://arxiv.org/abs/2210.03493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03493.md)].
- Measuring and Narrowing the Compositionality Gap in Language Models - [[2210.03350](https://arxiv.org/abs/2210.03350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03350.md)].
- A ResNet is All You Need? Modeling A Strong Baseline for Detecting Referable Diabetic Retinopathy in Fundus Images - [[2210.03180](https://arxiv.org/abs/2210.03180)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03180.md)].
- FAST: Improving Controllability for Text Generation with Feedback Aware Self-Training - [[2210.03167](https://arxiv.org/abs/2210.03167)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03167.md)].
- MaPLe: Multi-modal Prompt Learning - [[2210.03117](https://arxiv.org/abs/2210.03117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03117.md)].
- CLIP model is an Efficient Continual Learner - [[2210.03114](https://arxiv.org/abs/2210.03114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03114.md)].
- VIMA: General Robot Manipulation with Multimodal Prompts - [[2210.03094](https://arxiv.org/abs/2210.03094)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03094.md)].
- Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering - [[2210.03078](https://arxiv.org/abs/2210.03078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03078.md)].
- Language Models are Multilingual Chain-of-Thought Reasoners - [[2210.03057](https://arxiv.org/abs/2210.03057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03057.md)].
- ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs - [[2210.03052](https://arxiv.org/abs/2210.03052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03052.md)].
- A Distributional Lens for Multi-Aspect Controllable Text Generation - [[2210.02889](https://arxiv.org/abs/2210.02889)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02889.md)].
- ReAct: Synergizing Reasoning and Acting in Language Models - [[2210.03629](https://arxiv.org/abs/2210.03629)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03629.md)].
- Depth Is All You Need for Monocular 3D Detection - [[2210.02493](https://arxiv.org/abs/2210.02493)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02493.md)].
- GLM-130B: An Open Bilingual Pre-trained Model - [[2210.02414](https://arxiv.org/abs/2210.02414)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02414.md)].
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks - [[2210.02406](https://arxiv.org/abs/2210.02406)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02406.md)].
- Imagen Video: High Definition Video Generation with Diffusion Models - [[2210.02303](https://arxiv.org/abs/2210.02303)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02303.md)].
- CorefDiffs: Co-referential and Differential Knowledge Flow in Document Grounded Conversations - [[2210.02223](https://arxiv.org/abs/2210.02223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.02223.md)].
- Collecting The Puzzle Pieces: Disentangled Self-Driven Human Pose Transfer by Permuting Textures - [[2210.01887](https://arxiv.org/abs/2210.01887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01887.md)].
- Group Personalized Federated Learning - [[2210.01863](https://arxiv.org/abs/2210.01863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01863.md)].
- Centerpoints Are All You Need in Overhead Imagery - [[2210.01857](https://arxiv.org/abs/2210.01857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01857.md)].
- COPILOT: Human-Environment Collision Prediction and Localization from Egocentric Videos - [[2210.01781](https://arxiv.org/abs/2210.01781)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01781.md)].
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models - [[2210.01504](https://arxiv.org/abs/2210.01504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01504.md)].
- Extraneousness-Aware Imitation Learning - [[2210.01379](https://arxiv.org/abs/2210.01379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01379.md)].
- Recitation-Augmented Language Models - [[2210.01296](https://arxiv.org/abs/2210.01296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01296.md)].
- Event-based Temporally Dense Optical Flow Estimation with Sequential Neural Networks - [[2210.01244](https://arxiv.org/abs/2210.01244)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01244.md)].
- Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization - [[2210.01241](https://arxiv.org/abs/2210.01241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01241.md)].
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought - [[2210.01240](https://arxiv.org/abs/2210.01240)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01240.md)].
- Masked Spiking Transformer - [[2210.01208](https://arxiv.org/abs/2210.01208)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.01208.md)].
- Membership Inference Attacks Against Text-to-image Generation Models - [[2210.00968](https://arxiv.org/abs/2210.00968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00968.md)].
- Improving Sample Quality of Diffusion Models Using Self-Attention Guidance - [[2210.00939](https://arxiv.org/abs/2210.00939)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00939.md)].
- Mastering Spatial Graph Prediction of Road Networks - [[2210.00828](https://arxiv.org/abs/2210.00828)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00828.md)].
- Complexity-Based Prompting for Multi-Step Reasoning - [[2210.00720](https://arxiv.org/abs/2210.00720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00720.md)].
- IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis - [[2210.00647](https://arxiv.org/abs/2210.00647)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00647.md)].
- "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction - [[2210.03735](https://arxiv.org/abs/2210.03735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.03735.md)].
- NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review - [[2210.00379](https://arxiv.org/abs/2210.00379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00379.md)].
- Multimodal Analogical Reasoning over Knowledge Graphs - [[2210.00312](https://arxiv.org/abs/2210.00312)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00312.md)].

### September 2022

- Combining Efficient and Precise Sign Language Recognition: Good pose estimation library is all you need - [[2210.00893](https://arxiv.org/abs/2210.00893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2210.00893.md)].
- Hiding Visual Information via Obfuscating Adversarial Perturbations - [[2209.15304](https://arxiv.org/abs/2209.15304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.15304.md)].
- Learning Transferable Spatiotemporal Representations from Natural Script Knowledge - [[2209.15280](https://arxiv.org/abs/2209.15280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.15280.md)].
- Compositional Semantic Parsing with Large Language Models - [[2209.15003](https://arxiv.org/abs/2209.15003)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.15003.md)].
- DreamFusion: Text-to-3D using 2D Diffusion - [[2209.14988](https://arxiv.org/abs/2209.14988)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14988.md)].
- EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding - [[2209.14941](https://arxiv.org/abs/2209.14941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14941.md)].
- Make-A-Video: Text-to-Video Generation without Text-Video Data - [[2209.14792](https://arxiv.org/abs/2209.14792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14792.md)].
- Dependent Bayesian Lenses: Categories of Bidirectional Markov Kernels with Canonical Bayesian Inversion - [[2209.14728](https://arxiv.org/abs/2209.14728)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14728.md)].
- Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning - [[2209.14610](https://arxiv.org/abs/2209.14610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14610.md)].
- Improving alignment of dialogue agents via targeted human judgements - [[2209.14375](https://arxiv.org/abs/2209.14375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.14375.md)].
- Learning State-Aware Visual Representations from Audible Interactions - [[2209.13583](https://arxiv.org/abs/2209.13583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.13583.md)].
- Sentiment is all you need to win US Presidential elections - [[2209.13487](https://arxiv.org/abs/2209.13487)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.13487.md)].
- Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts - [[2209.12711](https://arxiv.org/abs/2209.12711)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.12711.md)].
- Paraphrasing Is All You Need for Novel Object Captioning - [[2209.12343](https://arxiv.org/abs/2209.12343)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.12343.md)].
- Promptagator: Few-shot Dense Retrieval From 8 Examples - [[2209.11755](https://arxiv.org/abs/2209.11755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.11755.md)].
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models - [[2209.11302](https://arxiv.org/abs/2209.11302)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.11302.md)].
- Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning - [[2209.10901](https://arxiv.org/abs/2209.10901)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.10901.md)].
- Generate rather than Retrieve: Large Language Models are Strong Context Generators - [[2209.10063](https://arxiv.org/abs/2209.10063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.10063.md)].
- Target-Guided Open-Domain Conversation Planning - [[2209.09746](https://arxiv.org/abs/2209.09746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.09746.md)].
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering - [[2209.09513](https://arxiv.org/abs/2209.09513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.09513.md)].
- Space-time tradeoffs of lenses and optics via higher category theory - [[2209.09351](https://arxiv.org/abs/2209.09351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.09351.md)].
- Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields - [[2209.09050](https://arxiv.org/abs/2209.09050)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.09050.md)].
- A Benchmark for Understanding and Generating Dialogue between Characters in Stories - [[2209.08524](https://arxiv.org/abs/2209.08524)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.08524.md)].
- Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models - [[2209.08141](https://arxiv.org/abs/2209.08141)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.08141.md)].
- Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models - [[2209.07511](https://arxiv.org/abs/2209.07511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.07511.md)].
- A Geometric Perspective on Variational Autoencoders - [[2209.07370](https://arxiv.org/abs/2209.07370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.07370.md)].
- PaLI: A Jointly-Scaled Multilingual Language-Image Model - [[2209.06794](https://arxiv.org/abs/2209.06794)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.06794.md)].
- Certified Robustness to Word Substitution Ranking Attack for Neural Ranking Models - [[2209.06691](https://arxiv.org/abs/2209.06691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.06691.md)].
- Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models - [[2209.06506](https://arxiv.org/abs/2209.06506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.06506.md)].
- Hard Negatives or False Negatives: Correcting Pooling Bias in Training Neural Ranking Models - [[2209.05072](https://arxiv.org/abs/2209.05072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.05072.md)].
- What does a platypus look like? Generating customized prompts for zero-shot image classification - [[2209.03320](https://arxiv.org/abs/2209.03320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.03320.md)].
- EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models - [[2209.02341](https://arxiv.org/abs/2209.02341)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.02341.md)].
- Selective Annotation Makes Language Models Better Few-Shot Learners - [[2209.01975](https://arxiv.org/abs/2209.01975)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.01975.md)].
- An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling - [[2209.01540](https://arxiv.org/abs/2209.01540)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.01540.md)].
- Visual Prompting via Image Inpainting - [[2209.00647](https://arxiv.org/abs/2209.00647)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.00647.md)].
- FLAME: Free-form Language-based Motion Synthesis & Editing - [[2209.00349](https://arxiv.org/abs/2209.00349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.00349.md)].

### August 2022

- Rethinking Conversational Recommendations: Is Decision Tree All You Need? - [[2208.14614](https://arxiv.org/abs/2208.14614)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.14614.md)].
- Radon concentration variations at the Yangyang underground laboratory - [[2209.0737](https://arxiv.org/abs/2209.0737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.0737.md)].
- Faithful Reasoning Using Large Language Models - [[2208.14271](https://arxiv.org/abs/2208.14271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.14271.md)].
- Multi-Outputs Is All You Need For Deblur - [[2208.13029](https://arxiv.org/abs/2208.13029)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.13029.md)].
- Masked Autoencoders Enable Efficient Knowledge Distillers - [[2208.12256](https://arxiv.org/abs/2208.12256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.12256.md)].
- DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation - [[2208.12242](https://arxiv.org/abs/2208.12242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.12242.md)].
- Understanding Diffusion Models: A Unified Perspective - [[2208.11970](https://arxiv.org/abs/2208.11970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.11970.md)].
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned - [[2209.07858](https://arxiv.org/abs/2209.07858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2209.07858.md)].
- Improving Personality Consistency in Conversation by Persona Extending - [[2208.10816](https://arxiv.org/abs/2208.10816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.10816.md)].
- Extending nnU-Net is all you need - [[2208.10791](https://arxiv.org/abs/2208.10791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.10791.md)].
- Hierarchically Decomposed Graph Convolutional Networks for Skeleton-Based Action Recognition - [[2208.10741](https://arxiv.org/abs/2208.10741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.10741.md)].
- Are disentangled representations all you need to build speaker anonymization systems? - [[2208.10497](https://arxiv.org/abs/2208.10497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.10497.md)].
- Scattered or Connected? An Optimized Parameter-efficient Tuning Approach for Information Retrieval - [[2208.09847](https://arxiv.org/abs/2208.09847)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09847.md)].
- A Contrastive Pre-training Approach to Learn Discriminative Autoencoder for Dense Retrieval - [[2208.09846](https://arxiv.org/abs/2208.09846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09846.md)].
- Label-Noise Learning with Intrinsically Long-Tailed Data - [[2208.09833](https://arxiv.org/abs/2208.09833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09833.md)].
- DenseShift: Towards Accurate and Efficient Low-Bit Power-of-Two Quantization - [[2208.09708](https://arxiv.org/abs/2208.09708)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09708.md)].
- SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability - [[2208.09418](https://arxiv.org/abs/2208.09418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09418.md)].
- Pseudo-Labels Are All You Need - [[2208.09243](https://arxiv.org/abs/2208.09243)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.09243.md)].
- CASE: Aligning Coarse-to-Fine Cognition and Affection for Empathetic Response Generation - [[2208.08845](https://arxiv.org/abs/2208.08845)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.08845.md)].
- Towards Open-vocabulary Scene Graph Generation with Prompt-based Finetuning - [[2208.08165](https://arxiv.org/abs/2208.08165)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.08165.md)].
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks - [[2208.07652](https://arxiv.org/abs/2208.07652)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.07652.md)].
- Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning - [[2208.10273](https://arxiv.org/abs/2208.10273)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.10273.md)].
- Dropout is NOT All You Need to Prevent Gradient Leakage - [[2208.06163](https://arxiv.org/abs/2208.06163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.06163.md)].
- PSUMNet: Unified Modality Part Streams are All You Need for Efficient Pose-based Action Recognition - [[2208.05775](https://arxiv.org/abs/2208.05775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.05775.md)].
- Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax - [[2208.05719](https://arxiv.org/abs/2208.05719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.05719.md)].
- Generative Action Description Prompts for Skeleton-based Action Recognition - [[2208.05318](https://arxiv.org/abs/2208.05318)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.05318.md)].
- Follow Me: Conversation Planning for Target-driven Recommendation Dialogue Systems - [[2208.03516](https://arxiv.org/abs/2208.03516)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.03516.md)].
- Atlas: Few-shot Learning with Retrieval Augmented Language Models - [[2208.03299](https://arxiv.org/abs/2208.03299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.03299.md)]. - BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage - [[2208.03188](https://arxiv.org/abs/2208.03188)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.03188.md)]. - Prompt Tuning for Generative Multimodal Pretrained Models - [[2208.02532](https://arxiv.org/abs/2208.02532)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.02532.md)]. - Character Generation through Self-Supervised Vectorization - [[2208.02012](https://arxiv.org/abs/2208.02012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.02012.md)]. - Prompt-to-Prompt Image Editing with Cross Attention Control - [[2208.01626](https://arxiv.org/abs/2208.01626)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.01626.md)]. - An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion - [[2208.01618](https://arxiv.org/abs/2208.01618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.01618.md)]. - Reduction Rules and ILP Are All You Need: Minimal Directed Feedback Vertex Set - [[2208.01119](https://arxiv.org/abs/2208.01119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.01119.md)]. - Neural network layers as parametric spans - [[2208.00809](https://arxiv.org/abs/2208.00809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.00809.md)]. - Composable Text Controls in Latent Space with ODEs - [[2208.00638](https://arxiv.org/abs/2208.00638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.00638.md)]. - Search for or Navigate to? Dual Adaptive Thinking for Object Navigation - [[2208.00553](https://arxiv.org/abs/2208.00553)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.00553.md)]. 
### July 2022 - MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures - [[2208.00277](https://arxiv.org/abs/2208.00277)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.00277.md)]. - Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment - [[2207.13085](https://arxiv.org/abs/2207.13085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.13085.md)]. - DETRs with Hybrid Matching - [[2207.13080](https://arxiv.org/abs/2207.13080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.13080.md)]. - Visual correspondence-based explanations improve AI robustness and human-AI team accuracy - [[2208.00780](https://arxiv.org/abs/2208.00780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.00780.md)]. - Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning - [[2208.02294](https://arxiv.org/abs/2208.02294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2208.02294.md)]. - Is GPT-3 all you need for Visual Question Answering in Cultural Heritage? - [[2207.12101](https://arxiv.org/abs/2207.12101)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.12101.md)]. - Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent - [[2207.12021](https://arxiv.org/abs/2207.12021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.12021.md)]. - All you need for horizontal slicing in 5G network - [[2207.11477](https://arxiv.org/abs/2207.11477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.11477.md)]. - Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization - [[2207.11209](https://arxiv.org/abs/2207.11209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.11209.md)]. 
- Adaptive Soft Contrastive Learning - [[2207.11163](https://arxiv.org/abs/2207.11163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.11163.md)]. - Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild - [[2207.10660](https://arxiv.org/abs/2207.10660)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.10660.md)]. - Language Model Cascades - [[2207.10342](https://arxiv.org/abs/2207.10342)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.10342.md)]. - Tailoring Self-Supervision for Supervised Learning - [[2207.10023](https://arxiv.org/abs/2207.10023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.10023.md)]. - Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability - [[2207.09615](https://arxiv.org/abs/2207.09615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.09615.md)]. - FedX: Unsupervised Federated Learning with Cross Knowledge Distillation - [[2207.09158](https://arxiv.org/abs/2207.09158)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.09158.md)]. - UniFusion: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird's-Eye-View - [[2207.08536](https://arxiv.org/abs/2207.08536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.08536.md)]. - Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding - [[2207.08455](https://arxiv.org/abs/2207.08455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.08455.md)]. - Language models show human-like content effects on reasoning - [[2207.07051](https://arxiv.org/abs/2207.07051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.07051.md)]. - PointNorm: Dual Normalization is All You Need for Point Cloud Analysis - [[2207.06324](https://arxiv.org/abs/2207.06324)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.06324.md)]. 
- Look-ups are not (yet) all you need for deep learning inference - [[2207.05808](https://arxiv.org/abs/2207.05808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.05808.md)]. - A Data-Based Perspective on Transfer Learning - [[2207.05739](https://arxiv.org/abs/2207.05739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.05739.md)]. - Inner Monologue: Embodied Reasoning through Planning with Language Models - [[2207.05608](https://arxiv.org/abs/2207.05608)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.05608.md)]. - Bootstrapping a User-Centered Task-Oriented Dialogue System - [[2207.05223](https://arxiv.org/abs/2207.05223)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.05223.md)]. - LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action - [[2207.04429](https://arxiv.org/abs/2207.04429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.04429.md)]. - Back to the Source: Diffusion-Driven Test-Time Adaptation - [[2207.03442](https://arxiv.org/abs/2207.03442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.03442.md)]. - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors - [[2207.02696](https://arxiv.org/abs/2207.02696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.02696.md)]. - Is a PET all you need? A multi-modal study for Alzheimer's disease using 3D CNNs - [[2207.02094](https://arxiv.org/abs/2207.02094)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.02094.md)]. - Beyond mAP: Towards better evaluation of instance segmentation - [[2207.01614](https://arxiv.org/abs/2207.01614)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.01614.md)]. - PVO: Panoptic Visual Odometry - [[2207.01610](https://arxiv.org/abs/2207.01610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.01610.md)]. 
- I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference - [[2207.01405](https://arxiv.org/abs/2207.01405)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.01405.md)]. - WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents - [[2207.01206](https://arxiv.org/abs/2207.01206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.01206.md)]. - Towards Robust Referring Video Object Segmentation with Cyclic Relational Consensus - [[2207.01203](https://arxiv.org/abs/2207.01203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.01203.md)]. - Computer-assisted Pronunciation Training -- Speech synthesis is almost all you need - [[2207.00774](https://arxiv.org/abs/2207.00774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.00774.md)]. - Rationale-Augmented Ensembles in Language Models - [[2207.00747](https://arxiv.org/abs/2207.00747)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2207.00747.md)]. ### June 2022 - Solving Quantitative Reasoning Problems with Language Models - [[2206.14858](https://arxiv.org/abs/2206.14858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.14858.md)]. - CV 3315 Is All You Need : Semantic Segmentation Competition - [[2206.12571](https://arxiv.org/abs/2206.12571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.12571.md)]. - Diegetic Representation of Feedback in Open Games - [[2206.12338](https://arxiv.org/abs/2206.12338)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.12338.md)]. - zPROBE: Zero Peek Robustness Checks for Federated Learning - [[2206.12100](https://arxiv.org/abs/2206.12100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.12100.md)]. - Rethinking Surgical Instrument Segmentation: A Background Image Can Be All You Need - [[2206.11804](https://arxiv.org/abs/2206.11804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.11804.md)]. 
- Invariant Causal Mechanisms through Distribution Matching - [[2206.11646](https://arxiv.org/abs/2206.11646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.11646.md)]. - GODEL: Large-Scale Pre-Training for Goal-Directed Dialog - [[2206.11309](https://arxiv.org/abs/2206.11309)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.11309.md)]. - KiloNeuS: A Versatile Neural Implicit Surface Representation for Real-Time Rendering - [[2206.10885](https://arxiv.org/abs/2206.10885)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.10885.md)]. - Questions Are All You Need to Train a Dense Passage Retriever - [[2206.10658](https://arxiv.org/abs/2206.10658)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.10658.md)]. - Marginal Tail-Adaptive Normalizing Flows - [[2206.10311](https://arxiv.org/abs/2206.10311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.10311.md)]. - DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations - [[2206.09541](https://arxiv.org/abs/2206.09541)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.09541.md)]. - All you need is feedback: Communication with block attention feedback codes - [[2206.09457](https://arxiv.org/abs/2206.09457)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.09457.md)]. - Gender Artifacts in Visual Datasets - [[2206.09191](https://arxiv.org/abs/2206.09191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.09191.md)]. - Landscape Learning for Neural Network Inversion - [[2206.09027](https://arxiv.org/abs/2206.09027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.09027.md)]. - MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge - [[2206.08853](https://arxiv.org/abs/2206.08853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.08853.md)]. 
- Sheaf Neural Networks with Connection Laplacians - [[2206.08702](https://arxiv.org/abs/2206.08702)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.08702.md)]. - PRANC: Pseudo RAndom Networks for Compacting deep models - [[2206.08464](https://arxiv.org/abs/2206.08464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.08464.md)]. - Zero-Shot Video Question Answering via Frozen Bidirectional Language Models - [[2206.08155](https://arxiv.org/abs/2206.08155)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.08155.md)]. - Balancing Discriminability and Transferability for Source-Free Domain Adaptation - [[2206.08009](https://arxiv.org/abs/2206.08009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.08009.md)]. - Emergent Abilities of Large Language Models - [[2206.07682](https://arxiv.org/abs/2206.07682)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.07682.md)]. - A smile is all you need: Predicting limiting activity coefficients from SMILES with natural language processing - [[2206.07048](https://arxiv.org/abs/2206.07048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.07048.md)]. - GRAM-HD: 3D-Consistent Image Generation at High Resolution with Generative Radiance Manifolds - [[2206.07255](https://arxiv.org/abs/2206.07255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.07255.md)]. - LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling - [[2206.07160](https://arxiv.org/abs/2206.07160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.07160.md)]. - Confidence Score for Source-Free Unsupervised Domain Adaptation - [[2206.06640](https://arxiv.org/abs/2206.06640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.06640.md)]. - Transformers are Meta-Reinforcement Learners - [[2206.06614](https://arxiv.org/abs/2206.06614)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.06614.md)]. 
- Language Models are General-Purpose Interfaces - [[2206.06336](https://arxiv.org/abs/2206.06336)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.06336.md)]. - Mining Multi-Label Samples from Single Positive Labels - [[2206.05764](https://arxiv.org/abs/2206.05764)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.05764.md)]. - Building a Personalized Dialogue System with Prompt-Tuning - [[2206.05399](https://arxiv.org/abs/2206.05399)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.05399.md)]. - Neural Prompt Search - [[2206.04673](https://arxiv.org/abs/2206.04673)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.04673.md)]. - Simple Cues Lead to a Strong Multi-Object Tracker - [[2206.04656](https://arxiv.org/abs/2206.04656)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.04656.md)]. - Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models - [[2206.04615](https://arxiv.org/abs/2206.04615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.04615.md)]. - Spatial-temporal Concept based Explanation of 3D ConvNets - [[2206.05275](https://arxiv.org/abs/2206.05275)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.05275.md)]. - Words are all you need? Language as an approximation for human similarity judgments - [[2206.04105](https://arxiv.org/abs/2206.04105)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.04105.md)]. - MobileOne: An Improved One millisecond Mobile Backbone - [[2206.04040](https://arxiv.org/abs/2206.04040)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.04040.md)]. - Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval - [[2206.03281](https://arxiv.org/abs/2206.03281)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.03281.md)]. 
- TriBYOL: Triplet BYOL for Self-Supervised Representation Learning - [[2206.03012](https://arxiv.org/abs/2206.03012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.03012.md)]. - Self-Knowledge Distillation based Self-Supervised Learning for Covid-19 Detection from Chest X-Ray Images - [[2206.03009](https://arxiv.org/abs/2206.03009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.03009.md)]. - Self-supervised Learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data - [[2206.02909](https://arxiv.org/abs/2206.02909)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02909.md)]. - A Neural Corpus Indexer for Document Retrieval - [[2206.02743](https://arxiv.org/abs/2206.02743)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02743.md)]. - Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering - [[2206.02721](https://arxiv.org/abs/2206.02721)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02721.md)]. - Is More Data All You Need? A Causal Exploration - [[2206.02409](https://arxiv.org/abs/2206.02409)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02409.md)]. - Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation - [[2206.02369](https://arxiv.org/abs/2206.02369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02369.md)]. - Making Large Language Models Better Reasoners with Step-Aware Verifier - [[2206.02336](https://arxiv.org/abs/2206.02336)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.02336.md)]. - Delving into the Openness of CLIP - [[2206.01986](https://arxiv.org/abs/2206.01986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01986.md)]. - Revisiting the "Video" in Video-Language Understanding - [[2206.01720](https://arxiv.org/abs/2206.01720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01720.md)]. 
- PROMISSING: Pruning Missing Values in Neural Networks - [[2206.01640](https://arxiv.org/abs/2206.01640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01640.md)]. - A Survey on Computationally Efficient Neural Architecture Search - [[2206.01520](https://arxiv.org/abs/2206.01520)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01520.md)]. - PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images - [[2206.01256](https://arxiv.org/abs/2206.01256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01256.md)]. - Multi-View Active Fine-Grained Recognition - [[2206.01153](https://arxiv.org/abs/2206.01153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01153.md)]. - Unified Recurrence Modeling for Video Action Anticipation - [[2206.01009](https://arxiv.org/abs/2206.01009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01009.md)]. - NIPQ: Noise proxy-based Integrated Pseudo-Quantization - [[2206.00820](https://arxiv.org/abs/2206.00820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.00820.md)]. - Hopular: Modern Hopfield Networks for Tabular Data - [[2206.00664](https://arxiv.org/abs/2206.00664)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.00664.md)]. - One- and two-dimensional solitons in spin-orbit-coupled Bose-Einstein condensates with fractional kinetic energy - [[2206.00404](https://arxiv.org/abs/2206.00404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.00404.md)]. - ORC: Network Group-based Knowledge Distillation using Online Role Change - [[2206.01186](https://arxiv.org/abs/2206.01186)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.01186.md)]. - A Theoretical Framework for Inference Learning - [[2206.00164](https://arxiv.org/abs/2206.00164)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.00164.md)]. 
### May 2022 - New asymptotically flat static vacuum metrics with near Euclidean boundary data - [[2206.00082](https://arxiv.org/abs/2206.00082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2206.00082.md)]. - itKD: Interchange Transfer-based Knowledge Distillation for 3D Object Detection - [[2205.15531](https://arxiv.org/abs/2205.15531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.15531.md)]. - Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning - [[2205.15367](https://arxiv.org/abs/2205.15367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.15367.md)]. - Self-Supervised Visual Representation Learning with Semantic Grouping - [[2205.15288](https://arxiv.org/abs/2205.15288)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.15288.md)]. - GMML is All you Need - [[2205.14986](https://arxiv.org/abs/2205.14986)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14986.md)]. - FRAug: Tackling Federated Learning with Non-IID Features via Representation Augmentation - [[2205.14900](https://arxiv.org/abs/2205.14900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14900.md)]. - Robust Weight Perturbation for Adversarial Training - [[2205.14826](https://arxiv.org/abs/2205.14826)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14826.md)]. - EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction - [[2205.14756](https://arxiv.org/abs/2205.14756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14756.md)]. - CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI - [[2205.14727](https://arxiv.org/abs/2205.14727)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14727.md)]. - CoNT: Contrastive Neural Text Generation - [[2205.14690](https://arxiv.org/abs/2205.14690)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14690.md)]. 
- Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction - [[2205.14230](https://arxiv.org/abs/2205.14230)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14230.md)]. - Controllable Text Generation with Neurally-Decomposed Oracle - [[2205.14219](https://arxiv.org/abs/2205.14219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14219.md)]. - Diffusion-LM Improves Controllable Text Generation - [[2205.14217](https://arxiv.org/abs/2205.14217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14217.md)]. - GIT: A Generative Image-to-text Transformer for Vision and Language - [[2205.14100](https://arxiv.org/abs/2205.14100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.14100.md)]. - Prototype Based Classification from Hierarchy to Fairness - [[2205.13997](https://arxiv.org/abs/2205.13997)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.13997.md)]. - Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions - [[2205.13803](https://arxiv.org/abs/2205.13803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.13803.md)]. - Quark: Controllable Text Generation with Reinforced Unlearning - [[2205.13636](https://arxiv.org/abs/2205.13636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.13636.md)]. - Physical-World Optical Adversarial Attacks on 3D Face Recognition - [[2205.13412](https://arxiv.org/abs/2205.13412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.13412.md)]. - Pretraining is All You Need for Image-to-Image Translation - [[2205.12952](https://arxiv.org/abs/2205.12952)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.12952.md)]. - RSTGen: Imbuing Fine-Grained Interpretable Control into Long-FormText Generators - [[2205.12590](https://arxiv.org/abs/2205.12590)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.12590.md)]. 
- TALM: Tool Augmented Language Models - [[2205.12255](https://arxiv.org/abs/2205.12255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.12255.md)]. - Large Language Models are Zero-Shot Reasoners - [[2205.11916](https://arxiv.org/abs/2205.11916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.11916.md)]. - Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations - [[2205.11822](https://arxiv.org/abs/2205.11822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.11822.md)]. - Decoder Denoising Pretraining for Semantic Segmentation - [[2205.11423](https://arxiv.org/abs/2205.11423)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.11423.md)]. - PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection - [[2205.11098](https://arxiv.org/abs/2205.11098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.11098.md)]. - GraphMAE: Self-Supervised Masked Graph Autoencoders - [[2205.10803](https://arxiv.org/abs/2205.10803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.10803.md)]. - All You Need Is Logs: Improving Code Completion by Learning from Anonymous IDE Usage Logs - [[2205.10692](https://arxiv.org/abs/2205.10692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.10692.md)]. - Least-to-Most Prompting Enables Complex Reasoning in Large Language Models - [[2205.10625](https://arxiv.org/abs/2205.10625)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.10625.md)]. - A Review of Safe Reinforcement Learning: Methods, Theory and Applications - [[2205.10330](https://arxiv.org/abs/2205.10330)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.10330.md)]. - Sampling Is All You Need on Modeling Long-Term User Behaviors for CTR Prediction - [[2205.10249](https://arxiv.org/abs/2205.10249)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.10249.md)]. 
- Can Foundation Models Wrangle Your Data? - [[2205.09911](https://arxiv.org/abs/2205.09911)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09911.md)]. - RankGen: Improving Text Generation with Large Ranking Models - [[2205.09726](https://arxiv.org/abs/2205.09726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09726.md)]. - Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning - [[2205.09712](https://arxiv.org/abs/2205.09712)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09712.md)]. - Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection - [[2205.09613](https://arxiv.org/abs/2205.09613)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09613.md)]. - Learning Graph Structure from Convolutional Mixtures - [[2205.09575](https://arxiv.org/abs/2205.09575)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09575.md)]. - Target-Guided Dialogue Response Generation Using Commonsense and Data Augmentation - [[2205.09314](https://arxiv.org/abs/2205.09314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.09314.md)]. - Positional Information is All You Need: A Novel Pipeline for Self-Supervised SVDE from Videos - [[2205.08851](https://arxiv.org/abs/2205.08851)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.08851.md)]. - Need is All You Need: Homeostatic Neural Networks Adapt to Concept Shift - [[2205.08645](https://arxiv.org/abs/2205.08645)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.08645.md)]. - A CLIP-Hitchhiker's Guide to Long Video Retrieval - [[2205.08508](https://arxiv.org/abs/2205.08508)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.08508.md)]. - Robust Losses for Learning Value Functions - [[2205.08464](https://arxiv.org/abs/2205.08464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.08464.md)]. 
- LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning - [[2205.08232](https://arxiv.org/abs/2205.08232)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.08232.md)]. - Diffusion Models for Adversarial Purification - [[2205.07460](https://arxiv.org/abs/2205.07460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.07460.md)]. - Long-term Control for Dialogue Generation: Methods and Evaluation - [[2205.07352](https://arxiv.org/abs/2205.07352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.07352.md)]. - Aligning Robot Representations with Humans - [[2205.07882](https://arxiv.org/abs/2205.07882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.07882.md)]. - A Generalist Agent - [[2205.06175](https://arxiv.org/abs/2205.06175)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.06175.md)]. - Reduce Information Loss in Transformers for Pluralistic Image Inpainting - [[2205.05076](https://arxiv.org/abs/2205.05076)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.05076.md)]. - Learning to Answer Visual Questions from Web Videos - [[2205.05019](https://arxiv.org/abs/2205.05019)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.05019.md)]. - When does dough become a bagel? Analyzing the remaining mistakes on ImageNet - [[2205.04596](https://arxiv.org/abs/2205.04596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.04596.md)]. - A for-loop is all you need. For solving the inverse problem in the case of personalized tumor growth modeling - [[2205.04550](https://arxiv.org/abs/2205.04550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.04550.md)]. - ConvMAE: Masked Convolution Meets Masked Autoencoders - [[2205.03892](https://arxiv.org/abs/2205.03892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.03892.md)]. 
- Towards a Progression-Aware Autonomous Dialogue Agent - [[2205.03692](https://arxiv.org/abs/2205.03692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.03692.md)]. - The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning - [[2205.03401](https://arxiv.org/abs/2205.03401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.03401.md)]. - Spiking Graph Convolutional Networks - [[2205.02767](https://arxiv.org/abs/2205.02767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.02767.md)]. - A Simple Contrastive Learning Objective for Alleviating Neural Text Degeneration - [[2205.02517](https://arxiv.org/abs/2205.02517)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.02517.md)]. - Lexical Knowledge Internalization for Neural Dialog Generation - [[2205.01941](https://arxiv.org/abs/2205.01941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.01941.md)]. - End2End Multi-View Feature Matching with Differentiable Pose Optimization - [[2205.01694](https://arxiv.org/abs/2205.01694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.01694.md)]. - Learning to Transfer Prompts for Text Generation - [[2205.01543](https://arxiv.org/abs/2205.01543)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.01543.md)]. - OPT: Open Pre-trained Transformer Language Models - [[2205.01068](https://arxiv.org/abs/2205.01068)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.01068.md)]. ### April 2022 - Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models - [[2205.00176](https://arxiv.org/abs/2205.00176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2205.00176.md)]. - Flamingo: a Visual Language Model for Few-Shot Learning - [[2204.14198](https://arxiv.org/abs/2204.14198)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.14198.md)]. 
- Control Globally, Understand Locally: A Global-to-Local Hierarchical Graph Network for Emotional Support Conversation - [[2204.12749](https://arxiv.org/abs/2204.12749)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.12749.md)]. - MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation - [[2204.12667](https://arxiv.org/abs/2204.12667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.12667.md)]. - PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions - [[2204.12511](https://arxiv.org/abs/2204.12511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.12511.md)]. - LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback - [[2204.11545](https://arxiv.org/abs/2204.11545)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.11545.md)]. - Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models - [[2204.11447](https://arxiv.org/abs/2204.11447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.11447.md)]. - Simulating Fluids in Real-World Still Images - [[2204.11335](https://arxiv.org/abs/2204.11335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.11335.md)]. - Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances - [[2204.10825](https://arxiv.org/abs/2204.10825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.10825.md)]. - Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction - [[2204.10641](https://arxiv.org/abs/2204.10641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.10641.md)]. - Autoregressive Search Engines: Generating Substrings as Document Identifiers - [[2204.10628](https://arxiv.org/abs/2204.10628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.10628.md)]. 
- Sharper Utility Bounds for Differentially Private Models - [[2204.10536](https://arxiv.org/abs/2204.10536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.10536.md)].
- Towards Multi-Turn Empathetic Dialogs with Positive Emotion Elicitation - [[2204.10509](https://arxiv.org/abs/2204.10509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.10509.md)].
- Event Transition Planning for Open-ended Text Generation - [[2204.09453](https://arxiv.org/abs/2204.09453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.09453.md)].
- Visio-Linguistic Brain Encoding - [[2204.08261](https://arxiv.org/abs/2204.08261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.08261.md)].
- Improving Passage Retrieval with Zero-Shot Question Generation - [[2204.07496](https://arxiv.org/abs/2204.07496)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07496.md)].
- INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold - [[2204.07439](https://arxiv.org/abs/2204.07439)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07439.md)].
- A Personalized Dialogue Generator with Implicit User Persona Detection - [[2204.07372](https://arxiv.org/abs/2204.07372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07372.md)].
- LaMemo: Language Modeling with Look-Ahead Memory - [[2204.07341](https://arxiv.org/abs/2204.07341)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07341.md)].
- Measuring Compositional Consistency for Video Question Answering - [[2204.07190](https://arxiv.org/abs/2204.07190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07190.md)].
- Masked Siamese Networks for Label-Efficient Learning - [[2204.07141](https://arxiv.org/abs/2204.07141)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07141.md)].
- GPT-NeoX-20B: An Open-Source Autoregressive Language Model - [[2204.06745](https://arxiv.org/abs/2204.06745)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.06745.md)].
- Transparent Shape from a Single View Polarization Image - [[2204.06331](https://arxiv.org/abs/2204.06331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.06331.md)].
- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback - [[2204.05862](https://arxiv.org/abs/2204.05862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.05862.md)].
- Bootstrap Motion Forecasting With Self-Consistent Constraints - [[2204.05859](https://arxiv.org/abs/2204.05859)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.05859.md)].
- Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting - [[2204.05610](https://arxiv.org/abs/2204.05610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.05610.md)].
- Overlapping Word Removal is All You Need: Revisiting Data Imbalance in Hope Speech Detection - [[2204.05488](https://arxiv.org/abs/2204.05488)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.05488.md)].
- Exploring the Universal Vulnerability of Prompt-based Learning Paradigm - [[2204.05239](https://arxiv.org/abs/2204.05239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.05239.md)].
- DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning - [[2204.04799](https://arxiv.org/abs/2204.04799)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.04799.md)].
- Learning Trajectory-Aware Transformer for Video Super-Resolution - [[2204.04216](https://arxiv.org/abs/2204.04216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.04216.md)].
- Federated Learning with Partial Model Personalization - [[2204.03809](https://arxiv.org/abs/2204.03809)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.03809.md)].
- Unsupervised Prompt Learning for Vision-Language Models - [[2204.03649](https://arxiv.org/abs/2204.03649)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.03649.md)].
- Interacting with Non-Cooperative User: A New Paradigm for Proactive Dialogue Policy - [[2204.07433](https://arxiv.org/abs/2204.07433)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.07433.md)].
- Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality - [[2204.03162](https://arxiv.org/abs/2204.03162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.03162.md)].
- Knowledge Infused Decoding - [[2204.03084](https://arxiv.org/abs/2204.03084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.03084.md)].
- Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection - [[2204.02964](https://arxiv.org/abs/2204.02964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02964.md)].
- Towards An End-to-End Framework for Flow-Guided Video Inpainting - [[2204.02663](https://arxiv.org/abs/2204.02663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02663.md)].
- There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory - [[2204.02624](https://arxiv.org/abs/2204.02624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02624.md)].
- Efficient Test-Time Model Adaptation without Forgetting - [[2204.02610](https://arxiv.org/abs/2204.02610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02610.md)].
- C3KG: A Chinese Commonsense Conversation Knowledge Graph - [[2204.02549](https://arxiv.org/abs/2204.02549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02549.md)].
- Can language models learn from explanations in context? - [[2204.02329](https://arxiv.org/abs/2204.02329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02329.md)].
- PaLM: Scaling Language Modeling with Pathways - [[2204.02311](https://arxiv.org/abs/2204.02311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02311.md)].
- At the Locus of Performance: A Case Study in Enhancing CPUs with Copious 3D-Stacked Cache - [[2204.02235](https://arxiv.org/abs/2204.02235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02235.md)].
- $\textit{latent}$-GLAT: Glancing at Latent Variables for Parallel Text Generation - [[2204.02030](https://arxiv.org/abs/2204.02030)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.02030.md)].
- Learning Neural Acoustic Fields - [[2204.00628](https://arxiv.org/abs/2204.00628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.00628.md)].
- Do As I Can, Not As I Say: Grounding Language in Robotic Affordances - [[2204.01691](https://arxiv.org/abs/2204.01691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.01691.md)].
- Value Gradient weighted Model-Based Reinforcement Learning - [[2204.01464](https://arxiv.org/abs/2204.01464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.01464.md)].
- PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models - [[2204.01321](https://arxiv.org/abs/2204.01321)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.01321.md)].
- Probabilistic Implicit Scene Completion - [[2204.01264](https://arxiv.org/abs/2204.01264)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.01264.md)].
- Implicit Feedback for Dense Passage Retrieval: A Counterfactual Approach - [[2204.00718](https://arxiv.org/abs/2204.00718)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.00718.md)].
- Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language - [[2204.00598](https://arxiv.org/abs/2204.00598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.00598.md)].
- Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings - [[2204.00185](https://arxiv.org/abs/2204.00185)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.00185.md)].

### March 2022

- TransGeo: Transformer Is All You Need for Cross-view Image Geo-localization - [[2204.00097](https://arxiv.org/abs/2204.00097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2204.00097.md)].
- Exploring Visual Prompts for Adapting Large-Scale Models - [[2203.17274](https://arxiv.org/abs/2203.17274)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.17274.md)].
- A 23 MW data centre is all you need - [[2203.17265](https://arxiv.org/abs/2203.17265)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.17265.md)].
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis - [[2203.17261](https://arxiv.org/abs/2203.17261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.17261.md)].
- Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions - [[2203.17234](https://arxiv.org/abs/2203.17234)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.17234.md)].
- PromptDet: Towards Open-vocabulary Detection using Uncurated Images - [[2203.16513](https://arxiv.org/abs/2203.16513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.16513.md)].
- Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data - [[2203.15756](https://arxiv.org/abs/2203.15756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.15756.md)].
- Training Compute-Optimal Large Language Models - [[2203.15556](https://arxiv.org/abs/2203.15556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.15556.md)].
- Graph Neural Networks are Dynamic Programmers - [[2203.15544](https://arxiv.org/abs/2203.15544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.15544.md)].
- MAT: Mask-Aware Transformer for Large Hole Image Inpainting - [[2203.15270](https://arxiv.org/abs/2203.15270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.15270.md)].
- Parameter-efficient Model Adaptation for Vision Transformers - [[2203.16329](https://arxiv.org/abs/2203.16329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.16329.md)].
- Generalizing Few-Shot NAS with Gradient Matching - [[2203.15207](https://arxiv.org/abs/2203.15207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.15207.md)].
- Neural Vocoder is All You Need for Speech Super-resolution - [[2203.14941](https://arxiv.org/abs/2203.14941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.14941.md)].
- Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model - [[2203.14940](https://arxiv.org/abs/2203.14940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.14940.md)].
- STaR: Bootstrapping Reasoning With Reasoning - [[2203.14465](https://arxiv.org/abs/2203.14465)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.14465.md)].
- How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning? - [[2203.14221](https://arxiv.org/abs/2203.14221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.14221.md)].
- AutoML for Deep Recommender Systems: A Survey - [[2203.13922](https://arxiv.org/abs/2203.13922)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13922.md)].
- Continual Test-Time Domain Adaptation - [[2203.13591](https://arxiv.org/abs/2203.13591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13591.md)].
- MISC: A MIxed Strategy-Aware Model Integrating COMET for Emotional Support Conversation - [[2203.13560](https://arxiv.org/abs/2203.13560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13560.md)].
- A Comparative Survey of Deep Active Learning - [[2203.13450](https://arxiv.org/abs/2203.13450)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13450.md)].
- Linking Emergent and Natural Languages via Corpus Transfer - [[2203.13344](https://arxiv.org/abs/2203.13344)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13344.md)].
- MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection - [[2203.13310](https://arxiv.org/abs/2203.13310)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.13310.md)].
- Pathways: Asynchronous Distributed Dataflow for ML - [[2203.12533](https://arxiv.org/abs/2203.12533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.12533.md)].
- Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition - [[2203.12247](https://arxiv.org/abs/2203.12247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.12247.md)].
- Visual Prompt Tuning - [[2203.12119](https://arxiv.org/abs/2203.12119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.12119.md)].
- Language modeling via stochastic processes - [[2203.11370](https://arxiv.org/abs/2203.11370)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.11370.md)].
- Self-Consistency Improves Chain of Thought Reasoning in Language Models - [[2203.11171](https://arxiv.org/abs/2203.11171)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.11171.md)].
- Teaching language models to support answers with verified quotes - [[2203.11147](https://arxiv.org/abs/2203.11147)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.11147.md)].
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems - [[2203.10610](https://arxiv.org/abs/2203.10610)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.10610.md)].
- On Robust Prefix-Tuning for Text Classification - [[2203.10378](https://arxiv.org/abs/2203.10378)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.10378.md)].
- Generative Principal Component Analysis - [[2203.09693](https://arxiv.org/abs/2203.09693)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.09693.md)].
- Monotonic Differentiable Sorting Networks - [[2203.09630](https://arxiv.org/abs/2203.09630)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.09630.md)].
- A Framework and Benchmark for Deep Batch Active Learning for Regression - [[2203.09410](https://arxiv.org/abs/2203.09410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.09410.md)].
- RoMe: A Robust Metric for Evaluating Natural Language Generation - [[2203.09183](https://arxiv.org/abs/2203.09183)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.09183.md)].
- PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation - [[2203.09100](https://arxiv.org/abs/2203.09100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.09100.md)].
- Memorizing Transformers - [[2203.08913](https://arxiv.org/abs/2203.08913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08913.md)].
- Multi-Stage Prompting for Knowledgeable Dialogue Generation - [[2203.08745](https://arxiv.org/abs/2203.08745)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08745.md)].
- Differentiable DAG Sampling - [[2203.08509](https://arxiv.org/abs/2203.08509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08509.md)].
- Iteratively Prompt Pre-trained Language Models for Chain of Thought - [[2203.08383](https://arxiv.org/abs/2203.08383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08383.md)].
- Multi-View Document Representation Learning for Open-Domain Dense Retrieval - [[2203.08372](https://arxiv.org/abs/2203.08372)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08372.md)].
- Unified Visual Transformer Compression - [[2203.08243](https://arxiv.org/abs/2203.08243)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.08243.md)].
- Vision-Based Manipulators Need to Also See from Their Hands - [[2203.12677](https://arxiv.org/abs/2203.12677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.12677.md)].
- Augmenting Document Representations for Dense Retrieval with Interpolation and Perturbation - [[2203.07735](https://arxiv.org/abs/2203.07735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07735.md)].
- ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation - [[2203.07706](https://arxiv.org/abs/2203.07706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07706.md)].
- Distraction is All You Need for Fairness - [[2203.07593](https://arxiv.org/abs/2203.07593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07593.md)].
- ScienceWorld: Is your Agent Smarter than a 5th Grader? - [[2203.07540](https://arxiv.org/abs/2203.07540)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07540.md)].
- Respecting causality is all you need for training physics-informed neural networks - [[2203.07404](https://arxiv.org/abs/2203.07404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07404.md)].
- All in One: Exploring Unified Video-Language Pre-training - [[2203.07303](https://arxiv.org/abs/2203.07303)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07303.md)].
- Orchestrated Value Mapping for Reinforcement Learning - [[2203.07171](https://arxiv.org/abs/2203.07171)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.07171.md)].
- PromptChainer: Chaining Large Language Model Prompts through Visual Programming - [[2203.06566](https://arxiv.org/abs/2203.06566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.06566.md)].
- Categories of Differentiable Polynomial Circuits for Machine Learning - [[2203.06430](https://arxiv.org/abs/2203.06430)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.06430.md)].
- BiBERT: Accurate Fully Binarized BERT - [[2203.06390](https://arxiv.org/abs/2203.06390)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.06390.md)].
- MISF: Multi-level Interactive Siamese Filtering for High-Fidelity Image Inpainting - [[2203.06304](https://arxiv.org/abs/2203.06304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.06304.md)].
- LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval - [[2203.06169](https://arxiv.org/abs/2203.06169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.06169.md)].
- An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation - [[2203.05843](https://arxiv.org/abs/2203.05843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.05843.md)].
- Long Time No See! Open-Domain Conversation with Long-Term Persona Memory - [[2203.05797](https://arxiv.org/abs/2203.05797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.05797.md)].
- Conditional Prompt Learning for Vision-Language Models - [[2203.05557](https://arxiv.org/abs/2203.05557)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.05557.md)].
- Back to the Feature: Classical 3D Features are (Almost) All You Need for 3D Anomaly Detection - [[2203.05550](https://arxiv.org/abs/2203.05550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.05550.md)].
- Internet-augmented language models through few-shot prompting for open-domain question answering - [[2203.05115](https://arxiv.org/abs/2203.05115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.05115.md)].
- All You Need is LUV: Unsupervised Collection of Labeled Images using Invisible UV Fluorescent Indicators - [[2203.04566](https://arxiv.org/abs/2203.04566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.04566.md)].
- Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition - [[2203.04559](https://arxiv.org/abs/2203.04559)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.04559.md)].
- Kubric: A scalable dataset generator - [[2203.03570](https://arxiv.org/abs/2203.03570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.03570.md)].
- Adaptive Cross-Layer Attention for Image Restoration - [[2203.03619](https://arxiv.org/abs/2203.03619)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.03619.md)].
- Structured Pruning is All You Need for Pruning CNNs at Initialization - [[2203.02549](https://arxiv.org/abs/2203.02549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.02549.md)].
- Neural Simulated Annealing - [[2203.02201](https://arxiv.org/abs/2203.02201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.02201.md)].
- Training language models to follow instructions with human feedback - [[2203.02155](https://arxiv.org/abs/2203.02155)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.02155.md)].
- BoMD: Bag of Multi-label Descriptors for Noisy Chest X-ray Classification - [[2203.01937](https://arxiv.org/abs/2203.01937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.01937.md)].
- Self-Supervised Scene Flow Estimation with 4-D Automotive Radar - [[2203.01137](https://arxiv.org/abs/2203.01137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.01137.md)].
- Follow-Up of Extended Shells around B[e] Stars - [[2203.00963](https://arxiv.org/abs/2203.00963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00963.md)].
- Video is All You Need: Attacking PPG-based Biometric Authentication - [[2203.00928](https://arxiv.org/abs/2203.00928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00928.md)].
- Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding - [[2203.00867](https://arxiv.org/abs/2203.00867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00867.md)].
- DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index - [[2203.00537](https://arxiv.org/abs/2203.00537)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00537.md)].
- MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning - [[2203.00357](https://arxiv.org/abs/2203.00357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00357.md)].

### February 2022

- One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification - [[2203.00077](https://arxiv.org/abs/2203.00077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2203.00077.md)].
- Rethinking and Refining the Distinct Metric - [[2202.13587](https://arxiv.org/abs/2202.13587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.13587.md)].
- Filter-enhanced MLP is All You Need for Sequential Recommendation - [[2202.13556](https://arxiv.org/abs/2202.13556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.13556.md)].
- The Spectral Bias of Polynomial Neural Networks - [[2202.13473](https://arxiv.org/abs/2202.13473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.13473.md)].
- AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation - [[2202.13047](https://arxiv.org/abs/2202.13047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.13047.md)].
- Ask2Mask: Guided Data Selection for Masked Speech Modeling - [[2202.12719](https://arxiv.org/abs/2202.12719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.12719.md)].
- All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL - [[2202.11960](https://arxiv.org/abs/2202.11960)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.11960.md)].
- Auto-scaling Vision Transformers without Training - [[2202.11921](https://arxiv.org/abs/2202.11921)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.11921.md)].
- COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics - [[2202.11705](https://arxiv.org/abs/2202.11705)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.11705.md)].
- Socialformer: Social Network Inspired Long Document Modeling for Document Ranking - [[2202.10870](https://arxiv.org/abs/2202.10870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.10870.md)].
- 1-WL Expressiveness Is (Almost) All You Need - [[2202.10156](https://arxiv.org/abs/2202.10156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.10156.md)].
- Pseudo Numerical Methods for Diffusion Models on Manifolds - [[2202.09778](https://arxiv.org/abs/2202.09778)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.09778.md)].
- Bit-wise Training of Neural Network Weights - [[2202.09571](https://arxiv.org/abs/2202.09571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.09571.md)].
- Gaussian Mixture Convolution Networks - [[2202.09153](https://arxiv.org/abs/2202.09153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.09153.md)].
- cosFormer: Rethinking Softmax in Attention - [[2202.08791](https://arxiv.org/abs/2202.08791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.08791.md)].
- Task-Agnostic Graph Explanations - [[2202.08335](https://arxiv.org/abs/2202.08335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.08335.md)].
- Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations - [[2202.07800](https://arxiv.org/abs/2202.07800)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.07800.md)].
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis - [[2202.07728](https://arxiv.org/abs/2202.07728)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.07728.md)].
- A precortical module for robust CNNs to light variations - [[2202.07432](https://arxiv.org/abs/2202.07432)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.07432.md)].
- Transformer Memory as a Differentiable Search Index - [[2202.06991](https://arxiv.org/abs/2202.06991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.06991.md)].
- Domain Adaptation via Prompt Learning - [[2202.06687](https://arxiv.org/abs/2202.06687)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.06687.md)].
- FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows - [[2202.06633](https://arxiv.org/abs/2202.06633)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.06633.md)].
- A Contrastive Framework for Neural Text Generation - [[2202.06417](https://arxiv.org/abs/2202.06417)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.06417.md)].
- Conditional Contrastive Learning with Kernel - [[2202.05458](https://arxiv.org/abs/2202.05458)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.05458.md)].
- Domain Adversarial Training: A Game Perspective - [[2202.05352](https://arxiv.org/abs/2202.05352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.05352.md)].
- InPars: Data Augmentation for Information Retrieval using Large Language Models - [[2202.05144](https://arxiv.org/abs/2202.05144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.05144.md)].
- Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs - [[2202.04579](https://arxiv.org/abs/2202.04579)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.04579.md)].
- GiraffeDet: A Heavy-Neck Paradigm for Object Detection - [[2202.04256](https://arxiv.org/abs/2202.04256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.04256.md)].
- Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning - [[2202.04241](https://arxiv.org/abs/2202.04241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.04241.md)].
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models - [[2202.04053](https://arxiv.org/abs/2202.04053)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.04053.md)].
- Survey of Hallucination in Natural Language Generation - [[2202.03629](https://arxiv.org/abs/2202.03629)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.03629.md)].
- GrASP: Gradient-Based Affordance Selection for Planning - [[2202.04772](https://arxiv.org/abs/2202.04772)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.04772.md)].
- PolicyCleanse: Backdoor Detection and Mitigation in Reinforcement Learning - [[2202.03609](https://arxiv.org/abs/2202.03609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.03609.md)].
- Message Passing Neural PDE Solvers - [[2202.03376](https://arxiv.org/abs/2202.03376)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.03376.md)].
- OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework - [[2202.03052](https://arxiv.org/abs/2202.03052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.03052.md)].
- User Satisfaction Estimation with Sequential Dialogue Act Modeling in Goal-oriented Conversational Systems - [[2202.02912](https://arxiv.org/abs/2202.02912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.02912.md)].
- One-Nearest-Neighbor Search is All You Need for Minimax Optimal Regression and Classification - [[2202.02464](https://arxiv.org/abs/2202.02464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.02464.md)].
- mSLAM: Massively multilingual joint pre-training for speech and text - [[2202.01374](https://arxiv.org/abs/2202.01374)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.01374.md)].
- A Survey on Retrieval-Augmented Text Generation - [[2202.01110](https://arxiv.org/abs/2202.01110)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.01110.md)].
- ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition - [[2202.00758](https://arxiv.org/abs/2202.00758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.00758.md)].
- CLA-NeRF: Category-Level Articulated Neural Radiance Field - [[2202.00181](https://arxiv.org/abs/2202.00181)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2202.00181.md)].

### January 2022

- Signing the Supermask: Keep, Hide, Invert - [[2201.13361](https://arxiv.org/abs/2201.13361)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.13361.md)].
- Few-Shot Backdoor Attacks on Visual Object Tracking - [[2201.13178](https://arxiv.org/abs/2201.13178)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.13178.md)].
- Robust Imitation Learning from Corrupted Demonstrations - [[2201.12594](https://arxiv.org/abs/2201.12594)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.12594.md)].
- ItôWave: Itô Stochastic Differential Equation Is All You Need For Wave Generation - [[2201.12519](https://arxiv.org/abs/2201.12519)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.12519.md)].
- Counterfactual Plans under Distributional Ambiguity - [[2201.12487](https://arxiv.org/abs/2201.12487)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.12487.md)].
- DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR - [[2201.12329](https://arxiv.org/abs/2201.12329)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.12329.md)].
- Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model - [[2201.11990](https://arxiv.org/abs/2201.11990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.11990.md)].
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models - [[2201.11903](https://arxiv.org/abs/2201.11903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.11903.md)].
- DiscoScore: Evaluating Text Generation with BERT and Discourse Coherence - [[2201.11176](https://arxiv.org/abs/2201.11176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.11176.md)].
- Natural Language Descriptions of Deep Visual Features - [[2201.11114](https://arxiv.org/abs/2201.11114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.11114.md)].
- Explanatory Learning: Beyond Empiricism in Neural Networks - [[2201.10222](https://arxiv.org/abs/2201.10222)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.10222.md)].
- RePaint: Inpainting using Denoising Diffusion Probabilistic Models - [[2201.09865](https://arxiv.org/abs/2201.09865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.09865.md)].
- Learning Graph Augmentations to Learn Graph Representations - [[2201.09830](https://arxiv.org/abs/2201.09830)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.09830.md)]. - Patches Are All You Need? - [[2201.09792](https://arxiv.org/abs/2201.09792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.09792.md)]. - Neural Implicit Surface Evolution - [[2201.09636](https://arxiv.org/abs/2201.09636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.09636.md)]. - Universal Online Learning with Unbounded Losses: Memory Is All You Need - [[2201.08903](https://arxiv.org/abs/2201.08903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.08903.md)]. - Fast Differentiable Matrix Square Root - [[2201.08663](https://arxiv.org/abs/2201.08663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.08663.md)]. - End-to-end Generative Pretraining for Multimodal Video Captioning - [[2201.08264](https://arxiv.org/abs/2201.08264)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.08264.md)]. - LaMDA: Language Models for Dialog Applications - [[2201.08239](https://arxiv.org/abs/2201.08239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.08239.md)]. - Safe Deep RL in 3D Environments using Human Feedback - [[2201.08102](https://arxiv.org/abs/2201.08102)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.08102.md)]. - Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents - [[2201.07207](https://arxiv.org/abs/2201.07207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.07207.md)]. - Parameter-free Online Test-time Adaptation - [[2201.05718](https://arxiv.org/abs/2201.05718)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.05718.md)]. 
- Progressively Optimized Bi-Granular Document Representation for Scalable Embedding Based Retrieval - [[2201.05409](https://arxiv.org/abs/2201.05409)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.05409.md)]. - A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models - [[2201.05337](https://arxiv.org/abs/2201.05337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.05337.md)]. - Neural Circuit Architectural Priors for Embodied Control - [[2201.05242](https://arxiv.org/abs/2201.05242)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.05242.md)]. - SparseDet: Improving Sparsely Annotated Object Detection with Pseudo-positive Mining - [[2201.04620](https://arxiv.org/abs/2201.04620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.04620.md)]. - Structure and Semantics Preserving Document Representations - [[2201.03720](https://arxiv.org/abs/2201.03720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.03720.md)]. - QuadTree Attention for Vision Transformers - [[2201.02767](https://arxiv.org/abs/2201.02767)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.02767.md)]. - Categorical Hopfield Networks - [[2201.02756](https://arxiv.org/abs/2201.02756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.02756.md)]. - Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets - [[2201.02177](https://arxiv.org/abs/2201.02177)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.02177.md)]. - All You Need In Sign Language Production - [[2201.01609](https://arxiv.org/abs/2201.01609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.01609.md)]. - C2-CRS: Coarse-to-Fine Contrastive Learning for Conversational Recommender System - [[2201.02732](https://arxiv.org/abs/2201.02732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.02732.md)]. 
- Global existence and decay estimates for a viscoelastic plate equation with nonlinear damping and logarithmic nonlinearity - [[2201.0983](https://arxiv.org/abs/2201.0983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.0983.md)]. </div></details> <details open><summary><strong>2021</strong></summary><div> ### December 2021 - Optimal Representations for Covariate Shift - [[2201.00057](https://arxiv.org/abs/2201.00057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2201.00057.md)]. - On the Role of Neural Collapse in Transfer Learning - [[2112.15121](https://arxiv.org/abs/2112.15121)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.15121.md)]. - Self Reward Design with Fine-grained Interpretability - [[2112.15034](https://arxiv.org/abs/2112.15034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.15034.md)]. - Generative Kernel Continual learning - [[2112.13410](https://arxiv.org/abs/2112.13410)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.13410.md)]. - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? - [[2112.12345](https://arxiv.org/abs/2112.12345)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.12345.md)]. - ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce Strong Baselines For Combinatorial Optimization Problems, If Tuned and Trained Properly, on Appropriate Data - [[2112.12251](https://arxiv.org/abs/2112.12251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.12251.md)]. - Cost Aggregation Is All You Need for Few-Shot Segmentation - [[2112.11685](https://arxiv.org/abs/2112.11685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.11685.md)]. - Generalized Few-Shot Semantic Segmentation: All You Need is Fine-Tuning - [[2112.10982](https://arxiv.org/abs/2112.10982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.10982.md)]. 
- High-Resolution Image Synthesis with Latent Diffusion Models - [[2112.10752](https://arxiv.org/abs/2112.10752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.10752.md)]. - Transformers Can Do Bayesian Inference - [[2112.10510](https://arxiv.org/abs/2112.10510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.10510.md)]. - Align and Prompt: Video-and-Language Pre-training with Entity Prompts - [[2112.09583](https://arxiv.org/abs/2112.09583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.09583.md)]. - WebGPT: Browser-assisted question-answering with human feedback - [[2112.09332](https://arxiv.org/abs/2112.09332)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.09332.md)]. - Automated Deep Learning: Neural Architecture Search Is Not the End - [[2112.09245](https://arxiv.org/abs/2112.09245)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.09245.md)]. - All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines - [[2112.09219](https://arxiv.org/abs/2112.09219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.09219.md)]. - Unsupervised Dense Information Retrieval with Contrastive Learning - [[2112.09118](https://arxiv.org/abs/2112.09118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.09118.md)]. - NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics - [[2112.08726](https://arxiv.org/abs/2112.08726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08726.md)]. - Reframing Human-AI Collaboration for Generating Free-Text Explanations - [[2112.08674](https://arxiv.org/abs/2112.08674)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08674.md)]. - Learning to Prompt for Continual Learning - [[2112.08654](https://arxiv.org/abs/2112.08654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08654.md)]. 
- Learning To Retrieve Prompts for In-Context Learning - [[2112.08633](https://arxiv.org/abs/2112.08633)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08633.md)]. - Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge - [[2112.08619](https://arxiv.org/abs/2112.08619)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08619.md)]. - Rethinking Nearest Neighbors for Visual Classification - [[2112.08459](https://arxiv.org/abs/2112.08459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08459.md)]. - Improving Conversational Recommendation Systems' Quality with Context-Aware Item Meta Information - [[2112.08140](https://arxiv.org/abs/2112.08140)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.08140.md)]. - Massive-scale Decoding for Text Generation using Lattices - [[2112.07660](https://arxiv.org/abs/2112.07660)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.07660.md)]. - MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation - [[2112.07194](https://arxiv.org/abs/2112.07194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.07194.md)]. - Real-Time Neural Voice Camouflage - [[2112.07076](https://arxiv.org/abs/2112.07076)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.07076.md)]. - GLaM: Efficient Scaling of Language Models with Mixture-of-Experts - [[2112.06905](https://arxiv.org/abs/2112.06905)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.06905.md)]. - VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks - [[2112.06825](https://arxiv.org/abs/2112.06825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.06825.md)]. - Step-unrolled Denoising Autoencoders for Text Generation - [[2112.06749](https://arxiv.org/abs/2112.06749)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.06749.md)]. 
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability - [[2112.06592](https://arxiv.org/abs/2112.06592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.06592.md)]. - Self-Supervised Bot Play for Conversational Recommendation with Justifications - [[2112.05197](https://arxiv.org/abs/2112.05197)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.05197.md)]. - On Convergence of Federated Averaging Langevin Dynamics - [[2112.05120](https://arxiv.org/abs/2112.05120)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.05120.md)]. - Scaling Language Models: Methods, Analysis & Insights from Training Gopher - [[2112.11446](https://arxiv.org/abs/2112.11446)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.11446.md)]. - Prompting Visual-Language Models for Efficient Video Understanding - [[2112.04478](https://arxiv.org/abs/2112.04478)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.04478.md)]. - Improving language models by retrieving from trillions of tokens - [[2112.04426](https://arxiv.org/abs/2112.04426)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.04426.md)]. - Pareto Domain Adaptation - [[2112.04137](https://arxiv.org/abs/2112.04137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.04137.md)]. - DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover's Distance Improves Out-Of-Distribution Face Identification - [[2112.04016](https://arxiv.org/abs/2112.04016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.04016.md)]. - Universalizing Weak Supervision - [[2112.03865](https://arxiv.org/abs/2112.03865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.03865.md)]. - Genetic Algorithm for Constrained Molecular Inverse Design - [[2112.03518](https://arxiv.org/abs/2112.03518)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.03518.md)]. 
- Variational Wasserstein gradient flow - [[2112.02424](https://arxiv.org/abs/2112.02424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.02424.md)]. - YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone - [[2112.02418](https://arxiv.org/abs/2112.02418)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.02418.md)]. - Linear algebra with transformers - [[2112.01898](https://arxiv.org/abs/2112.01898)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.01898.md)]. - DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting - [[2112.01518](https://arxiv.org/abs/2112.01518)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.01518.md)]. - Mind the gap in university rankings: a complex network approach towards fairness - [[2112.1341](https://arxiv.org/abs/2112.1341)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.1341.md)]. - Magnetic correction to the Anomalous Magnetic Moment of Electron - [[2112.1051](https://arxiv.org/abs/2112.1051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.1051.md)]. - Neural Stochastic Dual Dynamic Programming - [[2112.00874](https://arxiv.org/abs/2112.00874)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.00874.md)]. - A General Language Assistant as a Laboratory for Alignment - [[2112.00861](https://arxiv.org/abs/2112.00861)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.00861.md)]. - Routing with Self-Attention for Multimodal Capsule Networks - [[2112.00775](https://arxiv.org/abs/2112.00775)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.00775.md)]. - MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions - [[2112.00431](https://arxiv.org/abs/2112.00431)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.00431.md)]. 
### November 2021 - Show Your Work: Scratchpads for Intermediate Computation with Language Models - [[2112.00114](https://arxiv.org/abs/2112.00114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2112.00114.md)]. - Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective - [[2111.14820](https://arxiv.org/abs/2111.14820)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.14820.md)]. - GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy Injection - [[2111.14592](https://arxiv.org/abs/2111.14592)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.14592.md)]. - A category theory framework for Bayesian learning - [[2111.14293](https://arxiv.org/abs/2111.14293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.14293.md)]. - Pre-training Methods in Information Retrieval - [[2111.13853](https://arxiv.org/abs/2111.13853)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.13853.md)]. - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection - [[2111.13495](https://arxiv.org/abs/2111.13495)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.13495.md)]. - SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning - [[2111.13196](https://arxiv.org/abs/2111.13196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.13196.md)]. - Group equivariant neural posterior estimation - [[2111.13139](https://arxiv.org/abs/2111.13139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.13139.md)]. - CDNet is all you need: Cascade DCN based underwater object detection RCNN - [[2111.12982](https://arxiv.org/abs/2111.12982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.12982.md)]. 
- VIOLET : End-to-End Video-Language Transformers with Masked Visual-token Modeling - [[2111.12681](https://arxiv.org/abs/2111.12681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.12681.md)]. - Hierarchical Modular Network for Video Captioning - [[2111.12476](https://arxiv.org/abs/2111.12476)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.12476.md)]. - Node-Level Differentially Private Graph Neural Networks - [[2111.15521](https://arxiv.org/abs/2111.15521)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.15521.md)]. - Subgraph Permutation Equivariant Networks - [[2111.11840](https://arxiv.org/abs/2111.11840)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11840.md)]. - Variance Reduction in Deep Learning: More Momentum is All You Need - [[2111.11828](https://arxiv.org/abs/2111.11828)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11828.md)]. - Deep Point Cloud Reconstruction - [[2111.11704](https://arxiv.org/abs/2111.11704)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11704.md)]. - Lossless Compression with Probabilistic Circuits - [[2111.11632](https://arxiv.org/abs/2111.11632)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11632.md)]. - Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction - [[2111.11215](https://arxiv.org/abs/2111.11215)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11215.md)]. - Plant 'n' Seek: Can You Find the Winning Ticket? - [[2111.11153](https://arxiv.org/abs/2111.11153)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.11153.md)]. - Deep Probability Estimation - [[2111.10734](https://arxiv.org/abs/2111.10734)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.10734.md)]. 
- Self-Supervised Point Cloud Completion via Inpainting - [[2111.10701](https://arxiv.org/abs/2111.10701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.10701.md)]. - Are Vision Transformers Robust to Patch Perturbations? - [[2111.10659](https://arxiv.org/abs/2111.10659)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.10659.md)]. - Deep Safe Multi-Task Learning - [[2111.10601](https://arxiv.org/abs/2111.10601)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.10601.md)]. - FBNetV5: Neural Architecture Search for Multiple Tasks in One Run - [[2111.10007](https://arxiv.org/abs/2111.10007)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.10007.md)]. - One-Shot Generative Domain Adaptation - [[2111.09876](https://arxiv.org/abs/2111.09876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.09876.md)]. - Perceiving and Modeling Density is All You Need for Image Dehazing - [[2111.09733](https://arxiv.org/abs/2111.09733)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.09733.md)]. - Selective Ensembles for Consistent Predictions - [[2111.08230](https://arxiv.org/abs/2111.08230)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.08230.md)]. - Bolstering Stochastic Gradient Descent with Model Building - [[2111.07058](https://arxiv.org/abs/2111.07058)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.07058.md)]. - Gradients are Not All You Need - [[2111.05803](https://arxiv.org/abs/2111.05803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.05803.md)]. - Sliced Recursive Transformer - [[2111.05297](https://arxiv.org/abs/2111.05297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.05297.md)]. - Realizable Learning is All You Need - [[2111.04746](https://arxiv.org/abs/2111.04746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.04746.md)]. 
- MT3: Multi-Task Multitrack Music Transcription - [[2111.03017](https://arxiv.org/abs/2111.03017)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.03017.md)]. - Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies - [[2111.02552](https://arxiv.org/abs/2111.02552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.02552.md)]. - LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs - [[2111.02114](https://arxiv.org/abs/2111.02114)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.02114.md)]. - DAGSurv: Directed Acyclic Graph Based Survival Analysis Using Deep Neural Networks - [[2111.1482](https://arxiv.org/abs/2111.1482)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.1482.md)]. - Can Vision Transformers Perform Convolution? - [[2111.01353](https://arxiv.org/abs/2111.01353)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.01353.md)]. - Deep neural networks as nested dynamical systems - [[2111.01297](https://arxiv.org/abs/2111.01297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.01297.md)]. - LSTA-Net: Long short-term Spatio-Temporal Aggregation Network for Skeleton-based Action Recognition - [[2111.0823](https://arxiv.org/abs/2111.0823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.0823.md)]. - Towards the Generalization of Contrastive Self-Supervised Learning - [[2111.00743](https://arxiv.org/abs/2111.00743)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.00743.md)]. ### October 2021 - Template Filling for Controllable Commonsense Reasoning - [[2111.00539](https://arxiv.org/abs/2111.00539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2111.00539.md)]. - Hyperparameter Tuning is All You Need for LISTA - [[2110.15900](https://arxiv.org/abs/2110.15900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.15900.md)]. 
- Improving Fairness via Federated Learning - [[2110.15545](https://arxiv.org/abs/2110.15545)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.15545.md)]. - The magnitude vector of images - [[2110.15188](https://arxiv.org/abs/2110.15188)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.15188.md)]. - Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning - [[2110.14943](https://arxiv.org/abs/2110.14943)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.14943.md)]. - Training Verifiers to Solve Math Word Problems - [[2110.14168](https://arxiv.org/abs/2110.14168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.14168.md)]. - s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning - [[2110.13640](https://arxiv.org/abs/2110.13640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.13640.md)]. - The Efficiency Misnomer - [[2110.12894](https://arxiv.org/abs/2110.12894)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.12894.md)]. - Facilitating Database Tuning with Hyper-Parameter Optimization: A Comprehensive Experimental Evaluation - [[2110.12654](https://arxiv.org/abs/2110.12654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.12654.md)]. - DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021 - [[2110.12612](https://arxiv.org/abs/2110.12612)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.12612.md)]. - Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models? - [[2110.11929](https://arxiv.org/abs/2110.11929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.11929.md)]. - Center Loss Regularization for Continual Learning - [[2110.11314](https://arxiv.org/abs/2110.11314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.11314.md)]. 
- Fast Model Editing at Scale - [[2110.11309](https://arxiv.org/abs/2110.11309)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.11309.md)]. - SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark - [[2110.10661](https://arxiv.org/abs/2110.10661)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.10661.md)]. - BERMo: What can BERT learn from ELMo? - [[2110.15802](https://arxiv.org/abs/2110.15802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.15802.md)]. - TLDR: Twin Learning for Dimensionality Reduction - [[2110.09455](https://arxiv.org/abs/2110.09455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.09455.md)]. - Natural Attribute-based Shift Detection - [[2110.09276](https://arxiv.org/abs/2110.09276)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.09276.md)]. - Illiterate DALL-E Learns to Compose - [[2110.11405](https://arxiv.org/abs/2110.11405)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.11405.md)]. - Multimodal Dialogue Response Generation - [[2110.08515](https://arxiv.org/abs/2110.08515)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08515.md)]. - Comparing Human and Machine Bias in Face Recognition - [[2110.08396](https://arxiv.org/abs/2110.08396)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08396.md)]. - Generated Knowledge Prompting for Commonsense Reasoning - [[2110.08387](https://arxiv.org/abs/2110.08387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08387.md)]. - On Learning the Transformer Kernel - [[2110.08323](https://arxiv.org/abs/2110.08323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08323.md)]. - Multitask Prompted Training Enables Zero-Shot Task Generalization - [[2110.08207](https://arxiv.org/abs/2110.08207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08207.md)]. 
- Guided Point Contrastive Learning for Semi-supervised Point Cloud Semantic Segmentation - [[2110.08188](https://arxiv.org/abs/2110.08188)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08188.md)]. - Few-Shot Bot: Prompt-Based Learning for Dialogue Systems - [[2110.08118](https://arxiv.org/abs/2110.08118)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08118.md)]. - Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation for Open-Domain Dialogue - [[2110.08094](https://arxiv.org/abs/2110.08094)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.08094.md)]. - On-Policy Model Errors in Reinforcement Learning - [[2110.07985](https://arxiv.org/abs/2110.07985)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.07985.md)]. - ContraQA: Question Answering under Contradicting Contexts - [[2110.07803](https://arxiv.org/abs/2110.07803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.07803.md)]. - RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models - [[2110.07477](https://arxiv.org/abs/2110.07477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.07477.md)]. - RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking - [[2110.07367](https://arxiv.org/abs/2110.07367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.07367.md)]. - CLIP4Caption: CLIP for Video Caption - [[2110.06615](https://arxiv.org/abs/2110.06615)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06615.md)]. - Parallel Deep Neural Networks Have Zero Duality Gap - [[2110.06482](https://arxiv.org/abs/2110.06482)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06482.md)]. - Causal discovery from conditionally stationary time-series - [[2110.06257](https://arxiv.org/abs/2110.06257)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06257.md)]. 
- Molecular Graph Generation via Geometric Scattering - [[2110.06241](https://arxiv.org/abs/2110.06241)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06241.md)]. - Open-Set Recognition: a Good Closed-Set Classifier is All You Need? - [[2110.06207](https://arxiv.org/abs/2110.06207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06207.md)]. - Efficient Neural Ranking using Forward Indexes - [[2110.06051](https://arxiv.org/abs/2110.06051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.06051.md)]. - DiscoDVT: Generating Long Text with Discourse-Aware Discrete Variational Transformer - [[2110.05999](https://arxiv.org/abs/2110.05999)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.05999.md)]. - Relative Molecule Self-Attention Transformer - [[2110.05841](https://arxiv.org/abs/2110.05841)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.05841.md)]. - Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval - [[2110.05789](https://arxiv.org/abs/2110.05789)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.05789.md)]. - Certified Patch Robustness via Smoothed Vision Transformers - [[2110.07719](https://arxiv.org/abs/2110.07719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.07719.md)]. - Global Vision Transformer Pruning with Hessian-Aware Saliency - [[2110.04869](https://arxiv.org/abs/2110.04869)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04869.md)]. - Long Expressive Memory for Sequence Modeling - [[2110.04744](https://arxiv.org/abs/2110.04744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04744.md)]. - Multi-Agent MDP Homomorphic Networks - [[2110.04495](https://arxiv.org/abs/2110.04495)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04495.md)]. 
- Neural Link Prediction with Walk Pooling - [[2110.04375](https://arxiv.org/abs/2110.04375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04375.md)]. - FRL: Federated Rank Learning - [[2110.04350](https://arxiv.org/abs/2110.04350)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04350.md)]. - On the Limitations of Multimodal VAEs - [[2110.04121](https://arxiv.org/abs/2110.04121)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04121.md)]. - Token Pooling in Vision Transformers - [[2110.03860](https://arxiv.org/abs/2110.03860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03860.md)]. - FOCUS: Familiar Objects in Common and Uncommon Settings - [[2110.03804](https://arxiv.org/abs/2110.03804)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03804.md)]. - Hyperparameter Tuning with Renyi Differential Privacy - [[2110.03620](https://arxiv.org/abs/2110.03620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03620.md)]. - Adversarial Retriever-Ranker for dense text retrieval - [[2110.03611](https://arxiv.org/abs/2110.03611)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03611.md)]. - RAR: Region-Aware Point Cloud Registration - [[2110.03544](https://arxiv.org/abs/2110.03544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03544.md)]. - Cartoon Explanations of Image Classifiers - [[2110.03485](https://arxiv.org/abs/2110.03485)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03485.md)]. - Situated Dialogue Learning through Procedural Environment Generation - [[2110.03262](https://arxiv.org/abs/2110.03262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03262.md)]. - On the Optimal Memorization Power of ReLU Neural Networks - [[2110.03187](https://arxiv.org/abs/2110.03187)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03187.md)]. - Attention is All You Need? 
Good Embeddings with Statistics are enough: Large Scale Audio Understanding without Transformers/ Convolutions/ BERTs/ Mixers/ Attention/ RNNs or .... - [[2110.03183](https://arxiv.org/abs/2110.03183)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.03183.md)]. - Generative Modeling with Optimal Transport Maps - [[2110.02999](https://arxiv.org/abs/2110.02999)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02999.md)]. - Federated Learning via Plurality Vote - [[2110.02998](https://arxiv.org/abs/2110.02998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02998.md)]. - Nested Policy Reinforcement Learning - [[2110.02879](https://arxiv.org/abs/2110.02879)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02879.md)]. - How BPE Affects Memorization in Transformers - [[2110.02782](https://arxiv.org/abs/2110.02782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02782.md)]. - On The Transferability of Deep-Q Networks - [[2110.02639](https://arxiv.org/abs/2110.02639)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02639.md)]. - Test-time Batch Statistics Calibration for Covariate Shift - [[2110.04065](https://arxiv.org/abs/2110.04065)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.04065.md)]. - Geometric Algebra Attention Networks for Small Point Clouds - [[2110.02393](https://arxiv.org/abs/2110.02393)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02393.md)]. - EntQA: Entity Linking as Question Answering - [[2110.02369](https://arxiv.org/abs/2110.02369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02369.md)]. - Autoregressive Diffusion Models - [[2110.02037](https://arxiv.org/abs/2110.02037)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.02037.md)]. 
- AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts - [[2110.01691](https://arxiv.org/abs/2110.01691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.01691.md)].
- Generalized Kernel Thinning - [[2110.01593](https://arxiv.org/abs/2110.01593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.01593.md)].
- One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency - [[2110.05929](https://arxiv.org/abs/2110.05929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.05929.md)].
- Batch size-invariance for policy optimization - [[2110.00641](https://arxiv.org/abs/2110.00641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.00641.md)].
- Dynamics of targeted ransomware negotiation - [[2110.0362](https://arxiv.org/abs/2110.0362)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.0362.md)].
- Vision-Only Robot Navigation in a Neural Radiance World - [[2110.00168](https://arxiv.org/abs/2110.00168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.00168.md)].

### September 2021

- Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System - [[2109.14739](https://arxiv.org/abs/2109.14739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.14739.md)].
- Stochastic Training is Not Necessary for Generalization - [[2109.14119](https://arxiv.org/abs/2109.14119)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.14119.md)].
- IGLU: Efficient GCN Training via Lazy Updates - [[2109.13995](https://arxiv.org/abs/2109.13995)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.13995.md)].
- Unsolved Problems in ML Safety - [[2109.13916](https://arxiv.org/abs/2109.13916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.13916.md)].
- OpenViDial 2.0: A Larger-Scale, Open-Domain Dialogue Generation Dataset with Visual Contexts - [[2109.12761](https://arxiv.org/abs/2109.12761)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.12761.md)].
- Learning Neural Templates for Recommender Dialogue System - [[2109.12302](https://arxiv.org/abs/2109.12302)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.12302.md)].
- CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models - [[2109.11797](https://arxiv.org/abs/2109.11797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.11797.md)].
- A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification - [[2109.11301](https://arxiv.org/abs/2109.11301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.11301.md)].
- Recursively Summarizing Books with Human Feedback - [[2109.10862](https://arxiv.org/abs/2109.10862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.10862.md)].
- Scalable and Efficient MoE Training for Multitask Multilingual Models - [[2109.10465](https://arxiv.org/abs/2109.10465)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.10465.md)].
- SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval - [[2109.10086](https://arxiv.org/abs/2109.10086)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.10086.md)].
- Neural networks with trainable matrix activation functions - [[2109.09948](https://arxiv.org/abs/2109.09948)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.09948.md)].
- PLATO-XL: Exploring the Large-scale Pre-training of Dialogue Generation - [[2109.09519](https://arxiv.org/abs/2109.09519)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.09519.md)].
- DuRecDial 2.0: A Bilingual Parallel Corpus for Conversational Recommendation - [[2109.08877](https://arxiv.org/abs/2109.08877)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.08877.md)].
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes - [[2109.08828](https://arxiv.org/abs/2109.08828)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.08828.md)].
- Primer: Searching for Efficient Transformers for Language Modeling - [[2109.08668](https://arxiv.org/abs/2109.08668)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.08668.md)].
- Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration - [[2109.08603](https://arxiv.org/abs/2109.08603)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.08603.md)].
- Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision - [[2109.08203](https://arxiv.org/abs/2109.08203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.08203.md)].
- Scaling Laws for Neural Machine Translation - [[2109.07740](https://arxiv.org/abs/2109.07740)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.07740.md)].
- Transferable Persona-Grounded Dialogues via Grounded Minimal Edits - [[2109.07713](https://arxiv.org/abs/2109.07713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.07713.md)].
- Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG - [[2109.07043](https://arxiv.org/abs/2109.07043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.07043.md)].
- Benchmarking the Spectrum of Agent Capabilities - [[2109.06780](https://arxiv.org/abs/2109.06780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.06780.md)].
- Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation - [[2109.06513](https://arxiv.org/abs/2109.06513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.06513.md)].
- Space Time Recurrent Memory Network - [[2109.06474](https://arxiv.org/abs/2109.06474)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.06474.md)].
- Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation - [[2109.06379](https://arxiv.org/abs/2109.06379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.06379.md)].
- WeakSTIL: Weak whole-slide image level stromal tumor infiltrating lymphocyte scores are all you need - [[2109.05892](https://arxiv.org/abs/2109.05892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.05892.md)].
- CEM: Commonsense-aware Empathetic Response Generation - [[2109.05739](https://arxiv.org/abs/2109.05739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.05739.md)].
- Bootstrapped Meta-Learning - [[2109.04504](https://arxiv.org/abs/2109.04504)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.04504.md)].
- A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded Dialogue Generation - [[2109.04096](https://arxiv.org/abs/2109.04096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.04096.md)].
- Thinking Clearly, Talking Fast: Concept-Guided Non-Autoregressive Generation for Open-Domain Dialogue Systems - [[2109.04084](https://arxiv.org/abs/2109.04084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.04084.md)].
- Local Augmentation for Graph Neural Networks - [[2109.03856](https://arxiv.org/abs/2109.03856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.03856.md)].
- Sqrt(d) Dimension Dependence of Langevin Monte Carlo - [[2109.03839](https://arxiv.org/abs/2109.03839)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.03839.md)].
- Mask is All You Need: Rethinking Mask R-CNN for Dense and Arbitrary-Shaped Scene Text Detection - [[2109.03426](https://arxiv.org/abs/2109.03426)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.03426.md)].
- Learning Neural Causal Models with Active Interventions - [[2109.02429](https://arxiv.org/abs/2109.02429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.02429.md)].
- Learning to Prompt for Vision-Language Models - [[2109.01134](https://arxiv.org/abs/2109.01134)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.01134.md)].
- The fractional chromatic number of double cones over graphs - [[2109.0774](https://arxiv.org/abs/2109.0774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.0774.md)].
- Regional Adversarial Training for Better Robust Generalization - [[2109.0678](https://arxiv.org/abs/2109.0678)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.0678.md)].
- Searching for Efficient Multi-Stage Vision Transformers - [[2109.00642](https://arxiv.org/abs/2109.00642)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.00642.md)].
- Boosting Search Engines with Interactive Agents - [[2109.00527](https://arxiv.org/abs/2109.00527)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2109.00527.md)].

### August 2021

- Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback - [[2108.13454](https://arxiv.org/abs/2108.13454)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.13454.md)].
- Neural HMMs are all you need (for high-quality attention-free TTS) - [[2108.13320](https://arxiv.org/abs/2108.13320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.13320.md)].
- Subjective Learning for Open-Ended Data - [[2108.12113](https://arxiv.org/abs/2108.12113)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.12113.md)].
- Photos Are All You Need for Reciprocal Recommendation in Online Dating - [[2108.11714](https://arxiv.org/abs/2108.11714)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.11714.md)].
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision - [[2108.10904](https://arxiv.org/abs/2108.10904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.10904.md)].
- Contrastive Learning of User Behavior Sequence for Context-Aware Document Ranking - [[2108.10510](https://arxiv.org/abs/2108.10510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.10510.md)].
- One TTS Alignment To Rule Them All - [[2108.10447](https://arxiv.org/abs/2108.10447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.10447.md)].
- Dynamic processes in superconductors and the laws of thermodynamics - [[2110.0386](https://arxiv.org/abs/2110.0386)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2110.0386.md)].
- Anarchic Federated Learning - [[2108.09875](https://arxiv.org/abs/2108.09875)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.09875.md)].
- Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need - [[2108.09346](https://arxiv.org/abs/2108.09346)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.09346.md)].
- Fastformer: Additive Attention Can Be All You Need - [[2108.09084](https://arxiv.org/abs/2108.09084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.09084.md)].
- A good body is all you need: avoiding catastrophic interference via agent architecture search - [[2108.08398](https://arxiv.org/abs/2108.08398)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.08398.md)].
- On the Opportunities and Risks of Foundation Models - [[2108.07258](https://arxiv.org/abs/2108.07258)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.07258.md)].
- MMChat: Multi-Modal Chat Dataset on Social Media - [[2108.07154](https://arxiv.org/abs/2108.07154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.07154.md)].
- FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning - [[2108.06098](https://arxiv.org/abs/2108.06098)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.06098.md)].
- PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval - [[2108.06027](https://arxiv.org/abs/2108.06027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.06027.md)].
- Logit Attenuating Weight Normalization - [[2108.05839](https://arxiv.org/abs/2108.05839)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.05839.md)].
- Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval - [[2108.05540](https://arxiv.org/abs/2108.05540)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.05540.md)].
- Are Neural Ranking Models Robust? - [[2108.05018](https://arxiv.org/abs/2108.05018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.05018.md)].
- Rethinking Architecture Selection in Differentiable NAS - [[2108.04392](https://arxiv.org/abs/2108.04392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.04392.md)].
- Pose is all you need: The pose only group activity recognition system (POGARS) - [[2108.04186](https://arxiv.org/abs/2108.04186)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.04186.md)].
- BIGRoC: Boosting Image Generation via a Robust Classifier - [[2108.03702](https://arxiv.org/abs/2108.03702)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.03702.md)].
- Source-Free Domain Adaptation for Image Segmentation - [[2108.03152](https://arxiv.org/abs/2108.03152)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.03152.md)].
- Improving Contrastive Learning by Visualizing Feature Transformation - [[2108.02982](https://arxiv.org/abs/2108.02982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.02982.md)].
- Internal Video Inpainting by Implicit Long-range Propagation - [[2108.01912](https://arxiv.org/abs/2108.01912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01912.md)].
- Model-Based Opponent Modeling - [[2108.01843](https://arxiv.org/abs/2108.01843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01843.md)].
- Offline Decentralized Multi-Agent Reinforcement Learning - [[2108.01832](https://arxiv.org/abs/2108.01832)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01832.md)].
- SphereFace2: Binary Classification is All You Need for Deep Face Recognition - [[2108.01513](https://arxiv.org/abs/2108.01513)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01513.md)].
- How to Evaluate Your Dialogue Models: A Review of Approaches - [[2108.01369](https://arxiv.org/abs/2108.01369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01369.md)].
- SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations - [[2108.01073](https://arxiv.org/abs/2108.01073)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.01073.md)].
- Evaluating Deep Graph Neural Networks - [[2108.00955](https://arxiv.org/abs/2108.00955)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.00955.md)].
- Jointly Optimizing Query Encoder and Product Quantization to Improve Retrieval Performance - [[2108.00644](https://arxiv.org/abs/2108.00644)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2108.00644.md)].

### July 2021

- Imbalanced Adversarial Training with Reweighting - [[2107.13639](https://arxiv.org/abs/2107.13639)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.13639.md)].
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing - [[2107.13586](https://arxiv.org/abs/2107.13586)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.13586.md)].
- Functorial String Diagrams for Reverse-Mode Automatic Differentiation - [[2107.13433](https://arxiv.org/abs/2107.13433)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.13433.md)].
- Unsupervised Learning of Neurosymbolic Encoders - [[2107.13132](https://arxiv.org/abs/2107.13132)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.13132.md)].
- Joint Shapley values: a measure of joint feature importance - [[2107.11357](https://arxiv.org/abs/2107.11357)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.11357.md)].
- Few Shots Are All You Need: A Progressive Few Shot Learning Approach for Low Resource Handwritten Text Recognition - [[2107.10064](https://arxiv.org/abs/2107.10064)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.10064.md)].
- Conditional GANs with Auxiliary Discriminative Classifier - [[2107.10060](https://arxiv.org/abs/2107.10060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.10060.md)].
- Guided Generation of Cause and Effect - [[2107.09846](https://arxiv.org/abs/2107.09846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.09846.md)].
- QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries - [[2107.09609](https://arxiv.org/abs/2107.09609)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.09609.md)].
- Structured Stochastic Gradient MCMC - [[2107.09028](https://arxiv.org/abs/2107.09028)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.09028.md)].
- Is attention to bounding boxes all you need for pedestrian action prediction? - [[2107.08031](https://arxiv.org/abs/2107.08031)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.08031.md)].
- DNN is not all you need: Parallelizing Non-Neural ML Algorithms on Ultra-Low-Power IoT Processors - [[2107.09448](https://arxiv.org/abs/2107.09448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.09448.md)].
- Align before Fuse: Vision and Language Representation Learning with Momentum Distillation - [[2107.07651](https://arxiv.org/abs/2107.07651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.07651.md)].
- FastSHAP: Real-Time Shapley Value Estimation - [[2107.07436](https://arxiv.org/abs/2107.07436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.07436.md)].
- How Much Can CLIP Benefit Vision-and-Language Tasks? - [[2107.06383](https://arxiv.org/abs/2107.06383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.06383.md)].
- Per-Pixel Classification is Not All You Need for Semantic Segmentation - [[2107.06278](https://arxiv.org/abs/2107.06278)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.06278.md)].
- A Configurable Multilingual Model is All You Need to Recognize All Languages - [[2107.05876](https://arxiv.org/abs/2107.05876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.05876.md)].
- SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking - [[2107.05720](https://arxiv.org/abs/2107.05720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.05720.md)].
- Explore and Control with Adversarial Surprise - [[2107.07394](https://arxiv.org/abs/2107.07394)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.07394.md)].
- ViTGAN: Training GANs with Vision Transformers - [[2107.04589](https://arxiv.org/abs/2107.04589)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.04589.md)].
- Hoechst Is All You Need: Lymphocyte Classification with Deep Learning - [[2107.04388](https://arxiv.org/abs/2107.04388)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.04388.md)].
- Towards Robust Active Feature Acquisition - [[2107.04163](https://arxiv.org/abs/2107.04163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.04163.md)].
- Evaluating Large Language Models Trained on Code - [[2107.03374](https://arxiv.org/abs/2107.03374)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.03374.md)].
- Understanding Intrinsic Robustness Using Label Uncertainty - [[2107.03250](https://arxiv.org/abs/2107.03250)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.03250.md)].
- Neural Contextual Bandits without Regret - [[2107.03144](https://arxiv.org/abs/2107.03144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.03144.md)].
- Structured Denoising Diffusion Models in Discrete State-Spaces - [[2107.03006](https://arxiv.org/abs/2107.03006)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.03006.md)].
- Depth-supervised NeRF: Fewer Views and Faster Training for Free - [[2107.02791](https://arxiv.org/abs/2107.02791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.02791.md)].
- VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer - [[2107.02681](https://arxiv.org/abs/2107.02681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.02681.md)].
- Rethinking Positional Encoding - [[2107.02561](https://arxiv.org/abs/2107.02561)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.02561.md)].
- When and How to Fool Explainable Models (and Humans) with Adversarial Examples - [[2107.01943](https://arxiv.org/abs/2107.01943)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.01943.md)].
- Mutation is all you need - [[2107.07343](https://arxiv.org/abs/2107.07343)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.07343.md)].
- Scale Mixtures of Neural Network Gaussian Processes - [[2107.01408](https://arxiv.org/abs/2107.01408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.01408.md)].
- On the Practicality of Deterministic Epistemic Uncertainty - [[2107.00649](https://arxiv.org/abs/2107.00649)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.00649.md)].
- Exact verification of the strong BSD conjecture for some absolutely simple abelian surfaces - [[2107.0325](https://arxiv.org/abs/2107.0325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2107.0325.md)].

### June 2021

- Automatically Select Emotion for Response via Personality-affected Emotion Transition - [[2106.15846](https://arxiv.org/abs/2106.15846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.15846.md)].
- Local Reweighting for Adversarial Training - [[2106.15776](https://arxiv.org/abs/2106.15776)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.15776.md)].
- Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation - [[2106.15078](https://arxiv.org/abs/2106.15078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.15078.md)].
- Multimodal Few-Shot Learning with Frozen Language Models - [[2106.13884](https://arxiv.org/abs/2106.13884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.13884.md)].
- Animatable Neural Radiance Fields from Monocular RGB Videos - [[2106.13629](https://arxiv.org/abs/2106.13629)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.13629.md)].
- DCoM: A Deep Column Mapper for Semantic Data Type Detection - [[2106.12871](https://arxiv.org/abs/2106.12871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12871.md)].
- All You Need is a Second Look: Towards Arbitrary-Shaped Text Detection - [[2106.12720](https://arxiv.org/abs/2106.12720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12720.md)].
- IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers - [[2106.12620](https://arxiv.org/abs/2106.12620)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12620.md)].
- Learning Multimodal VAEs through Mutual Supervision - [[2106.12570](https://arxiv.org/abs/2106.12570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12570.md)].
- Sampling with Mirrored Stein Operators - [[2106.12506](https://arxiv.org/abs/2106.12506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12506.md)].
- Adapting Off-the-Shelf Source Segmenter for Target Medical Image Segmentation - [[2106.12497](https://arxiv.org/abs/2106.12497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12497.md)].
- CharacterChat: Supporting the Creation of Fictional Characters through Conversation and Progressive Manifestation with a Chatbot - [[2106.12314](https://arxiv.org/abs/2106.12314)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12314.md)].
- Secure Domain Adaptation with Multiple Sources - [[2106.12124](https://arxiv.org/abs/2106.12124)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12124.md)].
- Volume Rendering of Neural Implicit Surfaces - [[2106.12052](https://arxiv.org/abs/2106.12052)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.12052.md)].
- Policy Smoothing for Provably Robust Reinforcement Learning - [[2106.11420](https://arxiv.org/abs/2106.11420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11420.md)].
- Towards Long-Form Video Understanding - [[2106.11310](https://arxiv.org/abs/2106.11310)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11310.md)].
- Boundary Graph Neural Networks for 3D Simulations - [[2106.11299](https://arxiv.org/abs/2106.11299)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11299.md)].
- Pseudo-Relevance Feedback for Multiple Representation Dense Retrieval - [[2106.11251](https://arxiv.org/abs/2106.11251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11251.md)].
- CLIP2Video: Mastering Video-Text Retrieval via Image CLIP - [[2106.11097](https://arxiv.org/abs/2106.11097)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11097.md)].
- Analytically Tractable Bayesian Deep Q-Learning - [[2106.11086](https://arxiv.org/abs/2106.11086)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.11086.md)].
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction - [[2106.10689](https://arxiv.org/abs/2106.10689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.10689.md)].
- Shuffle Private Stochastic Convex Optimization - [[2106.09805](https://arxiv.org/abs/2106.09805)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09805.md)].
- On Invariance Penalties for Risk Minimization - [[2106.09777](https://arxiv.org/abs/2106.09777)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09777.md)].
- Visual Correspondence Hallucination - [[2106.09711](https://arxiv.org/abs/2106.09711)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09711.md)].
- Poisoning and Backdooring Contrastive Learning - [[2106.09667](https://arxiv.org/abs/2106.09667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09667.md)].
- Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation - [[2106.09614](https://arxiv.org/abs/2106.09614)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09614.md)].
- Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation - [[2106.08942](https://arxiv.org/abs/2106.08942)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.08942.md)].
- Transductive Few-Shot Learning: Clustering is All You Need? - [[2106.09516](https://arxiv.org/abs/2106.09516)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.09516.md)].
- Unsupervised Enrichment of Persona-grounded Dialog with Background Stories - [[2106.08364](https://arxiv.org/abs/2106.08364)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.08364.md)].
- Query Embedding on Hyper-relational Knowledge Graphs - [[2106.08166](https://arxiv.org/abs/2106.08166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.08166.md)].
- UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation - [[2106.07889](https://arxiv.org/abs/2106.07889)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07889.md)].
- HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units - [[2106.07447](https://arxiv.org/abs/2106.07447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07447.md)].
- Constraining Linear-chain CRFs to Regular Languages - [[2106.07306](https://arxiv.org/abs/2106.07306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07306.md)].
- Pre-Trained Models: Past, Present and Future - [[2106.07139](https://arxiv.org/abs/2106.07139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07139.md)].
- Category Theory in Machine Learning - [[2106.07032](https://arxiv.org/abs/2106.07032)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07032.md)].
- Inverting Adversarially Robust Networks for Image Synthesis - [[2106.06927](https://arxiv.org/abs/2106.06927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.06927.md)].
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks - [[2106.06823](https://arxiv.org/abs/2106.06823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.06823.md)].
- Learning to Pool in Graph Neural Networks for Extrapolation - [[2106.06210](https://arxiv.org/abs/2106.06210)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.06210.md)].
- Is Homophily a Necessity for Graph Neural Networks? - [[2106.06134](https://arxiv.org/abs/2106.06134)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.06134.md)].
- Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation - [[2106.06125](https://arxiv.org/abs/2106.06125)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.06125.md)].
- Fair Normalizing Flows - [[2106.05937](https://arxiv.org/abs/2106.05937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05937.md)].
- A Neural Tangent Kernel Perspective of GANs - [[2106.05566](https://arxiv.org/abs/2106.05566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05566.md)].
- Knowledge distillation: A good teacher is patient and consistent - [[2106.05237](https://arxiv.org/abs/2106.05237)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05237.md)].
- Do Transformers Really Perform Bad for Graph Representation? - [[2106.05234](https://arxiv.org/abs/2106.05234)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05234.md)].
- DIGRAC: Digraph Clustering Based on Flow Imbalance - [[2106.05194](https://arxiv.org/abs/2106.05194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05194.md)].
- Pretrained Encoders are All You Need - [[2106.05139](https://arxiv.org/abs/2106.05139)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.05139.md)].
- It Takes Two to Tango: Mixup for Deep Metric Learning - [[2106.04990](https://arxiv.org/abs/2106.04990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.04990.md)].
- Taxonomy of Machine Learning Safety: A Survey and Primer - [[2106.04823](https://arxiv.org/abs/2106.04823)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.04823.md)].
- Mean-Shifted Contrastive Loss for Anomaly Detection - [[2106.03844](https://arxiv.org/abs/2106.03844)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.03844.md)].
- RegMix: Data Mixing Augmentation for Regression - [[2106.03374](https://arxiv.org/abs/2106.03374)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.03374.md)].
- Tabular Data: Deep Learning is Not All You Need - [[2106.03253](https://arxiv.org/abs/2106.03253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.03253.md)].
- Self-Supervision is All You Need for Solving Rubik's Cube - [[2106.03157](https://arxiv.org/abs/2106.03157)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.03157.md)].
- Model Zoo: A Growing "Brain" That Learns Continually - [[2106.03027](https://arxiv.org/abs/2106.03027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.03027.md)].
- Context-Aware Sparse Deep Coordination Graphs - [[2106.02886](https://arxiv.org/abs/2106.02886)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02886.md)].
- Learning Curves for SGD on Structured Features - [[2106.02713](https://arxiv.org/abs/2106.02713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02713.md)].
- Meta-Learning with Fewer Tasks through Task Interpolation - [[2106.02695](https://arxiv.org/abs/2106.02695)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02695.md)].
- Churn Reduction via Distillation - [[2106.02654](https://arxiv.org/abs/2106.02654)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02654.md)].
- MERLOT: Multimodal Neural Script Knowledge Models - [[2106.02636](https://arxiv.org/abs/2106.02636)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02636.md)].
- Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances - [[2106.02227](https://arxiv.org/abs/2106.02227)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.02227.md)].
- Three Sentences Are All You Need: Local Path Enhanced Document Relation Extraction - [[2106.01793](https://arxiv.org/abs/2106.01793)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01793.md)].
- Convergent Graph Solvers - [[2106.01680](https://arxiv.org/abs/2106.01680)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01680.md)].
- Self-Guided Contrastive Learning for BERT Sentence Representations - [[2106.07345](https://arxiv.org/abs/2106.07345)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.07345.md)].
- Steerable 3D Spherical Neurons - [[2106.13863](https://arxiv.org/abs/2106.13863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.13863.md)].
- Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize - [[2106.01257](https://arxiv.org/abs/2106.01257)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01257.md)].
- Evidential Turing Processes - [[2106.01216](https://arxiv.org/abs/2106.01216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01216.md)].
- Towards Emotional Support Dialog Systems - [[2106.01144](https://arxiv.org/abs/2106.01144)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01144.md)].
- Transition-Based Constrained DFT for the Robust and Reliable Treatment of Excitations in Supramolecular Systems - [[2106.01142](https://arxiv.org/abs/2106.01142)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.01142.md)].
- Multiresolution Equivariant Graph Variational Autoencoder - [[2106.00967](https://arxiv.org/abs/2106.00967)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00967.md)].
- RevCore: Review-augmented Conversational Recommendation - [[2106.00957](https://arxiv.org/abs/2106.00957)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00957.md)].
- DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues - [[2106.00920](https://arxiv.org/abs/2106.00920)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00920.md)].
- Efficient Passage Retrieval with Hashing for Open-domain Question Answering - [[2106.00882](https://arxiv.org/abs/2106.00882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00882.md)].
- Weighting vectors for machine learning: numerical harmonic analysis applied to boundary detection - [[2106.00827](https://arxiv.org/abs/2106.00827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00827.md)].
- DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation - [[2106.00791](https://arxiv.org/abs/2106.00791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00791.md)].
- Towards Quantifiable Dialogue Coherence Evaluation - [[2106.00507](https://arxiv.org/abs/2106.00507)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00507.md)].
- Concurrent Adversarial Learning for Large-Batch Training - [[2106.00221](https://arxiv.org/abs/2106.00221)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00221.md)].
- Rethinking Pseudo Labels for Semi-Supervised Object Detection - [[2106.00168](https://arxiv.org/abs/2106.00168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2106.00168.md)].

### May 2021

- Efficient and Modular Implicit Differentiation - [[2105.15183](https://arxiv.org/abs/2105.15183)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.15183.md)].
- Memory-Efficient Differentiable Transformer Architecture Search - [[2105.14669](https://arxiv.org/abs/2105.14669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.14669.md)].
- How Attentive are Graph Attention Networks? - [[2105.14491](https://arxiv.org/abs/2105.14491)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.14491.md)].
- An Attention Free Transformer - [[2105.14103](https://arxiv.org/abs/2105.14103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.14103.md)].
- Gotta Go Fast When Generating Data with Score-Based Models - [[2105.14080](https://arxiv.org/abs/2105.14080)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.14080.md)].
- Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions - [[2105.13913](https://arxiv.org/abs/2105.13913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.13913.md)].
- OTTers: One-turn Topic Transitions for Open-Domain Dialogue - [[2105.13710](https://arxiv.org/abs/2105.13710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.13710.md)].
- Data Augmentation for Text Generation Without Any Augmented Data - [[2105.13650](https://arxiv.org/abs/2105.13650)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.13650.md)].
- Pre-trained Language Model based Ranking in Baidu Search - [[2105.11108](https://arxiv.org/abs/2105.11108)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.11108.md)].
- Unsupervised Speech Recognition - [[2105.11084](https://arxiv.org/abs/2105.11084)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.11084.md)].
- DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization - [[2105.09821](https://arxiv.org/abs/2105.09821)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.09821.md)].
- Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking - [[2105.09816](https://arxiv.org/abs/2105.09816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.09816.md)].
- Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning - [[2105.09710](https://arxiv.org/abs/2105.09710)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.09710.md)].
- Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms - [[2105.08791](https://arxiv.org/abs/2105.08791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.08791.md)].
- KECRS: Towards Knowledge-Enriched Conversational Recommendation System - [[2105.08261](https://arxiv.org/abs/2105.08261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.08261.md)].
- ItôTTS and ItôWave: Linear Stochastic Differential Equation Is All You Need For Audio Generation - [[2105.07583](https://arxiv.org/abs/2105.07583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.07583.md)].
- RetGen: A Joint framework for Retrieval and Grounded Text Generation Modeling - [[2105.06597](https://arxiv.org/abs/2105.06597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.06597.md)].
- HyKnow: End-to-End Task-Oriented Dialog Modeling with Hybrid Knowledge Management - [[2105.06041](https://arxiv.org/abs/2105.06041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.06041.md)].
- Looking at CTR Prediction Again: Is Attention All You Need? - [[2105.05563](https://arxiv.org/abs/2105.05563)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.05563.md)].
- The DEVIL is in the Details: A Diagnostic Evaluation Benchmark for Video Inpainting - [[2105.05332](https://arxiv.org/abs/2105.05332)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.05332.md)].
- Diffusion Models Beat GANs on Image Synthesis - [[2105.05233](https://arxiv.org/abs/2105.05233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.05233.md)].
- VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning - [[2105.04906](https://arxiv.org/abs/2105.04906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.04906.md)].
- EL-Attention: Memory Efficient Lossless Attention for Generation - [[2105.04779](https://arxiv.org/abs/2105.04779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.04779.md)].
- Not All Relevance Scores are Equal: Efficient Uncertainty and Calibration Modeling for Deep Retrieval Models - [[2105.04651](https://arxiv.org/abs/2105.04651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.04651.md)].
- Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey - [[2105.04387](https://arxiv.org/abs/2105.04387)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.04387.md)].
- Joint Learning of Deep Retrieval Model and Product Quantization based Embedding Index - [[2105.03933](https://arxiv.org/abs/2105.03933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.03933.md)].
- Simulating User Satisfaction for the Evaluation of Task-oriented Dialogue Systems - [[2105.03748](https://arxiv.org/abs/2105.03748)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.03748.md)].
- Improving Document Representations by Generating Pseudo Query Embeddings for Dense Retrieval - [[2105.03599](https://arxiv.org/abs/2105.03599)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.03599.md)].
- A Survey of Data Augmentation Approaches for NLP - [[2105.03075](https://arxiv.org/abs/2105.03075)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.03075.md)].
- Rethinking Search: Making Domain Experts out of Dilettantes - [[2105.02274](https://arxiv.org/abs/2105.02274)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.02274.md)].
- PD-GAN: Probabilistic Diverse GAN for Image Inpainting - [[2105.02201](https://arxiv.org/abs/2105.02201)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.02201.md)].
- Unsteady and inertial dynamics of an active particle in a fluid - [[2105.01408](https://arxiv.org/abs/2105.01408)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.01408.md)].
- Unsupervised Document Expansion for Information Retrieval with Stochastic Text Generation - [[2105.00666](https://arxiv.org/abs/2105.00666)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2105.00666.md)].

### April 2021

- Emerging Properties in Self-Supervised Vision Transformers - [[2104.14294](https://arxiv.org/abs/2104.14294)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.14294.md)].
- Open-vocabulary Object Detection via Vision and Language Knowledge Distillation - [[2104.13921](https://arxiv.org/abs/2104.13921)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.13921.md)].
- If your data distribution shifts, use self-learning - [[2104.12928](https://arxiv.org/abs/2104.12928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12928.md)].
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos - [[2104.12671](https://arxiv.org/abs/2104.12671)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12671.md)].
- Easy and Efficient Transformer : Scalable Inference Solution For large NLP model - [[2104.12470](https://arxiv.org/abs/2104.12470)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12470.md)].
- GPT2MVS: Generative Pre-trained Transformer-2 for Multi-modal Video Summarization - [[2104.12465](https://arxiv.org/abs/2104.12465)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12465.md)].
- PanGu-$α$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation - [[2104.12369](https://arxiv.org/abs/2104.12369)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12369.md)].
- Learning Passage Impacts for Inverted Indexes - [[2104.12016](https://arxiv.org/abs/2104.12016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.12016.md)].
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction - [[2104.10078](https://arxiv.org/abs/2104.10078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.10078.md)].
- Gradient Matching for Domain Generalization - [[2104.09937](https://arxiv.org/abs/2104.09937)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.09937.md)].
- B-PROP: Bootstrapped Pre-training with Representative Words Prediction for Ad-hoc Retrieval - [[2104.09791](https://arxiv.org/abs/2104.09791)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.09791.md)].
- Image Inpainting with External-internal Learning and Monochromic Bottleneck - [[2104.09068](https://arxiv.org/abs/2104.09068)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.09068.md)].
- Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation - [[2104.08771](https://arxiv.org/abs/2104.08771)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08771.md)].
- The Power of Scale for Parameter-Efficient Prompt Tuning - [[2104.08691](https://arxiv.org/abs/2104.08691)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08691.md)].
- Explaining Answers with Entailment Trees - [[2104.08661](https://arxiv.org/abs/2104.08661)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08661.md)].
- Condenser: a Pre-training Architecture for Dense Retrieval - [[2104.08253](https://arxiv.org/abs/2104.08253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08253.md)].
- $Q^{2}$: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering - [[2104.08202](https://arxiv.org/abs/2104.08202)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08202.md)].
- Optimizing Dense Retrieval Model Training with Hard Negatives - [[2104.08051](https://arxiv.org/abs/2104.08051)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.08051.md)].
- Matching-oriented Product Quantization For Ad-hoc Retrieval - [[2104.07858](https://arxiv.org/abs/2104.07858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.07858.md)].
- Ultra-High Dimensional Sparse Representations with Binarization for Efficient Text Retrieval - [[2104.07198](https://arxiv.org/abs/2104.07198)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.07198.md)].
- COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List - [[2104.07186](https://arxiv.org/abs/2104.07186)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.07186.md)].
- Sparse Attention with Linear Units - [[2104.07012](https://arxiv.org/abs/2104.07012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.07012.md)].
- Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling - [[2104.06967](https://arxiv.org/abs/2104.06967)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.06967.md)].
- Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches - [[2104.06917](https://arxiv.org/abs/2104.06917)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.06917.md)].
- Learning How to Ask: Querying LMs with Mixtures of Soft Prompts - [[2104.06599](https://arxiv.org/abs/2104.06599)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.06599.md)].
- All you need are a few pixels: semantic segmentation with PixelPick - [[2104.06394](https://arxiv.org/abs/2104.06394)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.06394.md)].
- Spatiotemporal Entropy Model is All You Need for Learned Video Compression - [[2104.06083](https://arxiv.org/abs/2104.06083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.06083.md)].
- Not All Attention Is All You Need - [[2104.04692](https://arxiv.org/abs/2104.04692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.04692.md)].
- Progressive Temporal Feature Alignment Network for Video Inpainting - [[2104.03507](https://arxiv.org/abs/2104.03507)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.03507.md)].
- SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model - [[2104.05557](https://arxiv.org/abs/2104.05557)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.05557.md)].
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training - [[2104.01027](https://arxiv.org/abs/2104.01027)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.01027.md)].
- Visual Semantic Role Labeling for Video Understanding - [[2104.00990](https://arxiv.org/abs/2104.00990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00990.md)].
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval - [[2104.00650](https://arxiv.org/abs/2104.00650)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00650.md)].
- NeRF-VAE: A Geometry Aware 3D Scene Generative Model - [[2104.00587](https://arxiv.org/abs/2104.00587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00587.md)].
- Improved Image Generation via Sparse Modeling - [[2104.00464](https://arxiv.org/abs/2104.00464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00464.md)].
- Jigsaw Clustering for Unsupervised Visual Representation Learning - [[2104.00323](https://arxiv.org/abs/2104.00323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00323.md)].
- Domain Invariant Adversarial Learning - [[2104.00322](https://arxiv.org/abs/2104.00322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2104.00322.md)].

### March 2021

- CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields - [[2103.17269](https://arxiv.org/abs/2103.17269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.17269.md)].
- Contrastive Embedding for Generalized Zero-Shot Learning - [[2103.16173](https://arxiv.org/abs/2103.16173)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.16173.md)].
- AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning - [[2103.16002](https://arxiv.org/abs/2103.16002)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.16002.md)].
- TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations - [[2103.15982](https://arxiv.org/abs/2103.15982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.15982.md)].
- Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers - [[2103.15679](https://arxiv.org/abs/2103.15679)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.15679.md)].
- GNeRF: GAN-based Neural Radiance Field without Posed Camera - [[2103.15606](https://arxiv.org/abs/2103.15606)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.15606.md)].
- Efficient Explanations from Empirical Explainers - [[2103.15429](https://arxiv.org/abs/2103.15429)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.15429.md)].
- Categorical Representation Learning: Morphism is All You Need - [[2103.14770](https://arxiv.org/abs/2103.14770)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.14770.md)].
- More Photos are All You Need: Semi-Supervised Learning for Fine-Grained Sketch Based Image Retrieval - [[2103.13990](https://arxiv.org/abs/2103.13990)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.13990.md)].
- KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs - [[2103.13744](https://arxiv.org/abs/2103.13744)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.13744.md)].
- DNN Quantization with Attention - [[2103.13322](https://arxiv.org/abs/2103.13322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.13322.md)].
- FastMoE: A Fast Mixture-of-Expert Training System - [[2103.13262](https://arxiv.org/abs/2103.13262)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.13262.md)].
- Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges - [[2103.11251](https://arxiv.org/abs/2103.11251)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.11251.md)].
- Concentric Spherical GNN for 3D Representation Learning - [[2103.10484](https://arxiv.org/abs/2103.10484)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.10484.md)].
- FastNeRF: High-Fidelity Neural Rendering at 200FPS - [[2103.10380](https://arxiv.org/abs/2103.10380)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.10380.md)].
- GLM: General Language Model Pretraining with Autoregressive Blank Infilling - [[2103.10360](https://arxiv.org/abs/2103.10360)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.10360.md)].
- Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE - [[2103.10022](https://arxiv.org/abs/2103.10022)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.10022.md)].
- ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer - [[2103.09548](https://arxiv.org/abs/2103.09548)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.09548.md)].
- BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation - [[2103.06695](https://arxiv.org/abs/2103.06695)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.06695.md)].
- Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems - [[2103.08313](https://arxiv.org/abs/2103.08313)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.08313.md)].
- GAN Vocoder: Multi-Resolution Discriminator Is All You Need - [[2103.05236](https://arxiv.org/abs/2103.05236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.05236.md)].
- Semantic Models for the First-stage Retrieval: A Comprehensive Review - [[2103.04831](https://arxiv.org/abs/2103.04831)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.04831.md)].
- Can Pretext-Based Self-Supervised Learning Be Boosted by Downstream Data? A Theoretical Analysis - [[2103.03568](https://arxiv.org/abs/2103.03568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.03568.md)].
- Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth - [[2103.03404](https://arxiv.org/abs/2103.03404)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.03404.md)].
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction - [[2103.03230](https://arxiv.org/abs/2103.03230)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.03230.md)].
- Online Adversarial Attacks - [[2103.02014](https://arxiv.org/abs/2103.02014)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.02014.md)].
- Mixture of Volumetric Primitives for Efficient Neural Rendering - [[2103.01954](https://arxiv.org/abs/2103.01954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.01954.md)].
- Categorical Foundations of Gradient-Based Learning - [[2103.01931](https://arxiv.org/abs/2103.01931)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.01931.md)].
- Learners' Languages - [[2103.01189](https://arxiv.org/abs/2103.01189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.01189.md)].
- Automated Machine Learning on Graphs: A Survey - [[2103.00742](https://arxiv.org/abs/2103.00742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.00742.md)].

### February 2021

- Learning Transferable Visual Models From Natural Language Supervision - [[2103.00020](https://arxiv.org/abs/2103.00020)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2103.00020.md)].
- Node Proximity Is All You Need: Unified Structural and Positional Node and Graph Embedding - [[2102.13582](https://arxiv.org/abs/2102.13582)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.13582.md)].
- Do Input Gradients Highlight Discriminative Features? - [[2102.12781](https://arxiv.org/abs/2102.12781)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.12781.md)].
- Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing - [[2102.12060](https://arxiv.org/abs/2102.12060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.12060.md)].
- Deep ReLU Networks Preserve Expected Length - [[2102.10492](https://arxiv.org/abs/2102.10492)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.10492.md)].
- Meta-Learning Dynamics Forecasting Using Task Inference - [[2102.10271](https://arxiv.org/abs/2102.10271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.10271.md)].
- Less is More: Pre-train a Strong Text Encoder for Dense Retrieval Using a Weak Decoder - [[2102.09206](https://arxiv.org/abs/2102.09206)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.09206.md)].
- ShaRF: Shape-conditioned Radiance Fields from a Single View - [[2102.08860](https://arxiv.org/abs/2102.08860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.08860.md)].
- DEUP: Direct Epistemic Uncertainty Prediction - [[2102.08501](https://arxiv.org/abs/2102.08501)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.08501.md)].
- All You Need is DAG - [[2102.08325](https://arxiv.org/abs/2102.08325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.08325.md)].
- Topological Graph Neural Networks - [[2102.07835](https://arxiv.org/abs/2102.07835)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.07835.md)].
- Is Space-Time Attention All You Need for Video Understanding? - [[2102.05095](https://arxiv.org/abs/2102.05095)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.05095.md)].
- Contrastive Embeddings for Neural Architectures - [[2102.04208](https://arxiv.org/abs/2102.04208)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.04208.md)].
- Hyperspherical embedding for novel class classification - [[2102.03243](https://arxiv.org/abs/2102.03243)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.03243.md)].
- Unifying Vision-and-Language Tasks via Text Generation - [[2102.02779](https://arxiv.org/abs/2102.02779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.02779.md)].
- Learning Graph Embeddings for Compositional Zero-shot Learning - [[2102.01987](https://arxiv.org/abs/2102.01987)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2102.01987.md)].

### January 2021

- VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs - [[2101.12059](https://arxiv.org/abs/2101.12059)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.12059.md)].
- Compositional Semantics for Probabilistic Programs with Exact Conditioning - [[2101.11351](https://arxiv.org/abs/2101.11351)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.11351.md)].
- RESPER: Computationally Modelling Resisting Strategies in Persuasive Conversations - [[2101.10545](https://arxiv.org/abs/2101.10545)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.10545.md)].
- Reverse Derivative Ascent: A Categorical Approach to Learning Boolean Circuits - [[2101.10488](https://arxiv.org/abs/2101.10488)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.10488.md)].
- Advances and Challenges in Conversational Recommender Systems: A Survey - [[2101.09459](https://arxiv.org/abs/2101.09459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.09459.md)].
- A Comprehensive Survey on Hardware-Aware Neural Architecture Search - [[2101.09336](https://arxiv.org/abs/2101.09336)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.09336.md)].
- Higher Order Automatic Differentiation of Higher Order Functions - [[2101.06757](https://arxiv.org/abs/2101.06757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.06757.md)].
- The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models - [[2101.05667](https://arxiv.org/abs/2101.05667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.05667.md)].
- Evaluating Disentanglement of Structured Representations - [[2101.04041](https://arxiv.org/abs/2101.04041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.04041.md)].
- Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity - [[2101.03961](https://arxiv.org/abs/2101.03961)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.03961.md)].
- Evolving Reinforcement Learning Algorithms - [[2101.03958](https://arxiv.org/abs/2101.03958)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.03958.md)].
- Max-Affine Spline Insights Into Deep Network Pruning - [[2101.02338](https://arxiv.org/abs/2101.02338)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.02338.md)].
- VinVL: Revisiting Visual Representations in Vision-Language Models - [[2101.00529](https://arxiv.org/abs/2101.00529)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.00529.md)].
- Prefix-Tuning: Optimizing Continuous Prompts for Generation - [[2101.00190](https://arxiv.org/abs/2101.00190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.00190.md)].
- Multi-task Retrieval for Knowledge-Intensive Tasks - [[2101.00117](https://arxiv.org/abs/2101.00117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.00117.md)].

</div></details>

<details open><summary><strong>2020</strong></summary><div>

### December 2020

- Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation - [[2012.15416](https://arxiv.org/abs/2012.15416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.15416.md)].
- Refine and Imitate: Reducing Repetition and Inconsistency in Persuasion Dialogues via Reinforcement Learning and Human Demonstration - [[2012.15375](https://arxiv.org/abs/2012.15375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.15375.md)].
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning - [[2012.13628](https://arxiv.org/abs/2012.13628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.13628.md)].
- Evolution Is All You Need: Phylogenetic Augmentation for Contrastive Learning - [[2012.13475](https://arxiv.org/abs/2012.13475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.13475.md)].
- ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language - [[2012.13048](https://arxiv.org/abs/2012.13048)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.13048.md)].
- Learning Dense Representations of Phrases at Scale - [[2012.12624](https://arxiv.org/abs/2012.12624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.12624.md)].
- A Distributional Approach to Controlled Text Generation - [[2012.11635](https://arxiv.org/abs/2012.11635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.11635.md)].
- OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning - [[2012.11552](https://arxiv.org/abs/2012.11552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.11552.md)].
- Transformer Interpretability Beyond Attention Visualization - [[2012.09838](https://arxiv.org/abs/2012.09838)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.09838.md)].
- Neural Volume Rendering: NeRF And Beyond - [[2101.05204](https://arxiv.org/abs/2101.05204)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2101.05204.md)].
- Keyword-Guided Neural Conversational Model - [[2012.08383](https://arxiv.org/abs/2012.08383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.08383.md)].
- CARE: Commonsense-Aware Emotional Response Generation with Latent Concepts - [[2012.08377](https://arxiv.org/abs/2012.08377)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.08377.md)].
- Understanding the Behaviour of Contrastive Loss - [[2012.09740](https://arxiv.org/abs/2012.09740)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.09740.md)].
- Image Inpainting Guided by Coherence Priors of Semantics and Textures - [[2012.08054](https://arxiv.org/abs/2012.08054)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.08054.md)].
- Contrastive Learning with Adversarial Perturbations for Conditional Text Generation - [[2012.07280](https://arxiv.org/abs/2012.07280)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.07280.md)].
- Differential Evolution for Neural Architecture Search - [[2012.06400](https://arxiv.org/abs/2012.06400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.06400.md)].
- Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need? - [[2012.06166](https://arxiv.org/abs/2012.06166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.06166.md)].
- Distilling Knowledge from Reader to Retriever for Question Answering - [[2012.04584](https://arxiv.org/abs/2012.04584)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.04584.md)]. - Active Learning: Problem Settings and Recent Developments - [[2012.04225](https://arxiv.org/abs/2012.04225)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.04225.md)]. - Sheaf Neural Networks - [[2012.06333](https://arxiv.org/abs/2012.06333)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.06333.md)]. - Challenging common interpretability assumptions in feature attribution explanations - [[2012.02748](https://arxiv.org/abs/2012.02748)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02748.md)]. - Practical No-box Adversarial Attacks against DNNs - [[2012.02525](https://arxiv.org/abs/2012.02525)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02525.md)]. - RPT: Relational Pre-trained Transformer Is Almost All You Need towards Democratizing Data Preparation - [[2012.02469](https://arxiv.org/abs/2012.02469)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02469.md)]. - pixelNeRF: Neural Radiance Fields from One or Few Images - [[2012.02190](https://arxiv.org/abs/2012.02190)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02190.md)]. - Learned Initializations for Optimizing Coordinate-Based Neural Representations - [[2012.02189](https://arxiv.org/abs/2012.02189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02189.md)]. - Neural Prototype Trees for Interpretable Fine-grained Image Recognition - [[2012.02046](https://arxiv.org/abs/2012.02046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.02046.md)]. - Just Ask: Learning to Answer Questions from Millions of Narrated Videos - [[2012.00451](https://arxiv.org/abs/2012.00451)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.00451.md)]. 
- CPM: A Large-scale Generative Chinese Pre-trained Language Model - [[2012.00413](https://arxiv.org/abs/2012.00413)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2012.00413.md)]. ### November 2020 - Feature Learning in Infinite-Width Neural Networks - [[2011.14522](https://arxiv.org/abs/2011.14522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.14522.md)]. - How Well Do Self-Supervised Models Transfer? - [[2011.13377](https://arxiv.org/abs/2011.13377)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.13377.md)]. - Can Temporal Information Help with Contrastive Self-Supervised Learning? - [[2011.13046](https://arxiv.org/abs/2011.13046)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.13046.md)]. - All You Need is a Good Functional Prior for Bayesian Deep Learning - [[2011.12829](https://arxiv.org/abs/2011.12829)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.12829.md)]. - DeRF: Decomposed Radiance Fields - [[2011.12490](https://arxiv.org/abs/2011.12490)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.12490.md)]. - GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields - [[2011.12100](https://arxiv.org/abs/2011.12100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.12100.md)]. - Hierarchically Decoupled Spatial-Temporal Contrast for Self-supervised Video Representation Learning - [[2011.11261](https://arxiv.org/abs/2011.11261)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.11261.md)]. - ROME: Robustifying Memory-Efficient NAS via Topology Disentanglement and Gradient Accumulation - [[2011.11233](https://arxiv.org/abs/2011.11233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.11233.md)]. - Exploring Simple Siamese Representation Learning - [[2011.10566](https://arxiv.org/abs/2011.10566)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.10566.md)]. 
- A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning - [[2011.10464](https://arxiv.org/abs/2011.10464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.10464.md)]. - Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning - [[2011.10043](https://arxiv.org/abs/2011.10043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.10043.md)]. - MixMix: All You Need for Data-Free Compression Are Feature and Data Mixing - [[2011.09899](https://arxiv.org/abs/2011.09899)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.09899.md)]. - Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? - [[2011.09533](https://arxiv.org/abs/2011.09533)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.09533.md)]. - Contextual Fusion For Adversarial Robustness - [[2011.09526](https://arxiv.org/abs/2011.09526)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.09526.md)]. - Functorial Manifold Learning - [[2011.07435](https://arxiv.org/abs/2011.07435)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.07435.md)]. - Unsupervised Video Representation Learning by Bidirectional Feature Prediction - [[2011.06037](https://arxiv.org/abs/2011.06037)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.06037.md)]. - Multimodal Pretraining for Dense Video Captioning - [[2011.11760](https://arxiv.org/abs/2011.11760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.11760.md)]. - A Broad Dataset is All You Need for One-Shot Object Detection - [[2011.04267](https://arxiv.org/abs/2011.04267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.04267.md)]. - Feature Removal Is a Unifying Principle for Model Explanation Methods - [[2011.03623](https://arxiv.org/abs/2011.03623)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.03623.md)]. 
- Language Model is All You Need: Natural Language Understanding as Question Answering - [[2011.03023](https://arxiv.org/abs/2011.03023)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.03023.md)]. - This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition - [[2011.02863](https://arxiv.org/abs/2011.02863)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.02863.md)]. - Non-Autoregressive Predictive Coding for Learning Speech Representations from Local Dependencies - [[2011.00406](https://arxiv.org/abs/2011.00406)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.00406.md)]. ### October 2020 - A Survey on Contrastive Self-supervised Learning - [[2011.00362](https://arxiv.org/abs/2011.00362)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.00362.md)]. - Pretext-Contrastive Learning: Toward Good Practices in Self-supervised Video Representation Leaning - [[2010.15464](https://arxiv.org/abs/2010.15464)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.15464.md)]. - Learning to Actively Learn: A Robust Approach - [[2010.15382](https://arxiv.org/abs/2010.15382)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.15382.md)]. - Cycle-Contrast for Self-Supervised Video Representation Learning - [[2010.14810](https://arxiv.org/abs/2010.14810)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.14810.md)]. - How Does the Task Landscape Affect MAML Performance? - [[2010.14672](https://arxiv.org/abs/2010.14672)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.14672.md)]. - One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL - [[2010.14484](https://arxiv.org/abs/2010.14484)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.14484.md)]. 
- RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning - [[2011.07949](https://arxiv.org/abs/2011.07949)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2011.07949.md)]. - Interpretation of NLP models through input marginalization - [[2010.13984](https://arxiv.org/abs/2010.13984)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.13984.md)]. - Attention is All You Need in Speech Separation - [[2010.13154](https://arxiv.org/abs/2010.13154)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.13154.md)]. - Towards falsifiable interpretability research - [[2010.12016](https://arxiv.org/abs/2010.12016)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.12016.md)]. - The Turking Test: Can Language Models Understand Instructions? - [[2010.11982](https://arxiv.org/abs/2010.11982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.11982.md)]. - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale - [[2010.11929](https://arxiv.org/abs/2010.11929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.11929.md)]. - Transcription Is All You Need: Learning to Separate Musical Mixtures with Score as Supervision - [[2010.11904](https://arxiv.org/abs/2010.11904)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.11904.md)]. - Distilling Dense Representations for Ranking using Tightly-Coupled Teachers - [[2010.11386](https://arxiv.org/abs/2010.11386)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.11386.md)]. - Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review - [[2010.10596](https://arxiv.org/abs/2010.10596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.10596.md)]. - CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation - [[2010.10333](https://arxiv.org/abs/2010.10333)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.10333.md)]. 
- PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval - [[2010.10137](https://arxiv.org/abs/2010.10137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.10137.md)]. - Improving Dialog Systems for Negotiation with Personality Modeling - [[2010.09954](https://arxiv.org/abs/2010.09954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.09954.md)]. - Self-supervised Co-training for Video Representation Learning - [[2010.09709](https://arxiv.org/abs/2010.09709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.09709.md)]. - For self-supervised learning, Rationality implies generalization, provably - [[2010.08508](https://arxiv.org/abs/2010.08508)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.08508.md)]. - RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering - [[2010.08191](https://arxiv.org/abs/2010.08191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.08191.md)]. - What is More Likely to Happen Next? Video-and-Language Future Event Prediction - [[2010.07999](https://arxiv.org/abs/2010.07999)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.07999.md)]. - NeRF++: Analyzing and Improving Neural Radiance Fields - [[2010.07492](https://arxiv.org/abs/2010.07492)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.07492.md)]. - Representable Markov Categories and Comparison of Statistical Experiments in Categorical Probability - [[2010.07416](https://arxiv.org/abs/2010.07416)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.07416.md)]. - Pretrained Transformers for Text Ranking: BERT and Beyond - [[2010.06467](https://arxiv.org/abs/2010.06467)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.06467.md)]. 
- HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis - [[2010.05646](https://arxiv.org/abs/2010.05646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.05646.md)].
- Fairness-aware Agnostic Federated Learning - [[2010.05057](https://arxiv.org/abs/2010.05057)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.05057.md)].
- Automated Concatenation of Embeddings for Structured Prediction - [[2010.05006](https://arxiv.org/abs/2010.05006)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.05006.md)].
- GRF: Learning a General Radiance Field for 3D Representation and Rendering - [[2010.04595](https://arxiv.org/abs/2010.04595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.04595.md)].
- Automatic Backward Filtering Forward Guiding for Markov processes and graphical models - [[2010.03509](https://arxiv.org/abs/2010.03509)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.03509.md)].
- Unsupervised Representation Learning by Invariance Propagation - [[2010.11694](https://arxiv.org/abs/2010.11694)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.11694.md)].
- Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions - [[2010.03205](https://arxiv.org/abs/2010.03205)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.03205.md)].
- Beyond [CLS] through Ranking by Generation - [[2010.03073](https://arxiv.org/abs/2010.03073)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.03073.md)].
- A Transformer-based Framework for Multivariate Time Series Representation Learning - [[2010.02803](https://arxiv.org/abs/2010.02803)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.02803.md)].
- Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation - [[2010.02666](https://arxiv.org/abs/2010.02666)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.02666.md)]. - MIME: MIMicking Emotions for Empathetic Response Generation - [[2010.01454](https://arxiv.org/abs/2010.01454)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.01454.md)]. - Sharpness-Aware Minimization for Efficiently Improving Generalization - [[2010.01412](https://arxiv.org/abs/2010.01412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.01412.md)]. - All You Need Is CONSTRUCT - [[2010.00843](https://arxiv.org/abs/2010.00843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.00843.md)]. - SparTerm: Learning Term-based Sparse Representation for Fast Text Retrieval - [[2010.00768](https://arxiv.org/abs/2010.00768)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.00768.md)]. - Understanding Self-supervised Learning with Dual Deep Networks - [[2010.00578](https://arxiv.org/abs/2010.00578)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2010.00578.md)]. ### September 2020 - Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval - [[2009.12756](https://arxiv.org/abs/2009.12756)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.12756.md)]. - Learning to Plan and Realize Separately for Open-Ended Dialogue Systems - [[2009.12506](https://arxiv.org/abs/2009.12506)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.12506.md)]. - From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation - [[2009.12232](https://arxiv.org/abs/2009.12232)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.12232.md)]. - Learned Low Precision Graph Neural Networks - [[2009.09232](https://arxiv.org/abs/2009.09232)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.09232.md)]. 
- Generation-Augmented Retrieval for Open-domain Question Answering - [[2009.08553](https://arxiv.org/abs/2009.08553)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.08553.md)]. - SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning - [[2009.07724](https://arxiv.org/abs/2009.07724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.07724.md)]. - Simplified TinyBERT: Knowledge Distillation for Document Retrieval - [[2009.07531](https://arxiv.org/abs/2009.07531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.07531.md)]. - BERT-QE: Contextualized Query Expansion for Document Re-ranking - [[2009.07258](https://arxiv.org/abs/2009.07258)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.07258.md)]. - Efficient Transformers: A Survey - [[2009.06732](https://arxiv.org/abs/2009.06732)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.06732.md)]. - Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion - [[2009.05757](https://arxiv.org/abs/2009.05757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.05757.md)]. - Understanding the Role of Individual Units in a Deep Neural Network - [[2009.05041](https://arxiv.org/abs/2009.05041)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.05041.md)]. - Measuring Massive Multitask Language Understanding - [[2009.03300](https://arxiv.org/abs/2009.03300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.03300.md)]. - Sensors, Safety Models and A System-Level Approach to Safe and Scalable Automated Vehicles - [[2009.03301](https://arxiv.org/abs/2009.03301)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.03301.md)]. - Sample-Efficient Automated Deep Reinforcement Learning - [[2009.01555](https://arxiv.org/abs/2009.01555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.01555.md)]. 
- Learning to summarize from human feedback - [[2009.01325](https://arxiv.org/abs/2009.01325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.01325.md)]. - WaveGrad: Estimating Gradients for Waveform Generation - [[2009.00713](https://arxiv.org/abs/2009.00713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.00713.md)]. - Neural Architecture Search For Keyword Spotting - [[2009.00165](https://arxiv.org/abs/2009.00165)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.00165.md)]. ### August 2020 - Self-supervised Video Representation Learning by Uncovering Spatio-temporal Statistics - [[2008.13426](https://arxiv.org/abs/2008.13426)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.13426.md)]. - A Survey of Deep Active Learning - [[2009.00236](https://arxiv.org/abs/2009.00236)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2009.00236.md)]. - Against Membership Inference Attack: Pruning is All You Need - [[2008.13578](https://arxiv.org/abs/2008.13578)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.13578.md)]. - A Survey of Evaluation Metrics Used for NLG Systems - [[2008.12009](https://arxiv.org/abs/2008.12009)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.12009.md)]. - Automated Search for Resource-Efficient Branched Multi-Task Networks - [[2008.10292](https://arxiv.org/abs/2008.10292)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.10292.md)]. - Contrastive learning, multi-view redundancy, and linear models - [[2008.10150](https://arxiv.org/abs/2008.10150)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.10150.md)]. - A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild - [[2008.10010](https://arxiv.org/abs/2008.10010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.10010.md)]. 
- PARADE: Passage Representation Aggregation for Document Reranking - [[2008.09093](https://arxiv.org/abs/2008.09093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.09093.md)]. - Automated Machine Learning -- a brief review at the end of the early years - [[2008.08516](https://arxiv.org/abs/2008.08516)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.08516.md)]. - A Survey of Active Learning for Text Classification using Deep Neural Networks - [[2008.07267](https://arxiv.org/abs/2008.07267)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.07267.md)]. - Context-aware Feature Generation for Zero-shot Semantic Segmentation - [[2008.06893](https://arxiv.org/abs/2008.06893)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.06893.md)]. - Adaptive Learning of Tensor Network Structures - [[2008.05437](https://arxiv.org/abs/2008.05437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.05437.md)]. - SpeedySpeech: Efficient Neural Speech Synthesis - [[2008.03802](https://arxiv.org/abs/2008.03802)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.03802.md)]. - Spatiotemporal Contrastive Video Representation Learning - [[2008.03800](https://arxiv.org/abs/2008.03800)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.03800.md)]. - A Boundary Based Out-of-Distribution Classifier for Generalized Zero-Shot Learning - [[2008.04872](https://arxiv.org/abs/2008.04872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.04872.md)]. - Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework - [[2008.02531](https://arxiv.org/abs/2008.02531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.02531.md)]. - Predicting What You Already Know Helps: Provable Self-Supervised Learning - [[2008.01064](https://arxiv.org/abs/2008.01064)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.01064.md)]. 
- Explainable Face Recognition - [[2008.00916](https://arxiv.org/abs/2008.00916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.00916.md)]. ### July 2020 - Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases - [[2007.13916](https://arxiv.org/abs/2007.13916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.13916.md)]. - Self-supervised Learning for Large-scale Item Recommendations - [[2007.12865](https://arxiv.org/abs/2007.12865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.12865.md)]. - Self-Supervised Learning Across Domains - [[2007.12368](https://arxiv.org/abs/2007.12368)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.12368.md)]. - Understanding BERT Rankers Under Distillation - [[2007.11088](https://arxiv.org/abs/2007.11088)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.11088.md)]. - Video Representation Learning by Recognizing Temporal Transformations - [[2007.10730](https://arxiv.org/abs/2007.10730)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.10730.md)]. - Learning Joint Spatial-Temporal Transformations for Video Inpainting - [[2007.10247](https://arxiv.org/abs/2007.10247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.10247.md)]. - Mixture Representation Learning with Coupled Autoencoders - [[2007.09880](https://arxiv.org/abs/2007.09880)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.09880.md)]. - Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning - [[2007.09549](https://arxiv.org/abs/2007.09549)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.09549.md)]. - Towards Deeper Graph Neural Networks - [[2007.09296](https://arxiv.org/abs/2007.09296)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.09296.md)]. 
- DVI: Depth Guided Video Inpainting for Autonomous Driving - [[2007.08854](https://arxiv.org/abs/2007.08854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.08854.md)]. - Hopfield Networks is All You Need - [[2008.02217](https://arxiv.org/abs/2008.02217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2008.02217.md)]. - Natural Graph Networks - [[2007.08349](https://arxiv.org/abs/2007.08349)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.08349.md)]. - Few-shot Scene-adaptive Anomaly Detection - [[2007.07843](https://arxiv.org/abs/2007.07843)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.07843.md)]. - Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations - [[2007.06929](https://arxiv.org/abs/2007.06929)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.06929.md)]. - TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech - [[2007.06028](https://arxiv.org/abs/2007.06028)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.06028.md)]. - Accuracy Prediction with Non-neural Model for Neural Architecture Search - [[2007.04785](https://arxiv.org/abs/2007.04785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.04785.md)]. - GOLD-NAS: Gradual, One-Level, Differentiable - [[2007.03331](https://arxiv.org/abs/2007.03331)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.03331.md)]. - GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis - [[2007.02442](https://arxiv.org/abs/2007.02442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.02442.md)]. - The Fyodorov-Hiary-Keating Conjecture. I - [[2007.0988](https://arxiv.org/abs/2007.0988)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.0988.md)]. 
- Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval - [[2007.00808](https://arxiv.org/abs/2007.00808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.00808.md)]. - Interactive Path Reasoning on Graph for Conversational Recommendation - [[2007.00194](https://arxiv.org/abs/2007.00194)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.00194.md)]. ### June 2020 - Data Movement Is All You Need: A Case Study on Optimizing Transformers - [[2007.00072](https://arxiv.org/abs/2007.00072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2007.00072.md)]. - ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph - [[2006.16934](https://arxiv.org/abs/2006.16934)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.16934.md)]. - PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning - [[2006.16779](https://arxiv.org/abs/2006.16779)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.16779.md)]. - Lipschitzness Is All You Need To Tame Off-policy Generative Adversarial Imitation Learning - [[2006.16785](https://arxiv.org/abs/2006.16785)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.16785.md)]. - RepBERT: Contextualized Text Embeddings for First-Stage Retrieval - [[2006.15498](https://arxiv.org/abs/2006.15498)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.15498.md)]. - Video Representation Learning with Visual Tempo Consistency - [[2006.15489](https://arxiv.org/abs/2006.15489)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.15489.md)]. - GPT-GNN: Generative Pre-Training of Graph Neural Networks - [[2006.15437](https://arxiv.org/abs/2006.15437)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.15437.md)]. 
- Space-Time Correspondence as a Contrastive Random Walk - [[2006.14613](https://arxiv.org/abs/2006.14613)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.14613.md)]. - Practical applications of metric space magnitude and weighting vectors - [[2006.14063](https://arxiv.org/abs/2006.14063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.14063.md)]. - Generative causal explanations of black-box classifiers - [[2006.13913](https://arxiv.org/abs/2006.13913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.13913.md)]. - Gaining Insight into SARS-CoV-2 Infection and COVID-19 Severity Using Self-supervised Edge Features and Graph Neural Networks - [[2006.12971](https://arxiv.org/abs/2006.12971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.12971.md)]. - A Constructive, Type-Theoretic Approach to Regression via Global Optimisation - [[2006.12868](https://arxiv.org/abs/2006.12868)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.12868.md)]. - Unsupervised Evaluation of Interactive Dialog with DialoGPT - [[2006.12719](https://arxiv.org/abs/2006.12719)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.12719.md)]. - Efficient Hyperparameter Optimization in Deep Learning Using a Variable Length Genetic Algorithm - [[2006.12703](https://arxiv.org/abs/2006.12703)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.12703.md)]. - Logarithmic Pruning is All You Need - [[2006.12156](https://arxiv.org/abs/2006.12156)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.12156.md)]. - Towards Understanding Label Smoothing - [[2006.11653](https://arxiv.org/abs/2006.11653)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.11653.md)]. - wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations - [[2006.11477](https://arxiv.org/abs/2006.11477)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.11477.md)]. 
- Self-Supervised Prototypical Transfer Learning for Few-Shot Classification - [[2006.11325](https://arxiv.org/abs/2006.11325)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.11325.md)]. - Denoising Diffusion Probabilistic Models - [[2006.11239](https://arxiv.org/abs/2006.11239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.11239.md)]. - Neural Parameter Allocation Search - [[2006.10598](https://arxiv.org/abs/2006.10598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.10598.md)]. - Contrastive learning of global and local features for medical image segmentation with limited annotations - [[2006.10511](https://arxiv.org/abs/2006.10511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.10511.md)]. - Self-supervised Learning on Graphs: Deep Insights and New Direction - [[2006.10141](https://arxiv.org/abs/2006.10141)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.10141.md)]. - Big Self-Supervised Models are Strong Semi-Supervised Learners - [[2006.10029](https://arxiv.org/abs/2006.10029)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.10029.md)]. - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training - [[2006.09963](https://arxiv.org/abs/2006.09963)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.09963.md)]. - Unsupervised Learning of Visual Features by Contrasting Cluster Assignments - [[2006.09882](https://arxiv.org/abs/2006.09882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.09882.md)]. - Cross-lingual Retrieval for Iterative Self-Supervised Training - [[2006.09526](https://arxiv.org/abs/2006.09526)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.09526.md)]. - When Does Self-Supervision Help Graph Convolutional Networks? - [[2006.09136](https://arxiv.org/abs/2006.09136)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.09136.md)]. 
- Augmented Sliced Wasserstein Distances - [[2006.08812](https://arxiv.org/abs/2006.08812)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.08812.md)]. - Self-supervised Learning: Generative or Contrastive - [[2006.08218](https://arxiv.org/abs/2006.08218)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.08218.md)]. - DeeperGCN: All You Need to Train Deeper GCNs - [[2006.07739](https://arxiv.org/abs/2006.07739)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.07739.md)]. - Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels - [[2006.07556](https://arxiv.org/abs/2006.07556)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.07556.md)]. - Rethinking the Value of Labels for Improving Class-Imbalanced Learning - [[2006.07529](https://arxiv.org/abs/2006.07529)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.07529.md)]. - Self-Supervised Relational Reasoning for Representation Learning - [[2006.05849](https://arxiv.org/abs/2006.05849)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.05849.md)]. - Contrastive Multi-View Representation Learning on Graphs - [[2006.05582](https://arxiv.org/abs/2006.05582)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.05582.md)]. - FastSpeech 2: Fast and High-Quality End-to-End Text to Speech - [[2006.04558](https://arxiv.org/abs/2006.04558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.04558.md)]. - Differentiable Neural Input Search for Recommender Systems - [[2006.04466](https://arxiv.org/abs/2006.04466)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.04466.md)]. - CoCon: A Self-Supervised Approach for Controlled Text Generation - [[2006.03535](https://arxiv.org/abs/2006.03535)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.03535.md)]. 
- M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training - [[2006.02635](https://arxiv.org/abs/2006.02635)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.02635.md)].
- Situated and Interactive Multimodal Conversations - [[2006.01460](https://arxiv.org/abs/2006.01460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.01460.md)].

### May 2020

- Bayesian Updates Compose Optically - [[2006.01631](https://arxiv.org/abs/2006.01631)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.01631.md)].
- Explainable Artificial Intelligence: a Systematic Review - [[2006.00093](https://arxiv.org/abs/2006.00093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2006.00093.md)].
- Language Models are Few-Shot Learners - [[2005.14165](https://arxiv.org/abs/2005.14165)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.14165.md)].
- SCAN: Learning to Classify Images without Labels - [[2005.12320](https://arxiv.org/abs/2005.12320)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.12320.md)].
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling - [[2005.11742](https://arxiv.org/abs/2005.11742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.11742.md)].
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks - [[2005.11401](https://arxiv.org/abs/2005.11401)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.11401.md)].
- Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search - [[2005.11129](https://arxiv.org/abs/2005.11129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.11129.md)].
- Novel Policy Seeking with Constrained Optimization - [[2005.10696](https://arxiv.org/abs/2005.10696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.10696.md)].
- Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation - [[2005.10283](https://arxiv.org/abs/2005.10283)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.10283.md)].
- Mirror Descent Policy Optimization - [[2005.09814](https://arxiv.org/abs/2005.09814)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.09814.md)].
- Normalized Attention Without Probability Cage - [[2005.09561](https://arxiv.org/abs/2005.09561)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.09561.md)].
- Vector-Quantized Autoregressive Predictive Coding - [[2005.08392](https://arxiv.org/abs/2005.08392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.08392.md)].
- Semantic Photo Manipulation with a Generative Image Prior - [[2005.07727](https://arxiv.org/abs/2005.07727)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.07727.md)].
- Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical Analysis of System-wise Evaluation - [[2005.07362](https://arxiv.org/abs/2005.07362)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.07362.md)].
- Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech - [[2005.05106](https://arxiv.org/abs/2005.05106)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.05106.md)].
- Local Self-Attention over Long Text for Efficient Document Retrieval - [[2005.04908](https://arxiv.org/abs/2005.04908)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.04908.md)].
- Categorical Stochastic Processes and Likelihood - [[2005.04735](https://arxiv.org/abs/2005.04735)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.04735.md)].
- Condensed Movies: Story Based Retrieval with Contextual Embeddings - [[2005.04208](https://arxiv.org/abs/2005.04208)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.04208.md)].
- DramaQA: Character-Centered Video Story Understanding with Hierarchical QA - [[2005.03356](https://arxiv.org/abs/2005.03356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.03356.md)].
- The Cascade Transformer: an Application for Efficient Answer Sentence Selection - [[2005.02534](https://arxiv.org/abs/2005.02534)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.02534.md)].
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? - [[2005.01831](https://arxiv.org/abs/2005.01831)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.01831.md)].
- Does Visual Self-Supervision Improve Learning of Speech Representations for Emotion Recognition? - [[2005.01400](https://arxiv.org/abs/2005.01400)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.01400.md)].
- Learning an Unreferenced Metric for Online Dialogue Evaluation - [[2005.00583](https://arxiv.org/abs/2005.00583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.00583.md)].
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training - [[2005.00558](https://arxiv.org/abs/2005.00558)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.00558.md)].
- HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training - [[2005.00200](https://arxiv.org/abs/2005.00200)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.00200.md)].
- Sparse, Dense, and Attentional Representations for Text Retrieval - [[2005.00181](https://arxiv.org/abs/2005.00181)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2005.00181.md)].

### April 2020

- Consistent Video Depth Estimation - [[2004.15021](https://arxiv.org/abs/2004.15021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.15021.md)].
- Training Curricula for Open Domain Answer Re-Ranking - [[2004.14269](https://arxiv.org/abs/2004.14269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.14269.md)].
- Efficient Document Re-Ranking for Transformers by Precomputing Term Representations - [[2004.14255](https://arxiv.org/abs/2004.14255)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.14255.md)].
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning - [[2004.14074](https://arxiv.org/abs/2004.14074)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.14074.md)].
- Complementing Lexical Retrieval with Semantic Residual Embedding - [[2004.13969](https://arxiv.org/abs/2004.13969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.13969.md)].
- Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels - [[2004.13649](https://arxiv.org/abs/2004.13649)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.13649.md)].
- Recipes for building an open-domain chatbot - [[2004.13637](https://arxiv.org/abs/2004.13637)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.13637.md)].
- Modularized Transfomer-based Ranking Framework - [[2004.13313](https://arxiv.org/abs/2004.13313)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.13313.md)].
- ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT - [[2004.12832](https://arxiv.org/abs/2004.12832)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.12832.md)].
- All you need is a second look: Towards Tighter Arbitrary shape text detection - [[2004.12436](https://arxiv.org/abs/2004.12436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.12436.md)].
- Multi-Domain Dialogue Acts and Response Co-Generation - [[2004.12363](https://arxiv.org/abs/2004.12363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.12363.md)].
- Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching - [[2004.12297](https://arxiv.org/abs/2004.12297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.12297.md)].
- Learning Term Discrimination - [[2004.11759](https://arxiv.org/abs/2004.11759)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.11759.md)].
- Supervised Contrastive Learning - [[2004.11362](https://arxiv.org/abs/2004.11362)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.11362.md)].
- Federated Stochastic Gradient Langevin Dynamics - [[2004.11231](https://arxiv.org/abs/2004.11231)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.11231.md)].
- Distilling Knowledge for Fast Retrieval-based Chat-bots - [[2004.11045](https://arxiv.org/abs/2004.11045)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.11045.md)].
- Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling - [[2004.09890](https://arxiv.org/abs/2004.09890)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.09890.md)].
- Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks - [[2004.06165](https://arxiv.org/abs/2004.06165)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.06165.md)].
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness - [[2004.05816](https://arxiv.org/abs/2004.05816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.05816.md)].
- Dense Passage Retrieval for Open-Domain Question Answering - [[2004.04906](https://arxiv.org/abs/2004.04906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.04906.md)].
- TextGAIL: Generative Adversarial Imitation Learning for Text Generation - [[2004.13796](https://arxiv.org/abs/2004.13796)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.13796.md)].
- There and Back Again: Revisiting Backpropagation Saliency Methods - [[2004.02866](https://arxiv.org/abs/2004.02866)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.02866.md)].
- A Survey on Conversational Recommender Systems - [[2004.00646](https://arxiv.org/abs/2004.00646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2004.00646.md)].

### March 2020

- How Useful is Self-Supervised Pretraining for Visual Tasks? - [[2003.14323](https://arxiv.org/abs/2003.14323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.14323.md)].
- InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining - [[2003.13198](https://arxiv.org/abs/2003.13198)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.13198.md)].
- VIOLIN: A Large-Scale Dataset for Video-and-Language Inference - [[2003.11618](https://arxiv.org/abs/2003.11618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.11618.md)].
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? - [[2003.11539](https://arxiv.org/abs/2003.11539)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.11539.md)].
- Deformable Style Transfer - [[2003.11038](https://arxiv.org/abs/2003.11038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.11038.md)].
- Distributional Reinforcement Learning with Ensembles - [[2003.10903](https://arxiv.org/abs/2003.10903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.10903.md)].
- Model-based Asynchronous Hyperparameter and Neural Architecture Search - [[2003.10865](https://arxiv.org/abs/2003.10865)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.10865.md)].
- Pre-trained Models for Natural Language Processing: A Survey - [[2003.08271](https://arxiv.org/abs/2003.08271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.08271.md)].
- Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification - [[2003.07833](https://arxiv.org/abs/2003.07833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.07833.md)].
- XPersona: Evaluating Multilingual Personalized Chatbot - [[2003.07568](https://arxiv.org/abs/2003.07568)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.07568.md)].
- Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes - [[2003.06877](https://arxiv.org/abs/2003.06877)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.06877.md)].
- VCNet: A Robust Approach to Blind Image Inpainting - [[2003.06816](https://arxiv.org/abs/2003.06816)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.06816.md)].
- Document Ranking with a Pretrained Sequence-to-Sequence Model - [[2003.06713](https://arxiv.org/abs/2003.06713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.06713.md)].
- Building and Interpreting Deep Similarity Models - [[2003.05431](https://arxiv.org/abs/2003.05431)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.05431.md)].
- xCos: An Explainable Cosine Metric for Face Verification Task - [[2003.05383](https://arxiv.org/abs/2003.05383)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.05383.md)].
- Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning - [[2003.05162](https://arxiv.org/abs/2003.05162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.05162.md)].
- ReZero is All You Need: Fast Convergence at Large Depth - [[2003.04887](https://arxiv.org/abs/2003.04887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.04887.md)].
- Improved Baselines with Momentum Contrastive Learning - [[2003.04297](https://arxiv.org/abs/2003.04297)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.04297.md)].
- How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS - [[2003.04276](https://arxiv.org/abs/2003.04276)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.04276.md)].
- A Safety Framework for Critical Systems Utilising Deep Neural Networks - [[2003.05311](https://arxiv.org/abs/2003.05311)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.05311.md)].
- De Finetti's construction as a categorical limit - [[2003.01964](https://arxiv.org/abs/2003.01964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.01964.md)].
- AlignTTS: Efficient Feed-Forward Text-to-Speech System without Explicit Alignment - [[2003.01950](https://arxiv.org/abs/2003.01950)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.01950.md)].
- XGPT: Cross-modal Generative Pre-Training for Image Captioning - [[2003.01473](https://arxiv.org/abs/2003.01473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.01473.md)].
- Benchmarking Graph Neural Networks - [[2003.00982](https://arxiv.org/abs/2003.00982)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2003.00982.md)].

### February 2020

- DC-BERT: Decoupling Question and Document for Efficient Contextual Encoding - [[2002.12591](https://arxiv.org/abs/2002.12591)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.12591.md)].
- Estimation-Action-Reflection: Towards Deep Interaction Between Conversational and Recommender Systems - [[2002.09102](https://arxiv.org/abs/2002.09102)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.09102.md)].
- Automatic Shortcut Removal for Self-Supervised Representation Learning - [[2002.08822](https://arxiv.org/abs/2002.08822)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.08822.md)].
- Disentangled Speech Embeddings using Cross-modal Self-supervision - [[2002.08742](https://arxiv.org/abs/2002.08742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.08742.md)].
- Gradient Boosting Neural Networks: GrowNet - [[2002.07971](https://arxiv.org/abs/2002.07971)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.07971.md)].
- Information Condensing Active Learning - [[2002.07916](https://arxiv.org/abs/2002.07916)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.07916.md)].
- UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation - [[2002.06353](https://arxiv.org/abs/2002.06353)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.06353.md)].
- A Simple Framework for Contrastive Learning of Visual Representations - [[2002.05709](https://arxiv.org/abs/2002.05709)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.05709.md)].
- REALM: Retrieval-Augmented Language Model Pre-Training - [[2002.08909](https://arxiv.org/abs/2002.08909)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.08909.md)].
- Pre-training Tasks for Embedding-based Large-scale Retrieval - [[2002.03932](https://arxiv.org/abs/2002.03932)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.03932.md)].
- Unsupervised pretraining transfers well across languages - [[2002.02848](https://arxiv.org/abs/2002.02848)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.02848.md)].
- Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation - [[2002.01196](https://arxiv.org/abs/2002.01196)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.01196.md)].
- Proving the Lottery Ticket Hypothesis: Pruning is All You Need - [[2002.00585](https://arxiv.org/abs/2002.00585)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2002.00585.md)].

### January 2020

- Learning Robust and Multilingual Speech Representations - [[2001.11128](https://arxiv.org/abs/2001.11128)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.11128.md)].
- Selective Weak Supervision for Neural Information Retrieval - [[2001.10382](https://arxiv.org/abs/2001.10382)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.10382.md)].
- Multi-task self-supervised learning for Robust Speech Recognition - [[2001.09239](https://arxiv.org/abs/2001.09239)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.09239.md)].
- TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval - [[2001.09099](https://arxiv.org/abs/2001.09099)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.09099.md)].
- Scaling Laws for Neural Language Models - [[2001.08361](https://arxiv.org/abs/2001.08361)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.08361.md)].
- Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks - [[2001.08001](https://arxiv.org/abs/2001.08001)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.08001.md)].
- Latency-Aware Differentiable Neural Architecture Search - [[2001.06392](https://arxiv.org/abs/2001.06392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.06392.md)].
- MixPath: A Unified Approach for One-shot Neural Architecture Search - [[2001.05887](https://arxiv.org/abs/2001.05887)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.05887.md)].
- A Categorical Framework for Learning Generalised Tree Automata - [[2001.05786](https://arxiv.org/abs/2001.05786)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.05786.md)].
- Visually Guided Self Supervised Learning of Speech Representations - [[2001.04316](https://arxiv.org/abs/2001.04316)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.04316.md)].
- ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training - [[2001.04063](https://arxiv.org/abs/2001.04063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.04063.md)].
- Correctness of Automatic Differentiation via Diffeologies and Categorical Gluing - [[2001.02209](https://arxiv.org/abs/2001.02209)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.02209.md)].
- Deeper Insights into Weight Sharing in Neural Architecture Search - [[2001.01431](https://arxiv.org/abs/2001.01431)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2001.01431.md)].

</div></details>

<details open><summary><strong>2019</strong></summary><div>

### December 2019

- RC-DARTS: Resource Constrained Differentiable Architecture Search - [[1912.12814](https://arxiv.org/abs/1912.12814)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.12814.md)].
- NAS evaluation is frustratingly hard - [[1912.12522](https://arxiv.org/abs/1912.12522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.12522.md)].
- Improving Knowledge-aware Dialogue Generation via Knowledge Base Question Answering - [[1912.07491](https://arxiv.org/abs/1912.07491)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.07491.md)].
- Image Processing Using Multi-Code GAN Prior - [[1912.07116](https://arxiv.org/abs/1912.07116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.07116.md)].
- ClusterFit: Improving Generalization of Visual Representations - [[1912.03330](https://arxiv.org/abs/1912.03330)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.03330.md)].
- Self-Supervised Visual Terrain Classification from Unsupervised Acoustic Feature Learning - [[1912.03227](https://arxiv.org/abs/1912.03227)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.03227.md)].
- Infinite products and zero-one laws in categorical probability - [[1912.02769](https://arxiv.org/abs/1912.02769)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.02769.md)].
- 15 Keypoints Is All You Need - [[1912.02323](https://arxiv.org/abs/1912.02323)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.02323.md)].
- 12-in-1: Multi-Task Vision and Language Representation Learning - [[1912.02315](https://arxiv.org/abs/1912.02315)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.02315.md)].
- Prioritized Unit Propagation with Periodic Resetting is (Almost) All You Need for Random SAT Solving - [[1912.05906](https://arxiv.org/abs/1912.05906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.05906.md)].
- Self-Supervised Learning of Pretext-Invariant Representations - [[1912.01991](https://arxiv.org/abs/1912.01991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.01991.md)].
- Just Go with the Flow: Self-Supervised Scene Flow Estimation - [[1912.00497](https://arxiv.org/abs/1912.00497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1912.00497.md)].

### November 2019

- ASR is all you need: cross-modal distillation for lip reading - [[1911.12747](https://arxiv.org/abs/1911.12747)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.12747.md)].
- Single Headed Attention RNN: Stop Thinking With Your Head - [[1911.11423](https://arxiv.org/abs/1911.11423)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.11423.md)].
- Binarized Neural Architecture Search - [[1911.10862](https://arxiv.org/abs/1911.10862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.10862.md)].
- Breaking the cycle -- Colleagues are all you need - [[1911.10538](https://arxiv.org/abs/1911.10538)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.10538.md)].
- Region Normalization for Image Inpainting - [[1911.10375](https://arxiv.org/abs/1911.10375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.10375.md)].
- All You Need Is Boundary: Toward Arbitrary-Shaped Text Spotting - [[1911.09550](https://arxiv.org/abs/1911.09550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.09550.md)].
- Automatic Text-based Personality Recognition on Monologues and Multiparty Dialogues Using Attentive Networks and Contextual Embeddings - [[1911.09304](https://arxiv.org/abs/1911.09304)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.09304.md)].
- Generating Persona Consistent Dialogues by Exploiting Natural Language Inference - [[1911.05889](https://arxiv.org/abs/1911.05889)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.05889.md)].
- Momentum Contrast for Unsupervised Visual Representation Learning - [[1911.05722](https://arxiv.org/abs/1911.05722)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.05722.md)].
- A Pre-training Based Personalized Dialogue Generation Model with Persona-sparse Data - [[1911.04700](https://arxiv.org/abs/1911.04700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.04700.md)].
- Effectiveness of self-supervised pre-training for speech recognition - [[1911.03912](https://arxiv.org/abs/1911.03912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.03912.md)].
- Contextualized Sparse Representations for Real-Time Open-Domain Question Answering - [[1911.02896](https://arxiv.org/abs/1911.02896)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.02896.md)].
- Fast Transformer Decoding: One Write-Head is All You Need - [[1911.02150](https://arxiv.org/abs/1911.02150)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1911.02150.md)].

### October 2019

- Attention Is All You Need for Chinese Word Segmentation - [[1910.14537](https://arxiv.org/abs/1910.14537)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.14537.md)].
- Multi-Stage Document Ranking with BERT - [[1910.14424](https://arxiv.org/abs/1910.14424)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.14424.md)].
- Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning - [[1910.12729](https://arxiv.org/abs/1910.12729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.12729.md)].
- Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters - [[1910.11831](https://arxiv.org/abs/1910.11831)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.11831.md)].
- Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders - [[1910.12638](https://arxiv.org/abs/1910.12638)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.12638.md)].
- Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram - [[1910.11480](https://arxiv.org/abs/1910.11480)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.11480.md)].
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer - [[1910.10683](https://arxiv.org/abs/1910.10683)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.10683.md)].
- Generative Pre-Training for Speech with Autoregressive Predictive Coding - [[1910.12607](https://arxiv.org/abs/1910.12607)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.12607.md)].
- KnowIT VQA: Answering Knowledge-Based Questions about Videos - [[1910.10706](https://arxiv.org/abs/1910.10706)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.10706.md)].
- Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis - [[1910.10288](https://arxiv.org/abs/1910.10288)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.10288.md)].
- Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video - [[1910.09430](https://arxiv.org/abs/1910.09430)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.09430.md)].
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks - [[1910.08485](https://arxiv.org/abs/1910.08485)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.08485.md)].
- ALOHA: Artificial Learning of Human Attributes for Dialogue Agents - [[1910.08293](https://arxiv.org/abs/1910.08293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.08293.md)].
- Reverse derivative categories - [[1910.07065](https://arxiv.org/abs/1910.07065)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.07065.md)].
- Understanding the Limitations of Variational Mutual Information Estimators - [[1910.06222](https://arxiv.org/abs/1910.06222)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.06222.md)].
- Self-supervised Label Augmentation via Input Transformations - [[1910.05872](https://arxiv.org/abs/1910.05872)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.05872.md)].
- vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations - [[1910.05453](https://arxiv.org/abs/1910.05453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.05453.md)].
- A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings - [[1910.04760](https://arxiv.org/abs/1910.04760)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.04760.md)].
- Explaining image classifiers by removing input features using generative models - [[1910.04256](https://arxiv.org/abs/1910.04256)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.04256.md)].
- Probability, valuations, hyperspace: Three monads on Top and the support as a morphism - [[1910.03752](https://arxiv.org/abs/1910.03752)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.03752.md)].
- Bayesian open games - [[1910.03656](https://arxiv.org/abs/1910.03656)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.03656.md)].
- MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis - [[1910.06711](https://arxiv.org/abs/1910.06711)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.06711.md)].
- Continual Learning in Neural Networks - [[1910.02718](https://arxiv.org/abs/1910.02718)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.02718.md)].
- ZeRO: Memory Optimizations Toward Training Trillion Parameter Models - [[1910.02054](https://arxiv.org/abs/1910.02054)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.02054.md)].
- Is Fast Adaptation All You Need? - [[1910.01705](https://arxiv.org/abs/1910.01705)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1910.01705.md)].

### September 2019

- Interpretations are useful: penalizing explanations to align neural networks with prior knowledge - [[1909.13584](https://arxiv.org/abs/1909.13584)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.13584.md)].
- Visual Explanation for Deep Metric Learning - [[1909.12977](https://arxiv.org/abs/1909.12977)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.12977.md)].
- Joint-task Self-supervised Learning for Temporal Correspondence - [[1909.11895](https://arxiv.org/abs/1909.11895)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.11895.md)].
- UNITER: UNiversal Image-TExt Representation Learning - [[1909.11740](https://arxiv.org/abs/1909.11740)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.11740.md)].
- High Fidelity Speech Synthesis with Adversarial Networks - [[1909.11646](https://arxiv.org/abs/1909.11646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.11646.md)].
- Improving Generative Visual Dialog by Answering Diverse Questions - [[1909.10470](https://arxiv.org/abs/1909.10470)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.10470.md)].
- On Model Stability as a Function of Random Seed - [[1909.10447](https://arxiv.org/abs/1909.10447)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.10447.md)].
- Understanding and Robustifying Differentiable Architecture Search - [[1909.09656](https://arxiv.org/abs/1909.09656)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.09656.md)].
- Self-Training for End-to-End Speech Recognition - [[1909.09116](https://arxiv.org/abs/1909.09116)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.09116.md)].
- Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism - [[1909.08053](https://arxiv.org/abs/1909.08053)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.08053.md)].
- An Internal Learning Approach to Video Inpainting - [[1909.07957](https://arxiv.org/abs/1909.07957)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.07957.md)].
- Learning to Deceive with Attention-Based Explanations - [[1909.07913](https://arxiv.org/abs/1909.07913)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.07913.md)].
- Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset - [[1909.05855](https://arxiv.org/abs/1909.05855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.05855.md)].
- CTRL: A Conditional Transformer Language Model for Controllable Generation - [[1909.05858](https://arxiv.org/abs/1909.05858)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.05858.md)].
- ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons - [[1909.03087](https://arxiv.org/abs/1909.03087)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.03087.md)].
- Image Inpainting with Learnable Bidirectional Attention Maps - [[1909.00968](https://arxiv.org/abs/1909.00968)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.00968.md)].
- Identifying Personality Traits Using Overlap Dynamics in Multiparty Dialogue - [[1909.00876](https://arxiv.org/abs/1909.00876)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.00876.md)]. - All You Need is Ratings: A Clustering Approach to Synthetic Rating Datasets Generation - [[1909.00687](https://arxiv.org/abs/1909.00687)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1909.00687.md)]. ### August 2019 - Copy-and-Paste Networks for Deep Video Inpainting - [[1908.11587](https://arxiv.org/abs/1908.11587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.11587.md)]. - Onion-Peel Networks for Deep Video Completion - [[1908.08718](https://arxiv.org/abs/1908.08718)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.08718.md)]. - VL-BERT: Pre-training of Generic Visual-Linguistic Representations - [[1908.08530](https://arxiv.org/abs/1908.08530)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.08530.md)]. - Efficient Deep Neural Networks - [[1908.08926](https://arxiv.org/abs/1908.08926)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.08926.md)]. - A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics - [[1908.07021](https://arxiv.org/abs/1908.07021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.07021.md)]. - Unsupervised Learning of Landmarks by Descriptor Vector Exchange - [[1908.06427](https://arxiv.org/abs/1908.06427)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.06427.md)]. - Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training - [[1908.06066](https://arxiv.org/abs/1908.06066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.06066.md)]. - StructureFlow: Image Inpainting via Structure-aware Appearance Flow - [[1908.03852](https://arxiv.org/abs/1908.03852)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.03852.md)]. 
- Approximating the Convex Hull via Metric Space Magnitude - [[1908.02692](https://arxiv.org/abs/1908.02692)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.02692.md)]. - ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks - [[1908.02265](https://arxiv.org/abs/1908.02265)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.02265.md)]. - On the Existence of Simpler Machine Learning Models - [[1908.01755](https://arxiv.org/abs/1908.01755)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.01755.md)]. - Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models - [[1908.01224](https://arxiv.org/abs/1908.01224)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.01224.md)]. - Generative Image Inpainting with Submanifold Alignment - [[1908.00211](https://arxiv.org/abs/1908.00211)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1908.00211.md)]. ### July 2019 - On Mutual Information Maximization for Representation Learning - [[1907.13625](https://arxiv.org/abs/1907.13625)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.13625.md)]. - Benchmarking Attribution Methods with Relative Feature Importance - [[1907.09701](https://arxiv.org/abs/1907.09701)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.09701.md)]. - Forward-Backward Decoding for Regularizing End-to-End TTS - [[1907.09006](https://arxiv.org/abs/1907.09006)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.09006.md)]. - Compositional Deep Learning - [[1907.08292](https://arxiv.org/abs/1907.08292)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.08292.md)]. - PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search - [[1907.05737](https://arxiv.org/abs/1907.05737)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.05737.md)]. 
- Dual Adversarial Semantics-Consistent Network for Generalized Zero-Shot Learning - [[1907.05570](https://arxiv.org/abs/1907.05570)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.05570.md)]. - Generative Counterfactual Introspection for Explainable Deep Learning - [[1907.03077](https://arxiv.org/abs/1907.03077)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.03077.md)]. - Large Scale Adversarial Representation Learning - [[1907.02544](https://arxiv.org/abs/1907.02544)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.02544.md)]. - Generalizing from a few environments in safety-critical reinforcement learning - [[1907.01475](https://arxiv.org/abs/1907.01475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.01475.md)]. - Learnable Gated Temporal Shift Module for Deep Video Inpainting - [[1907.01131](https://arxiv.org/abs/1907.01131)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.01131.md)]. ### June 2019 - Self-Supervised Dialogue Learning - [[1907.00448](https://arxiv.org/abs/1907.00448)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1907.00448.md)]. - Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty - [[1906.12340](https://arxiv.org/abs/1906.12340)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.12340.md)]. - Improving performance of deep learning models with axiomatic attribution priors and expected gradients - [[1906.10670](https://arxiv.org/abs/1906.10670)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.10670.md)]. - Unsupervised State Representation Learning in Atari - [[1906.08226](https://arxiv.org/abs/1906.08226)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.08226.md)]. - Sample-Efficient Neural Architecture Search by Learning Action Space - [[1906.06832](https://arxiv.org/abs/1906.06832)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.06832.md)]. 
- One Epoch Is All You Need - [[1906.06669](https://arxiv.org/abs/1906.06669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.06669.md)]. - Contrastive Multiview Coding - [[1906.05849](https://arxiv.org/abs/1906.05849)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.05849.md)]. - Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index - [[1906.05807](https://arxiv.org/abs/1906.05807)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.05807.md)]. - Factorized Mutual Information Maximization - [[1906.05460](https://arxiv.org/abs/1906.05460)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.05460.md)]. - Self-Supervised Learning for Contextualized Extractive Summarization - [[1906.04466](https://arxiv.org/abs/1906.04466)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.04466.md)]. - Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis - [[1906.03402](https://arxiv.org/abs/1906.03402)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.03402.md)]. - HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips - [[1906.03327](https://arxiv.org/abs/1906.03327)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.03327.md)]. - Selfie: Self-supervised Pretraining for Image Embedding - [[1906.02940](https://arxiv.org/abs/1906.02940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.02940.md)]. - XRAI: Better Attributions Through Regions - [[1906.02825](https://arxiv.org/abs/1906.02825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.02825.md)]. - Attention is all you need for Videos: Self-attention based Video Summarization using Universal Transformers - [[1906.02792](https://arxiv.org/abs/1906.02792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.02792.md)]. 
- Image Synthesis with a Single (Robust) Classifier - [[1906.09453](https://arxiv.org/abs/1906.09453)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.09453.md)]. - Automated Machine Learning: State-of-The-Art and Open Challenges - [[1906.02287](https://arxiv.org/abs/1906.02287)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.02287.md)]. - Learning Representations by Maximizing Mutual Information Across Views - [[1906.00910](https://arxiv.org/abs/1906.00910)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.00910.md)]. - Zero-Shot Semantic Segmentation - [[1906.00817](https://arxiv.org/abs/1906.00817)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.00817.md)]. - Rethinking Loss Design for Large-scale 3D Shape Retrieval - [[1906.00546](https://arxiv.org/abs/1906.00546)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.00546.md)]. - Latent Retrieval for Weakly Supervised Open Domain Question Answering - [[1906.00300](https://arxiv.org/abs/1906.00300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.00300.md)]. - Learning to Generate Grounded Visual Captions without Localization Supervision - [[1906.00283](https://arxiv.org/abs/1906.00283)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1906.00283.md)]. ### May 2019 - Attention Is (not) All You Need for Commonsense Reasoning - [[1905.13497](https://arxiv.org/abs/1905.13497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.13497.md)]. - Align-and-Attend Network for Globally and Locally Coherent Video Inpainting - [[1905.13066](https://arxiv.org/abs/1905.13066)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.13066.md)]. - Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets - [[1905.10854](https://arxiv.org/abs/1905.10854)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.10854.md)]. - Why do These Match? 
Explaining the Behavior of Image Similarity Models - [[1905.10797](https://arxiv.org/abs/1905.10797)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.10797.md)]. - Countering Noisy Labels By Learning From Auxiliary Clean Labels - [[1905.13305](https://arxiv.org/abs/1905.13305)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.13305.md)]. - Data-Efficient Image Recognition with Contrastive Predictive Coding - [[1905.09272](https://arxiv.org/abs/1905.09272)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.09272.md)]. - FastSpeech: Fast, Robust and Controllable Text to Speech - [[1905.09263](https://arxiv.org/abs/1905.09263)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.09263.md)]. - Deeper Text Understanding for IR with Contextual Neural Language Modeling - [[1905.09217](https://arxiv.org/abs/1905.09217)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.09217.md)]. - PEPSI++: Fast and Lightweight Network for Image Inpainting - [[1905.09010](https://arxiv.org/abs/1905.09010)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.09010.md)]. - Evolving Rewards to Automate Reinforcement Learning - [[1905.07628](https://arxiv.org/abs/1905.07628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.07628.md)]. - Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization - [[1905.04970](https://arxiv.org/abs/1905.04970)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.04970.md)]. - Deep Flow-Guided Video Inpainting - [[1905.02884](https://arxiv.org/abs/1905.02884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.02884.md)]. - Frame-Recurrent Video Inpainting by Robust Optical Flow Inference - [[1905.02882](https://arxiv.org/abs/1905.02882)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.02882.md)]. 
- Characterizing the invariances of learning algorithms using category theory - [[1905.02072](https://arxiv.org/abs/1905.02072)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.02072.md)]. - Deep Video Inpainting - [[1905.01639](https://arxiv.org/abs/1905.01639)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.01639.md)]. - Unsupervised Pre-Training of Image Features on Non-Curated Data - [[1905.01278](https://arxiv.org/abs/1905.01278)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.01278.md)]. - Scaling and Benchmarking Self-Supervised Visual Representation Learning - [[1905.01235](https://arxiv.org/abs/1905.01235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.01235.md)]. - Visualizing Deep Networks by Optimizing with Integrated Gradients - [[1905.00954](https://arxiv.org/abs/1905.00954)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.00954.md)]. - Full-Gradient Representation for Neural Network Visualization - [[1905.00780](https://arxiv.org/abs/1905.00780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.00780.md)]. ### April 2019 - Segmentation is All You Need - [[1904.13300](https://arxiv.org/abs/1904.13300)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.13300.md)]. - A critical analysis of self-supervision, or what we can learn from a single image - [[1904.13132](https://arxiv.org/abs/1904.13132)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.13132.md)]. - TVQA+: Spatio-Temporal Grounding for Video Question Answering - [[1904.11574](https://arxiv.org/abs/1904.11574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.11574.md)]. - DynamoNet: Dynamic Action and Motion Network - [[1904.11407](https://arxiv.org/abs/1904.11407)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.11407.md)]. 
- Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN - [[1904.10247](https://arxiv.org/abs/1904.10247)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.10247.md)]. - GraphNAS: Graph Neural Architecture Search with Reinforcement Learning - [[1904.09981](https://arxiv.org/abs/1904.09981)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.09981.md)]. - Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring - [[1905.01969](https://arxiv.org/abs/1905.01969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1905.01969.md)]. - SelFlow: Self-Supervised Learning of Optical Flow - [[1904.09117](https://arxiv.org/abs/1904.09117)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.09117.md)]. - Self-Supervised Audio-Visual Co-Segmentation - [[1904.09013](https://arxiv.org/abs/1904.09013)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.09013.md)]. - Understanding Neural Networks via Feature Visualization: A survey - [[1904.08939](https://arxiv.org/abs/1904.08939)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.08939.md)]. - Document Expansion by Query Prediction - [[1904.08375](https://arxiv.org/abs/1904.08375)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.08375.md)]. - Deep Fusion Network for Image Completion - [[1904.08060](https://arxiv.org/abs/1904.08060)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.08060.md)]. - Semantically Aligned Bias Reducing Zero Shot Learning - [[1904.07659](https://arxiv.org/abs/1904.07659)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07659.md)]. - HARK Side of Deep Learning -- From Grad Student Descent to Automated Machine Learning - [[1904.07633](https://arxiv.org/abs/1904.07633)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07633.md)]. 
- Understanding the Behaviors of BERT in Ranking - [[1904.07531](https://arxiv.org/abs/1904.07531)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07531.md)]. - Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting - [[1904.07475](https://arxiv.org/abs/1904.07475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07475.md)]. - Counterfactual Visual Explanations - [[1904.07451](https://arxiv.org/abs/1904.07451)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07451.md)]. - The Geometry of Bayesian Programming - [[1904.07425](https://arxiv.org/abs/1904.07425)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07425.md)]. - Focus Is All You Need: Loss Functions For Event-based Vision - [[1904.07235](https://arxiv.org/abs/1904.07235)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07235.md)]. - CEDR: Contextualized Embeddings for Document Ranking - [[1904.07094](https://arxiv.org/abs/1904.07094)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.07094.md)]. - VORNet: Spatio-temporally Consistent Video Inpainting for Object Removal - [[1904.06726](https://arxiv.org/abs/1904.06726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.06726.md)]. - wav2vec: Unsupervised Pre-training for Speech Recognition - [[1904.05862](https://arxiv.org/abs/1904.05862)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.05862.md)]. - ThumbNet: One Thumbnail Image Contains All You Need for Recognition - [[1904.05034](https://arxiv.org/abs/1904.05034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.05034.md)]. - On zero-shot recognition of generic objects - [[1904.04957](https://arxiv.org/abs/1904.04957)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.04957.md)]. 
- Leveraging the Invariant Side of Generative Zero-Shot Learning - [[1904.04092](https://arxiv.org/abs/1904.04092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.04092.md)]. - Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics - [[1904.03597](https://arxiv.org/abs/1904.03597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.03597.md)]. - Data Shapley: Equitable Valuation of Data for Machine Learning - [[1904.02868](https://arxiv.org/abs/1904.02868)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.02868.md)]. - VideoBERT: A Joint Model for Video and Language Representation Learning - [[1904.01766](https://arxiv.org/abs/1904.01766)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.01766.md)]. - Creativity Inspired Zero-Shot Learning - [[1904.01109](https://arxiv.org/abs/1904.01109)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.01109.md)]. ### March 2019 - Interpreting Black Box Models via Hypothesis Testing - [[1904.00045](https://arxiv.org/abs/1904.00045)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1904.00045.md)]. - Wasserstein Dependency Measure for Representation Learning - [[1903.11780](https://arxiv.org/abs/1903.11780)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.11780.md)]. - Self-Supervised Learning via Conditional Motion Propagation - [[1903.11412](https://arxiv.org/abs/1903.11412)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.11412.md)]. - Simple Applications of BERT for Ad Hoc Document Retrieval - [[1903.10972](https://arxiv.org/abs/1903.10972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.10972.md)]. - Generalized Convolution and Efficient Language Recognition - [[1903.10677](https://arxiv.org/abs/1903.10677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.10677.md)]. 
- sharpDARTS: Faster and More Accurate Differentiable Architecture Search - [[1903.09900](https://arxiv.org/abs/1903.09900)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.09900.md)]. - Learning Correspondence from the Cycle-Consistency of Time - [[1903.07593](https://arxiv.org/abs/1903.07593)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.07593.md)]. - A Deep Look into Neural Ranking Models for Information Retrieval - [[1903.06902](https://arxiv.org/abs/1903.06902)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.06902.md)]. - All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification - [[1903.05285](https://arxiv.org/abs/1903.05285)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.05285.md)]. - Pluralistic Image Completion - [[1903.04227](https://arxiv.org/abs/1903.04227)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.04227.md)]. - Deep Reinforcement Learning of Volume-guided Progressive View Inpainting for 3D Point Scene Completion from a Single Depth Image - [[1903.04019](https://arxiv.org/abs/1903.04019)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.04019.md)]. - CLEVR-Dialog: A Diagnostic Dataset for Multi-Round Reasoning in Visual Dialog - [[1903.03166](https://arxiv.org/abs/1903.03166)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.03166.md)]. - Self-Supervised Learning of 3D Human Pose using Multi-view Geometry - [[1903.02330](https://arxiv.org/abs/1903.02330)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.02330.md)]. - High-Fidelity Image Generation With Fewer Labels - [[1903.02271](https://arxiv.org/abs/1903.02271)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.02271.md)]. - Learning Latent Plans from Play - [[1903.01973](https://arxiv.org/abs/1903.01973)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.01973.md)]. 
- Lenses and Learners - [[1903.03671](https://arxiv.org/abs/1903.03671)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.03671.md)]. - Stabilizing the Lottery Ticket Hypothesis - [[1903.01611](https://arxiv.org/abs/1903.01611)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.01611.md)]. - Differentiable Causal Computations via Delayed Trace - [[1903.01093](https://arxiv.org/abs/1903.01093)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.01093.md)]. - Semantic-Guided Multi-Attention Localization for Zero-Shot Learning - [[1903.00502](https://arxiv.org/abs/1903.00502)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1903.00502.md)]. ### February 2019 - Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labels - [[1902.11038](https://arxiv.org/abs/1902.11038)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.11038.md)]. - A Theoretical Analysis of Contrastive Unsupervised Representation Learning - [[1902.09229](https://arxiv.org/abs/1902.09229)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.09229.md)]. - From open learners to open games - [[1902.08666](https://arxiv.org/abs/1902.08666)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.08666.md)]. - Evaluating the Search Phase of Neural Architecture Search - [[1902.08142](https://arxiv.org/abs/1902.08142)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.08142.md)]. - Predicting city safety perception based on visual image content - [[1902.06871](https://arxiv.org/abs/1902.06871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.06871.md)]. - SC-FEGAN: Face Editing Generative Adversarial Network with User's Sketch and Color - [[1902.06838](https://arxiv.org/abs/1902.06838)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.06838.md)]. 
- CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model - [[1902.06423](https://arxiv.org/abs/1902.06423)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.06423.md)]. - Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey - [[1902.06162](https://arxiv.org/abs/1902.06162)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.06162.md)]. - LS-Tree: Model Interpretation When the Data Are Linguistic - [[1902.04187](https://arxiv.org/abs/1902.04187)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.04187.md)]. - Towards Automatic Concept-based Explanations - [[1902.03129](https://arxiv.org/abs/1902.03129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.03129.md)]. - Depthwise Convolution is All You Need for Learning Multiple Visual Domains - [[1902.00927](https://arxiv.org/abs/1902.00927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.00927.md)]. - Collaborative Sampling in Generative Adversarial Networks - [[1902.00813](https://arxiv.org/abs/1902.00813)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.00813.md)]. - Parameter-Efficient Transfer Learning for NLP - [[1902.00751](https://arxiv.org/abs/1902.00751)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1902.00751.md)]. ### January 2019 - Compositionality for Recursive Neural Networks - [[1901.10723](https://arxiv.org/abs/1901.10723)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.10723.md)]. - Personalized Dialogue Generation with Diversified Traits - [[1901.09672](https://arxiv.org/abs/1901.09672)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.09672.md)]. - On the (In)fidelity and Sensitivity for Explanations - [[1901.09392](https://arxiv.org/abs/1901.09392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.09392.md)]. 
- Revisiting Self-Supervised Visual Representation Learning - [[1901.09005](https://arxiv.org/abs/1901.09005)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.09005.md)]. - Diffusion Variational Autoencoders - [[1901.08991](https://arxiv.org/abs/1901.08991)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.08991.md)]. - Self-Supervised Generalisation with Meta Auxiliary Learning - [[1901.08933](https://arxiv.org/abs/1901.08933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.08933.md)]. - Improving Sequence-to-Sequence Learning via Optimal Transport - [[1901.06283](https://arxiv.org/abs/1901.06283)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.06283.md)]. - Foreground-aware Image Inpainting - [[1901.05945](https://arxiv.org/abs/1901.05945)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.05945.md)]. - Passage Re-ranking with BERT - [[1901.04085](https://arxiv.org/abs/1901.04085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.04085.md)]. - Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions - [[1901.03729](https://arxiv.org/abs/1901.03729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.03729.md)]. - Detecting Overfitting of Deep Generative Networks via Latent Recovery - [[1901.03396](https://arxiv.org/abs/1901.03396)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.03396.md)]. - A Comprehensive Survey on Graph Neural Networks - [[1901.00596](https://arxiv.org/abs/1901.00596)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.00596.md)]. - Visualizing Deep Similarity Networks - [[1901.00536](https://arxiv.org/abs/1901.00536)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.00536.md)]. 
- EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning - [[1901.00212](https://arxiv.org/abs/1901.00212)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.00212.md)]. - A Theoretical Analysis of Deep Q-Learning - [[1901.00137](https://arxiv.org/abs/1901.00137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1901.00137.md)]. </div></details> <details open><summary><strong>2018</strong></summary><div> ### December 2018 - Improving the Interpretability of Deep Neural Networks with Knowledge Distillation - [[1812.10924](https://arxiv.org/abs/1812.10924)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1812.10924.md)]. - Adaptive Confidence Smoothing for Generalized Zero-Shot Learning - [[1812.09903](https://arxiv.org/abs/1812.09903)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1812.09903.md)]. - Face Completion with Semantic Knowledge and Collaborative Adversarial Learning - [[1812.03252](https://arxiv.org/abs/1812.03252)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1812.03252.md)]. - Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders - [[1812.01784](https://arxiv.org/abs/1812.01784)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1812.01784.md)]. - Deep Inception Generative Network for Cognitive Image Inpainting - [[1812.01458](https://arxiv.org/abs/1812.01458)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1812.01458.md)]. ### November 2018 - Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects - [[1811.11553](https://arxiv.org/abs/1811.11553)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.11553.md)]. - Coordinate-based Texture Inpainting for Pose-Guided Image Generation - [[1811.11459](https://arxiv.org/abs/1811.11459)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.11459.md)]. 
- GAN Dissection: Visualizing and Understanding Generative Adversarial Networks - [[1811.10597](https://arxiv.org/abs/1811.10597)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.10597.md)]. - Self-Supervised Video Representation Learning with Space-Time Cubic Puzzles - [[1811.09795](https://arxiv.org/abs/1811.09795)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.09795.md)]. - Bytes are All You Need: End-to-End Multilingual Speech Recognition and Synthesis with Bytes - [[1811.09021](https://arxiv.org/abs/1811.09021)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.09021.md)]. - Causal Inference by String Diagram Surgery - [[1811.08338](https://arxiv.org/abs/1811.08338)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.08338.md)]. - Generalized Zero-Shot Recognition based on Visually Semantic Embedding - [[1811.07993](https://arxiv.org/abs/1811.07993)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.07993.md)]. - Scalable agent alignment via reward modeling: a research direction - [[1811.07871](https://arxiv.org/abs/1811.07871)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.07871.md)]. - On Hallucinating Context and Background Pixels from a Face Mask using Multi-scale GANs - [[1811.07104](https://arxiv.org/abs/1811.07104)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.07104.md)]. - Grasp2Vec: Learning Object Representations from Self-Supervised Grasping - [[1811.06964](https://arxiv.org/abs/1811.06964)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.06964.md)]. - Reward learning from human preferences and demonstrations in Atari - [[1811.06521](https://arxiv.org/abs/1811.06521)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.06521.md)]. 
- CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling - [[1811.10996](https://arxiv.org/abs/1811.10996)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.10996.md)]. - Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning - [[1811.12808](https://arxiv.org/abs/1811.12808)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.12808.md)]. - Generative Dual Adversarial Network for Generalized Zero-shot Learning - [[1811.04857](https://arxiv.org/abs/1811.04857)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.04857.md)]. - Blockwise Parallel Decoding for Deep Autoregressive Models - [[1811.03115](https://arxiv.org/abs/1811.03115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.03115.md)]. - Image Chat: Engaging Grounded Conversations - [[1811.00945](https://arxiv.org/abs/1811.00945)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.00945.md)]. - Persistent-Homology-based Machine Learning and its Applications -- A Survey - [[1811.00252](https://arxiv.org/abs/1811.00252)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1811.00252.md)]. ### October 2018 - Taking Human out of Learning Applications: A Survey on Automated Machine Learning - [[1810.13306](https://arxiv.org/abs/1810.13306)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.13306.md)]. - Automatically Evolving CNN Architectures Based on Blocks - [[1810.11875](https://arxiv.org/abs/1810.11875)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.11875.md)]. - Image Inpainting via Generative Multi-column Convolutional Neural Networks - [[1810.08771](https://arxiv.org/abs/1810.08771)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.08771.md)]. - Graph HyperNetworks for Neural Architecture Search - [[1810.05749](https://arxiv.org/abs/1810.05749)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.05749.md)]. 
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding - [[1810.04805](https://arxiv.org/abs/1810.04805)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.04805.md)].
- Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning - [[1810.03043](https://arxiv.org/abs/1810.03043)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1810.03043.md)].

### September 2018

- Learning Long-Range Perception Using Self-Supervision from Short-Range Sensors and Odometry - [[1809.07207](https://arxiv.org/abs/1809.07207)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1809.07207.md)].
- TVQA: Localized, Compositional Video Question Answering - [[1809.01696](https://arxiv.org/abs/1809.01696)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1809.01696.md)].

### August 2018

- AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale - [[1808.10583](https://arxiv.org/abs/1808.10583)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.10583.md)].
- Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning - [[1808.09442](https://arxiv.org/abs/1808.09442)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.09442.md)].
- All You Need is "Love": Evading Hate-speech Detection - [[1808.09115](https://arxiv.org/abs/1808.09115)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.09115.md)].
- Everybody Dance Now - [[1808.07371](https://arxiv.org/abs/1808.07371)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.07371.md)].
- Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction - [[1808.03867](https://arxiv.org/abs/1808.03867)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.03867.md)].
- Self-Supervised Model Adaptation for Multimodal Semantic Segmentation - [[1808.03833](https://arxiv.org/abs/1808.03833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.03833.md)].
- Learning Actionable Representations from Visual Observations - [[1808.00928](https://arxiv.org/abs/1808.00928)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.00928.md)].
- Efficient Progressive Neural Architecture Search - [[1808.00391](https://arxiv.org/abs/1808.00391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1808.00391.md)].

### July 2018

- Regional Multi-scale Approach for Visually Pleasing Explanations of Deep Neural Networks - [[1807.11720](https://arxiv.org/abs/1807.11720)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.11720.md)].
- Improving Spatiotemporal Self-Supervision by Deep Reinforcement Learning - [[1807.11293](https://arxiv.org/abs/1807.11293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.11293.md)].
- Diverse feature visualizations reveal invariances in early layers of deep neural networks - [[1807.10589](https://arxiv.org/abs/1807.10589)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.10589.md)].
- Cross-lingual Argumentation Mining: Machine Translation (and a bit of Projection) is All You Need! - [[1807.08998](https://arxiv.org/abs/1807.08998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.08998.md)].
- Explaining Image Classifiers by Counterfactual Generation - [[1807.08024](https://arxiv.org/abs/1807.08024)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.08024.md)].
- Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search - [[1807.06906](https://arxiv.org/abs/1807.06906)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.06906.md)].
- Visual Reinforcement Learning with Imagined Goals - [[1807.04742](https://arxiv.org/abs/1807.04742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.04742.md)].
- Representation Learning with Contrastive Predictive Coding - [[1807.03748](https://arxiv.org/abs/1807.03748)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.03748.md)].
- Talk the Walk: Navigating New York City through Grounded Dialogue - [[1807.03367](https://arxiv.org/abs/1807.03367)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.03367.md)].
- A Tutorial on Bayesian Optimization - [[1807.02811](https://arxiv.org/abs/1807.02811)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1807.02811.md)].

### June 2018

- A Benchmark for Interpretability Methods in Deep Neural Networks - [[1806.10758](https://arxiv.org/abs/1806.10758)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.10758.md)].
- This Looks Like That: Deep Learning for Interpretable Image Recognition - [[1806.10574](https://arxiv.org/abs/1806.10574)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.10574.md)].
- Video Inpainting by Jointly Learning Temporal Structure and Spatial Details - [[1806.08482](https://arxiv.org/abs/1806.08482)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.08482.md)].
- RISE: Randomized Input Sampling for Explanation of Black-box Models - [[1806.07421](https://arxiv.org/abs/1806.07421)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.07421.md)].
- Self-Supervised Feature Learning by Learning to Spot Artifacts - [[1806.05024](https://arxiv.org/abs/1806.05024)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.05024.md)].
- Polynomial Regression As an Alternative to Neural Nets - [[1806.06850](https://arxiv.org/abs/1806.06850)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.06850.md)].
- Resource-Efficient Neural Architect - [[1806.07912](https://arxiv.org/abs/1806.07912)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.07912.md)].
- Auto-Meta: Automated Gradient Based Meta Learner Search - [[1806.06927](https://arxiv.org/abs/1806.06927)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.06927.md)].
- Free-Form Image Inpainting with Gated Convolution - [[1806.03589](https://arxiv.org/abs/1806.03589)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.03589.md)].
- Dank Learning: Generating Memes Using Deep Neural Networks - [[1806.04510](https://arxiv.org/abs/1806.04510)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.04510.md)].
- A Peek Into the Hidden Layers of a Convolutional Neural Network Through a Factorization Lens - [[1806.02012](https://arxiv.org/abs/1806.02012)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.02012.md)].
- Soccer on Your Tabletop - [[1806.00890](https://arxiv.org/abs/1806.00890)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.00890.md)].

### May 2018

- Explaining Explanations: An Overview of Interpretability of Machine Learning - [[1806.00069](https://arxiv.org/abs/1806.00069)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1806.00069.md)].
- How Important Is a Neuron? - [[1805.12233](https://arxiv.org/abs/1805.12233)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.12233.md)].
- Rethinking Knowledge Graph Propagation for Zero-Shot Learning - [[1805.11724](https://arxiv.org/abs/1805.11724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.11724.md)].
- AutoAugment: Learning Augmentation Policies from Data - [[1805.09501](https://arxiv.org/abs/1805.09501)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.09501.md)].
- Semantic Network Interpretation - [[1805.08969](https://arxiv.org/abs/1805.08969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.08969.md)].
- Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators - [[1805.08352](https://arxiv.org/abs/1805.08352)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.08352.md)].
- Progressive Ensemble Networks for Zero-Shot Recognition - [[1805.07473](https://arxiv.org/abs/1805.07473)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.07473.md)].
- Unsupervised Learning of Neural Networks to Explain Neural Networks - [[1805.07468](https://arxiv.org/abs/1805.07468)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.07468.md)].
- A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations - [[1805.07039](https://arxiv.org/abs/1805.07039)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.07039.md)].
- Fighting Fake News: Image Splice Detection via Learned Self-Consistency - [[1805.04096](https://arxiv.org/abs/1805.04096)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.04096.md)].
- SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting - [[1805.03356](https://arxiv.org/abs/1805.03356)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.03356.md)].
- Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination - [[1805.01978](https://arxiv.org/abs/1805.01978)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.01978.md)].
- AGI Safety Literature Review - [[1805.01109](https://arxiv.org/abs/1805.01109)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.01109.md)].
- Learnable PINs: Cross-Modal Embeddings for Person Identity - [[1805.00833](https://arxiv.org/abs/1805.00833)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1805.00833.md)].
### April 2018

- How convolutional neural network see the world - A survey of convolutional neural network visualization methods - [[1804.11191](https://arxiv.org/abs/1804.11191)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.11191.md)].
- FaceShop: Deep Sketch-based Face Image Editing - [[1804.08972](https://arxiv.org/abs/1804.08972)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.08972.md)].
- Subgoal Discovery for Hierarchical Dialogue Policy Learning - [[1804.07855](https://arxiv.org/abs/1804.07855)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.07855.md)].
- Image Inpainting for Irregular Holes Using Partial Convolutions - [[1804.07723](https://arxiv.org/abs/1804.07723)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.07723.md)].
- Audio-Visual Scene Analysis with Self-Supervised Multisensory Features - [[1804.03641](https://arxiv.org/abs/1804.03641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.03641.md)].
- Bimonoidal Structure of Probability Monads - [[1804.03527](https://arxiv.org/abs/1804.03527)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.03527.md)].
- Deep Painterly Harmonization - [[1804.03189](https://arxiv.org/abs/1804.03189)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.03189.md)].
- The Sound of Pixels - [[1804.03160](https://arxiv.org/abs/1804.03160)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.03160.md)].
- Learning to Separate Object Sounds by Watching Unlabeled Video - [[1804.01665](https://arxiv.org/abs/1804.01665)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.01665.md)].
- Self-supervised Learning of Geometrically Stable Features Through Probabilistic Introspection - [[1804.01552](https://arxiv.org/abs/1804.01552)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.01552.md)].
- The simple essence of automatic differentiation - [[1804.00746](https://arxiv.org/abs/1804.00746)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.00746.md)].

### March 2018

- Meta-Learning Update Rules for Unsupervised Representation Learning - [[1804.00222](https://arxiv.org/abs/1804.00222)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1804.00222.md)].
- MemGEN: Memory is All You Need - [[1803.11203](https://arxiv.org/abs/1803.11203)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.11203.md)].
- Structural inpainting - [[1803.10348](https://arxiv.org/abs/1803.10348)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.10348.md)].
- Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning - [[1803.09956](https://arxiv.org/abs/1803.09956)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.09956.md)].
- BAGAN: Data Augmentation with Balancing GAN - [[1803.09655](https://arxiv.org/abs/1803.09655)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.09655.md)].
- Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs - [[1803.08035](https://arxiv.org/abs/1803.08035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.08035.md)].
- Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge - [[1803.05457](https://arxiv.org/abs/1803.05457)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.05457.md)].
- Preserving Semantic Relations for Zero-Shot Learning - [[1803.03049](https://arxiv.org/abs/1803.03049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1803.03049.md)].

### February 2018

- Relational Reasoning for Markov Chains in a Probabilistic Guarded Lambda Calculus - [[1802.09787](https://arxiv.org/abs/1802.09787)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.09787.md)].
- Machine Theory of Mind - [[1802.07740](https://arxiv.org/abs/1802.07740)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.07740.md)].
- Diversity is All You Need: Learning Skills without a Reward Function - [[1802.06070](https://arxiv.org/abs/1802.06070)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.06070.md)].
- Multimodal Explanations: Justifying Decisions and Pointing to the Evidence - [[1802.08129](https://arxiv.org/abs/1802.08129)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.08129.md)].
- On Characterizing the Capacity of Neural Networks using Algebraic Topology - [[1802.04443](https://arxiv.org/abs/1802.04443)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.04443.md)].
- A Survey Of Methods For Explaining Black Box Models - [[1802.01933](https://arxiv.org/abs/1802.01933)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.01933.md)].
- Learning Image Representations by Completing Damaged Jigsaw Puzzles - [[1802.01880](https://arxiv.org/abs/1802.01880)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.01880.md)].
- The Matrix Calculus You Need For Deep Learning - [[1802.01528](https://arxiv.org/abs/1802.01528)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.01528.md)].
- Singularities in Einstein-conformally coupled Higgs cosmological models - [[1802.0774](https://arxiv.org/abs/1802.0774)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.0774.md)].
- Interpreting CNNs via Decision Trees - [[1802.00121](https://arxiv.org/abs/1802.00121)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1802.00121.md)].

### January 2018

- Shift-Net: Image Inpainting via Deep Feature Rearrangement - [[1801.09392](https://arxiv.org/abs/1801.09392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1801.09392.md)].
- Understanding Deep Architectures by Visual Summaries - [[1801.09103](https://arxiv.org/abs/1801.09103)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1801.09103.md)].
- Generative Image Inpainting with Contextual Attention - [[1801.07892](https://arxiv.org/abs/1801.07892)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1801.07892.md)].
- Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks - [[1801.03454](https://arxiv.org/abs/1801.03454)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1801.03454.md)].

</div></details>

<details open><summary><strong>2017</strong></summary><div>

### December 2017

- Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation - [[1712.08268](https://arxiv.org/abs/1712.08268)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.08268.md)].
- Objects that Sound - [[1712.06651](https://arxiv.org/abs/1712.06651)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.06651.md)].
- Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks - [[1712.06302](https://arxiv.org/abs/1712.06302)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.06302.md)].
- Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions - [[1712.05884](https://arxiv.org/abs/1712.05884)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.05884.md)].
- A Probability Monad as the Colimit of Spaces of Finite Samples - [[1712.05363](https://arxiv.org/abs/1712.05363)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.05363.md)].
- Geometry-Aware Learning of Maps for Camera Localization - [[1712.03342](https://arxiv.org/abs/1712.03342)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.03342.md)].
- Self-supervised Learning of Motion Capture - [[1712.01337](https://arxiv.org/abs/1712.01337)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1712.01337.md)].

### November 2017

- Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) - [[1711.11279](https://arxiv.org/abs/1711.11279)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.11279.md)].
- Deep Image Prior - [[1711.10925](https://arxiv.org/abs/1711.10925)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.10925.md)].
- Backprop as Functor: A compositional perspective on supervised learning - [[1711.10455](https://arxiv.org/abs/1711.10455)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.10455.md)].
- Population Based Training of Neural Networks - [[1711.09846](https://arxiv.org/abs/1711.09846)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.09846.md)].
- Distilling a Neural Network Into a Soft Decision Tree - [[1711.09784](https://arxiv.org/abs/1711.09784)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.09784.md)].
- Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery - [[1711.09082](https://arxiv.org/abs/1711.09082)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.09082.md)].
- Contextual-based Image Inpainting: Infer, Match, and Translate - [[1711.08590](https://arxiv.org/abs/1711.08590)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.08590.md)].
- Improvements to context based self-supervised learning - [[1711.06379](https://arxiv.org/abs/1711.06379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.06379.md)].
- Denotational validation of higher-order Bayesian inference - [[1711.03219](https://arxiv.org/abs/1711.03219)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.03219.md)].
- Hierarchical Representations for Efficient Architecture Search - [[1711.00436](https://arxiv.org/abs/1711.00436)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1711.00436.md)].

### October 2017

- Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks - [[1710.11063](https://arxiv.org/abs/1710.11063)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.11063.md)].
- Generalized End-to-End Loss for Speaker Verification - [[1710.10467](https://arxiv.org/abs/1710.10467)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.10467.md)].
- Dynamic Routing Between Capsules - [[1710.09829](https://arxiv.org/abs/1710.09829)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.09829.md)].
- Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention - [[1710.08969](https://arxiv.org/abs/1710.08969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.08969.md)].
- Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation - [[1710.06169](https://arxiv.org/abs/1710.06169)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.06169.md)].
- Searching for Activation Functions - [[1710.05941](https://arxiv.org/abs/1710.05941)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.05941.md)].
- Generalization in Deep Learning - [[1710.05468](https://arxiv.org/abs/1710.05468)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.05468.md)].
- A systematic study of the class imbalance problem in convolutional neural networks - [[1710.05381](https://arxiv.org/abs/1710.05381)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.05381.md)].
- Recent Advances in Zero-shot Recognition - [[1710.04837](https://arxiv.org/abs/1710.04837)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.04837.md)].
- Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions - [[1710.04806](https://arxiv.org/abs/1710.04806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1710.04806.md)].

### September 2017

- Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces - [[1709.10163](https://arxiv.org/abs/1709.10163)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.10163.md)].
- Information structures and their cohomology - [[1709.07807](https://arxiv.org/abs/1709.07807)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.07807.md)].
- N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning - [[1709.06030](https://arxiv.org/abs/1709.06030)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.06030.md)].
- AISHELL-1: An Open-Source Mandarin Speech Corpus and A Speech Recognition Baseline - [[1709.05522](https://arxiv.org/abs/1709.05522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.05522.md)].
- ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids - [[1709.00505](https://arxiv.org/abs/1709.00505)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.00505.md)].

### August 2017

- Disintegration and Bayesian Inversion via String Diagrams - [[1709.00322](https://arxiv.org/abs/1709.00322)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1709.00322.md)].
- Multi-task Self-Supervised Visual Learning - [[1708.07860](https://arxiv.org/abs/1708.07860)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.07860.md)].
- Twin Networks: Matching the Future for Sequence Generation - [[1708.06742](https://arxiv.org/abs/1708.06742)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.06742.md)].
- Representation Learning by Learning to Count - [[1708.06734](https://arxiv.org/abs/1708.06734)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.06734.md)].
- SMASH: One-Shot Model Architecture Search through HyperNetworks - [[1708.05344](https://arxiv.org/abs/1708.05344)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.05344.md)].
- Transitive Invariance for Self-supervised Visual Representation Learning - [[1708.02901](https://arxiv.org/abs/1708.02901)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.02901.md)].
- Localizing Moments in Video with Natural Language - [[1708.01641](https://arxiv.org/abs/1708.01641)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.01641.md)].
- CASSL: Curriculum Accelerated Self-Supervised Learning - [[1708.01354](https://arxiv.org/abs/1708.01354)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.01354.md)].
- Unsupervised Representation Learning by Sorting Sequences - [[1708.01246](https://arxiv.org/abs/1708.01246)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1708.01246.md)].

### July 2017

- Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering - [[1707.07998](https://arxiv.org/abs/1707.07998)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1707.07998.md)].
- Zero-Shot Learning -- A Comprehensive Evaluation of the Good, the Bad and the Ugly - [[1707.00600](https://arxiv.org/abs/1707.00600)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1707.00600.md)].
- A Channel-Based Perspective on Conjugate Priors - [[1707.00269](https://arxiv.org/abs/1707.00269)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1707.00269.md)].

### June 2017

- Methods for Interpreting and Understanding Deep Neural Networks - [[1706.07979](https://arxiv.org/abs/1706.07979)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1706.07979.md)].
- SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability - [[1706.05806](https://arxiv.org/abs/1706.05806)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1706.05806.md)].
- SmoothGrad: removing noise by adding noise - [[1706.03825](https://arxiv.org/abs/1706.03825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1706.03825.md)].
- Attention Is All You Need - [[1706.03762](https://arxiv.org/abs/1706.03762)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1706.03762.md)].
- Deep reinforcement learning from human preferences - [[1706.03741](https://arxiv.org/abs/1706.03741)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1706.03741.md)].

### May 2017

- Self-supervised learning of visual features through embedding images into text topic spaces - [[1705.08631](https://arxiv.org/abs/1705.08631)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1705.08631.md)].
- Look, Listen and Learn - [[1705.08168](https://arxiv.org/abs/1705.08168)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1705.08168.md)].
- Learning how to explain neural networks: PatternNet and PatternAttribution - [[1705.05598](https://arxiv.org/abs/1705.05598)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1705.05598.md)].
- TALL: Temporal Activity Localization via Language Query - [[1705.02101](https://arxiv.org/abs/1705.02101)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1705.02101.md)].
- Dense-Captioning Events in Videos - [[1705.00754](https://arxiv.org/abs/1705.00754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1705.00754.md)].

### April 2017

- DeepArchitect: Automatically Designing and Training Deep Architectures - [[1704.08792](https://arxiv.org/abs/1704.08792)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.08792.md)].
- Unsupervised Learning of Depth and Ego-Motion from Video - [[1704.07813](https://arxiv.org/abs/1704.07813)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.07813.md)].
- Learning to Fly by Crashing - [[1704.05588](https://arxiv.org/abs/1704.05588)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.05588.md)].
- Unsupervised Learning by Predicting Noise - [[1704.05310](https://arxiv.org/abs/1704.05310)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.05310.md)].
- TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering - [[1704.04497](https://arxiv.org/abs/1704.04497)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.04497.md)].
- DeepPermNet: Visual Permutation Learning - [[1704.02729](https://arxiv.org/abs/1704.02729)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.02729.md)].
- Towards Building Large Scale Multimodal Domain-Aware Conversation Systems - [[1704.00200](https://arxiv.org/abs/1704.00200)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1704.00200.md)].

### March 2017

- Tacotron: Towards End-to-End Speech Synthesis - [[1703.10135](https://arxiv.org/abs/1703.10135)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.10135.md)].
- Towards Automatic Learning of Procedures from Web Instructional Videos - [[1703.09788](https://arxiv.org/abs/1703.09788)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.09788.md)].
- Where to put the Image in an Image Caption Generator - [[1703.09137](https://arxiv.org/abs/1703.09137)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.09137.md)].
- Mask R-CNN - [[1703.06870](https://arxiv.org/abs/1703.06870)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.06870.md)].
- Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation - [[1703.02018](https://arxiv.org/abs/1703.02018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.02018.md)].
- All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation - [[1703.01827](https://arxiv.org/abs/1703.01827)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1703.01827.md)].
### February 2017

- Towards A Rigorous Science of Interpretable Machine Learning - [[1702.08608](https://arxiv.org/abs/1702.08608)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1702.08608.md)].
- Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations - [[1702.07826](https://arxiv.org/abs/1702.07826)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1702.07826.md)].
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis - [[1702.04595](https://arxiv.org/abs/1702.04595)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1702.04595.md)].
- Face Aging With Conditional Generative Adversarial Networks - [[1702.01983](https://arxiv.org/abs/1702.01983)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1702.01983.md)].

### January 2017

- Interactive Learning from Policy-Dependent Human Feedback - [[1701.06049](https://arxiv.org/abs/1701.06049)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1701.06049.md)].
- A Convenient Category for Higher-Order Probability Theory - [[1701.02547](https://arxiv.org/abs/1701.02547)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1701.02547.md)].
- Unsupervised Learning of Long-Term Motion Dynamics for Videos - [[1701.01821](https://arxiv.org/abs/1701.01821)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1701.01821.md)].

</div></details>

<details open><summary><strong>2016</strong></summary><div>

### December 2016

- Unsupervised Perceptual Rewards for Imitation Learning - [[1612.06699](https://arxiv.org/abs/1612.06699)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1612.06699.md)].

### November 2016

- High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis - [[1611.09969](https://arxiv.org/abs/1611.09969)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.09969.md)].
- Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction - [[1611.09842](https://arxiv.org/abs/1611.09842)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.09842.md)].
- Gaze Embeddings for Zero-Shot Image Classification - [[1611.09309](https://arxiv.org/abs/1611.09309)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.09309.md)].
- Visual Dialog - [[1611.08669](https://arxiv.org/abs/1611.08669)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.08669.md)].
- Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation - [[1611.08663](https://arxiv.org/abs/1611.08663)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.08663.md)].
- Self-Supervised Video Representation Learning With Odd-One-Out Networks - [[1611.06646](https://arxiv.org/abs/1611.06646)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.06646.md)].
- Learning a Deep Embedding Model for Zero-Shot Learning - [[1611.05088](https://arxiv.org/abs/1611.05088)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1611.05088.md)].

### October 2016

- Safety Verification of Deep Neural Networks - [[1610.06940](https://arxiv.org/abs/1610.06940)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1610.06940.md)].
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization - [[1610.02391](https://arxiv.org/abs/1610.02391)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1610.02391.md)].
- Supervision via Competition: Robot Adversaries for Learning Tasks - [[1610.01685](https://arxiv.org/abs/1610.01685)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1610.01685.md)].

### September 2016

- Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge - [[1609.09475](https://arxiv.org/abs/1609.09475)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1609.09475.md)].
### July 2016 - Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification - [[1607.08085](https://arxiv.org/abs/1607.08085)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1607.08085.md)]. ### June 2016 - A Taxonomy and Library for Visualizing Learned Features in Convolutional Neural Networks - [[1606.07757](https://arxiv.org/abs/1606.07757)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1606.07757.md)]. - Learning to Poke by Poking: Experiential Learning of Intuitive Physics - [[1606.07419](https://arxiv.org/abs/1606.07419)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1606.07419.md)]. - Concrete Problems in AI Safety - [[1606.06565](https://arxiv.org/abs/1606.06565)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1606.06565.md)]. - The Mythos of Model Interpretability - [[1606.03490](https://arxiv.org/abs/1606.03490)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1606.03490.md)]. ### May 2016 - Adversarial Feature Learning - [[1605.09782](https://arxiv.org/abs/1605.09782)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1605.09782.md)]. - An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild - [[1605.04253](https://arxiv.org/abs/1605.04253)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1605.04253.md)]. - Movie Description - [[1605.03705](https://arxiv.org/abs/1605.03705)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1605.03705.md)]. - Not Just a Black Box: Learning Important Features Through Propagating Activation Differences - [[1605.01713](https://arxiv.org/abs/1605.01713)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1605.01713.md)]. ### April 2016 - Context Encoders: Feature Learning by Inpainting - [[1604.07379](https://arxiv.org/abs/1604.07379)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1604.07379.md)]. 
- Joint Unsupervised Learning of Deep Representations and Image Clusters - [[1604.03628](https://arxiv.org/abs/1604.03628)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1604.03628.md)]. - The Curious Robot: Learning Visual Representations via Physical Interactions - [[1604.01360](https://arxiv.org/abs/1604.01360)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1604.01360.md)]. ### March 2016 - Colorful Image Colorization - [[1603.08511](https://arxiv.org/abs/1603.08511)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1603.08511.md)]. - Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance - [[1603.08047](https://arxiv.org/abs/1603.08047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1603.08047.md)]. - Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue - [[1603.04992](https://arxiv.org/abs/1603.04992)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1603.04992.md)]. - XGBoost: A Scalable Tree Boosting System - [[1603.02754](https://arxiv.org/abs/1603.02754)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1603.02754.md)]. ### February 2016 - A Probabilistic Dependent Type System based on Non-Deterministic Beta Reduction - [[1602.06420](https://arxiv.org/abs/1602.06420)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1602.06420.md)]. - "Why Should I Trust You?": Explaining the Predictions of Any Classifier - [[1602.04938](https://arxiv.org/abs/1602.04938)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1602.04938.md)]. </div></details> <details open><summary><strong>2015</strong></summary><div> ### December 2015 - A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction - [[1512.06293](https://arxiv.org/abs/1512.06293)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1512.06293.md)]. 
- Deep Residual Learning for Image Recognition - [[1512.03385](https://arxiv.org/abs/1512.03385)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1512.03385.md)]. - MovieQA: Understanding Stories in Movies through Question-Answering - [[1512.02902](https://arxiv.org/abs/1512.02902)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1512.02902.md)]. - Explaining NonLinear Classification Decisions with Deep Taylor Decomposition - [[1512.02479](https://arxiv.org/abs/1512.02479)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1512.02479.md)]. ### November 2015 - A Type Theory for Probabilistic and Bayesian Reasoning - [[1511.09230](https://arxiv.org/abs/1511.09230)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1511.09230.md)]. - All you need is a good init - [[1511.06422](https://arxiv.org/abs/1511.06422)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1511.06422.md)]. - Unsupervised Deep Embedding for Clustering Analysis - [[1511.06335](https://arxiv.org/abs/1511.06335)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1511.06335.md)]. ### October 2015 - A Primer on Neural Network Models for Natural Language Processing - [[1510.00726](https://arxiv.org/abs/1510.00726)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1510.00726.md)]. ### September 2015 - Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours - [[1509.06825](https://arxiv.org/abs/1509.06825)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1509.06825.md)]. ### June 2015 - Inverting Visual Representations with Convolutional Networks - [[1506.02753](https://arxiv.org/abs/1506.02753)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1506.02753.md)]. - You Only Look Once: Unified, Real-Time Object Detection - [[1506.02640](https://arxiv.org/abs/1506.02640)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1506.02640.md)]. 
- Visualizing and Understanding Recurrent Networks - [[1506.02078](https://arxiv.org/abs/1506.02078)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1506.02078.md)]. ### May 2015 - A Critical Review of Recurrent Neural Networks for Sequence Learning - [[1506.00019](https://arxiv.org/abs/1506.00019)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1506.00019.md)]. - Unsupervised Visual Representation Learning by Context Prediction - [[1505.05192](https://arxiv.org/abs/1505.05192)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1505.05192.md)]. ### April 2015 - Fast R-CNN - [[1504.08083](https://arxiv.org/abs/1504.08083)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1504.08083.md)]. ### March 2015 - Label-Embedding for Image Classification - [[1503.08677](https://arxiv.org/abs/1503.08677)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1503.08677.md)]. - LSTM: A Search Space Odyssey - [[1503.04069](https://arxiv.org/abs/1503.04069)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1503.04069.md)]. ### February 2015 - Unsupervised Learning of Video Representations using LSTMs - [[1502.04681](https://arxiv.org/abs/1502.04681)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1502.04681.md)]. - Show, Attend and Tell: Neural Image Caption Generation with Visual Attention - [[1502.03044](https://arxiv.org/abs/1502.03044)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1502.03044.md)]. ### January 2015 - Transductive Multi-view Zero-Shot Learning - [[1501.04560](https://arxiv.org/abs/1501.04560)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1501.04560.md)]. - A Dataset for Movie Description - [[1501.02530](https://arxiv.org/abs/1501.02530)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1501.02530.md)]. 
</div></details> <details open><summary><strong>2014</strong></summary><div> ### December 2014 - Adam: A Method for Stochastic Optimization - [[1412.6980](https://arxiv.org/abs/1412.6980)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1412.6980.md)]. - Object Detectors Emerge in Deep Scene CNNs - [[1412.6856](https://arxiv.org/abs/1412.6856)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1412.6856.md)]. - Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling - [[1412.3555](https://arxiv.org/abs/1412.3555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1412.3555.md)]. ### November 2014 - Understanding Deep Image Representations by Inverting Them - [[1412.0035](https://arxiv.org/abs/1412.0035)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1412.0035.md)]. - Frequentism and Bayesianism: A Python-driven Primer - [[1411.5018](https://arxiv.org/abs/1411.5018)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1411.5018.md)]. - Show and Tell: A Neural Image Caption Generator - [[1411.4555](https://arxiv.org/abs/1411.4555)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1411.4555.md)]. ### September 2014 - Computable Stochastic Processes - [[1409.4667](https://arxiv.org/abs/1409.4667)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1409.4667.md)]. - On the Properties of Neural Machine Translation: Encoder-Decoder Approaches - [[1409.1259](https://arxiv.org/abs/1409.1259)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1409.1259.md)]. ### May 2014 - Microsoft COCO: Common Objects in Context - [[1405.0312](https://arxiv.org/abs/1405.0312)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1405.0312.md)]. ### April 2014 - A Tutorial on Principal Component Analysis - [[1404.1100](https://arxiv.org/abs/1404.1100)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1404.1100.md)]. 
</div></details> <details open><summary><strong>2013</strong></summary><div> ### December 2013 - Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps - [[1312.6034](https://arxiv.org/abs/1312.6034)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1312.6034.md)]. - Bayesian machine learning via category theory - [[1312.1445](https://arxiv.org/abs/1312.1445)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1312.1445.md)]. ### November 2013 - Visualizing and Understanding Convolutional Networks - [[1311.2901](https://arxiv.org/abs/1311.2901)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1311.2901.md)]. - Rich feature hierarchies for accurate object detection and semantic segmentation - [[1311.2524](https://arxiv.org/abs/1311.2524)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1311.2524.md)]. ### September 2013 - Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation - [[1309.6392](https://arxiv.org/abs/1309.6392)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1309.6392.md)]. </div></details> <details open><summary><strong>2012</strong></summary><div> ### June 2012 - Representation Learning: A Review and New Perspectives - [[1206.5538](https://arxiv.org/abs/1206.5538)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1206.5538.md)]. </div></details> <details open><summary><strong>2010</strong></summary><div> ### November 2010 - Classifying Clustering Schemes - [[1011.5270](https://arxiv.org/abs/1011.5270)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/1011.5270.md)]. </div></details> <details open><summary><strong>2009</strong></summary><div> ### September 2009 - Chaos in Partial Differential Equations - [[0909.0910](https://arxiv.org/abs/0909.0910)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/0909.0910.md)]. 
### August 2009 - Sparse Canonical Correlation Analysis - [[0908.2724](https://arxiv.org/abs/0908.2724)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/0908.2724.md)]. </div></details>
nampdn-ai/tiny-lessons
2023-08-29T05:58:57.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "source_datasets:nampdn-ai/tiny-en", "language:en", "license:cc-by-sa-4.0", "region:us" ]
nampdn-ai
null
null
null
11
118
--- license: cc-by-sa-4.0 task_categories: - text-generation language: - en pretty_name: Tiny Lessons size_categories: - 10K<n<100K source_datasets: - nampdn-ai/tiny-en --- # Tiny Lessons The dataset is designed to help causal language models learn more effectively from raw web text. It is augmented from public web text and contains two key components: theoretical concepts and practical examples. The theoretical concepts provide a foundation for understanding the underlying principles and ideas behind the information contained in the raw web text. The practical examples demonstrate how these theoretical concepts can be applied in real-world situations. This dataset is an ideal resource for ML researchers working with causal language models. I hope you find it useful and welcome any feedback or suggestions you may have. [View Nomic Atlas](https://atlas.nomic.ai/map/af5b399c-caa4-4ea9-8efc-7165972de209/c096774c-f979-4337-a5ea-08ea18be9fa0)
few_rel
2023-06-01T14:59:47.000Z
[ "task_categories:other", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:mit", "relation-extraction", "arxiv:1810.10147", "arxiv:1910.07124", "region:us" ]
null
FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances across different domains.
@inproceedings{han-etal-2018-fewrel, title = "{F}ew{R}el: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation", author = "Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1514", doi = "10.18653/v1/D18-1514", pages = "4803--4809" } @inproceedings{gao-etal-2019-fewrel, title = "{F}ew{R}el 2.0: Towards More Challenging Few-Shot Relation Classification", author = "Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1649", doi = "10.18653/v1/D19-1649", pages = "6251--6256" }
null
2
117
--- annotations_creators: - crowdsourced - machine-generated language_creators: - found language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K - n<1K source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: fewrel pretty_name: Few-Shot Relation Classification Dataset tags: - relation-extraction dataset_info: - config_name: default features: - name: relation dtype: string - name: tokens sequence: string - name: head struct: - name: text dtype: string - name: type dtype: string - name: indices sequence: sequence: int64 - name: tail struct: - name: text dtype: string - name: type dtype: string - name: indices sequence: sequence: int64 - name: names sequence: string splits: - name: train_wiki num_bytes: 19923155 num_examples: 44800 - name: val_nyt num_bytes: 1385642 num_examples: 2500 - name: val_pubmed num_bytes: 488502 num_examples: 1000 - name: val_semeval num_bytes: 2646249 num_examples: 8851 - name: val_wiki num_bytes: 5147348 num_examples: 11200 - name: pubmed_unsupervised num_bytes: 1117703 num_examples: 2500 download_size: 22674323 dataset_size: 30708599 - config_name: pid2name features: - name: relation dtype: string - name: names sequence: string splits: - name: pid2name num_bytes: 81607 num_examples: 744 download_size: 22674323 dataset_size: 81607 config_names: - default - pid2name --- # Dataset Card for few_rel ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the 
Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub Page](https://thunlp.github.io/) - **Repository:** [GitHub](https://github.com/thunlp/FewRel) - **Paper:** [FewRel](https://arxiv.org/abs/1810.10147), [FewRel 2.0](https://arxiv.org/abs/1910.07124) - **Leaderboard:** [GitHub Leaderboard](https://thunlp.github.io/fewrel.html) - **Point of Contact:** [Needs More Information] ### Dataset Summary FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances across different domains. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The dataset contains English text, as used by writers on Wikipedia, and crowdsourced English annotations. ## Dataset Structure ### Data Instances An instance from `train_wiki` split: ``` {'head': {'indices': [[16]], 'text': 'tjq', 'type': 'Q1331049'}, 'names': ['place served by transport hub', 'territorial entity or entities served by this transport hub (airport, train station, etc.)'], 'relation': 'P931', 'tail': {'indices': [[13, 14]], 'text': 'tanjung pandan', 'type': 'Q3056359'}, 'tokens': ['Merpati', 'flight', '106', 'departed', 'Jakarta', '(', 'CGK', ')', 'on', 'a', 'domestic', 'flight', 'to', 'Tanjung', 'Pandan', '(', 'TJQ', ')', '.']} ``` ### Data Fields For `default`: - `relation`: a `string` feature containing the PID of the relation. - `tokens`: a `list` of `string` features containing tokens for the text. - `head`: a dictionary containing: - `text`: a `string` feature representing the head entity.
- `type`: a `string` feature representing the type of the head entity. - `indices`: a `list` containing `list` of token indices. - `tail`: a dictionary containing: - `text`: a `string` feature representing the tail entity. - `type`: a `string` feature representing the type of the tail entity. - `indices`: a `list` containing `list` of token indices. - `names`: a `list` of `string` features containing relation names. For `pubmed_unsupervised` split, this is set to a `list` with an empty `string`. For `val_semeval` and `val_pubmed` split, this is set to a `list` with the `string` from the `relation` field. ### Data Splits `train_wiki`: 44800 `val_nyt`: 2500 `val_pubmed`: 1000 `val_semeval`: 8851 `val_wiki`: 11200 `pubmed_unsupervised`: 2500 ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators For FewRel: Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong For FewRel 2.0: Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie ### Licensing Information ``` MIT License Copyright (c) 2018 THUNLP Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
``` ### Citation Information ``` @inproceedings{han-etal-2018-fewrel, title = "{F}ew{R}el: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation", author = "Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1514", doi = "10.18653/v1/D18-1514", pages = "4803--4809" } ``` ``` @inproceedings{gao-etal-2019-fewrel, title = "{F}ew{R}el 2.0: Towards More Challenging Few-Shot Relation Classification", author = "Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1649", doi = "10.18653/v1/D19-1649", pages = "6251--6256" } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
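The `indices` fields described in the card above map directly into `tokens`; a minimal, self-contained sketch that resolves the sample `train_wiki` instance shown in the card (the `entity_surface` helper is hypothetical and not part of the dataset loader):

```python
# Resolve entity `indices` against `tokens` for a FewRel instance.
# Illustrative only; the field layout follows the `train_wiki`
# example from the dataset card.

def entity_surface(tokens, indices):
    """Return the surface string for each mention span in `indices`."""
    return [" ".join(tokens[i] for i in span) for span in indices]

instance = {
    "head": {"indices": [[16]], "text": "tjq", "type": "Q1331049"},
    "relation": "P931",
    "tail": {"indices": [[13, 14]], "text": "tanjung pandan", "type": "Q3056359"},
    "tokens": ["Merpati", "flight", "106", "departed", "Jakarta", "(", "CGK",
               ")", "on", "a", "domestic", "flight", "to", "Tanjung", "Pandan",
               "(", "TJQ", ")", "."],
}

head = entity_surface(instance["tokens"], instance["head"]["indices"])
tail = entity_surface(instance["tokens"], instance["tail"]["indices"])
print(head, tail)  # ['TJQ'] ['Tanjung Pandan']
```

Note that the stored `text` fields are lowercased, so they match the recovered surface forms only up to case.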
joelniklaus/mapa
2022-10-25T16:17:09.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:multilingual", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pt", "language:ro", "language:sk", "language:sv", "license:cc-by-4.0", "named-entity-recognition-and-classification", "region:us" ]
joelniklaus
null
null
null
4
117
--- annotations_creators: - other language_creators: - found language: - multilingual - bg - cs - da - de - el - en - es - et - fi - fr - ga - hu - it - lt - lv - mt - nl - pt - ro - sk - sv license: - cc-by-4.0 multilinguality: - multilingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Spanish Datasets for Sensitive Entity Detection in the Legal Domain tags: - named-entity-recognition-and-classification --- # Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/),
[Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36) - **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch) ### Dataset Summary The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the [MAPA project](https://mapa-project.eu/), which foresees two annotation levels, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification. ### Supported Tasks and Leaderboards The dataset supports the task of Named Entity Recognition and Classification (NERC). ### Languages The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping. ### Data Fields For the annotation the documents have been split into sentences. The annotation has been done on the token level. The files contain the following data fields: - `language`: language of the sentence - `type`: The document type of the sentence. Currently, only EUR-LEX is supported. - `file_name`: The document file name the sentence belongs to.
- `sentence_number`: The number of the sentence inside its document. - `tokens`: The list of tokens in the sentence. - `coarse_grained`: The coarse-grained annotations for each token. - `fine_grained`: The fine-grained annotations for each token. As previously stated, the annotation has been conducted on a global and a more fine-grained level. The tagset used for the global and the fine-grained named entities is the following: - Address - Building - City - Country - Place - Postcode - Street - Territory - Amount - Unit - Value - Date - Year - Standard Abbreviation - Month - Day of the Week - Day - Calendar Event - Person - Age - Email - Ethnic Category - Family Name - Financial - Given Name – Female - Given Name – Male - Health Insurance Number - ID Document Number - Initial Name - Marital Status - Medical Record Number - Nationality - Profession - Role - Social Security Number - Title - Url - Organisation - Time - Vehicle - Build Year - Colour - License Plate Number - Model - Type The final coarse-grained tagset (in IOB notation) is the following: `['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']` The final fine-grained tagset (in IOB notation) is the following: `[ 'O', 'B-BUILDING', 'I-BUILDING', 'B-CITY', 'I-CITY', 'B-COUNTRY', 'I-COUNTRY', 'B-PLACE', 'I-PLACE', 'B-TERRITORY', 'I-TERRITORY', 'I-UNIT', 'B-UNIT', 'B-VALUE', 'I-VALUE', 'B-YEAR', 'I-YEAR', 'B-STANDARD ABBREVIATION', 'I-STANDARD ABBREVIATION', 'B-MONTH', 'I-MONTH', 'B-DAY', 'I-DAY', 'B-AGE', 'I-AGE', 'B-ETHNIC CATEGORY', 'I-ETHNIC CATEGORY', 'B-FAMILY NAME', 'I-FAMILY NAME', 'B-INITIAL NAME', 'I-INITIAL NAME', 'B-MARITAL STATUS', 'I-MARITAL STATUS', 'B-PROFESSION', 'I-PROFESSION', 'B-ROLE', 'I-ROLE', 'B-NATIONALITY', 'I-NATIONALITY', 'B-TITLE', 'I-TITLE', 'B-URL', 'I-URL', 'B-TYPE', 'I-TYPE', ]` ### Data Splits Splits created by Joel Niklaus.
| language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences |
|:---------|--------------:|-------------------:|-------------:|------------------:|-----------------------:|-----------------:|
| bg | 9 | 1 | 2 | 1411 | 166 | 560 |
| cs | 9 | 1 | 2 | 1464 | 176 | 563 |
| da | 9 | 1 | 2 | 1455 | 164 | 550 |
| de | 9 | 1 | 2 | 1457 | 166 | 558 |
| el | 9 | 1 | 2 | 1529 | 174 | 584 |
| en | 9 | 1 | 2 | 893 | 98 | 408 |
| es | 7 | 1 | 1 | 806 | 248 | 155 |
| et | 9 | 1 | 2 | 1391 | 163 | 516 |
| fi | 9 | 1 | 2 | 1398 | 187 | 531 |
| fr | 9 | 1 | 2 | 1297 | 97 | 490 |
| ga | 9 | 1 | 2 | 1383 | 165 | 515 |
| hu | 9 | 1 | 2 | 1390 | 171 | 525 |
| it | 9 | 1 | 2 | 1411 | 162 | 550 |
| lt | 9 | 1 | 2 | 1413 | 173 | 548 |
| lv | 9 | 1 | 2 | 1383 | 167 | 553 |
| mt | 9 | 1 | 2 | 937 | 93 | 442 |
| nl | 9 | 1 | 2 | 1391 | 164 | 530 |
| pt | 9 | 1 | 2 | 1086 | 105 | 390 |
| ro | 9 | 1 | 2 | 1480 | 175 | 557 |
| sk | 9 | 1 | 2 | 1395 | 165 | 526 |
| sv | 9 | 1 | 2 | 1453 | 175 | 539 |
## Dataset Creation ### Curation Rationale *„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classification] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022) ### Source Data #### Initial Data Collection and Normalization The dataset consists of documents taken from the EUR-Lex corpus, which is publicly available. No further information on the data collection process is given in de Gibert Bonet et al. (2022). #### Who are the source language producers? The source language producers are presumably lawyers.
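The IOB tagsets listed above can be decoded into labeled spans, for example when evaluating NERC predictions against the `coarse_grained` or `fine_grained` fields. A minimal sketch (the `iob_to_spans` helper is hypothetical and not shipped with the dataset):

```python
# Decode a token-level IOB tag sequence (as in `coarse_grained` /
# `fine_grained`) into (label, start, end_exclusive) spans.
# Hypothetical helper for illustration only.

def iob_to_spans(tags):
    """Return (label, start, end_exclusive) spans from an IOB tag sequence."""
    spans = []
    label, start = None, None
    for i, tag in enumerate(tags):
        continues = tag.startswith("I-") and tag[2:] == label
        if label is not None and not continues:
            # current span ends before this token
            spans.append((label, start, i))
            label, start = None, None
        if tag.startswith("B-"):
            label, start = tag[2:], i
        elif tag.startswith("I-") and label is None:
            # tolerate a dangling I- tag by opening a new span
            label, start = tag[2:], i
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

tags = ["O", "B-PERSON", "I-PERSON", "O", "B-DATE", "B-ADDRESS", "I-ADDRESS"]
print(iob_to_spans(tags))
# [('PERSON', 1, 3), ('DATE', 4, 5), ('ADDRESS', 5, 7)]
```

Span boundaries use token indices with an exclusive end, so a span's tokens are `tokens[start:end]`.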
### Annotations #### Annotation process *"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022) #### Who are the annotators? Only one annotator conducted the annotation. No further information is provided in de Gibert Bonet et al. (2022). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing only the annotated documents from the EUR-Lex corpus were available. Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format.
For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. ## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch) ; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch) ; [Github](https://github.com/kapllan)). ### Licensing Information [Attribution 4.0 International (CC BY 4.0) ](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{DeGibertBonet2022, author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite}, journal = {Proceedings of the Language Resources and Evaluation Conference}, number = {June}, pages = {3751--3760}, title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}}, url = {https://aclanthology.org/2022.lrec-1.400}, year = {2022} } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
ywchoi/pubmed_abstract_6
2022-09-13T01:09:44.000Z
[ "region:us" ]
ywchoi
null
null
null
0
117
Entry not found
bigbio/quaero
2022-12-22T15:46:29.000Z
[ "multilinguality:monolingual", "language:fr", "license:other", "region:us" ]
bigbio
The QUAERO French Medical Corpus was initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, which was used in the CLEF eHealth evaluation lab [2][3]. A selection of MEDLINE titles and EMEA documents was manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS): 1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003), were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures. 2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities whose span overlaps with that of another entity should still be annotated.
For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051). The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows: Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data): - MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data): - MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data): - MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_test_bioc folder: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al. Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor Brat2BioC: conversion tool between brat and BioC BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013 Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the stand-alone BRAT format.
It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr All questions regarding the task or data should be addressed to aurelie.neveol@limsi.fr
@InProceedings{neveol14quaero, author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy and Rosset, Sophie and Zweigenbaum, Pierre}, title = {The {QUAERO} {French} Medical Corpus: A Ressource for Medical Entity Recognition and Normalization}, OPTbooktitle = {Proceedings of the Fourth Workshop on Building and Evaluating Ressources for Health and Biomedical Text Processing}, booktitle = {Proc of BioTextMining Work}, OPTseries = {BioTxtM 2014}, year = {2014}, pages = {24--30}, }
null
1
117
--- language: - fr bigbio_language: - French license: other multilinguality: monolingual bigbio_license_shortname: GFDL_1p3 pretty_name: QUAERO homepage: https://quaerofrenchmed.limsi.fr/ bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - NAMED_ENTITY_DISAMBIGUATION --- # Dataset Card for QUAERO ## Dataset Description - **Homepage:** https://quaerofrenchmed.limsi.fr/ - **Pubmed:** True - **Public:** True - **Tasks:** NER,NED The QUAERO French Medical Corpus was initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, which was used in the CLEF eHealth evaluation lab [2][3]. A selection of MEDLINE titles and EMEA documents was manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS): 1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003), were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures. 2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated.
For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities whose span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051). The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows: Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data): - MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data): - MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data): - MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format - EMEA_test_bioc folder: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor Brat2BioC: conversion tool between brat and BioC BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013 Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the stand-alone BRAT format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr All questions regarding the task or data should be addressed to aurelie.neveol@limsi.fr ## Citation Information ``` @InProceedings{neveol14quaero, author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy and Rosset, Sophie and Zweigenbaum, Pierre}, title = {The {QUAERO} {French} Medical Corpus: A Ressource for Medical Entity Recognition and Normalization}, OPTbooktitle = {Proceedings of the Fourth Workshop on Building and Evaluating Ressources for Health and Biomedical Text Processing}, booktitle = {Proc of BioTextMining Work}, OPTseries = {BioTxtM 2014}, year = {2014}, pages = {24--30}, } ```
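The overlapping-annotation convention in point (c) of the guidelines can be sketched with a minimal in-memory representation. The character offsets and dict layout below are our own illustration; the Semantic Groups and CUIs are taken from the “infarctus du myocarde” example in the card:

```python
text = "infarctus du myocarde"  # myocardial infarction

# Stand-off annotations: (start, end) character offsets, UMLS Semantic
# Group, and the concept(s) the mention maps to (a list, since a mention
# may map to more than one CUI).
annotations = [
    {"span": (0, 21),  "group": "DISORDER", "cuis": ["C0027051"]},  # infarctus du myocarde
    {"span": (13, 21), "group": "ANATOMY",  "cuis": ["C0027061"]},  # myocarde
]

def is_nested(inner, outer):
    """True if inner's span lies inside outer's span without being identical."""
    (a, b), (c, d) = inner["span"], outer["span"]
    return c <= a and b <= d and (a, b) != (c, d)
```

Here `is_nested(annotations[1], annotations[0])` holds: the ANATOMY mention “myocarde” sits inside the DISORDER mention, and both must be annotated.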
Jzuluaga/uwb_atcc
2022-12-05T11:15:20.000Z
[ "task_categories:automatic-speech-recognition", "multilinguality:monolingual", "language:en", "license:cc-by-nc-sa-4.0", "audio", "automatic-speech-recognition", "en-atc", "en", "noisy-speech-recognition", "speech-recognition", "arxiv:2203.16822", "region:us" ]
Jzuluaga
null
null
null
0
117
--- dataset_info: features: - name: id dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: segment_start_time dtype: float32 - name: segment_end_time dtype: float32 - name: duration dtype: float32 splits: - name: test num_bytes: 140620332.25 num_examples: 2822 - name: train num_bytes: 608597323.625 num_examples: 11291 download_size: 711464914 dataset_size: 749217655.875 tags: - audio - automatic-speech-recognition - en-atc - en - noisy-speech-recognition - speech-recognition task_categories: - automatic-speech-recognition language: - en multilinguality: - monolingual license: - cc-by-nc-sa-4.0 --- # Dataset Card for UWB-ATCC corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages and Other Details](#languages-and-other-details) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [UWB-ATCC corpus homepage](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) - **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic) - **Paper:** [Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development](https://link.springer.com/article/10.1007/s10579-019-09449-5) - **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822) ### Dataset Summary The UWB-ATCC Corpus is provided by the University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots.
The speech is manually transcribed and labeled with information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours) but we plan to search for additional data next year. The audio data format is 8 kHz, 16-bit PCM, mono. Importantly, the speaker roles can be obtained from the `id (string)` field. For instance: - `_PI`: segment with only pilot speech - `_AT`: segment with only ATCO speech - `PIAT`: segment with both ATCO and pilot speech ### Supported Tasks and Leaderboards - `automatic-speech-recognition`. Fine-tuned models are already available, e.g. [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim). ### Languages and other details The text and the recordings are in English. The authors took advantage of the fact that one of their industrial partners develops complex IT solutions for several ATC authorities and airports and, as such, has access to the ATC communication recordings collected in the Czech airspace. This partner was able to secure the following data: - Ground control—communication before takeoff and after landing—19.2 h of data. - Tower control—communication during takeoff, landing and landing standby—22.5 h. - Approach control—communication during landing approach—25.5 h. - Area control—communication during overflights and cruises—71.3 h. (Not all data is released. Check their website [here](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0)) ## Dataset Structure ### Data Fields - `id (string)`: a recording identifier for each example - `audio (audio)`: audio data for the given ID - `text (string)`: transcript of the file, already normalized.
Follow these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc) - `segment_start_time (float32)`: segment start time (normally 0) - `segment_end_time (float32)`: segment end time - `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time` ## Additional Information ### Licensing Information The licensing of the dataset follows that chosen by the creators of the [UWB-ATCC corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0): [Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information Contributors who prepared, processed, normalized and uploaded the dataset to Hugging Face: ``` @article{zuluaga2022how, title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR?
An Extensive Benchmark on Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } @article{zuluaga2022bertraffic, title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } @article{zuluaga2022atco2, title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others}, journal={arXiv preprint arXiv:2211.04054}, year={2022} } ``` Authors of the dataset: ``` @article{vsmidl2019air, title={Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development}, author={{\v{S}}m{\'\i}dl, Lubo{\v{s}} and {\v{S}}vec, Jan and Tihelka, Daniel and Matou{\v{s}}ek, Jind{\v{r}}ich and Romportl, Jan and Ircing, Pavel}, journal={Language Resources and Evaluation}, volume={53}, number={3}, pages={449--464}, year={2019}, publisher={Springer} } ```
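The id-suffix convention listed in the dataset summary (`_PI`, `_AT`, `PIAT`) can be decoded with a small helper. This is a sketch based only on the suffixes documented in the card, so ids of any other shape fall back to `"unknown"`:

```python
def speaker_role(segment_id: str) -> str:
    """Map a UWB-ATCC segment id to the speaker role(s) its suffix encodes."""
    if segment_id.endswith("PIAT"):
        return "pilot+atco"  # both ATCO and pilot speech in the segment
    if segment_id.endswith("_PI"):
        return "pilot"       # pilot speech only
    if segment_id.endswith("_AT"):
        return "atco"        # controller speech only
    return "unknown"
```

Note the `PIAT` check comes first, since that suffix does not share an ending with the other two.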
DDSC/partial-danish-gigaword-small-test-sample
2023-01-09T13:11:16.000Z
[ "language:da", "region:us" ]
DDSC
null
null
null
0
117
--- dataset_info: features: - name: text dtype: string - name: source dtype: string - name: doc_id dtype: string - name: LICENSE dtype: string - name: uri dtype: string - name: date_built dtype: string splits: - name: train num_bytes: 23816547.04337273 num_examples: 2411 download_size: 11686492 dataset_size: 23816547.04337273 language: - da pretty_name: Danish Gigaword Test Sample --- # Dataset Card for "Danish Gigaword Test Sample" This is a small sample of the dataset `DDSC/partial-danish-gigaword-no-twitter`. It is meant as a small dataset for testing code. It is constructed using the following code: ```python from datasets import concatenate_datasets, load_dataset # download the dataset from the Hugging Face Hub dataset = load_dataset("DDSC/partial-danish-gigaword-no-twitter") # all of the data is available in the train split, so we can simply select it dataset = dataset["train"] # filter it down to three domains legal = dataset.filter(lambda x: x["source"] == "retsinformationdk") news = dataset.filter(lambda x: x["source"] == "tv2r") speech = dataset.filter(lambda x: x["source"] == "spont") # downsample legal and news to 1000 examples each (speech is kept whole) legal = legal.select(range(1000)) news = news.select(range(1000)) # combine the three domains dataset = concatenate_datasets([legal, news, speech]) # upload to the Hub dataset.push_to_hub("DDSC/partial-danish-gigaword-small-test-sample") ```
Someman/hindi-summarization
2023-05-30T12:55:13.000Z
[ "task_categories:summarization", "size_categories:10K<n<100K", "language:hi", "license:mit", "region:us" ]
Someman
null
null
null
0
117
--- license: mit task_categories: - summarization language: hi original_source: >- https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus dataset_info: features: - name: headline dtype: string - name: summary dtype: string - name: article dtype: string splits: - name: train num_bytes: 410722079.5542422 num_examples: 55226 - name: test num_bytes: 102684238.44575782 num_examples: 13807 - name: valid num_bytes: 128376473 num_examples: 17265 download_size: 150571314 dataset_size: 641782791 pretty_name: hindi summarization size_categories: - 10K<n<100K --- # Dataset Card for Hindi Text Short and Large Summarization Corpus ## Dataset Description - Homepage: https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv ### Dataset Summary Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summaries collected from Hindi news websites. This is a first-of-its-kind dataset in Hindi that can be used to benchmark models for text summarization in Hindi. It does not contain articles included in the Hindi Text Short Summarization Corpus, which is being released in parallel with this dataset. The dataset retains the original punctuation, numbers, etc. in the articles. ### Languages The language is Hindi. ### Licensing Information MIT ### Citation Information https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv ### Contributions
C-MTEB/OCNLI
2023-07-28T07:10:28.000Z
[ "region:us" ]
C-MTEB
null
null
null
0
117
--- configs: - config_name: default data_files: - split: validation path: data/validation-* dataset_info: features: - name: sent1 sequence: string - name: sent2 sequence: string - name: labels sequence: int64 splits: - name: validation num_bytes: 222873 num_examples: 1 download_size: 153558 dataset_size: 222873 --- # Dataset Card for "OCNLI" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shnl/qg_vimmrc2.0
2023-09-19T16:00:46.000Z
[ "region:us" ]
shnl
null
null
null
0
117
Entry not found
HumanCompatibleAI/ppo-seals-Swimmer-v1
2023-09-27T07:01:55.000Z
[ "region:us" ]
HumanCompatibleAI
null
null
null
0
117
--- dataset_info: features: - name: obs sequence: sequence: float64 - name: acts sequence: sequence: float32 - name: infos sequence: string - name: terminal dtype: bool - name: rews sequence: float32 splits: - name: train num_bytes: 131302158 num_examples: 104 download_size: 23343768 dataset_size: 131302158 --- # Dataset Card for "ppo-seals-Swimmer-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)