The full dataset viewer is not available for this dataset; only a preview of the rows is shown below. Dataset generation failed because of a cast error.

Error code: `DatasetGenerationCastError`

Message:

```
An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'question', 'sample_id', 'evidence_id_1', 'evidence_id_2'}) and 4 missing columns ({'claim_id', 'evi_path', 'evi_path_original', 'claim_id_pair'}).
This happened while the json dataset builder was generating data using
hf://datasets/alabnii/sciclaimeval-shared-task/data/dev_task2_release.json (at revision 73b2074477204b8a0e3a2002b19f067c5bed1392), [/tmp/hf-datasets-cache/medium/datasets/76688353431443-config-parquet-and-info-alabnii-sciclaimeval-shar-ca63431d/hub/datasets--alabnii--sciclaimeval-shared-task/snapshots/73b2074477204b8a0e3a2002b19f067c5bed1392/data/dev_task2_release.json (origin=hf://datasets/alabnii/sciclaimeval-shared-task@73b2074477204b8a0e3a2002b19f067c5bed1392/data/dev_task2_release.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
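As the message suggests, one fix is to declare each task's data file as its own configuration in the dataset card's YAML front matter, so the viewer never tries to cast both schemas into one. A minimal sketch — the `task2` path is the file named in the error, while the `task1` filename is a hypothetical placeholder that must be adjusted to the repository's actual file:

```yaml
configs:
- config_name: task1
  data_files: data/dev_task1_release.json   # hypothetical filename; use the actual Task 1 file
- config_name: task2
  data_files: data/dev_task2_release.json   # the file named in the error above
```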
Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
sample_id: string
question: string
evidence_id_1: string
evidence_id_2: string
label: string
claim: string
context: string
caption: string
domain: string
evi_type: string
paper_id: string
use_context: string
operation: string
paper_path: string
detail_others: string
license_name: string
license_url: string
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 2238
to
{'paper_id': Value('string'), 'claim_id': Value('string'), 'claim': Value('string'), 'label': Value('string'), 'caption': Value('string'), 'evi_type': Value('string'), 'evi_path': Value('string'), 'context': Value('string'), 'domain': Value('string'), 'use_context': Value('string'), 'operation': Value('string'), 'paper_path': Value('string'), 'detail_others': Value('string'), 'license_name': Value('string'), 'license_url': Value('string'), 'claim_id_pair': Value('string'), 'evi_path_original': Value('string')}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'question', 'sample_id', 'evidence_id_1', 'evidence_id_2'}) and 4 missing columns ({'claim_id', 'evi_path', 'evi_path_original', 'claim_id_pair'}).
This happened while the json dataset builder was generating data using
hf://datasets/alabnii/sciclaimeval-shared-task/data/dev_task2_release.json (at revision 73b2074477204b8a0e3a2002b19f067c5bed1392), [/tmp/hf-datasets-cache/medium/datasets/76688353431443-config-parquet-and-info-alabnii-sciclaimeval-shar-ca63431d/hub/datasets--alabnii--sciclaimeval-shared-task/snapshots/73b2074477204b8a0e3a2002b19f067c5bed1392/data/dev_task2_release.json (origin=hf://datasets/alabnii/sciclaimeval-shared-task@73b2074477204b8a0e3a2002b19f067c5bed1392/data/dev_task2_release.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
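The mismatch can be confirmed locally before pushing to the Hub. A minimal stdlib sketch that diffs the column sets of two lists of JSON records, mirroring the check that raises `DatasetGenerationCastError` (the toy records below are stand-ins, not the actual release files):

```python
def column_diff(reference_rows, candidate_rows):
    """Compare the column sets of two lists of JSON records,
    reporting columns the way the datasets builder does."""
    ref_cols = set(reference_rows[0])
    cand_cols = set(candidate_rows[0])
    return {
        "new_columns": sorted(cand_cols - ref_cols),
        "missing_columns": sorted(ref_cols - cand_cols),
    }

# Tiny stand-ins for records from two differently shaped task files.
task1 = [{"claim_id": "c1", "claim": "...", "evi_path": "t.png", "claim_id_pair": "0001"}]
task2 = [{"sample_id": "s1", "question": "...", "evidence_id_1": "e1", "evidence_id_2": "e2"}]

diff = column_diff(task1, task2)
print(diff["new_columns"])      # columns present only in the second file
print(diff["missing_columns"])  # columns the second file lacks
```

Running a check like this on each pair of files in `data/` shows exactly which files need to be split into separate configurations.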
paper_id
string | claim_id
string | claim
string | label
string | caption
string | evi_type
string | evi_path
string | context
string | domain
string | use_context
string | operation
string | paper_path
string | detail_others
string | license_name
string | license_url
string | claim_id_pair
string | evi_path_original
string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.19137
|
val_tab_0001
|
Table 1 shows that our probabilistic inference module consistently outperforms its deterministic counterpart in terms of Avg and Last accuracy.
|
Supported
|
Table 1 : Performance comparison of different methods averaged over three runs. Best scores are in bold . Second best scores are in blue . The results for L2P, DualPrompt, and PROOF are taken from [ 92 ] . See App. Table 14 for std. dev. scores.
|
table
|
tables_png/dev/val_tab_0001.png
|
To understand our probabilistic inference modules further, we examine their performance against the deterministic variant of ours (Ours w/o VI).
|
ml
|
yes
|
Change the cell values
|
papers/dev/ml_2403.19137.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0001
|
tables/dev/val_tab_0001.tex
|
|
2403.19137
|
val_tab_0002
|
Table 1 shows that our probabilistic inference module consistently outperforms its deterministic counterpart in terms of Avg and Last accuracy.
|
Refuted
|
Table 1 : Performance comparison of different methods averaged over three runs. Best scores are in bold . Second best scores are in blue . The results for L2P, DualPrompt, and PROOF are taken from [ 92 ] . See App. Table 14 for std. dev. scores.
|
table
|
tables_png/dev/val_tab_0002.png
|
To understand our probabilistic inference modules further, we examine their performance against the deterministic variant of ours (Ours w/o VI).
|
ml
|
yes
|
Change the cell values
|
papers/dev/ml_2403.19137.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0001
|
tables/dev/val_tab_0002.tex
|
|
2203.01212
|
val_tab_0004
|
If we compare the two-layer network results from GeoLIP and Sampling in Table 1 , which is a lower bound of true Lipschitz constant, the ratio is within 1.783 .
|
Supported
|
Table 1: \ell_{\infty} -FGL estimation of various methods: DGeoLIP and NGeoLIP induce the same values on two layer networks. DGeoLIP always produces tighter estimations than LiPopt and MP do.
|
table
|
tables_png/dev/val_tab_0004.png
|
We have also shown that the two-layer network \ell_{\infty} -FGL estimation from GeoLIP has a theoretical guarantee with the approximation factor K_{G}<1.783 ( Theorem 3.3 ).
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2203.01212.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0002
|
tables/dev/val_tab_0004.tex
|
|
2209.12590
|
val_tab_0005
|
Table 2 shows that the adversary applies lower dropout probabilities to less informative words such as ‘unknown’ tokens that replace all out-of-dictionary words, and so offer little information about the next word.
|
Supported
|
Table 2 : Analysis of the Adversary. For selected SNLI (upper) and Yahoo (lower) sentences, word dropout scores are from a trained adversary and normalized per sentence. Darker colouring indicates a higher dropout probability. Boxed words are those selected to be dropped. <sos> = start of the sequence token; <eos> = end of the sequence token; _unk = unknown token.
|
table
|
tables_png/dev/val_tab_0005.png
|
We obtain further insight into what the adversary learns by analyzing the word dropout scores for different sentences.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12590.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0003
|
tables/dev/val_tab_0005.tex
|
|
2209.12590
|
val_tab_0006
|
Table 2 shows that the adversary applies lower dropout probabilities to less informative words such as ‘unknown’ tokens that replace all out-of-dictionary words, and so offer little information about the next word.
|
Refuted
|
Table 2 : Analysis of the Adversary. For selected SNLI (upper) and Yahoo (lower) sentences, word dropout scores are from a trained adversary and normalized per sentence. Darker colouring indicates a higher dropout probability. Boxed words are those selected to be dropped. <sos> = start of the sequence token; <eos> = end of the sequence token; _unk = unknown token.
|
table
|
tables_png/dev/val_tab_0006.png
|
We obtain further insight into what the adversary learns by analyzing the word dropout scores for different sentences.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12590.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0003
|
tables/dev/val_tab_0006.tex
|
|
2209.12590
|
val_tab_0008
|
We also show that adversarial training learns a useful generative model with meaningful latent space by interpolating between sentences (Table 3 ).
|
Supported
|
Table 3 : Sentence interpolation (Yelp dataset). Representations of two sentences (top, bottom) are obtained by feeding them through an adversarially trained VAE encoder. Three linearly interpolated representations are passed to the VAE decoder and sentences generated by greedy sampling (middle) .
|
table
|
tables_png/dev/val_tab_0008.png
|
We obtain further insight into what the adversary learns by analyzing the word dropout scores for different sentences. Table 2 shows that the adversary applies lower dropout probabilities to less informative words such as ‘unknown’ tokens that replace all out-of-dictionary words, and so offer little information about the next word. Depending on the data semantics, the adversary selects different types of words: for SNLI, verbs tend to be picked that explain the activity, e.g. working and participating ; for Yahoo, words are identified that carry question semantics, e.g. what , how , if , and when . Figure 4 (a) shows a quantitative analysis of dropout saliency map across different part-of-speech (POS) tags. Verb (verb), interjections (intj), and nouns (noun) have higher saliency scores (higher chances of being dropped) compared to punctuation (punc), determiners (det), and the start (sos) and end tokens (eos) which are relatively easier to predict given previous words.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12590.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0004
|
tables/dev/val_tab_0008.tex
|
|
2205.11361
|
val_tab_0009
|
Lastly, we note that using larger values of \gamma naively does not guarantee better test performance: one has to fine tune the parameters \beta , \mu , \sigma , \eta appropriately to achieve favorable trade-off between training stability and test performance.
|
Supported
|
Table 1: Shallow neural nets trained on the airfoil data set. The results in parenthesis are achieved with the variant ( 11 ).
All the results are averaged over 5 models trained with different seed values.
|
table
|
tables_png/dev/val_tab_0009.png
|
For the training, we use a fully connected shallow neural network of width 16 with ReLU activation and train for 3000 epochs with the learning rate \eta=0.1 , using mean square error (MSE) as the loss and choosing \beta=0.5 . Table 1 reports the average root MSE (RMSE) and the RMSE gap (defined as test RMSE - train RMSE) evaluated for models that are trained with 5 different seed values for this task. We can see that MPGD leads to both lower test RMSE and RMSE gap when compared with vanilla GD (baseline) and GD with uncorrelated Gaussian perturbations (see the results not in parenthesis in Table 1 ; here \mu=0.01 , \sigma=0.02 ). Using the form of the perturbations in ( 11 ) instead can also give lower RMSE gap (see the results in parenthesis in Table 1 ; here \sigma=\mu=0.01 ). Overall these results support our generalization theory for MPGD.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2205.11361.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0005
|
tables/dev/val_tab_0009.tex
|
|
2205.11361
|
val_tab_0010
|
Table 2 shows that MPGD can lead to better test performance and lower accuracy gap when compared to the baseline (full batch GD) and Gaussian noise-perturbed GD.
|
Supported
|
Table 2: ResNet-18 trained on CIFAR-10 for 1000 epochs. Here, accuracy gap = training accuracy - validation accuracy. The results in parenthesis are achieved with the variant of MPGD ( 11 ). All the results are averaged over 5 models trained with different seed values.
|
table
|
tables_png/dev/val_tab_0010.png
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2205.11361.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0006
|
tables/dev/val_tab_0010.tex
|
||
2209.12362
|
val_tab_0016
|
We also show that our performance boost does not come from the additional training dataset of ActivityNet in Table 3 .
|
Supported
|
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method.
“Vanilla” means using cross entropy (CE) loss in training.
“w/o informative los” means using CE and projection loss.
The numbers are top-1/top-5 accuracy, respectively.
Training data:
(e) Kinetics-400;
(f) SSv2;
(g) MiT;
(h) ActivityNet.
|
table
|
tables_png/dev/val_tab_0016.png
|
We then compare our method with state-of-the-art on these datasets. We train a higher resolution model with larger spatial inputs (312p) and achieves better performance compared to recent multi-dataset training methods, CoVER [ 59 ] and PolyVit [ 38 ] , on Kinetics-400, and significantly better on MiT and SSv2, as shown in Table 1 . Note that our model does not use any image training datasets, and our model computation cost is only a fraction of the baselines.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12362.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0007
|
tables/dev/val_tab_0016.tex
|
|
2209.12362
|
val_tab_0017
|
Also, longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting vanilla multi-dataset training is unstable.
|
Supported
|
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method.
“Vanilla” means using cross entropy (CE) loss in training.
“w/o informative los” means using CE and projection loss.
The numbers are top-1/top-5 accuracy, respectively.
Training data:
(e) Kinetics-400;
(f) SSv2;
(g) MiT;
(h) ActivityNet.
|
table
|
tables_png/dev/val_tab_0017.png
|
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with cross-entropy loss. The vanilla model is trained from a K400 checkpoint as ours. As shown in Table 3 , we try training the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12362.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0008
|
tables/dev/val_tab_0017.tex
|
|
2209.12362
|
val_tab_0018
|
Also, longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting vanilla multi-dataset training is unstable.
|
Refuted
|
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method.
“Vanilla” means using cross entropy (CE) loss in training.
“w/o informative los” means using CE and projection loss.
The numbers are top-1/top-5 accuracy, respectively.
Training data:
(e) Kinetics-400;
(f) SSv2;
(g) MiT;
(h) ActivityNet.
|
table
|
tables_png/dev/val_tab_0018.png
|
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with cross-entropy loss. The vanilla model is trained from a K400 checkpoint as ours. As shown in Table 3 , we try training the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12362.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0008
|
tables/dev/val_tab_0018.tex
|
|
2209.12362
|
val_tab_0019
|
In terms of performance on ActivityNet, we observe that both training methods achieve good results, which might be because ActivityNet classes are highly overlapped with Kinetics-400 (65 out of 200).
|
Supported
|
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method.
“Vanilla” means using cross entropy (CE) loss in training.
“w/o informative los” means using CE and projection loss.
The numbers are top-1/top-5 accuracy, respectively.
Training data:
(e) Kinetics-400;
(f) SSv2;
(g) MiT;
(h) ActivityNet.
|
table
|
tables_png/dev/val_tab_0019.png
|
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with cross-entropy loss. The vanilla model is trained from a K400 checkpoint as ours. As shown in Table 3 , we try training the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method. Also, longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting vanilla multi-dataset training is unstable.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12362.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0009
|
tables/dev/val_tab_0019.tex
|
|
2209.12362
|
val_tab_0020
|
As shown in Table 3 , the performance on MiT and SSv2 suffers by a large margin, indicating that the projection design helps boost training by better utilizing multi-dataset information.
|
Supported
|
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method.
“Vanilla” means using cross entropy (CE) loss in training.
“w/o informative los” means using CE and projection loss.
The numbers are top-1/top-5 accuracy, respectively.
Training data:
(e) Kinetics-400;
(f) SSv2;
(g) MiT;
(h) ActivityNet.
|
table
|
tables_png/dev/val_tab_0020.png
|
How important is the projection loss? We then experiment with removing the projection heads (Section 3.2 ) during multi-dataset training. The model is trained with the original cross-entropy loss and the informative loss.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.12362.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0010
|
tables/dev/val_tab_0020.tex
|
|
2211.01233
|
val_tab_0022
|
Finally, we show that ViTCA does not dramatically suffer when no explicit positioning is used—in contrast to typical Transformer-based models—as cells are still able to localize themselves by relying on their stored hidden information.
|
Supported
|
Quantitative ablation for denoising autoencoding with ViTCA (unless otherwise stated via prefix) on CelebA [ 47 ] . Boldface and underlining denote best and second best results. Italicized items denote baseline configuration settings. † Trained with gradient checkpointing [ 44 ] , which slightly alters round-off error during backpropagation, resulting in slight variations of results compared to training without checkpointing. See Appendix A.2 .
|
table
|
tables_png/dev/val_tab_0022.png
|
As shown in Tab. 2 , ViTCA benefits from an increase to most CA and Transformer-centric parameters, at the cost of computational complexity and/or an increase in parameter count. A noticeable decrease in performance is observed when embed size d\!=\!512 , most likely due to the vast increase in parameter count necessitating more training. In the original ViT, multiple encoding blocks were needed before the model could exhibit performance equivalent to their baseline CNN [ 13 ] , as verified in our ablation with our ViT. However, for ViTCA we notice an inverse relationship of the effect of Transformer depth, causing a divergence in cell state. It is not clear why this is the case, as we have observed that the LN layers and overflow losses otherwise encourage a contractive F_{\theta} . This is an investigation we leave for future work. Despite the benefits of increasing h , we use h\!=\!4 for our baseline to optimize runtime performance.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2211.01233.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0011
|
tables/dev/val_tab_0022.tex
|
|
2211.01233
|
val_tab_0023
|
Finally, we show that ViTCA does not dramatically suffer when no explicit positioning is used—in contrast to typical Transformer-based models—as cells are still able to localize themselves by relying on their stored hidden information.
|
Refuted
|
Quantitative ablation for denoising autoencoding with ViTCA (unless otherwise stated via prefix) on CelebA [ 47 ] . Boldface and underlining denote best and second best results. Italicized items denote baseline configuration settings. † Trained with gradient checkpointing [ 44 ] , which slightly alters round-off error during backpropagation, resulting in slight variations of results compared to training without checkpointing. See Appendix A.2 .
|
table
|
tables_png/dev/val_tab_0023.png
|
As shown in Tab. 2 , ViTCA benefits from an increase to most CA and Transformer-centric parameters, at the cost of computational complexity and/or an increase in parameter count. A noticeable decrease in performance is observed when embed size d\!=\!512 , most likely due to the vast increase in parameter count necessitating more training. In the original ViT, multiple encoding blocks were needed before the model could exhibit performance equivalent to their baseline CNN [ 13 ] , as verified in our ablation with our ViT. However, for ViTCA we notice an inverse relationship of the effect of Transformer depth, causing a divergence in cell state. It is not clear why this is the case, as we have observed that the LN layers and overflow losses otherwise encourage a contractive F_{\theta} . This is an investigation we leave for future work. Despite the benefits of increasing h , we use h\!=\!4 for our baseline to optimize runtime performance.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2211.01233.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0011
|
tables/dev/val_tab_0023.tex
|
|
2206.15241
|
val_tab_0026
|
In contrast, deep learning-based post-processing rather hinders performance for ‘heavy rain’ prediction.
|
Supported
|
Table 3: Evaluation metrics of KoMet and baseline models for precipitation while 12 variables are utilized for the training. Best performances are marked in bold.
|
table
|
tables_png/dev/val_tab_0026.png
|
Table 3 shows the results for various lead times ranging from 6 to 87 hours, according to the changes of the architectures. Compared to the statistics of GDAPS-KIM for predicting class ‘rain’, the qualities of post-processed predictions with the three networks are consistently improved while that of MetNet is the best.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2206.15241.json
|
Public Domain
|
http://creativecommons.org/publicdomain/zero/1.0/
|
0012
|
tables/dev/val_tab_0026.tex
|
|
2206.15241
|
val_tab_0027
|
In contrast, deep learning-based post-processing rather hinders performance for ‘heavy rain’ prediction.
|
Refuted
|
Table 3: Evaluation metrics of KoMet and baseline models for precipitation while 12 variables are utilized for the training. Best performances are marked in bold.
|
table
|
tables_png/dev/val_tab_0027.png
|
Table 3 shows the results for various lead times ranging from 6 to 87 hours, according to the changes of the architectures. Compared to the statistics of GDAPS-KIM for predicting class ‘rain’, the qualities of post-processed predictions with the three networks are consistently improved while that of MetNet is the best.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2206.15241.json
|
Public Domain
|
http://creativecommons.org/publicdomain/zero/1.0/
|
0012
|
tables/dev/val_tab_0027.tex
|
|
2210.05883
|
val_tab_0029
|
Results in Table 1 show that AD-Drop achieves consistent improvement, boosting the average scores of BERT base and RoBERTa base by 0.87 and 0.62, respectively.
|
Supported
|
Table 1: Overall results of fine-tuned models on the GLUE benchmark. The symbol \dagger denotes results directly taken from the original papers. The best average results are shown in bold.
|
table
|
tables_png/dev/val_tab_0029.png
|
We report the overall results of the fine-tuned models in Table 1 . We first compare AD-Drop with existing regularization methods on the development sets, including the original fine-tuning, SCAL [ 17 ] , SuperT [ 48 ] , R-Drop [ 18 ] , and HiddenCut [ 15 ] . We observe that AD-Drop surpasses the baselines on most of the datasets. Specifically, AD-Drop yields an average improvement of 1.98 and 1.29 points on BERT base and RoBERTa base , respectively. We then discuss the performance of AD-Drop on the test sets.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0013
|
tables/dev/val_tab_0029.tex
|
|
2210.05883
|
val_tab_0030
|
Results in Table 1 show that AD-Drop achieves consistent improvement, boosting the average scores of BERT base and RoBERTa base by 0.87 and 0.62, respectively.
|
Refuted
|
Table 1: Overall results of fine-tuned models on the GLUE benchmark. The symbol \dagger denotes results directly taken from the original papers. The best average results are shown in bold.
|
table
|
tables_png/dev/val_tab_0030.png
|
We report the overall results of the fine-tuned models in Table 1 . We first compare AD-Drop with existing regularization methods on the development sets, including the original fine-tuning, SCAL [ 17 ] , SuperT [ 48 ] , R-Drop [ 18 ] , and HiddenCut [ 15 ] . We observe that AD-Drop surpasses the baselines on most of the datasets. Specifically, AD-Drop yields an average improvement of 1.98 and 1.29 points on BERT base and RoBERTa base , respectively. We then discuss the performance of AD-Drop on the test sets.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0013
|
tables/dev/val_tab_0030.tex
|
|
2210.05883
|
val_tab_0031
|
Second, IGA outperforms GA in some cases.
|
Supported
|
Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”.
|
table
|
tables_png/dev/val_tab_0031.png
|
AD-Drop can be implemented with different attribution methods to generate the mask matrix in Eq. ( 1 ), such as integrated gradient attribution (IGA) introduced Eq. ( 3 ), attention weights for attribution (AA), and randomly generating the discard region (RD) in Eq. ( 6 ). We replace the gradient attribution (GA) in Eq. ( 5 )-( 6 ) with these methods. From Table 2 , we can make three observations. First, AD-Drop with gradient-based attribution methods (GA and IGA) surpasses that with the other methods (AA or RD) on most of the datasets, illustrating that gradient-based methods are better at finding features that are likely to cause overfitting.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0014
|
tables/dev/val_tab_0031.tex
|
|
2210.05883
|
val_tab_0032
|
Second, IGA outperforms GA in some cases.
|
Refuted
|
Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”.
|
table
|
tables_png/dev/val_tab_0032.png
|
AD-Drop can be implemented with different attribution methods to generate the mask matrix in Eq. ( 1 ), such as integrated gradient attribution (IGA) introduced Eq. ( 3 ), attention weights for attribution (AA), and randomly generating the discard region (RD) in Eq. ( 6 ). We replace the gradient attribution (GA) in Eq. ( 5 )-( 6 ) with these methods. From Table 2 , we can make three observations. First, AD-Drop with gradient-based attribution methods (GA and IGA) surpasses that with the other methods (AA or RD) on most of the datasets, illustrating that gradient-based methods are better at finding features that are likely to cause overfitting.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0014
|
tables/dev/val_tab_0032.tex
|
|
2210.05883
|
val_tab_0033
|
Third, AD-Drop improves the original BERT base and RoBERTa base with any of the masking strategies, demonstrating the robustness of AD-Drop to overfitting when fine-tuning these models.
|
Supported
|
Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”.
|
table
|
tables_png/dev/val_tab_0033.png
|
AD-Drop can be implemented with different attribution methods to generate the mask matrix in Eq. ( 1 ), such as integrated gradient attribution (IGA) introduced Eq. ( 3 ), attention weights for attribution (AA), and randomly generating the discard region (RD) in Eq. ( 6 ). We replace the gradient attribution (GA) in Eq. ( 5 )-( 6 ) with these methods. From Table 2 , we can make three observations. First, AD-Drop with gradient-based attribution methods (GA and IGA) surpasses that with the other methods (AA or RD) on most of the datasets, illustrating that gradient-based methods are better at finding features that are likely to cause overfitting. Second, IGA outperforms GA in some cases. Although IGA provides better theoretical justification than GA for attribution, it requires prohibitively more computational cost than GA (see Section 4.7 for efficiency analysis), making GA a more desirable choice for AD-Drop .
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0015
|
tables/dev/val_tab_0033.tex
|
|
2210.05883
|
val_tab_0034
|
As shown in Table 2 , removing cross-tuning causes noticeable performance degradation on most of the datasets.
|
Supported
|
Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”.
|
table
|
tables_png/dev/val_tab_0034.png
|
To verify the effectiveness of the cross-tuning strategy, we ablate it and apply only AD-Drop in all training epochs.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0016
|
tables/dev/val_tab_0034.tex
|
|
2210.05883
|
val_tab_0035
|
As shown in Table 2 , removing cross-tuning causes noticeable performance degradation on most of the datasets.
|
Refuted
|
Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”.
|
table
|
tables_png/dev/val_tab_0035.png
|
To verify the effectiveness of the cross-tuning strategy, we ablate it and apply only AD-Drop in all training epochs.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0016
|
tables/dev/val_tab_0035.tex
|
|
2210.05883
|
val_tab_0036
|
From Table 6 , we can see that RoBERTa with AD-Drop achieves better generalization, where AD-Drop boosts the performance by 0.66 on HANS and 3.35 on PAWS-X, illustrating that the model trained with AD-Drop generalizes better to OOD data.
|
Supported
|
Table 6: Testing AD-Drop on OOD datasets.
|
table
|
tables_png/dev/val_tab_0036.png
|
To further demonstrate that AD-Drop is beneficial for reducing overfitting, we test AD-Drop with RoBERTa base on two out-of-distribution (OOD) datasets, i.e., HANS and PAWS-X. For HANS, we use the checkpoints trained on MNLI and test their performance on the validation set (the test set is not supplied). For PAWS-X, we use the checkpoints trained on QQP and examine their performance on the test set. The evaluation metric is accuracy.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0017
|
tables/dev/val_tab_0036.tex
|
|
2210.05883
|
val_tab_0037
|
As shown in Table 7 , although IGA achieves more favorable performance on one of the datasets, it requires higher computational costs than its counterparts, especially when applied in all the layers.
|
Supported
|
Table 7: Results of performance and computational cost of AD-Drop with different masking strategies (GA, IGA, AA, and RD) relative to the original fine-tuning. The symbol \ddagger means AD-Drop is only applied in the first layer. BERT is chosen as the base model.
|
table
|
tables_png/dev/val_tab_0037.png
|
To analyze the computational efficiency, we quantitatively study the computational cost of AD-Drop with different dropping strategies (GA, IGA, AA, and RD) relative to the original fine-tuning on CoLA, STS-B, MRPC, and RTE. BERT is chosen as the base model for this experiment.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2210.05883.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0018
|
tables/dev/val_tab_0037.tex
|
|
2209.15246
|
val_tab_0038
|
Note that resistance against the attack to both in- and out-sets is much harder than other cases since the perturbation budget has effectively been doubled.
|
Supported
|
Table 1: OOD detection AUROC under attack with \epsilon=\frac{8}{255} for various methods trained with CIFAR-10 or CIFAR-100 as the closed set. A clean evaluation is one where no attack is made on the data, whereas an in/out evaluation means that the corresponding data is attacked. The best and second-best results are distinguished with bold and underlined text for each column.
|
table
|
tables_png/dev/val_tab_0038.png
|
OOD detection under adversarial attack: To perform a comprehensive study, AUROC is computed in four different settings for each method. First, the standard OOD detection without any attack is conducted (Clean). Next, either in- or out-datasets are attacked (In/Out). Finally, both the in- and out-sets are attacked (In and Out).
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.15246.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0019
|
tables/dev/val_tab_0038.tex
|
|
2209.15246
|
val_tab_0040
|
Moreover, we check the effect of attacking the generated images, which leads to a lower AUROC score.
|
Supported
|
Table 3: Ablation study on our method. Other choices for the feature extractor, training method, and generated data attacking are tested on CIFAR-10, but none of them is as effective as the setting used in the ATD.
|
table
|
tables_png/dev/val_tab_0040.png
|
Ablation study: ATD uses HAT as the feature extractor and performs adversarial training on open and closed sets to robustify the discriminator. As an ablation study using CIFAR-10 as the in-distribution data, we replace HAT with a standard-trained model to check its effectiveness. Also, in another experiment, we replace the discriminator adversarial training with standard training. The results in Table 3 demonstrate that these settings are not as effective as ATD in achieving a robust detection model. As another ablation, we consider removing the feature extractor and generating images instead of features. Based on the results, the discriminator trained with this method is also not as robust as ATD.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.15246.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0020
|
tables/dev/val_tab_0040.tex
|
|
2209.15246
|
val_tab_0041
|
Moreover, we check the effect of attacking the generated images, which leads to a lower AUROC score.
|
Refuted
|
Table 3: Ablation study on our method. Other choices for the feature extractor, training method, and generated data attacking are tested on CIFAR-10, but none of them is as effective as the setting used in the ATD.
|
table
|
tables_png/dev/val_tab_0041.png
|
Ablation study: ATD uses HAT as the feature extractor and performs adversarial training on open and closed sets to robustify the discriminator. As an ablation study using CIFAR-10 as the in-distribution data, we replace HAT with a standard-trained model to check its effectiveness. Also, in another experiment, we replace the discriminator adversarial training with standard training. The results in Table 3 demonstrate that these settings are not as effective as ATD in achieving a robust detection model. As another ablation, we consider removing the feature extractor and generating images instead of features. Based on the results, the discriminator trained with this method is also not as robust as ATD.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2209.15246.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0020
|
tables/dev/val_tab_0041.tex
|
|
2207.00461
|
val_tab_0042
|
As shown in Table 1 , ELIRL requires little extra training time versus MaxEnt IRL, even with the optional \mathbf{s}^{(t)} re-optimization, and runs significantly faster than GPIRL.
|
Supported
|
Table 1 : The average learning time per task. The standard error is reported after the \pm .
|
table
|
tables_png/dev/val_tab_0042.png
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2207.00461.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
no pair
|
tables/dev/val_tab_0042.tex
|
|
|
2211.12551
|
val_tab_0044
|
As shown in Table 2 , the proposed method outperforms all three baselines.
|
Supported
|
Table 2 : Character-level language modeling results on Penn Tree Bank in test set bpd.
|
table
|
tables_png/dev/val_tab_0044.png
|
We use the Penn Tree Bank dataset with standard processing from Mikolov et al. [ 29 ] , which contains around 5M characters and a character-level vocabulary size of 50 . The data is split into sentences with a maximum sequence length of 288 . We compare with three competitive normalizing-flow-based models: Bipartite flow [ 42 ] and latent flows [ 48 ] including AF/SCF and IAF/SCF, since they are the only comparable work with non-autoregressive language modeling.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2211.12551.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
no pair
|
tables/dev/val_tab_0044.tex
|
|
2405.17991
|
val_tab_0054
|
VeLoRA lowers the memory requirement of SSF [ 26 ] by 16% with only a minor degradation (0.1 pp ) in accuracy.
|
Supported
|
Results on a subset of the VTAB-1k benchmark. All methods use a ViT-Base-224/16 model pre-trained on ImageNet-21k. The batch sizes and ranks are the same across all tasks.
|
table
|
tables_png/dev/val_tab_0054.png
|
We conduct experiments evaluating the performance of VeLoRA for full-tuning and how it complements other PEFT methods. In Tab. 1 we reproduce a large set of results for LoRA [ 17 ] , SSF [ 26 ] , and Hydra [ 20 ] on a subset of the VTAB-1K benchmark, where the sub-token size for each experiment is given in the Appendix. Unlike what is more common in the PEFT literature [ 19 , 20 ] , we do not perform any task-specific hyperparameter tuning that would change the memory, such as batch size and rank; this also avoids any potential overfitting to the specific task. For all experiments we used the authors' provided implementations for the adapters and integrated them into the same training framework for a fair comparison. We observe that VeLoRA improves the performance compared to full-tuning by 1.5 percentage points ( pp ), while lowering the memory requirements. We also observe that, when combined with PEFT methods, VeLoRA can yield improvements in both memory and performance.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0021
|
tables/dev/val_tab_0054.tex
|
|
2405.17991
|
val_tab_0055
|
VeLoRA lowers the memory requirement of SSF [ 26 ] by 16% with only a minor degradation (0.1 pp ) in accuracy.
|
Refuted
|
Results on a subset of the VTAB-1k benchmark. All methods use a ViT-Base-224/16 model pre-trained on ImageNet-21k. The batch sizes and ranks are the same across all tasks.
|
table
|
tables_png/dev/val_tab_0055.png
|
We conduct experiments evaluating the performance of VeLoRA for full-tuning and how it complements other PEFT methods. In Tab. 1 we reproduce a large set of results for LoRA [ 17 ] , SSF [ 26 ] , and Hydra [ 20 ] on a subset of the VTAB-1K benchmark, where the sub-token size for each experiment is given in the Appendix. Unlike what is more common in the PEFT literature [ 19 , 20 ] , we do not perform any task-specific hyperparameter tuning that would change the memory, such as batch size and rank; this also avoids any potential overfitting to the specific task. For all experiments we used the authors' provided implementations for the adapters and integrated them into the same training framework for a fair comparison. We observe that VeLoRA improves the performance compared to full-tuning by 1.5 percentage points ( pp ), while lowering the memory requirements. We also observe that, when combined with PEFT methods, VeLoRA can yield improvements in both memory and performance.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0021
|
tables/dev/val_tab_0055.tex
|
|
2405.17991
|
val_tab_0056
|
It lowers the memory requirement of Hydra [ 20 ] by 7% while improving the accuracy by 0.1 pp .
|
Supported
|
Results on a subset of the VTAB-1k benchmark. All methods use a ViT-Base-224/16 model pre-trained on ImageNet-21k. The batch sizes and ranks are the same across all tasks.
|
table
|
tables_png/dev/val_tab_0056.png
|
We conduct experiments evaluating the performance of VeLoRA for full-tuning and how it complements other PEFT methods. In Tab. 1 we reproduce a large set of results for LoRA [ 17 ] , SSF [ 26 ] , and Hydra [ 20 ] on a subset of the VTAB-1K benchmark, where the sub-token size for each experiment is given in the Appendix. Unlike what is more common in the PEFT literature [ 19 , 20 ] , we do not perform any task-specific hyperparameter tuning that would change the memory, such as batch size and rank; this also avoids any potential overfitting to the specific task. For all experiments we used the authors' provided implementations for the adapters and integrated them into the same training framework for a fair comparison. We observe that VeLoRA improves the performance compared to full-tuning by 1.5 percentage points ( pp ), while lowering the memory requirements. We also observe that, when combined with PEFT methods, VeLoRA can yield improvements in both memory and performance. VeLoRA lowers the memory requirement of SSF [ 26 ] by 16% with only a minor degradation (0.1 pp ) in accuracy.
|
ml
|
yes
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0022
|
tables/dev/val_tab_0056.tex
|
|
2405.17991
|
val_tab_0057
|
Finally, it lowers the memory requirements of LoRA [ 17 ] by 4% while improving the accuracy by 0.6 pp
|
Supported
|
Results on a subset of the VTAB-1k benchmark. All methods use a ViT-Base-224/16 model pre-trained on ImageNet-21k. The batch sizes and ranks are the same across all tasks.
|
table
|
tables_png/dev/val_tab_0057.png
|
We conduct experiments evaluating the performance of VeLoRA for full-tuning and how it complements other PEFT methods. In Tab. 1 we reproduce a large set of results for LoRA [ 17 ] , SSF [ 26 ] , and Hydra [ 20 ] on a subset of the VTAB-1K benchmark, where the sub-token size for each experiment is given in the Appendix. Unlike what is more common in the PEFT literature [ 19 , 20 ] , we do not perform any task-specific hyperparameter tuning that would change the memory, such as batch size and rank; this also avoids any potential overfitting to the specific task. For all experiments we used the authors' provided implementations for the adapters and integrated them into the same training framework for a fair comparison. We observe that VeLoRA improves the performance compared to full-tuning by 1.5 percentage points ( pp ), while lowering the memory requirements. We also observe that, when combined with PEFT methods, VeLoRA can yield improvements in both memory and performance. VeLoRA lowers the memory requirement of SSF [ 26 ] by 16% with only a minor degradation (0.1 pp ) in accuracy. It lowers the memory requirement of Hydra [ 20 ] by 7% while improving the accuracy by 0.1 pp .
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0023
|
tables/dev/val_tab_0057.tex
|
|
2405.17991
|
val_tab_0058
|
Finally, it lowers the memory requirements of LoRA [ 17 ] by 4% while improving the accuracy by 0.6 pp
|
Refuted
|
Results on a subset of the VTAB-1k benchmark. All methods use a ViT-Base-224/16 model pre-trained on ImageNet-21k. The batch sizes and ranks are the same across all tasks.
|
table
|
tables_png/dev/val_tab_0058.png
|
We conduct experiments evaluating the performance of VeLoRA for full-tuning and how it complements other PEFT methods. In Tab. 1 we reproduce a large set of results for LoRA [ 17 ] , SSF [ 26 ] , and Hydra [ 20 ] on a subset of the VTAB-1K benchmark, where the sub-token size for each experiment is given in the Appendix. Unlike what is more common in the PEFT literature [ 19 , 20 ] , we do not perform any task-specific hyperparameter tuning that would change the memory, such as batch size and rank; this also avoids any potential overfitting to the specific task. For all experiments we used the authors' provided implementations for the adapters and integrated them into the same training framework for a fair comparison. We observe that VeLoRA improves the performance compared to full-tuning by 1.5 percentage points ( pp ), while lowering the memory requirements. We also observe that, when combined with PEFT methods, VeLoRA can yield improvements in both memory and performance. VeLoRA lowers the memory requirement of SSF [ 26 ] by 16% with only a minor degradation (0.1 pp ) in accuracy. It lowers the memory requirement of Hydra [ 20 ] by 7% while improving the accuracy by 0.1 pp .
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0023
|
tables/dev/val_tab_0058.tex
|
|
2405.17991
|
val_tab_0059
|
Our method further reduces the memory needed for training to 2.23GB, an improvement of 18% compared to LoRA, and 45% compared to GaLore, while still reaching higher results than either of them.
|
Supported
|
Comparison of our method with full fine-tuning, GaLore and LORA on GLUE benchmark using pre-trained RoBERTa-Base. Our method reaches the best overall results while showing significant memory improvements, especially compared to GaLore. We bold the best results from the considered PEFT methods. The GPU memory is measured on-device.
|
table
|
tables_png/dev/val_tab_0059.png
|
We now evaluate our method with M=16 on various language tasks, using RoBERTa-Base [ 28 ] in the GLUE benchmark, and compare it with full fine-tuning, LoRA [ 17 ] and GaLore [ 53 ] , presenting the results in Tab. 2 . We observe that both GaLore and LoRA lower the memory requirements compared to fine-tuning, from 4.64GB to 4.04GB and 2.71GB, respectively, at the cost of some accuracy degradation: GaLore performance is lowered by 0.39 pp , while LoRA accuracy drops by 0.67 pp .
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0024
|
tables/dev/val_tab_0059.tex
|
|
2405.17991
|
val_tab_0060
|
Our method further reduces the memory needed for training to 2.23GB, an improvement of 18% compared to LoRA, and 45% compared to GaLore, while still reaching higher results than either of them.
|
Refuted
|
Comparison of our method with full fine-tuning, GaLore and LORA on GLUE benchmark using pre-trained RoBERTa-Base. Our method reaches the best overall results while showing significant memory improvements, especially compared to GaLore. We bold the best results from the considered PEFT methods. The GPU memory is measured on-device.
|
table
|
tables_png/dev/val_tab_0060.png
|
We now evaluate our method with M=16 on various language tasks, using RoBERTa-Base [ 28 ] in the GLUE benchmark, and compare it with full fine-tuning, LoRA [ 17 ] and GaLore [ 53 ] , presenting the results in Tab. 2 . We observe that both GaLore and LoRA lower the memory requirements compared to fine-tuning, from 4.64GB to 4.04GB and 2.71GB, respectively, at the cost of some accuracy degradation: GaLore performance is lowered by 0.39 pp , while LoRA accuracy drops by 0.67 pp .
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.17991.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0024
|
tables/dev/val_tab_0060.tex
|
|
2405.14903
|
val_tab_0062
|
Our method enhances the speed of both forward and backward computations compared to the previous state-of-the-art approach, benefiting both gradient-based and gradient-free optimization techniques.
|
Supported
|
Table 2 : Time Performance. Our method achieves one order of magnitude speedup across all resolutions compared to PhiFlow in both forward simulation and backward gradient propagation.
|
table
|
tables_png/dev/val_tab_0062.png
|
In this experiment, we demonstrate the performance efficiency of our framework through a comparison with PhiFlow. While PhiFlow operates with a TensorFlow-GPU backend, our framework is implemented in CUDA C++ and features a high-performance Geometric-Multigrid-Preconditioned-Conjugate-Gradient Poisson solver [ 17 ] . We benchmark both the one-step forward simulation time and gradient back-propagation time at different resolutions, as shown in Table 2 . The experiment runs on a workstation with an NVIDIA RTX A6000 GPU. Our high-performance framework consistently outperforms PhiFlow by an order of magnitude across all resolutions, thanks to the custom CUDA kernels and the rapid convergence of the Poisson solver.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2405.14903.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
|
tables/dev/val_tab_0062.tex
|
|
2405.14507
|
val_tab_0064
|
The experimental results, as shown in Table 2 , reveal that enhancing the strong activation of SCMoE can further boost MoE models’ performance.
|
Supported
|
Table 2: Experimental results of different strong activations. We set the weak activation with rank-2 routing. For each benchmark, we select the top-k routing yielding the best performance in Figure 1 as the ideal strong activation. The specific hyperparameter settings can be found in Table 9 .
|
table
|
tables_png/dev/val_tab_0064.png
|
As revealed by Figure 1 , using the default top-2 routing is not optimal for all tasks. For instance, top-3 routing yields the best results on GSM8K, while top-4 routing achieves the highest accuracy on HumanEval and StrategyQA. This leads us to consider whether enhancing the strong activation in SCMoE can further unlock MoE models’ potential. To this end, we adjust the strong activation of Mixtral 8x7B to top-3 for GSM8K, and to top-4 for StrategyQA, MBPP, and HumanEval, while keeping the weak activation with rank-2 routing as before.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.14507.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0025
|
tables/dev/val_tab_0064.tex
|
|
2405.14507
|
val_tab_0065
|
The experimental results, as shown in Table 2 , reveal that enhancing the strong activation of SCMoE can further boost MoE models’ performance.
|
Refuted
|
Table 2: Experimental results of different strong activations. We set the weak activation with rank-2 routing. For each benchmark, we select the top-k routing yielding the best performance in Figure 1 as the ideal strong activation. The specific hyperparameter settings can be found in Table 9 .
|
table
|
tables_png/dev/val_tab_0065.png
|
As revealed by Figure 1 , using the default top-2 routing is not optimal for all tasks. For instance, top-3 routing yields the best results on GSM8K, while top-4 routing achieves the highest accuracy on HumanEval and StrategyQA. This leads us to consider whether enhancing the strong activation in SCMoE can further unlock MoE models’ potential. To this end, we adjust the strong activation of Mixtral 8x7B to top-3 for GSM8K, and to top-4 for StrategyQA, MBPP, and HumanEval, while keeping the weak activation with rank-2 routing as before.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2405.14507.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0025
|
tables/dev/val_tab_0065.tex
|
|
2405.14507
|
val_tab_0066
|
The results in Table 3 show that SCMoE increases the decoding time by a factor of 1.30x compared to greedy.
|
Supported
|
Table 3: Averaged decoding latency for each method. CS is short for contrastive search and CD is short for contrastive decoding. We set k = 3 for ensemble routing, while for dynamic routing we set threshold = 0.5. The speeds are tested on 4 A100 40G with batch size = 1.
|
table
|
tables_png/dev/val_tab_0066.png
|
We further evaluate the impact of SCMoE on decoding latency and compare it with other methods on Mixtral 8x7B. Specifically, we first input 32 tokens to each method and then force them to generate a sequence of 512 tokens to calculate the latency.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2405.14507.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
|
tables/dev/val_tab_0066.tex
|
|
2405.14507
|
val_tab_0067
|
Specifically, compared to greedy baseline, SCMoE demonstrates improvements across all tasks: it enhances mathematical reasoning by 1.82 on GSM8K, commonsense reasoning by 2.58 on StrategyQA, code generation by 2.00 on MBPP, and 1.22 on HumanEval.
|
Supported
|
Table 4: Experimental results on GSM8K, StrategyQA, MBPP and HumanEval with DeepSeekMoE-16B. We report the best results for each method here. The performance of each method with different hyperparameters can be found in the Appendix Table 10 .
|
table
|
tables_png/dev/val_tab_0067.png
|
We further explore the adaptability of SCMoE to other MoE models. We conduct experiments on DeepSeekMoE-16B [ 5 ] . DeepSeekMoE-16B employs fine-grained expert segmentation and shared expert isolation routing strategies, which is different from Mixtral 8x7B [ 17 ] . We detail the hyperparameter settings of the experiments in Appendix C . It is worth noting that contrastive decoding needs a suitable model to serve as an amateur. However, DeepSeekMoE-16B does not have a smaller model with the same vocabulary, so DeepSeekMoE-16B does not have a contrastive decoding baseline. As depicted in Table 4 , SCMoE effectively unleashes the potential of DeepSeekMoE-16B.
|
ml
|
yes
|
Supported_claim_only
|
papers/dev/ml_2405.14507.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
|
tables/dev/val_tab_0067.tex
|
|
2405.14507
|
val_tab_0068
|
These results demonstrate that SCMoE can be successfully applied to other MoE models.
|
Supported
|
Table 4: Experimental results on GSM8K, StrategyQA, MBPP and HumanEval with DeepSeekMoE-16B. We report the best results for each method here. The performance of each method with different hyperparameters can be found in the Appendix Table 10 .
|
table
|
tables_png/dev/val_tab_0068.png
|
We further explore the adaptability of SCMoE to other MoE models. We conduct experiments on DeepSeekMoE-16B [ 5 ] . DeepSeekMoE-16B employs fine-grained expert segmentation and shared expert isolation routing strategies, which is different from Mixtral 8x7B [ 17 ] . We detail the hyperparameter settings of the experiments in Appendix C . It is worth noting that contrastive decoding needs a suitable model to serve as an amateur. However, DeepSeekMoE-16B does not have a smaller model with the same vocabulary, so DeepSeekMoE-16B does not have a contrastive decoding baseline. As depicted in Table 4 , SCMoE effectively unleashes the potential of DeepSeekMoE-16B. Specifically, compared to the greedy baseline, SCMoE demonstrates improvements across all tasks: it enhances mathematical reasoning by 1.82 on GSM8K, commonsense reasoning by 2.58 on StrategyQA, code generation by 2.00 on MBPP, and 1.22 on HumanEval. In contrast, other methods, whether routing-based or search-based, struggle to outperform the greedy baseline.
|
ml
|
yes
|
Supported_claim_only
|
papers/dev/ml_2405.14507.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
|
tables/dev/val_tab_0068.tex
|
|
2405.14800
|
val_tab_0069
|
We observe that under this over-training scenario, both of our methods nearly achieve ideal binary classification effectiveness.
|
Supported
|
Results under Over-training setting. We mark the best and second-best results for each metric in bold and underline , respectively. Additionally, the best results from baselines are marked in blue for comparison.
|
table
|
tables_png/dev/val_tab_0069.png
|
Over-training scenario. In Tab. 1 , models are trained for excessive steps on all three datasets, resulting in significant overfitting.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2405.14800.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
no pair
|
tables/dev/val_tab_0069.tex
|
|
2405.14800
|
val_tab_0070
|
The best values of ASR and AUC of baseline methods decrease to around 65%, and the best value of TPR@1%FPR decreases to around 5%, indicating insufficient effectiveness of previous membership inference methods in real-world training scenarios of text-to-image diffusion models.
|
Supported
|
Results under Real-world training setting. We also highlight key results according to Tab. 1 .
|
table
|
tables_png/dev/val_tab_0070.png
|
Real-world training scenario. In Tab. 2 , we adjust the training steps to simulate a real-world training scenario [ 19 ] and utilize default data augmentation [ 20 ] .
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2405.14800.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0026
|
tables/dev/val_tab_0070.tex
|
|
2206.00050
|
val_tab_0072
|
Thanks to this design, our method requires only a very small number of additional parameters on top of the base network, e.g., converting a single ResNet-18 model to an ensemble of 16 models increases the parameter count by 1.3%, compared to an increase of 1500% when setting up a naïve ensemble, see Table 1 .
|
Supported
|
Table 1: Memory and inference complexity comparison (CIFAR-10/100 datasets): Number of additional trainable parameters to have 16 ensemble members for different backbones. The inference time (mult-adds) shown corresponds to the mean GPU time (number of multiply-add operations) required to run a forward pass for a batch of size 1 with 16 ensemble members. The bottom section comprises methods whose forward and backward passes are implemented in parallel over ensemble members. Measurements are done on an NVIDIA GeForce GTX 1080 Ti.
|
table
|
tables_png/dev/val_tab_0072.png
|
Inspired by that line of work, we propose a new, efficient ensemble method, FiLM-Ensemble. Our method adapts feature-wise linear modulation as an alternative way to construct an ensemble for (epistemic) uncertainty estimation. FiLM-Ensemble greatly reduces the computational overhead compared to the naïve ensemble approach, while performing almost on par with it, sometimes even better. In a nutshell, FiLM-Ensemble can be described as an implicit model ensemble in which each individual member is defined via its own set of linear modulation parameters for the activations, whereas all other network parameters are shared among ensemble members – and therefore only need to be stored and trained once.
|
ml
|
yes
|
Supported_claim_only
|
papers/dev/ml_2206.00050.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
|
tables/dev/val_tab_0072.tex
|
|
2205.13790
|
val_tab_0073
|
As shown in Table 4 , when we directly evaluate detectors trained without robustness augmentation, BEVFusion shows higher accuracy than the LiDAR-only stream and vanilla LiDAR-camera fusion approach in TransFusion.
|
Supported
|
Table 4: Results on robustness setting of object failure cases. Here, we report the results of baseline and our method that trained on the nuScenes dataset with and without the proposed robustness augmentation (Aug.). All settings are the same as in Table 3 .
|
table
|
tables_png/dev/val_tab_0073.png
|
LiDAR fails to receive object reflection points. There exist common scenarios in which LiDAR fails to receive points from the object. For example, on rainy days, the reflection rate of some common objects is below the threshold of LiDAR, hence causing the issue of object failure AnonymousBenchmark . To simulate such a scenario, we adopt the second aforementioned robust augmentation strategy on the validation set.
|
ml
|
no
|
Change the cell values
|
papers/dev/ml_2205.13790.json
|
Public Domain
|
http://creativecommons.org/publicdomain/zero/1.0/
|
0027
|
tables/dev/val_tab_0073.tex
|
|
2205.13790
|
val_tab_0074
|
When we finetune detectors on the robust augmented training set, BEVFusion largely improves PointPillars, CenterPoint, and TransFusion-L by 28.9%, 22.7%, and 15.7% mAP.
|
Supported
|
Table 4: Results on robustness setting of object failure cases. Here, we report the results of baseline and our method that trained on the nuScenes dataset with and without the proposed robustness augmentation (Aug.). All settings are the same as in Table 3 .
|
table
|
tables_png/dev/val_tab_0074.png
|
LiDAR fails to receive object reflection points. There exist common scenarios in which LiDAR fails to receive points from the object. For example, on rainy days, the reflection rate of some common objects is below the threshold of LiDAR, hence causing the issue of object failure AnonymousBenchmark . To simulate such a scenario, we adopt the second aforementioned robust augmentation strategy on the validation set. As shown in Table 4 , when we directly evaluate detectors trained without robustness augmentation, BEVFusion shows higher accuracy than the LiDAR-only stream and vanilla LiDAR-camera fusion approach in TransFusion.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2205.13790.json
|
Public Domain
|
http://creativecommons.org/publicdomain/zero/1.0/
|
no pair
|
tables/dev/val_tab_0074.tex
|
|
2205.14612
|
val_tab_0075
|
A first observation is that one can transfer these weights to deeper ResNets without significantly affecting the test accuracy of the model: it remains above 94.5\% on CIFAR-10 and 72\% on ImageNet.
|
Supported
|
Table 2: Test accuracy (ResNet)
|
table
|
tables_png/dev/val_tab_0075.png
|
In addition, we also want our pretrained model to verify assumption 2 so we consider the following setup. On CIFAR (resp. ImageNet) we train a ResNet with 4 (resp. 8) blocks in each layer, where weights are tied within each layer.
|
ml
|
yes
|
Change the cell values
|
papers/dev/ml_2205.14612.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0028
|
tables/dev/val_tab_0075.tex
|
|
2503.07300
|
val_tab_0076
|
Despite being executed on a high-performance server, the CMAES method mosleh2020hardware requires 144 seconds to process a 4K input image, which is considerably slow.
|
Supported
|
Experimental results demonstrating efficiency across varying input resolutions. Our method significantly outperforms other methods, achieving a speed enhancement of 260 times relative to the cascaded proxy method tseng2022neural at 720P resolution, and 117 times faster than the CMAES approach hansen2006cma ; mosleh2020hardware at 4K resolution.
|
table
|
tables_png/dev/val_tab_0076.png
|
As indicated in Tab. 2 , our method demonstrated superior efficiency, requiring only 1.23 seconds per execution with 4K input. While the monolithic proxy-based method outperformed the search-based method in terms of speed, it faced limitations as input resolutions increased, leading to out-of-memory (OOM) errors once GPU memory was exceeded. The cascaded proxy-based method, which includes MLP networks, was the slowest and most prone to memory overflows due to its intensive memory demands. The search-based method primarily depends on CPU performance.
|
ml
|
no
|
Supported_claim_only
|
papers/dev/ml_2503.07300.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
no pair
|
tables/dev/val_tab_0076.tex
|
|
2203.01212
|
val_tab_0077
|
In the experiments of LiPopt, we only used LiPopt-2.
|
Supported
|
Table 1: \ell_{\infty} -FGL estimation of various methods: DGeoLIP and NGeoLIP induce the same values on two layer networks. DGeoLIP always produces tighter estimations than LiPopt and MP do.
|
table
|
tables_png/dev/val_tab_0077.png
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2203.01212.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0029
|
tables/dev/val_tab_0077.tex
|
||
2203.01212
|
val_tab_0078
|
In the experiments of LiPopt, we only used LiPopt-2.
|
Refuted
|
Table 1: \ell_{\infty} -FGL estimation of various methods: DGeoLIP and NGeoLIP induce the same values on two layer networks. DGeoLIP always produces tighter estimations than LiPopt and MP do.
|
table
|
tables_png/dev/val_tab_0078.png
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2203.01212.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0029
|
tables/dev/val_tab_0078.tex
|
||
2408.16862
|
val_tab_0079
|
We see that p-dLDS broadly outperforms existing methods in all metrics and significantly improves inference for decomposed models.
|
Supported
|
Table 1: Metrics for synthetic dynamical systems. Bold means best performance. (\uparrow) indicates higher score is better while (\downarrow) indicates that lower is better. ✗ indicates that value diverged towards -\infty . Switch events for decomposed models are defined as times where the active set of DOs change from the previous time step.
|
table
|
tables_png/dev/val_tab_0079.png
|
Table 1 summarizes our quantitative evaluations on three metrics: 1) the mean squared error (MSE) between the learned and ground truth latent dynamics, 2) the MSE between the inferred and true switch rate to determine agreement of the discrete switching behavior,
and 3) the 100-step inference R^{2} to demonstrate that the learned system generalizes beyond a single step on held-out data. (See Appendix D for mathematical definitions).
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2408.16862.json
|
CC BY-SA 4.0
|
http://creativecommons.org/licenses/by-sa/4.0/
|
0030
|
tables/dev/val_tab_0079.tex
|
|
2408.16862
|
val_tab_0080
|
We see that p-dLDS broadly outperforms existing methods in all metrics and significantly improves inference for decomposed models.
|
Refuted
|
Table 1: Metrics for synthetic dynamical systems. Bold means best performance. (\uparrow) indicates higher score is better while (\downarrow) indicates that lower is better. ✗ indicates that value diverged towards -\infty . Switch events for decomposed models are defined as times where the active set of DOs change from the previous time step.
|
table
|
tables_png/dev/val_tab_0080.png
|
Table 1 summarizes our quantitative evaluations on three metrics: 1) the mean squared error (MSE) between the learned and ground truth latent dynamics, 2) the MSE between the inferred and true switch rate to determine agreement of the discrete switching behavior,
and 3) the 100-step inference R^{2} to demonstrate that the learned system generalizes beyond a single step on held-out data. (See Appendix D for mathematical definitions).
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2408.16862.json
|
CC BY-SA 4.0
|
http://creativecommons.org/licenses/by-sa/4.0/
|
0030
|
tables/dev/val_tab_0080.tex
|
|
2408.16862
|
val_tab_0081
|
Moreover, this leads to improved estimation of latent dynamics, a switching rate that agrees with the true system, and improved multistep inference performance as shown in Table 1 .
|
Supported
|
Table 1: Metrics for synthetic dynamical systems. Bold means best performance. (\uparrow) indicates higher score is better while (\downarrow) indicates that lower is better. ✗ indicates that value diverged towards -\infty . Switch events for decomposed models are defined as times where the active set of DOs change from the previous time step.
|
table
|
tables_png/dev/val_tab_0081.png
|
In figure 2 D, we see that rSLDS does not distinguish between the different speeds along the outer and inner sections of the attractor. Instead, the discrete states obscure the continuum of speeds by incorrectly grouping all activity in each lobe into a single regime. Furthermore, we observe that dLDS is limited without an offset term, unable to accurately represent multiple fixed points. Instead of aligning with the two attractor lobes, transitions in the dominant coefficients occur radially relative to the origin and fail to reconstruct the two orbiting fixed points. Conversely, p-dLDS’s offset term enables learning a system where coefficients better match the true geometry. This representation correctly recovers differences between the outer and inner sections of the attractor while also accurately reconstructing the two orbiting fixed points.
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2408.16862.json
|
CC BY-SA 4.0
|
http://creativecommons.org/licenses/by-sa/4.0/
|
0031
|
tables/dev/val_tab_0081.tex
|
|
2501.08508
|
val_tab_0082
|
We see that FuncMol slightly improves VoxMol and both models perform worse compared to the equivariant point-cloud based baselines.
|
Supported
|
Table 1: QM9 results w.r.t. test set for 10000 samples per model. \uparrow / \downarrow indicate that higher/lower numbers are better. The row data are randomly sampled molecules from the validation set. We report 1-sigma error bars over 3 sampling runs.
|
table
|
tables_png/dev/val_tab_0082.png
|
Table 1 reports the metrics described in Section 5.1.
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2501.08508.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0032
|
tables/dev/val_tab_0082.tex
|
|
2501.08508
|
val_tab_0083
|
We see that FuncMol slightly improves VoxMol and both models perform worse compared to the equivariant point-cloud based baselines.
|
Refuted
|
Table 1: QM9 results w.r.t. test set for 10000 samples per model. \uparrow / \downarrow indicate that higher/lower numbers are better. The row data are randomly sampled molecules from the validation set. We report 1-sigma error bars over 3 sampling runs.
|
table
|
tables_png/dev/val_tab_0083.png
|
Table 1 reports the metrics described in Section 5.1.
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2501.08508.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0032
|
tables/dev/val_tab_0083.tex
|
|
2501.08508
|
val_tab_0084
|
We note that sampling time of FuncMol is an order of magnitude better than baselines.
|
Supported
|
Table 1: QM9 results w.r.t. test set for 10000 samples per model. \uparrow / \downarrow indicate that higher/lower numbers are better. The row data are randomly sampled molecules from the validation set. We report 1-sigma error bars over 3 sampling runs.
|
table
|
tables_png/dev/val_tab_0084.png
|
Table 1 reports the metrics described in Section 5.1. We see that FuncMol slightly improves VoxMol and both models perform worse compared to the equivariant point-cloud based baselines.
|
ml
|
other sources
|
Change the cell values
|
papers/dev/ml_2501.08508.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0033
|
tables/dev/val_tab_0084.tex
|
|
2211.12551
|
val_fig_0001
|
It shows that both eParam and eFlow are reasonable pruning strategy, however, as we increase the percentage of pruned parameters, eFlow has less log-likelihoods drop compared with eParam .
|
Supported
|
(a) Comparison of heuristics eRand , eParam , and eFlow . Heuristic eFlow can prune up to 80% of the parameters without much log-likelihood decrease.; (b) Histogram of parameters before (the same as in Figure 1 ) and after pruning. The parameter values take higher significance after pruning.; Empirical evaluation of the pruning operation.
|
figure
|
figures/dev/val_fig_0001.png
|
Figure 5(a) compares the effect of pruning heuristics eParam , eFlow , as well as an uninformed strategy, prune randomly, which we denote as eRand .
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2211.12551.json
|
CC BY-NC-SA 4.0
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
0034
| null |
|
2211.16499
|
val_fig_0002
|
Thereafter, Swin becomes comparatively much more robust as occlusion becomes severe.
|
Supported
|
Figure 3: Random patch drop occlusion study on ConvNext and Swin networks on the ImageNet-1k validation set. Swin Transformers are slightly more robust to this type of artificial occlusion than ConvNext networks when the information loss is small, although they become comparatively much stronger as the information loss is increased.
|
figure
|
figures/dev/val_fig_0002.png
|
For (1), we provide evidence that this difference between ResNets and DeiTs is not due to the nature of transformer vs. conv layers. In order to do so, we re-run the random patch drop experiments in [ 47 ] on all pairs of ConvNext and Swin networks. We present results for different levels of information loss in Fig. 3 . For all networks we observe a drop in performance as information loss increases. We show that ConvNext networks do not suffer the same critical failure mode that DenseNet121, ResNet50, SqueezeNet and VGG19 exhibit in Naseer et al. [ 47 ] . The top accuracy of these four classic ConvNets collapses to half after about 10% of patches are occluded and to 0 after about 40% information loss. ConvNext shows that it is much more resistant, conserving a large part of its accuracy even after 50% information loss. Next, we observe that ConvNext networks are slightly less robust than Swin Transformers for low amounts of information loss (under 50%).
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2211.16499.json
|
same image
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0035
| null |
2211.16499
|
val_fig_0003
|
Thereafter, Swin becomes comparatively much more robust as occlusion becomes severe.
|
Refuted
|
Figure 3: Random patch drop occlusion study on ConvNext and Swin networks on the ImageNet-1k validation set. Swin Transformers are slightly more robust to this type of artificial occlusion than ConvNext networks when the information loss is small, although they become comparatively much stronger as the information loss is increased.
|
figure
|
figures/dev/val_fig_0003.png
|
For (1), we provide evidence that this difference between ResNets and DeiTs is not due to the nature of transformer vs. conv layers. In order to do so, we re-run the random patch drop experiments in [ 47 ] on all pairs of ConvNext and Swin networks. We present results for different levels of information loss in Fig. 3 . For all networks we observe a drop in performance as information loss increases. We show that ConvNext networks do not suffer the same critical failure mode that DenseNet121, ResNet50, SqueezeNet and VGG19 exhibit in Naseer et al. [ 47 ] . The top accuracy of these four classic ConvNets collapses to half after about 10% of patches are occluded and to 0 after about 40% information loss. ConvNext shows that it is much more resistant, conserving a large part of its accuracy even after 50% information loss. Next, we observe that ConvNext networks are slightly less robust than Swin Transformers for low amounts of information loss (under 50%).
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2211.16499.json
|
same image
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0035
| null |
2211.16499
|
val_fig_0004
|
For Base-22k and Large, the difference becomes large after the scale multiplier is under 0.6.
|
Supported
|
Figure 5: Counterfactual study of all sizes of ConvNext and Swin networks for object scale.
|
figure
|
figures/dev/val_fig_0004.png
|
In order to compare the robustness of ConvNext and Swin networks to scale, we first generate images of all main object models with unit object scale. Although objects have different volume in this state (e.g. microwaves are larger than spatulas), they take a fair amount of space in the frame and don’t present much of a challenge to large architectures. Specifically, top-5 accuracies for Swin-L and ConvNext-L are 90.5% and 93.8% respectively. We then perform counterfactual simulation testing by plotting the stability of correct predictions when the object scale multiplier is reduced from 1. We show these results in Fig. 5 . We observe that for Small, Base-22k and Large-22k networks, ConvNext vastly outperforms Swin for smaller objects.
|
ml
|
yes
|
Graph Swap
|
papers/dev/ml_2211.16499.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0036
| null |
|
2403.19137
|
val_fig_0005
|
With task-specific encoders (right), the cross-task class centroids are further apart than those with one global encoder (left).
|
Supported
|
Figure 3 : Need for task-specific distribution encoders: Cosine distance between the centroids of class-specific latent variables produced without ( left ) and with ( right ) the use of task-specific encoders on CIFAR100 (10 tasks, 10 classes per task). Centroids of class-specific latent variables are more separable on the right.
|
figure
|
figures/dev/val_fig_0005.png
|
where \{z^{t}_{i}\}_{m=1}^{M} are the M task-specific MC samples. Task-specific encoders help us sample latent variables that are more discriminative across tasks. This is depicted in Fig. 3 using the cosine distance between the embeddings of class-specific samples.
|
ml
|
yes
|
Graph Swap
|
papers/dev/ml_2403.19137.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0037
| null |
|
2403.19137
|
val_fig_0006
|
With task-specific encoders (right), the cross-task class centroids are further apart than those with one global encoder (left).
|
Refuted
|
Figure 3 : Need for task-specific distribution encoders: Cosine distance between the centroids of class-specific latent variables produced without ( left ) and with ( right ) the use of task-specific encoders on CIFAR100 (10 tasks, 10 classes per task). Centroids of class-specific latent variables are more separable on the right.
|
figure
|
figures/dev/val_fig_0006.png
|
where \{z^{t}_{i}\}_{m=1}^{M} are the M task-specific MC samples. Task-specific encoders help us sample latent variables that are more discriminative across tasks. This is depicted in Fig. 3 using the cosine distance between the embeddings of class-specific samples.
|
ml
|
yes
|
Graph Swap
|
papers/dev/ml_2403.19137.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0037
| null |
|
2402.12416
|
val_fig_0007
|
As depicted in the figure, AgA with sign selection is aligned towards the fastest update direction, resulting in AgA progressing approximately 6 steps ahead of AgA without Sign at the end of the trajectory.
|
Supported
|
(a) Cleanup: AgA with different \lambda and AgA/Sign; (b) Cleanup: comparison between AgA and baselines; (c) Harvest: AgA with different \lambda and AgA/Sign; (d) Cleanup: comparison between AgA and baselines; Comparison of Episodic Social Welfare: our proposed AgA methods along with baselines on Cleanup (figure a and b) and Harvest (figure c and d) tasks. Figure (a) and (c) provide an illustration of varying alignment parameter \lambda values in AgA, as well as AgA sans sign selection (AgA/Sign), as applied to Cleanup and Harvest tasks respectively. Subfigures (b) and (d) exhibit the social welfare obtained by AgA utilizing the optimal \lambda value, in comparison to those secured by Simul-Ind, Simal-Co, CGA, SGA and SVO. Bold lines represent the average of the social welfare across three seeds, with the encompassing shaded areas denoting the standard deviation.
|
figure
|
figures/dev/val_fig_0007.png
|
In addition, Fig. 3 presents a critical comparison between the trajectories of AgA (in red) and AgA without sign alignment (AgA w/o Sign, depicted in purple), as outlined in Corollary 4.3 . This side-by-side comparison covers 40 steps and features highlighted points every tenth step. It provides a visual illustration of the efficiency introduced by sign alignment starting from the 14th step in the trajectories of AgA and AgA without sign alignment. Remarkably, norm gradients are represented by blue arrows, indicating the direction of the fastest updates.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2402.12416.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0038
| null |
|
2402.12416
|
val_fig_0008
|
As depicted in the figure, AgA with sign selection is aligned towards the fastest update direction, resulting in AgA progressing approximately 6 steps ahead of AgA without Sign at the end of the trajectory.
|
Refuted
|
(a) Cleanup: AgA with different \lambda and AgA/Sign; (b) Cleanup: comparison between AgA and baselines; (c) Harvest: AgA with different \lambda and AgA/Sign; (d) Cleanup: comparison between AgA and baselines; Comparison of Episodic Social Welfare: our proposed AgA methods along with baselines on Cleanup (figure a and b) and Harvest (figure c and d) tasks. Figure (a) and (c) provide an illustration of varying alignment parameter \lambda values in AgA, as well as AgA sans sign selection (AgA/Sign), as applied to Cleanup and Harvest tasks respectively. Subfigures (b) and (d) exhibit the social welfare obtained by AgA utilizing the optimal \lambda value, in comparison to those secured by Simul-Ind, Simal-Co, CGA, SGA and SVO. Bold lines represent the average of the social welfare across three seeds, with the encompassing shaded areas denoting the standard deviation.
|
figure
|
figures/dev/val_fig_0008.png
|
In addition, Fig. 3 presents a critical comparison between the trajectories of AgA (in red) and AgA without sign alignment (AgA w/o Sign, depicted in purple), as outlined in Corollary 4.3 . This side-by-side comparison covers 40 steps and features highlighted points every tenth step. It provides a visual illustration of the efficiency introduced by sign alignment starting from the 14th step in the trajectories of AgA and AgA without sign alignment. Remarkably, norm gradients are represented by blue arrows, indicating the direction of the fastest updates.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2402.12416.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0038
| null |
|
2205.14612
|
val_fig_0010
|
Approximated gradients, however, lead to a large test error at small depth, but give the same performance at large depth, hence confirming our results in Prop. 4 and 6 .
|
Supported
|
Figure 4: Comparison of the best test errors as a function of depth when using Euler or Heun’s discretization method with or without the adjoint method.
|
figure
|
figures/dev/val_fig_0010.png
|
We then apply a batch norm, a ReLU and iterate relation ( 1 ) where f is a pre-activation basic block (He et al., 2016b ) . We consider the zero residual initialisation: the last batch norm of each basic block is initialized to zero. We consider different values for the depth N and notice that in this setup, the deeper our model is, the better it performs in terms of test accuracy. We then compare the performance of our model using a ResNet (forward rule ( 1 )) or a HeunNet (forward rule ( 8 )). We train our networks using either the classical backpropagation or our corresponding proxies using the adjoint method (formulas ( 6 ) and ( 9 )). We display the final test accuracy (median over 5 runs) for different values of the depth N in Figure 4 . The true backpropagation gives the same curves for the ResNet and the HeunNet.
|
ml
|
yes
|
Supported_claim_only
|
papers/dev/ml_2205.14612.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
no pair
| null |
|
2410.17599
|
val_fig_0011
|
The underfitting delta model had a win rate of 54 when \alpha was 1.0, which increased to 64 when \alpha was 2.0.
|
Supported
|
(a) Impact on instruction tuning; (b) Impact on unlearning; Impact of strength coefficient \alpha on performance
|
figure
|
figures/dev/val_fig_0011.png
|
For instruction tuning, we tested values within the range of [0.5, 2] and evaluated them on the first 50 data points of AlpacaEval. As shown in Figure 4(a) , we found that the performance was optimal when the \alpha value was 1.0, and increasing or decreasing \alpha resulted in decreased performance. This is similar to causing the delta model to overfit or underfit, so we explored whether adjusting the \alpha value during inference could counteract the overfitting and underfitting during training. Specifically, the delta model performed best after training for four epochs, and we selected the checkpoint after training for two epochs as the underfitting scenario and the checkpoint after training for eight epochs as the overfitting scenario. We found that adjusting the \alpha value could counteract the overfitting or underfitting during training.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2410.17599.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0039
| null |
|
2201.12414
|
val_fig_0014
|
We also see that acquiring based on the lookahead posteriors incurs only a minimal increase in error compared to the sampling-based method, despite being far more efficient.
|
Supported
|
Figure 7: Average RMSE of reconstructions for greedy active feature acquisition.
|
figure
|
figures/dev/val_fig_0014.png
|
Figure 7 presents the root-mean-square error, averaged across the test instances, when imputing \mathbf{x}_{u} with different numbers of acquired features. We see that our models are able to achieve lower error than EDDI.
|
ml
|
yes
|
Graph Flip
|
papers/dev/ml_2201.12414.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0040
| null |
|
2201.12414
|
val_fig_0015
|
We also see that acquiring based on the lookahead posteriors incurs only a minimal increase in error compared to the sampling-based method, despite being far more efficient.
|
Refuted
|
Figure 7: Average RMSE of reconstructions for greedy active feature acquisition.
|
figure
|
figures/dev/val_fig_0015.png
|
Figure 7 presents the root-mean-square error, averaged across the test instances, when imputing \mathbf{x}_{u} with different numbers of acquired features. We see that our models are able to achieve lower error than EDDI.
|
ml
|
yes
|
Graph Flip
|
papers/dev/ml_2201.12414.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0040
| null |
|
2502.17771
|
val_fig_0016
|
Better sample selection due to jittering subsequently leads to significantly better performance in regression (Fig. 4 (c)).
|
Supported
|
Figure 4: Jittering analysis. (a) When trained without jittering, feature extractors easily overfit the noisy training data (yellow-shaded region), while jittering-regularized feature extractors robustly learn from the noisy training data. (b) Overfitted feature extractors (yellow-shaded region) on noisy samples increase their likelihood, leading to a higher selection rate and ERR.
It exhibits nearly twice higher ERR (a lower value is better).
(c) Most importantly, jittering regularization improves performance in regression.
|
figure
|
figures/dev/val_fig_0016.png
|
Fig. 4 (a) shows that with jittering, the feature extractor exhibits higher accuracy on the clean test data due to its regularization effect. In the sample selection stage (Fig. 4 (b)),
the feature extractor trained without jittering easily overfits the noise, resulting in over-selection and higher ERR (§ 4.2 ). In contrast, the jittered feature extractor achieves a relatively low selection rate with halved ERR, indicating that the noisier samples are filtered out.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2502.17771.json
|
Additionally, color changed.
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0041
| null |
2502.17771
|
val_fig_0017
|
Better sample selection due to jittering subsequently leads to significantly better performance in regression (Fig. 4 (c)).
|
Refuted
|
Figure 4: Jittering analysis. (a) When trained without jittering, feature extractors easily overfit the noisy training data (yellow-shaded region), while jittering-regularized feature extractors robustly learn from the noisy training data. (b) Overfitted feature extractors (yellow-shaded region) on noisy samples increase their likelihood, leading to a higher selection rate and ERR.
It exhibits nearly twice higher ERR (a lower value is better).
(c) Most importantly, jittering regularization improves performance in regression.
|
figure
|
figures/dev/val_fig_0017.png
|
Fig. 4 (a) shows that with jittering, the feature extractor exhibits higher accuracy on the clean test data due to its regularization effect. In the sample selection stage (Fig. 4 (b)),
the feature extractor trained without jittering easily overfits the noise, resulting in over-selection and higher ERR (§ 4.2 ). In contrast, the jittered feature extractor achieves a relatively low selection rate with halved ERR, indicating that the noisier samples are filtered out.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2502.17771.json
|
Additionally, color changed.
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0041
| null |
2502.17771
|
val_fig_0018
|
ConFrag achieves the lowest ERR while maintaining above-average selection rates, resulting in the best MRAE.
|
Supported
|
Figure 5: Selection/ERR/MRAE comparison between ConFrag and strong baselines of CNLCU-H, BMM, DY-S, AUX and Selfie on IMDB-Clean-B. We exclude the performance during the warm-up.
|
figure
|
figures/dev/val_fig_0018.png
|
Selection/ERR/MRAE comparison. Fig. 5 compares ConFrag to five selection and refurbishment baselines of CNLCU-H, BMM, DY-S, AUX, Selfie on IMDB-Clean-B using the selection rate, ERR, and MRAE. Ideally, a model should attain a high selection rate and a low ERR. It is worth noting that the relative importance of ERR and selection rate may vary depending on the dataset and the task.
|
ml
|
yes
|
Graph Flip
|
papers/dev/ml_2502.17771.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0042
| null |
|
2502.17771
|
val_fig_0019
|
ConFrag achieves the lowest ERR while maintaining above-average selection rates, resulting in the best MRAE.
|
Refuted
|
Figure 5: Selection/ERR/MRAE comparison between ConFrag and strong baselines of CNLCU-H, BMM, DY-S, AUX and Selfie on IMDB-Clean-B. We exclude the performance during the warm-up.
|
figure
|
figures/dev/val_fig_0019.png
|
Selection/ERR/MRAE comparison. Fig. 5 compares ConFrag to five selection and refurbishment baselines of CNLCU-H, BMM, DY-S, AUX, Selfie on IMDB-Clean-B using the selection rate, ERR, and MRAE. Ideally, a model should attain a high selection rate and a low ERR. It is worth noting that the relative importance of ERR and selection rate may vary depending on the dataset and the task.
|
ml
|
yes
|
Graph Flip
|
papers/dev/ml_2502.17771.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0042
| null |
|
2502.17771
|
val_fig_0020
|
The contrastive fragment pairing demonstrates superior performance to other pairing methods.
|
Supported
|
Figure 6: Analysis with 40% symmetric noise. (a) Comparison between the proposed contrastive pairing and other pairings on IMDB-Clean-B.
(b) Comparison between fragment numbers on SHIFT15M-B and IMDB-Clean-B.
|
figure
|
figures/dev/val_fig_0020.png
|
Fragment pairing. Fig. 6 (a) compares contrastive pairing to alternative pairings
using MRAE as a metric.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2502.17771.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0043
| null |
|
2502.17771
|
val_fig_0021
|
The contrastive fragment pairing demonstrates superior performance to other pairing methods.
|
Refuted
|
Figure 6: Analysis with 40% symmetric noise. (a) Comparison between the proposed contrastive pairing and other pairings on IMDB-Clean-B.
(b) Comparison between fragment numbers on SHIFT15M-B and IMDB-Clean-B.
|
figure
|
figures/dev/val_fig_0021.png
|
Fragment pairing. Fig. 6 (a) compares contrastive pairing to alternative pairings
using MRAE as a metric.
|
ml
|
yes
|
Legend Swap
|
papers/dev/ml_2502.17771.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0043
| null |
|
2403.19137
|
val_fig_0029
|
In Fig. 5(a) , the accuracy is poorer in range [1,10] , grows in range [10,20] , and saturates thereafter.
|
Supported
|
(a) Accuracy-runtime tradeoff; (b) Parameter count comparison; Ablations on CIFAR100 showing: (a) performance trade-off with the number of MC samples M , (b) the number of trainable parameters in different finetuning methods.
|
figure
|
figures/dev/val_fig_0029.png
|
We vary the number of MC samples M from 1 to 50.
|
ml
|
no
|
Graph Flip
|
papers/dev/ml_2403.19137.json
|
CC BY 4.0
|
http://creativecommons.org/licenses/by/4.0/
|
0044
| null |
|
| 2403.07937 | val_fig_0032 | On the other hand, increasing training data appears to have only a minor influence on robustness. | Supported | (a); (b); (c); NWER on non-adversarial and adversarial perturbations plotted against the number of parameters (a & c) and hours of training data used to train the DNN (b). Figures (a) and (b) plot all the models, while (c) plots three families of models (indicated by the color). The models within the family share similar architectures, and training datasets. | figure | figures/dev/val_fig_0032.png | To determine if the prevailing practice of training DNNs with more parameters on larger datasets is yielding improvements in robustness, we plot NWER against the number of model parameters and the size of the training data of all the candidate models. These plots are shown in Figures 4(a) and 4(b) . We note that increasing model size is correlated with improved robustness (lower NWER), however, the effect is more inconsistent and weaker for adversarial perturbations. To further isolate the impact of the model size we control the architecture and training data and plot the NWER of models from the same family in Figure 4(c) , which have similar architectures and training datasets. We note that larger models are more robust in the Whisper and Wav2Vec-2.0 families, but, surprisingly, not in the HuBert family. | ml | no | Supported_claim_only | papers/dev/ml_2403.07937.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | no pair | null |
|
| 2403.19863 | val_fig_0033 | An intriguing observation is that the decodability of both attributes decreases as the depth of the neural network increases, which is also observed in [ 10 ] . | Supported | (a); (b); Exploring the Effect of depth modulation: (a) illustrates how the linear decodability of features decreases as neural network depth increases, while (b) dives into the training dynamics of MLPs with varying depths under ERM. | figure | figures/dev/val_fig_0033.png | We employ the concept of feature decodability to assess the extent to which the specific features of a given dataset can be reliably decoded from the models with varying depths. Hermann et al . [ 10 ] demonstrated that the visual features can be decoded from the higher layers of untrained models. Additionally, they observed that the feature decodability from an untrained model has a significant impact in determining which features are emphasized and suppressed during the model training. Following their approach, we specifically focus on assessing the decodability of bias and core attributes from the penultimate layer of untrained models. In order to evaluate the decodability of an attribute in a dataset, we train a decoder to map the activations from the penultimate layer of a frozen, untrained model to attribute labels. The decoder comprises a single linear layer followed by a softmax activation function. The decoder is trained using an unbiased validation set associated with the dataset, where each instance is labeled according to the attribute under consideration. Subsequently, the linear decodability of the attribute, measured in accuracy, is reported on the unbiased test set. We investigate the decodability of digit and color attributes in the CMNIST dataset from MLP models with varying depths, including 3, 4, and 5 layers, and the results are depicted in Fig. 2(a) . | ml | no | Graph Flip | papers/dev/ml_2403.19863.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0045 | null |
|
| 2403.19863 | val_fig_0035 | Hence, the disparity in decodability between color and digit attributes becomes significantly more pronounced in a 5-layer MLP in comparison to a 3-layer MLP. | Supported | (a); (b); Exploring the Effect of depth modulation: (a) illustrates how the linear decodability of features decreases as neural network depth increases, while (b) dives into the training dynamics of MLPs with varying depths under ERM. | figure | figures/dev/val_fig_0035.png | As observed in Fig. 2(b) , the initial phases of training for both networks emphasize color attribute (since bias is easy to learn), resulting in a notable enhancement in the decodability of color in both models. Also, as the training progresses, the decodability of the digit is higher in a 3-layer model when compared to a 5-layer model. | ml | no | Graph Flip | papers/dev/ml_2403.19863.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0046 | null |
|
| 2403.19863 | val_fig_0036 | As depicted in Fig. 4 , prior to the training, the deep branch demonstrates lower linear decodability for both the digit (core attribute) and color (bias attribute) in comparison to the linear decodability observed in the shallow branch. | Supported | Figure 4 : Training dynamics of DeNetDM during initial stages of training monitored in terms of linear decodability of bias attribute (color) and core attribute (digit). | figure | figures/dev/val_fig_0036.png | In Sec. 3.1 , we discussed the variability in the linear decodability at various depths and how the disparity in linear decodability between biased attributes and target attributes forms a significant motivation for debiasing. To further validate this intuition and identify the elements contributing to its effectiveness, we delve into the analysis of the training dynamics of DeNetDM during the initial training stages. We consider the training of Colored MNIST with 1% skewness due to its simplicity and ease of analysis. Figure 4 shows how linear decodability of attributes varies across different branches of DeNetDM during training. | ml | no | Supported_claim_only | papers/dev/ml_2403.19863.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | no pair | null |
|
| 2403.17919 | val_fig_0042 | In particular, LISA requires much less activation memory consumption than LoRA since it does not introduce additional parameters brought by the adaptor. | Supported | Figure 3: GPU memory consumption of LLaMA-2-7B with different methods and batch size 1. | figure | figures/dev/val_fig_0042.png | In Figure 3 , it is worth noticing that the memory reduction in LISA allows LLaMA-2-7B to be trained on a single RTX4090 (24GB) GPU, which makes high-quality fine-tuning affordable even on a laptop computer. | ml | no | Category Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0047 | null |
|
| 2403.17919 | val_fig_0043 | In particular, LISA requires much less activation memory consumption than LoRA since it does not introduce additional parameters brought by the adaptor. | Refuted | Figure 3: GPU memory consumption of LLaMA-2-7B with different methods and batch size 1. | figure | figures/dev/val_fig_0043.png | In Figure 3 , it is worth noticing that the memory reduction in LISA allows LLaMA-2-7B to be trained on a single RTX4090 (24GB) GPU, which makes high-quality fine-tuning affordable even on a laptop computer. | ml | no | Category Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0047 | null |
|
| 2403.17919 | val_fig_0045 | As shown in Figure 4 , LISA provides almost 2.9\times speedup when compared with full-parameter tuning, and \sim 1.5\times speedup against LoRA, partially due to the removal of adaptor structures. | Supported | Figure 4: Single-iteration time cost of LLaMA-2-7B with different methods and batch size 1. | figure | figures/dev/val_fig_0045.png | On top of that, a reduction in memory footprint from LISA also leads to an acceleration in speed. | ml | no | Category Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0048 | null |
|
| 2403.17919 | val_fig_0046 | As shown in Figure 4 , LISA provides almost 2.9\times speedup when compared with full-parameter tuning, and \sim 1.5\times speedup against LoRA, partially due to the removal of adaptor structures. | Refuted | Figure 4: Single-iteration time cost of LLaMA-2-7B with different methods and batch size 1. | figure | figures/dev/val_fig_0046.png | On top of that, a reduction in memory footprint from LISA also leads to an acceleration in speed. | ml | no | Category Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0048 | null |
|
| 2403.17919 | val_fig_0047 | Specifically, Figure 5 highlights the model’s performance in various aspects, particularly LISA’s superiority over all methods in aspects of Writing, Roleplay, and STEM. | Supported | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0047.png | As shown in Table 5 , LISA consistently produces better or on-par performance when compared with LoRA. Furthermore, in instruction-tuning tasks, LISA again surpasses full-parameter training, rendering it a competitive method for this setting. | ml | no | Legend Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0049 | null |
|
| 2403.17919 | val_fig_0048 | Specifically, Figure 5 highlights the model’s performance in various aspects, particularly LISA’s superiority over all methods in aspects of Writing, Roleplay, and STEM. | Refuted | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0048.png | As shown in Table 5 , LISA consistently produces better or on-par performance when compared with LoRA. Furthermore, in instruction-tuning tasks, LISA again surpasses full-parameter training, rendering it a competitive method for this setting. | ml | no | Legend Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0049 | null |
|
| 2403.17919 | val_fig_0049 | On top of that, LISA displays consistently higher performance than LoRA on all subtasks, underscoring LISA’s effectiveness across diverse tasks. | Supported | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0049.png | As shown in Table 5 , LISA consistently produces better or on-par performance when compared with LoRA. Furthermore, in instruction-tuning tasks, LISA again surpasses full-parameter training, rendering it a competitive method for this setting. Specifically, Figure 5 highlights the model’s performance in various aspects, particularly LISA’s superiority over all methods in aspects of Writing, Roleplay, and STEM. | ml | no | Legend Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0050 | null |
|
| 2403.17919 | val_fig_0050 | On top of that, LISA displays consistently higher performance than LoRA on all subtasks, underscoring LISA’s effectiveness across diverse tasks. | Refuted | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0050.png | As shown in Table 5 , LISA consistently produces better or on-par performance when compared with LoRA. Furthermore, in instruction-tuning tasks, LISA again surpasses full-parameter training, rendering it a competitive method for this setting. Specifically, Figure 5 highlights the model’s performance in various aspects, particularly LISA’s superiority over all methods in aspects of Writing, Roleplay, and STEM. | ml | no | Legend Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0050 | null |
|
| 2403.17919 | val_fig_0051 | The chart also contrasts the yellow LoRA line with the purple Vanilla line, revealing that in large models like the 70B, LoRA does not perform as well as expected, showing only marginal improvements on specific aspects. | Supported | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0051.png | As shown in Table 5 , LISA consistently produces better or on-par performance when compared with LoRA. Furthermore, in instruction-tuning tasks, LISA again surpasses full-parameter training, rendering it a competitive method for this setting. Specifically, Figure 5 highlights the model’s performance in various aspects, particularly LISA’s superiority over all methods in aspects of Writing, Roleplay, and STEM. On top of that, LISA displays consistently higher performance than LoRA on all subtasks, underscoring LISA’s effectiveness across diverse tasks. | ml | no | Supported_claim_only | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | no pair | null |
|
| 2403.17919 | val_fig_0052 | It is also intriguing to observe from Figure 5 that Vanilla LLaMA-2-70B excelled in Writing, but full-parameter fine-tuning led to a decline in these areas, a phenomenon known as the “Alignment Tax” (Ouyang et al., 2022 ) . | Supported | Figure 5: Different aspects of LLaMA-2-70B model on MT-Bench. | figure | figures/dev/val_fig_0052.png | ml | no | Legend Swap | papers/dev/ml_2403.17919.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0051 | null |
| 2402.12416 | val_fig_0053 | Notably, AgA with \lambda=0.1 underperforms, even when contrasted with AgA’s ablation study. | Supported | (a) MMM2: AgA with various \lambda and AgA/Sign; (b) MMM2: comparison between AgA and baselines; The comparison of win rates on the mixed-motive MMM2 map of SMAC. The left figure shows results of varying alignment parameter \lambda values in AgA, as well as AgA sans sign selection (AgA/Sign), The right figure compare the win rate of AgA (\lambda=100) with baselines. The bold lines illustrate the mean of the win rate over 3 seeds, while the encompassing shaded regions depict the standard deviation. | figure | figures/dev/val_fig_0053.png | Results. Fig. 5(a) compares average win rates for various \lambda parameters in AgA, also encompassing AgA without sign selection, as stated in Corollary 4.3 . The results suggest comparable performance for \lambda=1 and 100 , with a lower value of 0.1 trailing noticeably. | ml | no | Legend Swap | papers/dev/ml_2402.12416.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0052 | null |
|
| 2402.12416 | val_fig_0054 | Notably, AgA with \lambda=0.1 underperforms, even when contrasted with AgA’s ablation study. | Refuted | (a) MMM2: AgA with various \lambda and AgA/Sign; (b) MMM2: comparison between AgA and baselines; The comparison of win rates on the mixed-motive MMM2 map of SMAC. The left figure shows results of varying alignment parameter \lambda values in AgA, as well as AgA sans sign selection (AgA/Sign), The right figure compare the win rate of AgA (\lambda=100) with baselines. The bold lines illustrate the mean of the win rate over 3 seeds, while the encompassing shaded regions depict the standard deviation. | figure | figures/dev/val_fig_0054.png | Results. Fig. 5(a) compares average win rates for various \lambda parameters in AgA, also encompassing AgA without sign selection, as stated in Corollary 4.3 . The results suggest comparable performance for \lambda=1 and 100 , with a lower value of 0.1 trailing noticeably. | ml | no | Legend Swap | papers/dev/ml_2402.12416.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0052 | null |
|
| 2405.14507 | val_fig_0056 | However, in our exploratory experiment on Mixtral 8x7B [ 17 ] , we find simply raising the number of activated experts (blue lines in Figure 1 ) does not lead to stable improvements and may even hurt performance on different tasks. | Supported | Figure 1 : Performance comparison between increasing the value of top- k ( i.e. , ensemble routing) and SCMoE. SCMoE surpasses the performance of ensemble routing across various benchmarks. | figure | figures/dev/val_fig_0056.png | In this paper, we investigate the impact of unchosen experts 1 1 1 Unchosen experts refer to the experts not selected by default routing ( e.g. , top-2 routing in Mixtral 8x7B). on the performance of MoE models and explore their suitable usage. A direct hypothesis is that incorporating more experts improves MoE models and helps solve more difficult problems [ 3 ; 36 ; 14 ] . | ml | no | Graph Flip | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0053 | null |
|
| 2405.14507 | val_fig_0057 | For instance, top-3 routing yields best results on GSM8K, while top-4 routing achieves the highest accuracy on HumanEval and StrategyQA. | Supported | Figure 1 : Performance comparison between increasing the value of top- k ( i.e. , ensemble routing) and SCMoE. SCMoE surpasses the performance of ensemble routing across various benchmarks. | figure | figures/dev/val_fig_0057.png | As revealed by Figure 1 , using default top-2 routing is not optimal for all tasks. | ml | no | Graph Swap | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0054 | null |
|
| 2405.14507 | val_fig_0058 | For instance, top-3 routing yields best results on GSM8K, while top-4 routing achieves the highest accuracy on HumanEval and StrategyQA. | Refuted | Figure 1 : Performance comparison between increasing the value of top- k ( i.e. , ensemble routing) and SCMoE. SCMoE surpasses the performance of ensemble routing across various benchmarks. | figure | figures/dev/val_fig_0058.png | As revealed by Figure 1 , using default top-2 routing is not optimal for all tasks. | ml | no | Graph Swap | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0054 | null |
|
| 2405.14507 | val_fig_0059 | Furthermore, SCMoE can enhance the major@20 accuracy from 75.59 to 78.31 (+2.72) on GSM8K. | Supported | Figure 5 : Experimental results on combining SCMoE with self-consistency on GSM8K using Mixtral 8x7B. | figure | figures/dev/val_fig_0059.png | Using self-consistency [ 35 ] for multiple sampling and taking a majority vote to determine the final answer is a common method to improve LLMs’ performance. Therefore, we explore whether SCMoE can combined with self-consistency. For vanilla self-consistency, we use temperature sampling with temperature \tau=0.7 to reach the best baseline performance. For self-consistency with SCMoE, we simply employ \beta=0.5 , rank-3 routing as weak activation, according to the best hyperparameters setting from Table 8 . It is worth noting that since SCMoE already has a mask \alpha=0.1 to limit the sampling range of the vocabulary, we do not perform any additional temperature processing on the final logits. As shown in Figure 5 , SCMoE (67.94) yields comparable results with major@5 (66.87). | ml | no | Legend Swap | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0055 | null |
|
| 2405.14507 | val_fig_0060 | Furthermore, SCMoE can enhance the major@20 accuracy from 75.59 to 78.31 (+2.72) on GSM8K. | Refuted | Figure 5 : Experimental results on combining SCMoE with self-consistency on GSM8K using Mixtral 8x7B. | figure | figures/dev/val_fig_0060.png | Using self-consistency [ 35 ] for multiple sampling and taking a majority vote to determine the final answer is a common method to improve LLMs’ performance. Therefore, we explore whether SCMoE can combined with self-consistency. For vanilla self-consistency, we use temperature sampling with temperature \tau=0.7 to reach the best baseline performance. For self-consistency with SCMoE, we simply employ \beta=0.5 , rank-3 routing as weak activation, according to the best hyperparameters setting from Table 8 . It is worth noting that since SCMoE already has a mask \alpha=0.1 to limit the sampling range of the vocabulary, we do not perform any additional temperature processing on the final logits. As shown in Figure 5 , SCMoE (67.94) yields comparable results with major@5 (66.87). | ml | no | Legend Swap | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0055 | null |
|
| 2205.11361 | val_fig_0062 | Figure 2 demonstrates that MPGD can lead to successful optimization of the widening loss whereas the baseline GD and GD with Gaussian perturbations lead to poor solutions. | Supported | Figure 2: Evolution of GD with Gaussian perturbations (left plot) vs. that of MPGD with \gamma=0.7 , \beta=0.5 (middle plot), using \mu=0.02 , \sigma=0.05 , and \eta=0.01 ( m=100 ). Here we see that MPGD leads to successful optimization of the widening valley loss whereas that with Gaussian perturbations fails to converge. Moreover, MPGD effectively reduces the trace of loss Hessian (see right plot), steering the GD iterates to flatter region of the loss landscape. | figure | figures/dev/val_fig_0062.png | For the experiments, we start optimizing from the point (u_{0},0) , where u_{0}\sim 5\cdot\mathcal{U}(d) with d=10 , and use the learning rate \eta=0.01 . We study and compare the behavior of the following schemes: (i) baseline (vanilla GD), (ii) GD with uncorrelated Gaussian noise injection instead, and (iii) MPGD. | ml | no | Supported_claim_only | papers/dev/ml_2205.11361.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | no pair | null |
End of preview.
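The viewer failure above is a schema-cast problem: the `datasets` JSON builder unifies every data file under one Arrow schema, and `dev_task2_release.json` contributes 4 columns the other files lack while omitting 4 they have. A minimal sketch of that mismatch, using only the column names quoted in the error message (the files' shared columns are not listed there and are left out of this sketch):

```python
# Column sets copied from the CastError message above; this models only the
# mismatching fields, not the dataset's full schema (an assumption of this sketch).
task2_only = {"sample_id", "question", "evidence_id_1", "evidence_id_2"}
task1_only = {"claim_id", "evi_path", "evi_path_original", "claim_id_pair"}

# The builder casts every file to one schema, so any file that adds or drops
# columns relative to that schema triggers exactly this kind of error.
new_columns = sorted(task2_only - task1_only)      # present only in dev_task2_release.json
missing_columns = sorted(task1_only - task2_only)  # expected by the schema but absent
print(len(new_columns), "new columns:", new_columns)
print(len(missing_columns), "missing columns:", missing_columns)
```

As the error message itself suggests, the fix is to give each release file matching columns or to expose them as separate configurations, so that no single schema has to cover both tasks.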