| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2404.00341 | Ahmed R. Sadik Dr.-Ing. | Ahmed R. Sadik, Bodo Urban | Ontology in Holonic Cooperative Manufacturing: A Solution to Share and Exchange the Knowledge | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cooperative manufacturing is a new trend in industry that depends on the existence of a collaborative robot. A collaborative robot is usually a lightweight robot capable of operating safely with a human co-worker in a shared work environment. During this cooperation, a vast amount of information is exchanged between the collaborative robot and the worker. This information constitutes the cooperative manufacturing knowledge, which describes the production components and environment. In this research, we propose a holonic control solution that uses the ontology concept to represent the cooperative manufacturing knowledge. The holonic control solution is implemented as an autonomous multi-agent system that exchanges the manufacturing knowledge based on an ontology model. Ultimately, the research illustrates and implements the proposed solution over a cooperative assembly scenario involving two workers and one collaborative robot, who cooperate to assemble a customized product. | [{"created": "Sat, 30 Mar 2024 12:38:47 GMT", "version": "v1"}] | 2024-04-02 | [["Sadik", "Ahmed R.", ""], ["Urban", "Bodo", ""]] | Cooperative manufacturing is a new trend in industry that depends on the existence of a collaborative robot. A collaborative robot is usually a lightweight robot capable of operating safely with a human co-worker in a shared work environment. During this cooperation, a vast amount of information is exchanged between the collaborative robot and the worker. This information constitutes the cooperative manufacturing knowledge, which describes the production components and environment. In this research, we propose a holonic control solution that uses the ontology concept to represent the cooperative manufacturing knowledge. The holonic control solution is implemented as an autonomous multi-agent system that exchanges the manufacturing knowledge based on an ontology model. Ultimately, the research illustrates and implements the proposed solution over a cooperative assembly scenario involving two workers and one collaborative robot, who cooperate to assemble a customized product. |
| 2406.15202 | Lucie Guillou | Lucie Guillou, Arnaud Sangnier, Nathalie Sznajder | Phase-Bounded Broadcast Networks over Topologies of Communication | long version of a paper accepted to appear at CONCUR 2024 | null | null | null | cs.LO cs.MA | http://creativecommons.org/licenses/by/4.0/ | We study networks of processes that all execute the same finite-state protocol and communicate through broadcasts. The processes are organized in a graph (a topology), and only the neighbors of a process in this graph can receive its broadcasts. The coverability problem asks, given a protocol and a state of the protocol, whether there is a topology for the processes such that at least one of them reaches the given state. This problem is undecidable. We study here an under-approximation of the problem where processes alternate a bounded number of times $k$ between phases of broadcasting and phases of receiving messages. We show that, while the problem remains undecidable when $k$ is greater than 6, it becomes decidable for $k=2$, and EXPSPACE-complete for $k=1$. Furthermore, we show that if we restrict ourselves to line topologies, the problem is in $P$ for $k=1$ and $k=2$. | [{"created": "Fri, 21 Jun 2024 14:43:23 GMT", "version": "v1"}, {"created": "Thu, 4 Jul 2024 11:02:35 GMT", "version": "v2"}] | 2024-07-08 | [["Guillou", "Lucie", ""], ["Sangnier", "Arnaud", ""], ["Sznajder", "Nathalie", ""]] | We study networks of processes that all execute the same finite-state protocol and communicate through broadcasts. The processes are organized in a graph (a topology), and only the neighbors of a process in this graph can receive its broadcasts. The coverability problem asks, given a protocol and a state of the protocol, whether there is a topology for the processes such that at least one of them reaches the given state. This problem is undecidable. We study here an under-approximation of the problem where processes alternate a bounded number of times $k$ between phases of broadcasting and phases of receiving messages. We show that, while the problem remains undecidable when $k$ is greater than 6, it becomes decidable for $k=2$, and EXPSPACE-complete for $k=1$. Furthermore, we show that if we restrict ourselves to line topologies, the problem is in $P$ for $k=1$ and $k=2$. |
| 2305.08197 | Ayman Elhalwagy | Ayman Elhalwagy and Tatiana Kalganova | A Dataset Fusion Algorithm for Generalised Anomaly Detection in Homogeneous Periodic Time Series Datasets | This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible | null | null | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by/4.0/ | The generalisation of Neural Networks (NN) to multiple datasets is often overlooked in the literature due to NNs typically being optimised for specific data sources. This becomes especially challenging in time-series-based multi-dataset models due to difficulties in fusing sequential data from different sensors and collection specifications. In a commercial environment, however, generalisation can effectively utilise available data and computational power, which is essential in the context of Green AI, the sustainable development of AI models. This paper introduces "Dataset Fusion," a novel dataset composition algorithm for fusing periodic signals from multiple homogeneous datasets into a single dataset while retaining unique features for generalised anomaly detection. The proposed approach, tested on a case study of 3-phase current data from 2 different homogeneous Induction Motor (IM) fault datasets using an unsupervised LSTMCaps NN, significantly outperforms conventional training approaches with an Average F1 score of 0.879 and effectively generalises across all datasets. The proposed approach was also tested with varying percentages of the training data, in line with the principles of Green AI. Results show that using only 6.25\% of the training data, translating to a 93.7\% reduction in computational power, results in a mere 4.04\% decrease in performance, demonstrating the advantages of the proposed approach in terms of both performance and computational efficiency. Moreover, the algorithm's effectiveness under non-ideal conditions highlights its potential for practical use in real-world applications. | [{"created": "Sun, 14 May 2023 16:24:09 GMT", "version": "v1"}] | 2023-05-16 | [["Elhalwagy", "Ayman", ""], ["Kalganova", "Tatiana", ""]] | The generalisation of Neural Networks (NN) to multiple datasets is often overlooked in the literature due to NNs typically being optimised for specific data sources. This becomes especially challenging in time-series-based multi-dataset models due to difficulties in fusing sequential data from different sensors and collection specifications. In a commercial environment, however, generalisation can effectively utilise available data and computational power, which is essential in the context of Green AI, the sustainable development of AI models. This paper introduces "Dataset Fusion," a novel dataset composition algorithm for fusing periodic signals from multiple homogeneous datasets into a single dataset while retaining unique features for generalised anomaly detection. The proposed approach, tested on a case study of 3-phase current data from 2 different homogeneous Induction Motor (IM) fault datasets using an unsupervised LSTMCaps NN, significantly outperforms conventional training approaches with an Average F1 score of 0.879 and effectively generalises across all datasets. The proposed approach was also tested with varying percentages of the training data, in line with the principles of Green AI. Results show that using only 6.25\% of the training data, translating to a 93.7\% reduction in computational power, results in a mere 4.04\% decrease in performance, demonstrating the advantages of the proposed approach in terms of both performance and computational efficiency. Moreover, the algorithm's effectiveness under non-ideal conditions highlights its potential for practical use in real-world applications. |
| 2203.00815 | Ola Alkhatib Ms. | Ayman Alahmar and Ola Alkhatib | Computerization of Clinical Pathways: A Literature Review and Directions for Future Research | 12 pages, 4 figures, 3 tables | 2nd. International Symposium of Scientific Research and Innovative Studies (ISSRIS'22), March 2-5, 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Clinical Pathways (CP) are medical management plans developed to standardize patient treatment activities, optimize resource usage, reduce expenses, and improve the quality of healthcare services. Most CPs currently in use are paper-based documents (i.e., not computerized). CP computerization has been an active research topic since the inception of CP use in hospitals. This literature review research aims to examine studies that focused on CP computerization and offers recommendations for future research in this important research area. Some critical research suggestions include centralizing computerized CPs in Healthcare Information Systems (HIS), CP term standardization using international medical terminology systems, developing a global CP-specific digital coding system, creating a unified CP meta-ontology, developing independent Clinical Pathway Management Systems (CPMS), and supporting CPMSs with machine learning sub-systems. | [{"created": "Wed, 2 Mar 2022 01:38:40 GMT", "version": "v1"}] | 2022-03-03 | [["Alahmar", "Ayman", ""], ["Alkhatib", "Ola", ""]] | Clinical Pathways (CP) are medical management plans developed to standardize patient treatment activities, optimize resource usage, reduce expenses, and improve the quality of healthcare services. Most CPs currently in use are paper-based documents (i.e., not computerized). CP computerization has been an active research topic since the inception of CP use in hospitals. This literature review research aims to examine studies that focused on CP computerization and offers recommendations for future research in this important research area. Some critical research suggestions include centralizing computerized CPs in Healthcare Information Systems (HIS), CP term standardization using international medical terminology systems, developing a global CP-specific digital coding system, creating a unified CP meta-ontology, developing independent Clinical Pathway Management Systems (CPMS), and supporting CPMSs with machine learning sub-systems. |
| 1807.08934 | Vinod Kumar Chauhan | Vinod Kumar Chauhan, Anuj Sharma, Kalpana Dahiya | SAAGs: Biased Stochastic Variance Reduction Methods for Large-scale Learning | Final journal version. Appl Intell (2019) | null | 10.1007/s10489-019-01450-3 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic approximation is one of the effective approaches to dealing with large-scale machine learning problems, and recent research has focused on reducing the variance caused by noisy approximations of the gradients. In this paper, we propose novel variants of SAAG-I and II (Stochastic Average Adjusted Gradient) (Chauhan et al. 2017), called SAAG-III and IV, respectively. Unlike SAAG-I, the starting point in SAAG-III is set to the average of the previous epoch; unlike SAAG-II, the snap point and starting point in SAAG-IV are set to the average and the last iterate of the previous epoch, respectively. To determine the step size, we use Stochastic Backtracking-Armijo line Search (SBAS), which performs line search only on a selected mini-batch of data points. Since backtracking line search is not suitable for large-scale problems, and the constants used to find the step size, such as the Lipschitz constant, are not always available, SBAS can be very effective in such cases. We extend SAAGs (I, II, III and IV) to solve non-smooth problems and design two update rules for smooth and non-smooth problems. Moreover, our theoretical results prove linear convergence of SAAG-IV, in expectation, for all four combinations of smoothness and strong-convexity. Finally, our experimental studies demonstrate the efficacy of the proposed methods against state-of-the-art techniques. | [{"created": "Tue, 24 Jul 2018 07:36:21 GMT", "version": "v1"}, {"created": "Mon, 24 Dec 2018 10:04:22 GMT", "version": "v2"}, {"created": "Sat, 6 Apr 2019 05:04:23 GMT", "version": "v3"}] | 2019-04-09 | [["Chauhan", "Vinod Kumar", ""], ["Sharma", "Anuj", ""], ["Dahiya", "Kalpana", ""]] | Stochastic approximation is one of the effective approaches to dealing with large-scale machine learning problems, and recent research has focused on reducing the variance caused by noisy approximations of the gradients. In this paper, we propose novel variants of SAAG-I and II (Stochastic Average Adjusted Gradient) (Chauhan et al. 2017), called SAAG-III and IV, respectively. Unlike SAAG-I, the starting point in SAAG-III is set to the average of the previous epoch; unlike SAAG-II, the snap point and starting point in SAAG-IV are set to the average and the last iterate of the previous epoch, respectively. To determine the step size, we use Stochastic Backtracking-Armijo line Search (SBAS), which performs line search only on a selected mini-batch of data points. Since backtracking line search is not suitable for large-scale problems, and the constants used to find the step size, such as the Lipschitz constant, are not always available, SBAS can be very effective in such cases. We extend SAAGs (I, II, III and IV) to solve non-smooth problems and design two update rules for smooth and non-smooth problems. Moreover, our theoretical results prove linear convergence of SAAG-IV, in expectation, for all four combinations of smoothness and strong-convexity. Finally, our experimental studies demonstrate the efficacy of the proposed methods against state-of-the-art techniques. |
| 2312.00622 | Jose Pablo Folch | Jose Pablo Folch, James Odgers, Shiqiang Zhang, Robert M Lee, Behrang Shafei, David Walz, Calvin Tsay, Mark van der Wilk, Ruth Misener | Practical Path-based Bayesian Optimization | 6 main pages, 12 with references and appendix. 4 figures, 2 tables. To appear in NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World | NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World | null | null | cs.LG math.OC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a surge in interest in data-driven experimental design with applications to chemical engineering and drug manufacturing. Bayesian optimization (BO) has proven to be adaptable to such cases, since we can model the reactions of interest as expensive black-box functions. Sometimes, the cost of these black-box functions can be separated into two parts: (a) the cost of the experiment itself, and (b) the cost of changing the input parameters. In this short paper, we extend the SnAKe algorithm to deal with both types of costs simultaneously. We further propose extensions to the case of a maximum allowable input change, as well as to the multi-objective setting. | [{"created": "Fri, 1 Dec 2023 14:39:11 GMT", "version": "v1"}] | 2023-12-04 | [["Folch", "Jose Pablo", ""], ["Odgers", "James", ""], ["Zhang", "Shiqiang", ""], ["Lee", "Robert M", ""], ["Shafei", "Behrang", ""], ["Walz", "David", ""], ["Tsay", "Calvin", ""], ["van der Wilk", "Mark", ""], ["Misener", "Ruth", ""]] | There has been a surge in interest in data-driven experimental design with applications to chemical engineering and drug manufacturing. Bayesian optimization (BO) has proven to be adaptable to such cases, since we can model the reactions of interest as expensive black-box functions. Sometimes, the cost of these black-box functions can be separated into two parts: (a) the cost of the experiment itself, and (b) the cost of changing the input parameters. In this short paper, we extend the SnAKe algorithm to deal with both types of costs simultaneously. We further propose extensions to the case of a maximum allowable input change, as well as to the multi-objective setting. |
| 1904.12768 | Roy Dong | Tyler Westenbroek and Roy Dong and Lillian J. Ratliff and S. Shankar Sastry | Competitive Statistical Estimation with Strategic Data Sources | accepted in the IEEE Transactions on Automatic Control | null | null | null | cs.GT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, data has played an increasingly important role in the economy as a good in its own right. In many settings, data aggregators cannot directly verify the quality of the data they purchase, nor the effort exerted by data sources when creating the data. Recent work has explored mechanisms to ensure that the data sources share high quality data with a single data aggregator, addressing the issue of moral hazard. Oftentimes, there is a unique, socially efficient solution. In this paper, we consider data markets where there is more than one data aggregator. Since data can be cheaply reproduced and transmitted once created, data sources may share the same data with more than one aggregator, leading to free-riding between data aggregators. This coupling can lead to non-uniqueness of equilibria and social inefficiency. We examine a particular class of mechanisms that have received study recently in the literature, and we characterize all the generalized Nash equilibria of the resulting data market. We show that, in contrast to the single-aggregator case, there are either infinitely many generalized Nash equilibria or none. We also provide necessary and sufficient conditions for all equilibria to be socially inefficient. In our analysis, we identify the components of these mechanisms which give rise to these undesirable outcomes, showing the need for research into mechanisms for competitive settings with multiple data purchasers and sellers. | [{"created": "Mon, 29 Apr 2019 15:26:05 GMT", "version": "v1"}] | 2019-04-30 | [["Westenbroek", "Tyler", ""], ["Dong", "Roy", ""], ["Ratliff", "Lillian J.", ""], ["Sastry", "S. Shankar", ""]] | In recent years, data has played an increasingly important role in the economy as a good in its own right. In many settings, data aggregators cannot directly verify the quality of the data they purchase, nor the effort exerted by data sources when creating the data. Recent work has explored mechanisms to ensure that the data sources share high quality data with a single data aggregator, addressing the issue of moral hazard. Oftentimes, there is a unique, socially efficient solution. In this paper, we consider data markets where there is more than one data aggregator. Since data can be cheaply reproduced and transmitted once created, data sources may share the same data with more than one aggregator, leading to free-riding between data aggregators. This coupling can lead to non-uniqueness of equilibria and social inefficiency. We examine a particular class of mechanisms that have received study recently in the literature, and we characterize all the generalized Nash equilibria of the resulting data market. We show that, in contrast to the single-aggregator case, there are either infinitely many generalized Nash equilibria or none. We also provide necessary and sufficient conditions for all equilibria to be socially inefficient. In our analysis, we identify the components of these mechanisms which give rise to these undesirable outcomes, showing the need for research into mechanisms for competitive settings with multiple data purchasers and sellers. |
| 2302.01772 | Youssef Allouah | Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan | Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity | Accepted paper at AISTATS 2023 | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines. Although this problem has received significant attention, prior works often assume the data held by the machines to be homogeneous, which is seldom true in practical settings. Data heterogeneity makes Byzantine ML considerably more challenging, since a Byzantine machine can hardly be distinguished from a non-Byzantine outlier. A few solutions have been proposed to tackle this issue, but these provide suboptimal probabilistic guarantees and fare poorly in practice. This paper closes the theoretical gap, achieving optimality and inducing good empirical results. In fact, we show how to automatically adapt existing solutions for (homogeneous) Byzantine ML to the heterogeneous setting through a powerful mechanism we call nearest neighbor mixing (NNM), which boosts any standard robust distributed gradient descent variant to yield optimal Byzantine resilience under heterogeneity. We obtain similar guarantees (in expectation) by plugging NNM in the distributed stochastic heavy ball method, a practical substitute to distributed gradient descent. We obtain empirical results that significantly outperform state-of-the-art Byzantine ML solutions. | [{"created": "Fri, 3 Feb 2023 14:30:25 GMT", "version": "v1"}] | 2023-02-06 | [["Allouah", "Youssef", ""], ["Farhadkhani", "Sadegh", ""], ["Guerraoui", "Rachid", ""], ["Gupta", "Nirupam", ""], ["Pinot", "Rafael", ""], ["Stephan", "John", ""]] | Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines. Although this problem has received significant attention, prior works often assume the data held by the machines to be homogeneous, which is seldom true in practical settings. Data heterogeneity makes Byzantine ML considerably more challenging, since a Byzantine machine can hardly be distinguished from a non-Byzantine outlier. A few solutions have been proposed to tackle this issue, but these provide suboptimal probabilistic guarantees and fare poorly in practice. This paper closes the theoretical gap, achieving optimality and inducing good empirical results. In fact, we show how to automatically adapt existing solutions for (homogeneous) Byzantine ML to the heterogeneous setting through a powerful mechanism we call nearest neighbor mixing (NNM), which boosts any standard robust distributed gradient descent variant to yield optimal Byzantine resilience under heterogeneity. We obtain similar guarantees (in expectation) by plugging NNM in the distributed stochastic heavy ball method, a practical substitute to distributed gradient descent. We obtain empirical results that significantly outperform state-of-the-art Byzantine ML solutions. |
| 2403.02738 | Congzhi Zhang | Congzhi Zhang, Linhai Zhang, Jialong Wu, Deyu Zhou, Yulan He | Causal Prompting: Debiasing Large Language Model Prompting based on Front-Door Adjustment | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the notable advancements of existing prompting methods, such as In-Context Learning and Chain-of-Thought for Large Language Models (LLMs), they still face challenges related to various biases. Traditional debiasing methods primarily focus on the model training stage, including approaches based on data augmentation and reweighting, yet they struggle with the complex biases inherent in LLMs. To address such limitations, the causal relationship behind the prompting methods is uncovered using a structural causal model, and a novel causal prompting method based on front-door adjustment is proposed to effectively mitigate LLM biases. Specifically, causal intervention is achieved by designing the prompts without accessing the parameters and logits of LLMs. The chain-of-thought generated by the LLM is employed as the mediator variable, and the causal effect between input prompts and output answers is calculated through front-door adjustment to mitigate model biases. Moreover, to accurately represent the chain-of-thoughts and estimate the causal effects, contrastive learning is used to fine-tune the chain-of-thought encoder by aligning its space with that of the LLM. Experimental results show that the proposed causal prompting approach achieves excellent performance across seven natural language processing datasets on both open-source and closed-source LLMs. | [{"created": "Tue, 5 Mar 2024 07:47:34 GMT", "version": "v1"}, {"created": "Wed, 22 May 2024 16:21:38 GMT", "version": "v2"}] | 2024-05-24 | [["Zhang", "Congzhi", ""], ["Zhang", "Linhai", ""], ["Wu", "Jialong", ""], ["Zhou", "Deyu", ""], ["He", "Yulan", ""]] | Despite the notable advancements of existing prompting methods, such as In-Context Learning and Chain-of-Thought for Large Language Models (LLMs), they still face challenges related to various biases. Traditional debiasing methods primarily focus on the model training stage, including approaches based on data augmentation and reweighting, yet they struggle with the complex biases inherent in LLMs. To address such limitations, the causal relationship behind the prompting methods is uncovered using a structural causal model, and a novel causal prompting method based on front-door adjustment is proposed to effectively mitigate LLM biases. Specifically, causal intervention is achieved by designing the prompts without accessing the parameters and logits of LLMs. The chain-of-thought generated by the LLM is employed as the mediator variable, and the causal effect between input prompts and output answers is calculated through front-door adjustment to mitigate model biases. Moreover, to accurately represent the chain-of-thoughts and estimate the causal effects, contrastive learning is used to fine-tune the chain-of-thought encoder by aligning its space with that of the LLM. Experimental results show that the proposed causal prompting approach achieves excellent performance across seven natural language processing datasets on both open-source and closed-source LLMs. |
| 2403.05600 | Ha Manh Bui | Ha Manh Bui and Anqi Liu | Density-Regression: Efficient and Distance-Aware Deep Regressor for Uncertainty Estimation under Distribution Shifts | International Conference on Artificial Intelligence and Statistics, 2024 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern deep ensemble techniques achieve strong uncertainty estimation performance by going through multiple forward passes with different models. This comes at the price of high storage space and slow inference (test) time. To address this issue, we propose Density-Regression, a method that leverages the density function in uncertainty estimation and achieves fast inference via a single forward pass. We prove it is distance-aware on the feature space, which is a necessary condition for a neural network to produce high-quality uncertainty estimation under distribution shifts. Empirically, we conduct experiments on regression tasks with the cubic toy dataset, the UCI benchmark, time-series weather forecasting, and depth estimation under real-world shifted applications. We show that Density-Regression has competitive uncertainty estimation performance under distribution shifts with modern deep regressors while using a smaller model size and a faster inference speed. | [{"created": "Thu, 7 Mar 2024 23:20:34 GMT", "version": "v1"}] | 2024-03-13 | [["Bui", "Ha Manh", ""], ["Liu", "Anqi", ""]] | Modern deep ensemble techniques achieve strong uncertainty estimation performance by going through multiple forward passes with different models. This comes at the price of high storage space and slow inference (test) time. To address this issue, we propose Density-Regression, a method that leverages the density function in uncertainty estimation and achieves fast inference via a single forward pass. We prove it is distance-aware on the feature space, which is a necessary condition for a neural network to produce high-quality uncertainty estimation under distribution shifts. Empirically, we conduct experiments on regression tasks with the cubic toy dataset, the UCI benchmark, time-series weather forecasting, and depth estimation under real-world shifted applications. We show that Density-Regression has competitive uncertainty estimation performance under distribution shifts with modern deep regressors while using a smaller model size and a faster inference speed. |
| 2402.18101 | Qiao Wang | Qiao Wang and Zheng Yuan | Assessing the Efficacy of Grammar Error Correction: A Human Evaluation Approach in the Japanese Context | 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this study, we evaluated the performance of the state-of-the-art sequence tagging grammar error detection and correction model (SeqTagger) using Japanese university students' writing samples. With an automatic annotation toolkit, ERRANT, we first evaluated SeqTagger's performance on error correction with human expert correction as the benchmark. Then a human-annotated approach was adopted to evaluate SeqTagger's performance in error detection using a subset of the writing dataset. Results indicated a precision of 63.66% and a recall of 20.19% for error correction in the full dataset. For the subset, after manual exclusion of irrelevant errors such as semantic and mechanical ones, the model shows an adjusted precision of 97.98% and an adjusted recall of 42.98% for error detection, indicating the model's high accuracy but also its conservativeness. Thematic analysis on errors undetected by the model revealed that determiners and articles, especially the latter, were predominant. Specifically, in terms of context-independent errors, the model occasionally overlooked basic ones and faced challenges with overly erroneous or complex structures. Meanwhile, context-dependent errors, notably those related to tense and noun number, as well as those possibly influenced by the students' first language (L1), remained particularly challenging. | [{"created": "Wed, 28 Feb 2024 06:43:43 GMT", "version": "v1"}, {"created": "Thu, 29 Feb 2024 10:53:40 GMT", "version": "v2"}] | 2024-03-01 | [["Wang", "Qiao", ""], ["Yuan", "Zheng", ""]] | In this study, we evaluated the performance of the state-of-the-art sequence tagging grammar error detection and correction model (SeqTagger) using Japanese university students' writing samples. With an automatic annotation toolkit, ERRANT, we first evaluated SeqTagger's performance on error correction with human expert correction as the benchmark. Then a human-annotated approach was adopted to evaluate SeqTagger's performance in error detection using a subset of the writing dataset. Results indicated a precision of 63.66% and a recall of 20.19% for error correction in the full dataset. For the subset, after manual exclusion of irrelevant errors such as semantic and mechanical ones, the model shows an adjusted precision of 97.98% and an adjusted recall of 42.98% for error detection, indicating the model's high accuracy but also its conservativeness. Thematic analysis on errors undetected by the model revealed that determiners and articles, especially the latter, were predominant. Specifically, in terms of context-independent errors, the model occasionally overlooked basic ones and faced challenges with overly erroneous or complex structures. Meanwhile, context-dependent errors, notably those related to tense and noun number, as well as those possibly influenced by the students' first language (L1), remained particularly challenging. |
2207.09019
|
Jingwang Ling
|
Jingwang Ling, Zhibo Wang, Ming Lu, Quan Wang, Chen Qian, Feng Xu
|
Structure-aware Editable Morphable Model for 3D Facial Detail Animation
and Manipulation
|
ECCV 2022
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Morphable models are essential for the statistical modeling of 3D faces.
Previous works on morphable models mostly focus on large-scale facial geometry
but ignore facial details. This paper augments morphable models in representing
facial details by learning a Structure-aware Editable Morphable Model (SEMM).
SEMM introduces a detail structure representation based on the distance field
of wrinkle lines, jointly modeled with detail displacements to establish better
correspondences and enable intuitive manipulation of wrinkle structure.
Besides, SEMM introduces two transformation modules to translate expression
blendshape weights and age values into changes in latent space, allowing
effective semantic detail editing while maintaining identity. Extensive
experiments demonstrate that the proposed model compactly represents facial
details, outperforms previous methods in expression animation qualitatively and
quantitatively, and achieves effective age editing and wrinkle line editing of
facial details. Code and model are available at
https://github.com/gerwang/facial-detail-manipulation.
|
[
{
"created": "Tue, 19 Jul 2022 01:48:07 GMT",
"version": "v1"
}
] |
2022-07-20
|
[
[
"Ling",
"Jingwang",
""
],
[
"Wang",
"Zhibo",
""
],
[
"Lu",
"Ming",
""
],
[
"Wang",
"Quan",
""
],
[
"Qian",
"Chen",
""
],
[
"Xu",
"Feng",
""
]
] |
Morphable models are essential for the statistical modeling of 3D faces. Previous works on morphable models mostly focus on large-scale facial geometry but ignore facial details. This paper augments morphable models in representing facial details by learning a Structure-aware Editable Morphable Model (SEMM). SEMM introduces a detail structure representation based on the distance field of wrinkle lines, jointly modeled with detail displacements to establish better correspondences and enable intuitive manipulation of wrinkle structure. Besides, SEMM introduces two transformation modules to translate expression blendshape weights and age values into changes in latent space, allowing effective semantic detail editing while maintaining identity. Extensive experiments demonstrate that the proposed model compactly represents facial details, outperforms previous methods in expression animation qualitatively and quantitatively, and achieves effective age editing and wrinkle line editing of facial details. Code and model are available at https://github.com/gerwang/facial-detail-manipulation.
|
1803.10195
|
Tomas Petricek
|
Tomas Petricek (The Alan Turing Institute, United Kingdom)
|
What we talk about when we talk about monads
| null |
The Art, Science, and Engineering of Programming, 2018, Vol. 2,
Issue 3, Article 12
|
10.22152/programming-journal.org/2018/2/12
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer science provides an in-depth understanding of technical aspects of
programming concepts, but if we want to understand how programming concepts
evolve, how programmers think and talk about them and how they are used in
practice, we need to consider a broader perspective that includes historical,
philosophical and cognitive aspects. In this paper, we develop such a broader
understanding of monads, a programming concept that has an infamous formal
definition, syntactic support in several programming languages and a reputation
for being elegant and powerful, but also intimidating and difficult to grasp.
This paper is not a monad tutorial. It will not tell you what a monad is.
Instead, it helps you understand how computer scientists and programmers talk
about monads and why they do so. To answer these questions, we review the
history of monads in the context of programming and study the development
through the perspectives of philosophy of science, philosophy of mathematics
and cognitive sciences. More generally, we present a framework for
understanding programming concepts that considers them at three levels: formal,
metaphorical and implementation. We base such observations on established
results about the scientific method and mathematical entities -- cognitive
sciences suggest that the metaphors used when thinking about monads are more
important than widely accepted, while philosophy of science explains how the
research paradigm from which monads originate influences and restricts their
use. Finally, we provide evidence for why a broader philosophical, sociological
look at programming concepts should be of interest for programmers. It lets us
understand programming concepts better and, fundamentally, choose more
appropriate abstractions as illustrated in a number of case studies that conclude
the paper.
|
[
{
"created": "Tue, 27 Mar 2018 17:35:50 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Petricek",
"Tomas",
"",
"The Alan Turing Institute, United Kingdom"
]
] |
Computer science provides an in-depth understanding of technical aspects of programming concepts, but if we want to understand how programming concepts evolve, how programmers think and talk about them and how they are used in practice, we need to consider a broader perspective that includes historical, philosophical and cognitive aspects. In this paper, we develop such a broader understanding of monads, a programming concept that has an infamous formal definition, syntactic support in several programming languages and a reputation for being elegant and powerful, but also intimidating and difficult to grasp. This paper is not a monad tutorial. It will not tell you what a monad is. Instead, it helps you understand how computer scientists and programmers talk about monads and why they do so. To answer these questions, we review the history of monads in the context of programming and study the development through the perspectives of philosophy of science, philosophy of mathematics and cognitive sciences. More generally, we present a framework for understanding programming concepts that considers them at three levels: formal, metaphorical and implementation. We base such observations on established results about the scientific method and mathematical entities -- cognitive sciences suggest that the metaphors used when thinking about monads are more important than widely accepted, while philosophy of science explains how the research paradigm from which monads originate influences and restricts their use. Finally, we provide evidence for why a broader philosophical, sociological look at programming concepts should be of interest for programmers. It lets us understand programming concepts better and, fundamentally, choose more appropriate abstractions as illustrated in a number of case studies that conclude the paper.
|
2307.03110
|
Arjun Sridhar
|
Bhavna Gopal, Arjun Sridhar, Tunhou Zhang and Yiran Chen
|
LISSNAS: Locality-based Iterative Search Space Shrinkage for Neural
Architecture Search
| null |
IJCAI 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Search spaces hallmark the advancement of Neural Architecture Search (NAS).
Large and complex search spaces with versatile building operators and
structures provide more opportunities to brew promising architectures, yet pose
severe challenges on efficient exploration and exploitation. Subsequently,
several search space shrinkage methods optimize by selecting a single
sub-region that contains some well-performing networks. Small performance and
efficiency gains are observed with these methods but such techniques leave room
for significantly improved search performance and are ineffective at retaining
architectural diversity. We propose LISSNAS, an automated algorithm that
shrinks a large space into a diverse, small search space with SOTA search
performance. Our approach leverages locality, the relationship between
structural and performance similarity, to efficiently extract many pockets of
well-performing networks. We showcase our method on an array of search spaces
spanning various sizes and datasets. We accentuate the effectiveness of our
shrunk spaces when used in one-shot search by achieving the best Top-1 accuracy
in two different search spaces. Our method achieves a SOTA Top-1 accuracy of
77.6\% in ImageNet under mobile constraints, best-in-class Kendall-Tau,
architectural diversity, and search space size.
|
[
{
"created": "Thu, 6 Jul 2023 16:28:51 GMT",
"version": "v1"
}
] |
2023-07-07
|
[
[
"Gopal",
"Bhavna",
""
],
[
"Sridhar",
"Arjun",
""
],
[
"Zhang",
"Tunhou",
""
],
[
"Chen",
"Yiran",
""
]
] |
Search spaces hallmark the advancement of Neural Architecture Search (NAS). Large and complex search spaces with versatile building operators and structures provide more opportunities to brew promising architectures, yet pose severe challenges on efficient exploration and exploitation. Subsequently, several search space shrinkage methods optimize by selecting a single sub-region that contains some well-performing networks. Small performance and efficiency gains are observed with these methods but such techniques leave room for significantly improved search performance and are ineffective at retaining architectural diversity. We propose LISSNAS, an automated algorithm that shrinks a large space into a diverse, small search space with SOTA search performance. Our approach leverages locality, the relationship between structural and performance similarity, to efficiently extract many pockets of well-performing networks. We showcase our method on an array of search spaces spanning various sizes and datasets. We accentuate the effectiveness of our shrunk spaces when used in one-shot search by achieving the best Top-1 accuracy in two different search spaces. Our method achieves a SOTA Top-1 accuracy of 77.6\% in ImageNet under mobile constraints, best-in-class Kendall-Tau, architectural diversity, and search space size.
|
2212.07495
|
Tooba Imtiaz
|
Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, Mario Sznaier,
Octavia Camps, Jennifer Dy
|
SAIF: Sparse Adversarial and Imperceptible Attack Framework
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks hamper the decision-making ability of neural networks by
perturbing the input signal. The addition of calculated small distortion to
images, for instance, can deceive a well-trained image classification network.
In this work, we propose a novel attack technique called Sparse Adversarial and
Interpretable Attack Framework (SAIF). Specifically, we design imperceptible
attacks that contain low-magnitude perturbations at a small number of pixels
and leverage these sparse attacks to reveal the vulnerability of classifiers.
We use the Frank-Wolfe (conditional gradient) algorithm to simultaneously
optimize the attack perturbations for bounded magnitude and sparsity with
$O(1/\sqrt{T})$ convergence. Empirical results show that SAIF computes highly
imperceptible and interpretable adversarial examples, and outperforms
state-of-the-art sparse attack methods on the ImageNet dataset.
|
[
{
"created": "Wed, 14 Dec 2022 20:28:50 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Dec 2023 10:55:40 GMT",
"version": "v2"
}
] |
2023-12-07
|
[
[
"Imtiaz",
"Tooba",
""
],
[
"Kohler",
"Morgan",
""
],
[
"Miller",
"Jared",
""
],
[
"Wang",
"Zifeng",
""
],
[
"Sznaier",
"Mario",
""
],
[
"Camps",
"Octavia",
""
],
[
"Dy",
"Jennifer",
""
]
] |
Adversarial attacks hamper the decision-making ability of neural networks by perturbing the input signal. The addition of calculated small distortion to images, for instance, can deceive a well-trained image classification network. In this work, we propose a novel attack technique called Sparse Adversarial and Interpretable Attack Framework (SAIF). Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers. We use the Frank-Wolfe (conditional gradient) algorithm to simultaneously optimize the attack perturbations for bounded magnitude and sparsity with $O(1/\sqrt{T})$ convergence. Empirical results show that SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
|
2003.06959
|
Pei Xu
|
Pei Xu and Ioannis Karamouzas
|
PFPN: Continuous Control of Physically Simulated Characters using
Particle Filtering Policy Network
|
Motion, Interaction and Games (MIG '21)
| null |
10.1145/3487983.3488301
| null |
cs.LG cs.GR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data-driven methods for physics-based character control using reinforcement
learning have been successfully applied to generate high-quality motions.
However, existing approaches typically rely on Gaussian distributions to
represent the action policy, which can prematurely commit to suboptimal actions
when solving high-dimensional continuous control problems for
highly-articulated characters. In this paper, to improve the learning
performance of physics-based character controllers, we propose a framework that
considers a particle-based action policy as a substitute for Gaussian policies.
We exploit particle filtering to dynamically explore and discretize the action
space, and track the posterior policy represented as a mixture distribution.
The resulting policy can replace the unimodal Gaussian policy which has been
the staple for character control problems, without changing the underlying
model architecture of the reinforcement learning algorithm used to perform
policy optimization. We demonstrate the applicability of our approach on
various motion capture imitation tasks. Baselines using our particle-based
policies achieve better imitation performance and speed of convergence as
compared to corresponding implementations using Gaussians, and are more robust
to external perturbations during character control. Related code is available
at: https://motion-lab.github.io/PFPN.
|
[
{
"created": "Mon, 16 Mar 2020 00:35:36 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Oct 2020 15:27:37 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Oct 2020 22:49:38 GMT",
"version": "v3"
},
{
"created": "Fri, 1 Oct 2021 14:09:40 GMT",
"version": "v4"
}
] |
2021-10-06
|
[
[
"Xu",
"Pei",
""
],
[
"Karamouzas",
"Ioannis",
""
]
] |
Data-driven methods for physics-based character control using reinforcement learning have been successfully applied to generate high-quality motions. However, existing approaches typically rely on Gaussian distributions to represent the action policy, which can prematurely commit to suboptimal actions when solving high-dimensional continuous control problems for highly-articulated characters. In this paper, to improve the learning performance of physics-based character controllers, we propose a framework that considers a particle-based action policy as a substitute for Gaussian policies. We exploit particle filtering to dynamically explore and discretize the action space, and track the posterior policy represented as a mixture distribution. The resulting policy can replace the unimodal Gaussian policy which has been the staple for character control problems, without changing the underlying model architecture of the reinforcement learning algorithm used to perform policy optimization. We demonstrate the applicability of our approach on various motion capture imitation tasks. Baselines using our particle-based policies achieve better imitation performance and speed of convergence as compared to corresponding implementations using Gaussians, and are more robust to external perturbations during character control. Related code is available at: https://motion-lab.github.io/PFPN.
|
1406.0173
|
Ljubisa Stankovic
|
Ljubisa Stankovic
|
On the ISAR Image Analysis and Recovery with Unavailable or Heavily
Corrupted Data
|
9 pages, 6 figures, submitted to the IEEE Transactions on Aerospace
and Electronic Systems
| null |
10.1109/TAES.2015.140413
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Common ISAR radar images and signals can be reconstructed from much fewer
samples than the sampling theorem requires since they are usually sparse.
Unavailable randomly positioned samples can result from heavily corrupted parts
of the signal. Since these samples can be omitted and declared as unavailable,
the application of the compressive sensing methods in the recovery of heavily
corrupted signal and radar images is possible. A simple direct method for the
recovery of unavailable signal samples and the calculation of the restored ISAR
image is reviewed. An analysis of the noise influence is performed. For fast
maneuvering ISAR targets, the sparsity property is lost since the ISAR image is
blurred. A nonparametric method based on quadratic time-frequency
representations is used to restore the ISAR image sparsity. However, the linear relation
between the signal and the sparsity domain transformation is lost. A recently
proposed gradient recovery algorithm is adapted for this kind of analysis. It
does not require the linear relation of the signal and its sparsity domain
transformation in the process of unavailable data recovery. The presented
methods and results are tested on several numerical examples proving the
expected accuracy and improvements.
|
[
{
"created": "Sun, 1 Jun 2014 15:46:35 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Stankovic",
"Ljubisa",
""
]
] |
Common ISAR radar images and signals can be reconstructed from much fewer samples than the sampling theorem requires since they are usually sparse. Unavailable randomly positioned samples can result from heavily corrupted parts of the signal. Since these samples can be omitted and declared as unavailable, the application of the compressive sensing methods in the recovery of heavily corrupted signal and radar images is possible. A simple direct method for the recovery of unavailable signal samples and the calculation of the restored ISAR image is reviewed. An analysis of the noise influence is performed. For fast maneuvering ISAR targets, the sparsity property is lost since the ISAR image is blurred. A nonparametric method based on quadratic time-frequency representations is used to restore the ISAR image sparsity. However, the linear relation between the signal and the sparsity domain transformation is lost. A recently proposed gradient recovery algorithm is adapted for this kind of analysis. It does not require the linear relation of the signal and its sparsity domain transformation in the process of unavailable data recovery. The presented methods and results are tested on several numerical examples proving the expected accuracy and improvements.
|
1112.2336
|
Nasrin Mazaheri
|
Nasrin Mazaheri Soudani and Ahmad Baraani-Dastgerdi
|
The Spatial Nearest Neighbor Skyline Queries
|
15 pages, 14 figures, Journal:International Journal of Database
Management Systems (IJDMS)
|
International Journal of Database Management Systems (IJDMS),
Vol.3, No.4, November 2011, 65-79
| null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User preference queries are very important in spatial databases. With the
help of these queries, one can find the best location among the points saved
in a database. In many situations, users evaluate the quality of a location
by its distance from its nearest neighbor among a special set of points. Less
attention has been paid to evaluating a location by its distances to nearest
neighbors in spatial user preference queries. This problem has applications
in many domains such as service recommendation systems and investment
planning. Related works in this field are based on top-k queries. The problem
with top-k queries is that the user must set weights for the attributes and a
function for aggregating them, which is hard in most cases. In this paper, a
new type of user preference query called the spatial nearest neighbor skyline
query is introduced, in which the user provides several sets of points as
query parameters. For each point in the database, the attributes are its
distances to the nearest neighbors from each set of query points. By
formulating this query as a subset of dynamic skyline queries, the N2S2
algorithm is provided for computing it. This algorithm shows good performance
compared with the general branch-and-bound algorithm for skyline queries.
|
[
{
"created": "Sun, 11 Dec 2011 08:43:54 GMT",
"version": "v1"
}
] |
2011-12-13
|
[
[
"Soudani",
"Nasrin Mazaheri",
""
],
[
"Baraani-Dastgerdi",
"Ahmad",
""
]
] |
User preference queries are very important in spatial databases. With the help of these queries, one can find the best location among the points saved in a database. In many situations, users evaluate the quality of a location by its distance from its nearest neighbor among a special set of points. Less attention has been paid to evaluating a location by its distances to nearest neighbors in spatial user preference queries. This problem has applications in many domains such as service recommendation systems and investment planning. Related works in this field are based on top-k queries. The problem with top-k queries is that the user must set weights for the attributes and a function for aggregating them, which is hard in most cases. In this paper, a new type of user preference query called the spatial nearest neighbor skyline query is introduced, in which the user provides several sets of points as query parameters. For each point in the database, the attributes are its distances to the nearest neighbors from each set of query points. By formulating this query as a subset of dynamic skyline queries, the N2S2 algorithm is provided for computing it. This algorithm shows good performance compared with the general branch-and-bound algorithm for skyline queries.
|
2308.12111
|
Chao Tian
|
Chao Tian, Zikun Zhou, Yuqing Huang, Gaojun Li, and Zhenyu He
|
Cross-Modality Proposal-guided Feature Mining for Unregistered
RGB-Thermal Pedestrian Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGB-Thermal (RGB-T) pedestrian detection aims to locate the pedestrians in
RGB-T image pairs to exploit the complementation between the two modalities for
improving detection robustness in extreme conditions. Most existing algorithms
assume that the RGB-T image pairs are well registered, while in the real world
they are not aligned ideally due to parallax or different field-of-view of the
cameras. The pedestrians in misaligned image pairs may be located at different
positions in two images, which results in two challenges: 1) how to achieve
inter-modality complementation using spatially misaligned RGB-T pedestrian
patches, and 2) how to recognize the unpaired pedestrians at the boundary. To
deal with these issues, we propose a new paradigm for unregistered RGB-T
pedestrian detection, which predicts two separate pedestrian locations in the
RGB and thermal images, respectively. Specifically, we propose a cross-modality
proposal-guided feature mining (CPFM) mechanism to extract the two precise
fusion features for representing the pedestrian in the two modalities, even if
the RGB-T image pair is unaligned. It enables us to effectively exploit the
complementation between the two modalities. With the CPFM mechanism, we build a
two-stream dense detector; it predicts the two pedestrian locations in the two
modalities based on the corresponding fusion feature mined by the CPFM
mechanism. Besides, we design a data augmentation method, named Homography, to
simulate the discrepancy in scales and views between images. We also
investigate two non-maximum suppression (NMS) methods for post-processing.
Favorable experimental results demonstrate the effectiveness and robustness of
our method in dealing with unregistered pedestrians with different shifts.
|
[
{
"created": "Wed, 23 Aug 2023 12:58:51 GMT",
"version": "v1"
}
] |
2023-08-24
|
[
[
"Tian",
"Chao",
""
],
[
"Zhou",
"Zikun",
""
],
[
"Huang",
"Yuqing",
""
],
[
"Li",
"Gaojun",
""
],
[
"He",
"Zhenyu",
""
]
] |
RGB-Thermal (RGB-T) pedestrian detection aims to locate the pedestrians in RGB-T image pairs to exploit the complementation between the two modalities for improving detection robustness in extreme conditions. Most existing algorithms assume that the RGB-T image pairs are well registered, while in the real world they are not aligned ideally due to parallax or different field-of-view of the cameras. The pedestrians in misaligned image pairs may be located at different positions in two images, which results in two challenges: 1) how to achieve inter-modality complementation using spatially misaligned RGB-T pedestrian patches, and 2) how to recognize the unpaired pedestrians at the boundary. To deal with these issues, we propose a new paradigm for unregistered RGB-T pedestrian detection, which predicts two separate pedestrian locations in the RGB and thermal images, respectively. Specifically, we propose a cross-modality proposal-guided feature mining (CPFM) mechanism to extract the two precise fusion features for representing the pedestrian in the two modalities, even if the RGB-T image pair is unaligned. It enables us to effectively exploit the complementation between the two modalities. With the CPFM mechanism, we build a two-stream dense detector; it predicts the two pedestrian locations in the two modalities based on the corresponding fusion feature mined by the CPFM mechanism. Besides, we design a data augmentation method, named Homography, to simulate the discrepancy in scales and views between images. We also investigate two non-maximum suppression (NMS) methods for post-processing. Favorable experimental results demonstrate the effectiveness and robustness of our method in dealing with unregistered pedestrians with different shifts.
|
2202.10313
|
Alexander Breuer
|
Alexander Breuer, Alexander Heinecke
|
Next-Generation Local Time Stepping for the ADER-DG Finite Element
Method
| null | null | null | null |
cs.DC cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
High-frequency ground motion simulations pose a grand challenge in
computational seismology. Two main factors drive this challenge. First, to
account for higher frequencies, we have to extend our numerical models, e.g.,
by considering anelasticity, or by including mountain topography. Second, even
if we were able to keep our models unchanged, simply doubling the frequency
content of a seismic wave propagation solver requires a sixteen-fold increase
in computational resources due to the four-dimensional space-time domains used.
This work presents the Extreme Scale Discontinuous Galerkin Environment
(EDGE) in the context of high-frequency ground motion simulations. Our
presented enhancements cover the entire spectrum of the unstructured finite
element solver. This includes the incorporation of anelasticity, the
introduction of a next-generation clustered local time stepping scheme, and the
introduction of a completely revised communication scheme. We close the
modeling and simulation loop by presenting our new and rich preprocessing,
which drives the high problem-awareness and numerical efficiency of the core
solver.
In summary, the presented work allows us to conduct large scale
high-frequency ground motion simulations efficiently, routinely and
conveniently. The soundness of our work is underlined by a set of
high-frequency verification runs using a realistic setting. We conclude the
presentation by studying EDGE's combined algorithmic and computational
efficiency in a demanding setup of the 2014 Mw 5.1 La Habra earthquake. Our
results are compelling and show an improved time-to-solution by over 10x while
scaling strongly from 256 to 1,536 nodes of the Frontera supercomputer with a
parallel efficiency of over 95%.
|
[
{
"created": "Mon, 21 Feb 2022 15:36:05 GMT",
"version": "v1"
}
] |
2022-02-22
|
[
[
"Breuer",
"Alexander",
""
],
[
"Heinecke",
"Alexander",
""
]
] |
High-frequency ground motion simulations pose a grand challenge in computational seismology. Two main factors drive this challenge. First, to account for higher frequencies, we have to extend our numerical models, e.g., by considering anelasticity, or by including mountain topography. Second, even if we were able to keep our models unchanged, simply doubling the frequency content of a seismic wave propagation solver requires a sixteen-fold increase in computational resources due to the four-dimensional space-time domains used. This work presents the Extreme Scale Discontinuous Galerkin Environment (EDGE) in the context of high-frequency ground motion simulations. Our presented enhancements cover the entire spectrum of the unstructured finite element solver. This includes the incorporation of anelasticity, the introduction of a next-generation clustered local time stepping scheme, and the introduction of a completely revised communication scheme. We close the modeling and simulation loop by presenting our new and rich preprocessing, which drives the high problem-awareness and numerical efficiency of the core solver. In summary, the presented work allows us to conduct large scale high-frequency ground motion simulations efficiently, routinely and conveniently. The soundness of our work is underlined by a set of high-frequency verification runs using a realistic setting. We conclude the presentation by studying EDGE's combined algorithmic and computational efficiency in a demanding setup of the 2014 Mw 5.1 La Habra earthquake. Our results are compelling and show an improved time-to-solution by over 10x while scaling strongly from 256 to 1,536 nodes of the Frontera supercomputer with a parallel efficiency of over 95%.
|
2402.08171
|
David Widder
|
David Gray Widder
|
Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints
|
Accepted to ACM FAccT 2024
| null |
10.1145/3630106.3658973
| null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
What counts as legitimate AI ethics labor, and consequently, what are the
epistemic terms on which AI ethics claims are rendered legitimate? Based on 75
interviews with technologists including researchers, developers, open source
contributors, and activists, this paper explores the various epistemic bases
from which AI ethics is discussed and practiced. In the context of outside
attacks on AI ethics as an impediment to "progress," I show how some AI ethics
practices have reached toward authority from automation and quantification, and
achieved some legitimacy as a result, while those based on richly embodied and
situated lived experience have not. This paper draws together the work of
feminist Anthropology and Science and Technology Studies scholars Diana
Forsythe and Lucy Suchman with the works of postcolonial feminist theorist Sara
Ahmed and Black feminist theorist Kristie Dotson to examine the implications of
dominant AI ethics practices.
By entrenching the epistemic power of quantification, dominant AI ethics
practices -- employing Model Cards and similar interventions -- risk
legitimizing AI ethics as a project in equal and opposite measure to which they
marginalize embodied lived experience as a legitimate part of the same project.
In response, I propose humble technical practices: quantified or technical
practices which specifically seek to make their epistemic limits clear in order
to flatten hierarchies of epistemic power.
|
[
{
"created": "Tue, 13 Feb 2024 02:07:03 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2024 01:27:21 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Apr 2024 01:19:03 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Apr 2024 18:34:09 GMT",
"version": "v4"
}
] |
2024-04-19
|
[
[
"Widder",
"David Gray",
""
]
] |
What counts as legitimate AI ethics labor, and consequently, what are the epistemic terms on which AI ethics claims are rendered legitimate? Based on 75 interviews with technologists including researchers, developers, open source contributors, and activists, this paper explores the various epistemic bases from which AI ethics is discussed and practiced. In the context of outside attacks on AI ethics as an impediment to "progress," I show how some AI ethics practices have reached toward authority from automation and quantification, and achieved some legitimacy as a result, while those based on richly embodied and situated lived experience have not. This paper draws together the work of feminist Anthropology and Science and Technology Studies scholars Diana Forsythe and Lucy Suchman with the works of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. By entrenching the epistemic power of quantification, dominant AI ethics practices -- employing Model Cards and similar interventions -- risk legitimizing AI ethics as a project in equal and opposite measure to which they marginalize embodied lived experience as a legitimate part of the same project. In response, I propose humble technical practices: quantified or technical practices which specifically seek to make their epistemic limits clear in order to flatten hierarchies of epistemic power.
|
1909.11851
|
Christian Szegedy
|
Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos and
Kshitij Bansal
|
Mathematical Reasoning in Latent Space
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design and conduct a simple experiment to study whether neural networks
can perform several steps of approximate reasoning in a fixed dimensional
latent space. The set of rewrites (i.e. transformations) that can be
successfully performed on a statement represents essential semantic features of
the statement. We can compress this information by embedding the formula in a
vector space, such that the vector associated with a statement can be used to
predict whether a statement can be rewritten by other theorems. Predicting the
embedding of a formula generated by some rewrite rule is naturally viewed as
approximate reasoning in the latent space. In order to measure the
effectiveness of this reasoning, we perform approximate deduction sequences in
the latent space and use the resulting embedding to inform the semantic
features of the corresponding formal statement (which is obtained by performing
the corresponding rewrite sequence using real formulas). Our experiments show
that graph neural networks can make non-trivial predictions about the
rewrite-success of statements, even when they propagate predicted latent
representations for several steps. Since our corpus of mathematical formulas
includes a wide variety of mathematical disciplines, this experiment is a
strong indicator for the feasibility of deduction in latent space in general.
|
[
{
"created": "Thu, 26 Sep 2019 02:33:07 GMT",
"version": "v1"
}
] |
2019-09-27
|
[
[
"Lee",
"Dennis",
""
],
[
"Szegedy",
"Christian",
""
],
[
"Rabe",
"Markus N.",
""
],
[
"Loos",
"Sarah M.",
""
],
[
"Bansal",
"Kshitij",
""
]
] |
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space. The set of rewrites (i.e. transformations) that can be successfully performed on a statement represents essential semantic features of the statement. We can compress this information by embedding the formula in a vector space, such that the vector associated with a statement can be used to predict whether a statement can be rewritten by other theorems. Predicting the embedding of a formula generated by some rewrite rule is naturally viewed as approximate reasoning in the latent space. In order to measure the effectiveness of this reasoning, we perform approximate deduction sequences in the latent space and use the resulting embedding to inform the semantic features of the corresponding formal statement (which is obtained by performing the corresponding rewrite sequence using real formulas). Our experiments show that graph neural networks can make non-trivial predictions about the rewrite-success of statements, even when they propagate predicted latent representations for several steps. Since our corpus of mathematical formulas includes a wide variety of mathematical disciplines, this experiment is a strong indicator for the feasibility of deduction in latent space in general.
|
2312.05557
|
Wenbo Zhu
|
W. Zhu, H. D. Tuan, E. Dutkiewicz, Y. Fang, H. V. Poor, L. Hanzo
|
Long-Term Rate-Fairness-Aware Beamforming Based Massive MIMO Systems
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is the first treatise on multi-user (MU) beamforming designed for
achieving long-term rate-fairness in full-dimensional MU massive multi-input
multi-output (m-MIMO) systems. Explicitly, based on the channel covariances,
which can be assumed to be known beforehand, we address this problem by
optimizing the following objective functions: the users' signal-to-leakage-noise
ratios (SLNRs) using SLNR max-min optimization, geometric mean of SLNRs
(GM-SLNR) based optimization, and SLNR soft max-min optimization. We develop a
convex-solver based algorithm, which invokes a convex subproblem of cubic
time-complexity at each iteration for solving the SLNR max-min problem. We then
develop closed-form expression based algorithms of scalable complexity for the
solution of the GM-SLNR and of the SLNR soft max-min problem. The simulations
provided confirm the users' improved-fairness ergodic rate distributions.
|
[
{
"created": "Sat, 9 Dec 2023 12:10:18 GMT",
"version": "v1"
}
] |
2023-12-12
|
[
[
"Zhu",
"W.",
""
],
[
"Tuan",
"H. D.",
""
],
[
"Dutkiewicz",
"E.",
""
],
[
"Fang",
"Y.",
""
],
[
"Poor",
"H. V.",
""
],
[
"Hanzo",
"L.",
""
]
] |
This is the first treatise on multi-user (MU) beamforming designed for achieving long-term rate-fairness in full-dimensional MU massive multi-input multi-output (m-MIMO) systems. Explicitly, based on the channel covariances, which can be assumed to be known beforehand, we address this problem by optimizing the following objective functions: the users' signal-to-leakage-noise ratios (SLNRs) using SLNR max-min optimization, geometric mean of SLNRs (GM-SLNR) based optimization, and SLNR soft max-min optimization. We develop a convex-solver based algorithm, which invokes a convex subproblem of cubic time-complexity at each iteration for solving the SLNR max-min problem. We then develop closed-form expression based algorithms of scalable complexity for the solution of the GM-SLNR and of the SLNR soft max-min problem. The simulations provided confirm the users' improved-fairness ergodic rate distributions.
|
2402.13823
|
Andreas Vogelsang
|
Andreas Vogelsang, Jannik Fischbach
|
Using Large Language Models for Natural Language Processing Tasks in
Requirements Engineering: A Systematic Guideline
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) are a cornerstone of automating Requirements
Engineering (RE) tasks, underpinning recent advancements in the field. Their
pre-trained comprehension of natural language is pivotal for effectively
tailoring them to specific RE tasks. However, selecting an appropriate LLM from
a myriad of existing architectures and fine-tuning it to address the
intricacies of a given task poses a significant challenge for researchers and
practitioners in the RE domain. Utilizing LLMs effectively for NLP problems in
RE necessitates a dual understanding: firstly, of the inner workings of LLMs,
and secondly, of a systematic approach to selecting and adapting LLMs for
NLP4RE tasks. This chapter aims to furnish readers with essential knowledge
about LLMs in its initial segment. Subsequently, it provides a comprehensive
guideline tailored for students, researchers, and practitioners on harnessing
LLMs to address their specific objectives. By offering insights into the
workings of LLMs and furnishing a practical guide, this chapter contributes
towards improving future research and applications leveraging LLMs for solving
RE challenges.
|
[
{
"created": "Wed, 21 Feb 2024 14:00:52 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 12:23:06 GMT",
"version": "v2"
},
{
"created": "Wed, 15 May 2024 12:57:58 GMT",
"version": "v3"
}
] |
2024-05-16
|
[
[
"Vogelsang",
"Andreas",
""
],
[
"Fischbach",
"Jannik",
""
]
] |
Large Language Models (LLMs) are a cornerstone of automating Requirements Engineering (RE) tasks, underpinning recent advancements in the field. Their pre-trained comprehension of natural language is pivotal for effectively tailoring them to specific RE tasks. However, selecting an appropriate LLM from a myriad of existing architectures and fine-tuning it to address the intricacies of a given task poses a significant challenge for researchers and practitioners in the RE domain. Utilizing LLMs effectively for NLP problems in RE necessitates a dual understanding: firstly, of the inner workings of LLMs, and secondly, of a systematic approach to selecting and adapting LLMs for NLP4RE tasks. This chapter aims to furnish readers with essential knowledge about LLMs in its initial segment. Subsequently, it provides a comprehensive guideline tailored for students, researchers, and practitioners on harnessing LLMs to address their specific objectives. By offering insights into the workings of LLMs and furnishing a practical guide, this chapter contributes towards improving future research and applications leveraging LLMs for solving RE challenges.
|
1302.4549
|
Nir Ailon
|
Nir Ailon and Yudong Chen and Xu Huan
|
Breaking the Small Cluster Barrier of Graph Clustering
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates graph clustering in the planted cluster model in the
presence of {\em small clusters}. Traditional results dictate that for an
algorithm to provably correctly recover the clusters, {\em all} clusters must
be sufficiently large (in particular, $\tilde{\Omega}(\sqrt{n})$ where $n$ is
the number of nodes of the graph). We show that this is not really a
restriction: by a more refined analysis of the trace-norm based recovery
approach proposed in Jalali et al. (2011) and Chen et al. (2012), we prove that
small clusters, under certain mild assumptions, do not hinder recovery of large
ones.
Based on this result, we further devise an iterative algorithm to recover
{\em almost all clusters} via a "peeling strategy", i.e., recover large
clusters first, leading to a reduced problem, and repeat this procedure. These
results are extended to the {\em partial observation} setting, in which only a
(chosen) part of the graph is observed. The peeling strategy gives rise to an
active learning algorithm, in which edges adjacent to smaller clusters are
queried more often as large clusters are learned (and removed).
From a high level, this paper sheds novel insights on high-dimensional
statistics and learning structured data, by presenting a structured matrix
learning problem for which a one shot convex relaxation approach necessarily
fails, but a carefully constructed sequence of convex relaxations does the job.
|
[
{
"created": "Tue, 19 Feb 2013 09:21:09 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Feb 2013 08:35:39 GMT",
"version": "v2"
}
] |
2013-02-21
|
[
[
"Ailon",
"Nir",
""
],
[
"Chen",
"Yudong",
""
],
[
"Huan",
"Xu",
""
]
] |
This paper investigates graph clustering in the planted cluster model in the presence of {\em small clusters}. Traditional results dictate that for an algorithm to provably correctly recover the clusters, {\em all} clusters must be sufficiently large (in particular, $\tilde{\Omega}(\sqrt{n})$ where $n$ is the number of nodes of the graph). We show that this is not really a restriction: by a more refined analysis of the trace-norm based recovery approach proposed in Jalali et al. (2011) and Chen et al. (2012), we prove that small clusters, under certain mild assumptions, do not hinder recovery of large ones. Based on this result, we further devise an iterative algorithm to recover {\em almost all clusters} via a "peeling strategy", i.e., recover large clusters first, leading to a reduced problem, and repeat this procedure. These results are extended to the {\em partial observation} setting, in which only a (chosen) part of the graph is observed. The peeling strategy gives rise to an active learning algorithm, in which edges adjacent to smaller clusters are queried more often as large clusters are learned (and removed). From a high level, this paper sheds novel insights on high-dimensional statistics and learning structured data, by presenting a structured matrix learning problem for which a one shot convex relaxation approach necessarily fails, but a carefully constructed sequence of convex relaxations does the job.
|
1303.1264
|
Radim Belohlavek
|
Radim Belohlavek and Vilem Vychodil
|
Discovery of factors in matrices with grades
| null | null | null | null |
cs.LG cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach to decomposition and factor analysis of matrices with
ordinal data. The matrix entries are grades to which objects represented by
rows satisfy attributes represented by columns, e.g. grades to which an image
is red, a product has a given feature, or a person performs well in a test. We
assume that the grades form a bounded scale equipped with certain aggregation
operators and conform to the structure of a complete residuated lattice. We
present a greedy approximation algorithm for the problem of decomposition of
such a matrix into a product of two matrices with grades under the restriction that
the number of factors be small. Our algorithm is based on a geometric insight
provided by a theorem identifying particular rectangular-shaped submatrices as
optimal factors for the decompositions. These factors correspond to formal
concepts of the input data and allow an easy interpretation of the
decomposition. We present illustrative examples and experimental evaluation.
|
[
{
"created": "Wed, 6 Mar 2013 07:58:14 GMT",
"version": "v1"
}
] |
2013-03-07
|
[
[
"Belohlavek",
"Radim",
""
],
[
"Vychodil",
"Vilem",
""
]
] |
We present an approach to decomposition and factor analysis of matrices with ordinal data. The matrix entries are grades to which objects represented by rows satisfy attributes represented by columns, e.g. grades to which an image is red, a product has a given feature, or a person performs well in a test. We assume that the grades form a bounded scale equipped with certain aggregation operators and conform to the structure of a complete residuated lattice. We present a greedy approximation algorithm for the problem of decomposition of such a matrix into a product of two matrices with grades under the restriction that the number of factors be small. Our algorithm is based on a geometric insight provided by a theorem identifying particular rectangular-shaped submatrices as optimal factors for the decompositions. These factors correspond to formal concepts of the input data and allow an easy interpretation of the decomposition. We present illustrative examples and experimental evaluation.
|
1806.02023
|
Jia-Hong Lee
|
Jia-Hong Lee, Yi-Ming Chan, Ting-Yen Chen, and Chu-Song Chen
|
Joint Estimation of Age and Gender from Unconstrained Face Images using
Lightweight Multi-task CNN for Mobile Applications
|
To publish in the IEEE first International Conference on Multimedia
Information Processing and Retrieval, 2018. (IEEE MIPR 2018)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic age and gender classification based on unconstrained images has
become an essential technique on mobile devices. With limited computing power,
how to develop a robust system becomes a challenging task. In this paper, we
present an efficient convolutional neural network (CNN) called lightweight
multi-task CNN for simultaneous age and gender classification. Lightweight
multi-task CNN uses depthwise separable convolution to reduce the model size
and save the inference time. On the public challenging Adience dataset, the
accuracy of age and gender classification is better than baseline multi-task
CNN methods.
|
[
{
"created": "Wed, 6 Jun 2018 06:22:16 GMT",
"version": "v1"
}
] |
2018-06-07
|
[
[
"Lee",
"Jia-Hong",
""
],
[
"Chan",
"Yi-Ming",
""
],
[
"Chen",
"Ting-Yen",
""
],
[
"Chen",
"Chu-Song",
""
]
] |
Automatic age and gender classification based on unconstrained images has become an essential technique on mobile devices. With limited computing power, how to develop a robust system becomes a challenging task. In this paper, we present an efficient convolutional neural network (CNN) called lightweight multi-task CNN for simultaneous age and gender classification. Lightweight multi-task CNN uses depthwise separable convolution to reduce the model size and save the inference time. On the public challenging Adience dataset, the accuracy of age and gender classification is better than baseline multi-task CNN methods.
|
1708.01341
|
Rui Han
|
Rui Han, Fan Zhang, Zhentao Wang
|
AccurateML: Information-aggregation-based Approximate Processing for
Fast and Accurate Machine Learning on MapReduce
|
9 pages, 9 figures
| null | null |
838-846
|
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing demands of processing massive datasets have promoted irresistible
trends of running machine learning applications on MapReduce. When processing
large input data, it is often of greater value to produce fast and accurate
enough approximate results than slow exact results. Existing techniques produce
approximate results by processing parts of the input data, thus incurring large
accuracy losses when using short job execution times, because all the skipped
input data potentially contributes to result accuracy. We address this
limitation by proposing AccurateML that aggregates information of input data in
each map task to create small aggregated data points. These aggregated points
enable all map tasks to produce initial outputs quickly to save computation
times and decrease the outputs' size to reduce communication times. Our
approach further identifies the parts of input data most related to result
accuracy, thus first using these parts to improve the produced outputs to
minimize accuracy losses. We evaluated AccurateML using real machine learning
applications and datasets. The results show: (i) it reduces execution times by
30 times with small accuracy losses compared to exact results; (ii) when using
the same execution times, it achieves 2.71 times reductions in accuracy losses
compared to existing approximate processing techniques.
|
[
{
"created": "Fri, 4 Aug 2017 00:57:57 GMT",
"version": "v1"
}
] |
2017-08-07
|
[
[
"Han",
"Rui",
""
],
[
"Zhang",
"Fan",
""
],
[
"Wang",
"Zhentao",
""
]
] |
The growing demands of processing massive datasets have promoted irresistible trends of running machine learning applications on MapReduce. When processing large input data, it is often of greater value to produce fast and accurate enough approximate results than slow exact results. Existing techniques produce approximate results by processing parts of the input data, thus incurring large accuracy losses when using short job execution times, because all the skipped input data potentially contributes to result accuracy. We address this limitation by proposing AccurateML that aggregates information of input data in each map task to create small aggregated data points. These aggregated points enable all map tasks to produce initial outputs quickly to save computation times and decrease the outputs' size to reduce communication times. Our approach further identifies the parts of input data most related to result accuracy, thus first using these parts to improve the produced outputs to minimize accuracy losses. We evaluated AccurateML using real machine learning applications and datasets. The results show: (i) it reduces execution times by 30 times with small accuracy losses compared to exact results; (ii) when using the same execution times, it achieves 2.71 times reductions in accuracy losses compared to existing approximate processing techniques.
|
1908.01478
|
Yi-Hsiang Chang
|
Yi-Hsiang Chang, Kuan-Yu Chang, Henry Kuo, Chun-Yi Lee
|
Reusability and Transferability of Macro Actions for Reinforcement
Learning
| null | null |
10.1145/3514260
| null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional reinforcement learning (RL) typically determines an appropriate
primitive action at each timestep. However, by using a proper macro action,
defined as a sequence of primitive actions, an agent is able to bypass
intermediate states to a farther state and facilitate its learning procedure.
The problem we would like to investigate is what beneficial
properties macro actions may possess. In this paper, we unveil the
properties of reusability and transferability of macro actions. The first
property, reusability, means that a macro action generated along with one RL
method can be reused by another RL method for training, while the second one,
transferability, means that a macro action can be utilized for training agents
in similar environments with different reward settings. In our experiments, we
first generate macro actions along with RL methods. We then provide a set of
analyses to reveal the properties of reusability and transferability of the
generated macro actions.
|
[
{
"created": "Mon, 5 Aug 2019 05:59:40 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Nov 2020 06:04:26 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Apr 2022 12:43:25 GMT",
"version": "v3"
}
] |
2022-04-29
|
[
[
"Chang",
"Yi-Hsiang",
""
],
[
"Chang",
"Kuan-Yu",
""
],
[
"Kuo",
"Henry",
""
],
[
"Lee",
"Chun-Yi",
""
]
] |
Conventional reinforcement learning (RL) typically determines an appropriate primitive action at each timestep. However, by using a proper macro action, defined as a sequence of primitive actions, an agent is able to bypass intermediate states to a farther state and facilitate its learning procedure. The problem we would like to investigate is what beneficial properties macro actions may possess. In this paper, we unveil the properties of reusability and transferability of macro actions. The first property, reusability, means that a macro action generated along with one RL method can be reused by another RL method for training, while the second one, transferability, means that a macro action can be utilized for training agents in similar environments with different reward settings. In our experiments, we first generate macro actions along with RL methods. We then provide a set of analyses to reveal the properties of reusability and transferability of the generated macro actions.
|
2203.07656
|
Yuqian Fu
|
Yuqian Fu, Yu Xie, Yanwei Fu, Jingjing Chen, Yu-Gang Jiang
|
Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain
Few-Shot Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous few-shot learning (FSL) works are mostly limited to natural images
of general concepts and categories. These works assume very high visual
similarity between the source and target classes. In contrast, the recently
proposed cross-domain few-shot learning (CD-FSL) aims at transferring knowledge
from general natural images of many labeled examples to novel domain-specific
target categories of only a few labeled examples. The key challenge of CD-FSL
lies in the huge data shift between source and target domains, which is
typically in the form of totally different visual styles. This makes it very
nontrivial to directly extend the classical FSL methods to address the CD-FSL
task. To this end, this paper studies the problem of CD-FSL by spanning the
style distributions of the source dataset. Particularly, wavelet transform is
introduced to enable the decomposition of visual representations into
low-frequency components such as shape and style and high-frequency components
e.g., texture. To make our model robust to visual styles, the source images are
augmented by swapping the styles of their low-frequency components with each
other. We propose a novel Style Augmentation (StyleAug) module to implement
this idea. Furthermore, we present a Self-Supervised Learning (SSL) module to
ensure the predictions of style-augmented images are semantically similar to
the unchanged ones. This avoids the potential semantic drift problem in
exchanging the styles. Extensive experiments on two CD-FSL benchmarks show the
effectiveness of our method. Our codes and models will be released.
|
[
{
"created": "Tue, 15 Mar 2022 05:36:41 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Fu",
"Yuqian",
""
],
[
"Xie",
"Yu",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
Previous few-shot learning (FSL) works are mostly limited to natural images of general concepts and categories. These works assume very high visual similarity between the source and target classes. In contrast, the recently proposed cross-domain few-shot learning (CD-FSL) aims at transferring knowledge from general natural images of many labeled examples to novel domain-specific target categories of only a few labeled examples. The key challenge of CD-FSL lies in the huge data shift between source and target domains, which is typically in the form of totally different visual styles. This makes it very nontrivial to directly extend the classical FSL methods to address the CD-FSL task. To this end, this paper studies the problem of CD-FSL by spanning the style distributions of the source dataset. Particularly, wavelet transform is introduced to enable the decomposition of visual representations into low-frequency components such as shape and style and high-frequency components e.g., texture. To make our model robust to visual styles, the source images are augmented by swapping the styles of their low-frequency components with each other. We propose a novel Style Augmentation (StyleAug) module to implement this idea. Furthermore, we present a Self-Supervised Learning (SSL) module to ensure the predictions of style-augmented images are semantically similar to the unchanged ones. This avoids the potential semantic drift problem in exchanging the styles. Extensive experiments on two CD-FSL benchmarks show the effectiveness of our method. Our codes and models will be released.
|
2006.09040
|
Christopher Brix
|
Christopher Brix, Thomas Noll
|
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and
Faster Adversarial Robustness Proofs
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks are commonly used in safety-critical real-world applications.
Unfortunately, the predicted output is often highly sensitive to small, and
possibly imperceptible, changes to the input data. Proving that either no such
adversarial examples exist, or providing a concrete instance, is therefore
crucial to ensure safe applications. As enumerating and testing all potential
adversarial examples is computationally infeasible, verification techniques
have been developed to provide mathematically sound proofs of their absence
using overestimations of the network activations. We propose an improved
technique for computing tight upper and lower bounds of these node values,
based on increased flexibility gained by computing both bounds independently of
each other. Furthermore, we gain an additional improvement by re-implementing
part of the original state-of-the-art software "Neurify", leading to a faster
analysis. Combined, these adaptations reduce the necessary runtime by up to
94%, and allow a successful search for networks and inputs that were previously
too complex. We provide proofs for tight upper and lower bounds on max-pooling
layers in convolutional networks. To ensure widespread usability, we open
source our implementation "Debona", featuring both the implementation-specific
enhancements as well as the refined boundary computation for faster and more
exact results.
|
[
{
"created": "Tue, 16 Jun 2020 10:00:33 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Feb 2021 16:53:29 GMT",
"version": "v2"
}
] |
2021-02-03
|
[
[
"Brix",
"Christopher",
""
],
[
"Noll",
"Thomas",
""
]
] |
Neural networks are commonly used in safety-critical real-world applications. Unfortunately, the predicted output is often highly sensitive to small, and possibly imperceptible, changes to the input data. Proving that either no such adversarial examples exist, or providing a concrete instance, is therefore crucial to ensure safe applications. As enumerating and testing all potential adversarial examples is computationally infeasible, verification techniques have been developed to provide mathematically sound proofs of their absence using overestimations of the network activations. We propose an improved technique for computing tight upper and lower bounds of these node values, based on increased flexibility gained by computing both bounds independently of each other. Furthermore, we gain an additional improvement by re-implementing part of the original state-of-the-art software "Neurify", leading to a faster analysis. Combined, these adaptations reduce the necessary runtime by up to 94%, and allow a successful search for networks and inputs that were previously too complex. We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks. To ensure widespread usability, we open source our implementation "Debona", featuring both the implementation-specific enhancements as well as the refined boundary computation for faster and more exact results.
|
2011.03290
|
Delei Kong
|
Delei Kong, Zheng Fang, Haojia Li, Kuanxu Hou, Sonya Coleman and
Dermot Kerr
|
Event-VPR: End-to-End Weakly Supervised Network Architecture for
Event-based Visual Place Recognition
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional visual place recognition (VPR) methods generally use frame-based
cameras, which are prone to failure due to dramatic illumination changes or fast
motions. In this paper, we propose an end-to-end visual place recognition
network for event cameras, which can achieve good place recognition performance
in challenging environments. The key idea of the proposed algorithm is firstly
to characterize the event streams with the EST voxel grid, then extract
features using a convolution network, and finally aggregate features using an
improved VLAD network to realize end-to-end visual place recognition using
event streams. To verify the effectiveness of the proposed algorithm, we
compare the proposed method with classical VPR methods on the event-based
driving datasets (MVSEC, DDD17) and the synthetic datasets (Oxford RobotCar).
Experimental results show that the proposed method can achieve much better
performance in challenging scenarios. To our knowledge, this is the first
end-to-end event-based VPR method. The accompanying source code is available at
https://github.com/kongdelei/Event-VPR.
|
[
{
"created": "Fri, 6 Nov 2020 11:32:04 GMT",
"version": "v1"
}
] |
2020-11-09
|
[
[
"Kong",
"Delei",
""
],
[
"Fang",
"Zheng",
""
],
[
"Li",
"Haojia",
""
],
[
"Hou",
"Kuanxu",
""
],
[
"Coleman",
"Sonya",
""
],
[
"Kerr",
"Dermot",
""
]
] |
Traditional visual place recognition (VPR) methods generally use frame-based cameras, which are prone to failure due to dramatic illumination changes or fast motions. In this paper, we propose an end-to-end visual place recognition network for event cameras, which can achieve good place recognition performance in challenging environments. The key idea of the proposed algorithm is firstly to characterize the event streams with the EST voxel grid, then extract features using a convolution network, and finally aggregate features using an improved VLAD network to realize end-to-end visual place recognition using event streams. To verify the effectiveness of the proposed algorithm, we compare the proposed method with classical VPR methods on the event-based driving datasets (MVSEC, DDD17) and the synthetic datasets (Oxford RobotCar). Experimental results show that the proposed method can achieve much better performance in challenging scenarios. To our knowledge, this is the first end-to-end event-based VPR method. The accompanying source code is available at https://github.com/kongdelei/Event-VPR.
|
2203.01543
|
Andy T. Liu
|
Andy T. Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, Andrew
Arnold
|
QaNER: Prompting Question Answering Models for Few-shot Named Entity
Recognition
|
8 pages, 6 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, prompt-based learning for pre-trained language models has succeeded
in few-shot Named Entity Recognition (NER) by exploiting prompts as task
guidance to increase label efficiency. However, previous prompt-based methods
for few-shot NER have limitations such as a higher computational complexity,
poor zero-shot ability, requiring manual prompt engineering, or lack of prompt
robustness. In this work, we address these shortcomings by proposing a new
prompt-based learning NER method with Question Answering (QA), called QaNER.
Our approach includes 1) a refined strategy for converting NER problems into
the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based
tuning with QA models on a few annotated NER examples; 4) zero-shot NER by
prompting the QA model. Comparing the proposed approach with previous methods,
QaNER is faster at inference, insensitive to the prompt quality, and robust to
hyper-parameters, as well as demonstrating significantly better low-resource
performance and zero-shot capability.
|
[
{
"created": "Thu, 3 Mar 2022 06:56:01 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Mar 2022 07:58:08 GMT",
"version": "v2"
}
] |
2022-03-07
|
[
[
"Liu",
"Andy T.",
""
],
[
"Xiao",
"Wei",
""
],
[
"Zhu",
"Henghui",
""
],
[
"Zhang",
"Dejiao",
""
],
[
"Li",
"Shang-Wen",
""
],
[
"Arnold",
"Andrew",
""
]
] |
Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency. However, previous prompt-based methods for few-shot NER have limitations such as a higher computational complexity, poor zero-shot ability, requiring manual prompt engineering, or lack of prompt robustness. In this work, we address these shortcomings by proposing a new prompt-based learning NER method with Question Answering (QA), called QaNER. Our approach includes 1) a refined strategy for converting NER problems into the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based tuning with QA models on a few annotated NER examples; 4) zero-shot NER by prompting the QA model. Comparing the proposed approach with previous methods, QaNER is faster at inference, insensitive to the prompt quality, and robust to hyper-parameters, as well as demonstrating significantly better low-resource performance and zero-shot capability.
|
1312.2218
|
EPTCS
|
Nobuko Yoshida (Imperial College London, UK), Wim Vanderbauwhede
(University of Glasgow, UK)
|
Proceedings 5th Workshop on Programming Language Approaches to
Concurrency and Communication-cEntric Software
| null |
EPTCS 137, 2013
|
10.4204/EPTCS.137
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PLACES 2013 (full title: Programming Language Approaches to Concurrency- and
Communication-cEntric Software) was the sixth edition of the PLACES workshop
series. After the first PLACES, which was affiliated to DisCoTec in 2008, the
workshop has been part of ETAPS every year since 2009 and is now an established
part of the ETAPS satellite events. This year, PLACES was the best attended
workshop at ETAPS 2013.
The workshop series was started in order to promote the application of novel
programming language ideas to the increasingly important problem of developing
software for systems in which concurrency and communication are intrinsic
aspects. This includes software for multi- and many-core systems, accelerators
and large-scale distributed and/or service-oriented systems. The scope of
PLACES includes new programming language features, whole new programming
language designs, new type systems, new semantic approaches, new program
analysis techniques, and new implementation mechanisms.
|
[
{
"created": "Sun, 8 Dec 2013 14:19:11 GMT",
"version": "v1"
}
] |
2013-12-10
|
[
[
"Yoshida",
"Nobuko",
"",
"Imperial College London, UK"
],
[
"Vanderbauwhede",
"Wim",
"",
"University of Glasgow, UK"
]
] |
PLACES 2013 (full title: Programming Language Approaches to Concurrency- and Communication-cEntric Software) was the sixth edition of the PLACES workshop series. After the first PLACES, which was affiliated to DisCoTec in 2008, the workshop has been part of ETAPS every year since 2009 and is now an established part of the ETAPS satellite events. This year, PLACES was the best attended workshop at ETAPS 2013. The workshop series was started in order to promote the application of novel programming language ideas to the increasingly important problem of developing software for systems in which concurrency and communication are intrinsic aspects. This includes software for multi- and many-core systems, accelerators and large-scale distributed and/or service-oriented systems. The scope of PLACES includes new programming language features, whole new programming language designs, new type systems, new semantic approaches, new program analysis techniques, and new implementation mechanisms.
|
1909.06146
|
Qiao Jin
|
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu
|
PubMedQA: A Dataset for Biomedical Research Question Answering
|
EMNLP 2019
| null | null | null |
cs.CL cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PubMedQA, a novel biomedical question answering (QA) dataset
collected from PubMed abstracts. The task of PubMedQA is to answer research
questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial
fibrillation after coronary artery bypass grafting?) using the corresponding
abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k
artificially generated QA instances. Each PubMedQA instance is composed of (1)
a question which is either an existing research article title or derived from
one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably,
answers the research question, and (4) a yes/no/maybe answer which summarizes
the conclusion. PubMedQA is the first QA dataset where reasoning over
biomedical research texts, especially their quantitative contents, is required
to answer the questions. Our best performing model, multi-phase fine-tuning of
BioBERT with long answer bag-of-word statistics as additional supervision,
achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy
and majority-baseline of 55.2% accuracy, leaving much room for improvement.
PubMedQA is publicly available at https://pubmedqa.github.io.
|
[
{
"created": "Fri, 13 Sep 2019 11:18:20 GMT",
"version": "v1"
}
] |
2019-09-16
|
[
[
"Jin",
"Qiao",
""
],
[
"Dhingra",
"Bhuwan",
""
],
[
"Liu",
"Zhengping",
""
],
[
"Cohen",
"William W.",
""
],
[
"Lu",
"Xinghua",
""
]
] |
We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io.
|
2104.13818
|
Ali Ramezani-Kebrya
|
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan
Alistarh, Daniel M. Roy
|
NUQSGD: Provably Communication-efficient Data-parallel SGD via
Nonuniform Quantization
|
This entry is redundant and was created in error. See
arXiv:1908.06077 for the latest version
| null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the size and complexity of models and datasets grow, so does the need for
communication-efficient variants of stochastic gradient descent that can be
deployed to perform parallel model training. One popular
communication-compression method for data-parallel SGD is QSGD (Alistarh et
al., 2017), which quantizes and encodes gradients to reduce communication
costs. The baseline variant of QSGD provides strong theoretical guarantees;
however, for practical purposes, the authors proposed a heuristic variant which
we call QSGDinf, which demonstrated impressive empirical gains for distributed
training of large neural networks. In this paper, we build on this work to
propose a new gradient quantization scheme, and show that it has both stronger
theoretical guarantees than QSGD, and matches and exceeds the empirical
performance of the QSGDinf heuristic and of other compression methods.
|
[
{
"created": "Wed, 28 Apr 2021 15:07:03 GMT",
"version": "v1"
},
{
"created": "Sat, 1 May 2021 20:34:38 GMT",
"version": "v2"
}
] |
2021-05-05
|
[
[
"Ramezani-Kebrya",
"Ali",
""
],
[
"Faghri",
"Fartash",
""
],
[
"Markov",
"Ilya",
""
],
[
"Aksenov",
"Vitalii",
""
],
[
"Alistarh",
"Dan",
""
],
[
"Roy",
"Daniel M.",
""
]
] |
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed to perform parallel model training. One popular communication-compression method for data-parallel SGD is QSGD (Alistarh et al., 2017), which quantizes and encodes gradients to reduce communication costs. The baseline variant of QSGD provides strong theoretical guarantees; however, for practical purposes, the authors proposed a heuristic variant which we call QSGDinf, which demonstrated impressive empirical gains for distributed training of large neural networks. In this paper, we build on this work to propose a new gradient quantization scheme, and show that it has both stronger theoretical guarantees than QSGD, and matches and exceeds the empirical performance of the QSGDinf heuristic and of other compression methods.
|
1607.06757
|
Konrad Dabrowski
|
Alexandre Blanch\'e and Konrad K. Dabrowski and Matthew Johnson and
Dani\"el Paulusma
|
Hereditary Graph Classes: When the Complexities of Colouring and Clique
Cover Coincide
|
19 Pages, 5 Figures
| null | null | null |
cs.DS cs.CC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph is $(H_1,H_2)$-free for a pair of graphs $H_1,H_2$ if it contains no
induced subgraph isomorphic to $H_1$ or $H_2$. In 2001, Kr\'al',
Kratochv\'{\i}l, Tuza, and Woeginger initiated a study into the complexity of
Colouring for $(H_1,H_2)$-free graphs. Since then, others have tried to
complete their study, but many cases remain open. We focus on those
$(H_1,H_2)$-free graphs where $H_2$ is $\overline{H_1}$, the complement of
$H_1$. As these classes are closed under complementation, the computational
complexities of Colouring and Clique Cover coincide. By combining new and known
results, we are able to classify the complexity of Colouring and Clique Cover
for $(H,\overline{H})$-free graphs for all cases except when $H=sP_1+ P_3$ for
$s\geq 3$ or $H=sP_1+P_4$ for $s\geq 2$. We also classify the complexity of
Colouring on graph classes characterized by forbidding a finite number of
self-complementary induced subgraphs, and we initiate a study of $k$-Colouring
for $(P_r,\overline{P_r})$-free graphs.
|
[
{
"created": "Fri, 22 Jul 2016 17:32:39 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Dec 2016 20:29:41 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Jun 2017 11:10:25 GMT",
"version": "v3"
}
] |
2017-06-08
|
[
[
"Blanché",
"Alexandre",
""
],
[
"Dabrowski",
"Konrad K.",
""
],
[
"Johnson",
"Matthew",
""
],
[
"Paulusma",
"Daniël",
""
]
] |
A graph is $(H_1,H_2)$-free for a pair of graphs $H_1,H_2$ if it contains no induced subgraph isomorphic to $H_1$ or $H_2$. In 2001, Kr\'al', Kratochv\'{\i}l, Tuza, and Woeginger initiated a study into the complexity of Colouring for $(H_1,H_2)$-free graphs. Since then, others have tried to complete their study, but many cases remain open. We focus on those $(H_1,H_2)$-free graphs where $H_2$ is $\overline{H_1}$, the complement of $H_1$. As these classes are closed under complementation, the computational complexities of Colouring and Clique Cover coincide. By combining new and known results, we are able to classify the complexity of Colouring and Clique Cover for $(H,\overline{H})$-free graphs for all cases except when $H=sP_1+ P_3$ for $s\geq 3$ or $H=sP_1+P_4$ for $s\geq 2$. We also classify the complexity of Colouring on graph classes characterized by forbidding a finite number of self-complementary induced subgraphs, and we initiate a study of $k$-Colouring for $(P_r,\overline{P_r})$-free graphs.
|
2205.10629
|
Phillip Swazinna
|
Phillip Swazinna, Steffen Udluft, Thomas Runkler
|
User-Interactive Offline Reinforcement Learning
|
Accepted at ICLR 2023 - 11th International Conference on Learning
Representations
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offline reinforcement learning algorithms still lack trust in practice due to
the risk that the learned policy performs worse than the original policy that
generated the dataset or behaves in an unexpected way that is unfamiliar to the
user. At the same time, offline RL algorithms are not able to tune their most
important hyperparameter - the proximity of the learned policy to the original
policy. We propose an algorithm that allows the user to tune this
hyperparameter at runtime, thereby addressing both of the above-mentioned
issues simultaneously. This allows users to start with the original behavior
and grant successively greater deviation, as well as stopping at any time when
the policy deteriorates or the behavior is too far from the familiar one.
|
[
{
"created": "Sat, 21 May 2022 15:50:23 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Jan 2023 12:37:46 GMT",
"version": "v2"
}
] |
2023-01-26
|
[
[
"Swazinna",
"Phillip",
""
],
[
"Udluft",
"Steffen",
""
],
[
"Runkler",
"Thomas",
""
]
] |
Offline reinforcement learning algorithms still lack trust in practice due to the risk that the learned policy performs worse than the original policy that generated the dataset or behaves in an unexpected way that is unfamiliar to the user. At the same time, offline RL algorithms are not able to tune their most important hyperparameter - the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the above-mentioned issues simultaneously. This allows users to start with the original behavior and grant successively greater deviation, as well as stopping at any time when the policy deteriorates or the behavior is too far from the familiar one.
|
2202.09338
|
Bennet Meyers
|
Bennet E. Meyers and Stephen P. Boyd
|
Signal Decomposition Using Masked Proximal Operators
|
The manuscript has 61 pages, 22 figures and 2 tables. Also hosted at
https://web.stanford.edu/~boyd/papers/sig_decomp_mprox.html. For code, see
https://github.com/cvxgrp/signal-decomposition
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We consider the well-studied problem of decomposing a vector time series
signal into components with different characteristics, such as smooth,
periodic, nonnegative, or sparse. We describe a simple and general framework in
which the components are defined by loss functions (which include constraints),
and the signal decomposition is carried out by minimizing the sum of losses of
the components (subject to the constraints). When each loss function is the
negative log-likelihood of a density for the signal component, this framework
coincides with maximum a posteriori probability (MAP) estimation; but it also
includes many other interesting cases. Summarizing and clarifying prior
results, we give two distributed optimization methods for computing the
decomposition, which find the optimal decomposition when the component class
loss functions are convex, and are good heuristics when they are not. Both
methods require only the masked proximal operator of each of the component loss
functions, a generalization of the well-known proximal operator that handles
missing entries in its argument. Both methods are distributed, i.e., handle
each component separately. We derive tractable methods for evaluating the
masked proximal operators of some loss functions that, to our knowledge, have
not appeared in the literature.
|
[
{
"created": "Fri, 18 Feb 2022 18:05:33 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Mar 2022 16:46:36 GMT",
"version": "v2"
},
{
"created": "Tue, 3 May 2022 00:02:53 GMT",
"version": "v3"
},
{
"created": "Wed, 4 May 2022 16:05:04 GMT",
"version": "v4"
},
{
"created": "Mon, 20 Jun 2022 16:55:56 GMT",
"version": "v5"
},
{
"created": "Tue, 20 Sep 2022 04:15:43 GMT",
"version": "v6"
}
] |
2022-09-21
|
[
[
"Meyers",
"Bennet E.",
""
],
[
"Boyd",
"Stephen P.",
""
]
] |
We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.
|
1711.00354
|
Shinnosuke Takamichi
|
Ryosuke Sonobe, Shinnosuke Takamichi, Hiroshi Saruwatari
|
JSUT corpus: free large-scale Japanese speech corpus for end-to-end
speech synthesis
|
Submitted to ICASSP2018
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Thanks to improvements in machine learning techniques including deep
learning, a free large-scale speech corpus that can be shared between academic
institutions and commercial companies has an important role. However, such a
corpus for Japanese speech synthesis does not exist. In this paper, we designed
a novel Japanese speech corpus, named the "JSUT corpus," that is aimed at
achieving end-to-end speech synthesis. The corpus consists of 10 hours of
reading-style speech data and its transcription and covers all of the main
pronunciations of daily-use Japanese characters. In this paper, we describe how
we designed and analyzed the corpus. The corpus is freely available online.
|
[
{
"created": "Sat, 28 Oct 2017 05:28:01 GMT",
"version": "v1"
}
] |
2017-11-02
|
[
[
"Sonobe",
"Ryosuke",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
Thanks to improvements in machine learning techniques including deep learning, a free large-scale speech corpus that can be shared between academic institutions and commercial companies has an important role. However, such a corpus for Japanese speech synthesis does not exist. In this paper, we designed a novel Japanese speech corpus, named the "JSUT corpus," that is aimed at achieving end-to-end speech synthesis. The corpus consists of 10 hours of reading-style speech data and its transcription and covers all of the main pronunciations of daily-use Japanese characters. In this paper, we describe how we designed and analyzed the corpus. The corpus is freely available online.
|
1909.04942
|
Peiliang Li
|
Peiliang Li, Siqi Liu and Shaojie Shen
|
Multi-Sensor 3D Object Box Refinement for Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a 3D object detection system with multi-sensor refinement in the
context of autonomous driving. In our framework, the monocular camera serves as
the fundamental sensor for 2D object proposal and initial 3D bounding box
prediction, while the stereo cameras and LiDAR are treated as adaptive plug-in
sensors to refine the 3D box localization performance. For each observed
element in the raw measurement domain (e.g., pixels for stereo, 3D points for
LiDAR), we model the local geometry as an instance vector representation, which
indicates the 3D coordinate of each element with respect to the object frame.
Using this unified geometric representation, the 3D object location can be
uniformly refined by the stereo photometric alignment or point cloud alignment.
We demonstrate superior 3D detection and localization performance compared to
state-of-the-art monocular, stereo methods and competitive performance compared
with the baseline LiDAR method on the KITTI object benchmark.
|
[
{
"created": "Wed, 11 Sep 2019 09:38:56 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2019 05:36:55 GMT",
"version": "v2"
}
] |
2019-11-20
|
[
[
"Li",
"Peiliang",
""
],
[
"Liu",
"Siqi",
""
],
[
"Shen",
"Shaojie",
""
]
] |
We propose a 3D object detection system with multi-sensor refinement in the context of autonomous driving. In our framework, the monocular camera serves as the fundamental sensor for 2D object proposal and initial 3D bounding box prediction, while the stereo cameras and LiDAR are treated as adaptive plug-in sensors to refine the 3D box localization performance. For each observed element in the raw measurement domain (e.g., pixels for stereo, 3D points for LiDAR), we model the local geometry as an instance vector representation, which indicates the 3D coordinate of each element with respect to the object frame. Using this unified geometric representation, the 3D object location can be uniformly refined by the stereo photometric alignment or point cloud alignment. We demonstrate superior 3D detection and localization performance compared to state-of-the-art monocular, stereo methods and competitive performance compared with the baseline LiDAR method on the KITTI object benchmark.
|
2402.10323
|
Melissa Greeff
|
Babak Akbari and Melissa Greeff
|
A Computationally Efficient Learning-Based Model Predictive Control for
Multirotors under Aerodynamic Disturbances
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neglecting complex aerodynamic effects hinders high-speed yet high-precision
multirotor autonomy. In this paper, we present a computationally efficient
learning-based model predictive controller that simultaneously optimizes a
trajectory that can be tracked within the physical limits (on thrust and
orientation) of the multirotor system despite unknown aerodynamic forces and
adapts the control input. To do this, we leverage the well-known differential
flatness property of multirotors, which allows us to transform their nonlinear
dynamics into a linear model. The main limitation of current flatness-based
planning and control approaches is that they often neglect dynamic feasibility.
This is because these constraints are nonlinear as a result of the mapping
between the input, i.e., multirotor thrust, and the flat state. In our
approach, we learn a novel representation of the drag forces by learning the
mapping from the flat state to the multirotor thrust vector (in a world frame)
as a Gaussian Process (GP). Our proposed approach leverages the properties of
GPs to develop a convex optimal controller that can be iteratively solved as a
second-order cone program (SOCP). In simulation experiments, our proposed
approach outperforms related model predictive controllers that do not account
for aerodynamic effects on trajectory feasibility, leading to a reduction of up
to 55% in absolute tracking error.
|
[
{
"created": "Thu, 15 Feb 2024 20:54:05 GMT",
"version": "v1"
}
] |
2024-02-19
|
[
[
"Akbari",
"Babak",
""
],
[
"Greeff",
"Melissa",
""
]
] |
Neglecting complex aerodynamic effects hinders high-speed yet high-precision multirotor autonomy. In this paper, we present a computationally efficient learning-based model predictive controller that simultaneously optimizes a trajectory that can be tracked within the physical limits (on thrust and orientation) of the multirotor system despite unknown aerodynamic forces and adapts the control input. To do this, we leverage the well-known differential flatness property of multirotors, which allows us to transform their nonlinear dynamics into a linear model. The main limitation of current flatness-based planning and control approaches is that they often neglect dynamic feasibility. This is because these constraints are nonlinear as a result of the mapping between the input, i.e., multirotor thrust, and the flat state. In our approach, we learn a novel representation of the drag forces by learning the mapping from the flat state to the multirotor thrust vector (in a world frame) as a Gaussian Process (GP). Our proposed approach leverages the properties of GPs to develop a convex optimal controller that can be iteratively solved as a second-order cone program (SOCP). In simulation experiments, our proposed approach outperforms related model predictive controllers that do not account for aerodynamic effects on trajectory feasibility, leading to a reduction of up to 55% in absolute tracking error.
|
2004.13477
|
Pavel Surynek
|
Pavel Surynek
|
Pushing the Envelope: From Discrete to Continuous Movements in
Multi-Agent Path Finding via Lazy Encodings
|
arXiv admin note: text overlap with arXiv:1903.09820
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-agent path finding in continuous space and time with geometric agents
MAPF$^\mathcal{R}$ is addressed in this paper. The task is to navigate agents
that move smoothly between predefined positions to their individual goals so
that they do not collide. We introduce a novel solving approach for obtaining
makespan optimal solutions called SMT-CBS$^\mathcal{R}$ based on {\em
satisfiability modulo theories} (SMT). The new algorithm combines collision
resolution known from conflict-based search (CBS) with previous generation of
incomplete SAT encodings on top of a novel scheme for selecting decision
variables in a potentially uncountable search space. We experimentally compare
SMT-CBS$^\mathcal{R}$ and the previous CCBS algorithm for MAPF$^\mathcal{R}$.
|
[
{
"created": "Sat, 25 Apr 2020 13:21:32 GMT",
"version": "v1"
}
] |
2020-04-29
|
[
[
"Surynek",
"Pavel",
""
]
] |
Multi-agent path finding in continuous space and time with geometric agents MAPF$^\mathcal{R}$ is addressed in this paper. The task is to navigate agents that move smoothly between predefined positions to their individual goals so that they do not collide. We introduce a novel solving approach for obtaining makespan optimal solutions called SMT-CBS$^\mathcal{R}$ based on {\em satisfiability modulo theories} (SMT). The new algorithm combines collision resolution known from conflict-based search (CBS) with the previous generation of incomplete SAT encodings on top of a novel scheme for selecting decision variables in a potentially uncountable search space. We experimentally compare SMT-CBS$^\mathcal{R}$ and the previous CCBS algorithm for MAPF$^\mathcal{R}$.
|
1609.09430
|
Shawn Hershey
|
Shawn Hershey, Sourish Chaudhuri, Daniel P. W. Ellis, Jort F. Gemmeke,
Aren Jansen, R. Channing Moore, Manoj Plakal, Devin Platt, Rif A. Saurous,
Bryan Seybold, Malcolm Slaney, Ron J. Weiss, Kevin Wilson
|
CNN Architectures for Large-Scale Audio Classification
|
Accepted for publication at ICASSP 2017 Changes: Added definitions of
mAP, AUC, and d-prime. Updated mAP/AUC/d-prime numbers for Audio Set based on
changes of latest Audio Set revision. Changed wording to fit 4 page limit
with new additions
| null | null | null |
cs.SD cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) have proven very effective in image
classification and show promise for audio. We use various CNN architectures to
classify the soundtracks of a dataset of 70M training videos (5.24 million
hours) with 30,871 video-level labels. We examine fully connected Deep Neural
Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We
investigate varying the size of both training set and label vocabulary, finding
that analogs of the CNNs used in image classification do well on our audio
classification task, and larger training and label sets help up to a point. A
model using embeddings from these classifiers does much better than raw
features on the Audio Set [5] Acoustic Event Detection (AED) classification
task.
|
[
{
"created": "Thu, 29 Sep 2016 17:04:50 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Jan 2017 18:06:51 GMT",
"version": "v2"
}
] |
2017-01-11
|
[
[
"Hershey",
"Shawn",
""
],
[
"Chaudhuri",
"Sourish",
""
],
[
"Ellis",
"Daniel P. W.",
""
],
[
"Gemmeke",
"Jort F.",
""
],
[
"Jansen",
"Aren",
""
],
[
"Moore",
"R. Channing",
""
],
[
"Plakal",
"Manoj",
""
],
[
"Platt",
"Devin",
""
],
[
"Saurous",
"Rif A.",
""
],
[
"Seybold",
"Bryan",
""
],
[
"Slaney",
"Malcolm",
""
],
[
"Weiss",
"Ron J.",
""
],
[
"Wilson",
"Kevin",
""
]
] |
Convolutional Neural Networks (CNNs) have proven very effective in image classification and show promise for audio. We use various CNN architectures to classify the soundtracks of a dataset of 70M training videos (5.24 million hours) with 30,871 video-level labels. We examine fully connected Deep Neural Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We investigate varying the size of both training set and label vocabulary, finding that analogs of the CNNs used in image classification do well on our audio classification task, and larger training and label sets help up to a point. A model using embeddings from these classifiers does much better than raw features on the Audio Set [5] Acoustic Event Detection (AED) classification task.
|
2005.05385
|
Murat Yildirim
|
Suleyman Yildirim, Alper Ekrem Murat, Murat Yildirim, Suzan Arslanturk
|
Process Knowledge Driven Change Point Detection for Automated
Calibration of Discrete Event Simulation Models Using Machine Learning
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.LG cs.SY eess.SY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Initial development and subsequent calibration of discrete event simulation
models for complex systems require accurate identification of dynamically
changing process characteristics. Existing data driven change point methods
(DD-CPD) assume changes are extraneous to the system, thus cannot utilize
available process knowledge. This work proposes a unified framework for
process-driven multi-variate change point detection (PD-CPD) by combining
change point detection models with machine learning and process-driven
simulation modeling. The PD-CPD, after initializing with DD-CPD's change
point(s), uses simulation models to generate system level outputs as
time-series data streams which are then used to train neural network models to
predict system characteristics and change points. The accuracy of the
predictive models measures the likelihood that the actual process data conforms
to the simulated change points in system characteristics. PD-CPD iteratively
optimizes change points by repeating simulation and predictive model building
steps until the set of change point(s) with the maximum likelihood is
identified. Using an emergency department case study, we show that PD-CPD
significantly improves change point detection accuracy over DD-CPD estimates
and is able to detect actual change points.
|
[
{
"created": "Mon, 11 May 2020 19:07:26 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Sep 2020 04:24:27 GMT",
"version": "v2"
}
] |
2020-09-22
|
[
[
"Yildirim",
"Suleyman",
""
],
[
"Murat",
"Alper Ekrem",
""
],
[
"Yildirim",
"Murat",
""
],
[
"Arslanturk",
"Suzan",
""
]
] |
Initial development and subsequent calibration of discrete event simulation models for complex systems require accurate identification of dynamically changing process characteristics. Existing data driven change point methods (DD-CPD) assume changes are extraneous to the system, thus cannot utilize available process knowledge. This work proposes a unified framework for process-driven multi-variate change point detection (PD-CPD) by combining change point detection models with machine learning and process-driven simulation modeling. The PD-CPD, after initializing with DD-CPD's change point(s), uses simulation models to generate system level outputs as time-series data streams which are then used to train neural network models to predict system characteristics and change points. The accuracy of the predictive models measures the likelihood that the actual process data conforms to the simulated change points in system characteristics. PD-CPD iteratively optimizes change points by repeating simulation and predictive model building steps until the set of change point(s) with the maximum likelihood is identified. Using an emergency department case study, we show that PD-CPD significantly improves change point detection accuracy over DD-CPD estimates and is able to detect actual change points.
|
1912.10398
|
L.A. Prashanth
|
Ajay Kumar Pandey, Prashanth L.A. and Sanjay P. Bhat
|
Estimation of Spectral Risk Measures
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of estimating a spectral risk measure (SRM) from
i.i.d. samples, and propose a novel method that is based on numerical
integration. We show that our SRM estimate concentrates exponentially, when the
underlying distribution has bounded support. Further, we also consider the case
when the underlying distribution is either Gaussian or exponential, and derive
a concentration bound for our estimation scheme. We validate the theoretical
findings on a synthetic setup, and in a vehicular traffic routing application.
|
[
{
"created": "Sun, 22 Dec 2019 08:11:42 GMT",
"version": "v1"
}
] |
2019-12-24
|
[
[
"Pandey",
"Ajay Kumar",
""
],
[
"A.",
"Prashanth L.",
""
],
[
"Bhat",
"Sanjay P.",
""
]
] |
We consider the problem of estimating a spectral risk measure (SRM) from i.i.d. samples, and propose a novel method that is based on numerical integration. We show that our SRM estimate concentrates exponentially, when the underlying distribution has bounded support. Further, we also consider the case when the underlying distribution is either Gaussian or exponential, and derive a concentration bound for our estimation scheme. We validate the theoretical findings on a synthetic setup, and in a vehicular traffic routing application.
|
1905.11445
|
Ke Wang
|
Ke Wang, Mihai Christodorescu
|
COSET: A Benchmark for Evaluating Neural Program Embeddings
|
8 Pages
| null | null | null |
cs.LG cs.PL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural program embedding can be helpful in analyzing large software, a task
that is challenging for traditional logic-based program analyses due to their
limited scalability. A key focus of recent machine-learning advances in this
area is on modeling program semantics instead of just syntax. Unfortunately,
evaluating such advances is not obvious, as program semantics does not lend
itself to straightforward metrics. In this paper, we introduce a benchmarking
framework called COSET for standardizing the evaluation of neural program
embeddings. COSET consists of a diverse dataset of programs in source-code
format, labeled by human experts according to a number of program properties of
interest. A point of novelty is a suite of program transformations included in
COSET. These transformations when applied to the base dataset can simulate
natural changes to program code due to optimization and refactoring and can
serve as a "debugging" tool for classification mistakes. We conducted a pilot
study on four prominent models: TreeLSTM, gated graph neural network (GGNN),
AST-Path neural network (APNN), and DYPRO. We found that COSET is useful in
identifying the strengths and limitations of each model and in pinpointing
specific syntactic and semantic characteristics of programs that pose
challenges.
|
[
{
"created": "Mon, 27 May 2019 18:44:54 GMT",
"version": "v1"
}
] |
2019-05-29
|
[
[
"Wang",
"Ke",
""
],
[
"Christodorescu",
"Mihai",
""
]
] |
Neural program embedding can be helpful in analyzing large software, a task that is challenging for traditional logic-based program analyses due to their limited scalability. A key focus of recent machine-learning advances in this area is on modeling program semantics instead of just syntax. Unfortunately, evaluating such advances is not obvious, as program semantics does not lend itself to straightforward metrics. In this paper, we introduce a benchmarking framework called COSET for standardizing the evaluation of neural program embeddings. COSET consists of a diverse dataset of programs in source-code format, labeled by human experts according to a number of program properties of interest. A point of novelty is a suite of program transformations included in COSET. These transformations when applied to the base dataset can simulate natural changes to program code due to optimization and refactoring and can serve as a "debugging" tool for classification mistakes. We conducted a pilot study on four prominent models: TreeLSTM, gated graph neural network (GGNN), AST-Path neural network (APNN), and DYPRO. We found that COSET is useful in identifying the strengths and limitations of each model and in pinpointing specific syntactic and semantic characteristics of programs that pose challenges.
|
2004.11475
|
Aayush Rana
|
Mamshad Nayeem Rizve, Ugur Demir, Praveen Tirupattur, Aayush Jung
Rana, Kevin Duarte, Ishan Dave, Yogesh Singh Rawat, Mubarak Shah
|
Gabriella: An Online System for Real-Time Activity Detection in
Untrimmed Security Videos
|
9 pages
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Activity detection in security videos is a difficult problem due to multiple
factors such as large field of view, presence of multiple activities, varying
scales and viewpoints, and its untrimmed nature. The existing research in
activity detection is mainly focused on datasets, such as UCF-101, JHMDB,
THUMOS, and AVA, which partially address these issues. The requirement of
processing the security videos in real-time makes this even more challenging.
In this work we propose Gabriella, a real-time online system to perform
activity detection on untrimmed security videos. The proposed method consists
of three stages: tubelet extraction, activity classification, and online
tubelet merging. For tubelet extraction, we propose a localization network
which takes a video clip as input and spatio-temporally detects potential
foreground regions at multiple scales to generate action tubelets. We propose a
novel Patch-Dice loss to handle large variations in actor size. Our online
processing of videos at a clip level drastically reduces the computation time
in detecting activities. The detected tubelets are assigned activity class
scores by the classification network and merged together using our proposed
Tubelet-Merge Action-Split (TMAS) algorithm to form the final action
detections. The TMAS algorithm efficiently connects the tubelets in an online
fashion to generate action detections which are robust against varying length
activities. We perform our experiments on the VIRAT and MEVA (Multiview
Extended Video with Activities) datasets and demonstrate the effectiveness of
the proposed approach in terms of speed (~100 fps) and performance with
state-of-the-art results. The code and models will be made publicly available.
|
[
{
"created": "Thu, 23 Apr 2020 22:20:10 GMT",
"version": "v1"
},
{
"created": "Tue, 19 May 2020 17:45:25 GMT",
"version": "v2"
}
] |
2020-05-20
|
[
[
"Rizve",
"Mamshad Nayeem",
""
],
[
"Demir",
"Ugur",
""
],
[
"Tirupattur",
"Praveen",
""
],
[
"Rana",
"Aayush Jung",
""
],
[
"Duarte",
"Kevin",
""
],
[
"Dave",
"Ishan",
""
],
[
"Rawat",
"Yogesh Singh",
""
],
[
"Shah",
"Mubarak",
""
]
] |
Activity detection in security videos is a difficult problem due to multiple factors such as large field of view, presence of multiple activities, varying scales and viewpoints, and its untrimmed nature. The existing research in activity detection is mainly focused on datasets, such as UCF-101, JHMDB, THUMOS, and AVA, which partially address these issues. The requirement of processing the security videos in real-time makes this even more challenging. In this work we propose Gabriella, a real-time online system to perform activity detection on untrimmed security videos. The proposed method consists of three stages: tubelet extraction, activity classification, and online tubelet merging. For tubelet extraction, we propose a localization network which takes a video clip as input and spatio-temporally detects potential foreground regions at multiple scales to generate action tubelets. We propose a novel Patch-Dice loss to handle large variations in actor size. Our online processing of videos at a clip level drastically reduces the computation time in detecting activities. The detected tubelets are assigned activity class scores by the classification network and merged together using our proposed Tubelet-Merge Action-Split (TMAS) algorithm to form the final action detections. The TMAS algorithm efficiently connects the tubelets in an online fashion to generate action detections which are robust against varying length activities. We perform our experiments on the VIRAT and MEVA (Multiview Extended Video with Activities) datasets and demonstrate the effectiveness of the proposed approach in terms of speed (~100 fps) and performance with state-of-the-art results. The code and models will be made publicly available.
|
2301.07696
|
Robert Beinert
|
Robert Beinert, Saghar Rezaei
|
Prony-Based Super-Resolution Phase Retrieval of Sparse, Multivariate
Signals
| null | null | null | null |
cs.IT cs.NA math.IT math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phase retrieval consists in the recovery of an unknown signal from phaseless
measurements of its usually complex-valued Fourier transform. Without further
assumptions, this problem is notoriously severely ill posed, such that the
recovery of the true signal is nearly impossible. In certain applications like
crystallography, speckle imaging in astronomy, or blind channel estimation in
communications, the unknown signal has a specific, sparse structure. In this
paper, we exploit this sparse structure to recover the unknown signal uniquely
up to inevitable ambiguities such as global phase shifts, translations, and
conjugated reflections. Although we use a constructive proof essentially based
on Prony's method, our focus lies on the derivation of a recovery guarantee for
multivariate signals using an adaptive sampling scheme. Instead of sampling the
entire multivariate Fourier intensity, we only employ Fourier samples along
certain adaptively chosen lines. For bivariate signals, an analogous result can
be established for samples in generic directions. The number of samples here
scales quadratically with the sparsity level of the unknown signal.
|
[
{
"created": "Wed, 18 Jan 2023 18:36:16 GMT",
"version": "v1"
}
] |
2023-01-19
|
[
[
"Beinert",
"Robert",
""
],
[
"Rezaei",
"Saghar",
""
]
] |
Phase retrieval consists in the recovery of an unknown signal from phaseless measurements of its usually complex-valued Fourier transform. Without further assumptions, this problem is notoriously severely ill posed, such that the recovery of the true signal is nearly impossible. In certain applications like crystallography, speckle imaging in astronomy, or blind channel estimation in communications, the unknown signal has a specific, sparse structure. In this paper, we exploit this sparse structure to recover the unknown signal uniquely up to inevitable ambiguities such as global phase shifts, translations, and conjugated reflections. Although we use a constructive proof essentially based on Prony's method, our focus lies on the derivation of a recovery guarantee for multivariate signals using an adaptive sampling scheme. Instead of sampling the entire multivariate Fourier intensity, we only employ Fourier samples along certain adaptively chosen lines. For bivariate signals, an analogous result can be established for samples in generic directions. The number of samples here scales quadratically with the sparsity level of the unknown signal.
|
2302.09703
|
Jihao Long
|
Jihao Long and Jiequn Han
|
Reinforcement Learning with Function Approximation: From Linear to
Nonlinear
| null |
J. Mach. Learn. , 2 (2023), pp. 161-193
|
10.4208/jml.230105
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Function approximation has been an indispensable component in modern
reinforcement learning algorithms designed to tackle problems with large state
spaces in high dimensions. This paper reviews recent results on error analysis
for these reinforcement learning algorithms in linear or nonlinear
approximation settings, emphasizing approximation error and estimation
error/sample complexity. We discuss various properties related to approximation
error and present concrete conditions on transition probability and reward
function under which these properties hold true. Sample complexity analysis in
reinforcement learning is more complicated than in supervised learning,
primarily due to the distribution mismatch phenomenon. With assumptions on the
linear structure of the problem, numerous algorithms in the literature achieve
polynomial sample complexity with respect to the number of features, episode
length, and accuracy, although the minimax rate has not been achieved yet.
These results rely on the $L^\infty$ and UCB estimation of estimation error,
which can handle the distribution mismatch phenomenon. The problem and analysis
become substantially more challenging in the setting of nonlinear function
approximation, as both $L^\infty$ and UCB estimation are inadequate for
bounding the error with a favorable rate in high dimensions. We discuss
additional assumptions necessary to address the distribution mismatch and
derive meaningful results for nonlinear RL problems.
|
[
{
"created": "Mon, 20 Feb 2023 00:31:18 GMT",
"version": "v1"
},
{
"created": "Fri, 19 May 2023 01:01:39 GMT",
"version": "v2"
}
] |
2024-02-27
|
[
[
"Long",
"Jihao",
""
],
[
"Han",
"Jiequn",
""
]
] |
Function approximation has been an indispensable component in modern reinforcement learning algorithms designed to tackle problems with large state spaces in high dimensions. This paper reviews recent results on error analysis for these reinforcement learning algorithms in linear or nonlinear approximation settings, emphasizing approximation error and estimation error/sample complexity. We discuss various properties related to approximation error and present concrete conditions on transition probability and reward function under which these properties hold true. Sample complexity analysis in reinforcement learning is more complicated than in supervised learning, primarily due to the distribution mismatch phenomenon. With assumptions on the linear structure of the problem, numerous algorithms in the literature achieve polynomial sample complexity with respect to the number of features, episode length, and accuracy, although the minimax rate has not been achieved yet. These results rely on the $L^\infty$ and UCB estimation of estimation error, which can handle the distribution mismatch phenomenon. The problem and analysis become substantially more challenging in the setting of nonlinear function approximation, as both $L^\infty$ and UCB estimation are inadequate for bounding the error with a favorable rate in high dimensions. We discuss additional assumptions necessary to address the distribution mismatch and derive meaningful results for nonlinear RL problems.
|
1806.08810
|
Nima Roohi
|
Nima Roohi, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee
|
Self-Driving Vehicle Verification Towards a Benchmark
|
7 pages
| null | null | null |
cs.LO cs.RO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Industrial cyber-physical systems are hybrid systems with strict safety
requirements. Despite not having a formal semantics, most of these systems are
modeled using Stateflow/Simulink for mainly two reasons: (1) it is easier to
model, test, and simulate using these tools, and (2) the dynamics of these
systems are not supported by most other tools. Furthermore, with the
ever-growing complexity of cyber-physical systems, the gap grows between what
can be modeled using an automatic formal verification tool and models of
industrial cyber-physical systems. In this paper, we present a simple formal
model for self-driving cars. While, after some simplification, the safety of
this system has already been proven manually, to the best of our knowledge, no
automatic formal verification tool supports its dynamics. We hope this serves
as a challenge problem for formal verification tools targeting industrial
applications.
|
[
{
"created": "Wed, 20 Jun 2018 12:23:35 GMT",
"version": "v1"
}
] |
2018-06-26
|
[
[
"Roohi",
"Nima",
""
],
[
"Kaur",
"Ramneet",
""
],
[
"Weimer",
"James",
""
],
[
"Sokolsky",
"Oleg",
""
],
[
"Lee",
"Insup",
""
]
] |
Industrial cyber-physical systems are hybrid systems with strict safety requirements. Despite not having a formal semantics, most of these systems are modeled using Stateflow/Simulink for mainly two reasons: (1) it is easier to model, test, and simulate using these tools, and (2) the dynamics of these systems are not supported by most other tools. Furthermore, with the ever-growing complexity of cyber-physical systems, the gap grows between what can be modeled using an automatic formal verification tool and models of industrial cyber-physical systems. In this paper, we present a simple formal model for self-driving cars. While, after some simplification, the safety of this system has already been proven manually, to the best of our knowledge, no automatic formal verification tool supports its dynamics. We hope this serves as a challenge problem for formal verification tools targeting industrial applications.
|
1301.3527
|
Vamsi Potluru
|
Vamsi K. Potluru, Sergey M. Plis, Jonathan Le Roux, Barak A.
Pearlmutter, Vince D. Calhoun, Thomas P. Hayes
|
Block Coordinate Descent for Sparse NMF
| null | null | null | null |
cs.LG cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms,
such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow
and other formulations for sparse NMF have been proposed such as those based on
L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
[
{
"created": "Tue, 15 Jan 2013 23:11:05 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Mar 2013 22:42:11 GMT",
"version": "v2"
}
] |
2013-03-20
|
[
[
"Potluru",
"Vamsi K.",
""
],
[
"Plis",
"Sergey M.",
""
],
[
"Roux",
"Jonathan Le",
""
],
[
"Pearlmutter",
"Barak A.",
""
],
[
"Calhoun",
"Vince D.",
""
],
[
"Hayes",
"Thomas P.",
""
]
] |
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms, such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
1410.1006
|
Fabrizio Frati
|
Giuseppe Di Battista and Fabrizio Frati
|
A Survey on Small-Area Planar Graph Drawing
|
Preliminary version appeared in "Thirty Essays on Geometric Graph
Theory", J. Pach (ed.), 2012
| null | null | null |
cs.CG cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We survey algorithms and bounds for constructing planar drawings of graphs in
small area.
|
[
{
"created": "Sat, 4 Oct 2014 01:44:39 GMT",
"version": "v1"
}
] |
2014-10-07
|
[
[
"Di Battista",
"Giuseppe",
""
],
[
"Frati",
"Fabrizio",
""
]
] |
We survey algorithms and bounds for constructing planar drawings of graphs in small area.
|
2209.13750
|
Andrey Kutuzov
|
Anna Aksenova, Ekaterina Gavrishina, Elisey Rykov, Andrey Kutuzov
|
RuDSI: graph-based word sense induction dataset for Russian
|
TextGraphs-16 workshop at the CoLING-2022 conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian.
The dataset was created using manual annotation and semi-automatic clustering
of Word Usage Graphs (WUGs). Unlike prior WSI datasets for Russian, RuDSI is
completely data-driven (based on texts from the Russian National Corpus), with no
external word senses imposed on annotators. Depending on the parameters of
graph clustering, different derivative datasets can be produced from raw
annotation. We report the performance that several baseline WSI methods obtain
on RuDSI and discuss possibilities for improving these scores.
|
[
{
"created": "Wed, 28 Sep 2022 00:08:24 GMT",
"version": "v1"
}
] |
2022-09-29
|
[
[
"Aksenova",
"Anna",
""
],
[
"Gavrishina",
"Ekaterina",
""
],
[
"Rykov",
"Elisey",
""
],
[
"Kutuzov",
"Andrey",
""
]
] |
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike prior WSI datasets for Russian, RuDSI is completely data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of graph clustering, different derivative datasets can be produced from raw annotation. We report the performance that several baseline WSI methods obtain on RuDSI and discuss possibilities for improving these scores.
|
2007.14964
|
David Gotz
|
David Borland, Jonathan Zhang, Smiti Kaul, David Gotz
|
Selection-Bias-Corrected Visualization via Dynamic Reweighting
|
This article will be published in IEEE Transactions on Visualization
and Computer Graphics (TVCG) in January 2021. The work will also be presented
at IEEE VIS 2020. Video figure available here: https://vimeo.com/442775090
| null |
10.1109/TVCG.2020.3030455
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The collection and visual analysis of large-scale data from complex systems,
such as electronic health records or clickstream data, has become increasingly
common across a wide range of industries. This type of retrospective visual
analysis, however, is prone to a variety of selection bias effects, especially
for high-dimensional data where only a subset of dimensions is visualized at
any given time. The risk of selection bias is even higher when analysts
dynamically apply filters or perform grouping operations during ad hoc
analyses. These bias effects threaten the validity and generalizability of
insights discovered during visual analysis as the basis for decision making.
Past work has focused on bias transparency, helping users understand when
selection bias may have occurred. However, countering the effects of selection
bias via bias mitigation is typically left for the user to accomplish as a
separate process. Dynamic reweighting (DR) is a novel computational approach to
selection bias mitigation that helps users craft bias-corrected visualizations.
This paper describes the DR workflow, introduces key DR visualization designs,
and presents statistical methods that support the DR process. Use cases from
the medical domain, as well as findings from domain expert user interviews, are
also reported.
|
[
{
"created": "Wed, 29 Jul 2020 17:15:36 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Aug 2020 21:02:54 GMT",
"version": "v2"
}
] |
2020-12-07
|
[
[
"Borland",
"David",
""
],
[
"Zhang",
"Jonathan",
""
],
[
"Kaul",
"Smiti",
""
],
[
"Gotz",
"David",
""
]
] |
The collection and visual analysis of large-scale data from complex systems, such as electronic health records or clickstream data, has become increasingly common across a wide range of industries. This type of retrospective visual analysis, however, is prone to a variety of selection bias effects, especially for high-dimensional data where only a subset of dimensions is visualized at any given time. The risk of selection bias is even higher when analysts dynamically apply filters or perform grouping operations during ad hoc analyses. These bias effects threaten the validity and generalizability of insights discovered during visual analysis as the basis for decision making. Past work has focused on bias transparency, helping users understand when selection bias may have occurred. However, countering the effects of selection bias via bias mitigation is typically left for the user to accomplish as a separate process. Dynamic reweighting (DR) is a novel computational approach to selection bias mitigation that helps users craft bias-corrected visualizations. This paper describes the DR workflow, introduces key DR visualization designs, and presents statistical methods that support the DR process. Use cases from the medical domain, as well as findings from domain expert user interviews, are also reported.
|
2204.08103
|
Ariel Rosenfeld
|
Ariel Rosenfeld and Oleg Maksimov
|
Should Young Computer Scientists Stop Collaborating with their Doctoral
Advisors?
|
Communications of the ACM (to appear)
| null |
10.1145/3529089
| null |
cs.CY cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the first steps in an academic career, and perhaps the pillar thereof,
is completing a PhD under the supervision of a doctoral advisor. While prior
work has examined the advisor-advisee relationship and its potential effects on
the prospective academic success of the advisee, very little is known about
the possibly continued relationship after the advisee has graduated. We harnessed
three genealogical and scientometric datasets to identify 3 distinct groups of
computer scientists: Highly independent, who cease collaborating with their
advisors (almost) instantly upon graduation; Moderately independent, who
(quickly) reduce the collaboration rate over ~5 years; and Weakly independent,
who continue collaborating with their advisors for at least 10 years
post-graduation. We find that highly independent researchers are more
academically successful than their peers in terms of H-index, i10-index and
total number of citations throughout their careers. Moderately independent
researchers perform, on average, better than weakly independent researchers,
yet the differences are not found to be statistically significant. In addition,
both highly and moderately independent researchers are found to have longer
academic careers. Interestingly, weakly independent researchers tend to be
supervised by more academically successful advisors.
|
[
{
"created": "Thu, 7 Apr 2022 18:49:39 GMT",
"version": "v1"
}
] |
2022-04-19
|
[
[
"Rosenfeld",
"Ariel",
""
],
[
"Maksimov",
"Oleg",
""
]
] |
One of the first steps in an academic career, and perhaps the pillar thereof, is completing a PhD under the supervision of a doctoral advisor. While prior work has examined the advisor-advisee relationship and its potential effects on the prospective academic success of the advisee, very little is known about the possibly continued relationship after the advisee has graduated. We harnessed three genealogical and scientometric datasets to identify 3 distinct groups of computer scientists: Highly independent, who cease collaborating with their advisors (almost) instantly upon graduation; Moderately independent, who (quickly) reduce the collaboration rate over ~5 years; and Weakly independent, who continue collaborating with their advisors for at least 10 years post-graduation. We find that highly independent researchers are more academically successful than their peers in terms of H-index, i10-index and total number of citations throughout their careers. Moderately independent researchers perform, on average, better than weakly independent researchers, yet the differences are not found to be statistically significant. In addition, both highly and moderately independent researchers are found to have longer academic careers. Interestingly, weakly independent researchers tend to be supervised by more academically successful advisors.
|
1512.03131
|
Li Wang
|
Li Wang and Dennis Sng
|
Deep Learning Algorithms with Applications to Video Analytics for A
Smart City: A Survey
|
8 pages, 18 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has recently achieved very promising results in a wide range of
areas such as computer vision, speech recognition and natural language
processing. It aims to learn hierarchical representations of data by using deep
architecture models. In a smart city, a lot of data (e.g. videos captured from
many distributed sensors) need to be automatically processed and analyzed. In
this paper, we review the deep learning algorithms applied to video analytics
of smart city in terms of different research topics: object detection, object
tracking, face recognition, image classification and scene labeling.
|
[
{
"created": "Thu, 10 Dec 2015 03:23:54 GMT",
"version": "v1"
}
] |
2015-12-11
|
[
[
"Wang",
"Li",
""
],
[
"Sng",
"Dennis",
""
]
] |
Deep learning has recently achieved very promising results in a wide range of areas such as computer vision, speech recognition and natural language processing. It aims to learn hierarchical representations of data by using deep architecture models. In a smart city, a lot of data (e.g. videos captured from many distributed sensors) need to be automatically processed and analyzed. In this paper, we review the deep learning algorithms applied to video analytics of smart city in terms of different research topics: object detection, object tracking, face recognition, image classification and scene labeling.
|
2212.13647
|
Miguel Pardal
|
Duarte M. Nascimento and Miguel Ferreira and Miguel L. Pardal
|
Does Big Data Require Complex Systems? A Performance Comparison Between
Spark and Unicage Shell Scripts
|
10 pages, 14 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paradigm of big data is characterized by the need to collect and process
data sets of great volume, arriving at the systems with great velocity, in a
variety of formats. Spark is a widely used big data processing system that can
be integrated with Hadoop to provide powerful abstractions to developers, such
as distributed storage through HDFS and resource management through YARN. When
all the required configurations are made, Spark can also provide quality
attributes, such as scalability, fault tolerance, and security. However, all of
these benefits come at the cost of complexity, with high memory requirements,
and additional latency in processing. An alternative approach is to use a lean
software stack, like Unicage, that delegates most control back to the
developer. In this work we evaluated the performance of big data processing
with Spark versus Unicage, in a cluster environment hosted in the IBM Cloud.
Two sets of experiments were performed: batch processing of unstructured data
sets, and query processing of structured data sets. The input data sets were of
significant size, ranging from 64 GB to 8192 GB in volume. The results show
that the performance of Unicage scripts is superior to Spark for search
workloads like grep and select, but that the abstractions of distributed
storage and resource management from the Hadoop stack enable Spark to execute
workloads with inter-record dependencies, such as sort and join, with correct
outputs.
|
[
{
"created": "Wed, 28 Dec 2022 00:04:13 GMT",
"version": "v1"
}
] |
2022-12-29
|
[
[
"Nascimento",
"Duarte M.",
""
],
[
"Ferreira",
"Miguel",
""
],
[
"Pardal",
"Miguel L.",
""
]
] |
The paradigm of big data is characterized by the need to collect and process data sets of great volume, arriving at the systems with great velocity, in a variety of formats. Spark is a widely used big data processing system that can be integrated with Hadoop to provide powerful abstractions to developers, such as distributed storage through HDFS and resource management through YARN. When all the required configurations are made, Spark can also provide quality attributes, such as scalability, fault tolerance, and security. However, all of these benefits come at the cost of complexity, with high memory requirements, and additional latency in processing. An alternative approach is to use a lean software stack, like Unicage, that delegates most control back to the developer. In this work we evaluated the performance of big data processing with Spark versus Unicage, in a cluster environment hosted in the IBM Cloud. Two sets of experiments were performed: batch processing of unstructured data sets, and query processing of structured data sets. The input data sets were of significant size, ranging from 64 GB to 8192 GB in volume. The results show that the performance of Unicage scripts is superior to Spark for search workloads like grep and select, but that the abstractions of distributed storage and resource management from the Hadoop stack enable Spark to execute workloads with inter-record dependencies, such as sort and join, with correct outputs.
|
1005.5065
|
Manar Mohaisen
|
Manar Mohaisen, KyungHi Chang
|
Upper-lower bounded-complexity QRD-M for spatial multiplexing MIMO-OFDM
systems
|
Springer, Wireless Personal Communications Journal (WPC'2010), 13
pages, 6 figures, 2 tables, 1 algorithm
| null |
10.1007/s11277-010-0014-8
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple-input multiple-output (MIMO) technology applied with orthogonal
frequency division multiplexing (OFDM) is considered as the ultimate solution
to increase channel capacity without any additional spectral resources. At the
receiver side, the challenge resides in designing low complexity detection
algorithms capable of separating independent streams sent simultaneously from
different antennas. In this paper, we introduce an upper-lower
bounded-complexity QRD-M algorithm (ULBC QRD-M). In the proposed algorithm we
solve the problem of the extremely high complexity of the conventional sphere
decoding by fixing the upper bound complexity to that of the conventional
QRD-M. On the other hand, ULBC QRD-M intelligently cancels all unnecessary
hypotheses to achieve very low computational requirements. Analyses and
simulation results show that the proposed algorithm achieves the performance of
conventional QRD-M with only 26% of the required computations.
|
[
{
"created": "Thu, 27 May 2010 13:26:03 GMT",
"version": "v1"
}
] |
2010-05-28
|
[
[
"Mohaisen",
"Manar",
""
],
[
"Chang",
"KyungHi",
""
]
] |
Multiple-input multiple-output (MIMO) technology applied with orthogonal frequency division multiplexing (OFDM) is considered as the ultimate solution to increase channel capacity without any additional spectral resources. At the receiver side, the challenge resides in designing low complexity detection algorithms capable of separating independent streams sent simultaneously from different antennas. In this paper, we introduce an upper-lower bounded-complexity QRD-M algorithm (ULBC QRD-M). In the proposed algorithm we solve the problem of the extremely high complexity of the conventional sphere decoding by fixing the upper bound complexity to that of the conventional QRD-M. On the other hand, ULBC QRD-M intelligently cancels all unnecessary hypotheses to achieve very low computational requirements. Analyses and simulation results show that the proposed algorithm achieves the performance of conventional QRD-M with only 26% of the required computations.
|
2303.15892
|
Yuhao Cheng
|
Yuhao Cheng and Yichao Yan and Wenhan Zhu and Ye Pan and Bowen Pan and
Xiaokang Yang
|
Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Head generation with diverse identities is an important task in computer
vision and computer graphics, widely used in multimedia applications. However,
current full head generation methods require a large number of 3D scans or
multi-view images to train the model, resulting in expensive data acquisition
cost. To address this issue, we propose Head3D, a method to generate full 3D
heads with limited multi-view images. Specifically, our approach first extracts
facial priors represented by tri-planes learned in EG3D, a 3D-aware generative
model, and then proposes feature distillation to deliver the 3D frontal faces
into complete heads without compromising head integrity. To mitigate the domain
gap between the face and head models, we present dual-discriminators to guide
the frontal and back head generation, respectively. Our model achieves
cost-efficient and diverse complete head generation with photo-realistic
renderings and high-quality geometry representations. Extensive experiments
demonstrate the effectiveness of our proposed Head3D, both qualitatively and
quantitatively.
|
[
{
"created": "Tue, 28 Mar 2023 11:12:26 GMT",
"version": "v1"
}
] |
2023-03-29
|
[
[
"Cheng",
"Yuhao",
""
],
[
"Yan",
"Yichao",
""
],
[
"Zhu",
"Wenhan",
""
],
[
"Pan",
"Ye",
""
],
[
"Pan",
"Bowen",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
Head generation with diverse identities is an important task in computer vision and computer graphics, widely used in multimedia applications. However, current full head generation methods require a large number of 3D scans or multi-view images to train the model, resulting in expensive data acquisition cost. To address this issue, we propose Head3D, a method to generate full 3D heads with limited multi-view images. Specifically, our approach first extracts facial priors represented by tri-planes learned in EG3D, a 3D-aware generative model, and then proposes feature distillation to deliver the 3D frontal faces into complete heads without compromising head integrity. To mitigate the domain gap between the face and head models, we present dual-discriminators to guide the frontal and back head generation, respectively. Our model achieves cost-efficient and diverse complete head generation with photo-realistic renderings and high-quality geometry representations. Extensive experiments demonstrate the effectiveness of our proposed Head3D, both qualitatively and quantitatively.
|
2306.00757
|
Zou Zhou
|
Qing Huang, Zhou Zou, Zhenchang Xing, Zhenkang Zuo, Xiwei Xu, Qinghua
Lu
|
AI Chain on Large Language Model for Unsupervised Control Flow Graph
Generation for Statically-Typed Partial Code
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Control Flow Graphs (CFGs) are essential for visualizing, understanding and
analyzing program behavior. For statically-typed programming languages like
Java, developers obtain CFGs by using bytecode-based methods for compilable
code and Abstract Syntax Tree (AST)-based methods for partially uncompilable
code. However, explicit syntax errors during AST construction and implicit
semantic errors caused by bad coding practices can lead to behavioral loss and
deviation of CFGs. To address the issue, we propose a novel approach that
leverages the error-tolerant and understanding ability of pre-trained Large
Language Models (LLMs) to generate CFGs. Our approach involves a Chain of
Thought (CoT) with four steps: structure hierarchy extraction, nested code
block extraction, CFG generation of nested code blocks, and fusion of all
nested code blocks' CFGs. To address the limitations of the original CoT's
single-prompt approach (i.e., completing all steps in a single generative
pass), which can result in an ``epic'' prompt with hard-to-control behavior and
error accumulation, we break down the CoT into an AI chain with explicit
sub-steps. Each sub-step corresponds to a separate AI-unit, with an effective
prompt assigned to each unit for interacting with LLMs to accomplish a specific
purpose. Our experiments confirmed that our method outperforms existing CFG
tools in terms of node and edge coverage, especially for incomplete or
erroneous code. We also conducted an ablation experiment and confirmed the
effectiveness of AI chain design principles: Hierarchical Task Breakdown, Unit
Composition, and Mix of AI Units and Non-AI Units. Our work opens up new
possibilities for building foundational software engineering tools based on
LLMs, as opposed to traditional program analysis methods.
|
[
{
"created": "Thu, 1 Jun 2023 14:52:59 GMT",
"version": "v1"
}
] |
2023-06-02
|
[
[
"Huang",
"Qing",
""
],
[
"Zou",
"Zhou",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Zuo",
"Zhenkang",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Lu",
"Qinghua",
""
]
] |
Control Flow Graphs (CFGs) are essential for visualizing, understanding and analyzing program behavior. For statically-typed programming languages like Java, developers obtain CFGs by using bytecode-based methods for compilable code and Abstract Syntax Tree (AST)-based methods for partially uncompilable code. However, explicit syntax errors during AST construction and implicit semantic errors caused by bad coding practices can lead to behavioral loss and deviation of CFGs. To address the issue, we propose a novel approach that leverages the error-tolerant and understanding ability of pre-trained Large Language Models (LLMs) to generate CFGs. Our approach involves a Chain of Thought (CoT) with four steps: structure hierarchy extraction, nested code block extraction, CFG generation of nested code blocks, and fusion of all nested code blocks' CFGs. To address the limitations of the original CoT's single-prompt approach (i.e., completing all steps in a single generative pass), which can result in an ``epic'' prompt with hard-to-control behavior and error accumulation, we break down the CoT into an AI chain with explicit sub-steps. Each sub-step corresponds to a separate AI-unit, with an effective prompt assigned to each unit for interacting with LLMs to accomplish a specific purpose. Our experiments confirmed that our method outperforms existing CFG tools in terms of node and edge coverage, especially for incomplete or erroneous code. We also conducted an ablation experiment and confirmed the effectiveness of AI chain design principles: Hierarchical Task Breakdown, Unit Composition, and Mix of AI Units and Non-AI Units. Our work opens up new possibilities for building foundational software engineering tools based on LLMs, as opposed to traditional program analysis methods.
|
2204.08575
|
Basheer Joudeh
|
Basheer Joudeh and Boris \v{S}kori\'c
|
Collusion-resistant fingerprinting of parallel content channels
|
15 pages. 1 figure. Submitted to IHMMSEC'22
| null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The fingerprinting game is analysed when the coalition size $k$ is known to
the tracer, but the colluders can distribute themselves across $L$ TV channels.
The collusion channel is introduced and the extra degrees of freedom for the
coalition are made manifest in our formulation. We introduce a payoff
functional that is analogous to the single TV channel case, and is conjectured
to be closely related to the fingerprinting capacity. For the binary alphabet
case under the marking assumption, and the restriction of access to one TV
channel per person per segment, we derive the asymptotic behavior of the payoff
functional. We find that the value of the maximin game for our payoff is
asymptotically equal to $L^2/k^2 2 \ln 2$, with optimal strategy for the tracer
being the arcsine distribution, and for the coalition being the interleaving
attack across all TV channels, as well as assigning an equal number of
colluders across the $L$ TV channels.
|
[
{
"created": "Mon, 18 Apr 2022 22:06:23 GMT",
"version": "v1"
}
] |
2022-04-20
|
[
[
"Joudeh",
"Basheer",
""
],
[
"Škorić",
"Boris",
""
]
] |
The fingerprinting game is analysed when the coalition size $k$ is known to the tracer, but the colluders can distribute themselves across $L$ TV channels. The collusion channel is introduced and the extra degrees of freedom for the coalition are made manifest in our formulation. We introduce a payoff functional that is analogous to the single TV channel case, and is conjectured to be closely related to the fingerprinting capacity. For the binary alphabet case under the marking assumption, and the restriction of access to one TV channel per person per segment, we derive the asymptotic behavior of the payoff functional. We find that the value of the maximin game for our payoff is asymptotically equal to $L^2/k^2 2 \ln 2$, with optimal strategy for the tracer being the arcsine distribution, and for the coalition being the interleaving attack across all TV channels, as well as assigning an equal number of colluders across the $L$ TV channels.
|
2105.14565
|
JingKai Siow
|
Yaqin Zhou, Jing Kai Siow, Chenyu Wang, Shangqing Liu, Yang Liu
|
SPI: Automated Identification of Security Patches via Commits
|
Accepted By ACM Transactions on Software Engineering and Methodology
(TOSEM), Continuous Special Section: AI and SE
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security patches in open-source software, providing security fixes to
identified vulnerabilities, are crucial in protecting against cyberattacks.
Although the National Vulnerability Database (NVD) publishes identified
vulnerabilities, a vast majority of vulnerabilities and their corresponding
security patches remain beyond public exposure, e.g., in the open-source
libraries that are heavily relied on by developers. An extensive security
patches dataset could help end-users such as security companies, e.g., building
a security knowledge base, or researchers, e.g., aiding in vulnerability
research. To curate security patches including undisclosed patches at a large
scale and low cost, we propose a deep neural-network-based approach built upon
commits of open-source repositories. We build security patch datasets that
include 38,291 security-related commits and 1,045 CVE patches from four C
libraries. We manually verify each commit, among the 38,291 security-related
commits, to determine if they are security-related. We devise a deep
learning-based security patch identification system that consists of two neural
networks: one commit-message neural network that utilizes pretrained word
representations learned from our commits dataset; and one code-revision neural
network that takes code before and after revision and learns the distinction on
the statement level. Our evaluation results show that our system outperforms
SVM and K-fold stacking algorithm, achieving as high as 87.93% F1-score and
precision of 86.24%. We deployed our pipeline and learned model in an
industrial production environment to evaluate the generalization ability of our
approach. The industrial dataset consists of 298,917 commits from 410 new
libraries that span a wide range of functionality. Our experiment results and
observation proved that our approach identifies security patches effectively
among open-sourced projects.
|
[
{
"created": "Sun, 30 May 2021 15:09:40 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Jun 2021 14:00:38 GMT",
"version": "v2"
}
] |
2021-06-08
|
[
[
"Zhou",
"Yaqin",
""
],
[
"Siow",
"Jing Kai",
""
],
[
"Wang",
"Chenyu",
""
],
[
"Liu",
"Shangqing",
""
],
[
"Liu",
"Yang",
""
]
] |
Security patches in open-source software, providing security fixes to identified vulnerabilities, are crucial in protecting against cyberattacks. Although the National Vulnerability Database (NVD) publishes identified vulnerabilities, a vast majority of vulnerabilities and their corresponding security patches remain beyond public exposure, e.g., in the open-source libraries that are heavily relied on by developers. An extensive security patches dataset could help end-users such as security companies, e.g., building a security knowledge base, or researchers, e.g., aiding in vulnerability research. To curate security patches including undisclosed patches at a large scale and low cost, we propose a deep neural-network-based approach built upon commits of open-source repositories. We build security patch datasets that include 38,291 security-related commits and 1,045 CVE patches from four C libraries. We manually verify each commit, among the 38,291 security-related commits, to determine if they are security-related. We devise a deep learning-based security patch identification system that consists of two neural networks: one commit-message neural network that utilizes pretrained word representations learned from our commits dataset; and one code-revision neural network that takes code before and after revision and learns the distinction on the statement level. Our evaluation results show that our system outperforms SVM and K-fold stacking algorithm, achieving as high as 87.93% F1-score and precision of 86.24%. We deployed our pipeline and learned model in an industrial production environment to evaluate the generalization ability of our approach. The industrial dataset consists of 298,917 commits from 410 new libraries that span a wide range of functionality. Our experiment results and observation proved that our approach identifies security patches effectively among open-sourced projects.
|
2211.01496
|
Yu Zhang
|
Yu Zhang, Mitchell Bucklew
|
Max Markov Chain
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce Max Markov Chain (MMC), a novel representation
for a useful subset of High-order Markov Chains (HMCs) with sparse correlations
among the states. MMC is parsimonious while retaining the expressiveness of HMCs.
Even though parameter optimization is generally intractable as with HMC
approximate models, it has an analytical solution, better sample efficiency,
and the desired spatial and computational advantages over HMCs and approximate
HMCs. Simultaneously, efficient approximate solutions exist for this type of
chains as we show empirically, which allow MMCs to scale to large domains where
HMCs and approximate HMCs would struggle to perform. We compare MMC with HMC,
first-order Markov chain, and an approximate HMC model in synthetic domains
with various data types to demonstrate that MMC is a valuable alternative for
modeling stochastic processes and has many potential applications.
|
[
{
"created": "Wed, 2 Nov 2022 21:50:54 GMT",
"version": "v1"
}
] |
2022-11-04
|
[
[
"Zhang",
"Yu",
""
],
[
"Bucklew",
"Mitchell",
""
]
] |
In this paper, we introduce Max Markov Chain (MMC), a novel representation for a useful subset of High-order Markov Chains (HMCs) with sparse correlations among the states. MMC is parsimonious while retaining the expressiveness of HMCs. Even though parameter optimization is generally intractable as with HMC approximate models, it has an analytical solution, better sample efficiency, and the desired spatial and computational advantages over HMCs and approximate HMCs. Simultaneously, efficient approximate solutions exist for this type of chains as we show empirically, which allow MMCs to scale to large domains where HMCs and approximate HMCs would struggle to perform. We compare MMC with HMC, first-order Markov chain, and an approximate HMC model in synthetic domains with various data types to demonstrate that MMC is a valuable alternative for modeling stochastic processes and has many potential applications.
|
2403.14183
|
Jong Chul Ye
|
Kwanyoung Kim, Yujin Oh, Jong Chul Ye
|
OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic
Segmentation
|
ECCV 2024; 23 pages, 8 tables, 8 figures; Project Page:
https://cubeyoung.github.io/OTSeg_project/
| null | null | null |
cs.CV cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The recent success of CLIP has demonstrated promising results in zero-shot
semantic segmentation by transferring multimodal knowledge to pixel-level
classification. However, leveraging pre-trained CLIP knowledge to closely align
text embeddings with pixel embeddings still has limitations in existing
approaches. To address this issue, we propose OTSeg, a novel multimodal
attention mechanism aimed at enhancing the potential of multiple text prompts
for matching associated pixel embeddings. We first propose Multi-Prompts
Sinkhorn (MPS) based on the Optimal Transport (OT) algorithm, which leads
multiple text prompts to selectively focus on various semantic features within
image pixels. Moreover, inspired by the success of Sinkformers in unimodal
settings, we introduce the extension of MPS, called Multi-Prompts Sinkhorn
Attention (MPSA), which effectively replaces cross-attention mechanisms within
the Transformer framework in multimodal settings. Through extensive experiments, we
demonstrate that OTSeg achieves state-of-the-art (SOTA) performance with
significant gains on Zero-Shot Semantic Segmentation (ZS3) tasks across three
benchmark datasets.
|
[
{
"created": "Thu, 21 Mar 2024 07:15:37 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jul 2024 18:09:48 GMT",
"version": "v2"
}
] |
2024-07-15
|
[
[
"Kim",
"Kwanyoung",
""
],
[
"Oh",
"Yujin",
""
],
[
"Ye",
"Jong Chul",
""
]
] |
The recent success of CLIP has demonstrated promising results in zero-shot semantic segmentation by transferring multimodal knowledge to pixel-level classification. However, leveraging pre-trained CLIP knowledge to closely align text embeddings with pixel embeddings still has limitations in existing approaches. To address this issue, we propose OTSeg, a novel multimodal attention mechanism aimed at enhancing the potential of multiple text prompts for matching associated pixel embeddings. We first propose Multi-Prompts Sinkhorn (MPS) based on the Optimal Transport (OT) algorithm, which leads multiple text prompts to selectively focus on various semantic features within image pixels. Moreover, inspired by the success of Sinkformers in unimodal settings, we introduce the extension of MPS, called Multi-Prompts Sinkhorn Attention (MPSA), which effectively replaces cross-attention mechanisms within the Transformer framework in multimodal settings. Through extensive experiments, we demonstrate that OTSeg achieves state-of-the-art (SOTA) performance with significant gains on Zero-Shot Semantic Segmentation (ZS3) tasks across three benchmark datasets.
|
2108.04897
|
Bijit Hore
|
Bijit Hore, Ravi Jammalamadaka, Sharad Mehrotra, Amedeo D'Ascanio
|
Constrained Generalization For Data Anonymization - A Systematic Search
Based Approach
|
45 pages
| null | null | null |
cs.DB cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Data generalization is a powerful technique for sanitizing multi-attribute
data for publication. In a multidimensional model, a subset of attributes
called the quasi-identifiers (QI) are used to define the space and a
generalization scheme corresponds to a partitioning of the data space. The
process of sanitization can be modeled as a constrained optimization problem
where the information loss metric is to be minimized while ensuring that the
privacy criteria are enforced. The privacy requirements translate into
constraints on the partitions (bins), like minimum occupancy constraints for
k-anonymity, value diversity constraints for l-diversity, etc. Most algorithms
proposed to date use some greedy search heuristic to search for a locally
optimal generalization scheme. The performance of such algorithms degrades
rapidly as the constraints are made more complex and numerous. To address this
issue, in this paper we develop a complete enumeration based systematic search
framework that searches for the globally optimal generalization scheme amongst
all feasible candidates. We employ a novel enumeration technique that
eliminates duplicates and develop effective pruning heuristics that cut down
the solution space in order to make the search tractable. Our scheme is
versatile enough to accommodate multiple constraints and information loss
functions satisfying a set of generic properties (that are usually satisfied by
most metrics proposed in literature). Additionally, our approach allows the
user to specify various stopping criteria and can give a bound on the
approximation factor achieved by any candidate solution. Finally, we carry out
extensive experimentation whose results illustrate the power of our algorithm
and its advantage over other competing approaches.
|
[
{
"created": "Tue, 10 Aug 2021 19:45:27 GMT",
"version": "v1"
}
] |
2021-08-12
|
[
[
"Hore",
"Bijit",
""
],
[
"Jammalamadaka",
"Ravi",
""
],
[
"Mehrotra",
"Sharad",
""
],
[
"D'Ascanio",
"Amedeo",
""
]
] |
Data generalization is a powerful technique for sanitizing multi-attribute data for publication. In a multidimensional model, a subset of attributes called the quasi-identifiers (QI) are used to define the space and a generalization scheme corresponds to a partitioning of the data space. The process of sanitization can be modeled as a constrained optimization problem where the information loss metric is to be minimized while ensuring that the privacy criteria are enforced. The privacy requirements translate into constraints on the partitions (bins), like minimum occupancy constraints for k-anonymity, value diversity constraints for l-diversity, etc. Most algorithms proposed to date use some greedy search heuristic to search for a locally optimal generalization scheme. The performance of such algorithms degrades rapidly as the constraints are made more complex and numerous. To address this issue, in this paper we develop a complete enumeration based systematic search framework that searches for the globally optimal generalization scheme amongst all feasible candidates. We employ a novel enumeration technique that eliminates duplicates and develop effective pruning heuristics that cut down the solution space in order to make the search tractable. Our scheme is versatile enough to accommodate multiple constraints and information loss functions satisfying a set of generic properties (that are usually satisfied by most metrics proposed in literature). Additionally, our approach allows the user to specify various stopping criteria and can give a bound on the approximation factor achieved by any candidate solution. Finally, we carry out extensive experimentation whose results illustrate the power of our algorithm and its advantage over other competing approaches.
|
2107.09133
|
Daniel Kunin
|
Daniel Kunin, Javier Sagastuy-Brena, Lauren Gillespie, Eshed Margalit,
Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins
|
The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations,
and Anomalous Diffusion
|
78 pages, 9 figures, Neural Computation 2024
|
Neural Computation (2024) 36 (1) 151-174
|
10.1162/neco_a_01626
| null |
cs.LG cond-mat.stat-mech q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we explore the limiting dynamics of deep neural networks trained
with stochastic gradient descent (SGD). As observed previously, long after
performance has converged, networks continue to move through parameter space by
a process of anomalous diffusion in which distance travelled grows as a power
law in the number of gradient updates with a nontrivial exponent. We reveal an
intricate interaction between the hyperparameters of optimization, the
structure in the gradient noise, and the Hessian matrix at the end of training
that explains this anomalous diffusion. To build this understanding, we first
derive a continuous-time model for SGD with finite learning rates and batch
sizes as an underdamped Langevin equation. We study this equation in the
setting of linear regression, where we can derive exact, analytic expressions
for the phase space dynamics of the parameters and their instantaneous
velocities from initialization to stationarity. Using the Fokker-Planck
equation, we show that the key ingredient driving these dynamics is not the
original training loss, but rather the combination of a modified loss, which
implicitly regularizes the velocity, and probability currents, which cause
oscillations in phase space. We identify qualitative and quantitative
predictions of this theory in the dynamics of a ResNet-18 model trained on
ImageNet. Through the lens of statistical physics, we uncover a mechanistic
origin for the anomalous limiting dynamics of deep neural networks trained with
SGD.
|
[
{
"created": "Mon, 19 Jul 2021 20:18:57 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 23:45:27 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Dec 2021 17:30:08 GMT",
"version": "v3"
},
{
"created": "Thu, 28 Dec 2023 17:48:28 GMT",
"version": "v4"
}
] |
2023-12-29
|
[
[
"Kunin",
"Daniel",
""
],
[
"Sagastuy-Brena",
"Javier",
""
],
[
"Gillespie",
"Lauren",
""
],
[
"Margalit",
"Eshed",
""
],
[
"Tanaka",
"Hidenori",
""
],
[
"Ganguli",
"Surya",
""
],
[
"Yamins",
"Daniel L. K.",
""
]
] |
In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD.
|
2305.03572
|
Enzo Tartaglione
|
Marta Milovanovi\'c, Enzo Tartaglione, Marco Cagnazzo, F\'elix Henry
|
Learn how to Prune Pixels for Multi-view Neural Image-based Synthesis
| null | null |
10.1109/ICMEW59549.2023.00034
| null |
cs.MM cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-based rendering techniques stand at the core of an immersive experience
for the user, as they generate novel views given a set of multiple input
images. Since they have shown good performance in terms of objective and
subjective quality, the research community devotes great effort to their
improvement. However, the large volume of data necessary to render at the
receiver's side hinders applications in limited bandwidth environments or
prevents their employment in real-time applications. We present LeHoPP, a
method for input pixel pruning, where we examine the importance of each input
pixel concerning the rendered view, and we avoid the use of irrelevant pixels.
Even without retraining the image-based rendering network, our approach shows a
good trade-off between synthesis quality and pixel rate. When tested in the
general neural rendering framework, compared to other pruning baselines, LeHoPP
gains between $0.9$ dB and $3.6$ dB on average.
|
[
{
"created": "Fri, 5 May 2023 14:29:24 GMT",
"version": "v1"
}
] |
2023-09-13
|
[
[
"Milovanović",
"Marta",
""
],
[
"Tartaglione",
"Enzo",
""
],
[
"Cagnazzo",
"Marco",
""
],
[
"Henry",
"Félix",
""
]
] |
Image-based rendering techniques stand at the core of an immersive experience for the user, as they generate novel views given a set of multiple input images. Since they have shown good performance in terms of objective and subjective quality, the research community devotes great effort to their improvement. However, the large volume of data necessary to render at the receiver's side hinders applications in limited bandwidth environments or prevents their employment in real-time applications. We present LeHoPP, a method for input pixel pruning, where we examine the importance of each input pixel concerning the rendered view, and we avoid the use of irrelevant pixels. Even without retraining the image-based rendering network, our approach shows a good trade-off between synthesis quality and pixel rate. When tested in the general neural rendering framework, compared to other pruning baselines, LeHoPP gains between $0.9$ dB and $3.6$ dB on average.
|
2109.04650
|
Sang-Woo Lee
|
Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak,
Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo,
Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun
Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo,
Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim,
Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook
Kang, Inho Kang, Jung-Woo Ha, Woomyoung Park, Nako Sung
|
What Changes Can Large-scale Language Models Bring? Intensive Study on
HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
|
Accepted to EMNLP2021 as a long paper. Fixed some typos
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
GPT-3 shows remarkable in-context learning ability of large-scale language
models (LMs) trained on hundreds of billion scale data. Here we address some
remaining issues less reported by the GPT-3 paper, such as a non-English LM,
the performances of different sized models, and the effect of recently
introduced prompt optimization on in-context learning. To achieve this, we
introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric
corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA
with our training configuration shows state-of-the-art in-context zero-shot and
few-shot learning performances on various downstream tasks in Korean. Also, we
show the performance benefits of prompt-based learning and demonstrate how it
can be integrated into the prompt engineering pipeline. Then we discuss the
possibility of materializing the No Code AI paradigm by providing AI
prototyping capabilities to non-experts of ML by introducing HyperCLOVA studio,
an interactive prompt engineering interface. Lastly, we demonstrate the
potential of our methods with three successful in-house applications.
|
[
{
"created": "Fri, 10 Sep 2021 03:32:19 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Nov 2021 10:56:27 GMT",
"version": "v2"
}
] |
2021-11-30
|
[
[
"Kim",
"Boseop",
""
],
[
"Kim",
"HyoungSeok",
""
],
[
"Lee",
"Sang-Woo",
""
],
[
"Lee",
"Gichang",
""
],
[
"Kwak",
"Donghyun",
""
],
[
"Jeon",
"Dong Hyeon",
""
],
[
"Park",
"Sunghyun",
""
],
[
"Kim",
"Sungju",
""
],
[
"Kim",
"Seonhoon",
""
],
[
"Seo",
"Dongpil",
""
],
[
"Lee",
"Heungsub",
""
],
[
"Jeong",
"Minyoung",
""
],
[
"Lee",
"Sungjae",
""
],
[
"Kim",
"Minsub",
""
],
[
"Ko",
"Suk Hyun",
""
],
[
"Kim",
"Seokhun",
""
],
[
"Park",
"Taeyong",
""
],
[
"Kim",
"Jinuk",
""
],
[
"Kang",
"Soyoung",
""
],
[
"Ryu",
"Na-Hyeon",
""
],
[
"Yoo",
"Kang Min",
""
],
[
"Chang",
"Minsuk",
""
],
[
"Suh",
"Soobin",
""
],
[
"In",
"Sookyo",
""
],
[
"Park",
"Jinseong",
""
],
[
"Kim",
"Kyungduk",
""
],
[
"Kim",
"Hiun",
""
],
[
"Jeong",
"Jisu",
""
],
[
"Yeo",
"Yong Goo",
""
],
[
"Ham",
"Donghoon",
""
],
[
"Park",
"Dongju",
""
],
[
"Lee",
"Min Young",
""
],
[
"Kang",
"Jaewook",
""
],
[
"Kang",
"Inho",
""
],
[
"Ha",
"Jung-Woo",
""
],
[
"Park",
"Woomyoung",
""
],
[
"Sung",
"Nako",
""
]
] |
GPT-3 shows remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billion scale data. Here we address some remaining issues less reported by the GPT-3 paper, such as a non-English LM, the performances of different sized models, and the effect of recently introduced prompt optimization on in-context learning. To achieve this, we introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA with our training configuration shows state-of-the-art in-context zero-shot and few-shot learning performances on various downstream tasks in Korean. Also, we show the performance benefits of prompt-based learning and demonstrate how it can be integrated into the prompt engineering pipeline. Then we discuss the possibility of materializing the No Code AI paradigm by providing AI prototyping capabilities to non-experts of ML by introducing HyperCLOVA studio, an interactive prompt engineering interface. Lastly, we demonstrate the potential of our methods with three successful in-house applications.
|
2305.18718
|
Augustinos Saravanos
|
Augustinos D. Saravanos, Yihui Li, Evangelos A. Theodorou
|
Distributed Hierarchical Distribution Control for Very-Large-Scale
Clustered Multi-Agent Systems
|
Accepted at Robotics: Science and Systems 2023
| null | null | null |
cs.RO cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As the scale and complexity of multi-agent robotic systems are subject to a
continuous increase, this paper considers a class of systems labeled as
Very-Large-Scale Multi-Agent Systems (VLMAS) with dimensionality that can scale
up to the order of millions of agents. In particular, we consider the problem
of steering the state distributions of all agents of a VLMAS to prescribed
target distributions while satisfying probabilistic safety guarantees. Based on
the key assumption that such systems often admit a multi-level hierarchical
clustered structure - where the agents are organized into cliques of different
levels - we associate the control of such cliques with the control of
distributions, and introduce the Distributed Hierarchical Distribution Control
(DHDC) framework. The proposed approach consists of two sub-frameworks. The
first one, Distributed Hierarchical Distribution Estimation (DHDE), is a
bottom-up hierarchical decentralized algorithm which links the initial and
target configurations of the cliques of all levels with suitable Gaussian
distributions. The second part, Distributed Hierarchical Distribution Steering
(DHDS), is a top-down hierarchical distributed method that steers the
distributions of all cliques and agents from the initial to the target ones
assigned by DHDE. Simulation results that scale up to two million agents
demonstrate the effectiveness and scalability of the proposed framework. The
increased computational efficiency and safety performance of DHDC against
related methods is also illustrated. The results of this work indicate the
importance of hierarchical distribution control approaches towards achieving
safe and scalable solutions for the control of VLMAS. A video with all results
is available in https://youtu.be/0QPyR4bD2q0 .
|
[
{
"created": "Tue, 30 May 2023 03:49:29 GMT",
"version": "v1"
}
] |
2023-05-31
|
[
[
"Saravanos",
"Augustinos D.",
""
],
[
"Li",
"Yihui",
""
],
[
"Theodorou",
"Evangelos A.",
""
]
] |
As the scale and complexity of multi-agent robotic systems are subject to a continuous increase, this paper considers a class of systems labeled as Very-Large-Scale Multi-Agent Systems (VLMAS) with dimensionality that can scale up to the order of millions of agents. In particular, we consider the problem of steering the state distributions of all agents of a VLMAS to prescribed target distributions while satisfying probabilistic safety guarantees. Based on the key assumption that such systems often admit a multi-level hierarchical clustered structure - where the agents are organized into cliques of different levels - we associate the control of such cliques with the control of distributions, and introduce the Distributed Hierarchical Distribution Control (DHDC) framework. The proposed approach consists of two sub-frameworks. The first one, Distributed Hierarchical Distribution Estimation (DHDE), is a bottom-up hierarchical decentralized algorithm which links the initial and target configurations of the cliques of all levels with suitable Gaussian distributions. The second part, Distributed Hierarchical Distribution Steering (DHDS), is a top-down hierarchical distributed method that steers the distributions of all cliques and agents from the initial to the target ones assigned by DHDE. Simulation results that scale up to two million agents demonstrate the effectiveness and scalability of the proposed framework. The increased computational efficiency and safety performance of DHDC against related methods is also illustrated. The results of this work indicate the importance of hierarchical distribution control approaches towards achieving safe and scalable solutions for the control of VLMAS. A video with all results is available in https://youtu.be/0QPyR4bD2q0 .
|
2005.12444
|
Yuchuan Gou
|
Yuchuan Gou, Qiancheng Wu, Minghao Li, Bo Gong, Mei Han
|
SegAttnGAN: Text to Image Generation with Segmentation Attention
|
Accepted to the AI for Content Creation Workshop at CVPR 2020
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel generative network (SegAttnGAN) that
utilizes additional segmentation information for the text-to-image synthesis
task. As the segmentation data introduced to the model provides useful guidance
on the generator training, the proposed model can generate images with better
realism quality and higher quantitative measures compared with previous
state-of-the-art methods. We achieved an Inception Score of 4.84 on the CUB
dataset and 3.52 on the Oxford-102 dataset. We also tested the self-attention
SegAttnGAN which uses generated segmentation data instead of masks from
datasets for attention and achieved similar high-quality results, suggesting
that our model can be adapted for the text-to-image synthesis task.
|
[
{
"created": "Mon, 25 May 2020 23:56:41 GMT",
"version": "v1"
}
] |
2020-05-27
|
[
[
"Gou",
"Yuchuan",
""
],
[
"Wu",
"Qiancheng",
""
],
[
"Li",
"Minghao",
""
],
[
"Gong",
"Bo",
""
],
[
"Han",
"Mei",
""
]
] |
In this paper, we propose a novel generative network (SegAttnGAN) that utilizes additional segmentation information for the text-to-image synthesis task. As the segmentation data introduced to the model provides useful guidance on the generator training, the proposed model can generate images with better realism quality and higher quantitative measures compared with previous state-of-the-art methods. We achieved an Inception Score of 4.84 on the CUB dataset and 3.52 on the Oxford-102 dataset. We also tested the self-attention SegAttnGAN which uses generated segmentation data instead of masks from datasets for attention and achieved similar high-quality results, suggesting that our model can be adapted for the text-to-image synthesis task.
|
1107.4414
|
Annapurna Sharma Ms
|
Annapurna Sharma, Amit Purwar, Young-Dong Lee, Young-Sook Lee, Wan-Young
Chung
|
Frequency based Classification of Activities using Accelerometer Data
|
IEEE International Conference on Multisensor Fusion and Integration
for Intelligent Systems, 2008. MFI 2008
| null |
10.1109/MFI.2008.4648056
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents the classification of user activities such as Rest, Walk
and Run on the basis of the frequency components present in the acceleration
data in a wireless sensor network environment. As the frequencies of the
above-mentioned activities differ slightly from person to person, this gives a
more accurate result. The algorithm uses just one parameter, i.e. the frequency
of the body acceleration data of the three axes, for classifying the activities
in a set of data. The algorithm includes a normalization step and hence there
is no need to set a different magnitude threshold for each test person. The
classification is automatic and done on a block-by-block basis.
|
[
{
"created": "Fri, 22 Jul 2011 04:41:13 GMT",
"version": "v1"
}
] |
2011-07-25
|
[
[
"Sharma",
"Annapurna",
""
],
[
"Purwar",
"Amit",
""
],
[
"Lee",
"Young-Dong",
""
],
[
"Lee",
"Young-Sook",
""
],
[
"Chung",
"Wan-Young",
""
]
] |
This work presents the classification of user activities such as Rest, Walk and Run on the basis of the frequency components present in the acceleration data in a wireless sensor network environment. As the frequencies of the above-mentioned activities differ slightly from person to person, this gives a more accurate result. The algorithm uses just one parameter, i.e. the frequency of the body acceleration data of the three axes, for classifying the activities in a set of data. The algorithm includes a normalization step and hence there is no need to set a different magnitude threshold for each test person. The classification is automatic and done on a block-by-block basis.
|
2303.16245
|
Xingfu Wu
|
Xingfu Wu, Prasanna Balaprakash, Michael Kruse, Jaehoon Koo, Brice
Videau, Paul Hovland, Valerie Taylor, Brad Geltz, Siddhartha Jana, and Mary
Hall
|
ytopt: Autotuning Scientific Applications for Energy Efficiency at Large
Scales
| null |
To be published in CUG 2023
| null | null |
cs.DC cs.LG cs.PF
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As we enter the exascale computing era, efficiently utilizing power and
optimizing the performance of scientific applications under power and energy
constraints has become critical and challenging. We propose a low-overhead
autotuning framework to autotune performance and energy for various hybrid
MPI/OpenMP scientific applications at large scales and to explore the tradeoffs
between application runtime and power/energy for energy efficient application
execution, then use this framework to autotune four ECP proxy applications --
XSBench, AMG, SWFFT, and SW4lite. Our approach uses Bayesian optimization with
a Random Forest surrogate model to effectively search parameter spaces with up
to 6 million different configurations on two large-scale production systems,
Theta at Argonne National Laboratory and Summit at Oak Ridge National
Laboratory. The experimental results show that our autotuning framework at
large scales has low overhead and achieves good scalability. Using the proposed
autotuning framework to identify the best configurations, we achieve up to
91.59% performance improvement, up to 21.2% energy savings, and up to 37.84%
EDP improvement on up to 4,096 nodes.
|
[
{
"created": "Tue, 28 Mar 2023 18:50:55 GMT",
"version": "v1"
}
] |
2023-03-30
|
[
[
"Wu",
"Xingfu",
""
],
[
"Balaprakash",
"Prasanna",
""
],
[
"Kruse",
"Michael",
""
],
[
"Koo",
"Jaehoon",
""
],
[
"Videau",
"Brice",
""
],
[
"Hovland",
"Paul",
""
],
[
"Taylor",
"Valerie",
""
],
[
"Geltz",
"Brad",
""
],
[
"Jana",
"Siddhartha",
""
],
[
"Hall",
"Mary",
""
]
] |
As we enter the exascale computing era, efficiently utilizing power and optimizing the performance of scientific applications under power and energy constraints has become critical and challenging. We propose a low-overhead autotuning framework to autotune performance and energy for various hybrid MPI/OpenMP scientific applications at large scales and to explore the tradeoffs between application runtime and power/energy for energy efficient application execution, then use this framework to autotune four ECP proxy applications -- XSBench, AMG, SWFFT, and SW4lite. Our approach uses Bayesian optimization with a Random Forest surrogate model to effectively search parameter spaces with up to 6 million different configurations on two large-scale production systems, Theta at Argonne National Laboratory and Summit at Oak Ridge National Laboratory. The experimental results show that our autotuning framework at large scales has low overhead and achieves good scalability. Using the proposed autotuning framework to identify the best configurations, we achieve up to 91.59% performance improvement, up to 21.2% energy savings, and up to 37.84% EDP improvement on up to 4,096 nodes.
|
2108.09671
|
Jiefeng Peng
|
Jiefeng Peng, Jiqi Zhang, Changlin Li, Guangrun Wang, Xiaodan Liang,
Liang Lin
|
Pi-NAS: Improving Neural Architecture Search by Reducing Supernet
Training Consistency Shift
|
Accepted to ICCV 2021
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently proposed neural architecture search (NAS) methods co-train billions
of architectures in a supernet and estimate their potential accuracy using the
network weights detached from the supernet. However, the ranking correlation
between the architectures' predicted accuracy and their actual capability is
incorrect, which causes the existing NAS methods' dilemma. We attribute this
ranking correlation problem to the supernet training consistency shift,
including feature shift and parameter shift. Feature shift is identified as
dynamic input distributions of a hidden layer due to random path sampling. The
input distribution dynamic affects the loss descent and finally affects
architecture ranking. Parameter shift is identified as contradictory parameter
updates for a shared layer lying in different paths in different training steps.
The rapidly-changing parameter could not preserve architecture ranking. We
address these two shifts simultaneously using a nontrivial supernet-Pi model,
called Pi-NAS. Specifically, we employ a supernet-Pi model that contains
cross-path learning to reduce the feature consistency shift between different
paths. Meanwhile, we adopt a novel nontrivial mean teacher containing negative
samples to overcome parameter shift and model collision. Furthermore, our
Pi-NAS runs in an unsupervised manner, which can search for more transferable
architectures. Extensive experiments on ImageNet and a wide range of downstream
tasks (e.g., COCO 2017, ADE20K, and Cityscapes) demonstrate the effectiveness
and universality of our Pi-NAS compared to supervised NAS. See Codes:
https://github.com/Ernie1/Pi-NAS.
|
[
{
"created": "Sun, 22 Aug 2021 09:08:48 GMT",
"version": "v1"
}
] |
2021-08-24
|
[
[
"Peng",
"Jiefeng",
""
],
[
"Zhang",
"Jiqi",
""
],
[
"Li",
"Changlin",
""
],
[
"Wang",
"Guangrun",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Lin",
"Liang",
""
]
] |
Recently proposed neural architecture search (NAS) methods co-train billions of architectures in a supernet and estimate their potential accuracy using the network weights detached from the supernet. However, the ranking correlation between the architectures' predicted accuracy and their actual capability is incorrect, which causes the existing NAS methods' dilemma. We attribute this ranking correlation problem to the supernet training consistency shift, including feature shift and parameter shift. Feature shift is identified as dynamic input distributions of a hidden layer due to random path sampling. The input distribution dynamic affects the loss descent and finally affects architecture ranking. Parameter shift is identified as contradictory parameter updates for a shared layer lying in different paths in different training steps. The rapidly-changing parameter could not preserve architecture ranking. We address these two shifts simultaneously using a nontrivial supernet-Pi model, called Pi-NAS. Specifically, we employ a supernet-Pi model that contains cross-path learning to reduce the feature consistency shift between different paths. Meanwhile, we adopt a novel nontrivial mean teacher containing negative samples to overcome parameter shift and model collision. Furthermore, our Pi-NAS runs in an unsupervised manner, which can search for more transferable architectures. Extensive experiments on ImageNet and a wide range of downstream tasks (e.g., COCO 2017, ADE20K, and Cityscapes) demonstrate the effectiveness and universality of our Pi-NAS compared to supervised NAS. See Codes: https://github.com/Ernie1/Pi-NAS.
|
1803.06539
|
Claudio Qureshi
|
Claudio Qureshi and Daniel Panario
|
The Graph Structure of Chebyshev Polynomials over Finite Fields and
Applications
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We completely describe the functional graph associated to iterations of
Chebyshev polynomials over finite fields. Then, we use our structural results
to obtain estimates for the average rho length, average number of connected
components and the expected value for the period and preperiod of iterating
Chebyshev polynomials.
|
[
{
"created": "Sat, 17 Mar 2018 16:59:58 GMT",
"version": "v1"
}
] |
2018-03-20
|
[
[
"Qureshi",
"Claudio",
""
],
[
"Panario",
"Daniel",
""
]
] |
We completely describe the functional graph associated to iterations of Chebyshev polynomials over finite fields. Then, we use our structural results to obtain estimates for the average rho length, average number of connected components and the expected value for the period and preperiod of iterating Chebyshev polynomials.
|
2307.16171
|
Sang-Hoon Lee
|
Sang-Hoon Lee, Ha-Yeong Choi, Hyung-Seok Oh, Seong-Whan Lee
|
HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer
|
INTERSPEECH 2023 (Oral)
| null | null | null |
cs.SD cs.AI cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite rapid progress in the voice style transfer (VST) field, recent
zero-shot VST systems still lack the ability to transfer the voice style of a
novel speaker. In this paper, we present HierVST, a hierarchical adaptive
end-to-end zero-shot VST model. Without any text transcripts, we only use the
speech dataset to train the model by utilizing hierarchical variational
inference and self-supervised representation. In addition, we adopt a
hierarchical adaptive generator that generates the pitch representation and
waveform audio sequentially. Moreover, we utilize unconditional generation to
improve the speaker-relative acoustic capacity in the acoustic representation.
With a hierarchical adaptive structure, the model can adapt to a novel voice
style and convert speech progressively. The experimental results demonstrate
that our method outperforms other VST models in zero-shot VST scenarios. Audio
samples are available at \url{https://hiervst.github.io/}.
|
[
{
"created": "Sun, 30 Jul 2023 08:49:55 GMT",
"version": "v1"
}
] |
2023-08-01
|
[
[
"Lee",
"Sang-Hoon",
""
],
[
"Choi",
"Ha-Yeong",
""
],
[
"Oh",
"Hyung-Seok",
""
],
[
"Lee",
"Seong-Whan",
""
]
] |
Despite rapid progress in the voice style transfer (VST) field, recent zero-shot VST systems still lack the ability to transfer the voice style of a novel speaker. In this paper, we present HierVST, a hierarchical adaptive end-to-end zero-shot VST model. Without any text transcripts, we only use the speech dataset to train the model by utilizing hierarchical variational inference and self-supervised representation. In addition, we adopt a hierarchical adaptive generator that generates the pitch representation and waveform audio sequentially. Moreover, we utilize unconditional generation to improve the speaker-relative acoustic capacity in the acoustic representation. With a hierarchical adaptive structure, the model can adapt to a novel voice style and convert speech progressively. The experimental results demonstrate that our method outperforms other VST models in zero-shot VST scenarios. Audio samples are available at \url{https://hiervst.github.io/}.
|
0707.2293
|
Maziar Nekovee
|
Maziar Nekovee
|
Worm Epidemics in Wireless Adhoc Networks
| null |
Published in New J. Phys. 9 189, 2007
|
10.1088/1367-2630/9/6/189
| null |
cs.NI cond-mat.stat-mech cs.CR physics.soc-ph
| null |
A dramatic increase in the number of computing devices with wireless
communication capability has resulted in the emergence of a new class of
computer worms which specifically target such devices. The most striking
feature of these worms is that they do not require Internet connectivity for
their propagation but can spread directly from device to device using a
short-range radio communication technology, such as WiFi or Bluetooth. In this
paper, we develop a new model for epidemic spreading of these worms and
investigate their spreading in wireless ad hoc networks via extensive Monte
Carlo simulations. Our studies show that the threshold behaviour and dynamics
of worm epidemics in these networks are greatly affected by a combination of
spatial and temporal correlations which characterize these networks, and are
significantly different from the previously studied epidemics in the Internet.
|
[
{
"created": "Mon, 16 Jul 2007 09:58:18 GMT",
"version": "v1"
}
] |
2008-07-10
|
[
[
"Nekovee",
"Maziar",
""
]
] |
A dramatic increase in the number of computing devices with wireless communication capability has resulted in the emergence of a new class of computer worms which specifically target such devices. The most striking feature of these worms is that they do not require Internet connectivity for their propagation but can spread directly from device to device using a short-range radio communication technology, such as WiFi or Bluetooth. In this paper, we develop a new model for epidemic spreading of these worms and investigate their spreading in wireless ad hoc networks via extensive Monte Carlo simulations. Our studies show that the threshold behaviour and dynamics of worm epidemics in these networks are greatly affected by a combination of spatial and temporal correlations which characterize these networks, and are significantly different from the previously studied epidemics in the Internet.
|
2310.18338
|
Subhabrata Dutta
|
Gurusha Juneja, Subhabrata Dutta, Soumen Chakrabarti, Sunny Manchanda,
Tanmoy Chakraborty
|
Small Language Models Fine-tuned to Coordinate Larger Language Models
improve Complex Reasoning
|
EMNLP 2023 (Typos corrected)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT)
exhibit impressive reasoning capabilities. Recent attempts at prompt
decomposition toward solving complex, multi-step reasoning problems depend on
the ability of the LLM to simultaneously decompose and solve the problem. A
significant disadvantage is that foundational LLMs are typically not available
for fine-tuning, making adaptation computationally prohibitive. We believe (and
demonstrate) that problem decomposition and solution generation are distinct
capabilities, better addressed in separate modules than by one monolithic LLM.
We introduce DaSLaM, which uses a decomposition generator to decompose complex
problems into subproblems that require fewer reasoning steps. These subproblems
are answered by a solver. We use a relatively small (13B parameters) LM as the
decomposition generator, which we train using policy gradient optimization to
interact with a solver LM (regarded as black-box) and guide it through
subproblems, thereby rendering our method solver-agnostic. Evaluation on
multiple different reasoning datasets reveals that with our method, a 175
billion parameter LM (text-davinci-003) can produce competitive or even better
performance, compared to its orders-of-magnitude larger successor, GPT-4.
Additionally, we show that DaSLaM is not limited by the solver's capabilities
as a function of scale; e.g., solver LMs with diverse sizes give significant
performance improvement with our solver-agnostic decomposition technique.
Exhaustive ablation studies evince the superiority of our modular finetuning
technique over exorbitantly large decomposer LLMs, based on prompting alone.
|
[
{
"created": "Sat, 21 Oct 2023 15:23:20 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 13:24:06 GMT",
"version": "v2"
}
] |
2024-02-28
|
[
[
"Juneja",
"Gurusha",
""
],
[
"Dutta",
"Subhabrata",
""
],
[
"Chakrabarti",
"Soumen",
""
],
[
"Manchanda",
"Sunny",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend on the ability of the LLM to simultaneously decompose and solve the problem. A significant disadvantage is that foundational LLMs are typically not available for fine-tuning, making adaptation computationally prohibitive. We believe (and demonstrate) that problem decomposition and solution generation are distinct capabilities, better addressed in separate modules than by one monolithic LLM. We introduce DaSLaM, which uses a decomposition generator to decompose complex problems into subproblems that require fewer reasoning steps. These subproblems are answered by a solver. We use a relatively small (13B parameters) LM as the decomposition generator, which we train using policy gradient optimization to interact with a solver LM (regarded as black-box) and guide it through subproblems, thereby rendering our method solver-agnostic. Evaluation on multiple different reasoning datasets reveals that with our method, a 175 billion parameter LM (text-davinci-003) can produce competitive or even better performance, compared to its orders-of-magnitude larger successor, GPT-4. Additionally, we show that DaSLaM is not limited by the solver's capabilities as a function of scale; e.g., solver LMs with diverse sizes give significant performance improvement with our solver-agnostic decomposition technique. Exhaustive ablation studies evince the superiority of our modular finetuning technique over exorbitantly large decomposer LLMs, based on prompting alone.
|
2210.00145
|
Andrea Araldo
|
Rosario Patan\`e, Andrea Araldo, Tijani Chahed, Diego Kiedanski,
Daniel Kofman
|
Coalitional Game-Theoretical Approach to Coinvestment with Application
to Edge Computing
| null |
IEEE CCNC 2023
| null | null |
cs.GT cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose in this paper a coinvestment plan between several stakeholders of
different types, namely a physical network owner, operating network nodes, e.g.
a network operator or a tower company, and a set of service providers willing
to use these resources to provide services such as video streaming, augmented
reality, autonomous driving assistance, etc. One such scenario is that of
deployment of Edge Computing resources.
Indeed, although the latter technology is ready, the high Capital Expenditure
(CAPEX) cost of such resources is the barrier to its deployment. For this
reason, a solid economic framework to guide the investment and the returns of
the stakeholders is key to solving this issue. We formalize the coinvestment
framework using coalitional game theory. We provide a solution to calculate how
to divide the profits and costs among the stakeholders, taking into account
their characteristics: traffic load, revenues, utility function. We prove that
it is always possible to form the grand coalition composed of all the
stakeholders, by showing that our game is convex. We derive the payoff of the
stakeholders using the Shapley value concept, and elaborate on some properties
of our game. We show our solution in simulation.
|
[
{
"created": "Fri, 30 Sep 2022 23:58:19 GMT",
"version": "v1"
}
] |
2022-10-04
|
[
[
"Patanè",
"Rosario",
""
],
[
"Araldo",
"Andrea",
""
],
[
"Chahed",
"Tijani",
""
],
[
"Kiedanski",
"Diego",
""
],
[
"Kofman",
"Daniel",
""
]
] |
We propose in this paper a coinvestment plan between several stakeholders of different types, namely a physical network owner, operating network nodes, e.g. a network operator or a tower company, and a set of service providers willing to use these resources to provide services such as video streaming, augmented reality, autonomous driving assistance, etc. One such scenario is that of deployment of Edge Computing resources. Indeed, although the latter technology is ready, the high Capital Expenditure (CAPEX) cost of such resources is the barrier to its deployment. For this reason, a solid economic framework to guide the investment and the returns of the stakeholders is key to solving this issue. We formalize the coinvestment framework using coalitional game theory. We provide a solution to calculate how to divide the profits and costs among the stakeholders, taking into account their characteristics: traffic load, revenues, utility function. We prove that it is always possible to form the grand coalition composed of all the stakeholders, by showing that our game is convex. We derive the payoff of the stakeholders using the Shapley value concept, and elaborate on some properties of our game. We show our solution in simulation.
|
2302.10184
|
Zhongzhan Huang
|
Zhongzhan Huang, Mingfu Liang and Liang Lin
|
On Robust Numerical Solver for ODE via Self-Attention Mechanism
|
Work in progress. Technical report
| null | null | null |
cs.LG cs.AI cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of deep learning techniques, AI-enhanced numerical
solvers are expected to become a new paradigm for solving differential
equations due to their versatility and effectiveness in alleviating the
accuracy-speed trade-off in traditional numerical solvers. However, this
paradigm still inevitably requires a large amount of high-quality data, whose
acquisition is often very expensive in natural science and engineering
problems. Therefore, in this paper, we explore training efficient and robust
AI-enhanced numerical solvers with a small data size by mitigating intrinsic
noise disturbances. We first analyze the ability of the self-attention
mechanism to regulate noise in supervised learning and then propose a
simple-yet-effective numerical solver, AttSolver, which introduces an additive
self-attention mechanism to the numerical solution of differential equations
based on the dynamical system perspective of the residual neural network. Our
results on benchmarks, ranging from high-dimensional problems to chaotic
systems, demonstrate the effectiveness of AttSolver in generally improving the
performance of existing traditional numerical solvers without any elaborated
model crafting. Finally, we analyze the convergence, generalization, and
robustness of the proposed method experimentally and theoretically.
|
[
{
"created": "Sun, 5 Feb 2023 01:39:21 GMT",
"version": "v1"
}
] |
2023-02-22
|
[
[
"Huang",
"Zhongzhan",
""
],
[
"Liang",
"Mingfu",
""
],
[
"Lin",
"Liang",
""
]
] |
With the development of deep learning techniques, AI-enhanced numerical solvers are expected to become a new paradigm for solving differential equations due to their versatility and effectiveness in alleviating the accuracy-speed trade-off in traditional numerical solvers. However, this paradigm still inevitably requires a large amount of high-quality data, whose acquisition is often very expensive in natural science and engineering problems. Therefore, in this paper, we explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances. We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, AttSolver, which introduces an additive self-attention mechanism to the numerical solution of differential equations based on the dynamical system perspective of the residual neural network. Our results on benchmarks, ranging from high-dimensional problems to chaotic systems, demonstrate the effectiveness of AttSolver in generally improving the performance of existing traditional numerical solvers without any elaborated model crafting. Finally, we analyze the convergence, generalization, and robustness of the proposed method experimentally and theoretically.
|
1811.10201
|
AnChieh Cheng
|
An-Chieh Cheng, Chieh Hubert Lin, Da-Cheng Juan, Wei Wei, Min Sun
|
InstaNAS: Instance-aware Neural Architecture Search
| null | null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional Neural Architecture Search (NAS) aims at finding a single
architecture that achieves the best performance, which usually optimizes task
related learning objectives such as accuracy. However, a single architecture
may not be representative enough for the whole dataset with high diversity and
variety. Intuitively, electing domain-expert architectures that are proficient
in domain-specific features can further benefit architecture related objectives
such as latency. In this paper, we propose InstaNAS---an instance-aware NAS
framework---that employs a controller trained to search for a "distribution of
architectures" instead of a single architecture; this allows the model to use
sophisticated architectures for the difficult samples, which usually come with
large architecture related costs, and shallow architectures for those easy
samples. During the inference phase, the controller assigns each of the unseen
input samples with a domain expert architecture that can achieve high accuracy
with customized inference costs. Experiments within a search space inspired by
MobileNetV2 show InstaNAS can achieve up to 48.8% latency reduction without
compromising accuracy on a series of datasets against MobileNetV2.
|
[
{
"created": "Mon, 26 Nov 2018 06:29:39 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jan 2019 14:12:40 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2019 09:25:04 GMT",
"version": "v3"
}
] |
2019-05-24
|
[
[
"Cheng",
"An-Chieh",
""
],
[
"Lin",
"Chieh Hubert",
""
],
[
"Juan",
"Da-Cheng",
""
],
[
"Wei",
"Wei",
""
],
[
"Sun",
"Min",
""
]
] |
Conventional Neural Architecture Search (NAS) aims at finding a single architecture that achieves the best performance, which usually optimizes task related learning objectives such as accuracy. However, a single architecture may not be representative enough for the whole dataset with high diversity and variety. Intuitively, electing domain-expert architectures that are proficient in domain-specific features can further benefit architecture related objectives such as latency. In this paper, we propose InstaNAS---an instance-aware NAS framework---that employs a controller trained to search for a "distribution of architectures" instead of a single architecture; this allows the model to use sophisticated architectures for the difficult samples, which usually come with large architecture related costs, and shallow architectures for those easy samples. During the inference phase, the controller assigns each of the unseen input samples with a domain expert architecture that can achieve high accuracy with customized inference costs. Experiments within a search space inspired by MobileNetV2 show InstaNAS can achieve up to 48.8% latency reduction without compromising accuracy on a series of datasets against MobileNetV2.
|
2405.07666
|
Andr\'e Chailloux
|
Andr\'e Chailloux and Thomas Debris-Alazard
|
New Solutions to Delsarte's Dual Linear Programs
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding the maximum size of a code with a given minimum distance is a
major question in computer science and discrete mathematics. The most fruitful
approach for finding asymptotic bounds on such codes is by using Delsarte's
theory of association schemes. With this approach, Delsarte constructs a linear
program such that its maximum value is an upper bound on the maximum size of a
code with a given minimum distance. Bounding this value can be done by finding
solutions to the corresponding dual linear program. Delsarte's theory is very
general and goes way beyond binary codes. In this work, we provide universal
bounds in the framework of association schemes that generalize the
Elias-Bassalygo bound, which can be applied to any association scheme
constructed from a distance function. These bounds are obtained by constructing
new solutions to Delsarte's dual linear program. We instantiate these results
and we recover known bounds for $q$-ary codes and for constant-weight binary
codes. Our other contribution is to recover, for essentially any $Q$-polynomial
scheme, MRRW-type solutions to Delsarte's dual linear program which are
inspired by the Laplacian approach of Friedman and Tillich instead of using the
Christoffel-Darboux formulas. We show in particular how the second linear
programming bound can be interpreted in this framework.
|
[
{
"created": "Mon, 13 May 2024 11:48:16 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 13:45:05 GMT",
"version": "v2"
}
] |
2024-05-28
|
[
[
"Chailloux",
"André",
""
],
[
"Debris-Alazard",
"Thomas",
""
]
] |
Understanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is by using Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program such that its maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and goes way beyond binary codes. In this work, we provide universal bounds in the framework of association schemes that generalize the Elias-Bassalygo bound, which can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results and we recover known bounds for $q$-ary codes and for constant-weight binary codes. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program which are inspired by the Laplacian approach of Friedman and Tillich instead of using the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.
|
2205.06770
|
Otavio Carpinteiro
|
Alfredo J. P. Barbosa, Edmilson M. Moreira, Carlos H. V. Moraes,
Ot\'avio A. S. Carpinteiro
|
A heuristic to determine the initial gravitational constant of the GSA
|
27 pages, 2 figures, 8 tables
| null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The Gravitational Search Algorithm (GSA) is an optimization algorithm based
on Newton's laws of gravity and dynamics. Introduced in 2009, the GSA already
has several versions and applications. However, its performance depends on the
values of its parameters, which are determined empirically. Hence, its
generality is compromised, because the parameters that are suitable for a
particular application are not necessarily suitable for another. This paper
proposes the Gravitational Search Algorithm with Normalized Gravitational
Constant (GSA-NGC), which defines a new heuristic to determine the initial
gravitational constant of the GSA. The new heuristic is grounded in the
Brans-Dicke theory of gravitation and takes into consideration the multiple
dimensions of the search space of the application. It aims to improve the final
solution and reduce the number of iterations and premature convergences of the
GSA. The GSA-NGC is validated experimentally, proving to be suitable for
various applications and improving significantly the generality, performance,
and efficiency of the GSA.
|
[
{
"created": "Thu, 21 Apr 2022 21:38:13 GMT",
"version": "v1"
}
] |
2022-05-16
|
[
[
"Barbosa",
"Alfredo J. P.",
""
],
[
"Moreira",
"Edmilson M.",
""
],
[
"Moraes",
"Carlos H. V.",
""
],
[
"Carpinteiro",
"Otávio A. S.",
""
]
] |
The Gravitational Search Algorithm (GSA) is an optimization algorithm based on Newton's laws of gravity and dynamics. Introduced in 2009, the GSA already has several versions and applications. However, its performance depends on the values of its parameters, which are determined empirically. Hence, its generality is compromised, because the parameters that are suitable for a particular application are not necessarily suitable for another. This paper proposes the Gravitational Search Algorithm with Normalized Gravitational Constant (GSA-NGC), which defines a new heuristic to determine the initial gravitational constant of the GSA. The new heuristic is grounded in the Brans-Dicke theory of gravitation and takes into consideration the multiple dimensions of the search space of the application. It aims to improve the final solution and reduce the number of iterations and premature convergences of the GSA. The GSA-NGC is validated experimentally, proving to be suitable for various applications and improving significantly the generality, performance, and efficiency of the GSA.
|
2103.13020
|
Yue Yu
|
Chen Zeng, Yue Yu, Shanshan Li, Xin Xia, Zhiming Wang, Mingyang Geng,
Bailin Xiao, Wei Dong, Xiangke Liao
|
deGraphCS: Embedding Variable-based Flow Graph for Neural Code Search
|
32 pages
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid increase in the amount of public code repositories, developers
maintain a great desire to retrieve precise code snippets by using natural
language. Although existing deep learning based approaches (e.g., DeepCS and
MMAN) have provided end-to-end solutions (i.e., accepting natural language as
queries and showing related code fragments retrieved directly from code corpus),
the accuracy of code search in the large-scale repositories is still limited by
the code representation (e.g., AST) and modeling (e.g., directly fusing the
features in the attention stage). In this paper, we propose a novel learnable
deep Graph for Code Search (called deGraphCS), to transfer source code into
variable-based flow graphs based on the intermediate representation technique,
which can model code semantics more precisely compared to processing the code
as text directly or using the syntactic tree representation. Furthermore, we propose
a well-designed graph optimization mechanism to refine the code representation,
and apply an improved gated graph neural network to model variable-based flow
graphs. To evaluate the effectiveness of deGraphCS, we collect a large-scale
dataset from GitHub containing 41,152 code snippets written in C language, and
reproduce several typical deep code search methods for comparison. Besides, we
design a qualitative user study to verify the practical value of our approach.
The experimental results have shown that deGraphCS can achieve state-of-the-art
performances, and accurately retrieve code snippets satisfying the needs of the
users.
|
[
{
"created": "Wed, 24 Mar 2021 06:57:44 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Sep 2021 15:12:49 GMT",
"version": "v2"
},
{
"created": "Sat, 16 Oct 2021 01:49:18 GMT",
"version": "v3"
}
] |
2021-10-19
|
[
[
"Zeng",
"Chen",
""
],
[
"Yu",
"Yue",
""
],
[
"Li",
"Shanshan",
""
],
[
"Xia",
"Xin",
""
],
[
"Wang",
"Zhiming",
""
],
[
"Geng",
"Mingyang",
""
],
[
"Xiao",
"Bailin",
""
],
[
"Dong",
"Wei",
""
],
[
"Liao",
"Xiangke",
""
]
] |
With the rapid increase in the amount of public code repositories, developers maintain a great desire to retrieve precise code snippets by using natural language. Although existing deep learning based approaches (e.g., DeepCS and MMAN) have provided end-to-end solutions (i.e., accepting natural language as queries and showing related code fragments retrieved directly from code corpus), the accuracy of code search in the large-scale repositories is still limited by the code representation (e.g., AST) and modeling (e.g., directly fusing the features in the attention stage). In this paper, we propose a novel learnable deep Graph for Code Search (called deGraphCS), to transfer source code into variable-based flow graphs based on the intermediate representation technique, which can model code semantics more precisely compared to processing the code as text directly or using the syntactic tree representation. Furthermore, we propose a well-designed graph optimization mechanism to refine the code representation, and apply an improved gated graph neural network to model variable-based flow graphs. To evaluate the effectiveness of deGraphCS, we collect a large-scale dataset from GitHub containing 41,152 code snippets written in C language, and reproduce several typical deep code search methods for comparison. Besides, we design a qualitative user study to verify the practical value of our approach. The experimental results have shown that deGraphCS can achieve state-of-the-art performances, and accurately retrieve code snippets satisfying the needs of the users.
|
2005.06070
|
Ali H\"urriyeto\u{g}lu
|
Ali H\"urriyeto\u{g}lu, Vanni Zavarella, Hristo Tanev, Erdem
Y\"or\"uk, Ali Safaya, Osman Mutlu
|
Automated Extraction of Socio-political Events from News (AESPEN):
Workshop and Shared Task Report
| null | null | null | null |
cs.CL cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe our effort on automated extraction of socio-political events from
news in the scope of a workshop and a shared task we organized at Language
Resources and Evaluation Conference (LREC 2020). We believe the event
extraction studies in computational linguistics and social and political
sciences should further support each other in order to enable large scale
socio-political event information collection across sources, countries, and
languages. The event consists of two tracks: regular research papers and a
shared task on event sentence coreference identification (ESCI). All
submissions were reviewed by five members of the program committee. The
workshop attracted research papers related to evaluation of machine learning
methodologies, language resources, material conflict forecasting, and a shared
task participation report in the scope of socio-political event information
collection. It has shown us the volume and variety of both the data sources and
event information collection approaches related to socio-political events and
the need to fill the gap between automated text processing techniques and
requirements of social and political sciences.
|
[
{
"created": "Tue, 12 May 2020 22:07:14 GMT",
"version": "v1"
}
] |
2020-05-14
|
[
[
"Hürriyetoğlu",
"Ali",
""
],
[
"Zavarella",
"Vanni",
""
],
[
"Tanev",
"Hristo",
""
],
[
"Yörük",
"Erdem",
""
],
[
"Safaya",
"Ali",
""
],
[
"Mutlu",
"Osman",
""
]
] |
We describe our effort on automated extraction of socio-political events from news in the scope of a workshop and a shared task we organized at Language Resources and Evaluation Conference (LREC 2020). We believe the event extraction studies in computational linguistics and social and political sciences should further support each other in order to enable large scale socio-political event information collection across sources, countries, and languages. The event consists of two tracks: regular research papers and a shared task on event sentence coreference identification (ESCI). All submissions were reviewed by five members of the program committee. The workshop attracted research papers related to evaluation of machine learning methodologies, language resources, material conflict forecasting, and a shared task participation report in the scope of socio-political event information collection. It has shown us the volume and variety of both the data sources and event information collection approaches related to socio-political events and the need to fill the gap between automated text processing techniques and requirements of social and political sciences.
|
2103.03206
|
Andrew Jaegle
|
Andrew Jaegle and Felix Gimeno and Andrew Brock and Andrew Zisserman
and Oriol Vinyals and Joao Carreira
|
Perceiver: General Perception with Iterative Attention
|
ICML 2021
| null | null | null |
cs.CV cs.AI cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biological systems perceive the world by simultaneously processing
high-dimensional inputs from modalities as diverse as vision, audition, touch,
proprioception, etc. The perception models used in deep learning on the other
hand are designed for individual modalities, often relying on domain-specific
assumptions such as the local grid structures exploited by virtually all
existing vision models. These priors introduce helpful inductive biases, but
also lock models to individual modalities. In this paper we introduce the
Perceiver - a model that builds upon Transformers and hence makes few
architectural assumptions about the relationship between its inputs, but that
also scales to hundreds of thousands of inputs, like ConvNets. The model
leverages an asymmetric attention mechanism to iteratively distill inputs into
a tight latent bottleneck, allowing it to scale to handle very large inputs. We
show that this architecture is competitive with or outperforms strong,
specialized models on classification tasks across various modalities: images,
point clouds, audio, video, and video+audio. The Perceiver obtains performance
comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly
attending to 50,000 pixels. It is also competitive in all modalities in
AudioSet.
|
[
{
"created": "Thu, 4 Mar 2021 18:20:50 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jun 2021 00:25:31 GMT",
"version": "v2"
}
] |
2021-06-24
|
[
[
"Jaegle",
"Andrew",
""
],
[
"Gimeno",
"Felix",
""
],
[
"Brock",
"Andrew",
""
],
[
"Zisserman",
"Andrew",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Carreira",
"Joao",
""
]
] |
Biological systems perceive the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture is competitive with or outperforms strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video, and video+audio. The Perceiver obtains performance comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly attending to 50,000 pixels. It is also competitive in all modalities in AudioSet.
|
2111.07765
|
Jobst Landgrebe
|
Jobst Landgrebe, Barry Smith
|
An argument for the impossibility of machine intelligence
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Since the noun phrase `artificial intelligence' (AI) was coined, it has been
debated whether humans are able to create intelligence using technology. We
shed new light on this question from the point of view of thermodynamics and
mathematics. First, we define what it is to be an agent (device) that could be
the bearer of AI. Then we show that the mainstream definitions of
`intelligence' proposed by Hutter and others and still accepted by the AI
community are too weak even to capture what is involved when we ascribe
intelligence to an insect. We then summarise the highly useful definition of
basic (arthropod) intelligence proposed by Rodney Brooks, and we identify the
properties that an AI agent would need to possess in order to be the bearer of
intelligence by this definition. Finally, we show that, from the perspective of
the disciplines needed to create such an agent, namely mathematics and physics,
these properties are realisable by neither implicit nor explicit mathematical
design nor by setting up an environment in which an AI could evolve
spontaneously.
|
[
{
"created": "Wed, 20 Oct 2021 08:54:48 GMT",
"version": "v1"
}
] |
2021-11-16
|
[
[
"Landgrebe",
"Jobst",
""
],
[
"Smith",
"Barry",
""
]
] |
Since the noun phrase `artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of `intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even to capture what is involved when we ascribe intelligence to an insect. We then summarise the highly useful definition of basic (arthropod) intelligence proposed by Rodney Brooks, and we identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition. Finally, we show that, from the perspective of the disciplines needed to create such an agent, namely mathematics and physics, these properties are realisable by neither implicit nor explicit mathematical design nor by setting up an environment in which an AI could evolve spontaneously.
|
1903.00922
|
Benedikt Ahrens
|
Benedikt Ahrens, Andr\'e Hirschowitz, Ambroise Lafont, Marco Maggesi
|
Modular specification of monads through higher-order presentations
|
17 pages
|
Formal Structures for Computation and Deduction (FSCD) 2019,
LIPIcs Vol. 131, pp. 6:1-6:19
|
10.4230/LIPIcs.FSCD.2019.6
| null |
cs.LO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In their work on second-order equational logic, Fiore and Hur have studied
presentations of simply typed languages by generating binding constructions and
equations among them. To each pair consisting of a binding signature and a set
of equations, they associate a category of `models', and they give a monadicity
result which implies that this category has an initial object, which is the
language presented by the pair.
In the present work, we propose, for the untyped setting, a variant of their
approach where monads and modules over them are the central notions. More
precisely, we study, for monads over sets, presentations by generating
(`higher-order') operations and equations among them. We consider a notion of
2-signature which allows to specify a monad with a family of binding operations
subject to a family of equations, as is the case for the paradigmatic example
of the lambda calculus, specified by its two standard constructions
(application and abstraction) subject to $\beta$- and $\eta$-equalities. Such a
2-signature is hence a pair $(\Sigma,E)$ of a binding signature $\Sigma$ and a
family $E$ of equations for $\Sigma$. This notion of 2-signature has been
introduced earlier by Ahrens in a slightly different context.
We associate, to each 2-signature $(\Sigma,E)$, a category of `models of
$(\Sigma,E)$'; and we say that a 2-signature is `effective' if this category has
an initial object; the monad underlying this (essentially unique) object is the
`monad specified by the 2-signature'. Not every 2-signature is effective; we
identify a class of 2-signatures, which we call `algebraic', that are
effective.
Importantly, our 2-signatures together with their models enjoy `modularity':
when we glue (algebraic) 2-signatures together, their initial models are glued
accordingly.
We provide a computer formalization for our main results.
|
[
{
"created": "Sun, 3 Mar 2019 15:00:36 GMT",
"version": "v1"
}
] |
2019-07-16
|
[
[
"Ahrens",
"Benedikt",
""
],
[
"Hirschowitz",
"André",
""
],
[
"Lafont",
"Ambroise",
""
],
[
"Maggesi",
"Marco",
""
]
] |
In their work on second-order equational logic, Fiore and Hur have studied presentations of simply typed languages by generating binding constructions and equations among them. To each pair consisting of a binding signature and a set of equations, they associate a category of `models', and they give a monadicity result which implies that this category has an initial object, which is the language presented by the pair. In the present work, we propose, for the untyped setting, a variant of their approach where monads and modules over them are the central notions. More precisely, we study, for monads over sets, presentations by generating (`higher-order') operations and equations among them. We consider a notion of 2-signature which allows to specify a monad with a family of binding operations subject to a family of equations, as is the case for the paradigmatic example of the lambda calculus, specified by its two standard constructions (application and abstraction) subject to $\beta$- and $\eta$-equalities. Such a 2-signature is hence a pair $(\Sigma,E)$ of a binding signature $\Sigma$ and a family $E$ of equations for $\Sigma$. This notion of 2-signature has been introduced earlier by Ahrens in a slightly different context. We associate, to each 2-signature $(\Sigma,E)$, a category of `models of $(\Sigma,E)$'; and we say that a 2-signature is `effective' if this category has an initial object; the monad underlying this (essentially unique) object is the `monad specified by the 2-signature'. Not every 2-signature is effective; we identify a class of 2-signatures, which we call `algebraic', that are effective. Importantly, our 2-signatures together with their models enjoy `modularity': when we glue (algebraic) 2-signatures together, their initial models are glued accordingly. We provide a computer formalization for our main results.
|
2310.17193
|
Ryota Tanaka
|
Ryota Tanaka, Tomohiro Suzuki, Kazuya Takeda, Keisuke Fujii
|
Automatic Edge Error Judgment in Figure Skating Using 3D Pose Estimation
from a Monocular Camera and IMUs
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic evaluation systems are a fundamental issue in sports technology.
In many sports, such as figure skating, automated evaluation methods based on
pose estimation have been proposed. However, previous studies have evaluated
skaters' skills only in 2D. In this paper, we propose an automatic edge
error judgment system with a monocular smartphone camera and inertial sensors,
which enables us to analyze 3D motions. Edge error is one of the most
significant scoring items and is challenging to judge automatically due to its
3D motion. The results show that the model using 3D joint position coordinates
estimated from the monocular camera as the input feature had the highest
accuracy, at 83%, for unknown skaters' data. We also conducted a detailed motion
analysis for edge error judgment. These results indicate that the monocular
camera can be used to judge edge errors automatically. We will provide the
figure skating single Lutz jump dataset, including pre-processed videos and
labels, at https://github.com/ryota-takedalab/JudgeAI-LutzEdge.
|
[
{
"created": "Thu, 26 Oct 2023 07:15:40 GMT",
"version": "v1"
}
] |
2023-10-27
|
[
[
"Tanaka",
"Ryota",
""
],
[
"Suzuki",
"Tomohiro",
""
],
[
"Takeda",
"Kazuya",
""
],
[
"Fujii",
"Keisuke",
""
]
] |
Automatic evaluation systems are a fundamental issue in sports technology. In many sports, such as figure skating, automated evaluation methods based on pose estimation have been proposed. However, previous studies have evaluated skaters' skills only in 2D. In this paper, we propose an automatic edge error judgment system with a monocular smartphone camera and inertial sensors, which enables us to analyze 3D motions. Edge error is one of the most significant scoring items and is challenging to judge automatically due to its 3D motion. The results show that the model using 3D joint position coordinates estimated from the monocular camera as the input feature had the highest accuracy, at 83%, for unknown skaters' data. We also conducted a detailed motion analysis for edge error judgment. These results indicate that the monocular camera can be used to judge edge errors automatically. We will provide the figure skating single Lutz jump dataset, including pre-processed videos and labels, at https://github.com/ryota-takedalab/JudgeAI-LutzEdge.
|
1607.02133
|
Fan Yang
|
Fan Yang and Andrew A. Chien
|
Extreme Scaling of Supercomputing with Stranded Power: Costs and
Capabilities
|
12 pages, 22 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Power consumption (supply, heat, cost) and associated carbon emissions
(environmental impact) are increasingly critical challenges in scaling
supercomputing to Exascale and beyond. We propose to exploit stranded power,
renewable energy that has no value to the power grid, for scaling
supercomputers with the Zero-Carbon Cloud (ZCCloud), showing that stranded power
can be employed effectively to expand computing [1]. We build on those results
with a new analysis of stranded power, characterizing temporal, geographic, and
interval properties. We simulate production supercomputing workloads and model
datacenter total-cost-of-ownership (TCO), assessing the costs and capabilities
of stranded-power based supercomputing. Results show that the ZCCloud approach
is cost-effective today in regions with high cost power. The ZCCloud approach
reduces TCO by 21-45%, and improves cost-effectiveness up to 34%. We study many
scenarios. With higher power price, cheaper computing hardware and higher
system power density, benefits rise to 55%, 97% and 116% respectively. Finally,
we study future extreme-scale systems, showing that beyond terascale, projected
power requirements in excess of 100MW make ZCCloud up to 45% lower cost and,
for a fixed budget, increase achievable peak PFLOPS by 80%.
|
[
{
"created": "Thu, 7 Jul 2016 19:31:37 GMT",
"version": "v1"
}
] |
2016-07-08
|
[
[
"Yang",
"Fan",
""
],
[
"Chien",
"Andrew A.",
""
]
] |
Power consumption (supply, heat, cost) and associated carbon emissions (environmental impact) are increasingly critical challenges in scaling supercomputing to Exascale and beyond. We propose to exploit stranded power, renewable energy that has no value to the power grid, for scaling supercomputers with the Zero-Carbon Cloud (ZCCloud), showing that stranded power can be employed effectively to expand computing [1]. We build on those results with a new analysis of stranded power, characterizing temporal, geographic, and interval properties. We simulate production supercomputing workloads and model datacenter total-cost-of-ownership (TCO), assessing the costs and capabilities of stranded-power based supercomputing. Results show that the ZCCloud approach is cost-effective today in regions with high cost power. The ZCCloud approach reduces TCO by 21-45%, and improves cost-effectiveness up to 34%. We study many scenarios. With higher power price, cheaper computing hardware and higher system power density, benefits rise to 55%, 97% and 116% respectively. Finally, we study future extreme-scale systems, showing that beyond terascale, projected power requirements in excess of 100MW make ZCCloud up to 45% lower cost and, for a fixed budget, increase achievable peak PFLOPS by 80%.
|
1001.3497
|
William Jackson
|
Shahid Hussain, Sheikh Muhammad Saqib, Bashir Ahmad, Shakeel Ahmad
|
Mapping of SOA and RUP: DOA as Case Study
|
Journal of Computing, Vol. 2, Issue 1, January 2010,
https://sites.google.com/site/journalofcomputing/
|
Journal of Computing, Vol. 2, Issue 1, January 2010,
https://sites.google.com/site/journalofcomputing/
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SOA (Service Oriented Architecture) is a new trend towards increasing the
profit margins in an organization by incorporating business services into
business practices. Rational Unified Process (RUP) is a unified method planning
form for large business applications that provides a language for describing
method content and processes. A well-defined mapping of SOA and RUP leads to
successful completion of RUP software projects that provide services to their
users. DOA (Digital Office Assistant) is a multi-user SOA-type application that
provides an appropriate viewer for each user to assist them through services. In
this paper, the authors propose a mapping strategy for SOA with RUP, considering
DOA as a case study.
|
[
{
"created": "Wed, 20 Jan 2010 08:11:10 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2010 07:37:16 GMT",
"version": "v2"
}
] |
2010-03-30
|
[
[
"Hussain",
"Shahid",
""
],
[
"Saqib",
"Sheikh Muhammad",
""
],
[
"Ahmad",
"Bashir",
""
],
[
"Ahmad",
"Shakeel",
""
]
] |
SOA (Service Oriented Architecture) is a new trend towards increasing the profit margins in an organization by incorporating business services into business practices. Rational Unified Process (RUP) is a unified method planning form for large business applications that provides a language for describing method content and processes. A well-defined mapping of SOA and RUP leads to successful completion of RUP software projects that provide services to their users. DOA (Digital Office Assistant) is a multi-user SOA-type application that provides an appropriate viewer for each user to assist them through services. In this paper, the authors propose a mapping strategy for SOA with RUP, considering DOA as a case study.
|
2111.04746
|
Max Hopkins
|
Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
|
Realizable Learning is All You Need
| null |
TheoretiCS, Volume 3 (February 6, 2024) theoretics:10093
|
10.46298/theoretics.24.2
| null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The equivalence of realizable and agnostic learnability is a fundamental
phenomenon in learning theory. With variants ranging from classical settings
like PAC learning and regression to recent trends such as adversarially robust
learning, it's surprising that we still lack a unified theory; traditional
proofs of the equivalence tend to be disparate, and rely on strong
model-specific assumptions like uniform convergence and sample compression.
In this work, we give the first model-independent framework explaining the
equivalence of realizable and agnostic learnability: a three-line blackbox
reduction that simplifies, unifies, and extends our understanding across a wide
variety of settings. This includes models with no known characterization of
learnability such as learning with arbitrary distributional assumptions and
more general loss functions, as well as a host of other popular settings such
as robust learning, partial learning, fair learning, and the statistical query
model.
More generally, we argue that the equivalence of realizable and agnostic
learning is actually a special case of a broader phenomenon we call property
generalization: any desirable property of a learning algorithm (e.g. noise
tolerance, privacy, stability) that can be satisfied over finite hypothesis
classes extends (possibly in some variation) to any learnable hypothesis class.
|
[
{
"created": "Mon, 8 Nov 2021 19:00:00 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Sep 2022 08:34:25 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Feb 2023 12:06:15 GMT",
"version": "v3"
},
{
"created": "Sat, 3 Feb 2024 00:55:16 GMT",
"version": "v4"
}
] |
2024-08-07
|
[
[
"Hopkins",
"Max",
""
],
[
"Kane",
"Daniel M.",
""
],
[
"Lovett",
"Shachar",
""
],
[
"Mahajan",
"Gaurav",
""
]
] |
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions and more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
|
2108.01753
|
Andrew Reed
|
Andrew C. Reed, Michael K. Reiter
|
Optimally Hiding Object Sizes with Constrained Padding
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among the most challenging traffic-analysis attacks to confound are those
leveraging the sizes of objects downloaded over the network. In this paper we
systematically analyze this problem under realistic constraints regarding the
padding overhead that the object store is willing to incur. We give algorithms
to compute privacy-optimal padding schemes -- specifically that minimize the
network observer's information gain from a downloaded object's padded size --
in several scenarios of interest: per-object padding, in which the object store
responds to each request for an object with the same padded copy; per-request
padding, in which the object store pads an object anew each time it serves that
object; and a scenario unlike the previous ones in that the object store is
unable to leverage a known distribution over the object queries. We provide
constructions for privacy-optimal padding in each case, compare them to recent
contenders in the research literature, and evaluate their performance on
practical datasets.
|
[
{
"created": "Tue, 3 Aug 2021 21:14:13 GMT",
"version": "v1"
}
] |
2021-08-05
|
[
[
"Reed",
"Andrew C.",
""
],
[
"Reiter",
"Michael K.",
""
]
] |
Among the most challenging traffic-analysis attacks to confound are those leveraging the sizes of objects downloaded over the network. In this paper we systematically analyze this problem under realistic constraints regarding the padding overhead that the object store is willing to incur. We give algorithms to compute privacy-optimal padding schemes -- specifically that minimize the network observer's information gain from a downloaded object's padded size -- in several scenarios of interest: per-object padding, in which the object store responds to each request for an object with the same padded copy; per-request padding, in which the object store pads an object anew each time it serves that object; and a scenario unlike the previous ones in that the object store is unable to leverage a known distribution over the object queries. We provide constructions for privacy-optimal padding in each case, compare them to recent contenders in the research literature, and evaluate their performance on practical datasets.
|
2304.03985
|
Anoop S. K. M.
|
Anoop S. K. M., Jayalal Sarma
|
On Rotation Distance of Rank Bounded Trees
|
28 pages, 2 figures, Abstract shortened to meet arxiv requirements,
accepted journal version
|
Fundamenta Informaticae, Volume 191, Issue 2 (July 8, 2024)
fi:11200
| null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Computing the rotation distance between two binary trees with $n$ internal
nodes efficiently (in $poly(n)$ time) is a long-standing open question in the
study of height balancing in tree data structures. In this paper, we initiate
the study of this problem bounding the rank of the trees given at the input
(defined by Ehrenfeucht and Haussler (1989) in the context of decision trees).
We define the rank-bounded rotation distance between two given binary trees
$T_1$ and $T_2$ (with $n$ internal nodes) of rank at most $r$, denoted by
$d_r(T_1,T_2)$, as the length of the shortest sequence of rotations that
transforms $T_1$ to $T_2$ with the restriction that the intermediate trees must
be of rank at most $r$. We show that the rotation distance problem reduces in
polynomial time to the rank bounded rotation distance problem. This motivates
the study of the problem in the combinatorial and algorithmic frontiers.
Observing that trees with rank $1$ coincide exactly with skew trees (binary
trees where every internal node has at least one leaf as a child), we show the
following results in this frontier:
We present an $O(n^2)$ time algorithm for computing $d_1(T_1,T_2)$. That is,
when the given trees are skew trees (we call this variant the skew rotation
distance problem), where the intermediate trees are restricted to be skew as
well. In particular, our techniques imply that for any two skew trees
$d(T_1,T_2) \le n^2$.
We show the following upper bound: for any two trees $T_1$ and $T_2$ of rank
at most $r_1$ and $r_2$ respectively, we have that: $d_r(T_1,T_2) \le n^2
(1+(2n+1)(r_1+r_2-2))$ where $r = max\{r_1,r_2\}$. This bound is asymptotically
tight for $r=1$.
En route to our proof of the above theorems, we associate binary trees to
permutations and bivariate polynomials, and prove several characterizations in
the case of skew trees.
|
[
{
"created": "Sat, 8 Apr 2023 11:02:35 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2024 15:42:46 GMT",
"version": "v2"
},
{
"created": "Fri, 10 May 2024 18:18:05 GMT",
"version": "v3"
}
] |
2024-08-07
|
[
[
"M.",
"Anoop S. K.",
""
],
[
"Sarma",
"Jayalal",
""
]
] |
Computing the rotation distance between two binary trees with $n$ internal nodes efficiently (in $poly(n)$ time) is a long-standing open question in the study of height balancing in tree data structures. In this paper, we initiate the study of this problem bounding the rank of the trees given at the input (defined by Ehrenfeucht and Haussler (1989) in the context of decision trees). We define the rank-bounded rotation distance between two given binary trees $T_1$ and $T_2$ (with $n$ internal nodes) of rank at most $r$, denoted by $d_r(T_1,T_2)$, as the length of the shortest sequence of rotations that transforms $T_1$ to $T_2$ with the restriction that the intermediate trees must be of rank at most $r$. We show that the rotation distance problem reduces in polynomial time to the rank bounded rotation distance problem. This motivates the study of the problem in the combinatorial and algorithmic frontiers. Observing that trees with rank $1$ coincide exactly with skew trees (binary trees where every internal node has at least one leaf as a child), we show the following results in this frontier: We present an $O(n^2)$ time algorithm for computing $d_1(T_1,T_2)$. That is, when the given trees are skew trees (we call this variant the skew rotation distance problem), where the intermediate trees are restricted to be skew as well. In particular, our techniques imply that for any two skew trees $d(T_1,T_2) \le n^2$. We show the following upper bound: for any two trees $T_1$ and $T_2$ of rank at most $r_1$ and $r_2$ respectively, we have that: $d_r(T_1,T_2) \le n^2 (1+(2n+1)(r_1+r_2-2))$ where $r = max\{r_1,r_2\}$. This bound is asymptotically tight for $r=1$. En route to our proof of the above theorems, we associate binary trees to permutations and bivariate polynomials, and prove several characterizations in the case of skew trees.
|
1811.10855
|
Chen Yang
|
Chen Yang, Xiaofeng Meng, Zhihui Du, Zhiqiang Duan and Yongjie Du
|
Data Management in Time-Domain Astronomy: Requirements and Challenges
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In time-domain astronomy, we need to use relational databases to manage
star catalog data. With the development of sky survey technology, star catalog
data is growing larger and being generated faster. In this paper, we give a
systematic and comprehensive introduction to processing data in time-domain
astronomy, and we detail valuable research questions. Then, we list candidate
systems commonly used in astronomy and point out the advantages and
disadvantages of these systems. In addition, we present the key techniques
needed to deal with astronomical data. Finally, we summarize the challenges
faced in the design of our database prototype.
|
[
{
"created": "Tue, 27 Nov 2018 07:54:43 GMT",
"version": "v1"
}
] |
2018-11-28
|
[
[
"Yang",
"Chen",
""
],
[
"Meng",
"Xiaofeng",
""
],
[
"Du",
"Zhihui",
""
],
[
"Duan",
"Zhiqiang",
""
],
[
"Du",
"Yongjie",
""
]
] |
In time-domain astronomy, we need to use relational databases to manage star catalog data. With the development of sky survey technology, star catalog data is growing larger and being generated faster. In this paper, we give a systematic and comprehensive introduction to processing data in time-domain astronomy, and we detail valuable research questions. Then, we list candidate systems commonly used in astronomy and point out the advantages and disadvantages of these systems. In addition, we present the key techniques needed to deal with astronomical data. Finally, we summarize the challenges faced in the design of our database prototype.
|
2406.13793
|
Minghao Cai
|
Minghao Cai, and Carrie Demmans Epp
|
Exploring the Optimal Time Window for Predicting Cognitive Load Using
Physiological Sensor Data
|
Presented at PhysioCHI: Towards Best Practices for Integrating
Physiological Signals in HCI, May 11, 2024, Honolulu, HI, USA
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Learning analytics has begun to use physiological signals because these have
been linked with learners' cognitive and affective states. These signals, when
interpreted through machine learning techniques, offer a nuanced understanding
of the temporal dynamics of student learning experiences and processes.
However, there is a lack of clear guidance on the optimal time window to use
for analyzing physiological signals within predictive models. We conducted an
empirical investigation of different time windows (ranging from 60 to 210
seconds) when analysing multichannel physiological sensor data for predicting
cognitive load. Our results demonstrate a preference for longer time windows,
with optimal window length typically exceeding 90 seconds. These findings
challenge the conventional focus on immediate physiological responses,
suggesting that a broader temporal scope could provide a more comprehensive
understanding of cognitive processes. In addition, the variation in which time
windows best supported prediction across classifiers underscores the complexity
of integrating physiological measures. Our findings provide new insights for
developing educational technologies that more accurately reflect and respond to
the dynamic nature of learner cognitive load in complex learning environments.
|
[
{
"created": "Wed, 19 Jun 2024 19:39:14 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Cai",
"Minghao",
""
],
[
"Epp",
"Carrie Demmans",
""
]
] |
Learning analytics has begun to use physiological signals because these have been linked with learners' cognitive and affective states. These signals, when interpreted through machine learning techniques, offer a nuanced understanding of the temporal dynamics of student learning experiences and processes. However, there is a lack of clear guidance on the optimal time window to use for analyzing physiological signals within predictive models. We conducted an empirical investigation of different time windows (ranging from 60 to 210 seconds) when analysing multichannel physiological sensor data for predicting cognitive load. Our results demonstrate a preference for longer time windows, with optimal window length typically exceeding 90 seconds. These findings challenge the conventional focus on immediate physiological responses, suggesting that a broader temporal scope could provide a more comprehensive understanding of cognitive processes. In addition, the variation in which time windows best supported prediction across classifiers underscores the complexity of integrating physiological measures. Our findings provide new insights for developing educational technologies that more accurately reflect and respond to the dynamic nature of learner cognitive load in complex learning environments.
|
2405.11708
|
Shao-Yuan Lo
|
Shao-Yuan Lo, Vishal M. Patel
|
Adaptive Batch Normalization Networks for Adversarial Robustness
|
Accepted at IEEE International Conference on Advanced Video and
Signal-based Surveillance (AVSS) 2024
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep networks are vulnerable to adversarial examples. Adversarial Training
(AT) has been a standard foundation of modern adversarial defense approaches
due to its remarkable effectiveness. However, AT is extremely time-consuming,
which keeps it from wide deployment in practical applications. In this paper, we
aim at a non-AT defense: How to design a defense method that gets rid of AT but
is still robust against strong adversarial attacks? To answer this question, we
resort to adaptive Batch Normalization (BN), inspired by the recent advances in
test-time domain adaptation. We propose a novel defense accordingly, referred
to as the Adaptive Batch Normalization Network (ABNN). ABNN employs a
pre-trained substitute model to generate clean BN statistics and sends them to
the target model. The target model is exclusively trained on clean data and
learns to align the substitute model's BN statistics. Experimental results show
that ABNN consistently improves adversarial robustness against both digital and
physically realizable attacks on both image and video datasets. Furthermore,
ABNN can achieve higher clean data performance and significantly lower training
time complexity compared to AT-based approaches.
|
[
{
"created": "Mon, 20 May 2024 00:58:53 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 00:38:08 GMT",
"version": "v2"
}
] |
2024-05-28
|
[
[
"Lo",
"Shao-Yuan",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
Deep networks are vulnerable to adversarial examples. Adversarial Training (AT) has been a standard foundation of modern adversarial defense approaches due to its remarkable effectiveness. However, AT is extremely time-consuming, which keeps it from wide deployment in practical applications. In this paper, we aim at a non-AT defense: How to design a defense method that gets rid of AT but is still robust against strong adversarial attacks? To answer this question, we resort to adaptive Batch Normalization (BN), inspired by the recent advances in test-time domain adaptation. We propose a novel defense accordingly, referred to as the Adaptive Batch Normalization Network (ABNN). ABNN employs a pre-trained substitute model to generate clean BN statistics and sends them to the target model. The target model is exclusively trained on clean data and learns to align the substitute model's BN statistics. Experimental results show that ABNN consistently improves adversarial robustness against both digital and physically realizable attacks on both image and video datasets. Furthermore, ABNN can achieve higher clean data performance and significantly lower training time complexity compared to AT-based approaches.
|
1403.2294
|
Sergey Nikolaev
|
Sergei Nikolaev
|
Non-linear mass-spring system for large soft tissue deformations
modeling
|
9 pages, 2 figures, 4 charts
|
Scientific and Technical Journal of Information Technologies,
Mechanics and Optics 5(87) (2013) 88-94
| null | null |
cs.NA physics.comp-ph
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
An implant placement operation under soft tissues is described. In this
operation, tissues can reach deformations at which nonlinear properties
appear. A modification of the mass-spring model for modeling nonlinear tissue
behavior is developed. A method for defining the elasticity modulus using
splines is described. For the Poisson ratio, different stiffnesses are used for
different types of springs in a cubic grid. To find the stiffnesses, an
equation system describing material tension is solved. The model is verified
with a quadratic sample tension experiment. These tests show that sample
tension under external forces matches the defined nonlinear elasticity modulus.
The accuracy of Poisson ratio modeling is thirty-five percent, which is better
than the results of the available modeling methods.
|
[
{
"created": "Mon, 10 Mar 2014 16:34:40 GMT",
"version": "v1"
}
] |
2014-03-11
|
[
[
"Nikolaev",
"Sergei",
""
]
] |
An implant placement operation under soft tissues is described. In this operation, tissues can reach deformations at which nonlinear properties appear. A modification of the mass-spring model for modeling nonlinear tissue behavior is developed. A method for defining the elasticity modulus using splines is described. For the Poisson ratio, different stiffnesses are used for different types of springs in a cubic grid. To find the stiffnesses, an equation system describing material tension is solved. The model is verified with a quadratic sample tension experiment. These tests show that sample tension under external forces matches the defined nonlinear elasticity modulus. The accuracy of Poisson ratio modeling is thirty-five percent, which is better than the results of the available modeling methods.
|
1802.08249
|
Maziar Sanjabi
|
Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, Jason D. Lee
|
On the Convergence and Robustness of Training GANs with Regularized
Optimal Transport
| null | null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative Adversarial Networks (GANs) are one of the most practical methods
for learning data distributions. A popular GAN formulation is based on the use
of Wasserstein distance as a metric between probability distributions.
Unfortunately, minimizing the Wasserstein distance between the data
distribution and the generative model distribution is a computationally
challenging problem as its objective is non-convex, non-smooth, and even hard
to compute. In this work, we show that obtaining gradient information of the
smoothed Wasserstein GAN formulation, which is based on regularized Optimal
Transport (OT), is computationally effortless and hence one can apply first
order optimization methods to minimize this objective. Consequently, we
establish theoretical convergence guarantee to stationarity for a proposed
class of GAN optimization algorithms. Unlike the original non-smooth
formulation, our algorithm only requires solving the discriminator to
approximate optimality. We apply our method to learning MNIST digits as well as
CIFAR-10 images. Our experiments show that our method is computationally
efficient and generates images comparable to the state of the art algorithms
given the same architecture and computational power.
|
[
{
"created": "Thu, 22 Feb 2018 04:11:58 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2018 05:11:47 GMT",
"version": "v2"
}
] |
2018-05-23
|
[
[
"Sanjabi",
"Maziar",
""
],
[
"Ba",
"Jimmy",
""
],
[
"Razaviyayn",
"Meisam",
""
],
[
"Lee",
"Jason D.",
""
]
] |
Generative Adversarial Networks (GANs) are one of the most practical methods for learning data distributions. A popular GAN formulation is based on the use of Wasserstein distance as a metric between probability distributions. Unfortunately, minimizing the Wasserstein distance between the data distribution and the generative model distribution is a computationally challenging problem as its objective is non-convex, non-smooth, and even hard to compute. In this work, we show that obtaining gradient information of the smoothed Wasserstein GAN formulation, which is based on regularized Optimal Transport (OT), is computationally effortless and hence one can apply first order optimization methods to minimize this objective. Consequently, we establish theoretical convergence guarantee to stationarity for a proposed class of GAN optimization algorithms. Unlike the original non-smooth formulation, our algorithm only requires solving the discriminator to approximate optimality. We apply our method to learning MNIST digits as well as CIFAR-10 images. Our experiments show that our method is computationally efficient and generates images comparable to the state of the art algorithms given the same architecture and computational power.
|
2112.06921
|
Letitia Sabburg
|
Letitia Sabburg, Alan Woodley and Kerrie Mengersen
|
A Data- and Task- Oriented Design Framework for Bivariate Communication
of Uncertainty
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The communication of uncertainty estimates, predictions and insights based on
spatio-temporal models is important for decision-making as it impacts the
utilisation and interpretation of information. Bivariate mapping is commonly
used for communication of estimates and associated uncertainty; however, it is
known that different visual qualities resulting from choices of symbols and
consequent interaction between the display dimensions can lead to different
interpretations and consequently affect resultant decisions. Characteristics of
the data to be presented, such as spatial format, statistical level and
continuousness, shape the range of available bivariate symbols. The subsequent
utility of these bivariate symbols depends on their ability to achieve
end-user's goals. In this paper we present a novel design framework, which,
through consideration of both input data characteristics and potential
operational tasks (as proxy to end-user goals), assists map designers in
appropriate selection of bivariate symbols for the coincident presentation of
spatio-temporal modelled data and associated uncertainty. The framework is
showcased through application to a case study pertaining to sediment pollution
in the Great Barrier Reef.
|
[
{
"created": "Mon, 13 Dec 2021 05:37:16 GMT",
"version": "v1"
}
] |
2021-12-15
|
[
[
"Sabburg",
"Letitia",
""
],
[
"Woodley",
"Alan",
""
],
[
"Mengersen",
"Kerrie",
""
]
] |
The communication of uncertainty estimates, predictions and insights based on spatio-temporal models is important for decision-making as it impacts the utilisation and interpretation of information. Bivariate mapping is commonly used for communication of estimates and associated uncertainty; however, it is known that different visual qualities resulting from choices of symbols and consequent interaction between the display dimensions can lead to different interpretations and consequently affect resultant decisions. Characteristics of the data to be presented, such as spatial format, statistical level and continuousness, shape the range of available bivariate symbols. The subsequent utility of these bivariate symbols depends on their ability to achieve end-user's goals. In this paper we present a novel design framework, which, through consideration of both input data characteristics and potential operational tasks (as proxy to end-user goals), assists map designers in appropriate selection of bivariate symbols for the coincident presentation of spatio-temporal modelled data and associated uncertainty. The framework is showcased through application to a case study pertaining to sediment pollution in the Great Barrier Reef.
|