| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.18357
|
Jundong Xu
|
Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong-Li Lee, Wynne Hsu
|
Faithful Logical Reasoning via Symbolic Chain-of-Thought
|
Accepted by ACL 2024 (main proceeding)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While the recent Chain-of-Thought (CoT) technique enhances the reasoning
ability of large language models (LLMs) with the theory of mind, it may still
struggle to handle logical reasoning that relies heavily on symbolic expressions
and rigid deduction rules. To strengthen the logical reasoning capability of
LLMs, we propose a novel Symbolic Chain-of-Thought, namely SymbCoT, a fully
LLM-based framework that integrates symbolic expressions and logic rules with
CoT prompting. Technically, building upon an LLM, SymbCoT 1) first translates
the natural language context into the symbolic format, and then 2) derives a
step-by-step plan to solve the problem with symbolic logical rules, 3) followed
by a verifier to check the translation and reasoning chain. Via thorough
evaluations on 5 standard datasets with both First-Order Logic and Constraint
Optimization symbolic expressions, SymbCoT consistently shows striking
improvements over the CoT method while setting new state-of-the-art
performance. We further demonstrate that our system offers more faithful,
flexible, and explainable logical reasoning. To our knowledge, this is the
first work to combine symbolic expressions and rules into CoT for logical
reasoning with LLMs. Code is available at https://github.com/Aiden0526/SymbCoT.
|
[
{
"created": "Tue, 28 May 2024 16:55:33 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 07:41:03 GMT",
"version": "v2"
}
] |
2024-06-12
|
[
[
"Xu",
"Jundong",
""
],
[
"Fei",
"Hao",
""
],
[
"Pan",
"Liangming",
""
],
[
"Liu",
"Qian",
""
],
[
"Lee",
"Mong-Li",
""
],
[
"Hsu",
"Wynne",
""
]
] |
While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it may still struggle to handle logical reasoning that relies heavily on symbolic expressions and rigid deduction rules. To strengthen the logical reasoning capability of LLMs, we propose a novel Symbolic Chain-of-Thought, namely SymbCoT, a fully LLM-based framework that integrates symbolic expressions and logic rules with CoT prompting. Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, and then 2) derives a step-by-step plan to solve the problem with symbolic logical rules, 3) followed by a verifier to check the translation and reasoning chain. Via thorough evaluations on 5 standard datasets with both First-Order Logic and Constraint Optimization symbolic expressions, SymbCoT consistently shows striking improvements over the CoT method while setting new state-of-the-art performance. We further demonstrate that our system offers more faithful, flexible, and explainable logical reasoning. To our knowledge, this is the first work to combine symbolic expressions and rules into CoT for logical reasoning with LLMs. Code is available at https://github.com/Aiden0526/SymbCoT.
|
1605.06020
|
Zeineb Guizani
|
Zeineb Guizani and Noureddine Hamdi
|
Spectrum Resource Management and Interference Mitigation for D2D
Communications with Awareness of BER Constraint in mmWave 5G Underlay Network
|
Accepted in IEEE Symposium on Computers and Communications June, 2016
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the issue of massive demands for higher capacity. To
that end, we investigate spectrum resource management in an outdoor mmWave
cell for the uplink of cellular and D2D communications. We provide a first
insight into how to optimize system performance in terms of achievable
throughput while striking a compromise between the number of admitted devices
and the generated interference constraint. We propose a mathematical
formulation of the optimization objective, which falls into the class of mixed
integer-real optimization problems. To overcome its complexity, we apply a
heuristic algorithm and test its efficiency through simulation results, with
particular regard to the impact of BER on QoS.
|
[
{
"created": "Thu, 19 May 2016 15:53:52 GMT",
"version": "v1"
}
] |
2016-05-20
|
[
[
"Guizani",
"Zeineb",
""
],
[
"Hamdi",
"Noureddine",
""
]
] |
This paper addresses the issue of massive demands for higher capacity. To that end, we investigate spectrum resource management in an outdoor mmWave cell for the uplink of cellular and D2D communications. We provide a first insight into how to optimize system performance in terms of achievable throughput while striking a compromise between the number of admitted devices and the generated interference constraint. We propose a mathematical formulation of the optimization objective, which falls into the class of mixed integer-real optimization problems. To overcome its complexity, we apply a heuristic algorithm and test its efficiency through simulation results, with particular regard to the impact of BER on QoS.
|
2312.04838
|
Suhas Srinath
|
Suhas Srinath, Shankhanil Mitra, Shika Rao and Rajiv Soundararajan
|
Learning Generalizable Perceptual Representations for Data-Efficient
No-Reference Image Quality Assessment
|
Accepted to IEEE/CVF WACV 2024
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
No-reference (NR) image quality assessment (IQA) is an important tool in
enhancing the user experience in diverse visual applications. A major drawback
of state-of-the-art NR-IQA techniques is their reliance on a large number of
human annotations to train models for a target IQA application. To mitigate
this requirement, there is a need for unsupervised learning of generalizable
quality representations that capture diverse distortions. We enable the
learning of low-level quality features agnostic to distortion types by
introducing a novel quality-aware contrastive loss. Further, we leverage the
generalizability of vision-language models by fine-tuning one such model to
extract high-level image quality information through relevant text prompts. The
two sets of features are combined to effectively predict quality by training a
simple regressor with very few samples on a target dataset. Additionally, we
design zero-shot quality predictions from both pathways in a completely blind
setting. Our experiments on diverse datasets encompassing various distortions
show the generalizability of the features and their superior performance in the
data-efficient and zero-shot settings. Code will be made available at
https://github.com/suhas-srinath/GRepQ.
|
[
{
"created": "Fri, 8 Dec 2023 05:24:21 GMT",
"version": "v1"
}
] |
2023-12-11
|
[
[
"Srinath",
"Suhas",
""
],
[
"Mitra",
"Shankhanil",
""
],
[
"Rao",
"Shika",
""
],
[
"Soundararajan",
"Rajiv",
""
]
] |
No-reference (NR) image quality assessment (IQA) is an important tool in enhancing the user experience in diverse visual applications. A major drawback of state-of-the-art NR-IQA techniques is their reliance on a large number of human annotations to train models for a target IQA application. To mitigate this requirement, there is a need for unsupervised learning of generalizable quality representations that capture diverse distortions. We enable the learning of low-level quality features agnostic to distortion types by introducing a novel quality-aware contrastive loss. Further, we leverage the generalizability of vision-language models by fine-tuning one such model to extract high-level image quality information through relevant text prompts. The two sets of features are combined to effectively predict quality by training a simple regressor with very few samples on a target dataset. Additionally, we design zero-shot quality predictions from both pathways in a completely blind setting. Our experiments on diverse datasets encompassing various distortions show the generalizability of the features and their superior performance in the data-efficient and zero-shot settings. Code will be made available at https://github.com/suhas-srinath/GRepQ.
|
1606.01292
|
Kaisheng Yao
|
Kaisheng Yao and Baolin Peng and Geoffrey Zweig and Kam-Fai Wong
|
An Attentional Neural Conversation Model with Improved Specificity
| null | null | null | null |
cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a neural conversation model for conducting
dialogues. We demonstrate the use of this model to generate help desk
responses, where users are asking questions about PC applications. Our model is
distinguished by two characteristics. First, it models intention across turns
with a recurrent network, and incorporates an attention model that is
conditioned on the representation of intention. Secondly, it avoids generating
non-specific responses by incorporating an IDF term in the objective function.
The model is evaluated both as a pure generation model in which a help-desk
response is generated from scratch, and as a retrieval model with performance
measured using recall rates of the correct response. Experimental results
indicate that the model outperforms previously proposed neural conversation
architectures, and that using specificity in the objective function
significantly improves performance for both generation and retrieval.
|
[
{
"created": "Fri, 3 Jun 2016 22:26:01 GMT",
"version": "v1"
}
] |
2016-06-07
|
[
[
"Yao",
"Kaisheng",
""
],
[
"Peng",
"Baolin",
""
],
[
"Zweig",
"Geoffrey",
""
],
[
"Wong",
"Kam-Fai",
""
]
] |
In this paper we propose a neural conversation model for conducting dialogues. We demonstrate the use of this model to generate help desk responses, where users are asking questions about PC applications. Our model is distinguished by two characteristics. First, it models intention across turns with a recurrent network, and incorporates an attention model that is conditioned on the representation of intention. Secondly, it avoids generating non-specific responses by incorporating an IDF term in the objective function. The model is evaluated both as a pure generation model in which a help-desk response is generated from scratch, and as a retrieval model with performance measured using recall rates of the correct response. Experimental results indicate that the model outperforms previously proposed neural conversation architectures, and that using specificity in the objective function significantly improves performance for both generation and retrieval.
|
2201.09208
|
I-Hsi Kao
|
I-Hsi Kao, Ya-Zhu Yian, Jian-An Su, Yi-Horng Lai, Jau-Woei Perng,
Tung-Li Hsieh, Yi-Shueh Tsai, and Min-Shiu Hsieh
|
Design of Sensor Fusion Driver Assistance System for Active Pedestrian
Safety
|
The 14th International Conference on Automation Technology
(Automation 2017), December 8-10, 2017, Kaohsiung, Taiwan
| null | null | null |
cs.CV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a parallel architecture for a sensor fusion
detection system that combines a camera and 1D light detection and ranging
(lidar) sensor for object detection. The system contains two object detection
methods, one based on optical flow and the other using lidar. The two
sensors effectively compensate for each other's defects. Accurate
longitudinal localization of the object and its lateral movement
information can be achieved simultaneously. Using a spatio-temporal alignment
and a policy of sensor fusion, we completed the development of a fusion
detection system with high reliability at distances of up to 20 m. Test results
show that the proposed system achieves a high level of accuracy for pedestrian
or object detection in front of a vehicle, and has high robustness to special
environments.
|
[
{
"created": "Sun, 23 Jan 2022 08:52:32 GMT",
"version": "v1"
}
] |
2022-01-25
|
[
[
"Kao",
"I-Hsi",
""
],
[
"Yian",
"Ya-Zhu",
""
],
[
"Su",
"Jian-An",
""
],
[
"Lai",
"Yi-Horng",
""
],
[
"Perng",
"Jau-Woei",
""
],
[
"Hsieh",
"Tung-Li",
""
],
[
"Tsai",
"Yi-Shueh",
""
],
[
"Hsieh",
"Min-Shiu",
""
]
] |
In this paper, we present a parallel architecture for a sensor fusion detection system that combines a camera and 1D light detection and ranging (lidar) sensor for object detection. The system contains two object detection methods, one based on optical flow and the other using lidar. The two sensors effectively compensate for each other's defects. Accurate longitudinal localization of the object and its lateral movement information can be achieved simultaneously. Using a spatio-temporal alignment and a policy of sensor fusion, we completed the development of a fusion detection system with high reliability at distances of up to 20 m. Test results show that the proposed system achieves a high level of accuracy for pedestrian or object detection in front of a vehicle, and has high robustness to special environments.
|
2402.09500
|
Matthew Fox
|
Matthew Fox
|
On Formally Undecidable Traits of Intelligent Machines
|
34 pages
| null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Building on work by Alfonseca et al. (2021), we study the conditions
necessary for it to be logically possible to prove that an arbitrary
artificially intelligent machine will exhibit certain behavior. To do this, we
develop a formalism like -- but mathematically distinct from -- the theory of
formal languages and their properties. Our formalism affords a precise means
for not only talking about the traits we desire of machines (such as them being
intelligent, contained, moral, and so forth), but also for detailing the
conditions necessary for it to be logically possible to decide whether a given
arbitrary machine possesses such a trait or not. Contrary to Alfonseca et al.'s
(2021) results, we find that Rice's theorem from computability theory cannot in
general be used to determine whether an arbitrary machine possesses a given
trait or not. Therefore, it is not necessarily the case that deciding whether
an arbitrary machine is intelligent, contained, moral, and so forth is
logically impossible.
|
[
{
"created": "Wed, 14 Feb 2024 18:59:37 GMT",
"version": "v1"
}
] |
2024-02-16
|
[
[
"Fox",
"Matthew",
""
]
] |
Building on work by Alfonseca et al. (2021), we study the conditions necessary for it to be logically possible to prove that an arbitrary artificially intelligent machine will exhibit certain behavior. To do this, we develop a formalism like -- but mathematically distinct from -- the theory of formal languages and their properties. Our formalism affords a precise means for not only talking about the traits we desire of machines (such as them being intelligent, contained, moral, and so forth), but also for detailing the conditions necessary for it to be logically possible to decide whether a given arbitrary machine possesses such a trait or not. Contrary to Alfonseca et al.'s (2021) results, we find that Rice's theorem from computability theory cannot in general be used to determine whether an arbitrary machine possesses a given trait or not. Therefore, it is not necessarily the case that deciding whether an arbitrary machine is intelligent, contained, moral, and so forth is logically impossible.
|
2401.15720
|
Anat Hashavit
|
Anat Hashavit, Tamar Stern, Hongning Wang, Sarit Kraus
|
The Impact of Snippet Reliability on Misinformation in Online Health
Search
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Search result snippets are crucial in modern search engines, providing users
with a quick overview of a website's content. Snippets help users determine the
relevance of a document to their information needs, and in certain scenarios
even enable them to satisfy those needs without visiting web documents. Hence,
it is crucial for snippets to reliably represent the content of their
corresponding documents. While this may be a straightforward requirement for
some queries, it can become challenging in the complex domain of healthcare,
and can lead to misinformation. This paper aims to examine snippets'
reliability in representing their corresponding documents, specifically in the
health domain. To achieve this, we conduct a series of user studies using
Google's search results, where participants are asked to infer viewpoints of
search results pertaining to queries about the effectiveness of a medical
intervention for a medical condition, based solely on their titles and
snippets. Our findings reveal that a considerable portion of Google's snippets
(28%) failed to present any viewpoint on the intervention's effectiveness, and
that 35% were interpreted by participants as having a different viewpoint
compared to their corresponding documents. To address this issue, we propose a
snippet extraction solution tailored directly to users' information needs,
i.e., extracting snippets that summarize documents' viewpoints regarding the
intervention and condition that appear in the query. A user study demonstrates
that our information need-focused solution outperforms the mainstream
query-based approach, with only 19.67% of snippets generated by our solution
reported as not presenting a viewpoint and a mere 20.33% misinterpreted by
participants. These results strongly suggest that an information need-focused
approach can significantly improve the reliability of extracted snippets in
online health search.
|
[
{
"created": "Sun, 28 Jan 2024 17:59:55 GMT",
"version": "v1"
}
] |
2024-01-30
|
[
[
"Hashavit",
"Anat",
""
],
[
"Stern",
"Tamar",
""
],
[
"Wang",
"Hongning",
""
],
[
"Kraus",
"Sarit",
""
]
] |
Search result snippets are crucial in modern search engines, providing users with a quick overview of a website's content. Snippets help users determine the relevance of a document to their information needs, and in certain scenarios even enable them to satisfy those needs without visiting web documents. Hence, it is crucial for snippets to reliably represent the content of their corresponding documents. While this may be a straightforward requirement for some queries, it can become challenging in the complex domain of healthcare, and can lead to misinformation. This paper aims to examine snippets' reliability in representing their corresponding documents, specifically in the health domain. To achieve this, we conduct a series of user studies using Google's search results, where participants are asked to infer viewpoints of search results pertaining to queries about the effectiveness of a medical intervention for a medical condition, based solely on their titles and snippets. Our findings reveal that a considerable portion of Google's snippets (28%) failed to present any viewpoint on the intervention's effectiveness, and that 35% were interpreted by participants as having a different viewpoint compared to their corresponding documents. To address this issue, we propose a snippet extraction solution tailored directly to users' information needs, i.e., extracting snippets that summarize documents' viewpoints regarding the intervention and condition that appear in the query. A user study demonstrates that our information need-focused solution outperforms the mainstream query-based approach, with only 19.67% of snippets generated by our solution reported as not presenting a viewpoint and a mere 20.33% misinterpreted by participants. These results strongly suggest that an information need-focused approach can significantly improve the reliability of extracted snippets in online health search.
|
1109.3437
|
Jia Zeng
|
Jia Zeng and William K. Cheung and Jiming Liu
|
Learning Topic Models by Belief Propagation
|
14 pages, 17 figures
|
IEEE Transactions on Pattern Analysis and Machine Intelligence,
Volume 33, Number 5, Pages 1121-1134, 2013
|
10.1109/TPAMI.2012.185
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model
for probabilistic topic modeling, which attracts worldwide interest and
touches on many important applications in text mining, computer vision and
computational biology. This paper represents LDA as a factor graph within the
Markov random field (MRF) framework, which enables the classic loopy belief
propagation (BP) algorithm for approximate inference and parameter estimation.
Although two commonly-used approximate inference methods, variational
Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in
learning LDA, the proposed BP is competitive in both speed and accuracy as
validated by encouraging experimental results on four large-scale document data
sets. Furthermore, the BP algorithm has the potential to become a generic
learning scheme for variants of LDA-based topic models. To this end, we show
how to learn two typical variants of LDA-based topic models, namely
author-topic models (ATM) and relational topic models (RTM), using BP based on
the factor graph representation.
|
[
{
"created": "Thu, 15 Sep 2011 19:20:48 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Sep 2011 21:17:41 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Oct 2011 03:17:44 GMT",
"version": "v3"
},
{
"created": "Sat, 24 Mar 2012 12:47:02 GMT",
"version": "v4"
}
] |
2015-03-19
|
[
[
"Zeng",
"Jia",
""
],
[
"Cheung",
"William K.",
""
],
[
"Liu",
"Jiming",
""
]
] |
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents LDA as a factor graph within the Markov random field (MRF) framework, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although two commonly-used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy as validated by encouraging experimental results on four large-scale document data sets. Furthermore, the BP algorithm has the potential to become a generic learning scheme for variants of LDA-based topic models. To this end, we show how to learn two typical variants of LDA-based topic models, namely author-topic models (ATM) and relational topic models (RTM), using BP based on the factor graph representation.
|
2108.09105
|
Pierre-Louis Guhur
|
Pierre-Louis Guhur, Makarand Tapaswi, Shizhe Chen, Ivan Laptev,
Cordelia Schmid
|
Airbert: In-domain Pretraining for Vision-and-Language Navigation
|
To be published on ICCV 2021. Webpage is at
https://airbert-vln.github.io/ linking to our dataset, codes and models
| null | null | null |
cs.CV cs.AI cs.CL cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-and-language navigation (VLN) aims to enable embodied agents to
navigate in realistic environments using natural language instructions. Given
the scarcity of domain-specific training data and the high diversity of image
and language inputs, the generalization of VLN agents to unseen environments
remains challenging. Recent methods explore pretraining to improve
generalization; however, the use of generic image-caption datasets or existing
small-scale VLN environments is suboptimal and results in limited improvements.
In this work, we introduce BnB, a large-scale and diverse in-domain VLN
dataset. We first collect image-caption (IC) pairs from hundreds of thousands
of listings from online rental marketplaces. Using IC pairs we next propose
automatic strategies to generate millions of VLN path-instruction (PI) pairs.
We further propose a shuffling loss that improves the learning of temporal
order inside PI pairs. We use BnB to pretrain our Airbert model, which can be
adapted to discriminative and generative settings, and show that it outperforms
the state of the art on Room-to-Room (R2R) navigation and Remote Referring
Expression (REVERIE) benchmarks. Moreover, our in-domain pretraining
significantly increases performance on a challenging few-shot VLN evaluation,
where we train the model only on VLN instructions from a few houses.
|
[
{
"created": "Fri, 20 Aug 2021 10:58:09 GMT",
"version": "v1"
}
] |
2021-08-23
|
[
[
"Guhur",
"Pierre-Louis",
""
],
[
"Tapaswi",
"Makarand",
""
],
[
"Chen",
"Shizhe",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
Vision-and-language navigation (VLN) aims to enable embodied agents to navigate in realistic environments using natural language instructions. Given the scarcity of domain-specific training data and the high diversity of image and language inputs, the generalization of VLN agents to unseen environments remains challenging. Recent methods explore pretraining to improve generalization; however, the use of generic image-caption datasets or existing small-scale VLN environments is suboptimal and results in limited improvements. In this work, we introduce BnB, a large-scale and diverse in-domain VLN dataset. We first collect image-caption (IC) pairs from hundreds of thousands of listings from online rental marketplaces. Using IC pairs we next propose automatic strategies to generate millions of VLN path-instruction (PI) pairs. We further propose a shuffling loss that improves the learning of temporal order inside PI pairs. We use BnB to pretrain our Airbert model, which can be adapted to discriminative and generative settings, and show that it outperforms the state of the art on Room-to-Room (R2R) navigation and Remote Referring Expression (REVERIE) benchmarks. Moreover, our in-domain pretraining significantly increases performance on a challenging few-shot VLN evaluation, where we train the model only on VLN instructions from a few houses.
|
2405.02782
|
David Wood
|
David A. Wood, Emily Guilhem, Sina Kafiabadi, Ayisha Al Busaidi,
Kishan Dissanayake, Ahmed Hammam, Nina Mansoor, Matthew Townend, Siddharth
Agarwal, Yiran Wei, Asif Mazumder, Gareth J. Barker, Peter Sasieni, Sebastien
Ourselin, James H. Cole, Thomas C. Booth
|
A self-supervised text-vision framework for automated brain abnormality
detection
|
Under Review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial neural networks trained on large, expert-labelled datasets are
considered state-of-the-art for a range of medical image recognition tasks.
However, categorically labelled datasets are time-consuming to generate and
constrain classification to a pre-defined, fixed set of classes. For
neuroradiological applications in particular, this represents a barrier to
clinical adoption. To address these challenges, we present a self-supervised
text-vision framework that learns to detect clinically relevant abnormalities
in brain MRI scans by directly leveraging the rich information contained in
accompanying free-text neuroradiology reports. Our training approach consisted
of two steps. First, a dedicated neuroradiological language model - NeuroBERT -
was trained to generate fixed-dimensional vector representations of
neuroradiology reports (N = 50,523) via domain-specific self-supervised
learning tasks. Next, convolutional neural networks (one per MRI sequence)
learnt to map individual brain scans to their corresponding text vector
representations by optimising a mean square error loss. Once trained, our
text-vision framework can be used to detect abnormalities in unreported brain
MRI examinations by scoring scans against suitable query sentences (e.g.,
'there is an acute stroke', 'there is hydrocephalus' etc.), enabling a range of
classification-based applications including automated triage. Potentially, our
framework could also serve as a clinical decision support tool, not only by
suggesting findings to radiologists and detecting errors in provisional
reports, but also by retrieving and displaying examples of pathologies from
historical examinations that could be relevant to the current case based on
textual descriptors.
|
[
{
"created": "Sun, 5 May 2024 01:51:58 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jun 2024 01:01:51 GMT",
"version": "v2"
}
] |
2024-06-13
|
[
[
"Wood",
"David A.",
""
],
[
"Guilhem",
"Emily",
""
],
[
"Kafiabadi",
"Sina",
""
],
[
"Busaidi",
"Ayisha Al",
""
],
[
"Dissanayake",
"Kishan",
""
],
[
"Hammam",
"Ahmed",
""
],
[
"Mansoor",
"Nina",
""
],
[
"Townend",
"Matthew",
""
],
[
"Agarwal",
"Siddharth",
""
],
[
"Wei",
"Yiran",
""
],
[
"Mazumder",
"Asif",
""
],
[
"Barker",
"Gareth J.",
""
],
[
"Sasieni",
"Peter",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Cole",
"James H.",
""
],
[
"Booth",
"Thomas C.",
""
]
] |
Artificial neural networks trained on large, expert-labelled datasets are considered state-of-the-art for a range of medical image recognition tasks. However, categorically labelled datasets are time-consuming to generate and constrain classification to a pre-defined, fixed set of classes. For neuroradiological applications in particular, this represents a barrier to clinical adoption. To address these challenges, we present a self-supervised text-vision framework that learns to detect clinically relevant abnormalities in brain MRI scans by directly leveraging the rich information contained in accompanying free-text neuroradiology reports. Our training approach consisted of two steps. First, a dedicated neuroradiological language model - NeuroBERT - was trained to generate fixed-dimensional vector representations of neuroradiology reports (N = 50,523) via domain-specific self-supervised learning tasks. Next, convolutional neural networks (one per MRI sequence) learnt to map individual brain scans to their corresponding text vector representations by optimising a mean square error loss. Once trained, our text-vision framework can be used to detect abnormalities in unreported brain MRI examinations by scoring scans against suitable query sentences (e.g., 'there is an acute stroke', 'there is hydrocephalus' etc.), enabling a range of classification-based applications including automated triage. Potentially, our framework could also serve as a clinical decision support tool, not only by suggesting findings to radiologists and detecting errors in provisional reports, but also by retrieving and displaying examples of pathologies from historical examinations that could be relevant to the current case based on textual descriptors.
|
2207.08536
|
Xi Li
|
Zequn Qin, Jingyu Chen, Chao Chen, Xiaozhi Chen, Xi Li
|
UniFusion: Unified Multi-view Fusion Transformer for Spatial-Temporal
Representation in Bird's-Eye-View
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bird's eye view (BEV) representation is a new perception formulation for
autonomous driving, which is based on spatial fusion. Further, temporal fusion
is also introduced in BEV representation and has achieved great success. In this work,
we propose a new method that unifies both spatial and temporal fusion and
merges them into a unified mathematical formulation. The unified fusion not
only provides a new perspective on BEV fusion but also brings new
capabilities. With the proposed unified spatial-temporal fusion, our method
could support long-range fusion, which is hard to achieve in conventional BEV
methods. Moreover, the BEV fusion in our work is temporal-adaptive and the
weights of temporal fusion are learnable. In contrast, conventional methods
mainly use fixed and equal weights for temporal fusion. Besides, the proposed
unified fusion could avoid information lost in conventional BEV fusion methods
and make full use of features. Extensive experiments and ablation studies on
the NuScenes dataset show the effectiveness of the proposed method and our
method gains the state-of-the-art performance in the map segmentation task.
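The contrast between fixed, equal temporal weights and temporal-adaptive, learnable weights can be illustrated with a minimal sketch. This is not the paper's transformer; the softmax-weighted averaging below is a hypothetical stand-in for learned temporal fusion weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_bev(frames, logits=None):
    # frames: (T, H, W, C) BEV feature maps from T time steps.
    # logits=None reproduces the fixed, equal-weight averaging of
    # conventional temporal fusion; per-frame logits (a stand-in for
    # learned weights) make the fusion temporal-adaptive.
    T = frames.shape[0]
    w = np.full(T, 1.0 / T) if logits is None else softmax(logits)
    return np.tensordot(w, frames, axes=1)

frames = np.stack([np.full((2, 2, 3), t, dtype=float) for t in range(4)])
equal = fuse_bev(frames)                                         # plain mean over time
adaptive = fuse_bev(frames, logits=np.array([0., 0., 0., 10.]))  # leans on the latest frame
```

Making the logits a function of the features themselves is one way such a fusion becomes adaptive per scene rather than per dataset.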
|
[
{
"created": "Mon, 18 Jul 2022 11:59:10 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Mar 2023 03:12:09 GMT",
"version": "v2"
}
] |
2023-03-21
|
[
[
"Qin",
"Zequn",
""
],
[
"Chen",
"Jingyu",
""
],
[
"Chen",
"Chao",
""
],
[
"Chen",
"Xiaozhi",
""
],
[
"Li",
"Xi",
""
]
] |
Bird's eye view (BEV) representation is a new perception formulation for autonomous driving, which is based on spatial fusion. Further, temporal fusion has also been introduced into BEV representation with great success. In this work, we propose a new method that unifies both spatial and temporal fusion and merges them into a unified mathematical formulation. The unified fusion not only provides a new perspective on BEV fusion but also brings new capabilities. With the proposed unified spatial-temporal fusion, our method supports long-range fusion, which is hard to achieve in conventional BEV methods. Moreover, the BEV fusion in our work is temporal-adaptive: the weights of temporal fusion are learnable, whereas conventional methods mainly use fixed, equal weights. In addition, the proposed unified fusion avoids the information loss of conventional BEV fusion methods and makes full use of features. Extensive experiments and ablation studies on the NuScenes dataset show the effectiveness of the proposed method, which achieves state-of-the-art performance on the map segmentation task.
|
2309.13808
|
EPTCS
|
Dafina Trufa\c{s} (University of Bucharest), Ioan Teodorescu
(University of Bucharest), Denisa Diaconescu (University of Bucharest),
Traian \c{S}erb\u{a}nu\c{t}\u{a} (University of Bucharest), Vlad Zamfir
(independent researcher)
|
Asynchronous Muddy Children Puzzle (work in progress)
|
In Proceedings FROM 2023, arXiv:2309.12959
|
EPTCS 389, 2023, pp. 152-166
|
10.4204/EPTCS.389.13
| null |
cs.LO cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
In this work-in-progress paper we explore using the recently introduced VLSM
formalism to define and reason about the dynamics of agent-based systems. To
this aim we use VLSMs to formally present several possible approaches to
modeling the interactions in the Muddy Children Puzzle as protocols that reach
consensus asynchronously.
|
[
{
"created": "Mon, 25 Sep 2023 01:24:21 GMT",
"version": "v1"
}
] |
2023-09-26
|
[
[
"Trufaş",
"Dafina",
"",
"University of Bucharest"
],
[
"Teodorescu",
"Ioan",
"",
"University of Bucharest"
],
[
"Diaconescu",
"Denisa",
"",
"University of Bucharest"
],
[
"Şerbănuţă",
"Traian",
"",
"University of Bucharest"
],
[
"Zamfir",
"Vlad",
"",
"independent researcher"
]
] |
In this work-in-progress paper we explore using the recently introduced VLSM formalism to define and reason about the dynamics of agent-based systems. To this aim we use VLSMs to formally present several possible approaches to modeling the interactions in the Muddy Children Puzzle as protocols that reach consensus asynchronously.
|
1211.6940
|
Keehang Kwon
|
Keehang Kwon and Daeseong Kang
|
Choice Disjunctive Queries in Logic Programming
|
IEICE transaction on Information and Systems (to appear)
|
IEICE transaction on Information and Systems vol.E106-D,No.3, 2023
| null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
One of the long-standing research problems in logic programming is to treat
the cut predicate in a logical, high-level way. We argue that this problem can
be solved by adopting linear logic and choice-disjunctive goal formulas of the
form $G_0 \add G_1$ where $G_0, G_1$ are goals. These goals have the following
intended semantics: $choose$ the true disjunct $G_i$ and execute $G_i$ where $i
(= 0\ {\rm or}\ 1)$, while $discarding$ the unchosen disjunct. Note that only
one goal can remain alive during execution. These goals thus allow us to
specify mutually exclusive tasks in a high-level way.
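A rough operational reading of a choice-disjunctive goal $G_0 \add G_1$ can be given with goals modelled as boolean thunks. This only illustrates the intended commit-and-discard semantics, not the linear-logic machinery itself.

```python
# Goals are zero-argument functions returning True/False; the machine commits
# to the first disjunct that succeeds and discards the other, so the unchosen
# branch can never be retried by backtracking.
def choice(g0, g1):
    if g0():
        return True          # commit to G0; G1 is discarded
    return g1()              # otherwise commit to G1

# Two mutually exclusive tasks: exactly one of them runs.
is_weekend = lambda: False
handle_weekday = lambda: True
result = choice(is_weekend, handle_weekday)
```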
|
[
{
"created": "Thu, 29 Nov 2012 15:04:31 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Sep 2014 03:30:04 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Sep 2015 06:27:20 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Oct 2022 04:28:33 GMT",
"version": "v4"
}
] |
2023-01-31
|
[
[
"Kwon",
"Keehang",
""
],
[
"Kang",
"Daeseong",
""
]
] |
One of the long-standing research problems in logic programming is to treat the cut predicate in a logical, high-level way. We argue that this problem can be solved by adopting linear logic and choice-disjunctive goal formulas of the form $G_0 \add G_1$ where $G_0, G_1$ are goals. These goals have the following intended semantics: $choose$ the true disjunct $G_i$ and execute $G_i$ where $i (= 0\ {\rm or}\ 1)$, while $discarding$ the unchosen disjunct. Note that only one goal can remain alive during execution. These goals thus allow us to specify mutually exclusive tasks in a high-level way.
|
1508.02439
|
Di Wang
|
Di Wang, Satish Rao, Michael W. Mahoney
|
Unified Acceleration Method for Packing and Covering Problems via
Diameter Reduction
|
Fixed typo in packing LP formulation (page 1), and wrong citation in
the discussion of earlier works on page 2
| null | null | null |
cs.DS cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The linear coupling method was introduced recently by Allen-Zhu and Orecchia
for solving convex optimization problems with first order methods, and it
provides a conceptually simple way to integrate a gradient descent step and
mirror descent step in each iteration. The high-level approach of the linear
coupling method is very flexible, and it has shown initial promise by providing
improved algorithms for packing and covering linear programs. Somewhat
surprisingly, however, while the dependence of the convergence rate on the
error parameter $\epsilon$ for packing problems was improved to
$O(1/\epsilon)$, which corresponds to what accelerated gradient methods are
designed to achieve, the dependence for covering problems was only improved to
$O(1/\epsilon^{1.5})$, and even that required a different more complicated
algorithm. Given the close connections between packing and covering problems
and since previous algorithms for these very related problems have led to the
same $\epsilon$ dependence, this discrepancy is surprising, and it leaves open
the question of the exact role that the linear coupling is playing in
coordinating the complementary gradient and mirror descent step of the
algorithm. In this paper, we clarify these issues for linear coupling
algorithms for packing and covering linear programs, illustrating that the
linear coupling method can lead to improved $O(1/\epsilon)$ dependence for both
packing and covering problems in a unified manner, i.e., with the same
algorithm and almost identical analysis. Our main technical result is a novel
diameter reduction method for covering problems that is of independent interest
and that may be useful in applying the accelerated linear coupling method to
other combinatorial problems.
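The linear coupling template the paper builds on can be sketched in the unconstrained Euclidean case, where the mirror-descent step is itself a gradient step and the scheme reduces to an accelerated-gradient-style method. The step sizes follow the standard Allen-Zhu and Orecchia choices; the quadratic objective is an invented example, not a packing or covering LP.

```python
import numpy as np

# Invented smooth convex objective f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)          # smoothness constant of f
x = y = z = np.zeros(2)
for k in range(500):
    alpha = (k + 2) / (2 * L)
    tau = 1.0 / (alpha * L)             # coupling coefficient, = 2 / (k + 2)
    x = tau * z + (1 - tau) * y         # linearly couple the two sequences
    g = grad(x)
    y = x - g / L                       # gradient-descent step from x
    z = z - alpha * g                   # (Euclidean) mirror-descent step

x_star = np.linalg.solve(A, b)          # here A is invertible, so f(x_star) = 0
```

The coupling coefficient tau balances the two steps each iteration; the paper's contribution concerns how a diameter reduction makes this template give the same $O(1/\epsilon)$ rate for covering as for packing.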
|
[
{
"created": "Mon, 10 Aug 2015 21:56:20 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2015 06:41:38 GMT",
"version": "v2"
}
] |
2015-10-07
|
[
[
"Wang",
"Di",
""
],
[
"Rao",
"Satish",
""
],
[
"Mahoney",
"Michael W.",
""
]
] |
The linear coupling method was introduced recently by Allen-Zhu and Orecchia for solving convex optimization problems with first order methods, and it provides a conceptually simple way to integrate a gradient descent step and mirror descent step in each iteration. The high-level approach of the linear coupling method is very flexible, and it has shown initial promise by providing improved algorithms for packing and covering linear programs. Somewhat surprisingly, however, while the dependence of the convergence rate on the error parameter $\epsilon$ for packing problems was improved to $O(1/\epsilon)$, which corresponds to what accelerated gradient methods are designed to achieve, the dependence for covering problems was only improved to $O(1/\epsilon^{1.5})$, and even that required a different more complicated algorithm. Given the close connections between packing and covering problems and since previous algorithms for these very related problems have led to the same $\epsilon$ dependence, this discrepancy is surprising, and it leaves open the question of the exact role that the linear coupling is playing in coordinating the complementary gradient and mirror descent step of the algorithm. In this paper, we clarify these issues for linear coupling algorithms for packing and covering linear programs, illustrating that the linear coupling method can lead to improved $O(1/\epsilon)$ dependence for both packing and covering problems in a unified manner, i.e., with the same algorithm and almost identical analysis. Our main technical result is a novel diameter reduction method for covering problems that is of independent interest and that may be useful in applying the accelerated linear coupling method to other combinatorial problems.
|
1812.10037
|
Jianpeng Cheng
|
Jianpeng Cheng and Siva Reddy and Mirella Lapata
|
Building a Neural Semantic Parser from a Domain Ontology
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic parsing is the task of converting natural language utterances into
machine interpretable meaning representations which can be executed against a
real-world environment such as a database. Scaling semantic parsing to
arbitrary domains faces two interrelated challenges: obtaining broad coverage
training data effectively and cheaply; and developing a model that generalizes
to compositional utterances and complex intentions. We address these challenges
with a framework that allows us to elicit training data from a domain ontology
and bootstrap a neural parser which recursively builds derivations of logical
forms. In our framework meaning representations are described by sequences of
natural language templates, where each template corresponds to a decomposed
fragment of the underlying meaning representation. Although artificial,
templates can be understood and paraphrased by humans to create natural
utterances, resulting in parallel triples of utterances, meaning
representations, and their decompositions. These allow us to train a neural
semantic parser which learns to compose rules in deriving meaning
representations. We crowdsource training data on six domains, covering both
single-turn utterances which exhibit rich compositionality, and sequential
utterances where a complex task is procedurally performed in steps. We then
develop neural semantic parsers which perform such compositional tasks. In
general, our approach allows us to deploy neural semantic parsers quickly and
cheaply from a given domain ontology.
|
[
{
"created": "Tue, 25 Dec 2018 05:30:18 GMT",
"version": "v1"
}
] |
2018-12-27
|
[
[
"Cheng",
"Jianpeng",
""
],
[
"Reddy",
"Siva",
""
],
[
"Lapata",
"Mirella",
""
]
] |
Semantic parsing is the task of converting natural language utterances into machine interpretable meaning representations which can be executed against a real-world environment such as a database. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad coverage training data effectively and cheaply; and developing a model that generalizes to compositional utterances and complex intentions. We address these challenges with a framework that allows us to elicit training data from a domain ontology and bootstrap a neural parser which recursively builds derivations of logical forms. In our framework meaning representations are described by sequences of natural language templates, where each template corresponds to a decomposed fragment of the underlying meaning representation. Although artificial, templates can be understood and paraphrased by humans to create natural utterances, resulting in parallel triples of utterances, meaning representations, and their decompositions. These allow us to train a neural semantic parser which learns to compose rules in deriving meaning representations. We crowdsource training data on six domains, covering both single-turn utterances which exhibit rich compositionality, and sequential utterances where a complex task is procedurally performed in steps. We then develop neural semantic parsers which perform such compositional tasks. In general, our approach allows us to deploy neural semantic parsers quickly and cheaply from a given domain ontology.
|
2404.08259
|
Udo Kruschwitz
|
Wan-Hua Her and Udo Kruschwitz
|
Investigating Neural Machine Translation for Low-Resource Languages:
Using Bavarian as a Case Study
|
Preprint accepted at the 3rd Annual Meeting of the Special Interest
Group on Under-resourced Languages (SIGUL 2024)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Machine Translation has made impressive progress in recent years offering
close to human-level performance on many languages, but studies have primarily
focused on high-resource languages with broad online presence and resources.
With the help of growing Large Language Models, more and more low-resource
languages achieve better results through the presence of other languages.
However, studies have shown that not all low-resource languages can benefit
from multilingual systems, especially those with insufficient training and
evaluation data. In this paper, we revisit state-of-the-art Neural Machine
Translation techniques to develop automatic translation systems between German
and Bavarian. We investigate conditions of low-resource languages such as data
scarcity and parameter sensitivity and focus on refined solutions that combat
low-resource difficulties and creative solutions such as harnessing language
similarity. Our experiment entails applying Back-translation and Transfer
Learning to automatically generate more training data and achieve higher
translation performance. We demonstrate the noisiness of the data and present
our approach to extensive text preprocessing. Evaluation was conducted using
combined metrics: BLEU, chrF and TER. Statistical significance tests with
Bonferroni correction reveal surprisingly strong baseline systems and show
that Back-translation leads to significant improvement. Furthermore, we present a
qualitative analysis of translation errors and system limitations.
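Schematically, Back-translation augments the training data as below. `translate_bar_to_de` is a hypothetical stand-in for a trained reverse (Bavarian to German) model, and the toy replacement rule is invented purely for illustration.

```python
# A reverse model translates monolingual target-side (Bavarian) text back
# into the source language; the synthetic German source is paired with the
# authentic Bavarian target and added to the training set.
def back_translate(mono_bavarian, translate_bar_to_de):
    synthetic_pairs = []
    for sent in mono_bavarian:
        src = translate_bar_to_de(sent)       # synthetic source sentence
        synthetic_pairs.append((src, sent))   # (German, Bavarian) training pair
    return synthetic_pairs

toy_reverse = lambda s: s.replace("Servus", "Hallo")   # invented toy "model"
pairs = back_translate(["Servus beinand!"], toy_reverse)
```

Because the target side of each synthetic pair is authentic text, the forward model still learns to produce fluent Bavarian even when the synthetic German source is noisy.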
|
[
{
"created": "Fri, 12 Apr 2024 06:16:26 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Her",
"Wan-Hua",
""
],
[
"Kruschwitz",
"Udo",
""
]
] |
Machine Translation has made impressive progress in recent years offering close to human-level performance on many languages, but studies have primarily focused on high-resource languages with broad online presence and resources. With the help of growing Large Language Models, more and more low-resource languages achieve better results through the presence of other languages. However, studies have shown that not all low-resource languages can benefit from multilingual systems, especially those with insufficient training and evaluation data. In this paper, we revisit state-of-the-art Neural Machine Translation techniques to develop automatic translation systems between German and Bavarian. We investigate conditions of low-resource languages such as data scarcity and parameter sensitivity and focus on refined solutions that combat low-resource difficulties and creative solutions such as harnessing language similarity. Our experiment entails applying Back-translation and Transfer Learning to automatically generate more training data and achieve higher translation performance. We demonstrate the noisiness of the data and present our approach to extensive text preprocessing. Evaluation was conducted using combined metrics: BLEU, chrF and TER. Statistical significance tests with Bonferroni correction reveal surprisingly strong baseline systems and show that Back-translation leads to significant improvement. Furthermore, we present a qualitative analysis of translation errors and system limitations.
|
1303.2211
|
Nilanjan Dey
|
Nilanjan Dey, Suvojit Acharjee, Debalina Biswas, Achintya Das, Sheli
Sinha Chaudhuri
|
Medical Information Embedding in Compressed Watermarked Intravascular
Ultrasound Video
|
Pages-7 Fig.-15 Tables-2
|
Scientific Bulletin of the Politehnica University of Timisoara -
Transactions on Electronics and Communications p-ISSN 1583-3380 , vol.
57(71), no. 2, 2012
| null | null |
cs.MM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the medical field, intravascular ultrasound (IVUS) is a tomographic imaging
modality that can identify the boundaries of the different layers of blood
vessels. IVUS can detect myocardial infarction (heart attack) that remains
ignored and unattended when only angioplasty is done. During the past decade,
it has become easier for individuals or groups to copy and transmit digital
information without the permission of the owner. To strengthen authentication
and copyright protection, digital watermarking, an information-hiding
technique, was introduced. Achieving watermarking with a small amount of
distortion in biomedical data is a challenging task. A watermark can be
embedded into an image or a video. Since video data carries a huge amount of
information, a large storage area would be needed, which is not feasible;
motion-vector-based video compression is therefore used to reduce the size. In
this paper, an Electronic Patient Record (EPR) is embedded as a watermark
within an IVUS video and the motion vectors are then calculated. The proposed
method proves robust, as the extracted watermark has a good PSNR value and low
MSE.
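As a toy stand-in for the scheme above (the paper embeds the EPR into compressed IVUS video via motion vectors; plain least-significant-bit embedding into a single grayscale frame is used here only to make the embed/extract round trip and the PSNR/MSE fidelity measures concrete):

```python
import numpy as np

def embed_lsb(frame, text):
    # Write the text's bits into the least-significant bits of the frame.
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    out = frame.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(frame.shape)

def extract_lsb(frame, n_chars):
    bits = frame.ravel()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

def psnr_mse(orig, marked):
    mse = np.mean((orig.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse), mse

frame = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed_lsb(frame, "patient-0042")       # invented EPR payload
psnr, mse = psnr_mse(frame, marked)             # fidelity of the watermarked frame
```

Since each embedded bit changes a pixel by at most one grey level, the MSE stays tiny and the PSNR correspondingly high, which is the fidelity trade-off the abstract refers to.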
|
[
{
"created": "Sat, 9 Mar 2013 14:08:23 GMT",
"version": "v1"
}
] |
2013-03-12
|
[
[
"Dey",
"Nilanjan",
""
],
[
"Acharjee",
"Suvojit",
""
],
[
"Biswas",
"Debalina",
""
],
[
"Das",
"Achintya",
""
],
[
"Chaudhuri",
"Sheli Sinha",
""
]
] |
In the medical field, intravascular ultrasound (IVUS) is a tomographic imaging modality that can identify the boundaries of the different layers of blood vessels. IVUS can detect myocardial infarction (heart attack) that remains ignored and unattended when only angioplasty is done. During the past decade, it has become easier for individuals or groups to copy and transmit digital information without the permission of the owner. To strengthen authentication and copyright protection, digital watermarking, an information-hiding technique, was introduced. Achieving watermarking with a small amount of distortion in biomedical data is a challenging task. A watermark can be embedded into an image or a video. Since video data carries a huge amount of information, a large storage area would be needed, which is not feasible; motion-vector-based video compression is therefore used to reduce the size. In this paper, an Electronic Patient Record (EPR) is embedded as a watermark within an IVUS video and the motion vectors are then calculated. The proposed method proves robust, as the extracted watermark has a good PSNR value and low MSE.
|
1911.07292
|
Hufei Zhu
|
Hufei Zhu
|
Two Efficient Ridge Solutions for the Incremental Broad Learning System
on Added Inputs
|
arXiv admin note: text overlap with arXiv:1911.04872
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the recursive and square-root BLS algorithms to improve
the original BLS for newly added inputs, which utilize the inverse and inverse
Cholesky factor of the Hermitian matrix in the ridge inverse, respectively, to
update the ridge solution. The recursive BLS updates the inverse by the matrix
inversion lemma, while the square-root BLS updates the upper-triangular inverse
Cholesky factor by multiplying it with an upper-triangular intermediate matrix.
When the added p training samples are more than the total k nodes in the
network, i.e., p>k, the inverse of a sum of matrices is applied to take a
smaller matrix inversion or inverse Cholesky factorization. For the distributed
BLS with data-parallelism, we introduce the parallel implementation of the
square-root BLS, which is deduced from the parallel implementation of the
inverse Cholesky factorization.
The original BLS, based on the generalized inverse with ridge regression,
assumes the ridge parameter lambda->0 in the ridge inverse. When lambda->0 is
not satisfied, numerical experiments on the MNIST and NORB datasets show
that both proposed ridge solutions improve the testing accuracy of the
original BLS, and the improvement becomes more significant as lambda grows.
On the other hand, compared to the original BLS, both proposed BLS algorithms
theoretically require lower complexity, and are significantly faster in the
simulations on the MNIST dataset. The speedups in total training time of the
recursive and square-root BLS algorithms over the original BLS are 4.41 and
6.92 respectively when p > k, and 2.80 and 1.59 respectively when p < k.
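The role of the matrix inversion lemma in such a recursive update can be made concrete with a hedged sketch (not the paper's exact algorithm; sizes and data are invented): when p new rows arrive, the Woodbury identity refreshes the inverse of the k x k Hermitian matrix with only a p x p solve, and the updated ridge solution matches the one recomputed from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)
k, lam = 5, 0.1
A = rng.standard_normal((20, k)); Y = rng.standard_normal((20, 2))
K_inv = np.linalg.inv(A.T @ A + lam * np.eye(k))     # inverse kept across updates

Ap = rng.standard_normal((3, k)); Yp = rng.standard_normal((3, 2))   # p = 3 new rows
# Matrix inversion lemma (Woodbury):
# (K + Ap^T Ap)^-1 = K^-1 - K^-1 Ap^T (I + Ap K^-1 Ap^T)^-1 Ap K^-1
S = np.eye(3) + Ap @ K_inv @ Ap.T                    # only a p x p system
K_inv = K_inv - K_inv @ Ap.T @ np.linalg.solve(S, Ap @ K_inv)

A_all = np.vstack([A, Ap]); Y_all = np.vstack([Y, Yp])
W_inc = K_inv @ A_all.T @ Y_all                      # incrementally updated ridge weights
W_dir = np.linalg.solve(A_all.T @ A_all + lam * np.eye(k), A_all.T @ Y_all)
```

The update costs O(k^2 p + p^3) instead of the O(k^3) of re-inverting, which is the source of the speedups the abstract reports.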
|
[
{
"created": "Tue, 12 Nov 2019 14:19:52 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 04:36:01 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Apr 2021 06:34:18 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Nov 2021 14:12:51 GMT",
"version": "v4"
},
{
"created": "Wed, 25 Jan 2023 02:35:55 GMT",
"version": "v5"
}
] |
2023-01-26
|
[
[
"Zhu",
"Hufei",
""
]
] |
This paper proposes the recursive and square-root BLS algorithms to improve the original BLS for newly added inputs, which utilize the inverse and inverse Cholesky factor of the Hermitian matrix in the ridge inverse, respectively, to update the ridge solution. The recursive BLS updates the inverse by the matrix inversion lemma, while the square-root BLS updates the upper-triangular inverse Cholesky factor by multiplying it with an upper-triangular intermediate matrix. When the added p training samples are more than the total k nodes in the network, i.e., p>k, the inverse of a sum of matrices is applied to take a smaller matrix inversion or inverse Cholesky factorization. For the distributed BLS with data-parallelism, we introduce the parallel implementation of the square-root BLS, which is deduced from the parallel implementation of the inverse Cholesky factorization. The original BLS, based on the generalized inverse with ridge regression, assumes the ridge parameter lambda->0 in the ridge inverse. When lambda->0 is not satisfied, numerical experiments on the MNIST and NORB datasets show that both proposed ridge solutions improve the testing accuracy of the original BLS, and the improvement becomes more significant as lambda grows. On the other hand, compared to the original BLS, both proposed BLS algorithms theoretically require lower complexity, and are significantly faster in the simulations on the MNIST dataset. The speedups in total training time of the recursive and square-root BLS algorithms over the original BLS are 4.41 and 6.92 respectively when p > k, and 2.80 and 1.59 respectively when p < k.
|
2103.07658
|
Mallikarjun Byrasandra Ramalinga Reddy
|
Mallikarjun B R, Ayush Tewari, Abdallah Dib, Tim Weyrich, Bernd
Bickel, Hans-Peter Seidel, Hanspeter Pfister, Wojciech Matusik, Louis
Chevallier, Mohamed Elgharib, Christian Theobalt
|
PhotoApp: Photorealistic Appearance Editing of Head Portraits
|
http://gvv.mpi-inf.mpg.de/projects/PhotoApp/
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photorealistic editing of portraits is a challenging task as humans are very
sensitive to inconsistencies in faces. We present an approach for high-quality
intuitive editing of the camera viewpoint and scene illumination in a portrait
image. This requires our method to capture and control the full reflectance
field of the person in the image. Most editing approaches rely on supervised
learning using training data captured with setups such as light and camera
stages. Such datasets are expensive to acquire, not readily available and do
not capture all the rich variations of in-the-wild portrait images. In
addition, most supervised approaches only focus on relighting, and do not allow
camera viewpoint editing. Thus, they only capture and control a subset of the
reflectance field. Recently, portrait editing has been demonstrated by
operating in the generative model space of StyleGAN. While such approaches do
not require direct supervision, there is a significant loss of quality when
compared to the supervised approaches. In this paper, we present a method which
learns from limited supervised training data. The training images only include
people in a fixed neutral expression with eyes closed, without much hair or
background variations. Each person is captured under 150 one-light-at-a-time
conditions and under 8 camera poses. Instead of training directly in the image
space, we design a supervised problem which learns transformations in the
latent space of StyleGAN. This combines the best of supervised learning and
generative adversarial modeling. We show that the StyleGAN prior allows for
generalisation to different expressions, hairstyles and backgrounds. This
produces high-quality photorealistic results for in-the-wild images and
significantly outperforms existing methods. Our approach can edit the
illumination and pose simultaneously, and runs at interactive rates.
|
[
{
"created": "Sat, 13 Mar 2021 08:59:49 GMT",
"version": "v1"
},
{
"created": "Thu, 13 May 2021 17:59:43 GMT",
"version": "v2"
}
] |
2021-05-14
|
[
[
"R",
"Mallikarjun B",
""
],
[
"Tewari",
"Ayush",
""
],
[
"Dib",
"Abdallah",
""
],
[
"Weyrich",
"Tim",
""
],
[
"Bickel",
"Bernd",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Matusik",
"Wojciech",
""
],
[
"Chevallier",
"Louis",
""
],
[
"Elgharib",
"Mohamed",
""
],
[
"Theobalt",
"Christian",
""
]
] |
Photorealistic editing of portraits is a challenging task as humans are very sensitive to inconsistencies in faces. We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image. This requires our method to capture and control the full reflectance field of the person in the image. Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages. Such datasets are expensive to acquire, not readily available and do not capture all the rich variations of in-the-wild portrait images. In addition, most supervised approaches only focus on relighting, and do not allow camera viewpoint editing. Thus, they only capture and control a subset of the reflectance field. Recently, portrait editing has been demonstrated by operating in the generative model space of StyleGAN. While such approaches do not require direct supervision, there is a significant loss of quality when compared to the supervised approaches. In this paper, we present a method which learns from limited supervised training data. The training images only include people in a fixed neutral expression with eyes closed, without much hair or background variations. Each person is captured under 150 one-light-at-a-time conditions and under 8 camera poses. Instead of training directly in the image space, we design a supervised problem which learns transformations in the latent space of StyleGAN. This combines the best of supervised learning and generative adversarial modeling. We show that the StyleGAN prior allows for generalisation to different expressions, hairstyles and backgrounds. This produces high-quality photorealistic results for in-the-wild images and significantly outperforms existing methods. Our approach can edit the illumination and pose simultaneously, and runs at interactive rates.
|
2305.15805
|
Sotiris Anagnostidis
|
Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci,
Aurelien Lucchi, Thomas Hofmann
|
Dynamic Context Pruning for Efficient and Interpretable Autoregressive
Transformers
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Autoregressive Transformers adopted in Large Language Models (LLMs) are hard
to scale to long sequences. Despite several works trying to reduce their
computational cost, most LLMs still adopt attention layers between all pairs
of tokens in the sequence, thus incurring a quadratic cost. In this study, we
present a novel approach that dynamically prunes contextual information while
preserving the model's expressiveness, resulting in reduced memory and
computational requirements during inference. Our method employs a learnable
mechanism that determines which uninformative tokens can be dropped from the
context at any point across the generation process. By doing so, our approach
not only addresses performance concerns but also enhances interpretability,
providing valuable insight into the model's decision-making process. Our
technique can be applied to existing pre-trained models through a
straightforward fine-tuning process, and the pruning strength can be specified
by a sparsity parameter. Notably, our empirical findings demonstrate that we
can effectively prune up to 80\% of the context without significant performance
degradation on downstream tasks, offering a valuable tool for mitigating
inference costs. Our reference implementation achieves up to $2\times$ increase
in inference throughput and even greater memory savings.
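A minimal numpy sketch of the idea, with random scores standing in for the learnable dropping mechanism (the names and the quantile thresholding are assumptions for illustration, not the paper's method):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pruned_attention(q, K, V, keep_scores, sparsity=0.5):
    # Drop the lowest-scoring fraction of context tokens, then attend
    # only over the survivors; keys/values for dropped tokens need not
    # be stored, which is where the memory saving comes from.
    keep = keep_scores >= np.quantile(keep_scores, sparsity)
    att = softmax((q @ K[keep].T) / np.sqrt(q.size))
    return att @ V[keep], keep

rng = np.random.default_rng(3)
T, d = 8, 4
K = rng.standard_normal((T, d)); V = rng.standard_normal((T, d))
q = rng.standard_normal(d)
scores = rng.standard_normal(T)          # invented per-token keep scores
out, keep = pruned_attention(q, K, V, scores, sparsity=0.5)
```

In the paper's setting the scores are produced by a learned mechanism and tokens are dropped permanently as generation proceeds; here `sparsity` plays the role of the tunable pruning-strength parameter.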
|
[
{
"created": "Thu, 25 May 2023 07:39:41 GMT",
"version": "v1"
},
{
"created": "Sun, 28 May 2023 12:11:11 GMT",
"version": "v2"
},
{
"created": "Fri, 31 May 2024 14:02:24 GMT",
"version": "v3"
}
] |
2024-06-03
|
[
[
"Anagnostidis",
"Sotiris",
""
],
[
"Pavllo",
"Dario",
""
],
[
"Biggio",
"Luca",
""
],
[
"Noci",
"Lorenzo",
""
],
[
"Lucchi",
"Aurelien",
""
],
[
"Hofmann",
"Thomas",
""
]
] |
Autoregressive Transformers adopted in Large Language Models (LLMs) are hard to scale to long sequences. Despite several works trying to reduce their computational cost, most LLMs still adopt attention layers between all pairs of tokens in the sequence, thus incurring a quadratic cost. In this study, we present a novel approach that dynamically prunes contextual information while preserving the model's expressiveness, resulting in reduced memory and computational requirements during inference. Our method employs a learnable mechanism that determines which uninformative tokens can be dropped from the context at any point across the generation process. By doing so, our approach not only addresses performance concerns but also enhances interpretability, providing valuable insight into the model's decision-making process. Our technique can be applied to existing pre-trained models through a straightforward fine-tuning process, and the pruning strength can be specified by a sparsity parameter. Notably, our empirical findings demonstrate that we can effectively prune up to 80\% of the context without significant performance degradation on downstream tasks, offering a valuable tool for mitigating inference costs. Our reference implementation achieves up to $2\times$ increase in inference throughput and even greater memory savings.
|
0908.0980
|
R Doomun
|
Syed S. Rizvi, Khaled M. Elleithy, Aasia Riasat
|
Deterministic Formulization of SNR for Wireless Multiuser DS-CDMA
Networks
|
9 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS July 2009, ISSN 1947 5500, Impact Factor 0.423
|
International Journal of Computer Science and Information
Security, IJCSIS, Vol. 3, No. 1, July 2009, USA
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless multiuser receivers suffer from relatively high computational
complexity, which prevents widespread use of this technique. In addition, one
of the main characteristics of multi-channel communications that can severely
degrade performance is the inconsistent and low values of SNR, which result in
high BER and poor channel capacity. It has been shown that the computational
complexity of a multiuser receiver can be reduced by using the transformation
matrix (TM) algorithm [4]. In this paper, we provide a quantification of SNR
based on the computational complexity of the TM algorithm. We show that the
reduction in complexity results in high and consistent values of SNR that can
consequently be used to achieve a desirable BER performance, and our
simulation results confirm that such values are attainable. The performance
measure adopted in this paper is the consistency of the SNR values.
|
[
{
"created": "Sun, 9 Aug 2009 06:57:39 GMT",
"version": "v1"
}
] |
2009-08-10
|
[
[
"Rizvi",
"Syed S.",
""
],
[
"Elleithy",
"Khaled M.",
""
],
[
"Riasat",
"Aasia",
""
]
] |
Wireless Multiuser receivers suffer from their relatively higher computational complexity that prevents widespread use of this technique. In addition, one of the main characteristics of multi-channel communications that can severely degrade the performance is the inconsistent and low values of SNR that result in high BER and poor channel capacity. It has been shown that the computational complexity of a multiuser receiver can be reduced by using the transformation matrix (TM) algorithm [4]. In this paper, we provide quantification of SNR based on the computational complexity of TM algorithm. We show that the reduction of complexity results high and consistent values of SNR that can consequently be used to achieve a desirable BER performance. In addition, our simulation results suggest that the high and consistent values of SNR can be achieved for a desirable BER performance. The performance measure adopted in this paper is the consistent values of SNR.
|
2210.16083
|
JunKyu Lee
|
JunKyu Lee, Blesson Varghese, Hans Vandierendonck
|
ROMA: Run-Time Object Detection To Maximize Real-Time Accuracy
|
Accepted at the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV) 2023
| null |
10.1109/WACV56688.2023.00634
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper analyzes the effects of dynamically varying video contents and
detection latency on the real-time detection accuracy of a detector and
proposes a new run-time accuracy variation model, ROMA, based on the findings
from the analysis. ROMA is designed to select an optimal detector out of a set
of detectors in real time without label information to maximize real-time
object detection accuracy. ROMA utilizing four YOLOv4 detectors on an NVIDIA
Jetson Nano shows real-time accuracy improvements by 4 to 37% for a scenario of
dynamically varying video contents and detection latency consisting of MOT17Det
and MOT20Det datasets, compared to individual YOLOv4 detectors and two
state-of-the-art runtime techniques.
|
[
{
"created": "Fri, 28 Oct 2022 12:06:29 GMT",
"version": "v1"
}
] |
2024-04-30
|
[
[
"Lee",
"JunKyu",
""
],
[
"Varghese",
"Blesson",
""
],
[
"Vandierendonck",
"Hans",
""
]
] |
This paper analyzes the effects of dynamically varying video contents and detection latency on the real-time detection accuracy of a detector and proposes a new run-time accuracy variation model, ROMA, based on the findings from the analysis. ROMA is designed to select an optimal detector out of a set of detectors in real time without label information to maximize real-time object detection accuracy. ROMA utilizing four YOLOv4 detectors on an NVIDIA Jetson Nano shows real-time accuracy improvements by 4 to 37% for a scenario of dynamically varying video contents and detection latency consisting of MOT17Det and MOT20Det datasets, compared to individual YOLOv4 detectors and two state-of-the-art runtime techniques.
|
2406.15111
|
T\'eo Guichoux
|
Teo Guichoux, Laure Soulier, Nicolas Obin, Catherine Pelachaud
|
Investigating the impact of 2D gesture representation on co-speech
gesture generation
|
8 pages. Paper accepted at WACAI 2024
| null | null | null |
cs.AI cs.CL cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Co-speech gestures play a crucial role in the interactions between humans and
embodied conversational agents (ECA). Recent deep learning methods enable the
generation of realistic, natural co-speech gestures synchronized with speech,
but such approaches require large amounts of training data. "In-the-wild"
datasets, which compile videos from sources such as YouTube through human pose
detection models, offer a solution by providing 2D skeleton sequences that are
paired with speech. Concurrently, innovative lifting models have emerged,
capable of transforming these 2D pose sequences into their 3D counterparts,
leading to large and diverse datasets of 3D gestures. However, the derived 3D
pose estimation is essentially a pseudo-ground truth, with the actual ground
truth being the 2D motion data. This distinction raises questions about the
impact of gesture representation dimensionality on the quality of generated
motions, a topic that, to our knowledge, remains largely unexplored. In this
work, we evaluate the impact of the dimensionality of the training data, 2D or
3D joint coordinates, on the performance of a multimodal speech-to-gesture deep
generative model. We use a lifting model to convert 2D-generated sequences of
body pose to 3D. Then, we compare the sequence of gestures generated directly
in 3D to the gestures generated in 2D and lifted to 3D as post-processing.
|
[
{
"created": "Fri, 21 Jun 2024 12:59:20 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2024 08:19:00 GMT",
"version": "v2"
}
] |
2024-06-25
|
[
[
"Guichoux",
"Teo",
""
],
[
"Soulier",
"Laure",
""
],
[
"Obin",
"Nicolas",
""
],
[
"Pelachaud",
"Catherine",
""
]
] |
Co-speech gestures play a crucial role in the interactions between humans and embodied conversational agents (ECA). Recent deep learning methods enable the generation of realistic, natural co-speech gestures synchronized with speech, but such approaches require large amounts of training data. "In-the-wild" datasets, which compile videos from sources such as YouTube through human pose detection models, offer a solution by providing 2D skeleton sequences that are paired with speech. Concurrently, innovative lifting models have emerged, capable of transforming these 2D pose sequences into their 3D counterparts, leading to large and diverse datasets of 3D gestures. However, the derived 3D pose estimation is essentially a pseudo-ground truth, with the actual ground truth being the 2D motion data. This distinction raises questions about the impact of gesture representation dimensionality on the quality of generated motions, a topic that, to our knowledge, remains largely unexplored. In this work, we evaluate the impact of the dimensionality of the training data, 2D or 3D joint coordinates, on the performance of a multimodal speech-to-gesture deep generative model. We use a lifting model to convert 2D-generated sequences of body pose to 3D. Then, we compare the sequence of gestures generated directly in 3D to the gestures generated in 2D and lifted to 3D as post-processing.
|
2107.14110
|
Juan C. P\'erez
|
Juan C. P\'erez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda,
Ali Thabet, Bernard Ghanem, Pablo Arbel\'aez
|
Enhancing Adversarial Robustness via Test-time Transformation Ensembling
| null | null | null | null |
cs.LG cs.CR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning models are prone to being fooled by imperceptible perturbations
known as adversarial attacks. In this work, we study how equipping models with
Test-time Transformation Ensembling (TTE) can work as a reliable defense
against such attacks. While transforming the input data, both at train and test
times, is known to enhance model performance, its effects on adversarial
robustness have not been studied. Here, we present a comprehensive empirical
study of the impact of TTE, in the form of widely-used image transforms, on
adversarial robustness. We show that TTE consistently improves model robustness
against a variety of powerful attacks without any need for re-training, and
that this improvement comes at virtually no trade-off with accuracy on clean
samples. Finally, we show that the benefits of TTE transfer even to the
certified robustness domain, in which TTE provides sizable and consistent
improvements.
|
[
{
"created": "Thu, 29 Jul 2021 15:32:35 GMT",
"version": "v1"
}
] |
2021-07-30
|
[
[
"Pérez",
"Juan C.",
""
],
[
"Alfarra",
"Motasem",
""
],
[
"Jeanneret",
"Guillaume",
""
],
[
"Rueda",
"Laura",
""
],
[
"Thabet",
"Ali",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Arbeláez",
"Pablo",
""
]
] |
Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model performance, its effects on adversarial robustness have not been studied. Here, we present a comprehensive empirical study of the impact of TTE, in the form of widely-used image transforms, on adversarial robustness. We show that TTE consistently improves model robustness against a variety of powerful attacks without any need for re-training, and that this improvement comes at virtually no trade-off with accuracy on clean samples. Finally, we show that the benefits of TTE transfer even to the certified robustness domain, in which TTE provides sizable and consistent improvements.
|
2405.15640
|
Sungwoo Oh
|
Sungwoo Oh and Donggyu Kim
|
GECKO: Generative Language Model for English, Code and Korean
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce GECKO, a bilingual large language model (LLM) optimized for
Korean and English, along with programming languages. GECKO is pretrained on
the balanced, high-quality corpus of Korean and English employing LLaMA
architecture. In this report, we share the experiences of several efforts to
build a better data pipeline for the corpus and to train our model. GECKO shows
great efficiency in token generations for both Korean and English, despite its
small size of vocabulary. We measure the performance on the representative
benchmarks in terms of Korean, English and Code, and it exhibits great
performance on KMMLU (Korean MMLU) and modest performance in English and Code,
even with its smaller number of trained tokens compared to English-focused
LLMs. GECKO is available to the open-source community under a permissive
license. We hope our work offers a research baseline and practical insights for
Korean LLM research. The model can be found at:
https://huggingface.co/kifai/GECKO-7B
|
[
{
"created": "Fri, 24 May 2024 15:30:41 GMT",
"version": "v1"
}
] |
2024-05-27
|
[
[
"Oh",
"Sungwoo",
""
],
[
"Kim",
"Donggyu",
""
]
] |
We introduce GECKO, a bilingual large language model (LLM) optimized for Korean and English, along with programming languages. GECKO is pretrained on the balanced, high-quality corpus of Korean and English employing LLaMA architecture. In this report, we share the experiences of several efforts to build a better data pipeline for the corpus and to train our model. GECKO shows great efficiency in token generations for both Korean and English, despite its small size of vocabulary. We measure the performance on the representative benchmarks in terms of Korean, English and Code, and it exhibits great performance on KMMLU (Korean MMLU) and modest performance in English and Code, even with its smaller number of trained tokens compared to English-focused LLMs. GECKO is available to the open-source community under a permissive license. We hope our work offers a research baseline and practical insights for Korean LLM research. The model can be found at: https://huggingface.co/kifai/GECKO-7B
|
2312.17641
|
Pan Liao
|
Yang Feng, Liao Pan, Wu Di, Liu Bo, Zhang Xingle
|
Motion State: A New Benchmark Multiple Object Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the realm of video analysis, the field of multiple object tracking (MOT)
assumes paramount importance, with the motion state of objects-whether static
or dynamic relative to the ground-holding practical significance across diverse
scenarios. However, the extant literature exhibits a notable dearth in the
exploration of this aspect. Deep learning methodologies encounter challenges in
accurately discerning object motion states, while conventional approaches
reliant on comprehensive mathematical modeling may yield suboptimal tracking
accuracy. To address these challenges, we introduce a Model-Data-Driven Motion
State Judgment Object Tracking Method (MoD2T). This innovative architecture
adeptly amalgamates traditional mathematical modeling with deep learning-based
multi-object tracking frameworks. The integration of mathematical modeling and
deep learning within MoD2T enhances the precision of object motion state
determination, thereby elevating tracking accuracy. Our empirical
investigations comprehensively validate the efficacy of MoD2T across varied
scenarios, encompassing unmanned aerial vehicle surveillance and street-level
tracking. Furthermore, to gauge the method's adeptness in discerning object
motion states, we introduce the Motion State Validation F1 (MVF1) metric. This
novel performance metric aims to quantitatively assess the accuracy of motion
state classification, furnishing a comprehensive evaluation of MoD2T's
performance. Elaborate experimental validations corroborate the rationality of
MVF1. In order to holistically appraise MoD2T's performance, we meticulously
annotate several renowned datasets and subject MoD2T to stringent testing.
Remarkably, under conditions characterized by minimal or moderate camera
motion, the achieved MVF1 values are particularly noteworthy, with exemplars
including 0.774 for the KITTI dataset, 0.521 for MOT17, and 0.827 for UAVDT.
|
[
{
"created": "Fri, 29 Dec 2023 15:08:06 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2024 13:42:52 GMT",
"version": "v2"
}
] |
2024-05-08
|
[
[
"Feng",
"Yang",
""
],
[
"Pan",
"Liao",
""
],
[
"Di",
"Wu",
""
],
[
"Bo",
"Liu",
""
],
[
"Xingle",
"Zhang",
""
]
] |
In the realm of video analysis, the field of multiple object tracking (MOT) assumes paramount importance, with the motion state of objects-whether static or dynamic relative to the ground-holding practical significance across diverse scenarios. However, the extant literature exhibits a notable dearth in the exploration of this aspect. Deep learning methodologies encounter challenges in accurately discerning object motion states, while conventional approaches reliant on comprehensive mathematical modeling may yield suboptimal tracking accuracy. To address these challenges, we introduce a Model-Data-Driven Motion State Judgment Object Tracking Method (MoD2T). This innovative architecture adeptly amalgamates traditional mathematical modeling with deep learning-based multi-object tracking frameworks. The integration of mathematical modeling and deep learning within MoD2T enhances the precision of object motion state determination, thereby elevating tracking accuracy. Our empirical investigations comprehensively validate the efficacy of MoD2T across varied scenarios, encompassing unmanned aerial vehicle surveillance and street-level tracking. Furthermore, to gauge the method's adeptness in discerning object motion states, we introduce the Motion State Validation F1 (MVF1) metric. This novel performance metric aims to quantitatively assess the accuracy of motion state classification, furnishing a comprehensive evaluation of MoD2T's performance. Elaborate experimental validations corroborate the rationality of MVF1. In order to holistically appraise MoD2T's performance, we meticulously annotate several renowned datasets and subject MoD2T to stringent testing. Remarkably, under conditions characterized by minimal or moderate camera motion, the achieved MVF1 values are particularly noteworthy, with exemplars including 0.774 for the KITTI dataset, 0.521 for MOT17, and 0.827 for UAVDT.
|
2208.01352
|
Milad Ganjalizadeh
|
Milad Ganjalizadeh, Hossein S. Ghadikolaei, Johan Haraldson, Marina
Petrova
|
Interplay between Distributed AI Workflow and URLLC
|
Accepted in 2022 IEEE Global Communications Conference (GLOBECOM)
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed artificial intelligence (AI) has recently accomplished tremendous
breakthroughs in various communication services, ranging from fault-tolerant
factory automation to smart cities. When distributed learning is run over a set
of wireless connected devices, random channel fluctuations, and the incumbent
services simultaneously running on the same network affect the performance of
distributed learning. In this paper, we investigate the interplay between
distributed AI workflow and ultra-reliable low latency communication (URLLC)
services running concurrently over a network. Using 3GPP compliant simulations
in a factory automation use case, we show the impact of various distributed AI
settings (e.g., model size and the number of participating devices) on the
convergence time of distributed AI and the application layer performance of
URLLC. Unless we leverage the existing 5G-NR quality of service handling
mechanisms to separate the traffic from the two services, our simulation
results show that the impact of distributed AI on the availability of the URLLC
devices is significant. Moreover, with proper setting of distributed AI (e.g.,
proper user selection), we can substantially reduce network resource
utilization, leading to lower latency for distributed AI and higher
availability for the URLLC users. Our results provide important insights for
future 6G and AI standardization.
|
[
{
"created": "Tue, 2 Aug 2022 10:46:50 GMT",
"version": "v1"
}
] |
2022-08-03
|
[
[
"Ganjalizadeh",
"Milad",
""
],
[
"Ghadikolaei",
"Hossein S.",
""
],
[
"Haraldson",
"Johan",
""
],
[
"Petrova",
"Marina",
""
]
] |
Distributed artificial intelligence (AI) has recently accomplished tremendous breakthroughs in various communication services, ranging from fault-tolerant factory automation to smart cities. When distributed learning is run over a set of wireless connected devices, random channel fluctuations, and the incumbent services simultaneously running on the same network affect the performance of distributed learning. In this paper, we investigate the interplay between distributed AI workflow and ultra-reliable low latency communication (URLLC) services running concurrently over a network. Using 3GPP compliant simulations in a factory automation use case, we show the impact of various distributed AI settings (e.g., model size and the number of participating devices) on the convergence time of distributed AI and the application layer performance of URLLC. Unless we leverage the existing 5G-NR quality of service handling mechanisms to separate the traffic from the two services, our simulation results show that the impact of distributed AI on the availability of the URLLC devices is significant. Moreover, with proper setting of distributed AI (e.g., proper user selection), we can substantially reduce network resource utilization, leading to lower latency for distributed AI and higher availability for the URLLC users. Our results provide important insights for future 6G and AI standardization.
|
2108.09858
|
Martin Baigorria Alonso
|
Mart\'in Baigorria Alonso
|
Data Augmentation Using Many-To-Many RNNs for Session-Aware Recommender
Systems
| null |
Proceedings of the ACM WSDM Workshop on Web Tourism (WSDM Webtour
2021)
| null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The ACM WSDM WebTour 2021 Challenge organized by Booking.com focuses on
applying Session-Aware recommender systems in the travel domain. Given a
sequence of travel bookings in a user trip, we look to recommend the user's
next destination. To handle the large dimensionality of the output's space, we
propose a many-to-many RNN model, predicting the next destination chosen by the
user at every sequence step as opposed to only the final one. We show how this
is a computationally efficient alternative to doing data augmentation in a
many-to-one RNN, where we consider every subsequence of a session starting from
the first element. Our solution achieved 4th place in the final leaderboard,
with an accuracy@4 of 0.5566.
|
[
{
"created": "Sun, 22 Aug 2021 22:12:25 GMT",
"version": "v1"
}
] |
2021-08-27
|
[
[
"Alonso",
"Martín Baigorria",
""
]
] |
The ACM WSDM WebTour 2021 Challenge organized by Booking.com focuses on applying Session-Aware recommender systems in the travel domain. Given a sequence of travel bookings in a user trip, we look to recommend the user's next destination. To handle the large dimensionality of the output's space, we propose a many-to-many RNN model, predicting the next destination chosen by the user at every sequence step as opposed to only the final one. We show how this is a computationally efficient alternative to doing data augmentation in a many-to-one RNN, where we consider every subsequence of a session starting from the first element. Our solution achieved 4th place in the final leaderboard, with an accuracy@4 of 0.5566.
|
2307.07240
|
Bin-Cheng Yang
|
Bincheng Yang and Gangshan Wu
|
MaxSR: Image Super-Resolution Using Improved MaxViT
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While transformer models have been demonstrated to be effective for natural
language processing tasks and high-level vision tasks, only a few attempts have
been made to use powerful transformer models for single image super-resolution.
Because transformer models have powerful representation capacity and the
in-built self-attention mechanisms in transformer models help to leverage
self-similarity prior in input low-resolution image to improve performance for
single image super-resolution, we present a single image super-resolution model
based on recent hybrid vision transformer of MaxViT, named as MaxSR. MaxSR
consists of four parts, a shallow feature extraction block, multiple cascaded
adaptive MaxViT blocks to extract deep hierarchical features and model global
self-similarity from low-level features efficiently, a hierarchical feature
fusion block, and finally a reconstruction block. The key component of MaxSR,
i.e., adaptive MaxViT block, is based on MaxViT block which mixes MBConv with
squeeze-and-excitation, block attention and grid attention. In order to achieve
better global modelling of self-similarity in input low-resolution image, we
improve block attention and grid attention in MaxViT block to adaptive block
attention and adaptive grid attention which do self-attention inside each
window across all grids and each grid across all windows respectively in the
most efficient way. We instantiate proposed model for classical single image
super-resolution (MaxSR) and lightweight single image super-resolution
(MaxSR-light). Experiments show that our MaxSR and MaxSR-light establish new
state-of-the-art performance efficiently.
|
[
{
"created": "Fri, 14 Jul 2023 09:26:47 GMT",
"version": "v1"
}
] |
2023-07-17
|
[
[
"Yang",
"Bincheng",
""
],
[
"Wu",
"Gangshan",
""
]
] |
While transformer models have been demonstrated to be effective for natural language processing tasks and high-level vision tasks, only a few attempts have been made to use powerful transformer models for single image super-resolution. Because transformer models have powerful representation capacity and the in-built self-attention mechanisms in transformer models help to leverage self-similarity prior in input low-resolution image to improve performance for single image super-resolution, we present a single image super-resolution model based on recent hybrid vision transformer of MaxViT, named as MaxSR. MaxSR consists of four parts, a shallow feature extraction block, multiple cascaded adaptive MaxViT blocks to extract deep hierarchical features and model global self-similarity from low-level features efficiently, a hierarchical feature fusion block, and finally a reconstruction block. The key component of MaxSR, i.e., adaptive MaxViT block, is based on MaxViT block which mixes MBConv with squeeze-and-excitation, block attention and grid attention. In order to achieve better global modelling of self-similarity in input low-resolution image, we improve block attention and grid attention in MaxViT block to adaptive block attention and adaptive grid attention which do self-attention inside each window across all grids and each grid across all windows respectively in the most efficient way. We instantiate proposed model for classical single image super-resolution (MaxSR) and lightweight single image super-resolution (MaxSR-light). Experiments show that our MaxSR and MaxSR-light establish new state-of-the-art performance efficiently.
|
0811.4170
|
Alain Barrat
|
Alain Barrat, Ciro Cattuto, Vittoria Colizza, Jean-Francois Pinton,
Wouter Van den Broeck, Alessandro Vespignani
|
High resolution dynamical mapping of social interactions with active
RFID
| null |
PLoS ONE 5(7): e11596 (2010)
|
10.1371/journal.pone.0011596
| null |
cs.CY cs.HC physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an experimental framework to gather data on
face-to-face social interactions between individuals, with a high spatial and
temporal resolution. We use active Radio Frequency Identification (RFID)
devices that assess contacts with one another by exchanging low-power radio
packets. When individuals wear the beacons as a badge, a persistent radio
contact between the RFID devices can be used as a proxy for a social
interaction between individuals. We present the results of a pilot study
recently performed during a conference, and a subsequent preliminary data
analysis, that provides an assessment of our method and highlights its
versatility and applicability in many areas concerned with human dynamics.
|
[
{
"created": "Tue, 25 Nov 2008 20:54:34 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Nov 2008 21:01:28 GMT",
"version": "v2"
}
] |
2010-08-18
|
[
[
"Barrat",
"Alain",
""
],
[
"Cattuto",
"Ciro",
""
],
[
"Colizza",
"Vittoria",
""
],
[
"Pinton",
"Jean-Francois",
""
],
[
"Broeck",
"Wouter Van den",
""
],
[
"Vespignani",
"Alessandro",
""
]
] |
In this paper we present an experimental framework to gather data on face-to-face social interactions between individuals, with a high spatial and temporal resolution. We use active Radio Frequency Identification (RFID) devices that assess contacts with one another by exchanging low-power radio packets. When individuals wear the beacons as a badge, a persistent radio contact between the RFID devices can be used as a proxy for a social interaction between individuals. We present the results of a pilot study recently performed during a conference, and a subsequent preliminary data analysis, that provides an assessment of our method and highlights its versatility and applicability in many areas concerned with human dynamics.
|
2009.04426
|
Felipe Del Rio
|
Pablo Messina, Manuel Cartagena, Patricio Cerda-Mardini, Felipe del
Rio and Denis Parra
|
CuratorNet: Visually-aware Recommendation of Art Images
| null | null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although there are several visually-aware recommendation models in domains
like fashion or even movies, the art domain lacks the same level of research
attention, despite the recent growth of the online artwork market. To reduce
this gap, in this article we introduce CuratorNet, a neural network architecture
for visually-aware recommendation of art images. CuratorNet is designed at the
core with the goal of maximizing generalization: the network has a fixed set of
parameters that only need to be trained once, and thereafter the model is able
to generalize to new users or items never seen before, without further
training. This is achieved by leveraging visual content: items are mapped to
item vectors through visual embeddings, and users are mapped to user vectors by
aggregating the visual content of items they have consumed. Besides the model
architecture, we also introduce novel triplet sampling strategies to build
a training set for rank learning in the art domain, resulting in more effective
learning than naive random sampling. With an evaluation over a real-world
dataset of physical paintings, we show that CuratorNet achieves the best
performance among several baselines, including the state-of-the-art model VBPR.
CuratorNet is motivated and evaluated in the art domain, but its architecture
and training scheme could be adapted to recommend images in other areas
[
{
"created": "Wed, 9 Sep 2020 17:22:17 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 12:35:08 GMT",
"version": "v2"
}
] |
2020-10-01
|
[
[
"Messina",
"Pablo",
""
],
[
"Cartagena",
"Manuel",
""
],
[
"Cerda-Mardini",
"Patricio",
""
],
[
"del Rio",
"Felipe",
""
],
[
"Parra",
"Denis",
""
]
] |
Although there are several visually-aware recommendation models in domains like fashion or even movies, the art domain lacks the same level of research attention, despite the recent growth of the online artwork market. To reduce this gap, in this article we introduce CuratorNet, a neural network architecture for visually-aware recommendation of art images. CuratorNet is designed at the core with the goal of maximizing generalization: the network has a fixed set of parameters that only need to be trained once, and thereafter the model is able to generalize to new users or items never seen before, without further training. This is achieved by leveraging visual content: items are mapped to item vectors through visual embeddings, and users are mapped to user vectors by aggregating the visual content of items they have consumed. Besides the model architecture, we also introduce novel triplet sampling strategies to build a training set for rank learning in the art domain, resulting in more effective learning than naive random sampling. With an evaluation over a real-world dataset of physical paintings, we show that CuratorNet achieves the best performance among several baselines, including the state-of-the-art model VBPR. CuratorNet is motivated and evaluated in the art domain, but its architecture and training scheme could be adapted to recommend images in other areas
|
2407.05458
|
Fei Wang
|
Fei Wang, Weibo Gao, Qi Liu, Jiatong Li, Guanhao Zhao, Zheng Zhang,
Zhenya Huang, Mengxiao Zhu, Shijin Wang, Wei Tong, Enhong Chen
|
A Survey of Models for Cognitive Diagnosis: New Developments and Future
Directions
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cognitive diagnosis has been developed for decades as an effective
measurement tool to evaluate human cognitive status such as ability level and
knowledge mastery. It has been applied to a wide range of fields including
education, sport, psychological diagnosis, etc. By providing better awareness
of cognitive status, it can serve as the basis for personalized services such
as well-designed medical treatment, teaching strategy and vocational training.
This paper aims to provide a survey of current models for cognitive diagnosis,
with more attention on new developments using machine learning-based methods.
By comparing the model structures, parameter estimation algorithms, model
evaluation methods and applications, we provide a relatively comprehensive
review of the recent trends in cognitive diagnosis models. Further, we discuss
future directions that are worthy of exploration. In addition, we release two
Python libraries: EduData for easy access to some relevant public datasets we
have collected, and EduCDM that implements popular CDMs to facilitate both
applications and research purposes.
|
[
{
"created": "Sun, 7 Jul 2024 18:02:00 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Wang",
"Fei",
""
],
[
"Gao",
"Weibo",
""
],
[
"Liu",
"Qi",
""
],
[
"Li",
"Jiatong",
""
],
[
"Zhao",
"Guanhao",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Huang",
"Zhenya",
""
],
[
"Zhu",
"Mengxiao",
""
],
[
"Wang",
"Shijin",
""
],
[
"Tong",
"Wei",
""
],
[
"Chen",
"Enhong",
""
]
] |
Cognitive diagnosis has been developed for decades as an effective measurement tool to evaluate human cognitive status such as ability level and knowledge mastery. It has been applied to a wide range of fields including education, sport, psychological diagnosis, etc. By providing better awareness of cognitive status, it can serve as the basis for personalized services such as well-designed medical treatment, teaching strategy and vocational training. This paper aims to provide a survey of current models for cognitive diagnosis, with more attention on new developments using machine learning-based methods. By comparing the model structures, parameter estimation algorithms, model evaluation methods and applications, we provide a relatively comprehensive review of the recent trends in cognitive diagnosis models. Further, we discuss future directions that are worthy of exploration. In addition, we release two Python libraries: EduData for easy access to some relevant public datasets we have collected, and EduCDM that implements popular CDMs to facilitate both applications and research purposes.
|
2312.07685
|
Yinmin Zhang
|
Yinmin Zhang, Jie Liu, Chuming Li, Yazhe Niu, Yaodong Yang, Yu Liu,
Wanli Ouyang
|
A Perspective of Q-value Estimation on Offline-to-Online Reinforcement
Learning
|
Accepted at AAAI 2024
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offline-to-online Reinforcement Learning (O2O RL) aims to improve the
performance of an offline pretrained policy using only a few online samples. Built
on offline RL algorithms, most O2O methods focus on the balance between RL
objective and pessimism, or the utilization of offline and online samples. In
this paper, from a novel perspective, we systematically study the challenges
that remain in O2O RL and identify that the reason behind the slow improvement
of the performance and the instability of online finetuning lies in the
inaccurate Q-value estimation inherited from offline pretraining. Specifically,
we demonstrate that the estimation bias and the inaccurate rank of Q-value
cause a misleading signal for the policy update, making the standard offline RL
algorithms, such as CQL and TD3-BC, ineffective in the online finetuning. Based
on this observation, we address the problem of Q-value estimation by two
techniques: (1) perturbed value update and (2) increased frequency of Q-value
updates. The first technique smooths out biased Q-value estimation with sharp
peaks, preventing early-stage policy exploitation of sub-optimal actions. The
second one alleviates the estimation bias inherited from offline pretraining by
accelerating learning. Extensive experiments on the MuJoCo and Adroit
environments demonstrate that the proposed method, named SO2, significantly
alleviates Q-value estimation issues, and consistently improves the performance
against the state-of-the-art methods by up to 83.1%.
|
[
{
"created": "Tue, 12 Dec 2023 19:24:35 GMT",
"version": "v1"
}
] |
2023-12-14
|
[
[
"Zhang",
"Yinmin",
""
],
[
"Liu",
"Jie",
""
],
[
"Li",
"Chuming",
""
],
[
"Niu",
"Yazhe",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Liu",
"Yu",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
Offline-to-online Reinforcement Learning (O2O RL) aims to improve the performance of an offline pretrained policy using only a few online samples. Built on offline RL algorithms, most O2O methods focus on the balance between RL objective and pessimism, or the utilization of offline and online samples. In this paper, from a novel perspective, we systematically study the challenges that remain in O2O RL and identify that the reason behind the slow improvement of the performance and the instability of online finetuning lies in the inaccurate Q-value estimation inherited from offline pretraining. Specifically, we demonstrate that the estimation bias and the inaccurate rank of Q-value cause a misleading signal for the policy update, making the standard offline RL algorithms, such as CQL and TD3-BC, ineffective in the online finetuning. Based on this observation, we address the problem of Q-value estimation by two techniques: (1) perturbed value update and (2) increased frequency of Q-value updates. The first technique smooths out biased Q-value estimation with sharp peaks, preventing early-stage policy exploitation of sub-optimal actions. The second one alleviates the estimation bias inherited from offline pretraining by accelerating learning. Extensive experiments on the MuJoCo and Adroit environments demonstrate that the proposed method, named SO2, significantly alleviates Q-value estimation issues, and consistently improves the performance against the state-of-the-art methods by up to 83.1%.
|
1909.07818
|
Lasse Hansen
|
Lasse Hansen, Doris Dittmer, Mattias P. Heinrich
|
Learning Deformable Point Set Registration with Regularized Dynamic
Graph CNNs for Large Lung Motion in COPD Patients
|
accepted for MICCAI 2019 Workshop Graph Learning in Medical Imaging
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deformable registration continues to be one of the key challenges in medical
image analysis. While iconic registration methods have started to benefit from
the recent advances in medical deep learning, the same does not yet apply for
the registration of point sets, e.g. registration based on surfaces, keypoints
or landmarks. This is mainly due to the restriction of the convolution operator
in modern CNNs to densely gridded input. However, with the newly developed
methods from the field of geometric deep learning, suitable tools are now
emerging, which enable powerful analysis of medical data on irregular domains.
In this work, we present a new method that enables the learning of regularized
feature descriptors with dynamic graph CNNs. By incorporating the learned
geometric features as prior probabilities into the well-established coherent
point drift (CPD) algorithm, formulated as a differentiable network layer, we
establish an end-to-end framework for robust registration of two point sets.
Our approach is evaluated on the challenging task of aligning keypoints
extracted from lung CT scans in inhale and exhale states with large
deformations and without any additional intensity information. Our results
indicate that the inherent geometric structure of the extracted keypoints is
sufficient to establish descriptive point features, which yield a significantly
improved performance and robustness of our registration framework.
|
[
{
"created": "Tue, 17 Sep 2019 13:59:04 GMT",
"version": "v1"
}
] |
2019-09-18
|
[
[
"Hansen",
"Lasse",
""
],
[
"Dittmer",
"Doris",
""
],
[
"Heinrich",
"Mattias P.",
""
]
] |
Deformable registration continues to be one of the key challenges in medical image analysis. While iconic registration methods have started to benefit from the recent advances in medical deep learning, the same does not yet apply for the registration of point sets, e.g. registration based on surfaces, keypoints or landmarks. This is mainly due to the restriction of the convolution operator in modern CNNs to densely gridded input. However, with the newly developed methods from the field of geometric deep learning, suitable tools are now emerging, which enable powerful analysis of medical data on irregular domains. In this work, we present a new method that enables the learning of regularized feature descriptors with dynamic graph CNNs. By incorporating the learned geometric features as prior probabilities into the well-established coherent point drift (CPD) algorithm, formulated as a differentiable network layer, we establish an end-to-end framework for robust registration of two point sets. Our approach is evaluated on the challenging task of aligning keypoints extracted from lung CT scans in inhale and exhale states with large deformations and without any additional intensity information. Our results indicate that the inherent geometric structure of the extracted keypoints is sufficient to establish descriptive point features, which yield a significantly improved performance and robustness of our registration framework.
|
2403.11369
|
KV Aditya Srivatsa
|
KV Aditya Srivatsa and Ekaterina Kochmar
|
What Makes Math Word Problems Challenging for LLMs?
|
Accepted to NAACL Findings 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates the question of what makes math word problems (MWPs)
in English challenging for large language models (LLMs). We conduct an in-depth
analysis of the key linguistic and mathematical characteristics of MWPs. In
addition, we train feature-based classifiers to better understand the impact of
each feature on the overall difficulty of MWPs for prominent LLMs and
investigate whether this helps predict how well LLMs fare against specific
categories of MWPs.
|
[
{
"created": "Sun, 17 Mar 2024 23:18:40 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Apr 2024 13:58:34 GMT",
"version": "v2"
}
] |
2024-04-02
|
[
[
"Srivatsa",
"KV Aditya",
""
],
[
"Kochmar",
"Ekaterina",
""
]
] |
This paper investigates the question of what makes math word problems (MWPs) in English challenging for large language models (LLMs). We conduct an in-depth analysis of the key linguistic and mathematical characteristics of MWPs. In addition, we train feature-based classifiers to better understand the impact of each feature on the overall difficulty of MWPs for prominent LLMs and investigate whether this helps predict how well LLMs fare against specific categories of MWPs.
|
2301.04727
|
Joaquin Garcia-Alfaro
|
Iain Burge, Michel Barbeau, Joaquin Garcia-Alfaro
|
A Quantum Algorithm for Shapley Value Estimation
|
29 pages, 8 figures, 21 references, baseline (preprint) QCE 2023
(IEEE International Conference on Quantum Computing and Engineering)
Technical Paper (Quantum Algorithms for Shapley Value Calculation)
| null | null | null |
cs.ET cs.CR math.QA
|
http://creativecommons.org/licenses/by/4.0/
|
The introduction of the European Union's (EU) set of comprehensive
regulations relating to technology, the General Data Protection Regulation,
grants EU citizens the right to explanations for automated decisions that have
significant effects on their lives. This poses a substantial challenge, as many
of today's state-of-the-art algorithms are generally unexplainable black boxes.
Simultaneously, we have seen an emergence of the fields of quantum computation
and quantum AI. Due to the fickle nature of quantum information, the problem of
explainability is amplified, as measuring a quantum system destroys the
information. As a result, there is a need for post-hoc explanations for quantum
AI algorithms. In the classical context, the cooperative game theory concept of
the Shapley value has been adapted for post-hoc explanations. However, this
approach does not translate to use in quantum computing trivially and can be
exponentially difficult to implement if not handled with care. We propose a
novel algorithm which reduces the problem of accurately estimating the Shapley
values of a quantum algorithm into a far simpler problem of estimating the true
average of a binomial distribution in polynomial time.
|
[
{
"created": "Wed, 11 Jan 2023 21:32:59 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Mar 2023 16:10:32 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jul 2023 08:17:46 GMT",
"version": "v3"
}
] |
2023-08-24
|
[
[
"Burge",
"Iain",
""
],
[
"Barbeau",
"Michel",
""
],
[
"Garcia-Alfaro",
"Joaquin",
""
]
] |
The introduction of the European Union's (EU) set of comprehensive regulations relating to technology, the General Data Protection Regulation, grants EU citizens the right to explanations for automated decisions that have significant effects on their lives. This poses a substantial challenge, as many of today's state-of-the-art algorithms are generally unexplainable black boxes. Simultaneously, we have seen an emergence of the fields of quantum computation and quantum AI. Due to the fickle nature of quantum information, the problem of explainability is amplified, as measuring a quantum system destroys the information. As a result, there is a need for post-hoc explanations for quantum AI algorithms. In the classical context, the cooperative game theory concept of the Shapley value has been adapted for post-hoc explanations. However, this approach does not translate to use in quantum computing trivially and can be exponentially difficult to implement if not handled with care. We propose a novel algorithm which reduces the problem of accurately estimating the Shapley values of a quantum algorithm into a far simpler problem of estimating the true average of a binomial distribution in polynomial time.
|
2311.09394
|
Marco Elver
|
Kostya Serebryany, Chris Kennelly, Mitch Phillips, Matt Denton, Marco
Elver, Alexander Potapenko, Matt Morehouse, Vlad Tsyrklevich, Christian
Holler, Julian Lettner, David Kilzer, Lander Brandt
|
GWP-ASan: Sampling-Based Detection of Memory-Safety Bugs in Production
| null | null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the recent advances in pre-production bug detection,
heap-use-after-free and heap-buffer-overflow bugs remain the primary problem
for security, reliability, and developer productivity for applications written
in C or C++, across all major software ecosystems. Memory-safe languages solve
this problem when they are used, but the existing code bases consisting of
billions of lines of C and C++ continue to grow, and we need additional bug
detection mechanisms.
This paper describes a family of tools that detect these two classes of
memory-safety bugs, while running in production, at near-zero overhead. These
tools combine page-granular guarded allocation and low-rate sampling. In other
words, we added an "if" statement to a 36-year-old idea and made it work at
scale.
We describe the basic algorithm, several of its variants and implementations,
and the results of multi-year deployments across mobile, desktop, and server
applications.
|
[
{
"created": "Wed, 15 Nov 2023 21:41:53 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jan 2024 14:42:26 GMT",
"version": "v2"
}
] |
2024-01-17
|
[
[
"Serebryany",
"Kostya",
""
],
[
"Kennelly",
"Chris",
""
],
[
"Phillips",
"Mitch",
""
],
[
"Denton",
"Matt",
""
],
[
"Elver",
"Marco",
""
],
[
"Potapenko",
"Alexander",
""
],
[
"Morehouse",
"Matt",
""
],
[
"Tsyrklevich",
"Vlad",
""
],
[
"Holler",
"Christian",
""
],
[
"Lettner",
"Julian",
""
],
[
"Kilzer",
"David",
""
],
[
"Brandt",
"Lander",
""
]
] |
Despite the recent advances in pre-production bug detection, heap-use-after-free and heap-buffer-overflow bugs remain the primary problem for security, reliability, and developer productivity for applications written in C or C++, across all major software ecosystems. Memory-safe languages solve this problem when they are used, but the existing code bases consisting of billions of lines of C and C++ continue to grow, and we need additional bug detection mechanisms. This paper describes a family of tools that detect these two classes of memory-safety bugs, while running in production, at near-zero overhead. These tools combine page-granular guarded allocation and low-rate sampling. In other words, we added an "if" statement to a 36-year-old idea and made it work at scale. We describe the basic algorithm, several of its variants and implementations, and the results of multi-year deployments across mobile, desktop, and server applications.
|
2103.05469
|
Mark Stamp
|
Andy Phung and Mark Stamp
|
Universal Adversarial Perturbations and Image Spam Classifiers
| null | null | null | null |
cs.CR cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As the name suggests, image spam is spam email that has been embedded in an
image. Image spam was developed in an effort to evade text-based filters.
Modern deep learning-based classifiers perform well in detecting typical image
spam that is seen in the wild. In this chapter, we evaluate numerous
adversarial techniques for the purpose of attacking deep learning-based image
spam classifiers. Of the techniques tested, we find that universal perturbation
performs best. Using universal adversarial perturbations, we propose and
analyze a new transformation-based adversarial attack that enables us to create
tailored "natural perturbations" in image spam. The resulting spam images
benefit from both the presence of concentrated natural features and a universal
adversarial perturbation. We show that the proposed technique outperforms
existing adversarial attacks in terms of accuracy reduction, computation time
per example, and perturbation distance. We apply our technique to create a
dataset of adversarial spam images, which can serve as a challenge dataset for
future research in image spam detection.
|
[
{
"created": "Sun, 7 Mar 2021 14:36:02 GMT",
"version": "v1"
}
] |
2021-03-10
|
[
[
"Phung",
"Andy",
""
],
[
"Stamp",
"Mark",
""
]
] |
As the name suggests, image spam is spam email that has been embedded in an image. Image spam was developed in an effort to evade text-based filters. Modern deep learning-based classifiers perform well in detecting typical image spam that is seen in the wild. In this chapter, we evaluate numerous adversarial techniques for the purpose of attacking deep learning-based image spam classifiers. Of the techniques tested, we find that universal perturbation performs best. Using universal adversarial perturbations, we propose and analyze a new transformation-based adversarial attack that enables us to create tailored "natural perturbations" in image spam. The resulting spam images benefit from both the presence of concentrated natural features and a universal adversarial perturbation. We show that the proposed technique outperforms existing adversarial attacks in terms of accuracy reduction, computation time per example, and perturbation distance. We apply our technique to create a dataset of adversarial spam images, which can serve as a challenge dataset for future research in image spam detection.
|
1606.02409
|
Zeng Yulong
|
Pingzhong Tang and Yulong Zeng
|
How to manipulate truthful prior-dependent mechanisms?
|
29 pages, 1 figure
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the standard formulation of mechanism design, a key assumption is that the
designer has reliable information and technology to determine a prior
distribution on types of the agents. Meanwhile, as pointed out by Wilson's
Principle, a mechanism should rely as little as possible on the accuracy of
the prior type distribution. In this paper, we put forward a model to
formalize and quantify this statement.
In our model, each agent has a type distribution. In addition, the agent can
commit to a fake distribution and bid consistently and credibly with respect
to the fake distribution (i.e., play a Bayes equilibrium under the fake
distributions). We study the equilibria of the induced distribution-committing
games in several well-known mechanisms. Our results can be summarized as
follows: (1) the game induced by Myerson's auction under our model is
strategically equivalent to the first price auction under the standard model.
As a consequence, they are revenue-equivalent as well. (2) the second-price
auction yields weakly better revenue than several reserve-based and
virtual-value-based auctions, under our fake distribution model. These results
echo the recent literature on prior-independent mechanism design.
|
[
{
"created": "Wed, 8 Jun 2016 05:59:14 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jul 2016 05:18:21 GMT",
"version": "v2"
}
] |
2016-07-20
|
[
[
"Tang",
"Pingzhong",
""
],
[
"Zeng",
"Yulong",
""
]
] |
In the standard formulation of mechanism design, a key assumption is that the designer has reliable information and technology to determine a prior distribution on types of the agents. Meanwhile, as pointed out by Wilson's Principle, a mechanism should rely as little as possible on the accuracy of the prior type distribution. In this paper, we put forward a model to formalize and quantify this statement. In our model, each agent has a type distribution. In addition, the agent can commit to a fake distribution and bid consistently and credibly with respect to the fake distribution (i.e., play a Bayes equilibrium under the fake distributions). We study the equilibria of the induced distribution-committing games in several well-known mechanisms. Our results can be summarized as follows: (1) the game induced by Myerson's auction under our model is strategically equivalent to the first price auction under the standard model. As a consequence, they are revenue-equivalent as well. (2) the second-price auction yields weakly better revenue than several reserve-based and virtual-value-based auctions, under our fake distribution model. These results echo the recent literature on prior-independent mechanism design.
|
1904.06903
|
Xiangyu Xu
|
Xiangyu Xu, Muchen Li, Wenxiu Sun
|
Learning Deformable Kernels for Image and Video Denoising
|
10 pages
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the classical denoising methods restore clear results by selecting
and averaging pixels in the noisy input. Instead of relying on hand-crafted
selecting and averaging strategies, we propose to explicitly learn this process
with deep neural networks. Specifically, we propose deformable 2D kernels for
image denoising where the sampling locations and kernel weights are both
learned. The proposed kernel naturally adapts to image structures and could
effectively reduce the oversmoothing artifacts. Furthermore, we develop 3D
deformable kernels for video denoising to more efficiently sample pixels across
the spatial-temporal space. Our method is able to solve the misalignment issues
of large motion from dynamic scenes. For better training our video denoising
model, we introduce the trilinear sampler and a new regularization term. We
demonstrate that the proposed method performs favorably against the
state-of-the-art image and video denoising approaches on both synthetic and
real-world data.
|
[
{
"created": "Mon, 15 Apr 2019 08:15:09 GMT",
"version": "v1"
}
] |
2019-04-16
|
[
[
"Xu",
"Xiangyu",
""
],
[
"Li",
"Muchen",
""
],
[
"Sun",
"Wenxiu",
""
]
] |
Most of the classical denoising methods restore clear results by selecting and averaging pixels in the noisy input. Instead of relying on hand-crafted selecting and averaging strategies, we propose to explicitly learn this process with deep neural networks. Specifically, we propose deformable 2D kernels for image denoising where the sampling locations and kernel weights are both learned. The proposed kernel naturally adapts to image structures and could effectively reduce the oversmoothing artifacts. Furthermore, we develop 3D deformable kernels for video denoising to more efficiently sample pixels across the spatial-temporal space. Our method is able to solve the misalignment issues of large motion from dynamic scenes. For better training our video denoising model, we introduce the trilinear sampler and a new regularization term. We demonstrate that the proposed method performs favorably against the state-of-the-art image and video denoising approaches on both synthetic and real-world data.
|
1610.04872
|
Bodhisattwa Majumder
|
Bodhisattwa Prasad Majumder, Ayan Sengupta, Sajal jain, Parikshit
Bhaduri
|
Fault Detection Engine in Intelligent Predictive Analytics Platform for
DCIM
|
Accepted in 4th International Conference on Business Analytics and
Intelligence (ICBAI 2016)
| null | null | null |
cs.AI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advancement of huge data generation and data handling capability,
Machine Learning and Probabilistic modelling enable an immense opportunity to
employ predictive analytics platforms in high-security critical industries,
namely data centers, electricity grids, utilities, airports, etc., where
downtime minimization is one of the primary objectives. This paper proposes a
novel, complete architecture of an intelligent predictive analytics platform,
Fault Engine, for a huge device network connected by electrical/information
flow. Three unique modules, proposed here, seamlessly integrate with the
available technology stack of data handling and connect with middleware to
produce online intelligent predictions in critical failure scenarios. The
Markov Failure module predicts the severity of a failure along with the
survival probability of a device at any given instant. The Root Cause
Analysis module indicates probable devices as the potential root cause,
employing Bayesian probability assignment and topological sort. Finally, a
community detection algorithm produces correlated clusters of devices in
terms of failure probability, which further narrows down the search space for
finding the root cause. The whole Engine has been tested with different
network sizes in simulated failure environments and shows its potential to be
scalable in real-time implementations.
|
[
{
"created": "Sun, 16 Oct 2016 15:14:36 GMT",
"version": "v1"
}
] |
2016-10-18
|
[
[
"Majumder",
"Bodhisattwa Prasad",
""
],
[
"Sengupta",
"Ayan",
""
],
[
"jain",
"Sajal",
""
],
[
"Bhaduri",
"Parikshit",
""
]
] |
With the advancement of huge data generation and data handling capability, Machine Learning and Probabilistic modelling enable an immense opportunity to employ predictive analytics platforms in high-security critical industries, namely data centers, electricity grids, utilities, airports, etc., where downtime minimization is one of the primary objectives. This paper proposes a novel, complete architecture of an intelligent predictive analytics platform, Fault Engine, for a huge device network connected by electrical/information flow. Three unique modules, proposed here, seamlessly integrate with the available technology stack of data handling and connect with middleware to produce online intelligent predictions in critical failure scenarios. The Markov Failure module predicts the severity of a failure along with the survival probability of a device at any given instant. The Root Cause Analysis module indicates probable devices as the potential root cause, employing Bayesian probability assignment and topological sort. Finally, a community detection algorithm produces correlated clusters of devices in terms of failure probability, which further narrows down the search space for finding the root cause. The whole Engine has been tested with different network sizes in simulated failure environments and shows its potential to be scalable in real-time implementations.
|
2106.11196
|
Benedikt Boenninghoff
|
Benedikt Boenninghoff, Dorothea Kolossa, Robert M. Nickel
|
Self-Calibrating Neural-Probabilistic Model for Authorship Verification
Under Covariate Shift
|
12th International Conference of the CLEF Association, 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We are addressing two fundamental problems in authorship verification (AV):
Topic variability and miscalibration. Variations in the topic of two disputed
texts are a major cause of error for most AV systems. In addition, it is
observed that the underlying probability estimates produced by deep learning AV
mechanisms oftentimes do not match the actual case counts in the respective
training data. As such, probability estimates are poorly calibrated. We are
expanding our framework from PAN 2020 to include Bayes factor scoring (BFS) and
an uncertainty adaptation layer (UAL) to address both problems. Experiments
with the 2020/21 PAN AV shared task data show that the proposed method
significantly reduces sensitivities to topical variations and significantly
improves the system's calibration.
|
[
{
"created": "Mon, 21 Jun 2021 15:33:48 GMT",
"version": "v1"
}
] |
2021-06-22
|
[
[
"Boenninghoff",
"Benedikt",
""
],
[
"Kolossa",
"Dorothea",
""
],
[
"Nickel",
"Robert M.",
""
]
] |
We are addressing two fundamental problems in authorship verification (AV): Topic variability and miscalibration. Variations in the topic of two disputed texts are a major cause of error for most AV systems. In addition, it is observed that the underlying probability estimates produced by deep learning AV mechanisms oftentimes do not match the actual case counts in the respective training data. As such, probability estimates are poorly calibrated. We are expanding our framework from PAN 2020 to include Bayes factor scoring (BFS) and an uncertainty adaptation layer (UAL) to address both problems. Experiments with the 2020/21 PAN AV shared task data show that the proposed method significantly reduces sensitivities to topical variations and significantly improves the system's calibration.
|
2107.12429
|
Pan Ji
|
Pan Ji, Runze Li, Bir Bhanu, Yi Xu
|
MonoIndoor: Towards Good Practice of Self-Supervised Monocular Depth
Estimation for Indoor Environments
|
ICCV 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Self-supervised depth estimation for indoor environments is more challenging
than its outdoor counterpart in at least the following two aspects: (i) the
depth range of indoor sequences varies a lot across different frames, making it
difficult for the depth network to induce consistent depth cues, whereas the
maximum distance in outdoor scenes mostly stays the same as the camera usually
sees the sky; (ii) the indoor sequences contain many more rotational motions,
which cause difficulties for the pose network, while the motions of outdoor
sequences are pre-dominantly translational, especially for driving datasets
such as KITTI. In this paper, special considerations are given to those
challenges and a set of good practices are consolidated for improving the
performance of self-supervised monocular depth estimation in indoor
environments. The proposed method mainly consists of two novel modules, i.e., a
depth factorization module and a residual pose estimation module, each of which
is designed to respectively tackle the aforementioned challenges. The
effectiveness of each module is shown through a carefully conducted ablation
study and the demonstration of the state-of-the-art performance on three indoor
datasets, i.e., EuRoC, NYUv2, and 7-scenes.
|
[
{
"created": "Mon, 26 Jul 2021 18:45:14 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jul 2021 00:32:57 GMT",
"version": "v2"
}
] |
2021-07-29
|
[
[
"Ji",
"Pan",
""
],
[
"Li",
"Runze",
""
],
[
"Bhanu",
"Bir",
""
],
[
"Xu",
"Yi",
""
]
] |
Self-supervised depth estimation for indoor environments is more challenging than its outdoor counterpart in at least the following two aspects: (i) the depth range of indoor sequences varies a lot across different frames, making it difficult for the depth network to induce consistent depth cues, whereas the maximum distance in outdoor scenes mostly stays the same as the camera usually sees the sky; (ii) the indoor sequences contain many more rotational motions, which cause difficulties for the pose network, while the motions of outdoor sequences are pre-dominantly translational, especially for driving datasets such as KITTI. In this paper, special considerations are given to those challenges and a set of good practices are consolidated for improving the performance of self-supervised monocular depth estimation in indoor environments. The proposed method mainly consists of two novel modules, i.e., a depth factorization module and a residual pose estimation module, each of which is designed to respectively tackle the aforementioned challenges. The effectiveness of each module is shown through a carefully conducted ablation study and the demonstration of the state-of-the-art performance on three indoor datasets, i.e., EuRoC, NYUv2, and 7-scenes.
|
2208.11904
|
Gayan Kulatilleke
|
Gayan K. Kulatilleke, Sugandika Samarakoon
|
Empirical study of Machine Learning Classifier Evaluation Metrics
behavior in Massively Imbalanced and Noisy data
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With growing credit card transaction volumes, the fraud percentages are also
rising, including overhead costs for institutions to combat and compensate
victims. The use of machine learning in the financial sector permits more
effective protection against fraud and other economic crime. Suitably trained
machine learning classifiers help proactive fraud detection, improving
stakeholder trust and robustness against illicit transactions. However, the
design of machine learning based fraud detection algorithms has been
challenging and slow due to the massively unbalanced nature of fraud data and the
challenges of identifying the frauds accurately and completely to create a gold
standard ground truth. Furthermore, there are no benchmarks or standard
classifier evaluation metrics to measure and identify better performing
classifiers, thus keeping researchers in the dark.
In this work, we develop a theoretical foundation to model human annotation
errors and extreme imbalance typical in real world fraud detection data sets.
By conducting empirical experiments on a hypothetical classifier, with a
synthetic data distribution approximated to a popular real world credit card
fraud data set, we simulate human annotation errors and extreme imbalance to
observe the behavior of popular machine learning classifier evaluation
metrics. We demonstrate that a combined F1 score and g-mean, in that specific
order, is the best evaluation metric for typical imbalanced fraud detection
model classification.
|
[
{
"created": "Thu, 25 Aug 2022 07:30:31 GMT",
"version": "v1"
}
] |
2022-08-26
|
[
[
"Kulatilleke",
"Gayan K.",
""
],
[
"Samarakoon",
"Sugandika",
""
]
] |
With growing credit card transaction volumes, the fraud percentages are also rising, including overhead costs for institutions to combat and compensate victims. The use of machine learning in the financial sector permits more effective protection against fraud and other economic crime. Suitably trained machine learning classifiers help proactive fraud detection, improving stakeholder trust and robustness against illicit transactions. However, the design of machine learning based fraud detection algorithms has been challenging and slow due to the massively unbalanced nature of fraud data and the challenges of identifying the frauds accurately and completely to create a gold standard ground truth. Furthermore, there are no benchmarks or standard classifier evaluation metrics to measure and identify better performing classifiers, thus keeping researchers in the dark. In this work, we develop a theoretical foundation to model human annotation errors and extreme imbalance typical in real world fraud detection data sets. By conducting empirical experiments on a hypothetical classifier, with a synthetic data distribution approximated to a popular real world credit card fraud data set, we simulate human annotation errors and extreme imbalance to observe the behavior of popular machine learning classifier evaluation metrics. We demonstrate that a combined F1 score and g-mean, in that specific order, is the best evaluation metric for typical imbalanced fraud detection model classification.
|
1807.11164
|
Xiangyu Zhang
|
Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, Jian Sun
|
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture
Design
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, the neural network architecture design is mostly guided by the
\emph{indirect} metric of computation complexity, i.e., FLOPs. However, the
\emph{direct} metric, e.g., speed, also depends on other factors such as
memory access cost and platform characteristics. Thus, this work proposes to
evaluate the direct metric on the target platform, beyond only considering
FLOPs. Based on a series of controlled experiments, this work derives several
practical \emph{guidelines} for efficient network design. Accordingly, a new
architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation
experiments verify that our model is the state-of-the-art in terms of speed and
accuracy tradeoff.
|
[
{
"created": "Mon, 30 Jul 2018 04:18:25 GMT",
"version": "v1"
}
] |
2018-07-31
|
[
[
"Ma",
"Ningning",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Sun",
"Jian",
""
]
] |
Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.
|
2105.10011
|
Leonard Berrada
|
Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
|
Comment on Stochastic Polyak Step-Size: Performance of ALI-G
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is a short note on the performance of the ALI-G algorithm (Berrada et
al., 2020) as reported in (Loizou et al., 2021). ALI-G (Berrada et al., 2020)
and SPS (Loizou et al., 2021) are both adaptations of the Polyak step-size to
optimize machine learning models that can interpolate the training data. The
main algorithmic differences are that (1) SPS employs a multiplicative constant
in the denominator of the learning-rate while ALI-G uses an additive constant,
and (2) SPS uses an iteration-dependent maximal learning-rate while ALI-G uses
a constant one. There are also differences in the analysis provided by the two
works, with less restrictive assumptions proposed in (Loizou et al., 2021). In
their experiments, (Loizou et al., 2021) did not use momentum for ALI-G (which
is a standard part of the algorithm) or standard hyper-parameter tuning (for
e.g. learning-rate and regularization). Hence this note serves as a reference for the
improved performance that ALI-G can obtain with well-chosen hyper-parameters.
In particular, we show that when training a ResNet-34 on CIFAR-10 and
CIFAR-100, the performance of ALI-G can reach respectively 93.5% (+6%) and 76%
(+8%) with a very small amount of tuning. Thus ALI-G remains a very competitive
method for training interpolating neural networks.
|
[
{
"created": "Thu, 20 May 2021 19:57:34 GMT",
"version": "v1"
}
] |
2021-05-24
|
[
[
"Berrada",
"Leonard",
""
],
[
"Zisserman",
"Andrew",
""
],
[
"Kumar",
"M. Pawan",
""
]
] |
This is a short note on the performance of the ALI-G algorithm (Berrada et al., 2020) as reported in (Loizou et al., 2021). ALI-G (Berrada et al., 2020) and SPS (Loizou et al., 2021) are both adaptations of the Polyak step-size to optimize machine learning models that can interpolate the training data. The main algorithmic differences are that (1) SPS employs a multiplicative constant in the denominator of the learning-rate while ALI-G uses an additive constant, and (2) SPS uses an iteration-dependent maximal learning-rate while ALI-G uses a constant one. There are also differences in the analysis provided by the two works, with less restrictive assumptions proposed in (Loizou et al., 2021). In their experiments, (Loizou et al., 2021) did not use momentum for ALI-G (which is a standard part of the algorithm) or standard hyper-parameter tuning (for e.g. learning-rate and regularization). Hence this note serves as a reference for the improved performance that ALI-G can obtain with well-chosen hyper-parameters. In particular, we show that when training a ResNet-34 on CIFAR-10 and CIFAR-100, the performance of ALI-G can reach respectively 93.5% (+6%) and 76% (+8%) with a very small amount of tuning. Thus ALI-G remains a very competitive method for training interpolating neural networks.
|
cs/0208012
|
Jim Gray
|
Jim Gray, Alexander S. Szalay, Ani R. Thakar, Christopher Stoughton,
Jan vandenBerg
|
Online Scientific Data Curation, Publication, and Archiving
|
original at
http://research.microsoft.com/scripts/pubs/view.asp?TR_ID=MSR-TR-2002-74
| null |
10.1117/12.461524
|
MSR-TR-2002-74
|
cs.DL
| null |
Science projects are data publishers. The scale and complexity of current and
future science data changes the nature of the publication process. Publication
is becoming a major project component. At a minimum, a project must preserve
the ephemeral data it gathers. Derived data can be reconstructed from metadata,
but metadata is ephemeral. Longer term, a project should expect some archive to
preserve the data. We observe that published scientific data needs to be
available forever; this gives rise to the data pyramid of versions and to data
inflation where the derived data volumes explode. As an example, this article
describes the Sloan Digital Sky Survey (SDSS) strategies for data publication,
data access, curation, and preservation.
|
[
{
"created": "Wed, 7 Aug 2002 22:42:31 GMT",
"version": "v1"
}
] |
2015-06-25
|
[
[
"Gray",
"Jim",
""
],
[
"Szalay",
"Alexander S.",
""
],
[
"Thakar",
"Ani R.",
""
],
[
"Stoughton",
"Christopher",
""
],
[
"vandenBerg",
"Jan",
""
]
] |
Science projects are data publishers. The scale and complexity of current and future science data changes the nature of the publication process. Publication is becoming a major project component. At a minimum, a project must preserve the ephemeral data it gathers. Derived data can be reconstructed from metadata, but metadata is ephemeral. Longer term, a project should expect some archive to preserve the data. We observe that published scientific data needs to be available forever; this gives rise to the data pyramid of versions and to data inflation where the derived data volumes explode. As an example, this article describes the Sloan Digital Sky Survey (SDSS) strategies for data publication, data access, curation, and preservation.
|
2207.04606
|
Zihao Ye
|
Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze
|
SparseTIR: Composable Abstractions for Sparse Compilation in Deep
Learning
|
To appear at ASPLOS 2023 (19 pages, 23 figures), source code
available at https://github.com/uwsampl/sparsetir, artifact available at
https://github.com/uwsampl/sparsetir-artifact
| null | null | null |
cs.LG cs.AI cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sparse tensors are rapidly becoming critical components of modern deep
learning workloads. However, developing high-performance sparse operators can
be difficult and tedious, and existing vendor libraries cannot satisfy the
escalating demands from new operators. Sparse tensor compilers simplify the
development of operators, but efficient sparse compilation for deep learning
remains challenging because a single sparse format cannot maximize hardware
efficiency, and single-shot compilers cannot keep up with the latest hardware and
system advances. In this paper, we observe that the key to addressing both
these challenges is to leverage composable formats and composable
transformations. We propose SparseTIR, a sparse tensor compilation abstraction
that offers composable formats and composable transformations for deep learning
workloads. SparseTIR constructs a search space over these composable components
for performance tuning. With these improvements, SparseTIR obtains consistent
performance speedups vs vendor libraries on GPUs for single operators:
1.20-2.34x for GNN operators, 1.05-2.98x for sparse attention operators, and
0.56-7.45x for sparse convolution operators. SparseTIR also accelerates
end-to-end GNNs by 1.08-1.52x for GraphSAGE training, and 4.20-40.18x for RGCN
inference.
|
[
{
"created": "Mon, 11 Jul 2022 03:49:53 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Aug 2022 03:57:10 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Nov 2022 01:48:13 GMT",
"version": "v3"
},
{
"created": "Tue, 21 Feb 2023 16:51:55 GMT",
"version": "v4"
}
] |
2023-02-22
|
[
[
"Ye",
"Zihao",
""
],
[
"Lai",
"Ruihang",
""
],
[
"Shao",
"Junru",
""
],
[
"Chen",
"Tianqi",
""
],
[
"Ceze",
"Luis",
""
]
] |
Sparse tensors are rapidly becoming critical components of modern deep learning workloads. However, developing high-performance sparse operators can be difficult and tedious, and existing vendor libraries cannot satisfy the escalating demands from new operators. Sparse tensor compilers simplify the development of operators, but efficient sparse compilation for deep learning remains challenging because a single sparse format cannot maximize hardware efficiency, and single-shot compilers cannot keep up with the latest hardware and system advances. In this paper, we observe that the key to addressing both these challenges is to leverage composable formats and composable transformations. We propose SparseTIR, a sparse tensor compilation abstraction that offers composable formats and composable transformations for deep learning workloads. SparseTIR constructs a search space over these composable components for performance tuning. With these improvements, SparseTIR obtains consistent performance speedups vs vendor libraries on GPUs for single operators: 1.20-2.34x for GNN operators, 1.05-2.98x for sparse attention operators, and 0.56-7.45x for sparse convolution operators. SparseTIR also accelerates end-to-end GNNs by 1.08-1.52x for GraphSAGE training, and 4.20-40.18x for RGCN inference.
|
1602.00248
|
Adam Kucharski
|
Adam J. Kucharski
|
Modelling the transmission dynamics of online social contagion
|
13 pages, 6 figures, 2 tables
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
During 2014-15, there were several outbreaks of nomination-based online social
contagion. These infections, which were transmitted from one individual to
another via posts on social media, included games such as 'neknomination', 'ice
bucket challenge', 'no make up selfies', and Facebook users re-posting their
first profile pictures. Fitting a mathematical model of infectious disease
transmission to outbreaks of these four games in the United Kingdom, I
estimated the basic reproduction number, $R_0$, and generation time of each
infection. Median estimates for $R_0$ ranged from 1.9-2.5 across the four
outbreaks, and the estimated generation times were between 1.0 and 2.0 days.
Tests using out-of-sample data from Australia suggested that the model had
reasonable predictive power, with $R^2$ values between 0.52-0.70 across the
four Australian datasets. Further, the relatively low basic reproduction
numbers for the infections suggest that only 48-60% of index cases in
nomination-based games may subsequently generate major outbreaks.
|
[
{
"created": "Sun, 31 Jan 2016 13:58:17 GMT",
"version": "v1"
}
] |
2016-02-02
|
[
[
"Kucharski",
"Adam J.",
""
]
] |
During 2014-15, there were several outbreaks of nomination-based online social contagion. These infections, which were transmitted from one individual to another via posts on social media, included games such as 'neknomination', 'ice bucket challenge', 'no make up selfies', and Facebook users re-posting their first profile pictures. Fitting a mathematical model of infectious disease transmission to outbreaks of these four games in the United Kingdom, I estimated the basic reproduction number, $R_0$, and generation time of each infection. Median estimates for $R_0$ ranged from 1.9-2.5 across the four outbreaks, and the estimated generation times were between 1.0 and 2.0 days. Tests using out-of-sample data from Australia suggested that the model had reasonable predictive power, with $R^2$ values between 0.52-0.70 across the four Australian datasets. Further, the relatively low basic reproduction numbers for the infections suggest that only 48-60% of index cases in nomination-based games may subsequently generate major outbreaks.
|
2403.00278
|
Jinho Bok
|
Jinho Bok, Weijie Su, Jason M. Altschuler
|
Shifted Interpolation for Differential Privacy
|
45 pages, ICML 2024. v2: added lower bounds (Appendix C.5)
| null | null | null |
cs.LG cs.CR math.OC math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Noisy gradient descent and its variants are the predominant algorithms for
differentially private machine learning. It is a fundamental question to
quantify their privacy leakage, yet tight characterizations remain open even in
the foundational setting of convex losses. This paper improves over previous
analyses by establishing (and refining) the "privacy amplification by
iteration" phenomenon in the unifying framework of $f$-differential
privacy--which tightly captures all aspects of the privacy loss and immediately
implies tighter privacy accounting in other notions of differential privacy,
e.g., $(\varepsilon,\delta)$-DP and R\'enyi DP. Our key technical insight is
the construction of shifted interpolated processes that unravel the popular
shifted-divergences argument, enabling generalizations beyond divergence-based
relaxations of DP. Notably, this leads to the first exact privacy analysis in
the foundational setting of strongly convex optimization. Our techniques extend
to many settings: convex/strongly convex, constrained/unconstrained,
full/cyclic/stochastic batches, and all combinations thereof. As an immediate
corollary, we recover the $f$-DP characterization of the exponential mechanism
for strongly convex optimization in Gopi et al. (2022), and moreover extend
this result to more general settings.
|
[
{
"created": "Fri, 1 Mar 2024 04:50:04 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jun 2024 04:08:27 GMT",
"version": "v2"
}
] |
2024-06-13
|
[
[
"Bok",
"Jinho",
""
],
[
"Su",
"Weijie",
""
],
[
"Altschuler",
"Jason M.",
""
]
] |
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. It is a fundamental question to quantify their privacy leakage, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by establishing (and refining) the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy--which tightly captures all aspects of the privacy loss and immediately implies tighter privacy accounting in other notions of differential privacy, e.g., $(\varepsilon,\delta)$-DP and R\'enyi DP. Our key technical insight is the construction of shifted interpolated processes that unravel the popular shifted-divergences argument, enabling generalizations beyond divergence-based relaxations of DP. Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization. Our techniques extend to many settings: convex/strongly convex, constrained/unconstrained, full/cyclic/stochastic batches, and all combinations thereof. As an immediate corollary, we recover the $f$-DP characterization of the exponential mechanism for strongly convex optimization in Gopi et al. (2022), and moreover extend this result to more general settings.
|
2207.03618
|
Shannan Guan
|
Shannan Guan, Haiyan Lu, Linchao Zhu, Gengfa Fang
|
PoseGU: 3D Human Pose Estimation with Novel Human Pose Generator and
Unbiased Learning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D pose estimation has recently gained substantial interest in the computer
vision domain. Existing 3D pose estimation methods have a strong reliance on
large size well-annotated 3D pose datasets, and they suffer poor model
generalization on unseen poses due to limited diversity of 3D poses in training
sets. In this work, we propose PoseGU, a novel human pose generator that
generates diverse poses with access only to a small size of seed samples, while
equipping the Counterfactual Risk Minimization to pursue an unbiased evaluation
objective. Extensive experiments demonstrate PoseGU outperforms almost all the
state-of-the-art 3D human pose methods under consideration over three popular
benchmark datasets. Empirical analysis also proves PoseGU generates 3D poses
with improved data diversity and better generalization ability.
|
[
{
"created": "Thu, 7 Jul 2022 23:43:53 GMT",
"version": "v1"
}
] |
2022-07-11
|
[
[
"Guan",
"Shannan",
""
],
[
"Lu",
"Haiyan",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Fang",
"Gengfa",
""
]
] |
3D pose estimation has recently gained substantial interest in the computer vision domain. Existing 3D pose estimation methods have a strong reliance on large size well-annotated 3D pose datasets, and they suffer poor model generalization on unseen poses due to limited diversity of 3D poses in training sets. In this work, we propose PoseGU, a novel human pose generator that generates diverse poses with access only to a small size of seed samples, while equipping the Counterfactual Risk Minimization to pursue an unbiased evaluation objective. Extensive experiments demonstrate PoseGU outperforms almost all the state-of-the-art 3D human pose methods under consideration over three popular benchmark datasets. Empirical analysis also proves PoseGU generates 3D poses with improved data diversity and better generalization ability.
|
2104.07516
|
Chengtang Yao
|
Chengtang Yao, Yunde Jia, Huijun Di, Pengxiang Li, Yuwei Wu
|
A Decomposition Model for Stereo Matching
|
CVPR 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a decomposition model for stereo matching to solve
the problem of excessive growth in computational cost (time and memory cost) as
the resolution increases. In order to reduce the huge cost of stereo matching
at the original resolution, our model only runs dense matching at a very low
resolution and uses sparse matching at different higher resolutions to recover
the disparity of lost details scale-by-scale. After the decomposition of stereo
matching, our model iteratively fuses the sparse and dense disparity maps from
adjacent scales with an occlusion-aware mask. A refinement network is also
applied to improve the fusion result. Compared with high-performance methods
like PSMNet and GANet, our method achieves $10-100\times$ speed increase while
obtaining comparable disparity estimation results.
|
[
{
"created": "Thu, 15 Apr 2021 15:16:23 GMT",
"version": "v1"
}
] |
2021-04-16
|
[
[
"Yao",
"Chengtang",
""
],
[
"Jia",
"Yunde",
""
],
[
"Di",
"Huijun",
""
],
[
"Li",
"Pengxiang",
""
],
[
"Wu",
"Yuwei",
""
]
] |
In this paper, we present a decomposition model for stereo matching to solve the problem of excessive growth in computational cost (time and memory cost) as the resolution increases. In order to reduce the huge cost of stereo matching at the original resolution, our model only runs dense matching at a very low resolution and uses sparse matching at different higher resolutions to recover the disparity of lost details scale-by-scale. After the decomposition of stereo matching, our model iteratively fuses the sparse and dense disparity maps from adjacent scales with an occlusion-aware mask. A refinement network is also applied to improve the fusion result. Compared with high-performance methods like PSMNet and GANet, our method achieves $10-100\times$ speed increase while obtaining comparable disparity estimation results.
|
2208.01250
|
Chaozhuo Li
|
Yiding Zhang, Chaozhuo Li, Senzhang Wang, Jianxun Lian, Xing Xie
|
Geometric Interaction Augmented Graph Collaborative Filtering
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Graph-based collaborative filtering is capable of capturing the essential and
abundant collaborative signals from the high-order interactions, and has thus
received increasing research interest. Conventionally, the embeddings of
users and items are defined in the Euclidean spaces, along with the propagation
on the interaction graphs. Meanwhile, recent works point out that the
high-order interactions naturally form tree-like structures, which
the hyperbolic models thrive on. However, the interaction graphs inherently
exhibit the hybrid and nested geometric characteristics, while the existing
single geometry-based models are inadequate to fully capture such sophisticated
topological patterns. In this paper, we propose to model the user-item
interactions in a hybrid geometric space, in which the merits of Euclidean and
hyperbolic spaces are simultaneously enjoyed to learn expressive
representations. Experimental results on public datasets validate the
effectiveness of our proposal.
|
[
{
"created": "Tue, 2 Aug 2022 04:53:17 GMT",
"version": "v1"
}
] |
2022-08-03
|
[
[
"Zhang",
"Yiding",
""
],
[
"Li",
"Chaozhuo",
""
],
[
"Wang",
"Senzhang",
""
],
[
"Lian",
"Jianxun",
""
],
[
"Xie",
"Xing",
""
]
] |
Graph-based collaborative filtering is capable of capturing the essential and abundant collaborative signals from the high-order interactions, and has thus received increasing research interest. Conventionally, the embeddings of users and items are defined in the Euclidean spaces, along with the propagation on the interaction graphs. Meanwhile, recent works point out that the high-order interactions naturally form tree-like structures, which the hyperbolic models thrive on. However, the interaction graphs inherently exhibit the hybrid and nested geometric characteristics, while the existing single geometry-based models are inadequate to fully capture such sophisticated topological patterns. In this paper, we propose to model the user-item interactions in a hybrid geometric space, in which the merits of Euclidean and hyperbolic spaces are simultaneously enjoyed to learn expressive representations. Experimental results on public datasets validate the effectiveness of our proposal.
|
2003.13320
|
Kai Niu
|
Kai Niu and Yan Li
|
Polar Coded Diversity on Block Fading Channels via Polar Spectrum
|
13 pages, 5 figures
| null |
10.1109/TSP.2021.3094652
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Owing to their capacity-achieving property, polar codes have been extended to
the block fading channel, whereas most constructions involve complex iterative
calculations. In this paper, we establish a systematic framework to
analyze the error performance of polar codes in the case of block mapping and
random mapping. For both the mappings, by introducing the new concept, named
split polar spectrum, we derive the upper bound on the error probability of
polarized channel which explicitly reveals the relationship between the
diversity order L and the block-wise weight distribution of the codeword. For
the special case L=2 in the block mapping, we design the enumeration algorithm
to calculate the exact split polar spectrum based on the general MacWilliams
identities. For arbitrary diversity order in the random mapping, with the help
of uniform interleaving, we derive the approximate split polar spectrum by
combining the polar spectrum and the probability of fading pattern for a
specific weight. Furthermore, we propose the design criteria to construct polar
codes over the block fading channel. The full diversity criterion is the
primary target so as to achieve the diversity gain and the product distance
criterion requires maximizing the product of the block-wise Hamming distances
to obtain the coding gain. Guided by these design criteria, the
construction metric, named polarized diversity weight (PDW) is proposed to
design the polar codes in both mappings. Such a simple metric can construct
polar codes with similar or better performance than those based on traditional
methods in the block fading channel.
|
[
{
"created": "Mon, 30 Mar 2020 10:12:14 GMT",
"version": "v1"
}
] |
2021-08-11
|
[
[
"Niu",
"Kai",
""
],
[
"Li",
"Yan",
""
]
] |
Owing to their capacity-achieving property, polar codes have been extended to the block fading channel, whereas most constructions involve complex iterative calculations. In this paper, we establish a systematic framework to analyze the error performance of polar codes in the case of block mapping and random mapping. For both the mappings, by introducing the new concept, named split polar spectrum, we derive the upper bound on the error probability of polarized channel which explicitly reveals the relationship between the diversity order L and the block-wise weight distribution of the codeword. For the special case L=2 in the block mapping, we design the enumeration algorithm to calculate the exact split polar spectrum based on the general MacWilliams identities. For arbitrary diversity order in the random mapping, with the help of uniform interleaving, we derive the approximate split polar spectrum by combining the polar spectrum and the probability of fading pattern for a specific weight. Furthermore, we propose the design criteria to construct polar codes over the block fading channel. The full diversity criterion is the primary target so as to achieve the diversity gain and the product distance criterion requires maximizing the product of the block-wise Hamming distances to obtain the coding gain. Guided by these design criteria, the construction metric, named polarized diversity weight (PDW) is proposed to design the polar codes in both mappings. Such a simple metric can construct polar codes with similar or better performance than those based on traditional methods in the block fading channel.
|
2209.13803
|
Luo Ping
|
Ping Luo, Jieren Cheng, Zhenhao Liu, N.Xiong, Jie Wu
|
FedVeca: Federated Vectorized Averaging on Non-IID Data with Adaptive
Bi-directional Global Objective
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL) is a distributed machine learning framework to
alleviate data silos, where decentralized clients collaboratively learn a
global model without sharing their private data. However, the clients'
Non-Independent and Identically Distributed (Non-IID) data negatively affect
the trained model, and clients with different numbers of local updates may
cause significant gaps to the local gradients in each communication round. In
this paper, we propose a Federated Vectorized Averaging (FedVeca) method to
address the above problem on Non-IID data. Specifically, we set a novel
objective for the global model which is related to the local gradients. The
local gradient is defined as a bi-directional vector with step size and
direction, where the step size is the number of local updates and the direction
is divided into positive and negative according to our definition. In FedVeca,
the direction is influenced by the step size, thus we average the
bi-directional vectors to reduce the effect of different step sizes. Then, we
theoretically analyze the relationship between the step sizes and the global
objective, and obtain upper bounds on the step sizes per communication round.
Based on the upper bounds, we design an algorithm for the server and the client
to adaptively adjust the step sizes that make the objective close to the
optimum. Finally, we conduct experiments on different datasets, models and
scenarios by building a prototype system, and the experimental results
demonstrate the effectiveness and efficiency of the FedVeca method.
|
[
{
"created": "Wed, 28 Sep 2022 03:14:10 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Feb 2023 11:14:06 GMT",
"version": "v2"
}
] |
2023-02-08
|
[
[
"Luo",
"Ping",
""
],
[
"Cheng",
"Jieren",
""
],
[
"Liu",
"Zhenhao",
""
],
[
"Xiong",
"N.",
""
],
[
"Wu",
"Jie",
""
]
] |
Federated Learning (FL) is a distributed machine learning framework to alleviate data silos, where decentralized clients collaboratively learn a global model without sharing their private data. However, the clients' Non-Independent and Identically Distributed (Non-IID) data negatively affect the trained model, and clients with different numbers of local updates may cause significant gaps to the local gradients in each communication round. In this paper, we propose a Federated Vectorized Averaging (FedVeca) method to address the above problem on Non-IID data. Specifically, we set a novel objective for the global model which is related to the local gradients. The local gradient is defined as a bi-directional vector with step size and direction, where the step size is the number of local updates and the direction is divided into positive and negative according to our definition. In FedVeca, the direction is influenced by the step size, thus we average the bi-directional vectors to reduce the effect of different step sizes. Then, we theoretically analyze the relationship between the step sizes and the global objective, and obtain upper bounds on the step sizes per communication round. Based on the upper bounds, we design an algorithm for the server and the client to adaptively adjust the step sizes that make the objective close to the optimum. Finally, we conduct experiments on different datasets, models and scenarios by building a prototype system, and the experimental results demonstrate the effectiveness and efficiency of the FedVeca method.
|
2002.01358
|
Shan Zhang
|
Xiao Ma, Ao Zhou, Shan Zhang, Shangguang Wang
|
Cooperative Service Caching and Workload Scheduling in Mobile Edge
Computing
|
INFOCOM 2020
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile edge computing is beneficial to reduce service response time and core
network traffic by pushing cloud functionalities to network edge. Equipped with
storage and computation capacities, edge nodes can cache services of
resource-intensive and delay-sensitive mobile applications and process the
corresponding computation tasks without outsourcing to central clouds. However,
the heterogeneity of edge resource capacities and inconsistency of edge storage
and computation capacities make it difficult to jointly fully utilize the
storage and computation capacities when there is no cooperation among edge
nodes. To address this issue, we consider cooperation among edge nodes and
investigate cooperative service caching and workload scheduling in mobile edge
computing. This problem can be formulated as a mixed integer nonlinear
programming problem, which has non-polynomial computation complexity. To
overcome the challenges of subproblem coupling, computation-communication
tradeoff, and edge node heterogeneity, we develop an iterative algorithm called
ICE. This algorithm is designed based on Gibbs sampling, which has provably
near-optimal results, and the idea of water filling, which has polynomial
computation complexity. Simulations are conducted and the results demonstrate
that our algorithm can jointly reduce the service response time and the
outsourcing traffic compared with the benchmark algorithms.
|
[
{
"created": "Tue, 4 Feb 2020 15:06:44 GMT",
"version": "v1"
}
] |
2020-02-05
|
[
[
"Ma",
"Xiao",
""
],
[
"Zhou",
"Ao",
""
],
[
"Zhang",
"Shan",
""
],
[
"Wang",
"Shangguang",
""
]
] |
Mobile edge computing is beneficial to reduce service response time and core network traffic by pushing cloud functionalities to network edge. Equipped with storage and computation capacities, edge nodes can cache services of resource-intensive and delay-sensitive mobile applications and process the corresponding computation tasks without outsourcing to central clouds. However, the heterogeneity of edge resource capacities and inconsistency of edge storage and computation capacities make it difficult to jointly fully utilize the storage and computation capacities when there is no cooperation among edge nodes. To address this issue, we consider cooperation among edge nodes and investigate cooperative service caching and workload scheduling in mobile edge computing. This problem can be formulated as a mixed integer nonlinear programming problem, which has non-polynomial computation complexity. To overcome the challenges of subproblem coupling, computation-communication tradeoff, and edge node heterogeneity, we develop an iterative algorithm called ICE. This algorithm is designed based on Gibbs sampling, which has provably near-optimal results, and the idea of water filling, which has polynomial computation complexity. Simulations are conducted and the results demonstrate that our algorithm can jointly reduce the service response time and the outsourcing traffic compared with the benchmark algorithms.
|
1907.12891
|
Olivier Rukundo
|
Olivier Rukundo
|
4X4 Census Transform
|
3 pages, 9 figures, 2 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a 4X4 Census Transform (4X4CT) to encourage further
research in computer vision and visual computing. Unlike the traditional 3X3 CT
which uses a nine-pixel kernel, the proposed 4X4CT uses a sixteen-pixel
kernel with four overlapped groups of 3X3 kernel size. In each overlapping
group, a reference input pixel profits from its nearest eight pixels to produce
an eight-bit binary string convertible to a grayscale integer of the 4X4CT's
output pixel. Preliminary experiments demonstrated more image textural
crispness and contrast than the CT as well as alternativeness to enable
meaningful solutions to be achieved.
|
[
{
"created": "Tue, 30 Jul 2019 13:30:45 GMT",
"version": "v1"
}
] |
2019-07-31
|
[
[
"Rukundo",
"Olivier",
""
]
] |
This paper proposes a 4X4 Census Transform (4X4CT) to encourage further research in computer vision and visual computing. Unlike the traditional 3X3 CT which uses a nine-pixel kernel, the proposed 4X4CT uses a sixteen-pixel kernel with four overlapped groups of 3X3 kernel size. In each overlapping group, a reference input pixel profits from its nearest eight pixels to produce an eight-bit binary string convertible to a grayscale integer of the 4X4CT's output pixel. Preliminary experiments demonstrated more image textural crispness and contrast than the CT as well as alternativeness to enable meaningful solutions to be achieved.
|
2108.01454
|
Albert Weichselbraun
|
Albert Weichselbraun
|
Inscriptis -- A Python-based HTML to text conversion library optimized
for knowledge extraction from the Web
|
Preprint of the published version, which includes all improvements
made during the review process
|
Journal of Open Source Software (2021), 6(66), 3557
|
10.21105/joss.03557
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Inscriptis provides a library, command line client and Web service for
converting HTML to plain text. Its development has been triggered by the need
to obtain accurate text representations for knowledge extraction tasks that
preserve the spatial alignment of text without drawing upon heavyweight,
browser-based solutions such as Selenium. In contrast to related software
packages, Inscriptis (i) provides a layout-aware conversion of HTML that more
closely resembles the rendering obtained from standard Web browsers; and (ii)
supports annotation rules, i.e., user-provided mappings that allow for
annotating the extracted text based on structural and semantic information
encoded in HTML tags and attributes. These unique features ensure that
downstream knowledge extraction components can operate on accurate text
representations, and may even use information on the semantics and structure of
the original HTML document.
|
[
{
"created": "Mon, 12 Jul 2021 12:40:43 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Oct 2021 07:04:02 GMT",
"version": "v2"
}
] |
2021-10-25
|
[
[
"Weichselbraun",
"Albert",
""
]
] |
Inscriptis provides a library, command line client and Web service for converting HTML to plain text. Its development has been triggered by the need to obtain accurate text representations for knowledge extraction tasks that preserve the spatial alignment of text without drawing upon heavyweight, browser-based solutions such as Selenium. In contrast to related software packages, Inscriptis (i) provides a layout-aware conversion of HTML that more closely resembles the rendering obtained from standard Web browsers; and (ii) supports annotation rules, i.e., user-provided mappings that allow for annotating the extracted text based on structural and semantic information encoded in HTML tags and attributes. These unique features ensure that downstream knowledge extraction components can operate on accurate text representations, and may even use information on the semantics and structure of the original HTML document.
|
2112.04019
|
Hongwei Zhu
|
Hongwei Zhu, Minjia Shi
|
The b-symbol weight hierarchy of the Kasami codes
| null | null | null | null |
cs.IT math.GR math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The symbol-pair read channel was first proposed by Cassuto and Blaum. Later,
Yaakobi et al. generalized it to the $b$-symbol read channel. It is motivated
by the limitations of the reading process in high density data storage systems.
One main task in $b$-symbol coding theory is to determine the $b$-symbol weight
hierarchy of codes. In this paper, we study the $b$-symbol weight hierarchy of
the Kasami codes, which are well known for their applications to construct
sequences with optimal correlation magnitudes. The complete symbol-pair weight
distribution of the Kasami codes is determined.
|
[
{
"created": "Tue, 7 Dec 2021 22:19:05 GMT",
"version": "v1"
}
] |
2021-12-09
|
[
[
"Zhu",
"Hongwei",
""
],
[
"Shi",
"Minjia",
""
]
] |
The symbol-pair read channel was first proposed by Cassuto and Blaum. Later, Yaakobi et al. generalized it to the $b$-symbol read channel. It is motivated by the limitations of the reading process in high density data storage systems. One main task in $b$-symbol coding theory is to determine the $b$-symbol weight hierarchy of codes. In this paper, we study the $b$-symbol weight hierarchy of the Kasami codes, which are well known for their applications to construct sequences with optimal correlation magnitudes. The complete symbol-pair weight distribution of the Kasami codes is determined.
|
1009.1407
|
Grenville Croll
|
Sebastian Dewhurst
|
Transforming Critical Spreadsheets into Web Applications at Zurich
Financial
|
10 pages, 6 colour figures; ISBN 978-1-905404-50-6
|
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 23-32
| null | null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the insurance industry, spreadsheets have emerged as an invaluable tool
for product pricing, because it is relatively straightforward to create and
maintain complex pricing models using Excel. In fact, Excel is often preferred
to "hard-code" whenever there are frequent changes to the calculations and
business logic which under-pin the pricing of an insurance product. However,
problems arise as soon as spreadsheets are deployed to end-users: version
control, security of intellectual property, and ensuring correct usage are
obvious issues; frequently, integration with other systems is also a
requirement. Zurich Financial Services Group is a leading financial services
provider; several possible solutions to these problems have been evaluated, and
EASA has been selected as the preferred technology. Other spreadsheet
collaboration approaches which were considered include Excel Services, and/or
custom-built software; however, EASA has provided clear benefits over these
strategies.
|
[
{
"created": "Tue, 7 Sep 2010 21:15:47 GMT",
"version": "v1"
}
] |
2010-09-09
|
[
[
"Dewhurst",
"Sebastian",
""
]
] |
In the insurance industry, spreadsheets have emerged as an invaluable tool for product pricing, because it is relatively straightforward to create and maintain complex pricing models using Excel. In fact, Excel is often preferred to "hard-code" whenever there are frequent changes to the calculations and business logic which under-pin the pricing of an insurance product. However, problems arise as soon as spreadsheets are deployed to end-users: version control, security of intellectual property, and ensuring correct usage are obvious issues; frequently, integration with other systems is also a requirement. Zurich Financial Services Group is a leading financial services provider; several possible solutions to these problems have been evaluated, and EASA has been selected as the preferred technology. Other spreadsheet collaboration approaches which were considered include Excel Services, and/or custom-built software; however, EASA has provided clear benefits over these strategies.
|
2310.18912
|
Hao Zhang
|
Hao Zhang, Yang Liu, Xiaoyan Liu, Tianming Liang, Gaurav Sharma, Liang
Xue, and Maozu Guo
|
Sentence Bag Graph Formulation for Biomedical Distant Supervision
Relation Extraction
|
in IEEE Transactions on Knowledge and Data Engineering, 2024
| null |
10.1109/TKDE.2024.3377229
| null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel graph-based framework for alleviating key challenges in
distantly-supervised relation extraction and demonstrate its effectiveness in
the challenging and important domain of biomedical data. Specifically, we
propose a graph view of sentence bags referring to an entity pair, which
enables message-passing based aggregation of information related to the entity
pair over the sentence bag. The proposed framework alleviates the common
problem of noisy labeling in distantly supervised relation extraction and also
effectively incorporates inter-dependencies between sentences within a bag.
Extensive experiments on two large-scale biomedical relation datasets and the
widely utilized NYT dataset demonstrate that our proposed framework
significantly outperforms the state-of-the-art methods for biomedical distant
supervision relation extraction while also providing excellent performance for
relation extraction in the general text mining domain.
|
[
{
"created": "Sun, 29 Oct 2023 05:48:04 GMT",
"version": "v1"
}
] |
2024-04-08
|
[
[
"Zhang",
"Hao",
""
],
[
"Liu",
"Yang",
""
],
[
"Liu",
"Xiaoyan",
""
],
[
"Liang",
"Tianming",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Xue",
"Liang",
""
],
[
"Guo",
"Maozu",
""
]
] |
We introduce a novel graph-based framework for alleviating key challenges in distantly-supervised relation extraction and demonstrate its effectiveness in the challenging and important domain of biomedical data. Specifically, we propose a graph view of sentence bags referring to an entity pair, which enables message-passing based aggregation of information related to the entity pair over the sentence bag. The proposed framework alleviates the common problem of noisy labeling in distantly supervised relation extraction and also effectively incorporates inter-dependencies between sentences within a bag. Extensive experiments on two large-scale biomedical relation datasets and the widely utilized NYT dataset demonstrate that our proposed framework significantly outperforms the state-of-the-art methods for biomedical distant supervision relation extraction while also providing excellent performance for relation extraction in the general text mining domain.
|
2405.19773
|
Jasper Uijlings
|
Tautvydas Misiunas and Hassan Mansoor and Jasper Uijlings and Oriana
Riva and Victor Carbune
|
VQA Training Sets are Self-play Environments for Generating Few-shot
Pools
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Large-language models and large-vision models are increasingly capable of
solving compositional reasoning tasks, as measured by breakthroughs in
visual-question answering benchmarks. However, state-of-the-art solutions often
involve careful construction of large pre-training and fine-tuning datasets,
which can be expensive. The use of external tools, whether other ML models,
search engines, or APIs, can significantly improve performance by breaking down
high-level reasoning questions into sub-questions that are answerable by
individual tools, but this approach has similar dataset construction costs to
teach fine-tuned models how to use the available tools. We propose a technique
in which existing training sets can be directly used for constructing
computational environments with task metrics as rewards. This enables a model
to autonomously teach itself to use itself or another model as a tool. By doing
so, we augment training sets by integrating external signals. The proposed
method starts with zero-shot prompts and iteratively refines them by selecting
few-shot examples that maximize the task metric on the training set. Our
experiments showcase how Gemini learns how to use itself, or another smaller
and specialized model such as ScreenAI, to iteratively improve performance on
training sets. Our approach successfully generalizes and improves upon zero-shot
performance on charts, infographics, and document visual question-answering
datasets.
|
[
{
"created": "Thu, 30 May 2024 07:38:58 GMT",
"version": "v1"
}
] |
2024-05-31
|
[
[
"Misiunas",
"Tautvydas",
""
],
[
"Mansoor",
"Hassan",
""
],
[
"Uijlings",
"Jasper",
""
],
[
"Riva",
"Oriana",
""
],
[
"Carbune",
"Victor",
""
]
] |
Large-language models and large-vision models are increasingly capable of solving compositional reasoning tasks, as measured by breakthroughs in visual-question answering benchmarks. However, state-of-the-art solutions often involve careful construction of large pre-training and fine-tuning datasets, which can be expensive. The use of external tools, whether other ML models, search engines, or APIs, can significantly improve performance by breaking down high-level reasoning questions into sub-questions that are answerable by individual tools, but this approach has similar dataset construction costs to teach fine-tuned models how to use the available tools. We propose a technique in which existing training sets can be directly used for constructing computational environments with task metrics as rewards. This enables a model to autonomously teach itself to use itself or another model as a tool. By doing so, we augment training sets by integrating external signals. The proposed method starts with zero-shot prompts and iteratively refines them by selecting few-shot examples that maximize the task metric on the training set. Our experiments showcase how Gemini learns how to use itself, or another smaller and specialized model such as ScreenAI, to iteratively improve performance on training sets. Our approach successfully generalizes and improves upon zero-shot performance on charts, infographics, and document visual question-answering datasets.
|
cs/0511012
|
Hamilton Link
|
Hamilton Link and Randall A. LaViolette and Jared Saia and Terran Lane
|
Parameters Affecting the Resilience of Scale-Free Networks to Random
Failures
|
12 pages, 7 figures. Submitting to Phys. Rev. Lett
| null | null | null |
cs.NI cs.AR cs.MA
| null |
It is commonly believed that scale-free networks are robust to massive
numbers of random node deletions. For example, Cohen et al. study scale-free
networks including some which approximate the measured degree distribution of
the Internet. Their results suggest that if each node in this network failed
independently with probability 0.99, the remaining network would continue to
have a giant component. In this paper, we show that a large and important
subclass of scale-free networks are not robust to massive numbers of random
node deletions for practical purposes. In particular, we study finite
scale-free networks which have minimum node degree of 1 and a power-law degree
distribution beginning with nodes of degree 1 (power-law networks). We show
that, in a power-law network approximating the Internet's reported
distribution, when the probability of deletion of each node is 0.5 only about
25% of the surviving nodes in the network remain connected in a giant
component, and the giant component does not persist beyond a critical failure
rate of 0.9. The new result is partially due to improved analytical
accommodation of the large number of degree-0 nodes that result after node
deletions. Our results apply to finite power-law networks with a wide range of
power-law exponents, including Internet-like networks. We give both analytical
and empirical evidence that such networks are not generally robust to massive
random node deletions.
|
[
{
"created": "Wed, 2 Nov 2005 22:22:14 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Link",
"Hamilton",
""
],
[
"LaViolette",
"Randall A.",
""
],
[
"Saia",
"Jared",
""
],
[
"Lane",
"Terran",
""
]
] |
It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, the remaining network would continue to have a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions for practical purposes. In particular, we study finite scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to finite power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
|
2406.10738
|
Yao Zhao
|
Yao Zhao, Kwang-Sung Jun, Tanner Fiez, Lalit Jain
|
Adaptive Experimentation When You Can't Experiment
| null | null | null | null |
cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces the \emph{confounded pure exploration transductive
linear bandit} (\texttt{CPET-LB}) problem. As a motivating example, often
online services cannot directly assign users to specific control or treatment
experiences either for business or practical reasons. In these settings,
naively comparing treatment and control groups that may result from
self-selection can lead to biased estimates of underlying treatment effects.
Instead, online services can employ a properly randomized encouragement that
incentivizes users toward a specific treatment. Our methodology provides online
services with an adaptive experimental design approach for learning the
best-performing treatment for such \textit{encouragement designs}. We consider
a more general underlying model captured by a linear structural equation and
formulate pure exploration linear bandits in this setting. Though pure
exploration has been extensively studied in standard adaptive experimental
design settings, we believe this is the first work considering a setting where
noise is confounded. Elimination-style algorithms using experimental design
methods in combination with a novel finite-time confidence interval on an
instrumental variable style estimator are presented with sample complexity
upper bounds nearly matching a minimax lower bound. Finally, experiments are
conducted that demonstrate the efficacy of our approach.
|
[
{
"created": "Sat, 15 Jun 2024 20:54:48 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Zhao",
"Yao",
""
],
[
"Jun",
"Kwang-Sung",
""
],
[
"Fiez",
"Tanner",
""
],
[
"Jain",
"Lalit",
""
]
] |
This paper introduces the \emph{confounded pure exploration transductive linear bandit} (\texttt{CPET-LB}) problem. As a motivating example, often online services cannot directly assign users to specific control or treatment experiences either for business or practical reasons. In these settings, naively comparing treatment and control groups that may result from self-selection can lead to biased estimates of underlying treatment effects. Instead, online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment. Our methodology provides online services with an adaptive experimental design approach for learning the best-performing treatment for such \textit{encouragement designs}. We consider a more general underlying model captured by a linear structural equation and formulate pure exploration linear bandits in this setting. Though pure exploration has been extensively studied in standard adaptive experimental design settings, we believe this is the first work considering a setting where noise is confounded. Elimination-style algorithms using experimental design methods in combination with a novel finite-time confidence interval on an instrumental variable style estimator are presented with sample complexity upper bounds nearly matching a minimax lower bound. Finally, experiments are conducted that demonstrate the efficacy of our approach.
|
1611.03898
|
Thai Pham
|
Derek Farren and Thai Pham and Marco Alban-Hidalgo
|
Low Latency Anomaly Detection and Bayesian Network Prediction of Anomaly
Likelihood
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a supervised machine learning model that detects anomalies in
systems in real time. Our model processes unbounded streams of data into time
series which then form the basis of a low-latency anomaly detection model.
Moreover, we extend our preliminary goal of just anomaly detection to
simultaneous anomaly prediction. We approach this very challenging problem by
developing a Bayesian Network framework that captures the information about the
parameters of the lagged regressors calibrated in the first part of our
approach and use this structure to learn local conditional probability
distributions.
|
[
{
"created": "Fri, 11 Nov 2016 22:20:41 GMT",
"version": "v1"
}
] |
2016-11-16
|
[
[
"Farren",
"Derek",
""
],
[
"Pham",
"Thai",
""
],
[
"Alban-Hidalgo",
"Marco",
""
]
] |
We develop a supervised machine learning model that detects anomalies in systems in real time. Our model processes unbounded streams of data into time series which then form the basis of a low-latency anomaly detection model. Moreover, we extend our preliminary goal of just anomaly detection to simultaneous anomaly prediction. We approach this very challenging problem by developing a Bayesian Network framework that captures the information about the parameters of the lagged regressors calibrated in the first part of our approach and use this structure to learn local conditional probability distributions.
|
2211.16958
|
Prerak Srivastava
|
Prerak Srivastava, Antoine Deleforge, Archontis Politis, Emmanuel
Vincent
|
How to (virtually) train your speaker localizer
|
Published in INTERSPEECH 2023
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Learning-based methods have become ubiquitous in speaker localization.
Existing systems rely on simulated training sets due to the lack of sufficiently
large, diverse and annotated real datasets. Most room acoustics simulators used
for this purpose rely on the image source method (ISM) because of its
computational efficiency. This paper argues that carefully extending the ISM to
incorporate more realistic surface, source and microphone responses into
training sets can significantly boost the real-world performance of speaker
localization systems. It is shown that increasing the training-set realism of a
state-of-the-art direction-of-arrival estimator yields consistent improvements
across three different real test sets featuring human speakers in a variety of
rooms and various microphone arrays. An ablation study further reveals that
every added layer of realism contributes positively to these improvements.
|
[
{
"created": "Wed, 30 Nov 2022 13:01:11 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2023 14:51:43 GMT",
"version": "v2"
}
] |
2023-05-26
|
[
[
"Srivastava",
"Prerak",
""
],
[
"Deleforge",
"Antoine",
""
],
[
"Politis",
"Archontis",
""
],
[
"Vincent",
"Emmanuel",
""
]
] |
Learning-based methods have become ubiquitous in speaker localization. Existing systems rely on simulated training sets due to the lack of sufficiently large, diverse and annotated real datasets. Most room acoustics simulators used for this purpose rely on the image source method (ISM) because of its computational efficiency. This paper argues that carefully extending the ISM to incorporate more realistic surface, source and microphone responses into training sets can significantly boost the real-world performance of speaker localization systems. It is shown that increasing the training-set realism of a state-of-the-art direction-of-arrival estimator yields consistent improvements across three different real test sets featuring human speakers in a variety of rooms and various microphone arrays. An ablation study further reveals that every added layer of realism contributes positively to these improvements.
|
1808.04337
|
Samir Chowdhury
|
Samir Chowdhury and Facundo M\'emoli
|
The Gromov-Wasserstein distance between networks and stable network
invariants
|
To appear in Information and Inference. Current version is a
substantial update from the previous version and includes new computational
experiments and also new results on the Gromov-Prokhorov distance between
spheres
| null | null | null |
cs.DM math.MG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define a metric---the network Gromov-Wasserstein distance---on weighted,
directed networks that is sensitive to the presence of outliers. In addition to
proving its theoretical properties, we supply network invariants based on
optimal transport that approximate this distance by means of lower bounds. We
test these methods on a range of simulated network datasets and on a dataset of
real-world global bilateral migration. For our simulations, we define a network
generative model based on the stochastic block model. This may be of
independent interest for benchmarking purposes.
|
[
{
"created": "Mon, 13 Aug 2018 17:24:45 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Sep 2019 14:46:48 GMT",
"version": "v2"
}
] |
2019-09-05
|
[
[
"Chowdhury",
"Samir",
""
],
[
"Mémoli",
"Facundo",
""
]
] |
We define a metric---the network Gromov-Wasserstein distance---on weighted, directed networks that is sensitive to the presence of outliers. In addition to proving its theoretical properties, we supply network invariants based on optimal transport that approximate this distance by means of lower bounds. We test these methods on a range of simulated network datasets and on a dataset of real-world global bilateral migration. For our simulations, we define a network generative model based on the stochastic block model. This may be of independent interest for benchmarking purposes.
|
2301.10827
|
Matthew Alan Le Brun
|
Matthew Alan Le Brun and Ornela Dardha
|
MAG$\pi$: Types for Failure-Prone Communication
|
To be published in ESOP'23
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Multiparty Session Types (MPST) are a typing discipline for
communication-centric systems, guaranteeing communication safety, deadlock
freedom and protocol compliance. Several works have emerged which model
failures and introduce fault-tolerance techniques. However, such works often
make assumptions on the underlying network, e.g., TCP-based communication where
messages are guaranteed to be delivered; or adopt centralised reliable nodes
and an ad-hoc notion of reliability; or only address a single kind of failure,
such as node crash failures. In this work, we develop MAG$\pi$ -- a Multiparty,
Asynchronous and Generalised $\pi$-calculus, which is the first language and
type system to accommodate in unison: (i) the widest range of non-Byzantine
faults, including message loss, delays and reordering; crash failures and link
failures; and network partitioning; (ii) a novel and most general notion of
reliability, taking into account the viewpoint of each participant in the
protocol; (iii) a spectrum of network assumptions from the lowest UDP-based
network programming to the TCP-based application level. We prove subject
reduction and session fidelity; process properties (deadlock freedom,
termination, etc.); failure-handling safety and reliability adherence.
|
[
{
"created": "Wed, 25 Jan 2023 21:04:02 GMT",
"version": "v1"
}
] |
2023-01-27
|
[
[
"Brun",
"Matthew Alan Le",
""
],
[
"Dardha",
"Ornela",
""
]
] |
Multiparty Session Types (MPST) are a typing discipline for communication-centric systems, guaranteeing communication safety, deadlock freedom and protocol compliance. Several works have emerged which model failures and introduce fault-tolerance techniques. However, such works often make assumptions on the underlying network, e.g., TCP-based communication where messages are guaranteed to be delivered; or adopt centralised reliable nodes and an ad-hoc notion of reliability; or only address a single kind of failure, such as node crash failures. In this work, we develop MAG$\pi$ -- a Multiparty, Asynchronous and Generalised $\pi$-calculus, which is the first language and type system to accommodate in unison: (i) the widest range of non-Byzantine faults, including message loss, delays and reordering; crash failures and link failures; and network partitioning; (ii) a novel and most general notion of reliability, taking into account the viewpoint of each participant in the protocol; (iii) a spectrum of network assumptions from the lowest UDP-based network programming to the TCP-based application level. We prove subject reduction and session fidelity; process properties (deadlock freedom, termination, etc.); failure-handling safety and reliability adherence.
|
2110.09570
|
Arijit Nag
|
Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, Soumen
Chakrabarti
|
A Data Bootstrapping Recipe for Low Resource Multilingual Relation
Classification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relation classification (sometimes called 'extraction') requires trustworthy
datasets for fine-tuning large language models, as well as for evaluation. Data
collection is challenging for Indian languages, because they are syntactically
and morphologically diverse, as well as different from resource-rich languages
like English. Despite recent interest in deep generative models for Indian
languages, relation classification is still not well served by public data
sets. In response, we present IndoRE, a dataset with 21K entity and relation
tagged gold sentences in three Indian languages, plus English. We start with a
multilingual BERT (mBERT) based system that captures entity span positions and
type information and provides competitive monolingual relation classification.
Using this system, we explore and compare transfer mechanisms between
languages. In particular, we study the accuracy-efficiency tradeoff between
expensive gold instances vs. translated and aligned 'silver' instances. We
release the dataset for future research.
|
[
{
"created": "Mon, 18 Oct 2021 18:40:46 GMT",
"version": "v1"
}
] |
2021-10-20
|
[
[
"Nag",
"Arijit",
""
],
[
"Samanta",
"Bidisha",
""
],
[
"Mukherjee",
"Animesh",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Chakrabarti",
"Soumen",
""
]
] |
Relation classification (sometimes called 'extraction') requires trustworthy datasets for fine-tuning large language models, as well as for evaluation. Data collection is challenging for Indian languages, because they are syntactically and morphologically diverse, as well as different from resource-rich languages like English. Despite recent interest in deep generative models for Indian languages, relation classification is still not well served by public data sets. In response, we present IndoRE, a dataset with 21K entity and relation tagged gold sentences in three Indian languages, plus English. We start with a multilingual BERT (mBERT) based system that captures entity span positions and type information and provides competitive monolingual relation classification. Using this system, we explore and compare transfer mechanisms between languages. In particular, we study the accuracy-efficiency tradeoff between expensive gold instances vs. translated and aligned 'silver' instances. We release the dataset for future research.
|
1805.12081
|
Md. Mostafa Kamal Sarker
|
Md. Mostafa Kamal Sarker, Mohammed Jabreel, Hatem A. Rashwan, Syeda
Furruka Banu, Antonio Moreno, Petia Radeva, Domenec Puig
|
CuisineNet: Food Attributes Classification using Multi-scale Convolution
Network
|
8 pages, Submitted in CCIA 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diversity of food and its attributes represents the culinary habits of
peoples from different countries. Thus, this paper addresses the problem of
identifying food culture of people around the world and its flavor by
classifying two main food attributes, cuisine and flavor. A deep learning model
based on multi-scale convolutional networks is proposed for extracting more
accurate features from input images. The aggregation of multi-scale convolution
layers with different kernel sizes is also used for weighting the features
resulting from different scales. In addition, a joint loss function based on
Negative Log Likelihood (NLL) is used to fit the model probability to
multi-labeled classes for the multi-modal classification task. Furthermore, this
work provides a new dataset for food attributes, called Yummly48K, extracted
from the popular food website, Yummly. Our model is assessed on the constructed
Yummly48K dataset. The experimental results show that our proposed method
yields 65% and 62% average F1 scores on the validation and test sets,
outperforming the state-of-the-art models.
|
[
{
"created": "Wed, 30 May 2018 16:56:32 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jun 2018 16:44:15 GMT",
"version": "v2"
}
] |
2018-06-11
|
[
[
"Sarker",
"Md. Mostafa Kamal",
""
],
[
"Jabreel",
"Mohammed",
""
],
[
"Rashwan",
"Hatem A.",
""
],
[
"Banu",
"Syeda Furruka",
""
],
[
"Moreno",
"Antonio",
""
],
[
"Radeva",
"Petia",
""
],
[
"Puig",
"Domenec",
""
]
] |
Diversity of food and its attributes represents the culinary habits of peoples from different countries. Thus, this paper addresses the problem of identifying food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website, Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, outperforming the state-of-the-art models.
|
1705.04269
|
Xingqin Lin
|
Xingqin Lin, Johan Bergman, Fredrik Gunnarsson, Olof Liberg, Sara
Modarres Razavi, Hazhir Shokri Razaghi, Henrik Ryd\'en, and Yutao Sui
|
Positioning for the Internet of Things: A 3GPP Perspective
|
8 pages; 7 figures; 1 table; submitted for publication
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many use cases in the Internet of Things (IoT) will require or benefit from
location information, making positioning a vital dimension of the IoT. The 3rd
Generation Partnership Project (3GPP) has dedicated a significant effort during
its Release 14 to enhance positioning support for its IoT technologies to
further improve the 3GPP-based IoT ecosystem. In this article, we identify the
design challenges of positioning support in Long-Term Evolution Machine Type
Communication (LTE-M) and Narrowband IoT (NB-IoT), and overview the 3GPP's work
in enhancing the positioning support for LTE-M and NB-IoT. We focus on Observed
Time Difference of Arrival (OTDOA), which is a downlink-based positioning
method. We provide an overview of the OTDOA architecture and protocols,
summarize the designs of OTDOA positioning reference signals, and present
simulation results to illustrate the positioning performance.
|
[
{
"created": "Thu, 13 Apr 2017 00:37:56 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2017 16:03:38 GMT",
"version": "v2"
}
] |
2017-06-19
|
[
[
"Lin",
"Xingqin",
""
],
[
"Bergman",
"Johan",
""
],
[
"Gunnarsson",
"Fredrik",
""
],
[
"Liberg",
"Olof",
""
],
[
"Razavi",
"Sara Modarres",
""
],
[
"Razaghi",
"Hazhir Shokri",
""
],
[
"Rydén",
"Henrik",
""
],
[
"Sui",
"Yutao",
""
]
] |
Many use cases in the Internet of Things (IoT) will require or benefit from location information, making positioning a vital dimension of the IoT. The 3rd Generation Partnership Project (3GPP) has dedicated a significant effort during its Release 14 to enhance positioning support for its IoT technologies to further improve the 3GPP-based IoT ecosystem. In this article, we identify the design challenges of positioning support in Long-Term Evolution Machine Type Communication (LTE-M) and Narrowband IoT (NB-IoT), and overview the 3GPP's work in enhancing the positioning support for LTE-M and NB-IoT. We focus on Observed Time Difference of Arrival (OTDOA), which is a downlink-based positioning method. We provide an overview of the OTDOA architecture and protocols, summarize the designs of OTDOA positioning reference signals, and present simulation results to illustrate the positioning performance.
|
1903.12271
|
Mahmoud El-Haj
|
Mahmoud El-Haj, Paul Rayson, Martin Walker, Steven Young, Vasiliki
Simaki
|
In Search of Meaning: Lessons, Resources and Next Steps for
Computational Analysis of Financial Discourse
|
70 pages, 18 pages of references, Journal Article
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We critically assess mainstream accounting and finance research applying
methods from computational linguistics (CL) to study financial discourse. We
also review common themes and innovations in the literature and assess the
incremental contributions of work applying CL methods over manual content
analysis. Key conclusions emerging from our analysis are: (a) accounting and
finance research is behind the curve in terms of CL methods generally and word
sense disambiguation in particular; (b) implementation issues mean the proposed
benefits of CL are often less pronounced than proponents suggest; (c)
structural issues limit practical relevance; and (d) CL methods and high
quality manual analysis represent complementary approaches to analyzing
financial discourse. We describe four CL tools that have yet to gain traction
in mainstream AF research but which we believe offer promising ways to enhance
the study of meaning in financial discourse. The four tools are named entity
recognition (NER), summarization, semantics and corpus linguistics.
|
[
{
"created": "Thu, 28 Mar 2019 21:12:59 GMT",
"version": "v1"
}
] |
2019-04-01
|
[
[
"El-Haj",
"Mahmoud",
""
],
[
"Rayson",
"Paul",
""
],
[
"Walker",
"Martin",
""
],
[
"Young",
"Steven",
""
],
[
"Simaki",
"Vasiliki",
""
]
] |
We critically assess mainstream accounting and finance research applying methods from computational linguistics (CL) to study financial discourse. We also review common themes and innovations in the literature and assess the incremental contributions of work applying CL methods over manual content analysis. Key conclusions emerging from our analysis are: (a) accounting and finance research is behind the curve in terms of CL methods generally and word sense disambiguation in particular; (b) implementation issues mean the proposed benefits of CL are often less pronounced than proponents suggest; (c) structural issues limit practical relevance; and (d) CL methods and high quality manual analysis represent complementary approaches to analyzing financial discourse. We describe four CL tools that have yet to gain traction in mainstream AF research but which we believe offer promising ways to enhance the study of meaning in financial discourse. The four tools are named entity recognition (NER), summarization, semantics and corpus linguistics.
|
1510.08301
|
Wonju Lee
|
Wonju Lee, Osvaldo Simeone, Joonhyuk Kang and Shlomo Shamai
|
Multivariate Fronthaul Quantization for Downlink C-RAN
|
Submitted
| null |
10.1109/TSP.2016.2593682
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Cloud-Radio Access Network (C-RAN) cellular architecture relies on the
transfer of complex baseband signals to and from a central unit (CU) over
digital fronthaul links to enable the virtualization of the baseband processing
functionalities of distributed radio units (RUs). The standard design of
digital fronthauling is based on either scalar quantization or on more
sophisticated point-to-point compression techniques operating on baseband
signals. Motivated by network-information theoretic results, techniques for
fronthaul quantization and compression that improve over point-to-point
solutions by allowing for joint processing across multiple fronthaul links at
the CU have been recently proposed for both the uplink and the downlink. For
the downlink, a form of joint compression, known in network information theory
as multivariate compression, was shown to be advantageous under a
non-constructive asymptotic information-theoretic framework. In this paper,
instead, the design of a practical symbol-by-symbol fronthaul quantization
algorithm that implements the idea of multivariate compression is investigated
for the C-RAN downlink. As compared to current standards, the proposed
multivariate quantization (MQ) only requires changes in the CU processing while
no modification is needed at the RUs. The algorithm is extended to enable the
joint optimization of downlink precoding and quantization, reduced-complexity
MQ via successive block quantization, and variable-length compression.
Numerical results, which include performance evaluations over standard cellular
models, demonstrate the advantages of MQ and the merits of a joint optimization
with precoding.
|
[
{
"created": "Wed, 28 Oct 2015 13:27:36 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Jul 2016 01:31:28 GMT",
"version": "v2"
}
] |
2016-08-24
|
[
[
"Lee",
"Wonju",
""
],
[
"Simeone",
"Osvaldo",
""
],
[
"Kang",
"Joonhyuk",
""
],
[
"Shamai",
"Shlomo",
""
]
] |
The Cloud-Radio Access Network (C-RAN) cellular architecture relies on the transfer of complex baseband signals to and from a central unit (CU) over digital fronthaul links to enable the virtualization of the baseband processing functionalities of distributed radio units (RUs). The standard design of digital fronthauling is based on either scalar quantization or on more sophisticated point-to-point compression techniques operating on baseband signals. Motivated by network-information theoretic results, techniques for fronthaul quantization and compression that improve over point-to-point solutions by allowing for joint processing across multiple fronthaul links at the CU have been recently proposed for both the uplink and the downlink. For the downlink, a form of joint compression, known in network information theory as multivariate compression, was shown to be advantageous under a non-constructive asymptotic information-theoretic framework. In this paper, instead, the design of a practical symbol-by-symbol fronthaul quantization algorithm that implements the idea of multivariate compression is investigated for the C-RAN downlink. As compared to current standards, the proposed multivariate quantization (MQ) only requires changes in the CU processing while no modification is needed at the RUs. The algorithm is extended to enable the joint optimization of downlink precoding and quantization, reduced-complexity MQ via successive block quantization, and variable-length compression. Numerical results, which include performance evaluations over standard cellular models, demonstrate the advantages of MQ and the merits of a joint optimization with precoding.
|
1201.0081
|
Yuan Liu Yuan Liu
|
Hao Zhang, Yuan Liu, and Meixia Tao
|
Resource Allocation with Subcarrier Pairing in OFDMA Two-Way Relay
Networks
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/3.0/
|
This study considers an orthogonal frequency-division multiple-access
(OFDMA)-based multi-user two-way relay network where multiple mobile stations
(MSs) communicate with a common base station (BS) via multiple relay stations
(RSs). We study the joint optimization problem of subcarrier-pairing based
relay-power allocation, relay selection, and subcarrier assignment. The problem
is formulated as a mixed integer programming problem. By using the dual method,
we propose an efficient algorithm to solve the problem in an asymptotically
optimal manner. Simulation results show that the proposed method can improve
system performance significantly over the conventional methods.
|
[
{
"created": "Fri, 30 Dec 2011 08:49:17 GMT",
"version": "v1"
}
] |
2012-01-04
|
[
[
"Zhang",
"Hao",
""
],
[
"Liu",
"Yuan",
""
],
[
"Tao",
"Meixia",
""
]
] |
This study considers an orthogonal frequency-division multiple-access (OFDMA)-based multi-user two-way relay network where multiple mobile stations (MSs) communicate with a common base station (BS) via multiple relay stations (RSs). We study the joint optimization problem of subcarrier-pairing based relay-power allocation, relay selection, and subcarrier assignment. The problem is formulated as a mixed integer programming problem. By using the dual method, we propose an efficient algorithm to solve the problem in an asymptotically optimal manner. Simulation results show that the proposed method can improve system performance significantly over the conventional methods.
|
2110.11404
|
Edgar Du\'e\~nez-Guzm\'an
|
Edgar A. Du\'e\~nez-Guzm\'an, Kevin R. McKee, Yiran Mao, Ben Coppin,
Silvia Chiappa, Alexander Sasha Vezhnevets, Michiel A. Bakker, Yoram
Bachrach, Suzanne Sadedin, William Isaac, Karl Tuyls, Joel Z. Leibo
|
Statistical discrimination in learning agents
|
29 pages, 10 figures
| null | null | null |
cs.LG cs.AI cs.GT cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Undesired bias afflicts both human and algorithmic decision making, and may
be especially prevalent when information processing trade-offs incentivize the
use of heuristics. One primary example is \textit{statistical discrimination}
-- selecting social partners based not on their underlying attributes, but on
readily perceptible characteristics that covary with their suitability for the
task at hand. We present a theoretical model to examine how information
processing influences statistical discrimination and test its predictions using
multi-agent reinforcement learning with various agent architectures in a
partner choice-based social dilemma. As predicted, statistical discrimination
emerges in agent policies as a function of both the bias in the training
population and of agent architecture. All agents showed substantial statistical
discrimination, defaulting to using the readily available correlates instead of
the outcome-relevant features. We show that less discrimination emerges with
agents that use recurrent neural networks, and when their training environment
has less bias. However, all agent algorithms we tried still exhibited
substantial bias after learning in biased training populations.
|
[
{
"created": "Thu, 21 Oct 2021 18:28:57 GMT",
"version": "v1"
}
] |
2021-10-25
|
[
[
"Duéñez-Guzmán",
"Edgar A.",
""
],
[
"McKee",
"Kevin R.",
""
],
[
"Mao",
"Yiran",
""
],
[
"Coppin",
"Ben",
""
],
[
"Chiappa",
"Silvia",
""
],
[
"Vezhnevets",
"Alexander Sasha",
""
],
[
"Bakker",
"Michiel A.",
""
],
[
"Bachrach",
"Yoram",
""
],
[
"Sadedin",
"Suzanne",
""
],
[
"Isaac",
"William",
""
],
[
"Tuyls",
"Karl",
""
],
[
"Leibo",
"Joel Z.",
""
]
] |
Undesired bias afflicts both human and algorithmic decision making, and may be especially prevalent when information processing trade-offs incentivize the use of heuristics. One primary example is \textit{statistical discrimination} -- selecting social partners based not on their underlying attributes, but on readily perceptible characteristics that covary with their suitability for the task at hand. We present a theoretical model to examine how information processing influences statistical discrimination and test its predictions using multi-agent reinforcement learning with various agent architectures in a partner choice-based social dilemma. As predicted, statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture. All agents showed substantial statistical discrimination, defaulting to using the readily available correlates instead of the outcome-relevant features. We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias. However, all agent algorithms we tried still exhibited substantial bias after learning in biased training populations.
|
2009.09719
|
arXiv Admin
|
Sagar Verma
|
A Survey on Machine Learning Applied to Dynamic Physical Systems
|
arXiv admin note: submission has been withdrawn by arXiv
administrators due to inappropriate text overlap with external source
| null | null | null |
cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This survey covers recent advancements at the intersection of physical
modeling and machine learning. We focus on the modeling of nonlinear systems
similar to electric motors, and we also survey motor control and fault
detection in the operation of electric motors.
|
[
{
"created": "Mon, 21 Sep 2020 09:41:54 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Sep 2020 13:27:14 GMT",
"version": "v2"
}
] |
2020-09-29
|
[
[
"Verma",
"Sagar",
""
]
] |
This survey covers recent advancements at the intersection of physical modeling and machine learning. We focus on the modeling of nonlinear systems similar to electric motors, and we also survey motor control and fault detection in the operation of electric motors.
|
1903.04959
|
Haotian Fu
|
Haotian Fu, Hongyao Tang, Jianye Hao, Zihan Lei, Yingfeng Chen,
Changjie Fan
|
Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid
Action Spaces
| null |
IJCAI 2019
| null | null |
cs.LG cs.AI cs.MA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Reinforcement Learning (DRL) has been applied to address a variety of
cooperative multi-agent problems with either discrete action spaces or
continuous action spaces. However, to the best of our knowledge, no previous
work has ever succeeded in applying DRL to multi-agent problems with
discrete-continuous hybrid (or parameterized) action spaces which is very
common in practice. Our work fills this gap by proposing two novel algorithms:
Deep Multi-Agent Parameterized Q-Networks (Deep MAPQN) and Deep Multi-Agent
Hierarchical Hybrid Q-Networks (Deep MAHHQN). We follow the centralized
training but decentralized execution paradigm: different levels of
communication between different agents are used to facilitate the training
process, while each agent executes its policy independently based on local
observations during execution. Our empirical results on several challenging
tasks (simulated RoboCup Soccer and game Ghost Story) show that both Deep MAPQN
and Deep MAHHQN are effective and significantly outperform the existing
independent deep parameterized Q-learning method.
|
[
{
"created": "Tue, 12 Mar 2019 14:40:32 GMT",
"version": "v1"
}
] |
2019-06-04
|
[
[
"Fu",
"Haotian",
""
],
[
"Tang",
"Hongyao",
""
],
[
"Hao",
"Jianye",
""
],
[
"Lei",
"Zihan",
""
],
[
"Chen",
"Yingfeng",
""
],
[
"Fan",
"Changjie",
""
]
] |
Deep Reinforcement Learning (DRL) has been applied to address a variety of cooperative multi-agent problems with either discrete action spaces or continuous action spaces. However, to the best of our knowledge, no previous work has ever succeeded in applying DRL to multi-agent problems with discrete-continuous hybrid (or parameterized) action spaces which is very common in practice. Our work fills this gap by proposing two novel algorithms: Deep Multi-Agent Parameterized Q-Networks (Deep MAPQN) and Deep Multi-Agent Hierarchical Hybrid Q-Networks (Deep MAHHQN). We follow the centralized training but decentralized execution paradigm: different levels of communication between different agents are used to facilitate the training process, while each agent executes its policy independently based on local observations during execution. Our empirical results on several challenging tasks (simulated RoboCup Soccer and game Ghost Story) show that both Deep MAPQN and Deep MAHHQN are effective and significantly outperform the existing independent deep parameterized Q-learning method.
|
2003.00145
|
Nupur Patanker
|
Nupur Patanker, Sanjay Kumar Singh
|
Generalization of trace codes to places of higher degree
|
Due to error in Section 6 on the dimension of codes
| null | null | null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this note, we give a construction of codes on algebraic function field $F/
\mathbb{F}_{q}$ using places of $F$ (not necessarily of degree one) and trace
functions from various extensions of $\mathbb{F}_{q}$. This is a generalization
of the trace code of geometric Goppa codes to higher degree places. We compute a
bound on the dimension of this code. Furthermore, we give a condition under
which we get the exact dimension of the code. We also determine a bound on the
minimum distance of this code in terms of $B_{r}(F)$ (the number of places of
degree $r$ in $F$), $1 \leq r < \infty$. A few quasi-cyclic codes over
$\mathbb{F}_{p}$ are also obtained as examples of these codes.
|
[
{
"created": "Sat, 29 Feb 2020 01:19:05 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Oct 2020 16:05:02 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Apr 2021 03:05:04 GMT",
"version": "v3"
}
] |
2021-04-15
|
[
[
"Patanker",
"Nupur",
""
],
[
"Singh",
"Sanjay Kumar",
""
]
] |
In this note, we give a construction of codes on algebraic function field $F/ \mathbb{F}_{q}$ using places of $F$ (not necessarily of degree one) and trace functions from various extensions of $\mathbb{F}_{q}$. This is a generalization of the trace code of geometric Goppa codes to higher degree places. We compute a bound on the dimension of this code. Furthermore, we give a condition under which we get the exact dimension of the code. We also determine a bound on the minimum distance of this code in terms of $B_{r}(F)$ (the number of places of degree $r$ in $F$), $1 \leq r < \infty$. A few quasi-cyclic codes over $\mathbb{F}_{p}$ are also obtained as examples of these codes.
|
2305.10621
|
Yulin Sun
|
Yulin Sun, Qingming Qu, Chenxingyu Zhao, Arvind Krishnamurthy, Hong
Chang, Ying Xiong
|
TSoR: TCP Socket over RDMA Container Network for Cloud Native Computing
| null | null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud-native containerized applications constantly seek high-performance and
easy-to-operate container network solutions. RDMA network is a potential
enabler with higher throughput and lower latency than the standard TCP/IP
network stack. However, several challenges remain in equipping containerized
applications with RDMA network: 1) How to deliver transparent improvements
without modifying application code; 2) How to integrate RDMA-based network
solutions with container orchestration systems; 3) How to efficiently utilize
RDMA for container networks.
In this paper, we present an RDMA-based container network solution, TCP
Socket over RDMA (TSoR), which addresses all the above challenges. To
transparently accelerate applications using POSIX socket interfaces without
modifications, we integrate TSoR with a container runtime that can intercept
system calls for socket interfaces. To be compatible with orchestration systems
like Kubernetes, TSoR implements a container network following the Kubernetes
network model and satisfies all requirements of the model. To leverage RDMA
benefits, TSoR designs a high-performance network stack that efficiently
transfers TCP traffic using RDMA network. Thus, TSoR provides a turn-key
solution for existing Kubernetes clusters to adopt the high-performance RDMA
network with minimal effort.
Our evaluation results show that TSoR provides up to 2.3x higher throughput
and 64\% lower latency for existing containerized applications, such as Redis
key-value store and Node.js web server, with no code changes. TSoR code will be
open-sourced.
|
[
{
"created": "Thu, 18 May 2023 00:20:56 GMT",
"version": "v1"
}
] |
2023-05-19
|
[
[
"Sun",
"Yulin",
""
],
[
"Qu",
"Qingming",
""
],
[
"Zhao",
"Chenxingyu",
""
],
[
"Krishnamurthy",
"Arvind",
""
],
[
"Chang",
"Hong",
""
],
[
"Xiong",
"Ying",
""
]
] |
Cloud-native containerized applications constantly seek high-performance and easy-to-operate container network solutions. RDMA network is a potential enabler with higher throughput and lower latency than the standard TCP/IP network stack. However, several challenges remain in equipping containerized applications with RDMA network: 1) How to deliver transparent improvements without modifying application code; 2) How to integrate RDMA-based network solutions with container orchestration systems; 3) How to efficiently utilize RDMA for container networks. In this paper, we present an RDMA-based container network solution, TCP Socket over RDMA (TSoR), which addresses all the above challenges. To transparently accelerate applications using POSIX socket interfaces without modifications, we integrate TSoR with a container runtime that can intercept system calls for socket interfaces. To be compatible with orchestration systems like Kubernetes, TSoR implements a container network following the Kubernetes network model and satisfies all requirements of the model. To leverage RDMA benefits, TSoR designs a high-performance network stack that efficiently transfers TCP traffic using RDMA network. Thus, TSoR provides a turn-key solution for existing Kubernetes clusters to adopt the high-performance RDMA network with minimal effort. Our evaluation results show that TSoR provides up to 2.3x higher throughput and 64\% lower latency for existing containerized applications, such as Redis key-value store and Node.js web server, with no code changes. TSoR code will be open-sourced.
|
2407.02442
|
Hao Xu
|
Hao Xu, Kai-Kit Wong, Giuseppe Caire
|
A New Achievable Region of the $K$-User MAC Wiretap Channel with
Confidential and Open Messages Under Strong Secrecy
|
61 pages, 15 figures. arXiv admin note: text overlap with
arXiv:2209.05403
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the achievable region of a $K$-user discrete
memoryless (DM) multiple access wiretap (MAC-WT) channel, where each user
transmits both secret and open messages. All these messages are intended for
Bob, while Eve is only interested in the secret messages. In the achievable
coding strategy, the confidential information is protected by open messages and
also by the introduction of auxiliary messages. When introducing an auxiliary
message, one has to ensure that, on one hand, its rate is large enough for
protecting the secret message from Eve and, on the other hand, the resulting
sum rate (together with the secret and open message rate) does not exceed Bob's
decoding capability. This yields an inequality structure involving the rates of
all users' secret, open, and auxiliary messages. To obtain the rate region, the
auxiliary message rates must be eliminated from the system of inequalities. A
direct application of the Fourier-Motzkin elimination procedure is elusive
since a) it requires that the number of users $K$ is explicitly given, and b)
even for small $K = 3, 4, \ldots$, the number of inequalities becomes extremely
large. We prove the result for general $K$ through the combined use of the
Fourier-Motzkin elimination procedure and mathematical induction. This paper
adopts the strong secrecy metric, characterized by information leakage. To
prove the achievability under this criterion, we analyze the resolvability
region of a $K$-user DM-MAC channel. In addition, we show that users with zero
secrecy rate can play different roles and use different strategies in encoding
their messages. These strategies yield non-redundant rate inequalities. By
considering all possible coding strategies, we provide a new achievable region
for the considered channel, and show that it strictly improves those already
known in the existing literature by considering a specific example.
|
[
{
"created": "Tue, 2 Jul 2024 17:17:53 GMT",
"version": "v1"
}
] |
2024-07-03
|
[
[
"Xu",
"Hao",
""
],
[
"Wong",
"Kai-Kit",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
This paper investigates the achievable region of a $K$-user discrete memoryless (DM) multiple access wiretap (MAC-WT) channel, where each user transmits both secret and open messages. All these messages are intended for Bob, while Eve is only interested in the secret messages. In the achievable coding strategy, the confidential information is protected by open messages and also by the introduction of auxiliary messages. When introducing an auxiliary message, one has to ensure that, on one hand, its rate is large enough for protecting the secret message from Eve and, on the other hand, the resulting sum rate (together with the secret and open message rate) does not exceed Bob's decoding capability. This yields an inequality structure involving the rates of all users' secret, open, and auxiliary messages. To obtain the rate region, the auxiliary message rates must be eliminated from the system of inequalities. A direct application of the Fourier-Motzkin elimination procedure is elusive since a) it requires that the number of users $K$ is explicitly given, and b) even for small $K = 3, 4, \ldots$, the number of inequalities becomes extremely large. We prove the result for general $K$ through the combined use of the Fourier-Motzkin elimination procedure and mathematical induction. This paper adopts the strong secrecy metric, characterized by information leakage. To prove the achievability under this criterion, we analyze the resolvability region of a $K$-user DM-MAC channel. In addition, we show that users with zero secrecy rate can play different roles and use different strategies in encoding their messages. These strategies yield non-redundant rate inequalities. By considering all possible coding strategies, we provide a new achievable region for the considered channel, and show that it strictly improves those already known in the existing literature by considering a specific example.
|
2303.02673
|
Jiguo Li
|
Jiguo Li, Tianzi Zhang, Xiaobin Liu, Lirong Zheng
|
Time-frequency Network for Robust Speaker Recognition
|
5pages, 3 figures
| null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The wide deployment of speech-based biometric systems usually demands
high-performance speaker recognition algorithms. However, most of the prior
works for speaker recognition either process the speech in the frequency domain
or time domain, which may produce suboptimal results because both time and
frequency domains are important for speaker recognition. In this paper, we
attempt to analyze the speech signal in both time and frequency domains and
propose the time-frequency network~(TFN) for speaker recognition by extracting
and fusing the features in the two domains. Based on the recent advance of deep
neural networks, we propose a convolutional neural network to encode the raw
speech waveform and the frequency spectrum into domain-specific features, which
are then fused and transformed into a classification feature space for speaker
recognition. Experimental results on the publicly available datasets TIMIT and
LibriSpeech show that our framework is effective to combine the information in
the two domains and performs better than the state-of-the-art methods for
speaker recognition.
|
[
{
"created": "Sun, 5 Mar 2023 13:48:47 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Mar 2023 02:38:16 GMT",
"version": "v2"
}
] |
2023-03-08
|
[
[
"Li",
"Jiguo",
""
],
[
"Zhang",
"Tianzi",
""
],
[
"Liu",
"Xiaobin",
""
],
[
"Zheng",
"Lirong",
""
]
] |
The wide deployment of speech-based biometric systems usually demands high-performance speaker recognition algorithms. However, most of the prior works for speaker recognition either process the speech in the frequency domain or time domain, which may produce suboptimal results because both time and frequency domains are important for speaker recognition. In this paper, we attempt to analyze the speech signal in both time and frequency domains and propose the time-frequency network~(TFN) for speaker recognition by extracting and fusing the features in the two domains. Based on the recent advance of deep neural networks, we propose a convolutional neural network to encode the raw speech waveform and the frequency spectrum into domain-specific features, which are then fused and transformed into a classification feature space for speaker recognition. Experimental results on the publicly available datasets TIMIT and LibriSpeech show that our framework is effective to combine the information in the two domains and performs better than the state-of-the-art methods for speaker recognition.
|
2104.04909
|
Saed Rezayi
|
Saed Rezayi, Handong Zhao, Sungchul Kim, Ryan A. Rossi, Nedim Lipka,
Sheng Li
|
Edge: Enriching Knowledge Graph Embeddings with External Text
|
Accepted in NAACL'21
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Knowledge graphs suffer from sparsity which degrades the quality of
representations generated by various methods. While there is an abundance of
textual information throughout the web and many existing knowledge bases,
aligning information across these diverse data sources remains a challenge in
the literature. Previous work has partially addressed this issue by enriching
knowledge graph entities based on "hard" co-occurrence of words present in the
entities of the knowledge graphs and external text, while we achieve "soft"
augmentation by proposing a knowledge graph enrichment and embedding framework
named Edge. Given an original knowledge graph, we first generate a rich but
noisy augmented graph using external texts at the semantic and structural levels. To
distill the relevant knowledge and suppress the introduced noise, we design a
graph alignment term in a shared embedding space between the original graph and
augmented graph. To enhance the embedding learning on the augmented graph, we
further regularize the locality relationship of the target entity based on negative
sampling. Experimental results on four benchmark datasets demonstrate the
robustness and effectiveness of Edge in link prediction and node
classification.
|
[
{
"created": "Sun, 11 Apr 2021 03:47:06 GMT",
"version": "v1"
}
] |
2021-04-13
|
[
[
"Rezayi",
"Saed",
""
],
[
"Zhao",
"Handong",
""
],
[
"Kim",
"Sungchul",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Lipka",
"Nedim",
""
],
[
"Li",
"Sheng",
""
]
] |
Knowledge graphs suffer from sparsity which degrades the quality of representations generated by various methods. While there is an abundance of textual information throughout the web and many existing knowledge bases, aligning information across these diverse data sources remains a challenge in the literature. Previous work has partially addressed this issue by enriching knowledge graph entities based on "hard" co-occurrence of words present in the entities of the knowledge graphs and external text, while we achieve "soft" augmentation by proposing a knowledge graph enrichment and embedding framework named Edge. Given an original knowledge graph, we first generate a rich but noisy augmented graph using external texts at the semantic and structural levels. To distill the relevant knowledge and suppress the introduced noise, we design a graph alignment term in a shared embedding space between the original graph and augmented graph. To enhance the embedding learning on the augmented graph, we further regularize the locality relationship of the target entity based on negative sampling. Experimental results on four benchmark datasets demonstrate the robustness and effectiveness of Edge in link prediction and node classification.
|
2009.14361
|
Angus Addlesee
|
Angus Addlesee and Pierre Albert
|
Ethically Collecting Multi-Modal Spontaneous Conversations with People
that have Cognitive Impairments
|
Published at LREC's Workshop on Legal and Ethical Issues in Human
Language Technologies 2020
|
LREC Workshop on Legal and Ethical Issues in Human Language
Technologies (2020) 15-20
| null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to make spoken dialogue systems (such as Amazon Alexa or Google
Assistant) more accessible and naturally interactive for people with cognitive
impairments, appropriate data must be obtainable. Recordings of multi-modal
spontaneous conversations with vulnerable user groups are scarce, however, and
this valuable data is challenging to collect. Researchers that call for this
data are commonly inexperienced in ethical and legal issues around working with
vulnerable participants. Additionally, standard recording equipment is insecure
and should not be used to capture sensitive data. We spent a year consulting
experts on how to ethically capture and share recordings of multi-modal
spontaneous conversations with vulnerable user groups. In this paper we provide
guidance, collated from these experts, on how to ethically collect such data
and we present a new system - "CUSCO" - to capture, transport and exchange
sensitive data securely. This framework is intended to be easily followed and
implemented to encourage further publications of similar corpora. Using this
guide and secure recording system, researchers can review and refine their
ethical measures.
|
[
{
"created": "Wed, 30 Sep 2020 00:57:33 GMT",
"version": "v1"
}
] |
2020-10-01
|
[
[
"Addlesee",
"Angus",
""
],
[
"Albert",
"Pierre",
""
]
] |
In order to make spoken dialogue systems (such as Amazon Alexa or Google Assistant) more accessible and naturally interactive for people with cognitive impairments, appropriate data must be obtainable. Recordings of multi-modal spontaneous conversations with vulnerable user groups are scarce, however, and this valuable data is challenging to collect. Researchers that call for this data are commonly inexperienced in ethical and legal issues around working with vulnerable participants. Additionally, standard recording equipment is insecure and should not be used to capture sensitive data. We spent a year consulting experts on how to ethically capture and share recordings of multi-modal spontaneous conversations with vulnerable user groups. In this paper we provide guidance, collated from these experts, on how to ethically collect such data and we present a new system - "CUSCO" - to capture, transport and exchange sensitive data securely. This framework is intended to be easily followed and implemented to encourage further publications of similar corpora. Using this guide and secure recording system, researchers can review and refine their ethical measures.
|
2306.01953
|
Xuandong Zhao
|
Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko,
Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, Lei Li
|
Invisible Image Watermarks Are Provably Removable Using Generative AI
| null | null | null | null |
cs.CR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Invisible watermarks safeguard images' copyright by embedding hidden messages
only detectable by owners. They also prevent people from misusing images,
especially those generated by AI models. We propose a family of regeneration
attacks to remove these invisible watermarks. The proposed attack method first
adds random noise to an image to destroy the watermark and then reconstructs
the image. This approach is flexible and can be instantiated with many existing
image-denoising algorithms and pre-trained generative models such as diffusion
models. Through formal proofs and empirical results, we show that all invisible
watermarks are vulnerable to the proposed attack. For a particularly resilient
watermark, RivaGAN, regeneration attacks remove 93-99% of the invisible
watermarks while the baseline attacks remove no more than 3%. However, if we do
not require the watermarked image to look the same as the original one,
watermarks that keep the image semantically similar can be an alternative
defense against our attack. Our finding underscores the need for a shift in
research/industry emphasis from invisible watermarks to semantically similar
ones. Code is available at https://github.com/XuandongZhao/WatermarkAttacker.
|
[
{
"created": "Fri, 2 Jun 2023 23:29:28 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Aug 2023 17:17:04 GMT",
"version": "v2"
}
] |
2023-08-08
|
[
[
"Zhao",
"Xuandong",
""
],
[
"Zhang",
"Kexun",
""
],
[
"Su",
"Zihao",
""
],
[
"Vasan",
"Saastha",
""
],
[
"Grishchenko",
"Ilya",
""
],
[
"Kruegel",
"Christopher",
""
],
[
"Vigna",
"Giovanni",
""
],
[
"Wang",
"Yu-Xiang",
""
],
[
"Li",
"Lei",
""
]
] |
Invisible watermarks safeguard images' copyright by embedding hidden messages only detectable by owners. They also prevent people from misusing images, especially those generated by AI models. We propose a family of regeneration attacks to remove these invisible watermarks. The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image. This approach is flexible and can be instantiated with many existing image-denoising algorithms and pre-trained generative models such as diffusion models. Through formal proofs and empirical results, we show that all invisible watermarks are vulnerable to the proposed attack. For a particularly resilient watermark, RivaGAN, regeneration attacks remove 93-99% of the invisible watermarks while the baseline attacks remove no more than 3%. However, if we do not require the watermarked image to look the same as the original one, watermarks that keep the image semantically similar can be an alternative defense against our attack. Our finding underscores the need for a shift in research/industry emphasis from invisible watermarks to semantically similar ones. Code is available at https://github.com/XuandongZhao/WatermarkAttacker.
|
2104.08894
|
Ahmed Abdelkader
|
Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, Tom
Goldstein
|
The Intrinsic Dimension of Images and Its Impact on Learning
|
To appear at ICLR 2021 (spotlight), 17 pages with appendix, 15
figures
| null | null | null |
cs.CV cs.LG stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
It is widely believed that natural image data exhibits low-dimensional
structure despite the high dimensionality of conventional pixel
representations. This idea underlies a common intuition for the remarkable
success of deep learning in computer vision. In this work, we apply dimension
estimation tools to popular datasets and investigate the role of
low-dimensional structure in deep learning. We find that common natural image
datasets indeed have very low intrinsic dimension relative to the high number
of pixels in the images. Additionally, we find that low dimensional datasets
are easier for neural networks to learn, and models solving these tasks
generalize better from training to test data. Along the way, we develop a
technique for validating our dimension estimation tools on synthetic data
generated by GANs allowing us to actively manipulate the intrinsic dimension by
controlling the image generation process. Code for our experiments may be found
here https://github.com/ppope/dimensions.
|
[
{
"created": "Sun, 18 Apr 2021 16:29:23 GMT",
"version": "v1"
}
] |
2021-04-20
|
[
[
"Pope",
"Phillip",
""
],
[
"Zhu",
"Chen",
""
],
[
"Abdelkader",
"Ahmed",
""
],
[
"Goldblum",
"Micah",
""
],
[
"Goldstein",
"Tom",
""
]
] |
It is widely believed that natural image data exhibits low-dimensional structure despite the high dimensionality of conventional pixel representations. This idea underlies a common intuition for the remarkable success of deep learning in computer vision. In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in deep learning. We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images. Additionally, we find that low dimensional datasets are easier for neural networks to learn, and models solving these tasks generalize better from training to test data. Along the way, we develop a technique for validating our dimension estimation tools on synthetic data generated by GANs allowing us to actively manipulate the intrinsic dimension by controlling the image generation process. Code for our experiments may be found here https://github.com/ppope/dimensions.
|
1603.06679
|
Wenya Wang
|
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier and Xiaokui Xiao
|
Recursive Neural Conditional Random Fields for Aspect-based Sentiment
Analysis
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In aspect-based sentiment analysis, extracting aspect terms along with the
opinions being expressed from user-generated content is one of the most
important subtasks. Previous studies have shown that exploiting connections
between aspect and opinion terms is promising for this task. In this paper, we
propose a novel joint model that integrates recursive neural networks and
conditional random fields into a unified framework for explicit aspect and
opinion terms co-extraction. The proposed model learns high-level
discriminative features and double propagate information between aspect and
opinion terms, simultaneously. Moreover, it is flexible to incorporate
hand-crafted features into the proposed model to further boost its information
extraction performance. Experimental results on the SemEval Challenge 2014
dataset show the superiority of our proposed model over several baseline
methods as well as the winning systems of the challenge.
|
[
{
"created": "Tue, 22 Mar 2016 05:59:00 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jun 2016 06:24:06 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Sep 2016 14:00:43 GMT",
"version": "v3"
}
] |
2016-09-20
|
[
[
"Wang",
"Wenya",
""
],
[
"Pan",
"Sinno Jialin",
""
],
[
"Dahlmeier",
"Daniel",
""
],
[
"Xiao",
"Xiaokui",
""
]
] |
In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and doubly propagates information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
|
2105.10440
|
Prajwol Kumar Nakarmi
|
John Preu{\ss} Mattsson, Prajwol Kumar Nakarmi
|
Nori: Concealing the Concealed Identifier in 5G
|
9 pages, 8 figures, 1 table
|
2021
|
10.1145/3465481
|
ARES '21: Proceedings of the 16th International Conference on
Availability, Reliability and Security
|
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IMSI catchers have been a long standing and serious privacy problem in pre-5G
mobile networks. To tackle this, 3GPP introduced the Subscription Concealed
Identifier (SUCI) and other countermeasures in 5G. In this paper, we analyze
the new SUCI mechanism and discover that it provides very poor anonymity when
used with the variable length Network Specific Identifiers (NSI), which are
part of the 5G standard. When applied to real-world name length data, we see
that SUCI only provides 1-anonymity, meaning that individual subscribers can
easily be identified and tracked. We strongly recommend 3GPP and GSMA to
standardize and recommend the use of a padding mechanism for SUCI before
variable length identifiers get more commonly used. We further show that the
padding schemes, commonly used for network traffic, are not optimal for padding
of identifiers based on real names. We propose a new improved padding scheme
that achieves much less message expansion for a given $k$-anonymity.
|
[
{
"created": "Fri, 21 May 2021 16:20:16 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Jun 2021 16:00:17 GMT",
"version": "v2"
}
] |
2023-09-12
|
[
[
"Mattsson",
"John Preuß",
""
],
[
"Nakarmi",
"Prajwol Kumar",
""
]
] |
IMSI catchers have been a long standing and serious privacy problem in pre-5G mobile networks. To tackle this, 3GPP introduced the Subscription Concealed Identifier (SUCI) and other countermeasures in 5G. In this paper, we analyze the new SUCI mechanism and discover that it provides very poor anonymity when used with the variable length Network Specific Identifiers (NSI), which are part of the 5G standard. When applied to real-world name length data, we see that SUCI only provides 1-anonymity, meaning that individual subscribers can easily be identified and tracked. We strongly recommend 3GPP and GSMA to standardize and recommend the use of a padding mechanism for SUCI before variable length identifiers get more commonly used. We further show that the padding schemes, commonly used for network traffic, are not optimal for padding of identifiers based on real names. We propose a new improved padding scheme that achieves much less message expansion for a given $k$-anonymity.
|
1905.13340
|
Hsin-Po Wang
|
Hsin-Po Wang and Iwan Duursma
|
Log-logarithmic Time Pruned Polar Coding
|
13 pages, 13 figures; we extend arXiv:1812.08106 and remove "BEC"
from title
| null |
10.1109/TIT.2020.3041523
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A pruned variant of polar coding is proposed for binary erasure channels. For
sufficiently small $\varepsilon>0$, we construct a series of capacity achieving
codes with block length $N=\varepsilon^{-5}$, code rate
$R=\text{Capacity}-\varepsilon$, error probability $P=\varepsilon$, and
encoding and decoding time complexity
$\text{bC}=O(\log\left|\log\varepsilon\right|)$ per information bit.
The given per-bit complexity $\text{bC}$ is log-logarithmic in $N$, in
$\text{Capacity}-R$, and in $P$; no known family of codes possesses this
property. It is also the second lowest $\text{bC}$ after repeat-accumulate
codes and their variants. While random codes and classical polar codes are the
only two families of capacity-achieving codes whose $N$, $R$, $P$, and
$\text{bC}$ were written down as explicit functions, our construction gives the
third family.
Then we generalize the result to: Fix a prime $q$ and fix a $q$-ary-input
discrete symmetric memoryless channel. For sufficiently small $\varepsilon>0$,
we construct a series of capacity achieving codes with block length
$N=\varepsilon^{-O(1)}$, code rate $R=\text{Capacity}-\varepsilon$, error
probability $P=\varepsilon$, and encoding and decoding time complexity
$\text{bC}=O(\log\left|\log\varepsilon\right|)$ per information bit. The latter
construction gives the fastest family of capacity-achieving codes to date on
those channels.
|
[
{
"created": "Thu, 30 May 2019 22:37:41 GMT",
"version": "v1"
}
] |
2020-12-14
|
[
[
"Wang",
"Hsin-Po",
""
],
[
"Duursma",
"Iwan",
""
]
] |
A pruned variant of polar coding is proposed for binary erasure channels. For sufficiently small $\varepsilon>0$, we construct a series of capacity achieving codes with block length $N=\varepsilon^{-5}$, code rate $R=\text{Capacity}-\varepsilon$, error probability $P=\varepsilon$, and encoding and decoding time complexity $\text{bC}=O(\log\left|\log\varepsilon\right|)$ per information bit. The given per-bit complexity $\text{bC}$ is log-logarithmic in $N$, in $\text{Capacity}-R$, and in $P$; no known family of codes possesses this property. It is also the second lowest $\text{bC}$ after repeat-accumulate codes and their variants. While random codes and classical polar codes are the only two families of capacity-achieving codes whose $N$, $R$, $P$, and $\text{bC}$ were written down as explicit functions, our construction gives the third family. Then we generalize the result to: Fix a prime $q$ and fix a $q$-ary-input discrete symmetric memoryless channel. For sufficiently small $\varepsilon>0$, we construct a series of capacity achieving codes with block length $N=\varepsilon^{-O(1)}$, code rate $R=\text{Capacity}-\varepsilon$, error probability $P=\varepsilon$, and encoding and decoding time complexity $\text{bC}=O(\log\left|\log\varepsilon\right|)$ per information bit. The latter construction gives the fastest family of capacity-achieving codes to date on those channels.
|
2406.01080
|
Zhibo Xing
|
Zhibo Xing, Zijian Zhang, Zi'ang Zhang, Jiamou Liu, Liehuang Zhu,
Giovanni Russello
|
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning
| null | null | null | null |
cs.CR cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning allows several clients to train one machine learning model
jointly without sharing private data, providing privacy protection. However,
traditional federated learning is vulnerable to poisoning attacks, which can
not only decrease the model performance, but also implant malicious backdoors.
In addition, direct submission of local model parameters can also lead to the
privacy leakage of the training dataset. In this paper, we aim to build a
privacy-preserving and Byzantine-robust federated learning scheme to provide an
environment with no vandalism (NoV) against attacks from malicious
participants. Specifically, we construct a model filter for poisoned local
models, protecting the global model from data and model poisoning attacks. This
model filter combines zero-knowledge proofs to provide further privacy
protection. Then, we adopt secret sharing to provide verifiable secure
aggregation, removing malicious clients that disrupt the aggregation
process. Our formal analysis proves that NoV can protect data privacy and weed
out Byzantine attackers. Our experiments illustrate that NoV can effectively
address data and model poisoning attacks, including PGD, and outperforms other
related schemes.
|
[
{
"created": "Mon, 3 Jun 2024 07:59:10 GMT",
"version": "v1"
}
] |
2024-06-05
|
[
[
"Xing",
"Zhibo",
""
],
[
"Zhang",
"Zijian",
""
],
[
"Zhang",
"Zi'ang",
""
],
[
"Liu",
"Jiamou",
""
],
[
"Zhu",
"Liehuang",
""
],
[
"Russello",
"Giovanni",
""
]
] |
Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection. However, traditional federated learning is vulnerable to poisoning attacks, which can not only decrease the model performance, but also implant malicious backdoors. In addition, direct submission of local model parameters can also lead to the privacy leakage of the training dataset. In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants. Specifically, we construct a model filter for poisoned local models, protecting the global model from data and model poisoning attacks. This model filter combines zero-knowledge proofs to provide further privacy protection. Then, we adopt secret sharing to provide verifiable secure aggregation, removing malicious clients that disrupt the aggregation process. Our formal analysis proves that NoV can protect data privacy and weed out Byzantine attackers. Our experiments illustrate that NoV can effectively address data and model poisoning attacks, including PGD, and outperforms other related schemes.
|
1604.04586
|
Mouhacine Benosman
|
Mouhacine Benosman, Jeff Borggaard, Boris Kramer
|
Robust Reduced-Order Model Stabilization for Partial Differential
Equations Based on Lyapunov Theory and Extremum Seeking with Application to
the 3D Boussinesq Equations
|
arXiv admin note: text overlap with arXiv:1510.01728
| null | null | null |
cs.SY math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present some results on stabilization for reduced-order models (ROMs) of
partial differential equations. The stabilization is achieved using Lyapunov
theory to design a new closure model that is robust to parametric
uncertainties. The free parameters in the proposed ROM stabilization method are
optimized using a model-free multi-parametric extremum seeking (MES) algorithm.
The 3D Boussinesq equations provide a challenging numerical test-problem that
is used to demonstrate the advantages of the proposed method.
|
[
{
"created": "Fri, 15 Apr 2016 18:04:32 GMT",
"version": "v1"
}
] |
2016-04-18
|
[
[
"Benosman",
"Mouhacine",
""
],
[
"Borggaard",
"Jeff",
""
],
[
"Kramer",
"Boris",
""
]
] |
We present some results on stabilization for reduced-order models (ROMs) of partial differential equations. The stabilization is achieved using Lyapunov theory to design a new closure model that is robust to parametric uncertainties. The free parameters in the proposed ROM stabilization method are optimized using a model-free multi-parametric extremum seeking (MES) algorithm. The 3D Boussinesq equations provide a challenging numerical test-problem that is used to demonstrate the advantages of the proposed method.
|
2205.10123
|
Stefano Teso
|
Andrea Bontempelli, Marcelo Rodas Britez, Xiaoyue Li, Haonan Zhao,
Luca Erculiani, Stefano Teso, Andrea Passerini, Fausto Giunchiglia
|
Lifelong Personal Context Recognition
|
8 pages
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We focus on the development of AIs which live in lifelong symbiosis with a
human. The key prerequisite for this task is that the AI understands - at any
moment in time - the personal situational context that the human is in. We
outline the key challenges that this task brings forth, namely (i) handling the
human-like and ego-centric nature of the user's context, necessary for
understanding and providing useful suggestions, (ii) performing lifelong
context recognition using machine learning in a way that is robust to change,
and (iii) maintaining alignment between the AI's and human's representations of
the world through continual bidirectional interaction. In this short paper, we
summarize our recent attempts at tackling these challenges, discuss the lessons
learned, and highlight directions of future research. The main take-away
message is that pursuing this project requires research which lies at the
intersection of knowledge representation and machine learning. Neither
technology can achieve this goal without the other.
|
[
{
"created": "Tue, 10 May 2022 13:24:47 GMT",
"version": "v1"
}
] |
2022-05-23
|
[
[
"Bontempelli",
"Andrea",
""
],
[
"Britez",
"Marcelo Rodas",
""
],
[
"Li",
"Xiaoyue",
""
],
[
"Zhao",
"Haonan",
""
],
[
"Erculiani",
"Luca",
""
],
[
"Teso",
"Stefano",
""
],
[
"Passerini",
"Andrea",
""
],
[
"Giunchiglia",
"Fausto",
""
]
] |
We focus on the development of AIs which live in lifelong symbiosis with a human. The key prerequisite for this task is that the AI understands - at any moment in time - the personal situational context that the human is in. We outline the key challenges that this task brings forth, namely (i) handling the human-like and ego-centric nature of the user's context, necessary for understanding and providing useful suggestions, (ii) performing lifelong context recognition using machine learning in a way that is robust to change, and (iii) maintaining alignment between the AI's and human's representations of the world through continual bidirectional interaction. In this short paper, we summarize our recent attempts at tackling these challenges, discuss the lessons learned, and highlight directions of future research. The main take-away message is that pursuing this project requires research which lies at the intersection of knowledge representation and machine learning. Neither technology can achieve this goal without the other.
|
2310.01882
|
Nick Brown
|
Nick Brown, Maurice Jamieson, Anton Lydike, Emilien Bauer, Tobias
Grosser
|
Fortran performance optimisation and auto-parallelisation by leveraging
MLIR-based domain specific abstractions in Flang
|
Author accepted version of paper in ACM Workshops of The
International Conference on High Performance Computing, Network, Storage, and
Analysis (SC-W 2023)
| null |
10.1145/3624062.3624167
| null |
cs.DC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MLIR has become popular since it was open sourced in 2019. A sub-project of
LLVM, the flexibility provided by MLIR to represent Intermediate
Representations (IR) as dialects at different abstraction levels, to mix these,
and to leverage transformations between dialects provides opportunities for
automated program optimisation and parallelisation. In addition to general
purpose compilers built upon MLIR, domain specific abstractions have also been
developed.
  In this paper we explore complementing the Flang MLIR general purpose
compiler by combining it with the domain specific Open Earth Compiler's MLIR
stencil dialect. Developing transformations to discover and extract stencils
from Fortran, this specialisation delivers between a 2 and 10 times performance
improvement for our benchmarks on a Cray supercomputer compared to using Flang
alone. Furthermore, by leveraging existing MLIR transformations we develop an
auto-parallelisation approach targeting multi-threaded and distributed memory
parallelism, and optimised execution on GPUs, without any modifications to the
serial Fortran source code.
|
[
{
"created": "Tue, 3 Oct 2023 08:36:26 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Brown",
"Nick",
""
],
[
"Jamieson",
"Maurice",
""
],
[
"Lydike",
"Anton",
""
],
[
"Bauer",
"Emilien",
""
],
[
"Grosser",
"Tobias",
""
]
] |
MLIR has become popular since it was open sourced in 2019. A sub-project of LLVM, the flexibility provided by MLIR to represent Intermediate Representations (IR) as dialects at different abstraction levels, to mix these, and to leverage transformations between dialects provides opportunities for automated program optimisation and parallelisation. In addition to general purpose compilers built upon MLIR, domain specific abstractions have also been developed. In this paper we explore complementing the Flang MLIR general purpose compiler by combining it with the domain specific Open Earth Compiler's MLIR stencil dialect. Developing transformations to discover and extract stencils from Fortran, this specialisation delivers between a 2 and 10 times performance improvement for our benchmarks on a Cray supercomputer compared to using Flang alone. Furthermore, by leveraging existing MLIR transformations we develop an auto-parallelisation approach targeting multi-threaded and distributed memory parallelism, and optimised execution on GPUs, without any modifications to the serial Fortran source code.
|
2309.11119
|
Minsu Kim
|
Minsu Kim, Giseop Kim, Kyong Hwan Jin, Sunwook Choi
|
BroadBEV: Collaborative LiDAR-camera Fusion for Broad-sighted Bird's Eye
View Map Construction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A recent sensor fusion in a Bird's Eye View (BEV) space has shown its utility
in various tasks such as 3D detection, map segmentation, etc. However, the
approach struggles with inaccurate camera BEV estimation and with perceiving
distant areas due to the sparsity of LiDAR points. In this paper, we propose a
broad BEV fusion (BroadBEV) that addresses the problems with a spatial
synchronization approach of cross-modality. Our strategy aims to enhance camera
BEV estimation for a broad-sighted perception while simultaneously improving
the completion of LiDAR's sparsity in the entire BEV space. Toward that end, we
devise Point-scattering that scatters LiDAR BEV distribution to camera depth
distribution. The method boosts the learning of depth estimation of the camera
branch and induces accurate location of dense camera features in BEV space. For
an effective BEV fusion between the spatially synchronized features, we suggest
ColFusion that applies self-attention weights of LiDAR and camera BEV features
to each other. Our extensive experiments demonstrate that BroadBEV provides a
broad-sighted BEV perception with remarkable performance gains.
|
[
{
"created": "Wed, 20 Sep 2023 07:55:57 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Sep 2023 01:14:02 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Sep 2023 06:46:12 GMT",
"version": "v3"
},
{
"created": "Wed, 8 Nov 2023 11:18:24 GMT",
"version": "v4"
}
] |
2023-11-09
|
[
[
"Kim",
"Minsu",
""
],
[
"Kim",
"Giseop",
""
],
[
"Jin",
"Kyong Hwan",
""
],
[
"Choi",
"Sunwook",
""
]
] |
A recent sensor fusion in a Bird's Eye View (BEV) space has shown its utility in various tasks such as 3D detection, map segmentation, etc. However, the approach struggles with inaccurate camera BEV estimation and with perceiving distant areas due to the sparsity of LiDAR points. In this paper, we propose a broad BEV fusion (BroadBEV) that addresses the problems with a spatial synchronization approach of cross-modality. Our strategy aims to enhance camera BEV estimation for a broad-sighted perception while simultaneously improving the completion of LiDAR's sparsity in the entire BEV space. Toward that end, we devise Point-scattering that scatters LiDAR BEV distribution to camera depth distribution. The method boosts the learning of depth estimation of the camera branch and induces accurate location of dense camera features in BEV space. For an effective BEV fusion between the spatially synchronized features, we suggest ColFusion that applies self-attention weights of LiDAR and camera BEV features to each other. Our extensive experiments demonstrate that BroadBEV provides a broad-sighted BEV perception with remarkable performance gains.
|
1611.04145
|
Zijie Zheng
|
Zijie Zheng, Lingyang Song, Dusit Niyato, and Zhu Han
|
Resource Allocation in Wireless Powered Relay Networks: A Bargaining
Game Approach
|
14 pages, 7 figures, journal paper
| null | null | null |
cs.IT cs.GT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Simultaneous information and power transfer in mobile relay networks has
recently emerged, where the relay can harvest the radio frequency (RF) energy
and then use this energy for data forwarding and system operation. Most of the
previous works do not consider that the relay may have its own objectives, such
as using the harvested energy for its own transmission instead of maximizing
transmission of the network. Therefore, in this paper, we propose a Nash
bargaining approach to balance the information transmission efficiency of
source-destination pairs and the harvested energy of the relay in a wireless
powered relay network with multiple source-destination pairs and one relay. We
analyze and prove that the Nash bargaining problem has several desirable
properties such as the discreteness and quasi-concavity, when it is decomposed
into three sub-problems: the energy transmission power optimization, the power
control for data transmission and the time division between energy transmission
and data transmission. Based on the theoretical analysis, we propose an
alternating power control and time division algorithm to find a suboptimal
solution. Simulation results clearly show and demonstrate the properties of the
problem and the convergence of our algorithm.
|
[
{
"created": "Sun, 13 Nov 2016 15:34:14 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Zheng",
"Zijie",
""
],
[
"Song",
"Lingyang",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Han",
"Zhu",
""
]
] |
Simultaneous information and power transfer in mobile relay networks has recently emerged, where the relay can harvest the radio frequency (RF) energy and then use this energy for data forwarding and system operation. Most of the previous works do not consider that the relay may have its own objectives, such as using the harvested energy for its own transmission instead of maximizing transmission of the network. Therefore, in this paper, we propose a Nash bargaining approach to balance the information transmission efficiency of source-destination pairs and the harvested energy of the relay in a wireless powered relay network with multiple source-destination pairs and one relay. We analyze and prove that the Nash bargaining problem has several desirable properties such as the discreteness and quasi-concavity, when it is decomposed into three sub-problems: the energy transmission power optimization, the power control for data transmission and the time division between energy transmission and data transmission. Based on the theoretical analysis, we propose an alternating power control and time division algorithm to find a suboptimal solution. Simulation results clearly show and demonstrate the properties of the problem and the convergence of our algorithm.
|
2307.13900
|
Hyunjong Ok
|
Hyunjong Ok
|
FinTree: Financial Dataset Pretrain Transformer Encoder for Relation
Extraction
|
4pages, 2 figures, The SIGIR'23 Workshop on Knowledge Discovery from
Unstructured Data in Financial Services
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FinTree, Financial Dataset Pretrain Transformer Encoder for
Relation Extraction. Utilizing an encoder language model, we further pretrain
FinTree on the financial dataset, adapting the model to financial domain tasks.
FinTree stands out with its novel structure that predicts a masked token
instead of the conventional [CLS] token, inspired by the Pattern Exploiting
Training methodology. This structure allows for more accurate relation
predictions between two given entities. The model is trained with a unique
input pattern to provide contextual and positional information about the
entities of interest, and a post-processing step ensures accurate predictions
in line with the entity types. Our experiments demonstrate that FinTree
outperforms prior approaches on REFinD, a large-scale financial relation extraction dataset.
The code and pretrained models are available at
https://github.com/HJ-Ok/FinTree.
|
[
{
"created": "Wed, 26 Jul 2023 01:48:52 GMT",
"version": "v1"
}
] |
2023-07-27
|
[
[
"Ok",
"Hyunjong",
""
]
] |
We present FinTree, Financial Dataset Pretrain Transformer Encoder for Relation Extraction. Utilizing an encoder language model, we further pretrain FinTree on the financial dataset, adapting the model to financial domain tasks. FinTree stands out with its novel structure that predicts a masked token instead of the conventional [CLS] token, inspired by the Pattern Exploiting Training methodology. This structure allows for more accurate relation predictions between two given entities. The model is trained with a unique input pattern to provide contextual and positional information about the entities of interest, and a post-processing step ensures accurate predictions in line with the entity types. Our experiments demonstrate that FinTree outperforms prior approaches on REFinD, a large-scale financial relation extraction dataset. The code and pretrained models are available at https://github.com/HJ-Ok/FinTree.
|
2403.13249
|
Zhenyi Wang
|
Zhenyi Wang, Yan Li, Li Shen, Heng Huang
|
A Unified and General Framework for Continual Learning
|
ICLR 2024
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual Learning (CL) focuses on learning from dynamic and changing data
distributions while retaining previously acquired knowledge. Various methods
have been developed to address the challenge of catastrophic forgetting,
including regularization-based, Bayesian-based, and memory-replay-based
techniques. However, these methods lack a unified framework and common
terminology for describing their approaches. This research aims to bridge this
gap by introducing a comprehensive and overarching framework that encompasses
and reconciles these existing methodologies. Notably, this new framework is
capable of encompassing established CL approaches as special instances within a
unified and general optimization objective. An intriguing finding is that
despite their diverse origins, these methods share common mathematical
structures. This observation highlights the compatibility of these seemingly
distinct techniques, revealing their interconnectedness through a shared
underlying optimization objective. Moreover, the proposed general framework
introduces an innovative concept called refresh learning, specifically designed
to enhance the CL performance. This novel approach draws inspiration from
neuroscience, where the human brain often sheds outdated information to improve
the retention of crucial knowledge and facilitate the acquisition of new
information. In essence, refresh learning operates by initially unlearning
current data and subsequently relearning it. It serves as a versatile plug-in
that seamlessly integrates with existing CL methods, offering an adaptable and
effective enhancement to the learning process. Extensive experiments on CL
benchmarks and theoretical analysis demonstrate the effectiveness of the
proposed refresh learning. Code is available at
\url{https://github.com/joey-wang123/CL-refresh-learning}.
|
[
{
"created": "Wed, 20 Mar 2024 02:21:44 GMT",
"version": "v1"
}
] |
2024-03-21
|
[
[
"Wang",
"Zhenyi",
""
],
[
"Li",
"Yan",
""
],
[
"Shen",
"Li",
""
],
[
"Huang",
"Heng",
""
]
] |
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge. Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques. However, these methods lack a unified framework and common terminology for describing their approaches. This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies. Notably, this new framework is capable of encompassing established CL approaches as special instances within a unified and general optimization objective. An intriguing finding is that despite their diverse origins, these methods share common mathematical structures. This observation highlights the compatibility of these seemingly distinct techniques, revealing their interconnectedness through a shared underlying optimization objective. Moreover, the proposed general framework introduces an innovative concept called refresh learning, specifically designed to enhance the CL performance. This novel approach draws inspiration from neuroscience, where the human brain often sheds outdated information to improve the retention of crucial knowledge and facilitate the acquisition of new information. In essence, refresh learning operates by initially unlearning current data and subsequently relearning it. It serves as a versatile plug-in that seamlessly integrates with existing CL methods, offering an adaptable and effective enhancement to the learning process. Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning. Code is available at \url{https://github.com/joey-wang123/CL-refresh-learning}.
|
2012.06354
|
Alexander Ziller
|
Alexander Ziller, Jonathan Passerat-Palmbach, Th\'eo Ryffel, Dmitrii
Usynin, Andrew Trask, Ion\'esio Da Lima Costa Junior, Jason Mancuso, Marcus
Makowski, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
|
Privacy-preserving medical image analysis
|
Accepted at the workshop for Medical Imaging meets NeurIPS, 34th
Conference on Neural Information Processing Systems (NeurIPS) December 11,
2020
| null | null | null |
cs.CR cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The utilisation of artificial intelligence in medicine and healthcare has led
to successful clinical applications in several domains. The conflict between
data usage and privacy protection requirements in such systems must be resolved
for optimal results as well as ethical and legal compliance. This calls for
innovative solutions such as privacy-preserving machine learning (PPML). We
present PriMIA (Privacy-preserving Medical Image Analysis), a software
framework designed for PPML in medical imaging. In a real-life case study we
demonstrate significantly better classification performance of a securely
aggregated federated learning model compared to human experts on unseen
datasets. Furthermore, we show an inference-as-a-service scenario for
end-to-end encrypted diagnosis, where neither the data nor the model are
revealed. Lastly, we empirically evaluate the framework's security against a
gradient-based model inversion attack and demonstrate that no usable
information can be recovered from the model.
|
[
{
"created": "Thu, 10 Dec 2020 13:56:00 GMT",
"version": "v1"
}
] |
2020-12-14
|
[
[
"Ziller",
"Alexander",
""
],
[
"Passerat-Palmbach",
"Jonathan",
""
],
[
"Ryffel",
"Théo",
""
],
[
"Usynin",
"Dmitrii",
""
],
[
"Trask",
"Andrew",
""
],
[
"Junior",
"Ionésio Da Lima Costa",
""
],
[
"Mancuso",
"Jason",
""
],
[
"Makowski",
"Marcus",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Braren",
"Rickmer",
""
],
[
"Kaissis",
"Georgios",
""
]
] |
The utilisation of artificial intelligence in medicine and healthcare has led to successful clinical applications in several domains. The conflict between data usage and privacy protection requirements in such systems must be resolved for optimal results as well as ethical and legal compliance. This calls for innovative solutions such as privacy-preserving machine learning (PPML). We present PriMIA (Privacy-preserving Medical Image Analysis), a software framework designed for PPML in medical imaging. In a real-life case study we demonstrate significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets. Furthermore, we show an inference-as-a-service scenario for end-to-end encrypted diagnosis, where neither the data nor the model are revealed. Lastly, we empirically evaluate the framework's security against a gradient-based model inversion attack and demonstrate that no usable information can be recovered from the model.
|
2205.13741
|
Ali Seyfi
|
Ali Seyfi, Jean-Francois Rajotte, Raymond T. Ng
|
Generating multivariate time series with COmmon Source CoordInated GAN
(COSCI-GAN)
|
19 pages, 16 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating multivariate time series is a promising approach for sharing
sensitive data in many medical, financial, and IoT applications. A common type
of multivariate time series originates from a single source such as the
biometric measurements from a medical patient. This leads to complex dynamical
patterns between individual time series that are hard to learn by typical
generation models such as GANs. There is valuable information in those patterns
that machine learning models can use to better classify, predict or perform
other downstream tasks. We propose a novel framework that takes time series'
common origin into account and favors the preservation of channel/feature
relationships. The two key points of our method are: 1) the individual time
series are generated from a common point in latent space and 2) a central
discriminator favors the preservation of inter-channel/feature dynamics. We
demonstrate empirically that our method helps preserve channel/feature
correlations and that our synthetic data performs very well in downstream tasks
with medical and financial data.
|
[
{
"created": "Fri, 27 May 2022 03:09:55 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Dec 2022 00:19:16 GMT",
"version": "v2"
}
] |
2022-12-16
|
[
[
"Seyfi",
"Ali",
""
],
[
"Rajotte",
"Jean-Francois",
""
],
[
"Ng",
"Raymond T.",
""
]
] |
Generating multivariate time series is a promising approach for sharing sensitive data in many medical, financial, and IoT applications. A common type of multivariate time series originates from a single source such as the biometric measurements from a medical patient. This leads to complex dynamical patterns between individual time series that are hard to learn by typical generation models such as GANs. There is valuable information in those patterns that machine learning models can use to better classify, predict or perform other downstream tasks. We propose a novel framework that takes time series' common origin into account and favors the preservation of channel/feature relationships. The two key points of our method are: 1) the individual time series are generated from a common point in latent space and 2) a central discriminator favors the preservation of inter-channel/feature dynamics. We demonstrate empirically that our method helps preserve channel/feature correlations and that our synthetic data performs very well in downstream tasks with medical and financial data.
|
1401.4802
|
J\"urgen M\"unch
|
Alexis Ocampo, J\"urgen M\"unch
|
Process Evolution Supported by Rationale: An Empirical Investigation of
Process Changes
|
8 pages. The final publication is available at
http://link.springer.com/chapter/10.1007%2F11754305_36
|
Software Process Change, volume 3966 of Lecture Notes in Computer
Science, pages 334-341, Springer Berlin Heidelberg, 2006
|
10.1007/11754305_36
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolving a software process model without a retrospective and, in
consequence, without an understanding of the process evolution, can lead to
severe problems for the software development organization, e.g., inefficient
performance as a consequence of the arbitrary introduction of changes or
difficulty in demonstrating compliance to a given standard. Capturing
information on the rationale behind changes can provide a means for better
understanding process evolution. This article presents the results of an
exploratory study with the goal of understanding the nature of process changes
in a given context. It presents the most important issues that motivated
process engineers changing important aerospace software process standards
during an industrial project. The study is part of research work intended to
incrementally define a systematic mechanism for process evolution supported by
rationale information.
|
[
{
"created": "Mon, 20 Jan 2014 06:41:59 GMT",
"version": "v1"
}
] |
2014-01-21
|
[
[
"Ocampo",
"Alexis",
""
],
[
"Münch",
"Jürgen",
""
]
] |
Evolving a software process model without a retrospective and, in consequence, without an understanding of the process evolution, can lead to severe problems for the software development organization, e.g., inefficient performance as a consequence of the arbitrary introduction of changes or difficulty in demonstrating compliance to a given standard. Capturing information on the rationale behind changes can provide a means for better understanding process evolution. This article presents the results of an exploratory study with the goal of understanding the nature of process changes in a given context. It presents the most important issues that motivated process engineers changing important aerospace software process standards during an industrial project. The study is part of research work intended to incrementally define a systematic mechanism for process evolution supported by rationale information.
|
2102.05802
|
Leighton Barnes
|
Leighton Pate Barnes and Ayfer Ozgur
|
Fisher Information and Mutual Information Constraints
| null | null | null | null |
cs.IT math.IT math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the processing of statistical samples $X\sim P_\theta$ by a
channel $p(y|x)$, and characterize how the statistical information from the
samples for estimating the parameter $\theta\in\mathbb{R}^d$ can scale with the
mutual information or capacity of the channel. We show that if the statistical
model has a sub-Gaussian score function, then the trace of the Fisher
information matrix for estimating $\theta$ from $Y$ can scale at most linearly
with the mutual information between $X$ and $Y$. We apply this result to obtain
minimax lower bounds in distributed statistical estimation problems, and obtain
a tight preconstant for Gaussian mean estimation. We then show how our Fisher
information bound can also imply mutual information or Jensen-Shannon
divergence based distributed strong data processing inequalities.
|
[
{
"created": "Thu, 11 Feb 2021 01:53:09 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jul 2021 22:28:26 GMT",
"version": "v2"
}
] |
2021-07-12
|
[
[
"Barnes",
"Leighton Pate",
""
],
[
"Ozgur",
"Ayfer",
""
]
] |
We consider the processing of statistical samples $X\sim P_\theta$ by a channel $p(y|x)$, and characterize how the statistical information from the samples for estimating the parameter $\theta\in\mathbb{R}^d$ can scale with the mutual information or capacity of the channel. We show that if the statistical model has a sub-Gaussian score function, then the trace of the Fisher information matrix for estimating $\theta$ from $Y$ can scale at most linearly with the mutual information between $X$ and $Y$. We apply this result to obtain minimax lower bounds in distributed statistical estimation problems, and obtain a tight preconstant for Gaussian mean estimation. We then show how our Fisher information bound can also imply mutual information or Jensen-Shannon divergence based distributed strong data processing inequalities.
|