| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.01448
|
Chih-Hung Liu
|
Shengyu Huang, Chih-Hung Liu, Daniel Rutschman
|
Approximate Selection with Unreliable Comparisons in Optimal Expected
Time
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given $n$ elements, an integer $k$ and a parameter $\varepsilon$, we study how
to select an element with rank in $(k-n\varepsilon,k+n\varepsilon]$ using
unreliable comparisons where the outcome of each comparison is incorrect
independently with a constant error probability, and multiple comparisons
between the same pair of elements are independent. In this fault model, the
fundamental problems of finding the minimum, selecting the $k$-th smallest
element and sorting have been shown to require $\Theta\big(n \log
\frac{1}{Q}\big)$, $\Theta\big(n\log \frac{\min\{k,n-k\}}{Q}\big)$ and
$\Theta\big(n\log \frac{n}{Q}\big)$ comparisons, respectively, to achieve
success probability $1-Q$. Recently, Leucci and Liu proved that the approximate
minimum selection problem ($k=0$) requires expected
$\Theta(\varepsilon^{-1}\log \frac{1}{Q})$ comparisons.
We develop a randomized algorithm that performs expected
$O(\frac{k}{n}\varepsilon^{-2} \log \frac{1}{Q})$ comparisons to achieve
success probability at least $1-Q$. We also prove that any randomized algorithm
with success probability at least $1-Q$ performs expected
$\Omega(\frac{k}{n}\varepsilon^{-2}\log \frac{1}{Q})$ comparisons. Our results
indicate a clear distinction between approximating the minimum and
approximating the $k$-th smallest element, which holds even for the high
probability guarantee, e.g., if $k=\frac{n}{2}$ and $Q=\frac{1}{n}$,
$\Theta(\varepsilon^{-1}\log n)$ versus $\Theta(\varepsilon^{-2}\log n)$.
Moreover, if $\varepsilon=n^{-\alpha}$ for $\alpha \in (0,\frac{1}{2})$, the
asymptotic difference is almost quadratic, i.e., $\tilde{\Theta}(n^{\alpha})$
versus $\tilde{\Theta}(n^{2\alpha})$.
|
[
{
"created": "Tue, 3 May 2022 12:20:31 GMT",
"version": "v1"
}
] |
2022-05-04
|
[
[
"Huang",
"Shengyu",
""
],
[
"Liu",
"Chih-Hung",
""
],
[
"Rutschman",
"Daniel",
""
]
] |
|
2306.15755
|
Mozhgan Pourkeshavarz
|
Mozhgan Pourkeshavarz, Mohammad Sabokrou, Amir Rasouli
|
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory
Prediction in Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In autonomous driving, behavior prediction is fundamental for safe motion
planning, hence the security and robustness of prediction models against
adversarial attacks are of paramount importance. We propose a novel adversarial
backdoor attack against trajectory prediction models as a means of studying
their potential vulnerabilities. Our attack affects the victim at training time
via naturalistic, hence stealthy, poisoned samples crafted using a novel
two-step approach. First, triggers are crafted by perturbing the trajectory of
the attacking vehicle; they are then disguised by transforming the scene using
a bi-level optimization technique. The proposed attack does not depend on a
particular model architecture, operates in a black-box manner, and can thus be
effective without any knowledge of the victim model. We conduct extensive
empirical studies using state-of-the-art prediction models on two benchmark
datasets, with metrics customized for trajectory prediction. We show that the
proposed attack is highly effective: it significantly hinders the performance
of prediction models, remains unnoticeable to the victims, and is efficient in
that it forces the victim to generate malicious behavior even under constrained
conditions. Via ablative studies, we analyze the impact of different attack
design choices, followed by an evaluation of existing defence mechanisms against
the proposed attack.
|
[
{
"created": "Tue, 27 Jun 2023 19:15:06 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Nov 2023 16:16:43 GMT",
"version": "v2"
}
] |
2023-11-23
|
[
[
"Pourkeshavarz",
"Mozhgan",
""
],
[
"Sabokrou",
"Mohammad",
""
],
[
"Rasouli",
"Amir",
""
]
] |
|
2408.01999
|
Mohamed Chahine Ghanem Dr
|
Dipo Dunsin, Mohamed Chahine Ghanem, Karim Ouazzane, Vassil Vassilev
|
Reinforcement Learning for an Efficient and Effective Malware
Investigation during Cyber Incident Response
|
v1.1
| null | null | null |
cs.CR cs.AI cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
This research focuses on enhancing post-incident malware forensic
investigation using reinforcement learning (RL). We propose an advanced
MDP-based post-incident malware forensics investigation model and framework to
expedite post-incident forensics, and implement our RL malware investigation
model, based on a structured MDP, within the proposed framework. To identify
malware artefacts, the RL agent acquires and examines forensic evidence files,
iteratively improving its capabilities using a Q-table and temporal difference
learning. The Q-learning algorithm significantly improved the agent's ability
to identify malware, and an epsilon-greedy exploration strategy combined with
Q-learning updates enabled efficient learning and decision making. Our
experimental testing revealed that optimal learning rates depend on the
complexity of the MDP environment: simpler environments benefit from higher
rates for quicker convergence, while complex ones require lower rates for
stability. Our model's performance in identifying and classifying malware
reduced analysis time compared to human experts, demonstrating robustness and
adaptability. The study highlights the significance of hyperparameter tuning
and suggests adaptive strategies for complex environments. Our RL-based
approach produced promising results and is validated as an alternative to
traditional methods, notably by offering continuous learning and adaptation to
new and evolving malware threats, which ultimately enhances post-incident
forensic investigations.
|
[
{
"created": "Sun, 4 Aug 2024 11:55:24 GMT",
"version": "v1"
}
] |
2024-08-06
|
[
[
"Dunsin",
"Dipo",
""
],
[
"Ghanem",
"Mohamed Chahine",
""
],
[
"Ouazzane",
"Karim",
""
],
[
"Vassilev",
"Vassil",
""
]
] |
|
2210.08226
|
Adriano Cardace
|
Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama Ramirez, Samuele
Salti, Luigi Di Stefano
|
Self-Distillation for Unsupervised 3D Domain Adaptation
|
WACV 2023, Project Page:
https://cvlab-unibo.github.io/FeatureDistillation/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Point cloud classification is a popular task in 3D vision. However, previous
works usually assume that point clouds at test time are obtained with the same
procedure or sensor as those at training time. Unsupervised Domain Adaptation
(UDA), instead, breaks this assumption and tries to solve the task on an
unlabeled target domain, leveraging only a supervised source domain. For
point cloud classification, recent UDA methods try to align features across
domains via auxiliary tasks such as point cloud reconstruction, which, however,
do not optimize the discriminative power in the target domain in feature space.
In contrast, in this work, we focus on obtaining a discriminative feature space
for the target domain by enforcing consistency between a point cloud and its
augmented version. We then propose a novel iterative self-training methodology
that exploits Graph Neural Networks in the UDA context to refine pseudo-labels.
We perform extensive experiments and set the new state-of-the-art in standard
UDA benchmarks for point cloud classification. Finally, we show how our
approach can be extended to more complex tasks such as part segmentation.
|
[
{
"created": "Sat, 15 Oct 2022 08:37:02 GMT",
"version": "v1"
}
] |
2022-10-18
|
[
[
"Cardace",
"Adriano",
""
],
[
"Spezialetti",
"Riccardo",
""
],
[
"Ramirez",
"Pierluigi Zama",
""
],
[
"Salti",
"Samuele",
""
],
[
"Di Stefano",
"Luigi",
""
]
] |
|
1904.12389
|
Fang Fang
|
Fang Fang, Yanqing Xu, Zhiguo Ding, Chao Shen, Mugen Peng, George K.
Karagiannidis
|
Optimal Task Assignment and Power Allocation for NOMA Mobile-Edge
Computing Networks
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile edge computing (MEC) can enhance the computing capability of mobile
devices, and non-orthogonal multiple access (NOMA) can provide high data rates.
Combining these two technologies can effectively benefit the network with
spectrum and energy efficiency. In this paper, we investigate the task
completion time minimization in NOMA multiuser MEC networks, where multiple
users can offload their tasks simultaneously via the same frequency band. We
adopt \emph{partial} offloading, in which each user can partition its
computation task into an offloaded part and a locally computed part. We aim
to minimize the maximum task latency among users by optimizing their task
partition ratios and offloading transmit power. By considering the energy
consumption and transmit power limitation of each user, the formulated
problem is quasi-convex. Thus, a bisection search (BSS) iterative algorithm is
proposed to obtain the minimum task completion time. To reduce the complexity
of the BSS algorithm and evaluate its optimality, we further derive the
closed-form expressions of the optimal task partition ratio and offloading
power for two-user NOMA MEC networks based on the analysed results. Simulation
results demonstrate the convergence and optimality of the proposed BSS
algorithm and the effectiveness of the derived optimal solutions.
|
[
{
"created": "Sun, 28 Apr 2019 22:14:22 GMT",
"version": "v1"
}
] |
2019-04-30
|
[
[
"Fang",
"Fang",
""
],
[
"Xu",
"Yanqing",
""
],
[
"Ding",
"Zhiguo",
""
],
[
"Shen",
"Chao",
""
],
[
"Peng",
"Mugen",
""
],
[
"Karagiannidis",
"George K.",
""
]
] |
|
1909.00280
|
Yunior Ram\'irez-Cruz
|
Xihui Chen, Sjouke Mauw, Yunior Ram\'irez-Cruz
|
Publishing Community-Preserving Attributed Social Graphs with a
Differential Privacy Guarantee
| null |
Proceedings on Privacy Enhancing Technologies 2020(4):131-152,
2020
|
10.2478/popets-2020-0066
| null |
cs.SI cs.CR physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method for publishing differentially private synthetic
attributed graphs. Unlike preceding approaches, our method is able to preserve
the community structure of the original graph without sacrificing the ability
to capture global structural properties. Our proposal relies on C-AGM, a new
community-preserving generative model for attributed graphs. We equip C-AGM
with efficient methods for attributed graph sampling and parameter estimation.
For the latter, we introduce differentially private computation methods, which
allow us to release community-preserving synthetic attributed social graphs
with a strong formal privacy guarantee. Through comprehensive experiments, we
show that our new model outperforms its most relevant counterparts in
synthesising differentially private attributed social graphs that preserve the
community structure of the original graph, as well as degree sequences and
clustering coefficients.
|
[
{
"created": "Sat, 31 Aug 2019 20:16:51 GMT",
"version": "v1"
}
] |
2020-09-15
|
[
[
"Chen",
"Xihui",
""
],
[
"Mauw",
"Sjouke",
""
],
[
"Ramírez-Cruz",
"Yunior",
""
]
] |
|
2405.16311
|
Agathe Balayn
|
Agathe Balayn, Lorenzo Corti, Fanny Rancourt, Fabio Casati, Ujwal
Gadiraju
|
Understanding Stakeholders' Perceptions and Needs Across the LLM Supply
Chain
|
Paper accepted at the HCXAI workshop, co-located with CHI'24
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Explainability and transparency of AI systems are undeniably important,
leading to several research studies and tools addressing them. Existing works
fall short of accounting for the diverse stakeholders of the AI supply chain
who may differ in their needs and consideration of the facets of explainability
and transparency. In this paper, we argue for the need to revisit the inquiries
of these vital constructs in the context of LLMs. To this end, we report on a
qualitative study with 71 different stakeholders, where we explore the
prevalent perceptions and needs around these concepts. This study not only
confirms the importance of exploring the ``who'' in XAI and transparency for
LLMs, but also reflects on best practices to do so while surfacing the often
forgotten stakeholders and their information needs. Our insights suggest that
researchers and practitioners should simultaneously clarify the ``who'' in
considerations of explainability and transparency, the ``what'' in the
information needs, and ``why'' they are needed to ensure responsible design and
development across the LLM supply chain.
|
[
{
"created": "Sat, 25 May 2024 17:41:56 GMT",
"version": "v1"
}
] |
2024-05-28
|
[
[
"Balayn",
"Agathe",
""
],
[
"Corti",
"Lorenzo",
""
],
[
"Rancourt",
"Fanny",
""
],
[
"Casati",
"Fabio",
""
],
[
"Gadiraju",
"Ujwal",
""
]
] |
|
1910.11028
|
Jimmy Lin
|
Jimmy Lin, Lori Paniak, and Gordon Boerke
|
The Performance Envelope of Inverted Indexing on Modern Hardware
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the performance envelope of "traditional" inverted
indexing on modern hardware using the implementation in the open-source Lucene
search library. We benchmark indexing throughput on a single high-end
multi-core commodity server in a number of configurations varying the media of
the source collection and target index, examining a network-attached store, a
direct-attached disk array, and an SSD. Experiments show that the largest
determinants of performance are the physical characteristics of the source and
target media, and that physically isolating the two yields the highest indexing
throughput. Results suggest that current indexing techniques have reached
physical device limits, and that further algorithmic improvements in
performance are unlikely without rethinking the inverted indexing pipeline in
light of observed bottlenecks.
|
[
{
"created": "Thu, 24 Oct 2019 11:04:59 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Oct 2019 01:45:15 GMT",
"version": "v2"
}
] |
2019-10-29
|
[
[
"Lin",
"Jimmy",
""
],
[
"Paniak",
"Lori",
""
],
[
"Boerke",
"Gordon",
""
]
] |
|
2210.06111
|
Jinghan Peng
|
Yu Zheng, Jinghan Peng, Miao Zhao, Yufeng Ma, Min Liu, Xinyue Ma,
Tianyu Liang, Tianlong Kong, Liang He, Minqiang Xu
|
THUEE system description for NIST 2020 SRE CTS challenge
|
3 pages, 1 table; System description of the NIST 2020 SRE CTS challenge
| null | null | null |
cs.SD cs.AI eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the system description of the THUEE team for the NIST
2020 Speaker Recognition Evaluation (SRE) conversational telephone speech (CTS)
challenge. Subsystems based on ResNet74, ResNet152, and RepVGG-B2 are
developed as speaker embedding extractors in this evaluation. We use a combined
AM-Softmax and AAM-Softmax loss function, namely CM-Softmax, and adopt
a two-stage training strategy to further improve system performance. We fuse
all individual systems as our final submission. Our approach leads to excellent
performance and ranks 1st in the challenge.
|
[
{
"created": "Wed, 12 Oct 2022 12:01:59 GMT",
"version": "v1"
}
] |
2022-10-13
|
[
[
"Zheng",
"Yu",
""
],
[
"Peng",
"Jinghan",
""
],
[
"Zhao",
"Miao",
""
],
[
"Ma",
"Yufeng",
""
],
[
"Liu",
"Min",
""
],
[
"Ma",
"Xinyue",
""
],
[
"Liang",
"Tianyu",
""
],
[
"Kong",
"Tianlong",
""
],
[
"He",
"Liang",
""
],
[
"Xu",
"Minqiang",
""
]
] |
|
1711.06448
|
Jie Chang
|
Jie Chang, Yujun Gu, Ya Zhang
|
Chinese Typeface Transformation with Hierarchical Adversarial Network
|
8 pages(exclude reference), 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we explore automated typeface generation through image style
transfer which has shown great promise in natural image generation. Existing
style transfer methods for natural images generally assume that the source and
target images share similar high-frequency features. However, this assumption
is no longer true in typeface transformation. Inspired by the recent
advancement in Generative Adversarial Networks (GANs), we propose a
Hierarchical Adversarial Network (HAN) for typeface transformation. The
proposed HAN consists of two sub-networks: a transfer network and a
hierarchical adversarial discriminator. The transfer network maps characters
from one typeface to another. A unique characteristic of typefaces is that the
same radicals may have quite different appearances in different characters even
under the same typeface. Hence, a stage-decoder is employed by the transfer
network to leverage multiple feature layers, aiming to capture both the global
and local features. The hierarchical adversarial discriminator implicitly
measures data discrepancy between the generated domain and the target domain.
To leverage the complementary discriminating capability of different feature
layers, a hierarchical structure is proposed for the discriminator. We have
experimentally demonstrated that HAN is an effective framework for typeface
transfer and character restoration.
|
[
{
"created": "Fri, 17 Nov 2017 08:05:49 GMT",
"version": "v1"
}
] |
2017-11-20
|
[
[
"Chang",
"Jie",
""
],
[
"Gu",
"Yujun",
""
],
[
"Zhang",
"Ya",
""
]
] |
|
2204.07619
|
Max Winkelmann
|
Max Winkelmann, Constantin Vasconi, Steffen M\"uller
|
Transfer Importance Sampling -- How Testing Automated Vehicles in
Multiple Test Setups Helps With the Bias-Variance Tradeoff
|
6 pages, 5 figures, 1 table
|
2022 IEEE 25th International Conference on Intelligent
Transportation Systems (ITSC), Oct. 2022, pp. 26-31
|
10.1109/ITSC55140.2022.9922091
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The promise of increased road safety is a key motivator for the development
of automated vehicles (AV). Yet, demonstrating that an AV is as safe as, or
even safer than, a human-driven vehicle has proven to be challenging. Should an
AV be examined purely virtually, allowing large numbers of fully controllable
tests? Or should it be tested under real environmental conditions on a proving
ground? Since different test setups have different strengths and weaknesses, it
is still an open question how virtual and real tests should be combined. As a
step toward answering this question, this paper proposes transfer importance
sampling
(TIS), a risk estimation method linking different test setups. Fusing the
concepts of transfer learning and importance sampling, TIS uses a scalable,
cost-effective test setup to comprehensively explore an AV's behavior. The
insights gained then allow parameterizing tests in a more trustworthy test
setup accurately reflecting risks. We show that when using a trustworthy test
setup alone is prohibitively expensive, linking it to a scalable test setup can
increase efficiency, without sacrificing the result's
validity. Thus, the test setups' individual deficiencies are compensated for by
their systematic linkage.
|
[
{
"created": "Fri, 15 Apr 2022 19:24:38 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Nov 2022 16:28:10 GMT",
"version": "v2"
}
] |
2022-11-07
|
[
[
"Winkelmann",
"Max",
""
],
[
"Vasconi",
"Constantin",
""
],
[
"Müller",
"Steffen",
""
]
] |
|
1509.08254
|
Laura Luzzi
|
Laura Luzzi, Roope Vehkalahti and Alexander Gorodnik
|
Towards a complete DMT classification of division algebra codes
|
7 pages, 1 figure, conference version
| null | null | null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work aims at providing new bounds for the diversity multiplexing gain
trade-off of a general class of division algebra based lattice codes. In the
low multiplexing gain regime, some bounds were previously obtained from the
high signal-to-noise ratio estimate of the union bound for the pairwise error
probabilities. Here these results are extended to cover a larger range of
multiplexing gains. The improvement is achieved by using ergodic theory in Lie
groups to estimate the behavior of the sum arising from the union bound. In
particular, the new bounds for lattice codes derived from Q-central division
algebras suggest that these codes can be divided into two subclasses based on
their Hasse invariants at the infinite places. Algebras with ramification at
the infinite place seem to provide better diversity-multiplexing gain tradeoff.
|
[
{
"created": "Mon, 28 Sep 2015 09:46:17 GMT",
"version": "v1"
}
] |
2015-09-29
|
[
[
"Luzzi",
"Laura",
""
],
[
"Vehkalahti",
"Roope",
""
],
[
"Gorodnik",
"Alexander",
""
]
] |
This work aims at providing new bounds for the diversity multiplexing gain trade-off of a general class of division algebra based lattice codes. In the low multiplexing gain regime, some bounds were previously obtained from the high signal-to-noise ratio estimate of the union bound for the pairwise error probabilities. Here these results are extended to cover a larger range of multiplexing gains. The improvement is achieved by using ergodic theory in Lie groups to estimate the behavior of the sum arising from the union bound. In particular, the new bounds for lattice codes derived from Q-central division algebras suggest that these codes can be divided into two subclasses based on their Hasse invariants at the infinite places. Algebras with ramification at the infinite place seem to provide better diversity-multiplexing gain tradeoff.
|
2401.02634
|
Thanh Nhat Huy Nguyen
|
Huy Nguyen, Kien Nguyen, Sridha Sridharan, Clinton Fookes
|
AG-ReID.v2: Bridging Aerial and Ground Views for Person
Re-identification
|
13 pages, Accepted by TIFS 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Aerial-ground person re-identification (Re-ID) presents unique challenges in
computer vision, stemming from the distinct differences in viewpoints, poses,
and resolutions between high-altitude aerial and ground-based cameras. Existing
research predominantly focuses on ground-to-ground matching, with aerial
matching less explored due to a dearth of comprehensive datasets. To address
this, we introduce AG-ReID.v2, a dataset specifically designed for person Re-ID
in mixed aerial and ground scenarios. This dataset comprises 100,502 images of
1,615 unique individuals, each annotated with matching IDs and 15 soft
attribute labels. Data were collected from diverse perspectives using a UAV,
stationary CCTV, and smart glasses-integrated camera, providing a rich variety
of intra-identity variations. Additionally, we have developed an explainable
attention network tailored for this dataset. This network features a
three-stream architecture that efficiently processes pairwise image distances,
emphasizes key top-down features, and adapts to variations in appearance due to
altitude differences. Comparative evaluations demonstrate the superiority of
our approach over existing baselines. We plan to release the dataset and
algorithm source code publicly, aiming to advance research in this specialized
field of computer vision. For access, please visit
https://github.com/huynguyen792/AG-ReID.v2.
|
[
{
"created": "Fri, 5 Jan 2024 04:53:33 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Apr 2024 22:18:52 GMT",
"version": "v2"
}
] |
2024-04-09
|
[
[
"Nguyen",
"Huy",
""
],
[
"Nguyen",
"Kien",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] |
Aerial-ground person re-identification (Re-ID) presents unique challenges in computer vision, stemming from the distinct differences in viewpoints, poses, and resolutions between high-altitude aerial and ground-based cameras. Existing research predominantly focuses on ground-to-ground matching, with aerial matching less explored due to a dearth of comprehensive datasets. To address this, we introduce AG-ReID.v2, a dataset specifically designed for person Re-ID in mixed aerial and ground scenarios. This dataset comprises 100,502 images of 1,615 unique individuals, each annotated with matching IDs and 15 soft attribute labels. Data were collected from diverse perspectives using a UAV, stationary CCTV, and smart glasses-integrated camera, providing a rich variety of intra-identity variations. Additionally, we have developed an explainable attention network tailored for this dataset. This network features a three-stream architecture that efficiently processes pairwise image distances, emphasizes key top-down features, and adapts to variations in appearance due to altitude differences. Comparative evaluations demonstrate the superiority of our approach over existing baselines. We plan to release the dataset and algorithm source code publicly, aiming to advance research in this specialized field of computer vision. For access, please visit https://github.com/huynguyen792/AG-ReID.v2.
|
2209.13865
|
Siyu Long
|
Siyu Long, Yi Zhou, Xinyu Dai, Hao Zhou
|
Zero-Shot 3D Drug Design by Sketching and Generating
|
NeurIPS 2022 camera-ready
| null | null | null |
cs.CE q-bio.BM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Drug design is a crucial step in the drug discovery cycle. Recently, various
deep learning-based methods design drugs by generating novel molecules from
scratch, avoiding traversing large-scale drug libraries. However, they depend
on scarce experimental data or time-consuming docking simulation, leading to
overfitting issues with limited training data and slow generation speed. In
this study, we propose the zero-shot drug design method DESERT (Drug dEsign by
SkEtching and geneRaTing). Specifically, DESERT splits the design process into
two stages: sketching and generating, and bridges them with the molecular
shape. The two-stage fashion enables our method to utilize the large-scale
molecular database to reduce the need for experimental data and docking
simulation. Experiments show that DESERT achieves a new state-of-the-art at a
fast speed.
|
[
{
"created": "Wed, 28 Sep 2022 06:43:14 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Oct 2022 08:47:39 GMT",
"version": "v2"
}
] |
2024-03-05
|
[
[
"Long",
"Siyu",
""
],
[
"Zhou",
"Yi",
""
],
[
"Dai",
"Xinyu",
""
],
[
"Zhou",
"Hao",
""
]
] |
Drug design is a crucial step in the drug discovery cycle. Recently, various deep learning-based methods design drugs by generating novel molecules from scratch, avoiding traversing large-scale drug libraries. However, they depend on scarce experimental data or time-consuming docking simulation, leading to overfitting issues with limited training data and slow generation speed. In this study, we propose the zero-shot drug design method DESERT (Drug dEsign by SkEtching and geneRaTing). Specifically, DESERT splits the design process into two stages: sketching and generating, and bridges them with the molecular shape. The two-stage fashion enables our method to utilize the large-scale molecular database to reduce the need for experimental data and docking simulation. Experiments show that DESERT achieves a new state-of-the-art at a fast speed.
|
2402.16889
|
Aditya Desu
|
Aditya Desu, Xuanli He, Qiongkai Xu, Wei Lu
|
Generative Models are Self-Watermarked: Declaring Model Authentication
through Re-Generation
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As machine- and AI-generated content proliferates, protecting the
intellectual property of generative models has become imperative, yet verifying
data ownership poses formidable challenges, particularly in cases of
unauthorized reuse of generated data. The challenge of verifying data ownership
is further amplified by using Machine Learning as a Service (MLaaS), which
often functions as a black-box system.
Our work is dedicated to detecting data reuse from even an individual sample.
Traditionally, watermarking has been leveraged to detect AI-generated content.
However, unlike watermarking techniques that embed additional information as
triggers into models or generated content, potentially compromising output
quality, our approach identifies latent fingerprints inherently present within
the outputs through re-generation. We propose an explainable verification
procedure that attributes data ownership through re-generation, and further
amplifies these fingerprints in the generative models through iterative data
re-generation. This methodology is theoretically grounded and demonstrates
viability and robustness using recent advanced text and image generative
models. Our methodology is significant as it goes beyond protecting the
intellectual property of APIs and addresses important issues such as the spread
of misinformation and academic misconduct. It provides a useful tool to ensure
the integrity of sources and authorship, expanding its application in different
scenarios where authenticity and ownership verification are essential.
|
[
{
"created": "Fri, 23 Feb 2024 10:48:21 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Desu",
"Aditya",
""
],
[
"He",
"Xuanli",
""
],
[
"Xu",
"Qiongkai",
""
],
[
"Lu",
"Wei",
""
]
] |
As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data. The challenge of verifying data ownership is further amplified by using Machine Learning as a Service (MLaaS), which often functions as a black-box system. Our work is dedicated to detecting data reuse from even an individual sample. Traditionally, watermarking has been leveraged to detect AI-generated content. However, unlike watermarking techniques that embed additional information as triggers into models or generated content, potentially compromising output quality, our approach identifies latent fingerprints inherently present within the outputs through re-generation. We propose an explainable verification procedure that attributes data ownership through re-generation, and further amplifies these fingerprints in the generative models through iterative data re-generation. This methodology is theoretically grounded and demonstrates viability and robustness using recent advanced text and image generative models. Our methodology is significant as it goes beyond protecting the intellectual property of APIs and addresses important issues such as the spread of misinformation and academic misconduct. It provides a useful tool to ensure the integrity of sources and authorship, expanding its application in different scenarios where authenticity and ownership verification are essential.
|
2006.04969
|
Heiko Hamann
|
Heiko Hamann and Andreagiovanni Reina
|
Scalability in Computing and Robotics
|
33 pages, 8 figures
| null |
10.1109/TC.2021.3089044
| null |
cs.DC cs.MA cs.PF cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient engineered systems require scalability. A scalable system has
increasing performance with increasing system size. In an ideal case, the
increase in performance (e.g., speedup) corresponds to the number of units that
are added to the system. However, if multiple units work on the same task, then
coordination among these units is required. This coordination can introduce
overheads with an impact on system performance. The coordination costs can lead
to sublinear improvement or even diminishing performance with increasing system
size. However, there are also systems that implement efficient coordination and
exploit collaboration of units to attain superlinear improvement. Modeling the
scalability dynamics is key to understanding efficient systems. Known laws of
scalability, such as Amdahl's law, Gustafson's law, and Gunther's Universal
Scalability Law, are minimalistic phenomenological models that explain a rich
variety of system behaviors through concise equations. While useful to gain
general insights, the phenomenological nature of these models may limit the
understanding of the underlying dynamics, as they are detached from first
principles that could explain coordination overheads among units. Through a
decentralized system approach, we propose a general model based on generic
interactions between units that is able to describe, as specific cases, any
general pattern of scalability included by previously reported laws. The
proposed general model of scalability is built on first principles, or at least
on a microscopic description of interaction between units, and therefore has
the potential to contribute to a better understanding of system behavior and
scalability. We show that this model can be applied to a diverse set of
systems, such as parallel supercomputers, robot swarms, or wireless sensor
networks, creating a unified view on interdisciplinary design for scalability.
|
[
{
"created": "Mon, 8 Jun 2020 22:28:59 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Jun 2021 14:25:50 GMT",
"version": "v2"
}
] |
2021-06-14
|
[
[
"Hamann",
"Heiko",
""
],
[
"Reina",
"Andreagiovanni",
""
]
] |
Efficient engineered systems require scalability. A scalable system has increasing performance with increasing system size. In an ideal case, the increase in performance (e.g., speedup) corresponds to the number of units that are added to the system. However, if multiple units work on the same task, then coordination among these units is required. This coordination can introduce overheads with an impact on system performance. The coordination costs can lead to sublinear improvement or even diminishing performance with increasing system size. However, there are also systems that implement efficient coordination and exploit collaboration of units to attain superlinear improvement. Modeling the scalability dynamics is key to understanding efficient systems. Known laws of scalability, such as Amdahl's law, Gustafson's law, and Gunther's Universal Scalability Law, are minimalistic phenomenological models that explain a rich variety of system behaviors through concise equations. While useful to gain general insights, the phenomenological nature of these models may limit the understanding of the underlying dynamics, as they are detached from first principles that could explain coordination overheads among units. Through a decentralized system approach, we propose a general model based on generic interactions between units that is able to describe, as specific cases, any general pattern of scalability included by previously reported laws. The proposed general model of scalability is built on first principles, or at least on a microscopic description of interaction between units, and therefore has the potential to contribute to a better understanding of system behavior and scalability. We show that this model can be applied to a diverse set of systems, such as parallel supercomputers, robot swarms, or wireless sensor networks, creating a unified view on interdisciplinary design for scalability.
|
2311.07946
|
Adam Piaseczny
|
Adam Piaseczny, Eric Ruzomberka, Rohit Parasnis, Christopher G.
Brinton
|
The Impact of Adversarial Node Placement in Decentralized Federated
Learning Networks
|
Accepted to ICC 2024 conference
| null | null | null |
cs.CR cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
As Federated Learning (FL) grows in popularity, new decentralized frameworks
are becoming widespread. These frameworks leverage the benefits of
decentralized environments to enable fast and energy-efficient inter-device
communication. However, this growing popularity also intensifies the need for
robust security measures. While existing research has explored various aspects
of FL security, the role of adversarial node placement in decentralized
networks remains largely unexplored. This paper addresses this gap by analyzing
the performance of decentralized FL for various adversarial placement
strategies when adversaries can jointly coordinate their placement within a
network. We establish two baseline strategies for placing adversarial nodes:
random placement and network centrality-based placement. Building on this
foundation, we propose a novel attack algorithm that prioritizes adversarial
spread over adversarial centrality by maximizing the average network distance
between adversaries. We show that the new attack algorithm significantly
impacts key performance metrics such as testing accuracy, outperforming the
baseline frameworks by between $9\%$ and $66.5\%$ for the considered setups.
Our findings provide valuable insights into the vulnerabilities of
decentralized FL systems, setting the stage for future research aimed at
developing more secure and robust decentralized FL frameworks.
|
[
{
"created": "Tue, 14 Nov 2023 06:48:50 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 15:50:24 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Jan 2024 21:07:25 GMT",
"version": "v3"
},
{
"created": "Tue, 19 Mar 2024 19:25:21 GMT",
"version": "v4"
}
] |
2024-03-21
|
[
[
"Piaseczny",
"Adam",
""
],
[
"Ruzomberka",
"Eric",
""
],
[
"Parasnis",
"Rohit",
""
],
[
"Brinton",
"Christopher G.",
""
]
] |
As Federated Learning (FL) grows in popularity, new decentralized frameworks are becoming widespread. These frameworks leverage the benefits of decentralized environments to enable fast and energy-efficient inter-device communication. However, this growing popularity also intensifies the need for robust security measures. While existing research has explored various aspects of FL security, the role of adversarial node placement in decentralized networks remains largely unexplored. This paper addresses this gap by analyzing the performance of decentralized FL for various adversarial placement strategies when adversaries can jointly coordinate their placement within a network. We establish two baseline strategies for placing adversarial nodes: random placement and network centrality-based placement. Building on this foundation, we propose a novel attack algorithm that prioritizes adversarial spread over adversarial centrality by maximizing the average network distance between adversaries. We show that the new attack algorithm significantly impacts key performance metrics such as testing accuracy, outperforming the baseline frameworks by between $9\%$ and $66.5\%$ for the considered setups. Our findings provide valuable insights into the vulnerabilities of decentralized FL systems, setting the stage for future research aimed at developing more secure and robust decentralized FL frameworks.
|
2311.06783
|
Haoning Wu Mr
|
Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao,
Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, Geng Xue,
Wenxiu Sun, Qiong Yan, Weisi Lin
|
Q-Instruct: Improving Low-level Visual Abilities for Multi-modality
Foundation Models
|
16 pages, 11 figures, page 12-16 as appendix
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multi-modality foundation models, as represented by GPT-4V, have brought a
new paradigm for low-level visual perception and understanding tasks, that can
respond to a broad range of natural human instructions in a model. While
existing foundation models have shown exciting potentials on low-level visual
tasks, their related abilities are still preliminary and need to be improved.
In order to enhance these models, we conduct a large-scale subjective
experiment collecting a vast number of real human feedbacks on low-level
vision. Each feedback follows a pathway that starts with a detailed description
on the low-level visual appearance (*e.g. clarity, color, brightness* of an
image), and ends with an overall conclusion, with an average length of 45 words.
The constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on
18,973 images with diverse low-level appearance. Moreover, to enable foundation
models to robustly respond to diverse types of questions, we design a
GPT-participated conversion to process these feedbacks into diverse-format 200K
instruction-response pairs. Experimental results indicate that the
**Q-Instruct** consistently elevates low-level perception and understanding
abilities across several foundational models. We anticipate that our datasets
can pave the way for a future that general intelligence can perceive,
understand low-level visual appearance and evaluate visual quality like a
human. Our dataset, model zoo, and demo is published at:
https://q-future.github.io/Q-Instruct.
|
[
{
"created": "Sun, 12 Nov 2023 09:10:51 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Wu",
"Haoning",
""
],
[
"Zhang",
"Zicheng",
""
],
[
"Zhang",
"Erli",
""
],
[
"Chen",
"Chaofeng",
""
],
[
"Liao",
"Liang",
""
],
[
"Wang",
"Annan",
""
],
[
"Xu",
"Kaixin",
""
],
[
"Li",
"Chunyi",
""
],
[
"Hou",
"Jingwen",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Xue",
"Geng",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yan",
"Qiong",
""
],
[
"Lin",
"Weisi",
""
]
] |
Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, that can respond to a broad range of natural human instructions in a model. While existing foundation models have shown exciting potentials on low-level visual tasks, their related abilities are still preliminary and need to be improved. In order to enhance these models, we conduct a large-scale subjective experiment collecting a vast number of real human feedbacks on low-level vision. Each feedback follows a pathway that starts with a detailed description on the low-level visual appearance (*e.g. clarity, color, brightness* of an image), and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to robustly respond to diverse types of questions, we design a GPT-participated conversion to process these feedbacks into diverse-format 200K instruction-response pairs. Experimental results indicate that the **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundational models. We anticipate that our datasets can pave the way for a future that general intelligence can perceive, understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo is published at: https://q-future.github.io/Q-Instruct.
|
2212.14071
|
M. Tu\u{g}berk \.I\c{s}yapar
|
M. Tu\u{g}berk \.I\c{s}yapar, Ufuk Uyan, Mahiye Uluya\u{g}mur
\"Ozt\"urk
|
Large-Scale Cell-Level Quality of Service Estimation on 5G Networks
Using Machine Learning Techniques
| null | null | null | null |
cs.LG cs.AI cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study presents a general machine learning framework to estimate the
traffic-measurement-level experience rate at given throughput values in the
form of a Key Performance Indicator for the cells on base stations across
various cities, using busy-hour counter data, and several technical parameters
together with the network topology. Relying on feature engineering techniques,
scores of additional predictors are proposed to enhance the effects of raw
correlated counter values over the corresponding targets, and to represent the
underlying interactions among groups of cells within nearby spatial locations
effectively. An end-to-end regression modeling is applied on the transformed
data, with results presented on unseen cities of varying sizes.
|
[
{
"created": "Wed, 28 Dec 2022 19:14:03 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jan 2023 14:45:53 GMT",
"version": "v2"
}
] |
2023-01-09
|
[
[
"İşyapar",
"M. Tuğberk",
""
],
[
"Uyan",
"Ufuk",
""
],
[
"Öztürk",
"Mahiye Uluyağmur",
""
]
] |
This study presents a general machine learning framework to estimate the traffic-measurement-level experience rate at given throughput values in the form of a Key Performance Indicator for the cells on base stations across various cities, using busy-hour counter data, and several technical parameters together with the network topology. Relying on feature engineering techniques, scores of additional predictors are proposed to enhance the effects of raw correlated counter values over the corresponding targets, and to represent the underlying interactions among groups of cells within nearby spatial locations effectively. An end-to-end regression modeling is applied on the transformed data, with results presented on unseen cities of varying sizes.
|
2011.08045
|
Johann Laconte
|
Johann Laconte, Abderrahim Kasmi, Fran\c{c}ois Pomerleau, Roland
Chapuis, Laurent Malaterre, Christophe Debain and Romuald Aufr\`ere
|
A Novel Occupancy Mapping Framework for Risk-Aware Path Planning in
Unstructured Environments
|
Published in the Special Issue "Frontiers in Mobile Robot Navigation"
of Sensors. https://www.mdpi.com/1424-8220/21/22/7562
|
Sensors 2021, 21, 7562
|
10.3390/s21227562
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In the context of autonomous robots, one of the most important tasks is to
prevent potential damage to the robot during navigation. For this purpose, it
is often assumed that one must deal with known probabilistic obstacles, then
compute the probability of collision with each obstacle. However, in complex
scenarios or unstructured environments, it might be difficult to detect such
obstacles. In these cases, a metric map is used, where each position stores the
information of occupancy. The most common type of metric map is the Bayesian
occupancy map. However, this type of map is not well suited for computing risk
assessments for continuous paths due to its discrete nature. Hence, we
introduce a novel type of map called the Lambda Field, which is specially
designed for risk assessment. We first propose a way to compute such a map and
the expectation of a generic risk over a path. Then, we demonstrate the
benefits of our generic formulation with a use case defining the risk as the
expected collision force over a path. Using this risk definition and the Lambda
Field, we show that our framework is capable of doing classical path planning
while having a physical-based metric. Furthermore, the Lambda Field gives a
natural way to deal with unstructured environments, such as tall grass. Where
standard environment representations would always generate trajectories going
around such obstacles, our framework allows the robot to go through the grass
while being aware of the risk taken.
|
[
{
"created": "Mon, 16 Nov 2020 15:52:40 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Nov 2021 14:19:48 GMT",
"version": "v2"
}
] |
2021-12-01
|
[
[
"Laconte",
"Johann",
""
],
[
"Kasmi",
"Abderrahim",
""
],
[
"Pomerleau",
"François",
""
],
[
"Chapuis",
"Roland",
""
],
[
"Malaterre",
"Laurent",
""
],
[
"Debain",
"Christophe",
""
],
[
"Aufrère",
"Romuald",
""
]
] |
In the context of autonomous robots, one of the most important tasks is to prevent potential damage to the robot during navigation. For this purpose, it is often assumed that one must deal with known probabilistic obstacles, then compute the probability of collision with each obstacle. However, in complex scenarios or unstructured environments, it might be difficult to detect such obstacles. In these cases, a metric map is used, where each position stores the information of occupancy. The most common type of metric map is the Bayesian occupancy map. However, this type of map is not well suited for computing risk assessments for continuous paths due to its discrete nature. Hence, we introduce a novel type of map called the Lambda Field, which is specially designed for risk assessment. We first propose a way to compute such a map and the expectation of a generic risk over a path. Then, we demonstrate the benefits of our generic formulation with a use case defining the risk as the expected collision force over a path. Using this risk definition and the Lambda Field, we show that our framework is capable of doing classical path planning while having a physical-based metric. Furthermore, the Lambda Field gives a natural way to deal with unstructured environments, such as tall grass. Where standard environment representations would always generate trajectories going around such obstacles, our framework allows the robot to go through the grass while being aware of the risk taken.
|
2211.14449
|
Gabriele Prato
|
Gabriele Prato, Yale Song, Janarthanan Rajendran, R Devon Hjelm, Neel
Joshi, Sarath Chandar
|
PatchBlender: A Motion Prior for Video Transformers
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have become one of the dominant architectures in the field of
computer vision. However, there are yet several challenges when applying such
architectures to video data. Most notably, these models struggle to model the
temporal patterns of video data effectively. Directly targeting this issue, we
introduce PatchBlender, a learnable blending function that operates over patch
embeddings across the temporal dimension of the latent space. We show that our
method is successful at enabling vision transformers to encode the temporal
component of video data. On Something-Something v2 and MOVi-A, we show that our
method improves the baseline performance of video Transformers. PatchBlender
has the advantage of being compatible with almost any Transformer architecture
and since it is learnable, the model can adaptively turn on or off the prior.
It is also extremely lightweight compute-wise, 0.005% the GFLOPs of a ViT-B.
|
[
{
"created": "Fri, 11 Nov 2022 14:43:16 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Feb 2023 19:01:14 GMT",
"version": "v2"
}
] |
2023-02-14
|
[
[
"Prato",
"Gabriele",
""
],
[
"Song",
"Yale",
""
],
[
"Rajendran",
"Janarthanan",
""
],
[
"Hjelm",
"R Devon",
""
],
[
"Joshi",
"Neel",
""
],
[
"Chandar",
"Sarath",
""
]
] |
Transformers have become one of the dominant architectures in the field of computer vision. However, there are yet several challenges when applying such architectures to video data. Most notably, these models struggle to model the temporal patterns of video data effectively. Directly targeting this issue, we introduce PatchBlender, a learnable blending function that operates over patch embeddings across the temporal dimension of the latent space. We show that our method is successful at enabling vision transformers to encode the temporal component of video data. On Something-Something v2 and MOVi-A, we show that our method improves the baseline performance of video Transformers. PatchBlender has the advantage of being compatible with almost any Transformer architecture and since it is learnable, the model can adaptively turn on or off the prior. It is also extremely lightweight compute-wise, 0.005% the GFLOPs of a ViT-B.
|
2111.04309
|
Dung Truong
|
Dung Truong, Scott Makeig, Arnaud Delorme
|
Assessing learned features of Deep Learning applied to EEG
| null | null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Convolutional Neural Networks (CNNs) have achieved impressive performance on
many computer vision related tasks, such as object detection, image
recognition, image retrieval, etc. These achievements benefit from the CNNs'
outstanding capability to learn discriminative features with deep layers of
neuron structures and an iterative training process. This has inspired the EEG
research community to adopt CNNs for EEG classification tasks.
However, CNNs' learned features are not immediately interpretable, causing a
lack of understanding of the CNNs' internal working mechanism. To improve CNN
interpretability, CNN visualization methods are applied to translate the
internal features into visually perceptible patterns for qualitative analysis
of CNN layers. Many CNN visualization methods have been proposed in the
Computer Vision literature to interpret the CNN network structure, operation,
and semantic concept, yet applications to EEG data analysis have been limited.
In this work we use 3 different methods to extract EEG-relevant features from a
CNN trained on raw EEG data: optimal samples for each classification category,
activation maximization, and reverse convolution. We applied these methods to a
high-performing Deep Learning model with state-of-the-art performance for an
EEG sex classification task, and show that the model features a difference in
the theta frequency band. We show that visualization of a CNN model can reveal
interesting EEG results. Using these tools, EEG researchers using Deep Learning
can better identify the learned EEG features, possibly identifying new class
relevant biomarkers.
|
[
{
"created": "Mon, 8 Nov 2021 07:43:40 GMT",
"version": "v1"
}
] |
2021-11-09
|
[
[
"Truong",
"Dung",
""
],
[
"Makeig",
"Scott",
""
],
[
"Delorme",
"Arnaud",
""
]
] |
Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision related tasks, such as object detection, image recognition, image retrieval, etc. These achievements benefit from the CNNs' outstanding capability to learn discriminative features with deep layers of neuron structures and an iterative training process. This has inspired the EEG research community to adopt CNNs for EEG classification tasks. However, CNNs' learned features are not immediately interpretable, causing a lack of understanding of the CNNs' internal working mechanism. To improve CNN interpretability, CNN visualization methods are applied to translate the internal features into visually perceptible patterns for qualitative analysis of CNN layers. Many CNN visualization methods have been proposed in the Computer Vision literature to interpret the CNN network structure, operation, and semantic concept, yet applications to EEG data analysis have been limited. In this work we use 3 different methods to extract EEG-relevant features from a CNN trained on raw EEG data: optimal samples for each classification category, activation maximization, and reverse convolution. We applied these methods to a high-performing Deep Learning model with state-of-the-art performance for an EEG sex classification task, and show that the model features a difference in the theta frequency band. We show that visualization of a CNN model can reveal interesting EEG results. Using these tools, EEG researchers using Deep Learning can better identify the learned EEG features, possibly identifying new class relevant biomarkers.
|
0903.5045
|
Amelia Sparavigna
|
Amelia Sparavigna
|
Digital Restoration of Ancient Papyri
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image processing can be used for the digital restoration of ancient papyri,
that is, restoration performed on their digital images. Digital manipulation
allows reducing background signals and enhancing the readability of texts. In
the case of very old and damaged documents, this is fundamental for
identifying the patterns of letters. Some examples of restoration, obtained
with image processing that uses edge detection and Fourier filtering, are
shown. One of them concerns the 7Q5 fragment of the Dead Sea Scrolls.
|
[
{
"created": "Mon, 30 Mar 2009 06:00:15 GMT",
"version": "v1"
}
] |
2009-03-31
|
[
[
"Sparavigna",
"Amelia",
""
]
] |
Image processing can be used for the digital restoration of ancient papyri, that is, restoration performed on their digital images. Digital manipulation allows reducing background signals and enhancing the readability of texts. In the case of very old and damaged documents, this is fundamental for identifying the patterns of letters. Some examples of restoration, obtained with image processing that uses edge detection and Fourier filtering, are shown. One of them concerns the 7Q5 fragment of the Dead Sea Scrolls.
|
2305.00600
|
Vamsi Krishna Yepuri
|
Vamsi Krishna Yepuri, Venkata Kalyan Polamarasetty, Shivani Donthi,
Ajay Kumar Reddy Gondi
|
Containerization of a polyglot microservice application using Docker and
Kubernetes
| null | null | null | null |
cs.SE cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This project investigates the benefits of containerization technology in
modern software development and deployment. The study emphasizes the advantages
of using Kubernetes and Docker in the development process, including the easy
packaging and deployment of microservices, efficient resource utilization,
faster startup times, and greater scalability and flexibility. The project
concludes by proposing a study that involves creating a polyglot microservice
application using Java, Python, and JavaScript, containerizing it with Docker,
and deploying it in Kubernetes. The study aims to evaluate service discovery
and auto-scaling in distributed mode and compare the performance metrics with
virtual machines and containers. The results of this study can inform software
development teams about the benefits of containerization in modern software
development and deployment.
|
[
{
"created": "Sun, 30 Apr 2023 23:28:47 GMT",
"version": "v1"
}
] |
2023-05-02
|
[
[
"Yepuri",
"Vamsi Krishna",
""
],
[
"Polamarasetty",
"Venkata Kalyan",
""
],
[
"Donthi",
"Shivani",
""
],
[
"Gondi",
"Ajay Kumar Reddy",
""
]
] |
This project investigates the benefits of containerization technology in modern software development and deployment. The study emphasizes the advantages of using Kubernetes and Docker in the development process, including the easy packaging and deployment of microservices, efficient resource utilization, faster startup times, and greater scalability and flexibility. The project concludes by proposing a study that involves creating a polyglot microservice application using Java, Python, and JavaScript, containerizing it with Docker, and deploying it in Kubernetes. The study aims to evaluate service discovery and auto-scaling in distributed mode and compare the performance metrics with virtual machines and containers. The results of this study can inform software development teams about the benefits of containerization in modern software development and deployment.
|
1912.03746
|
Lucas Gren
|
Lucas Gren
|
A Flipped Classroom Approach to Teaching Empirical Software Engineering
|
IEEE Transactions on Education, Preprint December 8, 2019
| null |
10.1109/TE.2019.2960264
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contribution: A flipped classroom approach to teaching empirical software
engineering increases student learning by providing more time for active
learning in class. Background: There is a need for longitudinal studies of the
flipped classroom approach in general. Although a few cross-sectional studies
show that a flipped classroom approach can increase student learning by
providing more time for other in-class activities, such as active learning,
such studies are also rare in the context of teaching software engineering.
Intended outcomes: To assess the usefulness of a flipped classroom approach in
teaching software engineering. Application design: The study was conducted at
an international Master's program in Sweden, given in English, and partially
replicated at a university in Africa. Findings: The results suggest that
students' academic success, as measured by their exam grades, can be improved
by introducing a flipped classroom to teach software engineering topics, but
this may not extend to their subjective liking of the material, as measured by
student evaluations. Furthermore, the effect of the change in teaching
methodology was not replicated when changing the teaching team.
|
[
{
"created": "Sun, 8 Dec 2019 19:32:17 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Dec 2019 07:03:14 GMT",
"version": "v2"
}
] |
2020-01-13
|
[
[
"Gren",
"Lucas",
""
]
] |
Contribution: A flipped classroom approach to teaching empirical software engineering increases student learning by providing more time for active learning in class. Background: There is a need for longitudinal studies of the flipped classroom approach in general. Although a few cross-sectional studies show that a flipped classroom approach can increase student learning by providing more time for other in-class activities, such as active learning, such studies are also rare in the context of teaching software engineering. Intended outcomes: To assess the usefulness of a flipped classroom approach in teaching software engineering. Application design: The study was conducted at an international Master's program in Sweden, given in English, and partially replicated at a university in Africa. Findings: The results suggest that students' academic success, as measured by their exam grades, can be improved by introducing a flipped classroom to teach software engineering topics, but this may not extend to their subjective liking of the material, as measured by student evaluations. Furthermore, the effect of the change in teaching methodology was not replicated when changing the teaching team.
|
1607.06787
|
Enzo Ferrante
|
Mahsa Shakeri (2 and 4), Enzo Ferrante (1), Stavros Tsogkas (1), Sarah
Lippe (3 and 4), Samuel Kadoury (2 and 4), Iasonas Kokkinos (1), Nikos
Paragios (1) ((1) CVN, CentraleSupelec-Inria, Universite Paris-Saclay,
France, (2) Polytechnique Montreal, Canada (3) University of Montreal, Canada
(4) CHU Sainte-Justine Research Center, Montreal, Canada)
|
Prior-based Coregistration and Cosegmentation
|
The first two authors contributed equally
|
MICCAI 2016
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a modular and scalable framework for dense coregistration and
cosegmentation with two key characteristics: first, we substitute ground truth
data with the semantic map output of a classifier; second, we combine this
output with population deformable registration to improve both alignment and
segmentation. Our approach deforms all volumes towards consensus, taking into
account image similarities and label consistency. Our pipeline can incorporate
any classifier and similarity metric. Results on two datasets, containing
annotations of challenging brain structures, demonstrate the potential of our
method.
|
[
{
"created": "Fri, 22 Jul 2016 18:49:09 GMT",
"version": "v1"
}
] |
2016-07-25
|
[
[
"Shakeri",
"Mahsa",
"",
"2 and 4"
],
[
"Ferrante",
"Enzo",
"",
"1"
],
[
"Tsogkas",
"Stavros",
"",
"1"
],
[
"Lippe",
"Sarah",
"",
"3 and 4"
],
[
"Kadoury",
"Samuel",
"",
"2 and 4"
],
[
"Kokkinos",
"Iasonas",
""
],
[
"Paragios",
"Nikos",
""
]
] |
We propose a modular and scalable framework for dense coregistration and cosegmentation with two key characteristics: first, we substitute ground truth data with the semantic map output of a classifier; second, we combine this output with population deformable registration to improve both alignment and segmentation. Our approach deforms all volumes towards consensus, taking into account image similarities and label consistency. Our pipeline can incorporate any classifier and similarity metric. Results on two datasets, containing annotations of challenging brain structures, demonstrate the potential of our method.
|
2403.09477
|
Cornelius von Einem
|
Nicolaj Schmid, Cornelius von Einem, Cesar Cadena, Roland Siegwart,
Lorenz Hruby, Florian Tschopp
|
VIRUS-NeRF -- Vision, InfraRed and UltraSonic based Neural Radiance
Fields
| null | null | null | null |
cs.RO cs.CV cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous mobile robots are an increasingly integral part of modern factory
and warehouse operations. Obstacle detection, avoidance and path planning are
critical safety-relevant tasks, which are often solved using expensive LiDAR
sensors and depth cameras. We propose to use cost-effective low-resolution
ranging sensors, such as ultrasonic and infrared time-of-flight sensors, by
developing VIRUS-NeRF - Vision, InfraRed, and UltraSonic based Neural Radiance
Fields. Building upon Instant Neural Graphics Primitives with a Multiresolution
Hash Encoding (Instant-NGP), VIRUS-NeRF incorporates depth measurements from
ultrasonic and infrared sensors and utilizes them to update the occupancy grid
used for ray marching. Experimental evaluation in 2D demonstrates that
VIRUS-NeRF achieves comparable mapping performance to LiDAR point clouds
regarding coverage. Notably, in small environments, its accuracy aligns with
that of LiDAR measurements, while in larger ones, it is bounded by the utilized
ultrasonic sensors. An in-depth ablation study reveals that adding ultrasonic
and infrared sensors is highly effective when dealing with sparse data and low
view variation. Further, the proposed occupancy grid of VIRUS-NeRF improves the
mapping capabilities and increases the training speed by 46% compared to
Instant-NGP. Overall, VIRUS-NeRF presents a promising approach for
cost-effective local mapping in mobile robotics, with potential applications in
safety and navigation tasks. The code can be found at
https://github.com/ethz-asl/virus_nerf.
|
[
{
"created": "Thu, 14 Mar 2024 15:19:19 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 12:43:52 GMT",
"version": "v2"
}
] |
2024-08-15
|
[
[
"Schmid",
"Nicolaj",
""
],
[
"von Einem",
"Cornelius",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Hruby",
"Lorenz",
""
],
[
"Tschopp",
"Florian",
""
]
] |
Autonomous mobile robots are an increasingly integral part of modern factory and warehouse operations. Obstacle detection, avoidance and path planning are critical safety-relevant tasks, which are often solved using expensive LiDAR sensors and depth cameras. We propose to use cost-effective low-resolution ranging sensors, such as ultrasonic and infrared time-of-flight sensors, by developing VIRUS-NeRF - Vision, InfraRed, and UltraSonic based Neural Radiance Fields. Building upon Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Instant-NGP), VIRUS-NeRF incorporates depth measurements from ultrasonic and infrared sensors and utilizes them to update the occupancy grid used for ray marching. Experimental evaluation in 2D demonstrates that VIRUS-NeRF achieves comparable mapping performance to LiDAR point clouds regarding coverage. Notably, in small environments, its accuracy aligns with that of LiDAR measurements, while in larger ones, it is bounded by the utilized ultrasonic sensors. An in-depth ablation study reveals that adding ultrasonic and infrared sensors is highly effective when dealing with sparse data and low view variation. Further, the proposed occupancy grid of VIRUS-NeRF improves the mapping capabilities and increases the training speed by 46% compared to Instant-NGP. Overall, VIRUS-NeRF presents a promising approach for cost-effective local mapping in mobile robotics, with potential applications in safety and navigation tasks. The code can be found at https://github.com/ethz-asl/virus_nerf.
|
2004.07031
|
Guotai Wang
|
Qi Duan, Guotai Wang, Rui Wang, Chao Fu, Xinjun Li, Na Wang, Yechong
Huang, Xiaodi Huang, Tao Song, Liang Zhao, Xinglong Liu, Qing Xia, Zhiqiang
Hu, Yinan Chen and Shaoting Zhang
|
SenseCare: A Research Platform for Medical Image Informatics and
Interactive 3D Visualization
|
15 pages, 16 figures
| null | null | null |
cs.HC eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical research on smart health has an increasing demand for intelligent
and clinic-oriented medical image computing algorithms and platforms that
support various applications. To this end, we have developed the SenseCare
research platform, which is designed to facilitate translational research on intelligent
diagnosis and treatment planning in various clinical scenarios. To enable
clinical research with Artificial Intelligence (AI), SenseCare provides a range
of AI toolkits for different tasks, including image segmentation, registration,
lesion and landmark detection from various image modalities ranging from
radiology to pathology. In addition, SenseCare is clinic-oriented and supports
a wide range of clinical applications such as diagnosis and surgical planning
for lung cancer, pelvic tumor, coronary artery disease, etc. SenseCare provides
several appealing functions and features such as advanced 3D visualization,
concurrent and efficient web-based access, fast data synchronization and high
data security, multi-center deployment, support for collaborative research,
etc. In this report, we present an overview of SenseCare as an efficient
platform providing comprehensive toolkits and high extensibility for
intelligent image analysis and clinical research in different application
scenarios. We also summarize the research outcome through the collaboration
with multiple hospitals.
|
[
{
"created": "Fri, 3 Apr 2020 03:17:04 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2022 13:03:13 GMT",
"version": "v2"
}
] |
2022-09-05
|
[
[
"Duan",
"Qi",
""
],
[
"Wang",
"Guotai",
""
],
[
"Wang",
"Rui",
""
],
[
"Fu",
"Chao",
""
],
[
"Li",
"Xinjun",
""
],
[
"Wang",
"Na",
""
],
[
"Huang",
"Yechong",
""
],
[
"Huang",
"Xiaodi",
""
],
[
"Song",
"Tao",
""
],
[
"Zhao",
"Liang",
""
],
[
"Liu",
"Xinglong",
""
],
[
"Xia",
"Qing",
""
],
[
"Hu",
"Zhiqiang",
""
],
[
"Chen",
"Yinan",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
Clinical research on smart health has an increasing demand for intelligent and clinic-oriented medical image computing algorithms and platforms that support various applications. To this end, we have developed the SenseCare research platform, which is designed to facilitate translational research on intelligent diagnosis and treatment planning in various clinical scenarios. To enable clinical research with Artificial Intelligence (AI), SenseCare provides a range of AI toolkits for different tasks, including image segmentation, registration, lesion and landmark detection from various image modalities ranging from radiology to pathology. In addition, SenseCare is clinic-oriented and supports a wide range of clinical applications such as diagnosis and surgical planning for lung cancer, pelvic tumor, coronary artery disease, etc. SenseCare provides several appealing functions and features such as advanced 3D visualization, concurrent and efficient web-based access, fast data synchronization and high data security, multi-center deployment, support for collaborative research, etc. In this report, we present an overview of SenseCare as an efficient platform providing comprehensive toolkits and high extensibility for intelligent image analysis and clinical research in different application scenarios. We also summarize the research outcome through the collaboration with multiple hospitals.
|
0905.2367
|
Zhe Chen
|
Zhe Chen, Gilles Motet
|
A Language-theoretic View on Guidelines and Consistency Rules of UML
|
16 pages. In Proceedings of the 5th European Conference on Model
Driven Architecture - Foundations and Applications (ECMDA-FA 2009), Enschede,
The Netherlands, Lecture Notes in Computer Science 5562, pp. 66-81. Springer,
2009
| null | null | null |
cs.SE cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Guidelines and consistency rules of UML are used to control the degrees of
freedom provided by the language to prevent faults. Guidelines are used in
specific domains (e.g., avionics) to recommend the proper use of technologies.
Consistency rules are used to deal with inconsistencies in models. However,
guidelines and consistency rules use informal restrictions on the uses of
languages, which makes checking difficult. In this paper, we consider these
problems from a language-theoretic view. We propose the formalism of C-Systems,
short for "formal language control systems". A C-System consists of a
controlled grammar and a controlling grammar. Guidelines and consistency rules
are formalized as controlling grammars that control the uses of UML, i.e. the
derivations using the grammar of UML. This approach can be implemented as a
parser, which can automatically verify the rules on a UML user model in XMI
format. A comparison to related work shows our contribution: a generic top-down
and syntax-based approach that checks language level constraints at
compile-time.
|
[
{
"created": "Thu, 14 May 2009 16:13:48 GMT",
"version": "v1"
}
] |
2009-05-15
|
[
[
"Chen",
"Zhe",
""
],
[
"Motet",
"Gilles",
""
]
] |
Guidelines and consistency rules of UML are used to control the degrees of freedom provided by the language to prevent faults. Guidelines are used in specific domains (e.g., avionics) to recommend the proper use of technologies. Consistency rules are used to deal with inconsistencies in models. However, guidelines and consistency rules use informal restrictions on the uses of languages, which makes checking difficult. In this paper, we consider these problems from a language-theoretic view. We propose the formalism of C-Systems, short for "formal language control systems". A C-System consists of a controlled grammar and a controlling grammar. Guidelines and consistency rules are formalized as controlling grammars that control the uses of UML, i.e. the derivations using the grammar of UML. This approach can be implemented as a parser, which can automatically verify the rules on a UML user model in XMI format. A comparison to related work shows our contribution: a generic top-down and syntax-based approach that checks language level constraints at compile-time.
|
1908.06258
|
Tianyu He
|
Tianyu He, Jiale Chen, Xu Tan, Tao Qin
|
Language Graph Distillation for Low-Resource Machine Translation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural machine translation on low-resource languages is challenging due to the
lack of bilingual sentence pairs. Previous works usually solve the low-resource
translation problem with knowledge transfer in a multilingual setting. In this
paper, we propose the concept of Language Graph and further design a novel
graph distillation algorithm that boosts the accuracy of low-resource
translations in the graph with forward and backward knowledge distillation.
Preliminary experiments on the TED talks multilingual dataset demonstrate the
effectiveness of our proposed method. Specifically, we improve the low-resource
translation pair by more than 3.13 points in terms of BLEU score.
|
[
{
"created": "Sat, 17 Aug 2019 08:01:05 GMT",
"version": "v1"
}
] |
2019-08-20
|
[
[
"He",
"Tianyu",
""
],
[
"Chen",
"Jiale",
""
],
[
"Tan",
"Xu",
""
],
[
"Qin",
"Tao",
""
]
] |
Neural machine translation on low-resource languages is challenging due to the lack of bilingual sentence pairs. Previous works usually solve the low-resource translation problem with knowledge transfer in a multilingual setting. In this paper, we propose the concept of Language Graph and further design a novel graph distillation algorithm that boosts the accuracy of low-resource translations in the graph with forward and backward knowledge distillation. Preliminary experiments on the TED talks multilingual dataset demonstrate the effectiveness of our proposed method. Specifically, we improve the low-resource translation pair by more than 3.13 points in terms of BLEU score.
|
1912.11278
|
Reid McIlroy-Young
|
Reid McIlroy-Young and Ashton Anderson
|
From Welcome New Gabbers to the Pittsburgh Synagogue Shooting: The
Evolution of Gab
| null |
Proceedings of the International AAAI Conference on Web and Social
Media, 13(01), 651-654 2019
| null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gab, an online social media platform with very little content moderation, has
recently come to prominence as an alt-right community and a haven for hate
speech. We document the evolution of Gab since its inception until a Gab user
carried out the most deadly attack on the Jewish community in US history. We
investigate Gab language use, study how topics evolved over time, and find that
the shooters' posts were among the most consistently anti-Semitic on Gab, but
that hundreds of other users were even more extreme.
|
[
{
"created": "Tue, 24 Dec 2019 10:19:18 GMT",
"version": "v1"
}
] |
2019-12-25
|
[
[
"McIlroy-Young",
"Reid",
""
],
[
"Anderson",
"Ashton",
""
]
] |
Gab, an online social media platform with very little content moderation, has recently come to prominence as an alt-right community and a haven for hate speech. We document the evolution of Gab since its inception until a Gab user carried out the most deadly attack on the Jewish community in US history. We investigate Gab language use, study how topics evolved over time, and find that the shooters' posts were among the most consistently anti-Semitic on Gab, but that hundreds of other users were even more extreme.
|
1401.4234
|
Xiang Zuo
|
Xiang Zuo and Jeremy Blackburn and Nicolas Kourtellis and John
Skvoretz and Adriana Iamnitchi
|
The power of indirect social ties
|
Technical Report
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/3.0/
|
While direct social ties have been intensely studied in the context of
computer-mediated social networks, indirect ties (e.g., friends of friends)
have seen little attention. Yet in real life, we often rely on friends of our
friends for recommendations (of good doctors, good schools, or good
babysitters), for introduction to a new job opportunity, and for many other
occasional needs. In this work we attempt to 1) quantify the strength of
indirect social ties, 2) validate it, and 3) empirically demonstrate its
usefulness for distributed applications on two examples. We quantify social
strength of indirect ties using a(ny) measure of the strength of the direct
ties that connect two people and the intuition provided by the sociology
literature. We validate the proposed metric experimentally by comparing
correlations with other direct social tie evaluators. We show via data-driven
experiments that the proposed metric for social strength can be used
successfully for social applications. Specifically, we show that it alleviates
known problems in friend-to-friend storage systems by addressing two previously
documented shortcomings: reduced set of storage candidates and data
availability correlations. We also show that it can be used for predicting the
effects of a social diffusion with an accuracy of up to 93.5%.
|
[
{
"created": "Fri, 17 Jan 2014 03:56:03 GMT",
"version": "v1"
}
] |
2014-01-20
|
[
[
"Zuo",
"Xiang",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Skvoretz",
"John",
""
],
[
"Iamnitchi",
"Adriana",
""
]
] |
While direct social ties have been intensely studied in the context of computer-mediated social networks, indirect ties (e.g., friends of friends) have seen little attention. Yet in real life, we often rely on friends of our friends for recommendations (of good doctors, good schools, or good babysitters), for introduction to a new job opportunity, and for many other occasional needs. In this work we attempt to 1) quantify the strength of indirect social ties, 2) validate it, and 3) empirically demonstrate its usefulness for distributed applications on two examples. We quantify social strength of indirect ties using a(ny) measure of the strength of the direct ties that connect two people and the intuition provided by the sociology literature. We validate the proposed metric experimentally by comparing correlations with other direct social tie evaluators. We show via data-driven experiments that the proposed metric for social strength can be used successfully for social applications. Specifically, we show that it alleviates known problems in friend-to-friend storage systems by addressing two previously documented shortcomings: reduced set of storage candidates and data availability correlations. We also show that it can be used for predicting the effects of a social diffusion with an accuracy of up to 93.5%.
|
2307.12775
|
Giorgos Papanastasiou
|
Giorgos Papanastasiou, Nikolaos Dikaios, Jiahao Huang, Chengjia Wang,
Guang Yang
|
Is attention all you need in medical image analysis? A review
| null | null |
10.1109/JBHI.2023.3348436
| null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
CNNs have achieved performance gains in medical image analysis (MIA) in recent
years. CNNs can efficiently model local pixel interactions and be trained on
small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer compartments
(Transf/Attention), which can well maintain properties for modelling global
relationships, have been proposed as lighter alternatives to full Transformers.
Recently, there is an increasing trend to co-pollinate complementary
local-global properties from CNN and Transf/Attention architectures, which led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive analysis framework on generalisation opportunities
of scientific and clinical impact, based on which new data-driven domain
generalisation and adaptation methods can be stimulated.
|
[
{
"created": "Mon, 24 Jul 2023 13:24:56 GMT",
"version": "v1"
}
] |
2024-02-13
|
[
[
"Papanastasiou",
"Giorgos",
""
],
[
"Dikaios",
"Nikolaos",
""
],
[
"Huang",
"Jiahao",
""
],
[
"Wang",
"Chengjia",
""
],
[
"Yang",
"Guang",
""
]
] |
Medical imaging is a key component in clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. CNNs have achieved performance gains in medical image analysis (MIA) in recent years. CNNs can efficiently model local pixel interactions and be trained on small-scale MI data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their generalisation ability to understand out-of-distribution data with different 'global' information. The recent progress of Artificial Intelligence gave rise to Transformers, which can learn global relationships from data. However, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments (Transf/Attention), which can well maintain properties for modelling global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there is an increasing trend to co-pollinate complementary local-global properties from CNN and Transf/Attention architectures, which led to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, based on which new data-driven domain generalisation and adaptation methods can be stimulated.
|
1811.02309
|
Ali Reihanian
|
Ali Reihanian, Mohammad-Reza Feizi-Derakhshi, Hadi S. Aghdasi
|
An Enhanced Multi-Objective Biogeography-Based Optimization for
Overlapping Community Detection in Social Networks with Node Attributes
|
1. This paper has been published in the journal of "Information
Sciences". 2. https://doi.org/10.1016/j.ins.2022.11.125
|
Information Sciences, 622, pp.903-929 (2023)
|
10.1016/j.ins.2022.11.125
| null |
cs.SI cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Community detection is one of the most important and interesting issues in
social network analysis. In recent years, the simultaneous consideration of nodes'
attributes and topological structures of social networks in the process of
community detection has attracted the attention of many scholars, and this
consideration has recently been used in some community detection methods to
increase their efficiency and to enhance their performance in finding
meaningful and relevant communities. But the problem is that most of these
methods tend to find non-overlapping communities, while many real-world
networks include communities that often overlap to some extent. In order to
solve this problem, an evolutionary algorithm called MOBBO-OCD, which is based
on multi-objective biogeography-based optimization (BBO), is proposed in this
paper to automatically find overlapping communities in a social network with
node attributes while synchronously considering the density of connections and
the similarity of nodes' attributes in the network. In MOBBO-OCD, an extended
locus-based adjacency representation called OLAR is introduced to encode and
decode overlapping communities. Based on OLAR, a rank-based migration operator
along with a novel two-phase mutation strategy and a new double-point crossover
are used in the evolution process of MOBBO-OCD to effectively lead the
population into the evolution path. In order to assess the performance of
MOBBO-OCD, a new metric called alpha_SAEM is proposed in this paper, which is
able to evaluate the goodness of both overlapping and non-overlapping
partitions by considering the two aspects of node attributes and linkage
structure. Quantitative evaluations reveal that MOBBO-OCD achieves favorable
results which are quite superior to the results of 15 relevant community
detection algorithms in the literature.
|
[
{
"created": "Tue, 6 Nov 2018 12:09:36 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Nov 2021 22:10:16 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Dec 2022 11:16:14 GMT",
"version": "v3"
}
] |
2022-12-29
|
[
[
"Reihanian",
"Ali",
""
],
[
"Feizi-Derakhshi",
"Mohammad-Reza",
""
],
[
"Aghdasi",
"Hadi S.",
""
]
] |
Community detection is one of the most important and interesting issues in social network analysis. In recent years, the simultaneous consideration of nodes' attributes and topological structures of social networks in the process of community detection has attracted the attention of many scholars, and this consideration has recently been used in some community detection methods to increase their efficiency and to enhance their performance in finding meaningful and relevant communities. But the problem is that most of these methods tend to find non-overlapping communities, while many real-world networks include communities that often overlap to some extent. In order to solve this problem, an evolutionary algorithm called MOBBO-OCD, which is based on multi-objective biogeography-based optimization (BBO), is proposed in this paper to automatically find overlapping communities in a social network with node attributes while synchronously considering the density of connections and the similarity of nodes' attributes in the network. In MOBBO-OCD, an extended locus-based adjacency representation called OLAR is introduced to encode and decode overlapping communities. Based on OLAR, a rank-based migration operator along with a novel two-phase mutation strategy and a new double-point crossover are used in the evolution process of MOBBO-OCD to effectively lead the population into the evolution path. In order to assess the performance of MOBBO-OCD, a new metric called alpha_SAEM is proposed in this paper, which is able to evaluate the goodness of both overlapping and non-overlapping partitions by considering the two aspects of node attributes and linkage structure. Quantitative evaluations reveal that MOBBO-OCD achieves favorable results which are quite superior to the results of 15 relevant community detection algorithms in the literature.
|
2103.02282
|
Milan Stute
|
Alexander Heinrich, Milan Stute, Tim Kornhuber, Matthias Hollick
|
Who Can Find My Devices? Security and Privacy of Apple's Crowd-Sourced
Bluetooth Location Tracking System
|
Accepted at Privacy Enhancing Technologies Symposium (PETS) 2021
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Overnight, Apple has turned its hundreds-of-million-device ecosystem into the
world's largest crowd-sourced location tracking network called offline finding
(OF). OF leverages online finder devices to detect the presence of missing
offline devices using Bluetooth and report an approximate location back to the
owner via the Internet. While OF is not the first system of its kind, it is the
first to commit to strong privacy goals. In particular, OF aims to ensure
finder anonymity, untrackability of owner devices, and confidentiality of
location reports. This paper presents the first comprehensive security and
privacy analysis of OF. To this end, we recover the specifications of the
closed-source OF protocols by means of reverse engineering. We experimentally
show that unauthorized access to the location reports allows for accurate
device tracking and retrieving a user's top locations with an error in the
order of 10 meters in urban areas. While we find that OF's design achieves its
privacy goals, we discover two distinct design and implementation flaws that
can lead to a location correlation attack and unauthorized access to the
location history of the past seven days, which could deanonymize users. Apple
has partially addressed the issues following our responsible disclosure.
Finally, we make our research artifacts publicly available.
|
[
{
"created": "Wed, 3 Mar 2021 09:46:34 GMT",
"version": "v1"
}
] |
2021-03-04
|
[
[
"Heinrich",
"Alexander",
""
],
[
"Stute",
"Milan",
""
],
[
"Kornhuber",
"Tim",
""
],
[
"Hollick",
"Matthias",
""
]
] |
Overnight, Apple has turned its hundreds-of-million-device ecosystem into the world's largest crowd-sourced location tracking network called offline finding (OF). OF leverages online finder devices to detect the presence of missing offline devices using Bluetooth and report an approximate location back to the owner via the Internet. While OF is not the first system of its kind, it is the first to commit to strong privacy goals. In particular, OF aims to ensure finder anonymity, untrackability of owner devices, and confidentiality of location reports. This paper presents the first comprehensive security and privacy analysis of OF. To this end, we recover the specifications of the closed-source OF protocols by means of reverse engineering. We experimentally show that unauthorized access to the location reports allows for accurate device tracking and retrieving a user's top locations with an error in the order of 10 meters in urban areas. While we find that OF's design achieves its privacy goals, we discover two distinct design and implementation flaws that can lead to a location correlation attack and unauthorized access to the location history of the past seven days, which could deanonymize users. Apple has partially addressed the issues following our responsible disclosure. Finally, we make our research artifacts publicly available.
|
2305.06448
|
Nikhil Churamani
|
Nikhil Churamani, Tolga Dimlioglu, German I. Parisi and Hatice Gunes
|
Continual Facial Expression Recognition: A Benchmark
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding human affective behaviour, especially in the dynamics of
real-world settings, requires Facial Expression Recognition (FER) models to
continuously adapt to individual differences in user expression, contextual
attributions, and the environment. Current (deep) Machine Learning (ML)-based
FER approaches pre-trained in isolation on benchmark datasets fail to capture
the nuances of real-world interactions where data is available only
incrementally, acquired by the agent or robot during interactions. New learning
comes at the cost of previous knowledge, resulting in catastrophic forgetting.
Lifelong or Continual Learning (CL), on the other hand, enables adaptability in
agents by being sensitive to changing data distributions, integrating new
information without interfering with previously learnt knowledge. Positing CL
as an effective learning paradigm for FER, this work presents the Continual
Facial Expression Recognition (ConFER) benchmark that evaluates popular CL
techniques on FER tasks. It presents a comparative analysis of several CL-based
approaches on popular FER datasets such as CK+, RAF-DB, and AffectNet and
presents strategies for a successful implementation of ConFER for Affective
Computing (AC) research. CL techniques, under different learning settings, are
shown to achieve state-of-the-art (SOTA) performance across several datasets,
thus motivating a discussion on the benefits of applying CL principles towards
human behaviour understanding, particularly from facial expressions, as well as
the challenges entailed.
|
[
{
"created": "Wed, 10 May 2023 20:35:38 GMT",
"version": "v1"
}
] |
2023-05-12
|
[
[
"Churamani",
"Nikhil",
""
],
[
"Dimlioglu",
"Tolga",
""
],
[
"Parisi",
"German I.",
""
],
[
"Gunes",
"Hatice",
""
]
] |
Understanding human affective behaviour, especially in the dynamics of real-world settings, requires Facial Expression Recognition (FER) models to continuously adapt to individual differences in user expression, contextual attributions, and the environment. Current (deep) Machine Learning (ML)-based FER approaches pre-trained in isolation on benchmark datasets fail to capture the nuances of real-world interactions where data is available only incrementally, acquired by the agent or robot during interactions. New learning comes at the cost of previous knowledge, resulting in catastrophic forgetting. Lifelong or Continual Learning (CL), on the other hand, enables adaptability in agents by being sensitive to changing data distributions, integrating new information without interfering with previously learnt knowledge. Positing CL as an effective learning paradigm for FER, this work presents the Continual Facial Expression Recognition (ConFER) benchmark that evaluates popular CL techniques on FER tasks. It presents a comparative analysis of several CL-based approaches on popular FER datasets such as CK+, RAF-DB, and AffectNet and presents strategies for a successful implementation of ConFER for Affective Computing (AC) research. CL techniques, under different learning settings, are shown to achieve state-of-the-art (SOTA) performance across several datasets, thus motivating a discussion on the benefits of applying CL principles towards human behaviour understanding, particularly from facial expressions, as well as the challenges entailed.
|
2101.01909
|
Yifan Xu
|
Yifan Xu, Weijian Xu, David Cheung and Zhuowen Tu
|
Line Segment Detection Using Transformers without Edges
|
Accepted to CVPR 2021 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a joint end-to-end line segment detection algorithm
using Transformers that is post-processing and heuristics-guided intermediate
processing (edge/junction/region detection) free. Our method, named LinE
segment TRansformers (LETR), takes advantage of having integrated tokenized
queries, a self-attention mechanism, and an encoding-decoding strategy within
Transformers by skipping standard heuristic designs for the edge element
detection and perceptual grouping processes. We equip Transformers with a
multi-scale encoder/decoder strategy to perform fine-grained line segment
detection under a direct endpoint distance loss. This loss term is particularly
suitable for detecting geometric structures such as line segments that are not
conveniently represented by the standard bounding box representations. The
Transformers learn to gradually refine line segments through layers of
self-attention. In our experiments, we show state-of-the-art results on
Wireframe and YorkUrban benchmarks.
|
[
{
"created": "Wed, 6 Jan 2021 08:00:18 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Apr 2021 17:34:55 GMT",
"version": "v2"
}
] |
2021-05-03
|
[
[
"Xu",
"Yifan",
""
],
[
"Xu",
"Weijian",
""
],
[
"Cheung",
"David",
""
],
[
"Tu",
"Zhuowen",
""
]
] |
In this paper, we present a joint end-to-end line segment detection algorithm using Transformers that is post-processing and heuristics-guided intermediate processing (edge/junction/region detection) free. Our method, named LinE segment TRansformers (LETR), takes advantage of having integrated tokenized queries, a self-attention mechanism, and an encoding-decoding strategy within Transformers by skipping standard heuristic designs for the edge element detection and perceptual grouping processes. We equip Transformers with a multi-scale encoder/decoder strategy to perform fine-grained line segment detection under a direct endpoint distance loss. This loss term is particularly suitable for detecting geometric structures such as line segments that are not conveniently represented by the standard bounding box representations. The Transformers learn to gradually refine line segments through layers of self-attention. In our experiments, we show state-of-the-art results on Wireframe and YorkUrban benchmarks.
|
1811.10199
|
Naranchimeg Bold
|
Bold Naranchimeg, Chao Zhang, Takuya Akashi
|
Cross-domain Deep Feature Combination for Bird Species Classification
with Audio-visual Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the recent decade, many state-of-the-art algorithms on image classification as
well as audio classification have achieved noticeable successes with the
development of deep convolutional neural network (CNN). However, most of the
works only exploit a single type of training data. In this paper, we present a
study on classifying bird species by exploiting the combination of both visual
(images) and audio (sounds) data using CNN, which has been sparsely treated so
far. Specifically, we propose CNN-based multimodal learning models in three
types of fusion strategies (early, middle, late) to settle the issues of
combining training data across domains. The advantage of our proposed method
lies in the fact that we can utilize CNN not only to extract features from
image and audio data (spectrogram) but also to combine the features across
modalities. In the experiment, we train and evaluate the network structure on a
comprehensive CUB-200-2011 standard data set combining our originally collected
audio data set with respect to the data species. We observe that a model which
utilizes the combination of both data types outperforms models trained with
only either type of data. We also show that transfer learning can significantly
increase the classification performance.
|
[
{
"created": "Mon, 26 Nov 2018 06:28:44 GMT",
"version": "v1"
}
] |
2018-11-27
|
[
[
"Naranchimeg",
"Bold",
""
],
[
"Zhang",
"Chao",
""
],
[
"Akashi",
"Takuya",
""
]
] |
In the recent decade, many state-of-the-art algorithms on image classification as well as audio classification have achieved noticeable successes with the development of deep convolutional neural network (CNN). However, most of the works only exploit a single type of training data. In this paper, we present a study on classifying bird species by exploiting the combination of both visual (images) and audio (sounds) data using CNN, which has been sparsely treated so far. Specifically, we propose CNN-based multimodal learning models in three types of fusion strategies (early, middle, late) to settle the issues of combining training data across domains. The advantage of our proposed method lies in the fact that we can utilize CNN not only to extract features from image and audio data (spectrogram) but also to combine the features across modalities. In the experiment, we train and evaluate the network structure on a comprehensive CUB-200-2011 standard data set combining our originally collected audio data set with respect to the data species. We observe that a model which utilizes the combination of both data types outperforms models trained with only either type of data. We also show that transfer learning can significantly increase the classification performance.
|
2308.09435
|
Alena Fenogenova Ms
|
Nikita Martynov, Mark Baushenko, Anastasia Kozlova, Katerina
Kolomeytseva, Aleksandr Abramov, Alena Fenogenova
|
A Methodology for Generative Spelling Correction via Natural Spelling
Errors Emulation across Multiple Domains and Languages
|
to appear in EACL 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Modern large language models demonstrate impressive capabilities in text
generation and generalization. However, they often struggle with solving text
editing tasks, particularly when it comes to correcting spelling errors and
mistypings. In this paper, we present a methodology for generative spelling
correction (SC), which was tested on the English and Russian languages and
can potentially be extended to any language with minor changes. Our research
mainly focuses on exploring natural spelling errors and mistypings in texts and
studying the ways those errors can be emulated in correct sentences to
effectively enrich generative models' pre-train procedure. We investigate the
impact of such emulations and the models' abilities across different text
domains. In this work, we investigate two spelling corruption techniques: 1) the
first mimics human behavior when making a mistake by leveraging statistics of
errors from a particular dataset, and 2) the second adds the most common
spelling errors, keyboard misclicks, and some heuristics within the texts. We
conducted experiments employing various corruption strategies, models'
architectures and sizes on the pre-training and fine-tuning stages and
evaluated the models using single-domain and multi-domain test sets. As a
practical outcome of our work, we introduce SAGE (Spell checking via
Augmentation and Generative distribution Emulation). It is a library for
automatic generative SC that includes a family of pre-trained generative models
and built-in augmentation algorithms.
|
[
{
"created": "Fri, 18 Aug 2023 10:07:28 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2023 15:22:29 GMT",
"version": "v2"
}
] |
2023-09-14
|
[
[
"Martynov",
"Nikita",
""
],
[
"Baushenko",
"Mark",
""
],
[
"Kozlova",
"Anastasia",
""
],
[
"Kolomeytseva",
"Katerina",
""
],
[
"Abramov",
"Aleksandr",
""
],
[
"Fenogenova",
"Alena",
""
]
] |
Modern large language models demonstrate impressive capabilities in text generation and generalization. However, they often struggle with solving text editing tasks, particularly when it comes to correcting spelling errors and mistypings. In this paper, we present a methodology for generative spelling correction (SC), which was tested on the English and Russian languages and can potentially be extended to any language with minor changes. Our research mainly focuses on exploring natural spelling errors and mistypings in texts and studying the ways those errors can be emulated in correct sentences to effectively enrich generative models' pre-train procedure. We investigate the impact of such emulations and the models' abilities across different text domains. In this work, we investigate two spelling corruption techniques: 1) the first mimics human behavior when making a mistake by leveraging statistics of errors from a particular dataset, and 2) the second adds the most common spelling errors, keyboard misclicks, and some heuristics within the texts. We conducted experiments employing various corruption strategies, models' architectures and sizes on the pre-training and fine-tuning stages and evaluated the models using single-domain and multi-domain test sets. As a practical outcome of our work, we introduce SAGE (Spell checking via Augmentation and Generative distribution Emulation). It is a library for automatic generative SC that includes a family of pre-trained generative models and built-in augmentation algorithms.
|
1210.1317
|
Phong Nguyen
|
Phong Nguyen, Jun Wang, Melanie Hilario and Alexandros Kalousis
|
Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The notion of meta-mining has appeared recently and extends the traditional
meta-learning in two ways. First it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition it abandons the so called black-box
approach to algorithm description followed in meta-learning. Now, in addition to
the datasets, algorithms have descriptors as well, and so do workflows. For the
latter two, these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem the nature of which
is much more similar to the problems that appear in recommendation systems. The
most important meta-mining requirements are that suggestions should use only
dataset and workflow descriptors and that they should handle the cold-start
problem, e.g. providing workflow suggestions for new datasets.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray datasets) problems. The application of our method is not limited to
the meta-mining problem; its formulation is general enough that it can be
applied on problems with similar requirements.
|
[
{
"created": "Thu, 4 Oct 2012 07:17:37 GMT",
"version": "v1"
}
] |
2012-10-05
|
[
[
"Nguyen",
"Phong",
""
],
[
"Wang",
"Jun",
""
],
[
"Hilario",
"Melanie",
""
],
[
"Kalousis",
"Alexandros",
""
]
] |
The notion of meta-mining has appeared recently and extends the traditional meta-learning in two ways. First it does not learn meta-models that provide support only for the learning algorithm selection task but ones that support the whole data-mining process. In addition it abandons the so called black-box approach to algorithm description followed in meta-learning. Now, in addition to the datasets, algorithms have descriptors as well, and so do workflows. For the latter two, these descriptions are semantic, describing properties of the algorithms. With the availability of descriptors both for datasets and data mining workflows the traditional modelling techniques followed in meta-learning, typically based on classification and regression algorithms, are no longer appropriate. Instead we are faced with a problem the nature of which is much more similar to the problems that appear in recommendation systems. The most important meta-mining requirements are that suggestions should use only dataset and workflow descriptors and that they should handle the cold-start problem, e.g. providing workflow suggestions for new datasets. In this paper we take a different view on the meta-mining modelling problem and treat it as a recommender problem. In order to account for the meta-mining specificities we derive a novel metric-based-learning recommender approach. Our method learns two homogeneous metrics, one in the dataset and one in the workflow space, and a heterogeneous one in the dataset-workflow space. All learned metrics reflect similarities established from the dataset-workflow preference matrix. We demonstrate our method on meta-mining over biological (microarray datasets) problems. The application of our method is not limited to the meta-mining problem; its formulation is general enough that it can be applied on problems with similar requirements.
|
1302.3982
|
Harish Chintakunta
|
Harish Chintakunta and Hamid Krim
|
Distributed boundary tracking using alpha and Delaunay-Cech shapes
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/3.0/
|
For a given point set $S$ in a plane, we develop a distributed algorithm to
compute the $\alpha$-shape of $S$. $\alpha$-shapes are well-known geometric
objects which generalize the idea of a convex hull, and provide a good
definition for the shape of $S$. We assume that the distances between pairs of
points which are closer than a certain distance $r>0$ are provided, and we show
constructively that this information is sufficient to compute the $\alpha$-shapes
for a range of parameters, where the range depends on $r$.
Such distributed algorithms are very useful in domains such as sensor
networks, where each point represents a sensing node, the location of which is
not necessarily known.
We also introduce a new geometric object called the Delaunay-\v{C}ech shape,
which is geometrically more appropriate than an $\alpha$-shape in some cases,
and show that it is topologically equivalent to $\alpha$-shapes.
|
[
{
"created": "Sat, 16 Feb 2013 17:56:15 GMT",
"version": "v1"
}
] |
2013-02-19
|
[
[
"Chintakunta",
"Harish",
""
],
[
"Krim",
"Hamid",
""
]
] |
For a given point set $S$ in a plane, we develop a distributed algorithm to compute the $\alpha$-shape of $S$. $\alpha$-shapes are well-known geometric objects which generalize the idea of a convex hull, and provide a good definition for the shape of $S$. We assume that the distances between pairs of points which are closer than a certain distance $r>0$ are provided, and we show constructively that this information is sufficient to compute the $\alpha$-shapes for a range of parameters, where the range depends on $r$. Such distributed algorithms are very useful in domains such as sensor networks, where each point represents a sensing node, the location of which is not necessarily known. We also introduce a new geometric object called the Delaunay-\v{C}ech shape, which is geometrically more appropriate than an $\alpha$-shape in some cases, and show that it is topologically equivalent to $\alpha$-shapes.
|
2402.14469
|
Philipp Liznerski
|
Philipp Liznerski, Saurabh Varshneya, Ece Calikus, Sophie Fellenz, and
Marius Kloft
|
Reimagining Anomalies: What If Anomalies Were Normal?
|
30 pages; preprint
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning-based methods have achieved a breakthrough in image anomaly
detection, but their complexity introduces a considerable challenge to
understanding why an instance is predicted to be anomalous. We introduce a
novel explanation method that generates multiple counterfactual examples for
each anomaly, capturing diverse concepts of anomalousness. A counterfactual
example is a modification of the anomaly that is perceived as normal by the
anomaly detector. The method provides a high-level semantic explanation of the
mechanism that triggered the anomaly detector, allowing users to explore
"what-if scenarios." Qualitative and quantitative analyses across various image
datasets show that the method applied to state-of-the-art anomaly detectors can
achieve high-quality semantic explanations of detectors.
|
[
{
"created": "Thu, 22 Feb 2024 11:56:44 GMT",
"version": "v1"
}
] |
2024-02-23
|
[
[
"Liznerski",
"Philipp",
""
],
[
"Varshneya",
"Saurabh",
""
],
[
"Calikus",
"Ece",
""
],
[
"Fellenz",
"Sophie",
""
],
[
"Kloft",
"Marius",
""
]
] |
Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity introduces a considerable challenge to understanding why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple counterfactual examples for each anomaly, capturing diverse concepts of anomalousness. A counterfactual example is a modification of the anomaly that is perceived as normal by the anomaly detector. The method provides a high-level semantic explanation of the mechanism that triggered the anomaly detector, allowing users to explore "what-if scenarios." Qualitative and quantitative analyses across various image datasets show that the method applied to state-of-the-art anomaly detectors can achieve high-quality semantic explanations of detectors.
|
1701.06051
|
Mohammad Hassan Lotfi
|
Mohammad Hassan Lotfi and Saswati Sarkar
|
The Economics of Competition and Cooperation Between MNOs and MVNOs
|
8 Pages, Tech report for CISS2017 submission
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we consider the economics of the interaction between Mobile
Virtual Network Operators (MVNOs) and Mobile Network Operators (MNOs). We
investigate the incentives of an MNO for offering some of her resources to an
MVNO instead of using the resources for her own use. We formulate the problem as a
sequential game. We consider a market with one MNO and one MVNO, and a
continuum of undecided end-users (EUs). We assume that EUs have different preferences
for the MNO and the MVNO. These preferences can be because of the differences
in the service they are offering or the reluctance of an EU to buy her plan
from one of them. We assume that the preferences also depend on the investment
levels of the MNO and the MVNO. We show that there exists a unique interior
subgame perfect Nash equilibrium (SPNE), i.e. the SPNE by which both service
providers (SPs) receive a positive mass of EUs, and
characterize it. We also consider a benchmark case in which the MNO and the
MVNO do not cooperate, characterize the unique SPNE of this case, and compare
the results of our model to the benchmark case to assess the incentive of the
MNO to invest in her infrastructure and to offer it to the MVNO.
|
[
{
"created": "Sat, 21 Jan 2017 17:07:20 GMT",
"version": "v1"
}
] |
2017-01-24
|
[
[
"Lotfi",
"Mohammad Hassan",
""
],
[
"Sarkar",
"Saswati",
""
]
] |
In this work, we consider the economics of the interaction between Mobile Virtual Network Operators (MVNOs) and Mobile Network Operators (MNOs). We investigate the incentives of an MNO for offering some of her resources to an MVNO instead of using the resources for her own use. We formulate the problem as a sequential game. We consider a market with one MNO and one MVNO, and a continuum of undecided end-users (EUs). We assume that EUs have different preferences for the MNO and the MVNO. These preferences can be because of the differences in the service they are offering or the reluctance of an EU to buy her plan from one of them. We assume that the preferences also depend on the investment levels of the MNO and the MVNO. We show that there exists a unique interior subgame perfect Nash equilibrium (SPNE), i.e. the SPNE by which both service providers (SPs) receive a positive mass of EUs, and characterize it. We also consider a benchmark case in which the MNO and the MVNO do not cooperate, characterize the unique SPNE of this case, and compare the results of our model to the benchmark case to assess the incentive of the MNO to invest in her infrastructure and to offer it to the MVNO.
|
1303.5756
|
Wilson X. Wen
|
Wilson X. Wen
|
From Relational Databases to Belief Networks
|
Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991)
| null | null |
UAI-P-1991-PG-406-413
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The relationship between belief networks and relational databases is
examined. Based on this analysis, a method to construct belief networks
automatically from statistical relational data is proposed. A comparison
between our method and other methods shows that our method has several
advantages when generalization or prediction is needed.
|
[
{
"created": "Wed, 20 Mar 2013 15:33:53 GMT",
"version": "v1"
}
] |
2013-03-26
|
[
[
"Wen",
"Wilson X.",
""
]
] |
The relationship between belief networks and relational databases is examined. Based on this analysis, a method to construct belief networks automatically from statistical relational data is proposed. A comparison between our method and other methods shows that our method has several advantages when generalization or prediction is needed.
|
2201.13052
|
Pini Zilber
|
Pini Zilber and Boaz Nadler
|
Inductive Matrix Completion: No Bad Local Minima and a Fast Algorithm
| null | null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The inductive matrix completion (IMC) problem is to recover a low rank matrix
from few observed entries while incorporating prior knowledge about its row and
column subspaces. In this work, we make three contributions to the IMC problem:
(i) we prove that under suitable conditions, the IMC optimization landscape has
no bad local minima; (ii) we derive a simple scheme with theoretical guarantees
to estimate the rank of the unknown matrix; and (iii) we propose GNIMC, a
simple Gauss-Newton based method to solve the IMC problem, analyze its runtime
and derive recovery guarantees for it. The guarantees for GNIMC are sharper in
several aspects than those available for other methods, including a quadratic
convergence rate, fewer required observed entries and stability to errors or
deviations from low-rank. Empirically, given entries observed uniformly at
random, GNIMC recovers the underlying matrix substantially faster than several
competing methods.
|
[
{
"created": "Mon, 31 Jan 2022 08:20:08 GMT",
"version": "v1"
}
] |
2022-02-01
|
[
[
"Zilber",
"Pini",
""
],
[
"Nadler",
"Boaz",
""
]
] |
The inductive matrix completion (IMC) problem is to recover a low rank matrix from few observed entries while incorporating prior knowledge about its row and column subspaces. In this work, we make three contributions to the IMC problem: (i) we prove that under suitable conditions, the IMC optimization landscape has no bad local minima; (ii) we derive a simple scheme with theoretical guarantees to estimate the rank of the unknown matrix; and (iii) we propose GNIMC, a simple Gauss-Newton based method to solve the IMC problem, analyze its runtime and derive recovery guarantees for it. The guarantees for GNIMC are sharper in several aspects than those available for other methods, including a quadratic convergence rate, fewer required observed entries and stability to errors or deviations from low-rank. Empirically, given entries observed uniformly at random, GNIMC recovers the underlying matrix substantially faster than several competing methods.
|
2004.03937
|
Kevin Stowe
|
Bernd Skiera, Lukas J\"urgensmeier, Kevin Stowe, Iryna Gurevych
|
How to Best Predict the Daily Number of New Infections of Covid-19
|
15 pages, 5 figures
| null | null | null |
cs.SI physics.soc-ph q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge about the daily number of new infections of Covid-19 is important
because it is the basis for political decisions resulting in lockdowns and
urgent health care measures. We use Germany as an example to illustrate
shortcomings of official numbers, which are, at least in Germany, disclosed
only with several days of delay and severely underreported on weekends (more
than 40%). These shortcomings outline an urgent need for alternative data
sources. The other widely cited source provided by the Center for Systems
Science and Engineering at Johns Hopkins University (JHU) also deviates for
Germany on average by 79% from the official numbers. We argue that Google
Search and Twitter data should complement official numbers. They predict even
better than the original values from Johns Hopkins University and do so several
days ahead. These two data sources could also be used in parts of the world
where official numbers do not exist or are perceived to be unreliable.
|
[
{
"created": "Wed, 8 Apr 2020 11:08:58 GMT",
"version": "v1"
}
] |
2020-04-09
|
[
[
"Skiera",
"Bernd",
""
],
[
"Jürgensmeier",
"Lukas",
""
],
[
"Stowe",
"Kevin",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
Knowledge about the daily number of new infections of Covid-19 is important because it is the basis for political decisions resulting in lockdowns and urgent health care measures. We use Germany as an example to illustrate shortcomings of official numbers, which are, at least in Germany, disclosed only with several days of delay and severely underreported on weekends (more than 40%). These shortcomings outline an urgent need for alternative data sources. The other widely cited source provided by the Center for Systems Science and Engineering at Johns Hopkins University (JHU) also deviates for Germany on average by 79% from the official numbers. We argue that Google Search and Twitter data should complement official numbers. They predict even better than the original values from Johns Hopkins University and do so several days ahead. These two data sources could also be used in parts of the world where official numbers do not exist or are perceived to be unreliable.
|
1503.00338
|
Thibault Lesieur
|
Thibault Lesieur, Florent Krzakala, Lenka Zdeborova
|
Phase Transitions in Sparse PCA
|
6 pages, 3 figures
|
IEEE International Symposium on Information Theory (ISIT),
pp.1635-1639 (2015)
|
10.1109/ISIT.2015.7282733
| null |
cs.IT cond-mat.stat-mech math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study optimal estimation for sparse principal component analysis when the
number of non-zero elements is small but on the same order as the dimension of
the data. We employ the approximate message passing (AMP) algorithm and its state
evolution to analyze the information-theoretically minimal mean-squared
error and the one achieved by AMP in the limit of large sizes. For a special
case of rank one and large enough density of non-zeros, Deshpande and Montanari
[1] proved that AMP is asymptotically optimal. We show that both for low
density and for large rank the problem undergoes a series of phase transitions
suggesting existence of a region of parameters where estimation is information
theoretically possible, but AMP (and presumably every other polynomial
algorithm) fails. The analysis of the large rank limit is particularly
instructive.
|
[
{
"created": "Sun, 1 Mar 2015 19:26:39 GMT",
"version": "v1"
}
] |
2020-01-22
|
[
[
"Lesieur",
"Thibault",
""
],
[
"Krzakala",
"Florent",
""
],
[
"Zdeborova",
"Lenka",
""
]
] |
We study optimal estimation for sparse principal component analysis when the number of non-zero elements is small but on the same order as the dimension of the data. We employ the approximate message passing (AMP) algorithm and its state evolution to analyze the information-theoretically minimal mean-squared error and the one achieved by AMP in the limit of large sizes. For a special case of rank one and large enough density of non-zeros, Deshpande and Montanari [1] proved that AMP is asymptotically optimal. We show that both for low density and for large rank the problem undergoes a series of phase transitions suggesting existence of a region of parameters where estimation is information theoretically possible, but AMP (and presumably every other polynomial algorithm) fails. The analysis of the large rank limit is particularly instructive.
|
1606.07729
|
Sebastian Schlecht
|
Sebastian J. Schlecht and Emanuel A. P. Habets
|
On Lossless Feedback Delay Networks
| null | null |
10.1109/TSP.2016.2637323
| null |
cs.SY cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lossless Feedback Delay Networks (FDNs) are commonly used as a design
prototype for artificial reverberation algorithms. The lossless property is
dependent on the feedback matrix, which connects the output of a set of delays
to their inputs, and the lengths of the delays. Both unitary and triangular
feedback matrices are known to constitute lossless FDNs; however, the most
general class of lossless feedback matrices has not been identified. In this
contribution, it is shown that the FDN is lossless for any set of delays, if
all irreducible components of the feedback matrix are diagonally similar to a
unitary matrix. The necessity of the generalized class of feedback matrices is
demonstrated by examples of FDN designs proposed in literature.
|
[
{
"created": "Fri, 24 Jun 2016 15:51:44 GMT",
"version": "v1"
}
] |
2017-04-05
|
[
[
"Schlecht",
"Sebastian J.",
""
],
[
"Habets",
"Emanuel A. P.",
""
]
] |
Lossless Feedback Delay Networks (FDNs) are commonly used as a design prototype for artificial reverberation algorithms. The lossless property is dependent on the feedback matrix, which connects the output of a set of delays to their inputs, and the lengths of the delays. Both unitary and triangular feedback matrices are known to constitute lossless FDNs; however, the most general class of lossless feedback matrices has not been identified. In this contribution, it is shown that the FDN is lossless for any set of delays, if all irreducible components of the feedback matrix are diagonally similar to a unitary matrix. The necessity of the generalized class of feedback matrices is demonstrated by examples of FDN designs proposed in literature.
|
2007.08864
|
Vineet Nair
|
Nir Ailon, Omer Leibovich, Vineet Nair
|
Sparse Linear Networks with a Fixed Butterfly Structure: Theory and
Practice
|
Accepted to UAI 2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A butterfly network consists of logarithmically many layers, each with a
linear number of non-zero weights (pre-specified). The fast
Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly
network followed by a projection onto a random subset of the coordinates.
Moreover, a random matrix based on FJLT with high probability approximates the
action of any matrix on a vector. Motivated by these facts, we propose to
replace a dense linear layer in any neural network by an architecture based on
the butterfly network. The proposed architecture significantly improves upon
the quadratic number of weights required in a standard dense layer to nearly
linear with little compromise in the expressibility of the resulting operator. In a
wide variety of experiments, including supervised prediction on
both the NLP and vision data, we show that this not only produces results that
match and at times outperform existing well-known architectures, but it also
offers faster training and prediction in deployment. To understand the
optimization problems posed by neural networks with a butterfly network, we
also study the optimization landscape of the encoder-decoder network, where the
encoder is replaced by a butterfly network followed by a dense linear layer in
smaller dimension. A theoretical result presented in the paper explains why the
training speed and outcome are not compromised by our proposed approach.
|
[
{
"created": "Fri, 17 Jul 2020 09:45:03 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Jul 2021 11:12:29 GMT",
"version": "v2"
}
] |
2021-07-06
|
[
[
"Ailon",
"Nir",
""
],
[
"Leibovich",
"Omer",
""
],
[
"Nair",
"Vineet",
""
]
] |
A butterfly network consists of logarithmically many layers, each with a linear number of non-zero weights (pre-specified). The fast Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly network followed by a projection onto a random subset of the coordinates. Moreover, a random matrix based on FJLT with high probability approximates the action of any matrix on a vector. Motivated by these facts, we propose to replace a dense linear layer in any neural network by an architecture based on the butterfly network. The proposed architecture significantly improves upon the quadratic number of weights required in a standard dense layer to nearly linear with little compromise in the expressibility of the resulting operator. In a wide variety of experiments, including supervised prediction on both the NLP and vision data, we show that this not only produces results that match and at times outperform existing well-known architectures, but it also offers faster training and prediction in deployment. To understand the optimization problems posed by neural networks with a butterfly network, we also study the optimization landscape of the encoder-decoder network, where the encoder is replaced by a butterfly network followed by a dense linear layer in smaller dimension. A theoretical result presented in the paper explains why the training speed and outcome are not compromised by our proposed approach.
|
2405.19872
|
Igor Podlubny
|
Igor Podlubny
|
Detection of the papermilling behavior
|
14 pages, 6 figures
| null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Based on the analysis of the data obtainable from the Web of Science
publication and citation database, typical signs of possible papermilling
behavior are described, quantified, and illustrated by examples. A MATLAB
function is provided for the analysis of the outputs from the Web of Science. A
new quantitative indicator -- integrity index, or I-index -- is proposed for
using it along with standard bibliographic and scientometric indicators.
|
[
{
"created": "Thu, 30 May 2024 09:27:34 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 09:38:07 GMT",
"version": "v2"
}
] |
2024-06-04
|
[
[
"Podlubny",
"Igor",
""
]
] |
Based on the analysis of the data obtainable from the Web of Science publication and citation database, typical signs of possible papermilling behavior are described, quantified, and illustrated by examples. A MATLAB function is provided for the analysis of the outputs from the Web of Science. A new quantitative indicator -- integrity index, or I-index -- is proposed for using it along with standard bibliographic and scientometric indicators.
|
2405.08263
|
Chenlei Lv
|
Chenlei Lv, Dan Zhang
|
Palette-based Color Transfer between Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an important subtopic of image enhancement, color transfer aims to enhance
the color scheme of a source image according to a reference one while
preserving the semantic context. To implement color transfer, the palette-based
color mapping framework was proposed. It is a classical
solution that does not depend on complex semantic analysis to generate a new
color scheme. However, the framework usually requires manual settings,
reducing its practicality. The quality of traditional palette generation
depends on the degree of color separation. In this paper, we propose a new
palette-based color transfer method that can automatically generate a new color
scheme. With a redesigned palette-based clustering method, pixels can be
classified into different segments according to color distribution with better
applicability. By combining deep learning-based image segmentation and a new
color mapping strategy, color transfer can be implemented on foreground and
background parts independently while maintaining semantic consistency. The
experimental results indicate that our method exhibits significant advantages
over peer methods in terms of natural realism, color consistency, generality,
and robustness.
|
[
{
"created": "Tue, 14 May 2024 01:41:19 GMT",
"version": "v1"
}
] |
2024-05-15
|
[
[
"Lv",
"Chenlei",
""
],
[
"Zhang",
"Dan",
""
]
] |
As an important subtopic of image enhancement, color transfer aims to enhance the color scheme of a source image according to a reference one while preserving the semantic context. To implement color transfer, the palette-based color mapping framework was proposed. It is a classical solution that does not depend on complex semantic analysis to generate a new color scheme. However, the framework usually requires manual settings, reducing its practicality. The quality of traditional palette generation depends on the degree of color separation. In this paper, we propose a new palette-based color transfer method that can automatically generate a new color scheme. With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution with better applicability. By combining deep learning-based image segmentation and a new color mapping strategy, color transfer can be implemented on foreground and background parts independently while maintaining semantic consistency. The experimental results indicate that our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
|
2104.00851
|
Yang Zhao
|
Yang Zhao and Hao Zhang
|
Estimating the Generalization in Deep Neural Networks via Sparsity
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generalization is the key capability for deep neural networks (DNNs).
However, it is challenging to give a reliable measure of the generalization
ability of a DNN via only its nature. In this paper, we propose a novel method
for estimating the generalization gap based on network sparsity. In our method,
two key quantities are proposed first. They have a close relationship with the
generalization ability and can be calculated directly from the training results
alone. Then a simple linear model involving the two key quantities is constructed
to give an accurate estimation of the generalization gap. By training DNNs with a
wide range of generalization gap on popular datasets, we show that our key
quantities and linear model could be efficient tools for estimating the
generalization gap of DNNs.
|
[
{
"created": "Fri, 2 Apr 2021 02:10:32 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jan 2022 15:00:02 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Nov 2023 08:50:33 GMT",
"version": "v3"
}
] |
2023-11-21
|
[
[
"Zhao",
"Yang",
""
],
[
"Zhang",
"Hao",
""
]
] |
Generalization is the key capability for deep neural networks (DNNs). However, it is challenging to give a reliable measure of the generalization ability of a DNN via only its nature. In this paper, we propose a novel method for estimating the generalization gap based on network sparsity. In our method, two key quantities are proposed first. They have a close relationship with the generalization ability and can be calculated directly from the training results alone. Then a simple linear model involving the two key quantities is constructed to give an accurate estimation of the generalization gap. By training DNNs with a wide range of generalization gap on popular datasets, we show that our key quantities and linear model could be efficient tools for estimating the generalization gap of DNNs.
|
1904.03746
|
Yoon Kim
|
Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer,
G\'abor Melis
|
Unsupervised Recurrent Neural Network Grammars
|
NAACL 2019
| null | null | null |
cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recurrent neural network grammars (RNNG) are generative models of language
which jointly model syntax and surface structure by incrementally generating a
syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs
achieve strong language modeling and parsing performance, but require an
annotated corpus of parse trees. In this work, we experiment with unsupervised
learning of RNNGs. Since directly marginalizing over the space of latent trees
is intractable, we instead apply amortized variational inference. To maximize
the evidence lower bound, we develop an inference network parameterized as a
neural CRF constituency parser. On language modeling, unsupervised RNNGs
perform as well as their supervised counterparts on benchmarks in English and
Chinese. On constituency grammar induction, they are competitive with recent
neural language models that induce tree structures from words through attention
mechanisms.
|
[
{
"created": "Sun, 7 Apr 2019 21:14:43 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2019 17:00:47 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Apr 2019 17:56:54 GMT",
"version": "v3"
},
{
"created": "Wed, 12 Jun 2019 04:48:38 GMT",
"version": "v4"
},
{
"created": "Fri, 14 Jun 2019 02:58:11 GMT",
"version": "v5"
},
{
"created": "Mon, 5 Aug 2019 01:21:15 GMT",
"version": "v6"
}
] |
2019-08-06
|
[
[
"Kim",
"Yoon",
""
],
[
"Rush",
"Alexander M.",
""
],
[
"Yu",
"Lei",
""
],
[
"Kuncoro",
"Adhiguna",
""
],
[
"Dyer",
"Chris",
""
],
[
"Melis",
"Gábor",
""
]
] |
Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs. Since directly marginalizing over the space of latent trees is intractable, we instead apply amortized variational inference. To maximize the evidence lower bound, we develop an inference network parameterized as a neural CRF constituency parser. On language modeling, unsupervised RNNGs perform as well as their supervised counterparts on benchmarks in English and Chinese. On constituency grammar induction, they are competitive with recent neural language models that induce tree structures from words through attention mechanisms.
|
2308.08868
|
P{\aa}l Gr{\o}n{\aa}s Drange
|
P{\aa}l Gr{\o}n{\aa}s Drange, Patrick Greaves, Irene Muzi, Felix Reidl
|
Computing complexity measures of degenerate graphs
|
Accepted for publication in the 18th International Symposium on
Parameterized and Exact Computation (IPEC 2023)
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We show that the VC-dimension of a graph can be computed in time $n^{\log
d+1} d^{O(d)}$, where $d$ is the degeneracy of the input graph. The core idea
of our algorithm is a data structure to efficiently query the number of
vertices that see a specific subset of vertices inside of a (small) query set.
The construction of this data structure takes time $O(d2^dn)$; afterwards,
queries can be computed efficiently using fast M\"obius inversion.
This data structure turns out to be useful for a range of tasks, especially
for finding bipartite patterns in degenerate graphs, and we outline an
efficient algorithm for counting the number of times specific patterns occur
in a graph. The largest factor in the running time of this algorithm is
$O(n^c)$, where $c$ is a parameter of the pattern we call its left covering
number.
Concrete applications of this algorithm include counting the number of
(non-induced) bicliques in linear time, the number of co-matchings in quadratic
time, as well as a constant-factor approximation of the ladder index in linear
time.
Finally, we supplement our theoretical results with several implementations
and run experiments on more than 200 real-world datasets -- the largest of
which has 8 million edges -- where we obtain interesting insights into the
VC-dimension of real-world networks.
|
[
{
"created": "Thu, 17 Aug 2023 09:01:47 GMT",
"version": "v1"
}
] |
2023-08-21
|
[
[
"Drange",
"Pål Grønås",
""
],
[
"Greaves",
"Patrick",
""
],
[
"Muzi",
"Irene",
""
],
[
"Reidl",
"Felix",
""
]
] |
We show that the VC-dimension of a graph can be computed in time $n^{\log d+1} d^{O(d)}$, where $d$ is the degeneracy of the input graph. The core idea of our algorithm is a data structure to efficiently query the number of vertices that see a specific subset of vertices inside of a (small) query set. The construction of this data structure takes time $O(d2^dn)$; afterwards, queries can be computed efficiently using fast M\"obius inversion. This data structure turns out to be useful for a range of tasks, especially for finding bipartite patterns in degenerate graphs, and we outline an efficient algorithm for counting the number of times specific patterns occur in a graph. The largest factor in the running time of this algorithm is $O(n^c)$, where $c$ is a parameter of the pattern we call its left covering number. Concrete applications of this algorithm include counting the number of (non-induced) bicliques in linear time, the number of co-matchings in quadratic time, as well as a constant-factor approximation of the ladder index in linear time. Finally, we supplement our theoretical results with several implementations and run experiments on more than 200 real-world datasets -- the largest of which has 8 million edges -- where we obtain interesting insights into the VC-dimension of real-world networks.
|
2206.09900
|
Chen Min
|
Chen Min and Xinli Xu and Dawei Zhao and Liang Xiao and Yiming Nie and
Bin Dai
|
Occupancy-MAE: Self-supervised Pre-training Large-scale LiDAR Point
Clouds with Masked Occupancy Autoencoders
|
Accepted by TIV
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current perception models in autonomous driving heavily rely on large-scale
labelled 3D data, which is both costly and time-consuming to annotate. This
work proposes a solution to reduce the dependence on labelled 3D training data
by leveraging pre-training on large-scale unlabeled outdoor LiDAR point clouds
using masked autoencoders (MAE). While existing masked point autoencoding
methods mainly focus on small-scale indoor point clouds or pillar-based
large-scale outdoor LiDAR data, our approach introduces a new self-supervised
masked occupancy pre-training method called Occupancy-MAE, specifically
designed for voxel-based large-scale outdoor LiDAR point clouds. Occupancy-MAE
takes advantage of the gradually sparse voxel occupancy structure of outdoor
LiDAR point clouds and incorporates a range-aware random masking strategy and a
pretext task of occupancy prediction. By randomly masking voxels based on their
distance to the LiDAR and predicting the masked occupancy structure of the
entire 3D surrounding scene, Occupancy-MAE encourages the extraction of
high-level semantic information to reconstruct the masked voxel using only a
small number of visible voxels. Extensive experiments demonstrate the
effectiveness of Occupancy-MAE across several downstream tasks. For 3D object
detection, Occupancy-MAE reduces the labelled data required for car detection
on the KITTI dataset by half and improves small object detection by
approximately 2% in AP on the Waymo dataset. For 3D semantic segmentation,
Occupancy-MAE outperforms training from scratch by around 2% in mIoU. For
multi-object tracking, Occupancy-MAE enhances training from scratch by
approximately 1% in terms of AMOTA and AMOTP. Codes are publicly available at
https://github.com/chaytonmin/Occupancy-MAE.
|
[
{
"created": "Mon, 20 Jun 2022 17:15:50 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jun 2022 06:46:02 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Jun 2022 09:01:51 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Aug 2022 14:16:21 GMT",
"version": "v4"
},
{
"created": "Wed, 23 Nov 2022 06:15:30 GMT",
"version": "v5"
},
{
"created": "Sat, 29 Apr 2023 00:54:33 GMT",
"version": "v6"
},
{
"created": "Mon, 9 Oct 2023 12:34:02 GMT",
"version": "v7"
}
] |
2023-10-10
|
[
[
"Min",
"Chen",
""
],
[
"Xu",
"Xinli",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Xiao",
"Liang",
""
],
[
"Nie",
"Yiming",
""
],
[
"Dai",
"Bin",
""
]
] |
Current perception models in autonomous driving heavily rely on large-scale labelled 3D data, which is both costly and time-consuming to annotate. This work proposes a solution to reduce the dependence on labelled 3D training data by leveraging pre-training on large-scale unlabeled outdoor LiDAR point clouds using masked autoencoders (MAE). While existing masked point autoencoding methods mainly focus on small-scale indoor point clouds or pillar-based large-scale outdoor LiDAR data, our approach introduces a new self-supervised masked occupancy pre-training method called Occupancy-MAE, specifically designed for voxel-based large-scale outdoor LiDAR point clouds. Occupancy-MAE takes advantage of the gradually sparse voxel occupancy structure of outdoor LiDAR point clouds and incorporates a range-aware random masking strategy and a pretext task of occupancy prediction. By randomly masking voxels based on their distance to the LiDAR and predicting the masked occupancy structure of the entire 3D surrounding scene, Occupancy-MAE encourages the extraction of high-level semantic information to reconstruct the masked voxel using only a small number of visible voxels. Extensive experiments demonstrate the effectiveness of Occupancy-MAE across several downstream tasks. For 3D object detection, Occupancy-MAE reduces the labelled data required for car detection on the KITTI dataset by half and improves small object detection by approximately 2% in AP on the Waymo dataset. For 3D semantic segmentation, Occupancy-MAE outperforms training from scratch by around 2% in mIoU. For multi-object tracking, Occupancy-MAE enhances training from scratch by approximately 1% in terms of AMOTA and AMOTP. Codes are publicly available at https://github.com/chaytonmin/Occupancy-MAE.
|
2303.06551
|
Zhirui Sun
|
Zhirui Sun, Weinan Chen, Jiankun Wang, and Max Q.-H. Meng
|
A Systematic Evaluation of Different Indoor Localization Methods in
Robotic Autonomous Luggage Trolley Collection at Airports
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article addresses the localization problem in robotic autonomous luggage
trolley collection at airports and provides a systematic evaluation of
different methods to solve it. The robotic autonomous luggage trolley
collection is a complex system that involves object detection, localization,
motion planning and control, manipulation, etc. Among these components,
effective localization is essential for the robot to employ subsequent motion
planning and end-effector manipulation because it can provide a correct goal
position. In this article, we survey four popular and representative
localization methods to achieve object localization in the luggage collection
process, including radio frequency identification (RFID), Keypoints,
ultrawideband (UWB), and Reflectors. To test their performance, we construct a
qualitative evaluation framework with Localization Accuracy, Mobile Power
Supplies, Coverage Area, Cost, and Scalability. In addition, we conduct a series of
quantitative experiments regarding Localization Accuracy and Success Rate on a
real-world robotic autonomous luggage trolley collection system. We further
analyze the performance of different localization methods based on experiment
results, revealing that the Keypoints method is most suitable for indoor
environments to achieve the luggage trolley collection.
|
[
{
"created": "Sun, 12 Mar 2023 03:31:10 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Sun",
"Zhirui",
""
],
[
"Chen",
"Weinan",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
This article addresses the localization problem in robotic autonomous luggage trolley collection at airports and provides a systematic evaluation of different methods to solve it. The robotic autonomous luggage trolley collection is a complex system that involves object detection, localization, motion planning and control, manipulation, etc. Among these components, effective localization is essential for the robot to employ subsequent motion planning and end-effector manipulation because it can provide a correct goal position. In this article, we survey four popular and representative localization methods to achieve object localization in the luggage collection process, including radio frequency identification (RFID), Keypoints, ultrawideband (UWB), and Reflectors. To test their performance, we construct a qualitative evaluation framework with Localization Accuracy, Mobile Power Supplies, Coverage Area, Cost, and Scalability. In addition, we conduct a series of quantitative experiments regarding Localization Accuracy and Success Rate on a real-world robotic autonomous luggage trolley collection system. We further analyze the performance of different localization methods based on experiment results, revealing that the Keypoints method is most suitable for indoor environments to achieve the luggage trolley collection.
|
1412.3377
|
Niall Murphy
|
Niall Murphy and Damien Woods
|
Uniformity is weaker than semi-uniformity for some membrane systems
|
28 pages, 1 figure
|
Fundamenta Informaticae, 134(1-2):129-152. 2014
|
10.3233/FI-2014-1095
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate computing models that are presented as families of finite
computing devices with a uniformity condition on the entire family. Examples of
such models include Boolean circuits, membrane systems, DNA computers, chemical
reaction networks and tile assembly systems, and there are many others.
However, in such models there are actually two distinct kinds of uniformity
condition. The first is the most common and well-understood, where each input
length is mapped to a single computing device (e.g. a Boolean circuit) that
computes on the finite set of inputs of that length. The second, called
semi-uniformity, is where each input is mapped to a computing device for that
input (e.g. a circuit with the input encoded as constants). The former notion
is well-known and used in Boolean circuit complexity, while the latter notion
is frequently found in literature on nature-inspired computation from the past
20 years or so.
Are these two notions distinct? For many models it has been found that these
notions are in fact the same, in the sense that the choice of uniformity or
semi-uniformity leads to characterisations of the same complexity classes. In
other related work, we showed that these notions are actually distinct for
certain classes of Boolean circuits. Here, we give analogous results for
membrane systems by showing that certain classes of uniform membrane systems
are strictly weaker than the analogous semi-uniform classes. This solves a
known open problem in the theory of membrane systems. We then go on to present
results towards characterising the power of these semi-uniform and uniform
membrane models in terms of NL and languages reducible to the unary languages
in NL, respectively.
|
[
{
"created": "Wed, 10 Dec 2014 17:30:22 GMT",
"version": "v1"
}
] |
2014-12-11
|
[
[
"Murphy",
"Niall",
""
],
[
"Woods",
"Damien",
""
]
] |
We investigate computing models that are presented as families of finite computing devices with a uniformity condition on the entire family. Examples of such models include Boolean circuits, membrane systems, DNA computers, chemical reaction networks and tile assembly systems, and there are many others. However, in such models there are actually two distinct kinds of uniformity condition. The first is the most common and well-understood, where each input length is mapped to a single computing device (e.g. a Boolean circuit) that computes on the finite set of inputs of that length. The second, called semi-uniformity, is where each input is mapped to a computing device for that input (e.g. a circuit with the input encoded as constants). The former notion is well-known and used in Boolean circuit complexity, while the latter notion is frequently found in literature on nature-inspired computation from the past 20 years or so. Are these two notions distinct? For many models it has been found that these notions are in fact the same, in the sense that the choice of uniformity or semi-uniformity leads to characterisations of the same complexity classes. In other related work, we showed that these notions are actually distinct for certain classes of Boolean circuits. Here, we give analogous results for membrane systems by showing that certain classes of uniform membrane systems are strictly weaker than the analogous semi-uniform classes. This solves a known open problem in the theory of membrane systems. We then go on to present results towards characterising the power of these semi-uniform and uniform membrane models in terms of NL and languages reducible to the unary languages in NL, respectively.
|
1502.04068
|
Eric Duchene
|
Eric Duch\^ene, Matthieu Dufour, Silvia Heubach, Urban Larsson
|
Building Nim
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The game of nim, with its simple rules, its elegant solution and its
historical importance is the quintessence of a combinatorial game, which is why
it led to so many generalizations and modifications. We present a modification
with a new spin: building nim. With given finite numbers of tokens and stacks,
this two-player game is played in two stages (thus belonging to the same family
of games as e.g. nine-men's morris): first building, where players alternate to
put one token on one of the, initially empty, stacks until all tokens have been
used. Then, the players play nim. Of course, because the solution for the game
of nim is known, the goal of the player who starts nim play is a placement of
the tokens so that the Nim-sum of the stack heights at the end of building is
different from 0. This game is trivial if the total number of tokens is odd as
the Nim-sum could never be 0, or if both the number of tokens and the number of
stacks are even, since a simple mimicking strategy results in a Nim-sum of 0
after each of the second player's moves. We present the solution for this game
for some non-trivial cases and state a general conjecture.
|
[
{
"created": "Fri, 13 Feb 2015 17:50:35 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Aug 2015 09:24:35 GMT",
"version": "v2"
}
] |
2015-08-28
|
[
[
"Duchêne",
"Eric",
""
],
[
"Dufour",
"Matthieu",
""
],
[
"Heubach",
"Silvia",
""
],
[
"Larsson",
"Urban",
""
]
] |
The game of nim, with its simple rules, its elegant solution and its historical importance is the quintessence of a combinatorial game, which is why it led to so many generalizations and modifications. We present a modification with a new spin: building nim. With given finite numbers of tokens and stacks, this two-player game is played in two stages (thus belonging to the same family of games as e.g. nine-men's morris): first building, where players alternate to put one token on one of the, initially empty, stacks until all tokens have been used. Then, the players play nim. Of course, because the solution for the game of nim is known, the goal of the player who starts nim play is a placement of the tokens so that the Nim-sum of the stack heights at the end of building is different from 0. This game is trivial if the total number of tokens is odd as the Nim-sum could never be 0, or if both the number of tokens and the number of stacks are even, since a simple mimicking strategy results in a Nim-sum of 0 after each of the second player's moves. We present the solution for this game for some non-trivial cases and state a general conjecture.
|
2405.17476
|
Sheng Yue
|
Sheng Yue, Jiani Liu, Xingyuan Hua, Ju Ren, Sen Lin, Junshan Zhang,
Yaoxue Zhang
|
How to Leverage Diverse Demonstrations in Offline Imitation Learning
|
International Conference on Machine Learning (ICML)
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offline Imitation Learning (IL) with imperfect demonstrations has garnered
increasing attention owing to the scarcity of expert data in many real-world
domains. A fundamental problem in this scenario is how to extract positive
behaviors from noisy data. In general, current approaches to the problem select
data building on state-action similarity to given expert demonstrations,
neglecting precious information in (potentially abundant) $\textit{diverse}$
state-actions that deviate from expert ones. In this paper, we introduce a
simple yet effective data selection method that identifies positive behaviors
based on their resultant states -- a more informative criterion enabling
explicit utilization of dynamics information and effective extraction of both
expert and beneficial diverse behaviors. Further, we devise a lightweight
behavior cloning algorithm capable of leveraging the expert and selected data
correctly. In the experiments, we evaluate our method on a suite of complex and
high-dimensional offline IL benchmarks, including continuous-control and
vision-based tasks. The results demonstrate that our method achieves
state-of-the-art performance, outperforming existing methods on
$\textbf{20/21}$ benchmarks, typically by $\textbf{2-5x}$, while maintaining a
comparable runtime to Behavior Cloning ($\texttt{BC}$).
|
[
{
"created": "Fri, 24 May 2024 04:56:39 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2024 01:41:13 GMT",
"version": "v2"
},
{
"created": "Thu, 30 May 2024 17:15:09 GMT",
"version": "v3"
}
] |
2024-05-31
|
[
[
"Yue",
"Sheng",
""
],
[
"Liu",
"Jiani",
""
],
[
"Hua",
"Xingyuan",
""
],
[
"Ren",
"Ju",
""
],
[
"Lin",
"Sen",
""
],
[
"Zhang",
"Junshan",
""
],
[
"Zhang",
"Yaoxue",
""
]
] |
Offline Imitation Learning (IL) with imperfect demonstrations has garnered increasing attention owing to the scarcity of expert data in many real-world domains. A fundamental problem in this scenario is how to extract positive behaviors from noisy data. In general, current approaches to the problem select data building on state-action similarity to given expert demonstrations, neglecting precious information in (potentially abundant) $\textit{diverse}$ state-actions that deviate from expert ones. In this paper, we introduce a simple yet effective data selection method that identifies positive behaviors based on their resultant states -- a more informative criterion enabling explicit utilization of dynamics information and effective extraction of both expert and beneficial diverse behaviors. Further, we devise a lightweight behavior cloning algorithm capable of leveraging the expert and selected data correctly. In the experiments, we evaluate our method on a suite of complex and high-dimensional offline IL benchmarks, including continuous-control and vision-based tasks. The results demonstrate that our method achieves state-of-the-art performance, outperforming existing methods on $\textbf{20/21}$ benchmarks, typically by $\textbf{2-5x}$, while maintaining a comparable runtime to Behavior Cloning ($\texttt{BC}$).
|
2405.01813
|
Yiwen Zhu
|
Yiwen Zhu, Yuanyuan Tian, Joyce Cahoon, Subru Krishnan, Ankita
Agarwal, Rana Alotaibi, Jes\'us Camacho-Rodr\'iguez, Bibin Chundatt, Andrew
Chung, Niharika Dutta, Andrew Fogarty, Anja Gruenheid, Brandon Haynes, Matteo
Interlandi, Minu Iyer, Nick Jurgens, Sumeet Khushalani, Brian Kroth, Manoj
Kumar, Jyoti Leeka, Sergiy Matusevych, Minni Mittal, Andreas Mueller,
Kartheek Muthyala, Harsha Nagulapalli, Yoonjae Park, Hiren Patel, Anna
Pavlenko, Olga Poppe, Santhosh Ravindran, Karla Saur, Rathijit Sen, Steve
Suh, Arijit Tarafdar, Kunal Waghray, Demin Wang, Carlo Curino, Raghu
Ramakrishnan
|
Towards Building Autonomous Data Services on Azure
|
SIGMOD Companion of the 2023 International Conference on Management
of Data. 2023
| null |
10.1145/3555041.3589674
| null |
cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Modern cloud has turned data services into easily accessible commodities.
With just a few clicks, users are now able to access a catalog of data
processing systems for a wide range of tasks. However, the cloud brings in both
complexity and opportunity. While cloud users can quickly start an application
by using various data services, it can be difficult to configure and optimize
these services to gain the most value from them. For cloud providers, managing
every aspect of an ever-increasing set of data services, while meeting customer
SLAs and minimizing operational cost is becoming more challenging. Cloud
technology enables the collection of significant amounts of workload traces and
system telemetry. With the progress in data science (DS) and machine learning
(ML), it is feasible and desirable to utilize a data-driven, ML-based approach
to automate various aspects of data services, resulting in the creation of
autonomous data services. This paper presents our perspectives and insights on
creating autonomous data services on Azure. It also covers the future endeavors
we plan to undertake and unresolved issues that still need attention.
|
[
{
"created": "Fri, 3 May 2024 02:13:20 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Zhu",
"Yiwen",
""
],
[
"Tian",
"Yuanyuan",
""
],
[
"Cahoon",
"Joyce",
""
],
[
"Krishnan",
"Subru",
""
],
[
"Agarwal",
"Ankita",
""
],
[
"Alotaibi",
"Rana",
""
],
[
"Camacho-Rodríguez",
"Jesús",
""
],
[
"Chundatt",
"Bibin",
""
],
[
"Chung",
"Andrew",
""
],
[
"Dutta",
"Niharika",
""
],
[
"Fogarty",
"Andrew",
""
],
[
"Gruenheid",
"Anja",
""
],
[
"Haynes",
"Brandon",
""
],
[
"Interlandi",
"Matteo",
""
],
[
"Iyer",
"Minu",
""
],
[
"Jurgens",
"Nick",
""
],
[
"Khushalani",
"Sumeet",
""
],
[
"Kroth",
"Brian",
""
],
[
"Kumar",
"Manoj",
""
],
[
"Leeka",
"Jyoti",
""
],
[
"Matusevych",
"Sergiy",
""
],
[
"Mittal",
"Minni",
""
],
[
"Mueller",
"Andreas",
""
],
[
"Muthyala",
"Kartheek",
""
],
[
"Nagulapalli",
"Harsha",
""
],
[
"Park",
"Yoonjae",
""
],
[
"Patel",
"Hiren",
""
],
[
"Pavlenko",
"Anna",
""
],
[
"Poppe",
"Olga",
""
],
[
"Ravindran",
"Santhosh",
""
],
[
"Saur",
"Karla",
""
],
[
"Sen",
"Rathijit",
""
],
[
"Suh",
"Steve",
""
],
[
"Tarafdar",
"Arijit",
""
],
[
"Waghray",
"Kunal",
""
],
[
"Wang",
"Demin",
""
],
[
"Curino",
"Carlo",
""
],
[
"Ramakrishnan",
"Raghu",
""
]
] |
Modern cloud has turned data services into easily accessible commodities. With just a few clicks, users are now able to access a catalog of data processing systems for a wide range of tasks. However, the cloud brings in both complexity and opportunity. While cloud users can quickly start an application by using various data services, it can be difficult to configure and optimize these services to gain the most value from them. For cloud providers, managing every aspect of an ever-increasing set of data services, while meeting customer SLAs and minimizing operational cost is becoming more challenging. Cloud technology enables the collection of significant amounts of workload traces and system telemetry. With the progress in data science (DS) and machine learning (ML), it is feasible and desirable to utilize a data-driven, ML-based approach to automate various aspects of data services, resulting in the creation of autonomous data services. This paper presents our perspectives and insights on creating autonomous data services on Azure. It also covers the future endeavors we plan to undertake and unresolved issues that still need attention.
|
2111.14427
|
Lies Hadjadj
|
Lies Hadjadj, Massih-Reza Amini, Sana Louhichi, Alexis Deschamps
|
Self-Training of Halfspaces with Generalization Guarantees under Massart
Mislabeling Noise Model
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the generalization properties of a self-training algorithm
with halfspaces. The approach learns a list of halfspaces iteratively from
labeled and unlabeled training data, in which each iteration consists of two
steps: exploration and pruning. In the exploration phase, the halfspace is
found sequentially by maximizing the unsigned-margin among unlabeled examples
and then assigning pseudo-labels to those that have a distance higher than the
current threshold. The pseudo-labeled examples are then added to the training
set, and a new classifier is learned. This process is repeated until no more
unlabeled examples remain for pseudo-labeling. In the pruning phase,
pseudo-labeled samples that have a distance to the last halfspace greater than
the associated unsigned-margin are then discarded. We prove that the
misclassification error of the resulting sequence of classifiers is bounded and
show that the resulting semi-supervised approach never degrades performance
compared to the classifier learned using only the initial labeled training set.
Experiments carried out on a variety of benchmarks demonstrate the efficiency
of the proposed approach compared to state-of-the-art methods.
|
[
{
"created": "Mon, 29 Nov 2021 10:17:04 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Dec 2021 12:32:46 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Feb 2022 14:30:23 GMT",
"version": "v3"
}
] |
2022-02-16
|
[
[
"Hadjadj",
"Lies",
""
],
[
"Amini",
"Massih-Reza",
""
],
[
"Louhichi",
"Sana",
""
],
[
"Deschamps",
"Alexis",
""
]
] |
We investigate the generalization properties of a self-training algorithm with halfspaces. The approach learns a list of halfspaces iteratively from labeled and unlabeled training data, in which each iteration consists of two steps: exploration and pruning. In the exploration phase, the halfspace is found sequentially by maximizing the unsigned-margin among unlabeled examples and then assigning pseudo-labels to those that have a distance higher than the current threshold. The pseudo-labeled examples are then added to the training set, and a new classifier is learned. This process is repeated until no more unlabeled examples remain for pseudo-labeling. In the pruning phase, pseudo-labeled samples that have a distance to the last halfspace greater than the associated unsigned-margin are then discarded. We prove that the misclassification error of the resulting sequence of classifiers is bounded and show that the resulting semi-supervised approach never degrades performance compared to the classifier learned using only the initial labeled training set. Experiments carried out on a variety of benchmarks demonstrate the efficiency of the proposed approach compared to state-of-the-art methods.
|
2009.10292
|
Ty Nguyen
|
Ty Nguyen, Ian D. Miller, Avi Cohen, Dinesh Thakur, Shashank Prasad,
Camillo J. Taylor, Pratik Chaudrahi, Vijay Kumar
|
PennSyn2Real: Training Object Recognition Models without Human Labeling
|
7 pages, 9 figures, 3 tables. Submitted to R-AL and ICRA 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scalable training data generation is a critical problem in deep learning. We
propose PennSyn2Real - a photo-realistic synthetic dataset consisting of more
than 100,000 4K images of more than 20 types of micro aerial vehicles (MAVs).
The dataset can be used to generate arbitrary numbers of training images for
high-level computer vision tasks such as MAV detection and classification. Our
data generation framework bootstraps chroma-keying, a mature cinematography
technique with a motion tracking system, providing artifact-free and curated
annotated images where object orientations and lighting are controlled. This
framework is easy to set up and can be applied to a broad range of objects,
reducing the gap between synthetic and real-world data. We show that synthetic
data generated using this framework can be directly used to train CNN models
for common object recognition tasks such as detection and segmentation. We
demonstrate competitive performance in comparison with training using only real
images. Furthermore, bootstrapping the generated synthetic data in few-shot
learning can significantly improve the overall performance, reducing the number
of required training data samples to achieve the desired accuracy.
|
[
{
"created": "Tue, 22 Sep 2020 02:53:40 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Oct 2020 04:58:40 GMT",
"version": "v2"
}
] |
2020-10-19
|
[
[
"Nguyen",
"Ty",
""
],
[
"Miller",
"Ian D.",
""
],
[
"Cohen",
"Avi",
""
],
[
"Thakur",
"Dinesh",
""
],
[
"Prasad",
"Shashank",
""
],
[
"Taylor",
"Camillo J.",
""
],
[
"Chaudrahi",
"Pratik",
""
],
[
"Kumar",
"Vijay",
""
]
] |
Scalable training data generation is a critical problem in deep learning. We propose PennSyn2Real - a photo-realistic synthetic dataset consisting of more than 100,000 4K images of more than 20 types of micro aerial vehicles (MAVs). The dataset can be used to generate arbitrary numbers of training images for high-level computer vision tasks such as MAV detection and classification. Our data generation framework bootstraps chroma-keying, a mature cinematography technique with a motion tracking system, providing artifact-free and curated annotated images where object orientations and lighting are controlled. This framework is easy to set up and can be applied to a broad range of objects, reducing the gap between synthetic and real-world data. We show that synthetic data generated using this framework can be directly used to train CNN models for common object recognition tasks such as detection and segmentation. We demonstrate competitive performance in comparison with training using only real images. Furthermore, bootstrapping the generated synthetic data in few-shot learning can significantly improve the overall performance, reducing the number of required training data samples to achieve the desired accuracy.
|
1709.08748
|
S. Karen Khatamifard
|
S. Karen Khatamifard and M. Hassan Najafi and Ali Ghoreyshi and Ulya
R. Karpuzcu and David Lilja
|
On Memory System Design for Stochastic Computing
| null | null |
10.1109/LCA.2018.2804926
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Growing uncertainty in design parameters (and therefore, in design
functionality) renders stochastic computing particularly promising, which
represents and processes data as quantized probabilities. However, due to the
difference in data representation, integrating conventional memory (designed
and optimized for non-stochastic computing) in stochastic computing systems
inevitably incurs a significant data conversion overhead. Barely any stochastic
computing proposal to date covers the memory impact. In this paper, as the
first study of its kind to the best of our knowledge, we rethink the memory
system design for stochastic computing. The result is a seamless stochastic
system, StochMem, which features analog memory to trade the energy and area
overhead of data conversion for computation accuracy. In this manner StochMem
can reduce the energy (area) overhead by up to 52.8% (93.7%) at the cost of at
most 0.7% loss in computation accuracy.
|
[
{
"created": "Mon, 25 Sep 2017 23:23:47 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Khatamifard",
"S. Karen",
""
],
[
"Najafi",
"M. Hassan",
""
],
[
"Ghoreyshi",
"Ali",
""
],
[
"Karpuzcu",
"Ulya R.",
""
],
[
"Lilja",
"David",
""
]
] |
Growing uncertainty in design parameters (and therefore, in design functionality) renders stochastic computing particularly promising, which represents and processes data as quantized probabilities. However, due to the difference in data representation, integrating conventional memory (designed and optimized for non-stochastic computing) in stochastic computing systems inevitably incurs a significant data conversion overhead. Barely any stochastic computing proposal to date covers the memory impact. In this paper, as the first study of its kind to the best of our knowledge, we rethink the memory system design for stochastic computing. The result is a seamless stochastic system, StochMem, which features analog memory to trade the energy and area overhead of data conversion for computation accuracy. In this manner StochMem can reduce the energy (area) overhead by up to 52.8% (93.7%) at the cost of at most 0.7% loss in computation accuracy.
|
2403.15715
|
Daijun Ding
|
Daijun Ding, Li Dong, Zhichao Huang, Guangning Xu, Xu Huang, Bo Liu,
Liwen Jing, Bowen Zhang
|
EDDA: A Encoder-Decoder Data Augmentation Framework for Zero-Shot Stance
Detection
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stance detection aims to determine the attitude expressed in text towards a
given target. Zero-shot stance detection (ZSSD) has emerged to classify stances
towards unseen targets during inference. Recent data augmentation techniques
for ZSSD increase transferable knowledge between targets through text or target
augmentation. However, these methods exhibit limitations. Target augmentation
lacks logical connections between generated targets and source text, while text
augmentation relies solely on training data, resulting in insufficient
generalization. To address these issues, we propose an encoder-decoder data
augmentation (EDDA) framework. The encoder leverages large language models and
chain-of-thought prompting to summarize texts into target-specific if-then
rationales, establishing logical relationships. The decoder generates new
samples based on these expressions using a semantic correlation word
replacement strategy to increase syntactic diversity. We also analyze the
generated expressions to develop a rationale-enhanced network that fully
utilizes the augmented data. Experiments on benchmark datasets demonstrate our
approach substantially improves over state-of-the-art ZSSD techniques. The
proposed EDDA framework increases semantic relevance and syntactic variety in
augmented texts while enabling interpretable rationale-based learning.
|
[
{
"created": "Sat, 23 Mar 2024 04:29:29 GMT",
"version": "v1"
}
] |
2024-03-26
|
[
[
"Ding",
"Daijun",
""
],
[
"Dong",
"Li",
""
],
[
"Huang",
"Zhichao",
""
],
[
"Xu",
"Guangning",
""
],
[
"Huang",
"Xu",
""
],
[
"Liu",
"Bo",
""
],
[
"Jing",
"Liwen",
""
],
[
"Zhang",
"Bowen",
""
]
] |
Stance detection aims to determine the attitude expressed in text towards a given target. Zero-shot stance detection (ZSSD) has emerged to classify stances towards unseen targets during inference. Recent data augmentation techniques for ZSSD increase transferable knowledge between targets through text or target augmentation. However, these methods exhibit limitations. Target augmentation lacks logical connections between generated targets and source text, while text augmentation relies solely on training data, resulting in insufficient generalization. To address these issues, we propose an encoder-decoder data augmentation (EDDA) framework. The encoder leverages large language models and chain-of-thought prompting to summarize texts into target-specific if-then rationales, establishing logical relationships. The decoder generates new samples based on these expressions using a semantic correlation word replacement strategy to increase syntactic diversity. We also analyze the generated expressions to develop a rationale-enhanced network that fully utilizes the augmented data. Experiments on benchmark datasets demonstrate our approach substantially improves over state-of-the-art ZSSD techniques. The proposed EDDA framework increases semantic relevance and syntactic variety in augmented texts while enabling interpretable rationale-based learning.
|
2312.13538
|
Rodrigo de Lamare
|
S. Mashdour, A. Schmeink, R. C. de Lamare and J. P. Sales
|
Sequential Multiuser Scheduling and Power Allocation for Cell-Free
Multiple-Antenna Networks
|
7 pages, 2 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Resource allocation is a fundamental task in cell-free (CF) massive
multi-input multi-output (MIMO) systems, which can effectively improve the
network performance. In this paper, we study the downlink of CF MIMO networks
with network clustering and linear precoding, and develop a sequential
multiuser scheduling and power allocation scheme. In particular, we present a
multiuser scheduling algorithm based on greedy techniques and a gradient ascent
{(GA)} power allocation algorithm for sum-rate maximization when imperfect
channel state information (CSI) is considered. Numerical results show the
superiority of the proposed sequential scheduling and power allocation scheme
and algorithms to existing approaches while reducing the computational
complexity and the signaling load.
|
[
{
"created": "Thu, 21 Dec 2023 02:41:15 GMT",
"version": "v1"
}
] |
2023-12-22
|
[
[
"Mashdour",
"S.",
""
],
[
"Schmeink",
"A.",
""
],
[
"de Lamare",
"R. C.",
""
],
[
"Sales",
"J. P.",
""
]
] |
Resource allocation is a fundamental task in cell-free (CF) massive multi-input multi-output (MIMO) systems, which can effectively improve the network performance. In this paper, we study the downlink of CF MIMO networks with network clustering and linear precoding, and develop a sequential multiuser scheduling and power allocation scheme. In particular, we present a multiuser scheduling algorithm based on greedy techniques and a gradient ascent {(GA)} power allocation algorithm for sum-rate maximization when imperfect channel state information (CSI) is considered. Numerical results show the superiority of the proposed sequential scheduling and power allocation scheme and algorithms to existing approaches while reducing the computational complexity and the signaling load.
|
2206.01570
|
Yushan Liu
|
Tong Liu, Yushan Liu, Marcel Hildebrandt, Mitchell Joblin, Hang Li,
Volker Tresp
|
On Calibration of Graph Neural Networks for Node Classification
|
Accepted by IJCNN 2022 (IEEE WCCI 2022)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphs can model real-world, complex systems by representing entities and
their interactions in terms of nodes and edges. To better exploit the graph
structure, graph neural networks have been developed, which learn entity and
edge embeddings for tasks such as node classification and link prediction.
These models achieve good performance with respect to accuracy, but the
confidence scores associated with the predictions might not be calibrated. That
means that the scores might not reflect the ground-truth probabilities of the
predicted events, which would be especially important for safety-critical
applications. Even though graph neural networks are used for a wide range of
tasks, the calibration thereof has not been sufficiently explored yet. We
investigate the calibration of graph neural networks for node classification,
study the effect of existing post-processing calibration methods, and analyze
the influence of model capacity, graph density, and a new loss function on
calibration. Further, we propose a topology-aware calibration method that takes
the neighboring nodes into account and yields improved calibration compared to
baseline methods.
|
[
{
"created": "Fri, 3 Jun 2022 13:48:10 GMT",
"version": "v1"
}
] |
2022-06-06
|
[
[
"Liu",
"Tong",
""
],
[
"Liu",
"Yushan",
""
],
[
"Hildebrandt",
"Marcel",
""
],
[
"Joblin",
"Mitchell",
""
],
[
"Li",
"Hang",
""
],
[
"Tresp",
"Volker",
""
]
] |
Graphs can model real-world, complex systems by representing entities and their interactions in terms of nodes and edges. To better exploit the graph structure, graph neural networks have been developed, which learn entity and edge embeddings for tasks such as node classification and link prediction. These models achieve good performance with respect to accuracy, but the confidence scores associated with the predictions might not be calibrated. That means that the scores might not reflect the ground-truth probabilities of the predicted events, which would be especially important for safety-critical applications. Even though graph neural networks are used for a wide range of tasks, the calibration thereof has not been sufficiently explored yet. We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration. Further, we propose a topology-aware calibration method that takes the neighboring nodes into account and yields improved calibration compared to baseline methods.
|
2207.12995
|
Xingqun Qi
|
Xingqun Qi, Zhuojie Wu, Min Ren, Muyi Sun, Caifeng Shan, Zhenan Sun
|
Exploring Generalizable Distillation for Efficient Medical Image
Segmentation
|
Under Review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient medical image segmentation aims to provide accurate pixel-wise
predictions for medical images with a lightweight implementation framework.
However, lightweight frameworks generally fail to achieve superior performance
and suffer from poor generalization ability on cross-domain tasks. In this
paper, we explore generalizable knowledge distillation for the efficient
segmentation of cross-domain medical images. Considering the domain gaps
between different medical datasets, we propose the Model-Specific Alignment
Networks (MSAN) to obtain the domain-invariant representations. Meanwhile, a
customized Alignment Consistency Training (ACT) strategy is designed to promote
the MSAN training. Considering the domain-invariant representative vectors in
MSAN, we propose two generalizable knowledge distillation schemes for
cross-domain distillation, Dual Contrastive Graph Distillation (DCGD) and
Domain-Invariant Cross Distillation (DICD). Specifically, in DCGD, two types of
implicit contrastive graphs are designed to represent the intra-coupling and
inter-coupling semantic correlations from the perspective of data distribution.
In DICD, the domain-invariant semantic vectors from the two models (i.e.,
teacher and student) are leveraged to cross-reconstruct features by the header
exchange of MSAN, which achieves improvement in the generalization of both the
encoder and decoder in the student model. Furthermore, a metric named Frechet
Semantic Distance (FSD) is tailored to verify the effectiveness of the
regularized domain-invariant features. Extensive experiments conducted on the
Liver and Retinal Vessel Segmentation datasets demonstrate the superiority of
our method, in terms of performance and generalization on lightweight
frameworks.
|
[
{
"created": "Tue, 26 Jul 2022 15:55:36 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Feb 2023 05:11:27 GMT",
"version": "v2"
}
] |
2023-02-21
|
[
[
"Qi",
"Xingqun",
""
],
[
"Wu",
"Zhuojie",
""
],
[
"Ren",
"Min",
""
],
[
"Sun",
"Muyi",
""
],
[
"Shan",
"Caifeng",
""
],
[
"Sun",
"Zhenan",
""
]
] |
Efficient medical image segmentation aims to provide accurate pixel-wise predictions for medical images with a lightweight implementation framework. However, lightweight frameworks generally fail to achieve superior performance and suffer from poor generalization ability on cross-domain tasks. In this paper, we explore generalizable knowledge distillation for the efficient segmentation of cross-domain medical images. Considering the domain gaps between different medical datasets, we propose the Model-Specific Alignment Networks (MSAN) to obtain the domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote the MSAN training. Considering the domain-invariant representative vectors in MSAN, we propose two generalizable knowledge distillation schemes for cross-domain distillation, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). Specifically, in DCGD, two types of implicit contrastive graphs are designed to represent the intra-coupling and inter-coupling semantic correlations from the perspective of data distribution. In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features by the header exchange of MSAN, which achieves improvement in the generalization of both the encoder and decoder in the student model. Furthermore, a metric named Frechet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on the Liver and Retinal Vessel Segmentation datasets demonstrate the superiority of our method, in terms of performance and generalization on lightweight frameworks.
|
1907.10491
|
Zijia Zhong
|
Zijia Zhong and Earl E. Lee
|
Alternative Intersection Designs with Connected and Automated Vehicle
|
6 pages, 6 figures, 2019 IEEE 2nd Connected and Automated Vehicles
Symposium. arXiv admin note: text overlap with arXiv:1811.03074
| null | null | null |
cs.MA eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Alternative intersection designs (AIDs) can improve the performance of an
intersection not only by reducing the number of signal phases but also by
changing the configuration of the conflict points through re-routing traffic.
However, AID studies have rarely been extended to Connected and Automated
Vehicles (CAVs), which are expected to revolutionize our transportation
system. In this study, we investigate the potential benefits of CAVs for two
AIDs: the diverging diamond interchange (DDI) and the restricted crossing
U-turn intersection. The potential enhancements from AIDs, CAVs, and the
combination of both are quantified via microscopic traffic simulation. We
found that CAVs can contribute positively to the performance of an
intersection. However, converting an existing conventional diamond interchange
(CDI) to a diverging one is a more effective measure according to the
simulation results. The DDI improves the throughput of a CDI by 950 vehicles
per hour, a nearly 20% improvement, whereas with full penetration of CAVs the
throughput of a CDI increases by only 300 vehicles per hour. A similar trend
is observed in the average delay per vehicle. Furthermore, we assess the
impact of driver confusion, a concern for deploying AIDs, on traffic flow.
According to the ANOVA test, the negative impacts of driver confusion are
statistically significant.
|
[
{
"created": "Mon, 22 Jul 2019 01:41:28 GMT",
"version": "v1"
}
] |
2019-07-25
|
[
[
"Zhong",
"Zijia",
""
],
[
"Lee",
"Earl E.",
""
]
] |
Alternative intersection designs (AIDs) can improve the performance of an intersection not only by reducing the number of signal phases but also by changing the configuration of the conflict points through re-routing traffic. However, AID studies have rarely been extended to Connected and Automated Vehicles (CAVs), which are expected to revolutionize our transportation system. In this study, we investigate the potential benefits of CAVs for two AIDs: the diverging diamond interchange (DDI) and the restricted crossing U-turn intersection. The potential enhancements from AIDs, CAVs, and the combination of both are quantified via microscopic traffic simulation. We found that CAVs can contribute positively to the performance of an intersection. However, converting an existing conventional diamond interchange (CDI) to a diverging one is a more effective measure according to the simulation results. The DDI improves the throughput of a CDI by 950 vehicles per hour, a nearly 20% improvement, whereas with full penetration of CAVs the throughput of a CDI increases by only 300 vehicles per hour. A similar trend is observed in the average delay per vehicle. Furthermore, we assess the impact of driver confusion, a concern for deploying AIDs, on traffic flow. According to the ANOVA test, the negative impacts of driver confusion are statistically significant.
|
2112.15272
|
Vinh Nguyen Van
|
Nguyen Hoang Quan, Nguyen Thanh Dat, Nguyen Hoang Minh Cong, Nguyen
Van Vinh, Ngo Thi Vinh, Nguyen Phuong Thai, and Tran Hong Viet
|
ViNMT: Neural Machine Translation Toolkit
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present an open-source toolkit for neural machine translation (NMT). The
new toolkit is mainly based on the vanilla Transformer (Vaswani et al., 2017),
along with many other improvements detailed below, in order to create a
self-contained, simple-to-use, consistent and comprehensive framework for
machine translation tasks in various domains. It is tooled to support both
bilingual and multilingual translation tasks, starting from building the model
from the respective corpora, to inferring new predictions, to packaging the
model in a serving-capable JIT format.
|
[
{
"created": "Fri, 31 Dec 2021 02:42:39 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Jan 2022 05:29:58 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Mar 2022 09:28:57 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Mar 2022 07:19:12 GMT",
"version": "v4"
},
{
"created": "Tue, 8 Mar 2022 07:57:44 GMT",
"version": "v5"
}
] |
2022-03-09
|
[
[
"Quan",
"Nguyen Hoang",
""
],
[
"Dat",
"Nguyen Thanh",
""
],
[
"Cong",
"Nguyen Hoang Minh",
""
],
[
"Van Vinh",
"Nguyen",
""
],
[
"Vinh",
"Ngo Thi",
""
],
[
"Thai",
"Nguyen Phuong",
""
],
[
"Viet",
"Tran Hong",
""
]
] |
We present an open-source toolkit for neural machine translation (NMT). The new toolkit is mainly based on the vanilla Transformer (Vaswani et al., 2017), along with many other improvements detailed below, in order to create a self-contained, simple-to-use, consistent and comprehensive framework for machine translation tasks in various domains. It is tooled to support both bilingual and multilingual translation tasks, starting from building the model from the respective corpora, to inferring new predictions, to packaging the model in a serving-capable JIT format.
|
2402.08264
|
Ville Junnila
|
Olivier Hudry, Ville Junnila and Antoine Lobstein
|
On Iiro Honkala's contributions to identifying codes
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A set $C$ of vertices in a graph $G=(V,E)$ is an identifying code if it is
dominating and any two vertices of $V$ are dominated by distinct sets of
codewords. This paper presents a survey of Iiro Honkala's contributions to the
study of identifying codes with respect to several aspects: complexity of
computing an identifying code, combinatorics in binary Hamming spaces, infinite
grids, relationships between identifying codes and usual parameters in graphs,
structural properties of graphs admitting identifying codes, and number of
optimal identifying codes.
|
[
{
"created": "Tue, 13 Feb 2024 07:35:34 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2024 13:02:36 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jul 2024 13:45:00 GMT",
"version": "v3"
}
] |
2024-07-08
|
[
[
"Hudry",
"Olivier",
""
],
[
"Junnila",
"Ville",
""
],
[
"Lobstein",
"Antoine",
""
]
] |
A set $C$ of vertices in a graph $G=(V,E)$ is an identifying code if it is dominating and any two vertices of $V$ are dominated by distinct sets of codewords. This paper presents a survey of Iiro Honkala's contributions to the study of identifying codes with respect to several aspects: complexity of computing an identifying code, combinatorics in binary Hamming spaces, infinite grids, relationships between identifying codes and usual parameters in graphs, structural properties of graphs admitting identifying codes, and number of optimal identifying codes.
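The definition above is concrete enough to check by brute force on small graphs (an illustrative sketch; the example graph is chosen for this note and is not from the survey): a set C is an identifying code iff every closed neighborhood N[v] meets C, and the traces N[v] ∩ C are pairwise distinct.

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    """N[v]: v together with its neighbors."""
    return {v} | adj[v]

def is_identifying_code(adj, code):
    """Every N[v] must meet the code, and all traces N[v] & code must differ."""
    traces = []
    for v in adj:
        t = frozenset(closed_neighborhood(adj, v) & code)
        if not t:
            return False            # v is not dominated by the code
        traces.append(t)
    return len(set(traces)) == len(traces)  # traces pairwise distinct

def min_identifying_code(adj):
    """Smallest identifying code by exhaustive search (None if twins exist)."""
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for cand in combinations(vertices, size):
            if is_identifying_code(adj, set(cand)):
                return set(cand)
    return None

# Path P4: 0 - 1 - 2 - 3 (adjacency as neighbor sets)
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
best = min_identifying_code(p4)
```

On P4 the search finds that no 2-element set works (every candidate leaves two vertices with the same trace), while {0, 1, 2} identifies all four vertices, so the minimum size is 3.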
|
2302.13288
|
Ying Xu
|
Ying Xu, Kiran Raja, Luisa Verdoliva, Marius Pedersen
|
Learning Pairwise Interaction for Generalizable DeepFake Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The fast-paced development of DeepFake generation techniques challenges
detection schemes designed for known DeepFake types. A reliable DeepFake
detection approach must be agnostic to the generation type, which can present
diverse quality and appearance. Limited generalizability across different
generation schemes will restrict the wide-scale deployment of detectors if
they fail to handle unseen attacks in an open-set scenario. We propose a new
approach, Multi-Channel Xception Attention Pairwise Interaction (MCX-API),
that exploits the power of pairwise learning and complementary information
from different color-space representations in a fine-grained manner. We first
validate our idea on a publicly available dataset in an intra-class setting
(closed set) with four different DeepFake schemes. Further, we report all
results using balanced-open-set-classification (BOSC) accuracy in an
inter-class setting (open set) using three public datasets. Our experiments
indicate that the proposed method generalizes better than state-of-the-art
DeepFake detectors. We obtain 98.48% BOSC accuracy on the FF++ dataset and
90.87% BOSC accuracy on the CelebDF dataset, suggesting a promising direction
for the generalization of DeepFake detection. We further utilize t-SNE and
attention maps to interpret and visualize the decision-making process of our
proposed network. https://github.com/xuyingzhongguo/MCX-API
|
[
{
"created": "Sun, 26 Feb 2023 10:39:08 GMT",
"version": "v1"
}
] |
2023-02-28
|
[
[
"Xu",
"Ying",
""
],
[
"Raja",
"Kiran",
""
],
[
"Verdoliva",
"Luisa",
""
],
[
"Pedersen",
"Marius",
""
]
] |
The fast-paced development of DeepFake generation techniques challenges detection schemes designed for known DeepFake types. A reliable DeepFake detection approach must be agnostic to the generation type, which can present diverse quality and appearance. Limited generalizability across different generation schemes will restrict the wide-scale deployment of detectors if they fail to handle unseen attacks in an open-set scenario. We propose a new approach, Multi-Channel Xception Attention Pairwise Interaction (MCX-API), that exploits the power of pairwise learning and complementary information from different color-space representations in a fine-grained manner. We first validate our idea on a publicly available dataset in an intra-class setting (closed set) with four different DeepFake schemes. Further, we report all results using balanced-open-set-classification (BOSC) accuracy in an inter-class setting (open set) using three public datasets. Our experiments indicate that the proposed method generalizes better than state-of-the-art DeepFake detectors. We obtain 98.48% BOSC accuracy on the FF++ dataset and 90.87% BOSC accuracy on the CelebDF dataset, suggesting a promising direction for the generalization of DeepFake detection. We further utilize t-SNE and attention maps to interpret and visualize the decision-making process of our proposed network. https://github.com/xuyingzhongguo/MCX-API
|
2408.00864
|
Elijah Bouma-Sims
|
Elijah Bouma-Sims, Lily Klucinec, Mandy Lanyon, Lorrie Faith Cranor,
Julie Downs
|
Recruiting Teenage Participants for an Online Security Experiment: A
Case Study Using Peachjar
|
To be presented at the 9th Workshop on Inclusive Privacy and Security
(WIPS 2024) at the Twentieth Symposium on Usable Privacy and Security (SOUPS
2024)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The recruitment of teenagers for usable privacy and security research is
challenging, but essential. This case study presents our experience using the
online flier distribution service Peachjar to recruit minor teenagers for an
online security experiment. By distributing fliers to 90 K-12 schools, we
recruited a diverse sample of 55 participants at an estimated cost per
participant of $43.18. We discuss the benefits and drawbacks of Peachjar,
concluding that it can facilitate the recruitment of a geographically diverse
sample of teens for online studies, but it requires careful design to protect
against spam and may be more expensive than other online methods. We conclude
by proposing ways of using Peachjar more effectively.
|
[
{
"created": "Thu, 1 Aug 2024 18:33:06 GMT",
"version": "v1"
}
] |
2024-08-05
|
[
[
"Bouma-Sims",
"Elijah",
""
],
[
"Klucinec",
"Lily",
""
],
[
"Lanyon",
"Mandy",
""
],
[
"Cranor",
"Lorrie Faith",
""
],
[
"Downs",
"Julie",
""
]
] |
The recruitment of teenagers for usable privacy and security research is challenging, but essential. This case study presents our experience using the online flier distribution service Peachjar to recruit minor teenagers for an online security experiment. By distributing fliers to 90 K-12 schools, we recruited a diverse sample of 55 participants at an estimated cost per participant of $43.18. We discuss the benefits and drawbacks of Peachjar, concluding that it can facilitate the recruitment of a geographically diverse sample of teens for online studies, but it requires careful design to protect against spam and may be more expensive than other online methods. We conclude by proposing ways of using Peachjar more effectively.
|
2310.12467
|
Etsuko Ishii
|
Etsuko Ishii, Yan Xu, Bryan Wilie, Ziwei Ji, Holy Lovenia, Willy
Chung, Pascale Fung
|
Contrastive Learning for Inference in Dialogue
|
Accepted to EMNLP2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Inferences, especially those derived from inductive processes, are a crucial
component of conversation, complementing the information implicitly or
explicitly conveyed by a speaker. While recent large language models show
remarkable advances in inference tasks, their performance in inductive
reasoning, where not all information is present in the context, lags far
behind that in deductive reasoning. In this paper, we analyze the behavior of
the models based on the task difficulty defined by the semantic information
gap -- which distinguishes inductive and deductive reasoning (Johnson-Laird,
1988, 1993). Our analysis reveals that the disparity in information between
dialogue contexts and desired inferences poses a significant challenge to the
inductive inference process. To mitigate this information gap, we investigate
a contrastive learning approach by feeding negative samples. Our experiments
suggest that negative samples help models understand what is wrong and improve
their inference generation.
|
[
{
"created": "Thu, 19 Oct 2023 04:49:36 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Nov 2023 04:18:58 GMT",
"version": "v2"
}
] |
2023-11-14
|
[
[
"Ishii",
"Etsuko",
""
],
[
"Xu",
"Yan",
""
],
[
"Wilie",
"Bryan",
""
],
[
"Ji",
"Ziwei",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Chung",
"Willy",
""
],
[
"Fung",
"Pascale",
""
]
] |
Inferences, especially those derived from inductive processes, are a crucial component of conversation, complementing the information implicitly or explicitly conveyed by a speaker. While recent large language models show remarkable advances in inference tasks, their performance in inductive reasoning, where not all information is present in the context, lags far behind that in deductive reasoning. In this paper, we analyze the behavior of the models based on the task difficulty defined by the semantic information gap -- which distinguishes inductive and deductive reasoning (Johnson-Laird, 1988, 1993). Our analysis reveals that the disparity in information between dialogue contexts and desired inferences poses a significant challenge to the inductive inference process. To mitigate this information gap, we investigate a contrastive learning approach by feeding negative samples. Our experiments suggest that negative samples help models understand what is wrong and improve their inference generation.
|
2309.14247
|
Spyridon Mastorakis
|
Sifat Ut Taki, Spyridon Mastorakis
|
Rethinking Internet Communication Through LLMs: How Close Are We?
| null | null | null | null |
cs.NI cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we rethink the way that communication among users over the
Internet, one of the fundamental outcomes of the Internet evolution, takes
place. Instead of users communicating directly over the Internet, we explore an
architecture that enables users to communicate with (query) Large Language
Models (LLMs) that capture the cognition of users on the other end of the
communication channel. We present an architecture to achieve such LLM-based
communication and we perform a reality check to assess how close we are today
to realizing such a communication architecture from a technical point of view.
Finally, we discuss several research challenges and identify interesting
directions for future research.
|
[
{
"created": "Mon, 25 Sep 2023 16:07:07 GMT",
"version": "v1"
}
] |
2023-09-26
|
[
[
"Taki",
"Sifat Ut",
""
],
[
"Mastorakis",
"Spyridon",
""
]
] |
In this paper, we rethink the way that communication among users over the Internet, one of the fundamental outcomes of the Internet evolution, takes place. Instead of users communicating directly over the Internet, we explore an architecture that enables users to communicate with (query) Large Language Models (LLMs) that capture the cognition of users on the other end of the communication channel. We present an architecture to achieve such LLM-based communication and we perform a reality check to assess how close we are today to realizing such a communication architecture from a technical point of view. Finally, we discuss several research challenges and identify interesting directions for future research.
|
1511.04458
|
Xun Xu
|
Xun Xu, Timothy Hospedales, Shaogang Gong
|
Transductive Zero-Shot Action Recognition by Word-Vector Embedding
|
Accepted by IJCV
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of categories for action recognition is growing rapidly and it has
become increasingly hard to label sufficient training data for learning
conventional models for all categories. Instead of collecting ever more data
and labelling them exhaustively for all categories, an attractive alternative
approach is "zero-shot learning" (ZSL). To that end, in this study we construct
a mapping between visual features and a semantic descriptor of each action
category, allowing new categories to be recognised in the absence of any visual
training data. Existing ZSL studies focus primarily on still images, and
attribute-based semantic representations. In this work, we explore word-vectors
as the shared semantic space to embed videos and category labels for ZSL action
recognition. This is a more challenging problem than existing ZSL of still
images and/or attributes, because the mapping between video spacetime features
of actions and the semantic space is more complex and harder to learn for the
purpose of generalising over any cross-category domain shift. To solve this
generalisation problem in ZSL action recognition, we investigate a series of
synergistic strategies to improve upon the standard ZSL pipeline. Most of these
strategies are transductive in nature, which means they require access to the
testing data during the training phase.
|
[
{
"created": "Fri, 13 Nov 2015 21:05:20 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Dec 2016 07:17:03 GMT",
"version": "v2"
}
] |
2016-12-05
|
[
[
"Xu",
"Xun",
""
],
[
"Hospedales",
"Timothy",
""
],
[
"Gong",
"Shaogang",
""
]
] |
The number of categories for action recognition is growing rapidly and it has become increasingly hard to label sufficient training data for learning conventional models for all categories. Instead of collecting ever more data and labelling them exhaustively for all categories, an attractive alternative approach is "zero-shot learning" (ZSL). To that end, in this study we construct a mapping between visual features and a semantic descriptor of each action category, allowing new categories to be recognised in the absence of any visual training data. Existing ZSL studies focus primarily on still images, and attribute-based semantic representations. In this work, we explore word-vectors as the shared semantic space to embed videos and category labels for ZSL action recognition. This is a more challenging problem than existing ZSL of still images and/or attributes, because the mapping between video spacetime features of actions and the semantic space is more complex and harder to learn for the purpose of generalising over any cross-category domain shift. To solve this generalisation problem in ZSL action recognition, we investigate a series of synergistic strategies to improve upon the standard ZSL pipeline. Most of these strategies are transductive in nature, which means they require access to the testing data during the training phase.
|
2103.05368
|
Jin-Man Park
|
Jin-Man Park, Jae-Hyuk Jang, Sahng-Min Yoo, Sun-Kyung Lee, Ue-Hwan
Kim, and Jong-Hwan Kim
|
ChangeSim: Towards End-to-End Online Scene Change Detection in
Industrial Indoor Environments
|
Accepted to IROS 2021
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a challenging dataset, ChangeSim, aimed at online scene change
detection (SCD) and more. The data is collected in photo-realistic simulation
environments with the presence of environmental non-targeted variations, such
as air turbidity and light condition changes, as well as targeted object
changes in industrial indoor environments. By collecting data in simulations,
multi-modal sensor data and precise ground truth labels are obtainable, such
as the RGB image, depth image, semantic segmentation, change segmentation,
camera poses, and 3D reconstructions. While previous online SCD datasets
evaluate
models given well-aligned image pairs, ChangeSim also provides raw unpaired
sequences that present an opportunity to develop an online SCD model in an
end-to-end manner, considering both pairing and detection. Experiments show
that even the latest pair-based SCD models suffer from the bottleneck of the
pairing process, and it gets worse when the environment contains the
non-targeted variations. Our dataset is available at
http://sammica.github.io/ChangeSim/.
|
[
{
"created": "Tue, 9 Mar 2021 11:36:29 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Jul 2021 06:52:15 GMT",
"version": "v2"
}
] |
2021-07-23
|
[
[
"Park",
"Jin-Man",
""
],
[
"Jang",
"Jae-Hyuk",
""
],
[
"Yoo",
"Sahng-Min",
""
],
[
"Lee",
"Sun-Kyung",
""
],
[
"Kim",
"Ue-Hwan",
""
],
[
"Kim",
"Jong-Hwan",
""
]
] |
We present a challenging dataset, ChangeSim, aimed at online scene change detection (SCD) and more. The data is collected in photo-realistic simulation environments with the presence of environmental non-targeted variations, such as air turbidity and light condition changes, as well as targeted object changes in industrial indoor environments. By collecting data in simulations, multi-modal sensor data and precise ground truth labels are obtainable, such as the RGB image, depth image, semantic segmentation, change segmentation, camera poses, and 3D reconstructions. While previous online SCD datasets evaluate models given well-aligned image pairs, ChangeSim also provides raw unpaired sequences that present an opportunity to develop an online SCD model in an end-to-end manner, considering both pairing and detection. Experiments show that even the latest pair-based SCD models suffer from the bottleneck of the pairing process, and it gets worse when the environment contains the non-targeted variations. Our dataset is available at http://sammica.github.io/ChangeSim/.
|
2007.08285
|
Troy Lee
|
Troy Lee and Miklos Santha and Shengyu Zhang
|
Quantum algorithms for graph problems with cut queries
|
Corrected an error in Lemma 1. This led to an extra log factor in the
complexity of the connectivity and spanning forest algorithms
| null | null | null |
cs.DS quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Let $G$ be an $n$-vertex graph with $m$ edges. When asked a subset $S$ of
vertices, a cut query on $G$ returns the number of edges of $G$ that have
exactly one endpoint in $S$. We show that there is a bounded-error quantum
algorithm that determines all connected components of $G$ after making
$O(\log(n)^6)$ many cut queries. In contrast, it follows from results in
communication complexity that any randomized algorithm even just to decide
whether the graph is connected or not must make at least $\Omega(n/\log(n))$
many cut queries. We further show that with $O(\log(n)^8)$ many cut queries a
quantum algorithm can with high probability output a spanning forest for $G$.
En route to proving these results, we design quantum algorithms for learning
a graph using cut queries. We show that a quantum algorithm can learn a graph
with maximum degree $d$ after $O(d \log(n)^2)$ many cut queries, and can learn
a general graph with $O(\sqrt{m} \log(n)^{3/2})$ many cut queries. These two
upper bounds are tight up to the poly-logarithmic factors, and compare to
$\Omega(dn)$ and $\Omega(m/\log(n))$ lower bounds on the number of cut queries
needed by a randomized algorithm for the same problems, respectively.
The key ingredients in our results are the Bernstein-Vazirani algorithm,
approximate counting with "OR queries", and learning sparse vectors from inner
products as in compressed sensing.
|
[
{
"created": "Thu, 16 Jul 2020 12:21:01 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Aug 2020 12:04:07 GMT",
"version": "v2"
}
] |
2020-08-05
|
[
[
"Lee",
"Troy",
""
],
[
"Santha",
"Miklos",
""
],
[
"Zhang",
"Shengyu",
""
]
] |
Let $G$ be an $n$-vertex graph with $m$ edges. When asked a subset $S$ of vertices, a cut query on $G$ returns the number of edges of $G$ that have exactly one endpoint in $S$. We show that there is a bounded-error quantum algorithm that determines all connected components of $G$ after making $O(\log(n)^6)$ many cut queries. In contrast, it follows from results in communication complexity that any randomized algorithm even just to decide whether the graph is connected or not must make at least $\Omega(n/\log(n))$ many cut queries. We further show that with $O(\log(n)^8)$ many cut queries a quantum algorithm can with high probability output a spanning forest for $G$. En route to proving these results, we design quantum algorithms for learning a graph using cut queries. We show that a quantum algorithm can learn a graph with maximum degree $d$ after $O(d \log(n)^2)$ many cut queries, and can learn a general graph with $O(\sqrt{m} \log(n)^{3/2})$ many cut queries. These two upper bounds are tight up to the poly-logarithmic factors, and compare to $\Omega(dn)$ and $\Omega(m/\log(n))$ lower bounds on the number of cut queries needed by a randomized algorithm for the same problems, respectively. The key ingredients in our results are the Bernstein-Vazirani algorithm, approximate counting with "OR queries", and learning sparse vectors from inner products as in compressed sensing.
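The cut-query model above is easy to simulate classically (an illustrative sketch; the quantum speedups themselves are of course not reproduced here, and the example graph is made up). Two handy identities follow directly from the definition: cut({v}) = deg(v), and cut({u}) + cut({v}) - cut({u, v}) = 2 if uv is an edge and 0 otherwise, so any single edge can be tested with three cut queries.

```python
def cut_query(edges, S):
    """Number of edges with exactly one endpoint in S."""
    S = set(S)
    return sum((u in S) != (v in S) for u, v in edges)

def has_edge(edges, u, v):
    """Test membership of edge uv using three cut queries."""
    d = cut_query(edges, {u}) + cut_query(edges, {v}) - cut_query(edges, {u, v})
    # Edges leaving {u, v} are counted once in cut({u, v}); the edge uv itself
    # contributes to cut({u}) and cut({v}) but not to cut({u, v}).
    return d == 2

# Triangle 0-1-2 with a pendant vertex 3 attached to 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
```

Testing every pair this way costs O(n^2) queries; the point of the paper's algorithms is that quantum access to such an oracle learns the whole graph far faster.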
|
1805.10638
|
Bailin Deng
|
Juyong Zhang, Yuxin Yao, Yue Peng, Hao Yu, Bailin Deng
|
Fast K-Means Clustering with Anderson Acceleration
| null | null | null | null |
cs.LG cs.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel method to accelerate Lloyd's algorithm for K-Means
clustering. Unlike previous acceleration approaches that reduce the
computational cost per iteration or improve initialization, our approach
focuses on
reducing the number of iterations required for convergence. This is achieved by
treating the assignment step and the update step of Lloyd's algorithm as a
fixed-point iteration, and applying Anderson acceleration, a well-established
technique for accelerating fixed-point solvers. Classical Anderson acceleration
utilizes m previous iterates to find an accelerated iterate, and its
performance on K-Means clustering can be sensitive to the choice of m and the
distribution of samples. We propose a new strategy to dynamically adjust the
value of m, which achieves robust and consistent speedups across different
problem instances. Our method complements existing acceleration techniques, and
can be combined with them to achieve state-of-the-art performance. We perform
extensive experiments to evaluate the performance of the proposed method, where
it outperforms other algorithms in 106 out of 120 test cases, and the mean
decrease ratio of computational time is more than 33%.
|
[
{
"created": "Sun, 27 May 2018 15:17:33 GMT",
"version": "v1"
}
] |
2018-05-29
|
[
[
"Zhang",
"Juyong",
""
],
[
"Yao",
"Yuxin",
""
],
[
"Peng",
"Yue",
""
],
[
"Yu",
"Hao",
""
],
[
"Deng",
"Bailin",
""
]
] |
We propose a novel method to accelerate Lloyd's algorithm for K-Means clustering. Unlike previous acceleration approaches that reduce the computational cost per iteration or improve initialization, our approach focuses on reducing the number of iterations required for convergence. This is achieved by treating the assignment step and the update step of Lloyd's algorithm as a fixed-point iteration, and applying Anderson acceleration, a well-established technique for accelerating fixed-point solvers. Classical Anderson acceleration utilizes m previous iterates to find an accelerated iterate, and its performance on K-Means clustering can be sensitive to the choice of m and the distribution of samples. We propose a new strategy to dynamically adjust the value of m, which achieves robust and consistent speedups across different problem instances. Our method complements existing acceleration techniques, and can be combined with them to achieve state-of-the-art performance. We perform extensive experiments to evaluate the performance of the proposed method, where it outperforms other algorithms in 106 out of 120 test cases, and the mean decrease ratio of computational time is more than 33%.
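The fixed-point view described above can be sketched in a few lines (pure Python on 1-D toy data; Anderson acceleration itself, which extrapolates from the m previous iterates of this map, is omitted for brevity): one Lloyd step maps the current centroids to the means of their assigned points, and K-Means simply iterates this map until it reaches a fixed point.

```python
def lloyd_step(points, centroids):
    """One fixed-point map F: assign each point, then recompute the means."""
    clusters = [[] for _ in centroids]
    for p in points:
        nearest = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
        clusters[nearest].append(p)
    # Keep an empty cluster's centroid unchanged.
    return [sum(c) / len(c) if c else centroids[j]
            for j, c in enumerate(clusters)]

def kmeans(points, centroids, tol=1e-12, max_iter=100):
    """Iterate F until the centroids stop moving (a fixed point of F)."""
    for _ in range(max_iter):
        new = lloyd_step(points, centroids)
        if max(abs(a - b) for a, b in zip(new, centroids)) < tol:
            return new
        centroids = new
    return centroids

# Two well-separated 1-D clusters; centroids converge to the cluster means.
pts = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]
cent = kmeans(pts, [0.0, 5.0])
```

Anderson acceleration would replace the plain update `centroids = new` with an extrapolation built from the residuals of the last m applications of `lloyd_step`, which is exactly where the paper's adaptive choice of m comes in.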
|
2403.17852
|
Shuyi Chen
|
Shuyi Chen, Shixiang Zhu
|
Counterfactual Fairness through Transforming Data Orthogonal to Bias
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning models have shown exceptional prowess in solving complex
issues across various domains. However, these models can sometimes exhibit
biased decision-making, resulting in unequal treatment of different groups.
Despite substantial research on counterfactual fairness, methods to reduce the
impact of multivariate and continuous sensitive variables on decision-making
outcomes are still underdeveloped. We propose a novel data pre-processing
algorithm, Orthogonal to Bias (OB), which is designed to eliminate the
influence of a group of continuous sensitive variables, thus promoting
counterfactual fairness in machine learning applications. Our approach, based
on the assumption of a jointly normal distribution within a structural causal
model (SCM), demonstrates that counterfactual fairness can be achieved by
ensuring the data is orthogonal to the observed sensitive variables. The OB
algorithm is model-agnostic, making it applicable to a wide range of machine
learning models and tasks. Additionally, it includes a sparse variant to
improve numerical stability through regularization. Empirical evaluations on
both simulated and real-world datasets, encompassing settings with both
discrete and continuous sensitive variables, show that our methodology
effectively promotes fairer outcomes without compromising accuracy.
|
[
{
"created": "Tue, 26 Mar 2024 16:40:08 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2024 01:51:00 GMT",
"version": "v2"
}
] |
2024-07-02
|
[
[
"Chen",
"Shuyi",
""
],
[
"Zhu",
"Shixiang",
""
]
] |
Machine learning models have shown exceptional prowess in solving complex issues across various domains. However, these models can sometimes exhibit biased decision-making, resulting in unequal treatment of different groups. Despite substantial research on counterfactual fairness, methods to reduce the impact of multivariate and continuous sensitive variables on decision-making outcomes are still underdeveloped. We propose a novel data pre-processing algorithm, Orthogonal to Bias (OB), which is designed to eliminate the influence of a group of continuous sensitive variables, thus promoting counterfactual fairness in machine learning applications. Our approach, based on the assumption of a jointly normal distribution within a structural causal model (SCM), demonstrates that counterfactual fairness can be achieved by ensuring the data is orthogonal to the observed sensitive variables. The OB algorithm is model-agnostic, making it applicable to a wide range of machine learning models and tasks. Additionally, it includes a sparse variant to improve numerical stability through regularization. Empirical evaluations on both simulated and real-world datasets, encompassing settings with both discrete and continuous sensitive variables, show that our methodology effectively promotes fairer outcomes without compromising accuracy.
|
1606.02393
|
Paul Hongsuck Seo
|
Paul Hongsuck Seo, Zhe Lin, Scott Cohen, Xiaohui Shen, Bohyung Han
|
Progressive Attention Networks for Visual Attribute Prediction
|
BMVC 2018 accepted paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel attention model that can accurately attend to target
objects of various scales and shapes in images. The model is trained to
gradually suppress irrelevant regions in an input image via a progressive
attentive process over multiple layers of a convolutional neural network. The
attentive process in each layer determines whether to pass or block features at
certain spatial locations for use in the subsequent layers. The proposed
progressive attention mechanism works well especially when combined with hard
attention. We further employ local contexts to incorporate neighborhood
features of each location and estimate a better attention probability map. The
experiments on synthetic and real datasets show that the proposed attention
networks outperform traditional attention methods in visual attribute
prediction tasks.
|
[
{
"created": "Wed, 8 Jun 2016 04:27:52 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Dec 2016 08:48:44 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Dec 2016 05:39:56 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Mar 2017 14:07:07 GMT",
"version": "v4"
},
{
"created": "Mon, 6 Aug 2018 22:03:19 GMT",
"version": "v5"
}
] |
2018-08-08
|
[
[
"Seo",
"Paul Hongsuck",
""
],
[
"Lin",
"Zhe",
""
],
[
"Cohen",
"Scott",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Han",
"Bohyung",
""
]
] |
We propose a novel attention model that can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or block features at certain spatial locations for use in the subsequent layers. The proposed progressive attention mechanism works well especially when combined with hard attention. We further employ local contexts to incorporate neighborhood features of each location and estimate a better attention probability map. The experiments on synthetic and real datasets show that the proposed attention networks outperform traditional attention methods in visual attribute prediction tasks.
|
2111.05991
|
Sachithra Lokuge
|
Ali Alruthaya, Thanh-Thuy Nguyen and Sachithra Lokuge
|
The Application of Digital Technology and the Learning Characteristics
of Generation Z in Higher Education
| null | null | null | null |
cs.CY cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Generation Z (Gen Z), or the digital natives, have never experienced life
without the internet. In addition, the advancement of digital technologies
such as social media, smart mobile technologies, cloud computing, and the
Internet-of-things has transformed how individuals perform their day-to-day
activities. Especially for Gen Z, the use of digital technology has become an
essential part of their daily routine, thus challenging the norm. As
such, Gen Z displays unique learning characteristics which are different from
previous generations. This change opens new avenues for exploring the impact of
digital technology on the learning characteristics of Gen Z and possible
applications to the higher education environment. By conducting a literature
review of 80 studies, this paper presents a comprehensive framework for
understanding the influence of digital technologies on the learning
characteristics of Gen Z in higher education.
|
[
{
"created": "Wed, 10 Nov 2021 23:43:49 GMT",
"version": "v1"
}
] |
2021-11-12
|
[
[
"Alruthaya",
"Ali",
""
],
[
"Nguyen",
"Thanh-Thuy",
""
],
[
"Lokuge",
"Sachithra",
""
]
] |
Generation Z (Gen Z), or the digital natives, have never experienced life without the internet. In addition, the advancement of digital technologies such as social media, smart mobile technologies, cloud computing, and the Internet-of-things has transformed how individuals perform their day-to-day activities. Especially for Gen Z, the use of digital technology has become an essential part of their daily routine, thus challenging the norm. As such, Gen Z displays unique learning characteristics which are different from previous generations. This change opens new avenues for exploring the impact of digital technology on the learning characteristics of Gen Z and possible applications to the higher education environment. By conducting a literature review of 80 studies, this paper presents a comprehensive framework for understanding the influence of digital technologies on the learning characteristics of Gen Z in higher education.
|
2106.15664
|
Ariel Sapir
|
Amir Sapir, Ariel Sapir
|
Is 2NF a Stable Normal Form?
|
10 pages, 1 figure
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Traditionally, it was accepted that a relational database can be normalized
step-by-step, from a set of un-normalized tables to tables in $1NF$, then to
$2NF$, then to $3NF$, then (possibly) to $BCNF$. The rule applied to a table in
$1NF$ in order to transform it to a set of tables in $2NF$ seems to be too
straightforward to pose any difficulty.
However, we show that, depending on the set of functional dependencies, it is
impossible to reach $2NF$ and stop there; one must, in these cases, perform the
normalization from $1NF$ to $3NF$ as an indecomposable move. The minimal setup
to exhibit the phenomenon requires a single composite key and two partially
overlapping chains of transitive dependencies.
|
[
{
"created": "Tue, 29 Jun 2021 18:14:26 GMT",
"version": "v1"
}
] |
2021-07-01
|
[
[
"Sapir",
"Amir",
""
],
[
"Sapir",
"Ariel",
""
]
] |
Traditionally, it was accepted that a relational database can be normalized step-by-step, from a set of un-normalized tables to tables in $1NF$, then to $2NF$, then to $3NF$, then (possibly) to $BCNF$. The rule applied to a table in $1NF$ in order to transform it to a set of tables in $2NF$ seems to be too straightforward to pose any difficulty. However, we show that, depending on the set of functional dependencies, it is impossible to reach $2NF$ and stop there; one must, in these cases, perform the normalization from $1NF$ to $3NF$ as an indecomposable move. The minimal setup to exhibit the phenomenon requires a single composite key and two partially overlapping chains of transitive dependencies.
|
2112.07938
|
Francesc Wilhelmi
|
Francesc Wilhelmi, Lorenza Giupponi, Paolo Dini
|
Analysis and Evaluation of Synchronous and Asynchronous FLchain
| null | null | null | null |
cs.LG cs.DC cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by the heterogeneous nature of devices participating in large-scale
Federated Learning (FL) optimization, we focus on an asynchronous server-less
FL solution empowered by blockchain technology. In contrast to mostly adopted
FL approaches, which assume synchronous operation, we advocate an asynchronous
method whereby model aggregation is done as clients submit their local updates.
The asynchronous setting fits well with the federated optimization idea in
practical large-scale settings with heterogeneous clients. Thus, it potentially
leads to higher efficiency in terms of communication overhead and idle periods.
To evaluate the learning completion delay of BC-enabled FL, we provide an
analytical model based on batch service queue theory. Furthermore, we provide
simulation results to assess the performance of both synchronous and
asynchronous mechanisms. Important aspects involved in the BC-enabled FL
optimization, such as the network size, link capacity, or user requirements,
are put together and analyzed. As our results show, the synchronous setting
leads to higher prediction accuracy than the asynchronous case. Nevertheless,
asynchronous federated optimization provides much lower latency in many cases,
thus becoming an appealing solution for FL when dealing with large datasets,
tough timing constraints (e.g., near-real-time applications), or highly varying
training data.
|
[
{
"created": "Wed, 15 Dec 2021 07:41:23 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jul 2022 17:56:13 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Sep 2022 08:30:11 GMT",
"version": "v3"
}
] |
2022-09-07
|
[
[
"Wilhelmi",
"Francesc",
""
],
[
"Giupponi",
"Lorenza",
""
],
[
"Dini",
"Paolo",
""
]
] |
Motivated by the heterogeneous nature of devices participating in large-scale Federated Learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by blockchain technology. In contrast to mostly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in the BC-enabled FL optimization, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing solution for FL when dealing with large datasets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.
|
1812.08125
|
Jimmy Ren
|
Yan Chen, Jimmy Ren, Xuanye Cheng, Keyuan Qian, Jinwei Gu
|
Very Power Efficient Neural Time-of-Flight
|
preprint
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time-of-Flight (ToF) cameras require active illumination to obtain depth
information; thus, the power of illumination directly affects the performance of
ToF cameras. Traditional ToF imaging algorithms are very sensitive to
illumination, and the depth accuracy degenerates rapidly with its power.
Therefore, the design of a power efficient ToF camera always creates a painful
dilemma for the illumination and the performance trade-off. In this paper, we
show that despite the weak signals in many areas under extreme short exposure
setting, these signals as a whole can be well utilized through a learning
process which directly translates the weak and noisy ToF camera raw to depth
map. This creates an opportunity to tackle the aforementioned dilemma and make
a very power efficient ToF camera possible. To enable the learning, we collect
a comprehensive dataset under a variety of scenes and photographic conditions
by a specialized ToF camera. Experiments show that our method is able to
robustly process ToF camera raw with the exposure time of one order of
magnitude shorter than that used in conventional ToF cameras. In addition to
evaluating our approach both quantitatively and qualitatively, we also discuss
its implication to designing the next generation power efficient ToF cameras.
We will make our dataset and code publicly available.
|
[
{
"created": "Wed, 19 Dec 2018 18:08:48 GMT",
"version": "v1"
}
] |
2018-12-20
|
[
[
"Chen",
"Yan",
""
],
[
"Ren",
"Jimmy",
""
],
[
"Cheng",
"Xuanye",
""
],
[
"Qian",
"Keyuan",
""
],
[
"Gu",
"Jinwei",
""
]
] |
Time-of-Flight (ToF) cameras require active illumination to obtain depth information; thus, the power of illumination directly affects the performance of ToF cameras. Traditional ToF imaging algorithms are very sensitive to illumination, and the depth accuracy degenerates rapidly with its power. Therefore, the design of a power efficient ToF camera always creates a painful dilemma for the illumination and the performance trade-off. In this paper, we show that despite the weak signals in many areas under extreme short exposure setting, these signals as a whole can be well utilized through a learning process which directly translates the weak and noisy ToF camera raw to depth map. This creates an opportunity to tackle the aforementioned dilemma and make a very power efficient ToF camera possible. To enable the learning, we collect a comprehensive dataset under a variety of scenes and photographic conditions by a specialized ToF camera. Experiments show that our method is able to robustly process ToF camera raw with the exposure time of one order of magnitude shorter than that used in conventional ToF cameras. In addition to evaluating our approach both quantitatively and qualitatively, we also discuss its implication to designing the next generation power efficient ToF cameras. We will make our dataset and code publicly available.
|
2108.07049
|
Vladislav Sovrasov
|
Kirill Prokofiev and Vladislav Sovrasov
|
Towards Efficient and Data Agnostic Image Classification Training
Pipeline for Embedded Systems
|
Submitted to ICIAP 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays deep learning-based methods have achieved a remarkable progress at
the image classification task among a wide range of commonly used datasets
(ImageNet, CIFAR, SVHN, Caltech 101, SUN397, etc.). SOTA performance on each of
the mentioned datasets is obtained by careful tuning of the model architecture
and training tricks according to the properties of the target data. Although
this approach allows setting academic records, it is unrealistic that an
average data scientist would have enough resources to build a sophisticated
training pipeline for every image classification task he meets in practice.
This work focuses on reviewing the latest augmentation and regularization
methods for image classification and exploring ways to automatically choose
some of the most important hyperparameters: total number of epochs, initial
learning rate value and its schedule. Having a training procedure equipped
with a lightweight modern CNN architecture (like MobileNetV3 or EfficientNet),
a sufficient level of regularization and a data-adaptive learning rate schedule,
we can achieve a reasonable performance on a variety of downstream image
classification tasks without manual tuning of parameters to each particular
task. Resulting models are computationally efficient and can be deployed to CPU
using the OpenVINO toolkit. Source code is available as a part of the OpenVINO
Training Extensions (https://github.com/openvinotoolkit/training_extensions).
|
[
{
"created": "Mon, 16 Aug 2021 12:38:05 GMT",
"version": "v1"
}
] |
2021-08-17
|
[
[
"Prokofiev",
"Kirill",
""
],
[
"Sovrasov",
"Vladislav",
""
]
] |
Nowadays deep learning-based methods have achieved a remarkable progress at the image classification task among a wide range of commonly used datasets (ImageNet, CIFAR, SVHN, Caltech 101, SUN397, etc.). SOTA performance on each of the mentioned datasets is obtained by careful tuning of the model architecture and training tricks according to the properties of the target data. Although this approach allows setting academic records, it is unrealistic that an average data scientist would have enough resources to build a sophisticated training pipeline for every image classification task they meet in practice. This work focuses on reviewing the latest augmentation and regularization methods for image classification and exploring ways to automatically choose some of the most important hyperparameters: total number of epochs, initial learning rate value and its schedule. Having a training procedure equipped with a lightweight modern CNN architecture (like MobileNetV3 or EfficientNet), a sufficient level of regularization and a data-adaptive learning rate schedule, we can achieve a reasonable performance on a variety of downstream image classification tasks without manual tuning of parameters to each particular task. Resulting models are computationally efficient and can be deployed to CPU using the OpenVINO toolkit. Source code is available as a part of the OpenVINO Training Extensions (https://github.com/openvinotoolkit/training_extensions).
|
1607.04805
|
Paris Perdikaris
|
Maziar Raissi, Paris Perdikaris, George Em. Karniadakis
|
Inferring solutions of differential equations using noisy multi-fidelity
data
|
19 pages, 3 figures
| null |
10.1016/j.jcp.2017.01.060
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For more than two centuries, solutions of differential equations have been
obtained either analytically or numerically based on typically well-behaved
forcing and boundary conditions for well-posed problems. We are changing this
paradigm in a fundamental way by establishing an interface between
probabilistic machine learning and differential equations. We develop
data-driven algorithms for general linear equations using Gaussian process
priors tailored to the corresponding integro-differential operators. The only
observables are scarce noisy multi-fidelity data for the forcing and solution
that are not required to reside on the domain boundary. The resulting
predictive posterior distributions quantify uncertainty and naturally lead to
adaptive solution refinement via active learning. This general framework
circumvents the tyranny of numerical discretization as well as the consistency
and stability issues of time-integration, and is scalable to high-dimensions.
|
[
{
"created": "Sat, 16 Jul 2016 22:12:26 GMT",
"version": "v1"
}
] |
2017-03-08
|
[
[
"Raissi",
"Maziar",
""
],
[
"Perdikaris",
"Paris",
""
],
[
"Karniadakis",
"George Em.",
""
]
] |
For more than two centuries, solutions of differential equations have been obtained either analytically or numerically based on typically well-behaved forcing and boundary conditions for well-posed problems. We are changing this paradigm in a fundamental way by establishing an interface between probabilistic machine learning and differential equations. We develop data-driven algorithms for general linear equations using Gaussian process priors tailored to the corresponding integro-differential operators. The only observables are scarce noisy multi-fidelity data for the forcing and solution that are not required to reside on the domain boundary. The resulting predictive posterior distributions quantify uncertainty and naturally lead to adaptive solution refinement via active learning. This general framework circumvents the tyranny of numerical discretization as well as the consistency and stability issues of time-integration, and is scalable to high-dimensions.
|
1910.00292
|
Florian Schmidt
|
Florian Schmidt
|
Generalization in Generation: A closer look at Exposure Bias
|
wngt2019 camera ready
| null | null | null |
cs.LG cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exposure bias refers to the train-test discrepancy that seemingly arises when
an autoregressive generative model uses only ground-truth contexts at training
time but generated ones at test time. We separate the contributions of the
model and the learning framework to clarify the debate on consequences and
review proposed counter-measures. In this light, we argue that generalization
is the underlying property to address and propose unconditional generation as
its fundamental benchmark. Finally, we combine latent variable modeling with a
recent formulation of exploration in reinforcement learning to obtain a
rigorous handling of true and generated contexts. Results on language modeling
and variational sentence auto-encoding confirm the model's generalization
capability.
|
[
{
"created": "Tue, 1 Oct 2019 10:28:32 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Nov 2019 06:55:36 GMT",
"version": "v2"
}
] |
2019-11-11
|
[
[
"Schmidt",
"Florian",
""
]
] |
Exposure bias refers to the train-test discrepancy that seemingly arises when an autoregressive generative model uses only ground-truth contexts at training time but generated ones at test time. We separate the contributions of the model and the learning framework to clarify the debate on consequences and review proposed counter-measures. In this light, we argue that generalization is the underlying property to address and propose unconditional generation as its fundamental benchmark. Finally, we combine latent variable modeling with a recent formulation of exploration in reinforcement learning to obtain a rigorous handling of true and generated contexts. Results on language modeling and variational sentence auto-encoding confirm the model's generalization capability.
|
1903.06440
|
Agata Barci\'s
|
Agata Barci\'s, Micha{\l} Barci\'s, Christian Bettstetter
|
Robots that Sync and Swarm: A Proof of Concept in ROS 2
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A unified mathematical model for synchronisation and swarming has recently
been proposed. Each system entity, called a "swarmalator", coordinates its
internal phase and location with the other entities in a way that these two
attributes are mutually coupled. This paper realises and studies, for the first
time, the concept of swarmalators in a technical system. We adapt and extend
the original model for its use with mobile robots and implement it in the Robot
Operating System 2 (ROS 2). Simulations and experiments with small robots
demonstrate the feasibility of the model and show its potential to be applied
to real-world systems. All types of space-time patterns achieved in theory can
be reproduced in practice. Applications can be found in monitoring,
exploration, entertainment and art, among other domains.
|
[
{
"created": "Fri, 15 Mar 2019 10:16:50 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Apr 2019 14:09:09 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Aug 2019 21:42:42 GMT",
"version": "v3"
},
{
"created": "Thu, 12 Sep 2019 10:44:04 GMT",
"version": "v4"
}
] |
2019-09-13
|
[
[
"Barciś",
"Agata",
""
],
[
"Barciś",
"Michał",
""
],
[
"Bettstetter",
"Christian",
""
]
] |
A unified mathematical model for synchronisation and swarming has recently been proposed. Each system entity, called a "swarmalator", coordinates its internal phase and location with the other entities in a way that these two attributes are mutually coupled. This paper realises and studies, for the first time, the concept of swarmalators in a technical system. We adapt and extend the original model for its use with mobile robots and implement it in the Robot Operating System 2 (ROS 2). Simulations and experiments with small robots demonstrate the feasibility of the model and show its potential to be applied to real-world systems. All types of space-time patterns achieved in theory can be reproduced in practice. Applications can be found in monitoring, exploration, entertainment and art, among other domains.
|
cs/0409006
|
Dumitru Vulcanov
|
Dumitru N. Vulcanov, Valentina D. Vulcanov (The West University of
Timisoara, Romania)
|
Maple+GrTensorII libraries for cosmology
|
LaTeX LLNCS style, 8 pages, accepted for SYNASC 2004 - 6th
International Symposium on Symbolic and Numeric Algorithms for Scientific
Computing, Timisoara, Romania, September 26-30 2004
| null | null | null |
cs.SC gr-qc
| null |
The article mainly presents some results in using the MAPLE platform for
computer algebra and the GrTensorII package for calculations in theoretical and
numerical cosmology.
|
[
{
"created": "Sat, 4 Sep 2004 12:52:22 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Vulcanov",
"Dumitru N.",
"",
"The West University of\n Timisoara, Romania"
],
[
"Vulcanov",
"Valentina D.",
"",
"The West University of\n Timisoara, Romania"
]
] |
The article mainly presents some results in using the MAPLE platform for computer algebra and the GrTensorII package for calculations in theoretical and numerical cosmology.
|
2209.07795
|
Astrid Orcesi
|
Adrien Maglo, Astrid Orcesi and Quoc Cuong Pham
|
KaliCalib: A Framework for Basketball Court Registration
|
Accepted at ACM MMSports 2022 (5th International ACM Workshop on
Multimedia Content Analysis in Sports)
| null |
10.1145/3552437.3555701
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tracking the players and the ball in team sports is key to analyse the
performance or to enhance the game watching experience with augmented reality.
When the only sources for this data are broadcast videos, sports-field
registration systems are required to estimate the homography and re-project the
ball or the players from the image space to the field space. This paper
describes a new basketball court registration framework in the context of the
MMSports 2022 camera calibration challenge. The method is based on the
estimation by an encoder-decoder network of the positions of keypoints sampled
with perspective-aware constraints. The regression of the basket positions and
heavy data augmentation techniques make the model robust to different arenas.
Ablation studies show the positive effects of our contributions on the
challenge test set. Our method divides the mean squared error by 4.7 compared
to the challenge baseline.
|
[
{
"created": "Fri, 16 Sep 2022 08:52:29 GMT",
"version": "v1"
}
] |
2022-09-19
|
[
[
"Maglo",
"Adrien",
""
],
[
"Orcesi",
"Astrid",
""
],
[
"Pham",
"Quoc Cuong",
""
]
] |
Tracking the players and the ball in team sports is key to analyse the performance or to enhance the game watching experience with augmented reality. When the only sources for this data are broadcast videos, sports-field registration systems are required to estimate the homography and re-project the ball or the players from the image space to the field space. This paper describes a new basketball court registration framework in the context of the MMSports 2022 camera calibration challenge. The method is based on the estimation by an encoder-decoder network of the positions of keypoints sampled with perspective-aware constraints. The regression of the basket positions and heavy data augmentation techniques make the model robust to different arenas. Ablation studies show the positive effects of our contributions on the challenge test set. Our method divides the mean squared error by 4.7 compared to the challenge baseline.
|
1604.06764
|
Jan Otop
|
Krishnendu Chatterjee, Thomas A. Henzinger and Jan Otop
|
Quantitative Automata under Probabilistic Semantics
| null |
Logical Methods in Computer Science, Volume 15, Issue 3 (August
13, 2019) lmcs:4512
|
10.23638/LMCS-15(3:16)2019
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Automata with monitor counters, where the transitions do not depend on
counter values, and nested weighted automata are two expressive
automata-theoretic frameworks for quantitative properties. For a well-studied
and wide class of quantitative functions, we establish that automata with
monitor counters and nested weighted automata are equivalent. We study for the
first time such quantitative automata under probabilistic semantics. We show
that several problems that are undecidable for the classical questions of
emptiness and universality become decidable under the probabilistic semantics.
We present a complete picture of decidability for such automata, and even an
almost-complete picture of computational complexity, for the probabilistic
questions we consider.
|
[
{
"created": "Fri, 22 Apr 2016 18:11:36 GMT",
"version": "v1"
},
{
"created": "Mon, 14 May 2018 10:23:43 GMT",
"version": "v2"
},
{
"created": "Thu, 9 May 2019 14:31:55 GMT",
"version": "v3"
},
{
"created": "Sat, 10 Aug 2019 15:12:50 GMT",
"version": "v4"
}
] |
2023-06-22
|
[
[
"Chatterjee",
"Krishnendu",
""
],
[
"Henzinger",
"Thomas A.",
""
],
[
"Otop",
"Jan",
""
]
] |
Automata with monitor counters, where the transitions do not depend on counter values, and nested weighted automata are two expressive automata-theoretic frameworks for quantitative properties. For a well-studied and wide class of quantitative functions, we establish that automata with monitor counters and nested weighted automata are equivalent. We study for the first time such quantitative automata under probabilistic semantics. We show that several problems that are undecidable for the classical questions of emptiness and universality become decidable under the probabilistic semantics. We present a complete picture of decidability for such automata, and even an almost-complete picture of computational complexity, for the probabilistic questions we consider.
|
2005.02858
|
H\'elio M. de Oliveira
|
Ernande F. Melo and H. M. de Oliveira
|
An Overview of Self-Similar Traffic: Its Implications in the Network
Design
|
9 pages, 16 figures
|
Revista de Tecnologia da Informação e Comunicação, v.
9, n. 1, p. 38-46, May 2020
| null |
ISSN 2237-5104
|
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The knowledge about the true nature of the traffic in computer networking is
a key requirement in the design of such networks. The phenomenon of
self-similarity is a characteristic of the traffic of current client/server
packet networks in LAN/WAN environments dominated by network technologies such
as Ethernet and the TCP/IP protocol stack. The development of network traffic
simulators, which take into account this attribute, is necessary for a more
realistic description of the traffic on these networks and their use in the
design of resources (contention elements) and protocols of flow control and
network congestion. In this scenario it is recommended not to adopt standard
traffic models of the Poisson type.
|
[
{
"created": "Wed, 6 May 2020 14:38:27 GMT",
"version": "v1"
}
] |
2020-05-22
|
[
[
"Melo",
"Ernande F.",
""
],
[
"de Oliveira",
"H. M.",
""
]
] |
The knowledge about the true nature of the traffic in computer networking is a key requirement in the design of such networks. The phenomenon of self-similarity is a characteristic of the traffic of current client/server packet networks in LAN/WAN environments dominated by network technologies such as Ethernet and the TCP/IP protocol stack. The development of network traffic simulators, which take into account this attribute, is necessary for a more realistic description of the traffic on these networks and their use in the design of resources (contention elements) and protocols of flow control and network congestion. In this scenario it is recommended not to adopt standard traffic models of the Poisson type.
|
2311.07884
|
Yusen Zhang
|
Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo
Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen
McKeown, Rui Zhang
|
Fair Abstractive Summarization of Diverse Perspectives
|
NAACL 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
People from different social and demographic groups express diverse
perspectives and conflicting opinions on a broad set of topics such as product
reviews, healthcare, law, and politics. A fair summary should provide a
comprehensive coverage of diverse perspectives without underrepresenting
certain groups. However, current work in summarization metrics and Large
Language Models (LLMs) evaluation has not explored fair abstractive
summarization. In this paper, we systematically investigate fair abstractive
summarization for user-generated data. We first formally define fairness in
abstractive summarization as not underrepresenting perspectives of any groups
of people, and we propose four reference-free automatic metrics by measuring
the differences between target and source perspectives. We evaluate nine LLMs,
including three GPT models, four LLaMA models, PaLM 2, and Claude, on six
datasets collected from social media, online reviews, and recorded transcripts.
Experiments show that both the model-generated and the human-written reference
summaries suffer from low fairness. We conduct a comprehensive analysis of the
common factors influencing fairness and propose three simple but effective
methods to alleviate unfair summarization. Our dataset and code are available
at https://github.com/psunlpgroup/FairSumm.
|
[
{
"created": "Tue, 14 Nov 2023 03:38:55 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Mar 2024 03:54:06 GMT",
"version": "v2"
}
] |
2024-04-02
|
[
[
"Zhang",
"Yusen",
""
],
[
"Zhang",
"Nan",
""
],
[
"Liu",
"Yixin",
""
],
[
"Fabbri",
"Alexander",
""
],
[
"Liu",
"Junru",
""
],
[
"Kamoi",
"Ryo",
""
],
[
"Lu",
"Xiaoxin",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Zhao",
"Jieyu",
""
],
[
"Radev",
"Dragomir",
""
],
[
"McKeown",
"Kathleen",
""
],
[
"Zhang",
"Rui",
""
]
] |
People from different social and demographic groups express diverse perspectives and conflicting opinions on a broad set of topics such as product reviews, healthcare, law, and politics. A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups. However, current work in summarization metrics and Large Language Models (LLMs) evaluation has not explored fair abstractive summarization. In this paper, we systematically investigate fair abstractive summarization for user-generated data. We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people, and we propose four reference-free automatic metrics by measuring the differences between target and source perspectives. We evaluate nine LLMs, including three GPT models, four LLaMA models, PaLM 2, and Claude, on six datasets collected from social media, online reviews, and recorded transcripts. Experiments show that both the model-generated and the human-written reference summaries suffer from low fairness. We conduct a comprehensive analysis of the common factors influencing fairness and propose three simple but effective methods to alleviate unfair summarization. Our dataset and code are available at https://github.com/psunlpgroup/FairSumm.
|
2210.07239
|
Menelaos Kanakis
|
Menelaos Kanakis, Thomas E. Huang, David Bruggemann, Fisher Yu, Luc
Van Gool
|
Composite Learning for Robust and Effective Dense Predictions
|
Winter Conference on Applications of Computer Vision (WACV), 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Multi-task learning promises better model generalization on a target task by
jointly optimizing it with an auxiliary task. However, the current practice
requires additional labeling efforts for the auxiliary task, while not
guaranteeing better model performance. In this paper, we find that jointly
training a dense prediction (target) task with a self-supervised (auxiliary)
task can consistently improve the performance of the target task, while
eliminating the need for labeling auxiliary tasks. We refer to this joint
training as Composite Learning (CompL). Experiments of CompL on monocular depth
estimation, semantic segmentation, and boundary detection show consistent
performance improvements in fully and partially labeled datasets. Further
analysis on depth estimation reveals that joint training with self-supervision
outperforms most labeled auxiliary tasks. We also find that CompL can improve
model robustness when the models are evaluated in new domains. These results
demonstrate the benefits of self-supervision as an auxiliary task, and
establish the design of novel task-specific self-supervised methods as a new
axis of investigation for future multi-task learning research.
|
[
{
"created": "Thu, 13 Oct 2022 17:59:16 GMT",
"version": "v1"
}
] |
2022-10-14
|
[
[
"Kanakis",
"Menelaos",
""
],
[
"Huang",
"Thomas E.",
""
],
[
"Bruggemann",
"David",
""
],
[
"Yu",
"Fisher",
""
],
[
"Van Gool",
"Luc",
""
]
] |
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task. However, the current practice requires additional labeling efforts for the auxiliary task, while not guaranteeing better model performance. In this paper, we find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks. We refer to this joint training as Composite Learning (CompL). Experiments of CompL on monocular depth estimation, semantic segmentation, and boundary detection show consistent performance improvements in fully and partially labeled datasets. Further analysis on depth estimation reveals that joint training with self-supervision outperforms most labeled auxiliary tasks. We also find that CompL can improve model robustness when the models are evaluated in new domains. These results demonstrate the benefits of self-supervision as an auxiliary task, and establish the design of novel task-specific self-supervised methods as a new axis of investigation for future multi-task learning research.
|
0804.2991
|
Enrico Paolini
|
Enrico Paolini, Gianluigi Liva, Michela Varrella, Balazs Matuz, Marco
Chiani
|
Low-Complexity LDPC Codes with Near-Optimum Performance over the BEC
|
2008 Advanced Satellite Mobile Systems Conference. 9 pages, 12
figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works showed how low-density parity-check (LDPC) erasure correcting
codes, under maximum likelihood (ML) decoding, are capable of tightly
approaching the performance of an ideal maximum-distance-separable code on the
binary erasure channel. Such a result is achievable down to low error rates, even
for small and moderate block sizes, while keeping the decoding complexity low,
thanks to a class of decoding algorithms which exploits the sparseness of the
parity-check matrix to reduce the complexity of Gaussian elimination (GE). In
this paper the main concepts underlying ML decoding of LDPC codes are recalled.
A performance analysis among various LDPC code classes is then carried out,
including a comparison with fixed-rate Raptor codes. The results show that LDPC
and Raptor codes provide almost identical performance in terms of decoding
failure probability vs. overhead.
|
[
{
"created": "Fri, 18 Apr 2008 10:49:44 GMT",
"version": "v1"
}
] |
2008-04-21
|
[
[
"Paolini",
"Enrico",
""
],
[
"Liva",
"Gianluigi",
""
],
[
"Varrella",
"Michela",
""
],
[
"Matuz",
"Balazs",
""
],
[
"Chiani",
"Marco",
""
]
] |
Recent works showed how low-density parity-check (LDPC) erasure correcting codes, under maximum likelihood (ML) decoding, are capable of tightly approaching the performance of an ideal maximum-distance-separable code on the binary erasure channel. Such a result is achievable down to low error rates, even for small and moderate block sizes, while keeping the decoding complexity low, thanks to a class of decoding algorithms which exploits the sparseness of the parity-check matrix to reduce the complexity of Gaussian elimination (GE). In this paper the main concepts underlying ML decoding of LDPC codes are recalled. A performance analysis among various LDPC code classes is then carried out, including a comparison with fixed-rate Raptor codes. The results show that LDPC and Raptor codes provide almost identical performance in terms of decoding failure probability vs. overhead.
|
2304.07258
|
Benjamin Towle
|
Benjamin Towle and Ke Zhou
|
Learn What Is Possible, Then Choose What Is Best: Disentangling
One-To-Many Relations in Language Through Text-based Games
|
EMNLP Findings 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-training language models on large self-supervised corpora, followed by
task-specific fine-tuning, has become the dominant paradigm in NLP. These
pre-training datasets often have a one-to-many structure--e.g. in dialogue
there are many valid responses for a given context. However, only some of these
responses will be desirable in our downstream task. This raises the question of
how we should train the model such that it can emulate the desirable
behaviours, but not the undesirable ones. Current approaches train in a
one-to-one setup--only a single target response is given for a single dialogue
context--leading to models only learning to predict the average response, while
ignoring the full range of possible responses. Using text-based games as a
testbed, our approach, PASA, uses discrete latent variables to capture the
range of different behaviours represented in our larger pre-training dataset.
We then use knowledge distillation to distil the posterior probability
distribution into a student model. This probability distribution is far richer
than learning from only the hard targets of the dataset, and thus allows the
student model to benefit from the richer range of actions the teacher model has
learned. Results show up to 49% empirical improvement over the previous
state-of-the-art model on the Jericho Walkthroughs dataset.
|
[
{
"created": "Fri, 14 Apr 2023 17:11:26 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Apr 2023 10:35:13 GMT",
"version": "v2"
}
] |
2023-04-27
|
[
[
"Towle",
"Benjamin",
""
],
[
"Zhou",
"Ke",
""
]
] |
Pre-training language models on large self-supervised corpora, followed by task-specific fine-tuning, has become the dominant paradigm in NLP. These pre-training datasets often have a one-to-many structure--e.g. in dialogue there are many valid responses for a given context. However, only some of these responses will be desirable in our downstream task. This raises the question of how we should train the model such that it can emulate the desirable behaviours, but not the undesirable ones. Current approaches train in a one-to-one setup--only a single target response is given for a single dialogue context--leading to models only learning to predict the average response, while ignoring the full range of possible responses. Using text-based games as a testbed, our approach, PASA, uses discrete latent variables to capture the range of different behaviours represented in our larger pre-training dataset. We then use knowledge distillation to distil the posterior probability distribution into a student model. This probability distribution is far richer than learning from only the hard targets of the dataset, and thus allows the student model to benefit from the richer range of actions the teacher model has learned. Results show up to 49% empirical improvement over the previous state-of-the-art model on the Jericho Walkthroughs dataset.
|
2402.05236
|
Fernando Barbosa
|
Erik Warberg (1), Adam Miksits (1 and 2), Fernando S. Barbosa (2) ((1)
KTH Royal Institute of Technology, (2) Ericsson Research)
|
Real-Time Line-Based Room Segmentation and Continuous Euclidean Distance
Fields
|
Open-source code:
https://github.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continuous map representations, as opposed to traditional discrete ones such
as grid maps, have been gaining traction in the research community. However,
current approaches still suffer from high computation costs, making them unable
to be used in large environments without sacrificing precision. In this paper,
a scalable method building upon Gaussian Process-based Euclidean Distance
Fields (GP-EDFs) is proposed. By leveraging structure inherent to indoor
environments, namely walls and rooms, we achieve an accurate continuous map
representation that is fast enough to be updated and used in real-time. This is
possible thanks to a novel line-based room segmentation algorithm, enabling the
creation of smaller local GP-EDFs for each room, which in turn also use line
segments as their shape priors, thus representing the map more efficiently with
fewer data points. We evaluate this method in simulation experiments, and make
the code available open-source.
|
[
{
"created": "Wed, 7 Feb 2024 20:18:15 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Warberg",
"Erik",
"",
"1"
],
[
"Miksits",
"Adam",
"",
"1 and 2"
],
[
"Barbosa",
"Fernando S.",
""
]
] |
Continuous map representations, as opposed to traditional discrete ones such as grid maps, have been gaining traction in the research community. However, current approaches still suffer from high computation costs, making them unable to be used in large environments without sacrificing precision. In this paper, a scalable method building upon Gaussian Process-based Euclidean Distance Fields (GP-EDFs) is proposed. By leveraging structure inherent to indoor environments, namely walls and rooms, we achieve an accurate continuous map representation that is fast enough to be updated and used in real-time. This is possible thanks to a novel line-based room segmentation algorithm, enabling the creation of smaller local GP-EDFs for each room, which in turn also use line segments as their shape priors, thus representing the map more efficiently with fewer data points. We evaluate this method in simulation experiments, and make the code available open-source.
|
2103.13127
|
Yinpeng Dong
|
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang
Su, Jun Zhu
|
Black-box Detection of Backdoor Attacks with Limited Information and
Data
| null | null | null | null |
cs.CR cs.CV cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Although deep neural networks (DNNs) have made rapid progress in recent
years, they are vulnerable in adversarial environments. A malicious backdoor
could be embedded in a model by poisoning the training dataset, whose intention
is to make the infected model give wrong predictions during inference when the
specific trigger appears. To mitigate the potential threats of backdoor
attacks, various backdoor detection and defense methods have been proposed.
However, the existing techniques usually require the poisoned training data or
access to the white-box model, which is commonly unavailable in practice. In
this paper, we propose a black-box backdoor detection (B3D) method to identify
backdoor attacks with only query access to the model. We introduce a
gradient-free optimization algorithm to reverse-engineer the potential trigger
for each class, which helps to reveal the existence of backdoor attacks. In
addition to backdoor detection, we also propose a simple strategy for reliable
predictions using the identified backdoored models. Extensive experiments on
hundreds of DNN models trained on several datasets corroborate the
effectiveness of our method under the black-box setting against various
backdoor attacks.
|
[
{
"created": "Wed, 24 Mar 2021 12:06:40 GMT",
"version": "v1"
}
] |
2021-03-25
|
[
[
"Dong",
"Yinpeng",
""
],
[
"Yang",
"Xiao",
""
],
[
"Deng",
"Zhijie",
""
],
[
"Pang",
"Tianyu",
""
],
[
"Xiao",
"Zihao",
""
],
[
"Su",
"Hang",
""
],
[
"Zhu",
"Jun",
""
]
] |
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments. A malicious backdoor could be embedded in a model by poisoning the training dataset, whose intention is to make the infected model give wrong predictions during inference when the specific trigger appears. To mitigate the potential threats of backdoor attacks, various backdoor detection and defense methods have been proposed. However, the existing techniques usually require the poisoned training data or access to the white-box model, which is commonly unavailable in practice. In this paper, we propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model. We introduce a gradient-free optimization algorithm to reverse-engineer the potential trigger for each class, which helps to reveal the existence of backdoor attacks. In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models. Extensive experiments on hundreds of DNN models trained on several datasets corroborate the effectiveness of our method under the black-box setting against various backdoor attacks.
|
1111.4737
|
EPTCS
|
Sebastian Buchwald (Karlsruhe Institute of Technology (KIT)), Edgar
Jakumeit (Karlsruhe Institute of Technology (KIT))
|
Compiler Optimization: A Case for the Transformation Tool Contest
|
In Proceedings TTC 2011, arXiv:1111.4407
|
EPTCS 74, 2011, pp. 6-16
|
10.4204/EPTCS.74.2
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An optimizing compiler consists of a front end parsing a textual programming
language into an intermediate representation (IR), a middle end performing
optimizations on the IR, and a back end lowering the IR to a target
representation (TR) built of operations supported by the target hardware. In
modern compiler construction graph-based IRs are employed. Optimization and
lowering tasks can then be implemented with graph transformation rules. This
case provides two compiler tasks to evaluate the participating tools regarding
performance.
|
[
{
"created": "Mon, 21 Nov 2011 05:24:23 GMT",
"version": "v1"
}
] |
2011-11-22
|
[
[
"Buchwald",
"Sebastian",
"",
"Karlsruhe Institute of Technology"
],
[
"Jakumeit",
"Edgar",
"",
"Karlsruhe Institute of Technology"
]
] |
An optimizing compiler consists of a front end parsing a textual programming language into an intermediate representation (IR), a middle end performing optimizations on the IR, and a back end lowering the IR to a target representation (TR) built of operations supported by the target hardware. In modern compiler construction graph-based IRs are employed. Optimization and lowering tasks can then be implemented with graph transformation rules. This case provides two compiler tasks to evaluate the participating tools regarding performance.
|
2309.14356
|
Phillip Howard
|
Tiep Le and Vasudev Lal and Phillip Howard
|
COCO-Counterfactuals: Automatically Constructed Counterfactual Examples
for Image-Text Pairs
|
Accepted to NeurIPS 2023 Datasets and Benchmarks Track
| null | null | null |
cs.LG cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Counterfactual examples have proven to be valuable in the field of natural
language processing (NLP) for both evaluating and improving the robustness of
language models to spurious correlations in datasets. Despite their
demonstrated utility for NLP, multimodal counterfactual examples have been
relatively unexplored due to the difficulty of creating paired image-text data
with minimal counterfactual changes. To address this challenge, we introduce a
scalable framework for automatic generation of counterfactual examples using
text-to-image diffusion models. We use our framework to create
COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and
text captions based on the MS-COCO dataset. We validate the quality of
COCO-Counterfactuals through human evaluations and show that existing
multimodal models are challenged by our counterfactual image-text pairs.
Additionally, we demonstrate the usefulness of COCO-Counterfactuals for
improving out-of-domain generalization of multimodal vision-language models via
training data augmentation.
|
[
{
"created": "Sat, 23 Sep 2023 00:16:47 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Oct 2023 15:41:25 GMT",
"version": "v2"
}
] |
2023-11-01
|
[
[
"Le",
"Tiep",
""
],
[
"Lal",
"Vasudev",
""
],
[
"Howard",
"Phillip",
""
]
] |
Counterfactual examples have proven to be valuable in the field of natural language processing (NLP) for both evaluating and improving the robustness of language models to spurious correlations in datasets. Despite their demonstrated utility for NLP, multimodal counterfactual examples have been relatively unexplored due to the difficulty of creating paired image-text data with minimal counterfactual changes. To address this challenge, we introduce a scalable framework for automatic generation of counterfactual examples using text-to-image diffusion models. We use our framework to create COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and text captions based on the MS-COCO dataset. We validate the quality of COCO-Counterfactuals through human evaluations and show that existing multimodal models are challenged by our counterfactual image-text pairs. Additionally, we demonstrate the usefulness of COCO-Counterfactuals for improving out-of-domain generalization of multimodal vision-language models via training data augmentation.
|