Dataset schema (column name, type, value-length range, nullability, as reported by the dataset viewer):

| column | type | length / size | nullable |
|---|---|---|---|
| id | string | 9–10 chars | no |
| submitter | string | 1–64 chars | yes |
| authors | string | 4–20.7k chars | no |
| title | string | 4–246 chars | no |
| comments | string | 1–523 chars | yes |
| journal-ref | string | 4–404 chars | yes |
| doi | string | 11–153 chars | yes |
| report-no | string | 2–254 chars | yes |
| categories | string | 5–98 chars | no |
| license | string (9 distinct values) | — | no |
| orig_abstract | string | 14–3.35k chars | no |
| versions | list | 1–60 items | no |
| update_date | string | 10 chars | no |
| authors_parsed | list | 1–1.35k items | no |
| abstract | string | 11–3.34k chars | no |

The records below list their fields in this order, separated by `|`. In the rows shown, `abstract` is identical to `orig_abstract`.
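A record under this schema can be handled as a plain dictionary. The sketch below rebuilds display names from `authors_parsed` for the first row shown; the helper function is illustrative, not part of the dataset:

```python
# One record under the schema above (values taken from the first row shown).
record = {
    "id": "2201.03092",
    "submitter": "Xiyang Hu",
    "title": "Uncovering the Source of Machine Bias",
    "categories": "cs.LG econ.GN q-fin.EC stat.ML",
    "versions": [{"created": "Sun, 9 Jan 2022 21:05:24 GMT", "version": "v1"}],
    # authors_parsed stores [last, first, suffix] triples
    "authors_parsed": [
        ["Hu", "Xiyang", ""],
        ["Huang", "Yan", ""],
        ["Li", "Beibei", ""],
        ["Lu", "Tian", ""],
    ],
}

def format_authors(parsed):
    """Rebuild 'First Last' display names from authors_parsed triples."""
    return ", ".join(
        " ".join(part for part in (first, last, suffix) if part)
        for last, first, suffix in parsed
    )

print(format_authors(record["authors_parsed"]))
# -> Xiyang Hu, Yan Huang, Beibei Li, Tian Lu
```

Note that the round-trip matches the flat `authors` string of the same row, which is how the two author columns relate.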
2201.03092
|
Xiyang Hu
|
Xiyang Hu, Yan Huang, Beibei Li, Tian Lu
|
Uncovering the Source of Machine Bias
|
accepted by KDD 2021, MLCM workshop
| null | null | null |
cs.LG econ.GN q-fin.EC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a structural econometric model to capture the decision dynamics of
human evaluators on an online micro-lending platform, and estimate the model
parameters using a real-world dataset. We find that two types of gender bias,
preference-based bias and belief-based bias, are present in human evaluators'
decisions. Both types of bias favor female applicants. Through
counterfactual simulations, we quantify the effect of gender bias on loan
granting outcomes and the welfare of the company and the borrowers. Our results
imply that both the existence of the preference-based bias and that of the
belief-based bias reduce the company's profits. When the preference-based bias
is removed, the company earns more profits. When the belief-based bias is
removed, the company's profits also increase. Both increases result from
raising the approval probability for borrowers, especially male borrowers, who
eventually pay back loans. For borrowers, the elimination of either bias
decreases the gender gap in true positive rates in the credit risk
evaluation. We also train machine learning algorithms on both the real-world
data and the data from the counterfactual simulations. We compare the decisions
made by those algorithms to see how evaluators' biases are inherited by the
algorithms and reflected in machine-based decisions. We find that machine
learning algorithms can mitigate both the preference-based bias and the
belief-based bias.
|
[
{
"created": "Sun, 9 Jan 2022 21:05:24 GMT",
"version": "v1"
}
] |
2022-01-11
|
[
[
"Hu",
"Xiyang",
""
],
[
"Huang",
"Yan",
""
],
[
"Li",
"Beibei",
""
],
[
"Lu",
"Tian",
""
]
] |
|
1807.00507
|
Michael Codish
|
Michael Codish
|
A SAT Encoding for the $n$-Fractions Problem
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This note describes a SAT encoding for the $n$-fractions puzzle which is
problem 041 of the CSPLib. Using a SAT solver we obtain a solution for two of
the six remaining open instances of this problem.
|
[
{
"created": "Mon, 2 Jul 2018 07:54:17 GMT",
"version": "v1"
}
] |
2018-07-03
|
[
[
"Codish",
"Michael",
""
]
] |
|
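For the n = 3 case, the puzzle the note encodes (CSPLib problem 041) is small enough to illustrate by brute force. This sketch only states the problem; it does not reproduce the note's SAT encoding:

```python
from itertools import permutations

# CSPLib problem 041, n = 3: place the digits 1..9, each exactly once, as
#   A/BC + D/EF + G/HI = 1,  where BC denotes the two-digit number 10*B + C.
# Brute force over all 9! assignments, comparing exactly with integers
# (cross-multiplying avoids floating-point error).
def three_fractions_solutions():
    sols = []
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        bc, ef, hi = 10 * b + c, 10 * e + f, 10 * h + i
        if a * ef * hi + d * bc * hi + g * bc * ef == bc * ef * hi:
            sols.append((a, bc, d, ef, g, hi))
    return sols

sols = three_fractions_solutions()
print(len(sols), sols[0])
```

One known solution is 5/34 + 7/68 + 9/12 = 1 (since 5/34 + 7/68 = 17/68 = 1/4 and 9/12 = 3/4). A SAT encoding replaces this exhaustive search with Boolean constraints, which is what makes the larger open instances tractable.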
2107.03434
|
Trinh Van Chien
|
Trinh Van Chien and Hien Quoc Ngo and Symeon Chatzinotas and Björn Ottersten
|
Reconfigurable Intelligent Surface-Assisted Massive MIMO: Favorable
Propagation, Channel Hardening, and Rank Deficiency
|
7 pages, 2 Figures. Submitted to IEEE for possible publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive multiple-input multiple-output (MIMO) and reconfigurable intelligent
surface (RIS) are two promising technologies for 5G-and-beyond wireless
networks, capable of providing large array gain and multiuser spatial
multiplexing. Without requiring additional frequency bands, those technologies
offer significant improvements in both spectral and energy efficiency by
simultaneously serving many users. The performance analysis of an RIS-assisted
Massive MIMO system as a function of the channel statistics relies heavily on
fundamental properties including favorable propagation, channel hardening, and
rank deficiency. The coexistence of both direct and indirect links results in
aggregated channels, whose properties are the main concerns of this lecture
note. For practical systems with a finite number of antennas and scattering
elements of the RIS, we evaluate the corresponding deterministic metrics with
Rayleigh fading channels as a typical example.
|
[
{
"created": "Wed, 7 Jul 2021 18:35:44 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Sep 2021 16:17:41 GMT",
"version": "v2"
}
] |
2021-09-07
|
[
[
"Van Chien",
"Trinh",
""
],
[
"Ngo",
"Hien Quoc",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ottersten",
"Björn",
""
]
] |
|
2203.12324
|
Maaike Los
|
Maaike Los, Zoé Christoff, Davide Grossi
|
Proportional Budget Allocations: Towards a Systematization
|
17 pages
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We contribute to the programme of lifting proportionality axioms from the
multi-winner voting setting to participatory budgeting. We define novel
proportionality axioms for participatory budgeting and test them on known
proportionality-driven rules such as Phragmén and Rule X. We investigate
logical implications among old and new axioms and provide a systematic overview
of proportionality criteria in participatory budgeting.
|
[
{
"created": "Wed, 23 Mar 2022 11:01:31 GMT",
"version": "v1"
},
{
"created": "Wed, 4 May 2022 14:16:16 GMT",
"version": "v2"
}
] |
2022-05-05
|
[
[
"Los",
"Maaike",
""
],
[
"Christoff",
"Zoé",
""
],
[
"Grossi",
"Davide",
""
]
] |
|
2309.13904
|
Katsuya Hotta
|
Katsuya Hotta, Chao Zhang, Yoshihiro Hagihara, Takuya Akashi
|
Subspace-Guided Feature Reconstruction for Unsupervised Anomaly
Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised anomaly localization, which plays a critical role in industrial
manufacturing, aims to identify anomalous regions that deviate from normal
sample patterns. Most recent methods perform feature matching or reconstruction
for the target sample with pre-trained deep neural networks. However, they
still struggle to address challenging anomalies because the deep embeddings
stored in the memory bank can be less powerful and informative. More
specifically, prior methods often overly rely on the finite resources stored in
the memory bank, which leads to low robustness to unseen targets. In this
paper, we propose a novel subspace-guided feature reconstruction framework to
pursue adaptive feature approximation for anomaly localization. It first learns
to construct low-dimensional subspaces from the given nominal samples, and then
learns to reconstruct the given deep target embedding by linearly combining the
subspace basis vectors using the self-expressive model. Our core idea is that,
despite the limited resources in the memory bank, out-of-bank features can
still be "mimicked" under the self-expressive mechanism to adaptively
model the target. Eventually, the poorly reconstructed feature dimensions
indicate anomalies for localization. Moreover, we propose a sampling method
that leverages the sparsity of subspaces and allows the feature reconstruction
to depend only on a small resource subset, which contributes to less memory
overhead. Extensive experiments on three industrial benchmark datasets
demonstrate that our approach generally achieves state-of-the-art anomaly
localization performance.
|
[
{
"created": "Mon, 25 Sep 2023 06:58:57 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Feb 2024 09:16:53 GMT",
"version": "v2"
}
] |
2024-02-29
|
[
[
"Hotta",
"Katsuya",
""
],
[
"Zhang",
"Chao",
""
],
[
"Hagihara",
"Yoshihiro",
""
],
[
"Akashi",
"Takuya",
""
]
] |
|
2103.11119
|
Yiwei Bao
|
Yiwei Bao, Yihua Cheng, Yunfei Liu and Feng Lu
|
Adaptive Feature Fusion Network for Gaze Tracking in Mobile Tablets
|
Accepted at International Conference on Pattern Recognition 2020
(ICPR)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, many multi-stream gaze estimation methods have been proposed. They
estimate gaze from eye and face appearances and achieve reasonable accuracy.
However, most of the methods simply concatenate the features extracted from eye
and face appearance. The feature fusion process has been ignored. In this
paper, we propose a novel Adaptive Feature Fusion Network (AFF-Net), which
performs the gaze tracking task on mobile tablets. We stack two-eye feature maps
and utilize Squeeze-and-Excitation layers to adaptively fuse two-eye features
according to their similarity on appearance. Meanwhile, we also propose
Adaptive Group Normalization to recalibrate eye features with the guidance of
facial feature. Extensive experiments on both GazeCapture and MPIIFaceGaze
datasets demonstrate consistently superior performance of the proposed method.
|
[
{
"created": "Sat, 20 Mar 2021 07:16:10 GMT",
"version": "v1"
}
] |
2021-03-23
|
[
[
"Bao",
"Yiwei",
""
],
[
"Cheng",
"Yihua",
""
],
[
"Liu",
"Yunfei",
""
],
[
"Lu",
"Feng",
""
]
] |
|
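The Squeeze-and-Excitation recalibration this abstract builds on can be sketched generically. The shapes and random weights below are illustrative; this is a plain-NumPy sketch of the SE mechanism, not the AFF-Net implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation channel recalibration.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    excitation weights (r is the reduction ratio).
    """
    z = feat.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: FC -> ReLU -> FC -> sigmoid
    return feat * s[:, None, None]              # rescale each channel by its weight

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = se_block(feat, w1, w2)
print(out.shape)  # same shape as the input, channels rescaled
```

In the paper's setting, the channel weights are driven by the similarity of the stacked two-eye feature maps, so the fusion adapts per sample rather than concatenating features blindly.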
2011.04501
|
Yitao Chen
|
Yitao Chen and Deepanshu Vasal
|
Multi-Agent Decentralized Belief Propagation on Graphs
|
16 pages. arXiv admin note: text overlap with arXiv:1109.2135,
arXiv:1209.1695, arXiv:1802.08757 by other authors
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of interactive partially observable Markov decision
processes (I-POMDPs), where the agents are located at the nodes of a
communication network. Specifically, we assume a certain message type for all
messages. Moreover, each agent makes individual decisions based on the
interactive belief states, the information observed locally and the messages
received from its neighbors over the network. Within this setting, the
collective goal of the agents is to maximize the globally averaged return over
the network through exchanging information with their neighbors. We propose a
decentralized belief propagation algorithm for the problem, and prove the
convergence of our algorithm. Finally we show multiple applications of our
framework. Our work appears to be the first study of a decentralized belief
propagation algorithm for networked multi-agent I-POMDPs.
|
[
{
"created": "Fri, 6 Nov 2020 18:16:26 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Nov 2020 02:25:35 GMT",
"version": "v2"
}
] |
2020-11-11
|
[
[
"Chen",
"Yitao",
""
],
[
"Vasal",
"Deepanshu",
""
]
] |
|
2010.06362
|
Jinting Wu
|
Jinting Wu, Yujia Zhang, Xiaoguang Zhao and Wenbin Gao
|
A Generalized Zero-Shot Framework for Emotion Recognition from Body
Gestures
|
The new version adds a co-author and revises the layout of Fig.3
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although automatic emotion recognition from facial expressions and speech has
made remarkable progress, emotion recognition from body gestures has not been
thoroughly explored. People often use a variety of body language to express
emotions, and it is difficult to enumerate all emotional body gestures and
collect enough samples for each category. Therefore, recognizing new emotional
body gestures is critical for better understanding human emotions. However, the
existing methods fail to accurately determine which emotional state a new body
gesture belongs to. In order to solve this problem, we introduce a Generalized
Zero-Shot Learning (GZSL) framework, which consists of three branches to infer
the emotional state of the new body gestures with only their semantic
descriptions. The first branch is a Prototype-Based Detector (PBD) which is
used to determine whether a sample belongs to a seen body gesture category and
obtain the prediction results of the samples from the seen categories. The
second branch is a Stacked AutoEncoder (StAE) with manifold regularization,
which utilizes semantic representations to predict samples from unseen
categories. Note that both of the above branches are for body gesture
recognition. We further add an emotion classifier with a softmax layer as the
third branch in order to better learn the feature representations for this
emotion classification task. The input features for these three branches are
learned by a shared feature extraction network, i.e., a Bidirectional Long
Short-Term Memory network (BLSTM) with a self-attention module. We treat these
three branches as subtasks and use multi-task learning strategies for joint
training. The performance of our framework on an emotion recognition dataset is
significantly superior to the traditional method of emotion classification and
state-of-the-art zero-shot learning methods.
|
[
{
"created": "Tue, 13 Oct 2020 13:16:38 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Oct 2020 08:15:45 GMT",
"version": "v2"
}
] |
2020-10-21
|
[
[
"Wu",
"Jinting",
""
],
[
"Zhang",
"Yujia",
""
],
[
"Zhao",
"Xiaoguang",
""
],
[
"Gao",
"Wenbin",
""
]
] |
|
2303.08233
|
Timothy Yu
|
Rindranirina Ramamonjison, Timothy T. Yu, Raymond Li, Haley Li,
Giuseppe Carenini, Bissan Ghaddar, Shiqi He, Mahdi Mostajabdaveh, Amin
Banitalebi-Dehkordi, Zirui Zhou, Yong Zhang
|
NL4Opt Competition: Formulating Optimization Problems Based on Their
Natural Language Descriptions
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Natural Language for Optimization (NL4Opt) Competition was created to
investigate methods of extracting the meaning and formulation of an
optimization problem based on its text description. Specifically, the goal of
the competition is to increase the accessibility and usability of optimization
solvers by allowing non-experts to interface with them using natural language.
We separate this challenging goal into two sub-tasks: (1) recognize and label
the semantic entities that correspond to the components of the optimization
problem; (2) generate a meaning representation (i.e., a logical form) of the
problem from its detected problem entities. The first task aims to reduce
ambiguity by detecting and tagging the entities of the optimization problems.
The second task creates an intermediate representation of the linear
programming (LP) problem that is converted into a format that can be used by
commercial solvers. In this report, we present the LP word problem dataset and
shared tasks for the NeurIPS 2022 competition. Furthermore, we investigate and
compare the performance of the ChatGPT large language model against the winning
solutions. Through this competition, we hope to bring interest towards the
development of novel machine learning applications and datasets for
optimization modeling.
|
[
{
"created": "Tue, 14 Mar 2023 20:59:04 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 01:10:12 GMT",
"version": "v2"
}
] |
2023-03-28
|
[
[
"Ramamonjison",
"Rindranirina",
""
],
[
"Yu",
"Timothy T.",
""
],
[
"Li",
"Raymond",
""
],
[
"Li",
"Haley",
""
],
[
"Carenini",
"Giuseppe",
""
],
[
"Ghaddar",
"Bissan",
""
],
[
"He",
"Shiqi",
""
],
[
"Mostajabdaveh",
"Mahdi",
""
],
[
"Banitalebi-Dehkordi",
"Amin",
""
],
[
"Zhou",
"Zirui",
""
],
[
"Zhang",
"Yong",
""
]
] |
|
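The two sub-tasks can be illustrated end to end on a toy word problem. The entity labels, the intermediate representation, and the tiny solver below are all illustrative inventions, not the competition's actual annotation scheme or pipeline:

```python
# Toy NL4Opt-style pipeline: tagged entities -> intermediate representation
# (logical form) -> solution. Labels and IR layout are made up for
# illustration; the competition defines its own scheme.
problem = "Maximize profit: 3 per cake, 2 per pie, at most 4 cakes and 5 pies."

# Sub-task 1 output (hand-written here): spans labelled as entities.
entities = [("cake", "VAR"), ("pie", "VAR"), ("3", "OBJ_COEFF"),
            ("2", "OBJ_COEFF"), ("4", "LIMIT"), ("5", "LIMIT")]

# Sub-task 2 output: a logical form of the linear program.
ir = {
    "objective": {"sense": "max", "coeffs": {"cake": 3.0, "pie": 2.0}},
    "constraints": [{"coeffs": {"cake": 1.0}, "op": "<=", "rhs": 4.0},
                    {"coeffs": {"pie": 1.0}, "op": "<=", "rhs": 5.0}],
}

def solve_box_lp(ir):
    """Solve the special case where every constraint is a simple upper bound.

    Enough for this toy; a real system would convert the IR to a standard
    LP format and hand it to a commercial solver.
    """
    bound = {v: min(c["rhs"] for c in ir["constraints"] if v in c["coeffs"])
             for v in ir["objective"]["coeffs"]}
    value = sum(ir["objective"]["coeffs"][v] * bound[v] for v in bound)
    return bound, value

print(solve_box_lp(ir))  # -> ({'cake': 4.0, 'pie': 5.0}, 22.0)
```

The point of the split is visible even at this scale: sub-task 1 resolves *which* spans carry optimization meaning, and sub-task 2 resolves *how* they compose into a solvable program.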
2105.13478
|
Kyle MacMillan
|
Kyle MacMillan, Tarun Mangla, James Saxon, Nick Feamster
|
Measuring the Performance and Network Utilization of Popular Video
Conferencing Applications
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Video conferencing applications (VCAs) have become a critical Internet
application, even more so during the COVID-19 pandemic, as users worldwide now
rely on them for work, school, and telehealth. It is thus increasingly
important to understand the resource requirements of different VCAs and how
they perform under different network conditions, including: how much speed
(upstream and downstream throughput) a VCA needs to support high quality of
experience; how VCAs perform under temporary reductions in available capacity;
how they compete with themselves, with each other, and with other applications;
and how usage modality (e.g., number of participants) affects utilization. We
study three modern VCAs: Zoom, Google Meet, and Microsoft Teams. Answers to
these questions differ substantially depending on VCA. First, the average
utilization on an unconstrained link varies between 0.8 Mbps and 1.9 Mbps.
Given temporary reduction of capacity, some VCAs can take as long as 50 seconds
to recover to steady state. Differences in proprietary congestion control
algorithms also result in unfair bandwidth allocations: in constrained
bandwidth settings, one Zoom video conference can consume more than 75% of the
available bandwidth when competing with another VCA (e.g., Meet, Teams). For
some VCAs, client utilization can decrease as the number of participants
increases, due to the reduced video resolution of each participant's video
stream given a larger number of participants. Finally, one participant's
viewing mode (e.g., pinning a speaker) can affect the upstream utilization of
other participants.
|
[
{
"created": "Thu, 27 May 2021 22:21:57 GMT",
"version": "v1"
}
] |
2021-05-31
|
[
[
"MacMillan",
"Kyle",
""
],
[
"Mangla",
"Tarun",
""
],
[
"Saxon",
"James",
""
],
[
"Feamster",
"Nick",
""
]
] |
|
2404.00790
|
Mingyang Wang
|
Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze
|
Rehearsal-Free Modular and Compositional Continual Learning for Language
Models
| null | null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning aims at incrementally acquiring new knowledge while not
forgetting existing knowledge. To overcome catastrophic forgetting, methods are
either rehearsal-based, i.e., store data examples from previous tasks for data
replay, or isolate parameters dedicated to each task. However, rehearsal-based
methods raise privacy and memory issues, and parameter-isolation continual
learning does not consider interaction between tasks, thus hindering knowledge
transfer. In this work, we propose MoCL, a rehearsal-free Modular and
Compositional Continual Learning framework which continually adds new modules
to language models and composes them with existing modules. Experiments on
various benchmarks show that MoCL outperforms the state of the art and effectively
facilitates knowledge transfer.
|
[
{
"created": "Sun, 31 Mar 2024 20:28:44 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Wang",
"Mingyang",
""
],
[
"Adel",
"Heike",
""
],
[
"Lange",
"Lukas",
""
],
[
"Strötgen",
"Jannik",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
|
2301.04788
|
Shaonan Wang
|
Shaonan Wang, Nai Ding, Nan Lin, Jiajun Zhang, Chengqing Zong
|
Language Cognition and Language Computation -- Human and Machine
Language Understanding
|
A survey of language comprehension in cognitive sciences and language
understanding in computer sciences and their relations
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language understanding is a key scientific issue in the fields of cognitive
and computer science. However, the two disciplines differ substantially in the
specific research questions. Cognitive science focuses on analyzing the
specific mechanism of the brain and investigating the brain's response to
language; few studies have examined the brain's language system as a whole. By
contrast, computer scientists focus on the efficiency of practical applications
when choosing research questions but may ignore the most essential laws of
language. Given these differences, can a combination of the disciplines offer
new insights for building intelligent language models and studying language
cognitive mechanisms? In the following text, we first review the research
questions, history, and methods of language understanding in cognitive and
computer science, focusing on the current progress and challenges. We then
compare and contrast the research of language understanding in cognitive and
computer sciences. Finally, we review existing work that combines insights from
language cognition and language computation and offer prospects for future
development trends.
|
[
{
"created": "Thu, 12 Jan 2023 02:37:00 GMT",
"version": "v1"
}
] |
2023-01-13
|
[
[
"Wang",
"Shaonan",
""
],
[
"Ding",
"Nai",
""
],
[
"Lin",
"Nan",
""
],
[
"Zhang",
"Jiajun",
""
],
[
"Zong",
"Chengqing",
""
]
] |
Language understanding is a key scientific issue in the fields of cognitive and computer science. However, the two disciplines differ substantially in their specific research questions. Cognitive science focuses on analyzing the specific mechanism of the brain and investigating the brain's response to language; few studies have examined the brain's language system as a whole. By contrast, computer scientists focus on the efficiency of practical applications when choosing research questions but may ignore the most essential laws of language. Given these differences, can a combination of the disciplines offer new insights for building intelligent language models and studying language cognitive mechanisms? In the following text, we first review the research questions, history, and methods of language understanding in cognitive and computer science, focusing on the current progress and challenges. We then compare and contrast the research of language understanding in cognitive and computer sciences. Finally, we review existing work that combines insights from language cognition and language computation and offer prospects for future development trends.
|
cs/0405092
|
Jiang Qiu
|
Yves Caseau, Glenn Silverstein, Francois Laburthe
|
Learning Hybrid Algorithms for Vehicle Routing Problems
|
Appeared in Theory and Practice of Logic Programming, vol. 1, no. 6,
2001
|
Theory and Practice of Logic Programming, vol. 1, no. 6, 2001
| null | null |
cs.PL
| null |
This paper presents a generic technique for improving hybrid algorithms
through the discovery and tuning of meta-heuristics. The idea is to
represent, with an algebra, a family of push/pull heuristics based upon
inserting and removing tasks in a current solution. We then let a learning
algorithm search for the best possible algebraic term, which represents a
hybrid algorithm for a given set of problems and an optimization criterion. In
a previous paper, we described this algebra in detail and provided a set of
preliminary results demonstrating the utility of this approach, using vehicle
routing with time windows (VRPTW) as a domain example. In this paper we expand
upon our results, providing a more robust experimental framework and learning
algorithms, and report on some new results using the standard Solomon
benchmarks. In particular, we show that our learning algorithm is able to
achieve results similar to the best-published algorithms using only a fraction
of the CPU time. We also show that the automatic tuning of the best hybrid
combination of such techniques yields a better solution than hand tuning, with
considerably less effort.
|
[
{
"created": "Mon, 24 May 2004 17:41:50 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Caseau",
"Yves",
""
],
[
"Silverstein",
"Glenn",
""
],
[
"Laburthe",
"Francois",
""
]
] |
This paper presents a generic technique for improving hybrid algorithms through the discovery and tuning of meta-heuristics. The idea is to represent, with an algebra, a family of push/pull heuristics based upon inserting and removing tasks in a current solution. We then let a learning algorithm search for the best possible algebraic term, which represents a hybrid algorithm for a given set of problems and an optimization criterion. In a previous paper, we described this algebra in detail and provided a set of preliminary results demonstrating the utility of this approach, using vehicle routing with time windows (VRPTW) as a domain example. In this paper we expand upon our results, providing a more robust experimental framework and learning algorithms, and report on some new results using the standard Solomon benchmarks. In particular, we show that our learning algorithm is able to achieve results similar to the best-published algorithms using only a fraction of the CPU time. We also show that the automatic tuning of the best hybrid combination of such techniques yields a better solution than hand tuning, with considerably less effort.
|
2402.09299
|
Vahid Majdinasab
|
Vahid Majdinasab, Amin Nikanjam, Foutse Khomh
|
Trained Without My Consent: Detecting Code Inclusion In Language Models
Trained on Code
|
Submitted to TOSEM (ACM Transactions on Software Engineering and
Methodology)
| null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code auditing ensures that the developed code adheres to standards,
regulations, and copyright protection by verifying that it does not contain
code from protected sources. The recent advent of Large Language Models (LLMs)
as coding assistants in the software development process poses new challenges
for code auditing. The dataset for training these models is mainly collected
from publicly available sources. This raises the issue of intellectual property
infringement as developers' codes are already included in the dataset.
Therefore, auditing code developed using LLMs is challenging, as it is
difficult to reliably assert if an LLM used during development has been trained
on specific copyrighted codes, given that we do not have access to the training
datasets of these models. Given the non-disclosure of the training datasets,
traditional approaches such as code clone detection are insufficient for
asserting copyright infringement. To address this challenge, we propose a new
approach, TraWiC: a model-agnostic and interpretable method based on membership
inference for detecting code inclusion in an LLM's training dataset. We extract
syntactic and semantic identifiers unique to each program to train a classifier
for detecting code inclusion. In our experiments, we observe that TraWiC is
capable of detecting 83.87% of the code snippets that were used to train an LLM. In
comparison, the prevalent clone detection tool NiCad is only capable of
detecting 47.64%. In addition to its remarkable performance, TraWiC has low
resource overhead in contrast to pair-wise clone detection that is conducted
during the auditing process of tools like the CodeWhisperer reference tracker,
across thousands of code snippets.
|
[
{
"created": "Wed, 14 Feb 2024 16:41:35 GMT",
"version": "v1"
}
] |
2024-02-15
|
[
[
"Majdinasab",
"Vahid",
""
],
[
"Nikanjam",
"Amin",
""
],
[
"Khomh",
"Foutse",
""
]
] |
Code auditing ensures that the developed code adheres to standards, regulations, and copyright protection by verifying that it does not contain code from protected sources. The recent advent of Large Language Models (LLMs) as coding assistants in the software development process poses new challenges for code auditing. The dataset for training these models is mainly collected from publicly available sources. This raises the issue of intellectual property infringement as developers' codes are already included in the dataset. Therefore, auditing code developed using LLMs is challenging, as it is difficult to reliably assert if an LLM used during development has been trained on specific copyrighted codes, given that we do not have access to the training datasets of these models. Given the non-disclosure of the training datasets, traditional approaches such as code clone detection are insufficient for asserting copyright infringement. To address this challenge, we propose a new approach, TraWiC: a model-agnostic and interpretable method based on membership inference for detecting code inclusion in an LLM's training dataset. We extract syntactic and semantic identifiers unique to each program to train a classifier for detecting code inclusion. In our experiments, we observe that TraWiC is capable of detecting 83.87% of the code snippets that were used to train an LLM. In comparison, the prevalent clone detection tool NiCad is only capable of detecting 47.64%. In addition to its remarkable performance, TraWiC has low resource overhead in contrast to pair-wise clone detection that is conducted during the auditing process of tools like the CodeWhisperer reference tracker, across thousands of code snippets.
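The per-program identifier fingerprint behind this kind of membership-inference check can be sketched as below. TraWiC's real pipeline trains a classifier over much richer syntactic and semantic features; this only illustrates extracting identifiers and scoring their overlap, and the snippets are invented examples:

```python
import re

# Sketch of an identifier-overlap signal for code-inclusion detection:
# extract identifiers from two programs and score how much they share.

IDENT = re.compile(r"\b[A-Za-z_][A-Za-z_0-9]*\b")
KEYWORDS = {"def", "return", "for", "in", "if", "else", "import"}

def identifiers(code):
    """Set of identifiers appearing in the source, minus language keywords."""
    return {t for t in IDENT.findall(code) if t not in KEYWORDS}

def overlap_score(code_a, code_b):
    """Jaccard overlap of the two identifier sets, in [0, 1]."""
    a, b = identifiers(code_a), identifiers(code_b)
    return len(a & b) / len(a | b) if a | b else 0.0

snippet = "def rotate_left(items, k):\n    return items[k:] + items[:k]"
clone   = "def rotate_left(items, k):\n    return items[k:] + items[:k]"
other   = "def mean(xs):\n    return sum(xs) / len(xs)"
```

A high overlap between a developer's code and model-reproduced code is one feature a classifier could consume; on its own it is far weaker than the trained detector described in the abstract.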
|
1903.02740
|
Zaiwang Gu
|
Zaiwang Gu, Jun Cheng, Huazhu Fu, Kang Zhou, Huaying Hao, Yitian Zhao,
Tianyang Zhang, Shenghua Gao and Jiang Liu
|
CE-Net: Context Encoder Network for 2D Medical Image Segmentation
|
accepted by IEEE transcations on medical imaging, (TMI)
| null |
10.1109/TMI.2019.2903562
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical image segmentation is an important step in medical image analysis.
With the rapid development of convolutional neural networks in image processing,
deep learning has been used for medical image segmentation, such as optic disc
segmentation, blood vessel detection, lung segmentation, cell segmentation,
etc. Previously, U-net based approaches have been proposed. However, the
consecutive pooling and strided convolutional operations lead to the loss of
some spatial information. In this paper, we propose a context encoder network
(referred to as CE-Net) to capture more high-level information and preserve
spatial information for 2D medical image segmentation. CE-Net mainly contains
three major components: a feature encoder module, a context extractor and a
feature decoder module. We use a pretrained ResNet block as the fixed feature
extractor. The context extractor module is formed by a newly proposed dense
atrous convolution (DAC) block and residual multi-kernel pooling (RMP) block.
We applied the proposed CE-Net to different 2D medical image segmentation
tasks. Comprehensive results show that the proposed method outperforms the
original U-Net method and other state-of-the-art methods for optic disc
segmentation, vessel detection, lung segmentation, cell contour segmentation
and retinal optical coherence tomography layer segmentation.
|
[
{
"created": "Thu, 7 Mar 2019 06:24:27 GMT",
"version": "v1"
}
] |
2019-03-08
|
[
[
"Gu",
"Zaiwang",
""
],
[
"Cheng",
"Jun",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Zhou",
"Kang",
""
],
[
"Hao",
"Huaying",
""
],
[
"Zhao",
"Yitian",
""
],
[
"Zhang",
"Tianyang",
""
],
[
"Gao",
"Shenghua",
""
],
[
"Liu",
"Jiang",
""
]
] |
Medical image segmentation is an important step in medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, etc. Previously, U-net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations lead to the loss of some spatial information. In this paper, we propose a context encoder network (referred to as CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor and a feature decoder module. We use a pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution (DAC) block and residual multi-kernel pooling (RMP) block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation and retinal optical coherence tomography layer segmentation.
|
2011.03168
|
Hiroyasu Tsukamoto
|
Hiroyasu Tsukamoto and Soon-Jo Chung and Jean-Jacques E. Slotine
|
Neural Stochastic Contraction Metrics for Learning-based Control and
Estimation
|
IEEE CONTROL SYSTEMS LETTERS (L-CSS), preprint version, accepted Dec.
2020 (DOI: 10.1109/LCSYS.2020.3046529).
https://ieeexplore.ieee.org/document/9302618
| null |
10.1109/LCSYS.2020.3046529
| null |
cs.LG cs.AI cs.RO cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Neural Stochastic Contraction Metrics (NSCM), a new design
framework for provably-stable robust control and estimation for a class of
stochastic nonlinear systems. It uses a spectrally-normalized deep neural
network to construct a contraction metric, sampled via simplified convex
optimization in the stochastic setting. Spectral normalization constrains the
state-derivatives of the metric to be Lipschitz continuous, thereby ensuring
exponential boundedness of the mean squared distance of system trajectories
under stochastic disturbances. The NSCM framework allows autonomous agents to
approximate optimal stable control and estimation policies in real-time, and
outperforms existing nonlinear control and estimation techniques including the
state-dependent Riccati equation, iterative LQR, EKF, and the deterministic
neural contraction metric, as illustrated in simulation results.
|
[
{
"created": "Fri, 6 Nov 2020 03:04:42 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Nov 2020 17:39:43 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Dec 2020 23:43:24 GMT",
"version": "v3"
},
{
"created": "Sun, 3 Jan 2021 14:12:28 GMT",
"version": "v4"
}
] |
2021-01-05
|
[
[
"Tsukamoto",
"Hiroyasu",
""
],
[
"Chung",
"Soon-Jo",
""
],
[
"Slotine",
"Jean-Jacques E.",
""
]
] |
We present Neural Stochastic Contraction Metrics (NSCM), a new design framework for provably-stable robust control and estimation for a class of stochastic nonlinear systems. It uses a spectrally-normalized deep neural network to construct a contraction metric, sampled via simplified convex optimization in the stochastic setting. Spectral normalization constrains the state-derivatives of the metric to be Lipschitz continuous, thereby ensuring exponential boundedness of the mean squared distance of system trajectories under stochastic disturbances. The NSCM framework allows autonomous agents to approximate optimal stable control and estimation policies in real-time, and outperforms existing nonlinear control and estimation techniques including the state-dependent Riccati equation, iterative LQR, EKF, and the deterministic neural contraction metric, as illustrated in simulation results.
|
1012.3295
|
Laura Sanit\`a
|
Friedrich Eisenbrand, Naonori Kakimura, Thomas Rothvo{\ss}, Laura
Sanit\`a
|
Set Covering with Ordered Replacement -- Additive and Multiplicative
Gaps
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider set covering problems where the underlying set system satisfies a
particular replacement property w.r.t. a given partial order on the elements:
Whenever a set is in the set system then a set stemming from it via the
replacement of an element by a smaller element is also in the set system. Many
variants of BIN PACKING that have appeared in the literature are such set
covering problems with ordered replacement. We provide a rigorous account of
the additive and multiplicative integrality gap and approximability of set
covering with replacement. In particular we provide a polylogarithmic upper
bound on the additive integrality gap that also yields a polynomial time
additive approximation algorithm if the linear programming relaxation can be
efficiently solved. We furthermore present an extensive list of covering
problems that fall into our framework and consequently have polylogarithmic
additive gaps as well.
|
[
{
"created": "Wed, 15 Dec 2010 11:56:31 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
[
"Eisenbrand",
"Friedrich",
""
],
[
"Kakimura",
"Naonori",
""
],
[
"Rothvoß",
"Thomas",
""
],
[
"Sanità",
"Laura",
""
]
] |
We consider set covering problems where the underlying set system satisfies a particular replacement property w.r.t. a given partial order on the elements: Whenever a set is in the set system then a set stemming from it via the replacement of an element by a smaller element is also in the set system. Many variants of BIN PACKING that have appeared in the literature are such set covering problems with ordered replacement. We provide a rigorous account of the additive and multiplicative integrality gap and approximability of set covering with replacement. In particular we provide a polylogarithmic upper bound on the additive integrality gap that also yields a polynomial time additive approximation algorithm if the linear programming relaxation can be efficiently solved. We furthermore present an extensive list of covering problems that fall into our framework and consequently have polylogarithmic additive gaps as well.
|
2303.09181
|
Yong Liu
|
Kunyang Han, Yong Liu, Jun Hao Liew, Henghui Ding, Yunchao Wei, Jiajun
Liu, Yitong Wang, Yansong Tang, Yujiu Yang, Jiashi Feng, Yao Zhao
|
Global Knowledge Calibration for Fast Open-Vocabulary Segmentation
|
Accepted by ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in pre-trained vision-language models, such as CLIP, have
enabled the segmentation of arbitrary concepts solely from textual inputs, a
process commonly referred to as open-vocabulary semantic segmentation (OVS).
However, existing OVS techniques confront a fundamental challenge: the trained
classifier tends to overfit on the base classes observed during training,
resulting in suboptimal generalization performance to unseen classes. To
mitigate this issue, recent studies have proposed the use of an additional
frozen pre-trained CLIP for classification. Nonetheless, this approach incurs
heavy computational overheads as the CLIP vision encoder must be repeatedly
forward-passed for each mask, rendering it impractical for real-world
applications. To address this challenge, our objective is to develop a fast OVS
model that can perform comparably or better without the extra computational
burden of the CLIP image encoder during inference. To this end, we propose a
core idea of preserving the generalizable representation when fine-tuning on
known classes. Specifically, we introduce a text diversification strategy that
generates a set of synonyms for each training category, which prevents the
learned representation from collapsing onto specific known category names.
Additionally, we employ a text-guided knowledge distillation method to preserve
the generalizable knowledge of CLIP. Extensive experiments demonstrate that our
proposed model achieves robust generalization performance across various
datasets. Furthermore, we perform a preliminary exploration of open-vocabulary
video segmentation and present a benchmark that can facilitate future
open-vocabulary research in the video domain.
|
[
{
"created": "Thu, 16 Mar 2023 09:51:41 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jul 2023 05:10:22 GMT",
"version": "v2"
}
] |
2023-07-18
|
[
[
"Han",
"Kunyang",
""
],
[
"Liu",
"Yong",
""
],
[
"Liew",
"Jun Hao",
""
],
[
"Ding",
"Henghui",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Liu",
"Jiajun",
""
],
[
"Wang",
"Yitong",
""
],
[
"Tang",
"Yansong",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Zhao",
"Yao",
""
]
] |
Recent advancements in pre-trained vision-language models, such as CLIP, have enabled the segmentation of arbitrary concepts solely from textual inputs, a process commonly referred to as open-vocabulary semantic segmentation (OVS). However, existing OVS techniques confront a fundamental challenge: the trained classifier tends to overfit on the base classes observed during training, resulting in suboptimal generalization performance to unseen classes. To mitigate this issue, recent studies have proposed the use of an additional frozen pre-trained CLIP for classification. Nonetheless, this approach incurs heavy computational overheads as the CLIP vision encoder must be repeatedly forward-passed for each mask, rendering it impractical for real-world applications. To address this challenge, our objective is to develop a fast OVS model that can perform comparably or better without the extra computational burden of the CLIP image encoder during inference. To this end, we propose a core idea of preserving the generalizable representation when fine-tuning on known classes. Specifically, we introduce a text diversification strategy that generates a set of synonyms for each training category, which prevents the learned representation from collapsing onto specific known category names. Additionally, we employ a text-guided knowledge distillation method to preserve the generalizable knowledge of CLIP. Extensive experiments demonstrate that our proposed model achieves robust generalization performance across various datasets. Furthermore, we perform a preliminary exploration of open-vocabulary video segmentation and present a benchmark that can facilitate future open-vocabulary research in the video domain.
|
1612.03472
|
Dominik Sch\"urmann
|
Dominik Sch\"urmann and Arne Br\"usch and Stephan Sigg and Lars Wolf
|
BANDANA -- Body Area Network Device-to-device Authentication using
Natural gAit
| null |
IEEE International Conference on Pervasive Computing and
Communications (PerCom) 2017
|
10.1109/PERCOM.2017.7917865
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Secure spontaneous authentication between devices worn at arbitrary locations
on the same body is a challenging, yet unsolved problem. We propose BANDANA,
the first-ever implicit secure device-to-device authentication scheme for
devices worn on the same body. Our approach leverages instantaneous variation
in acceleration patterns from gait sequences to extract always-fresh secure
secrets. It enables secure spontaneous pairing of devices worn on the same body
or interacted with. The method is robust against noise in sensor readings and
active attackers. We demonstrate the robustness of BANDANA on two gait datasets
and discuss the discriminability of intra- and inter-body cases, robustness to
statistical bias, as well as possible attack scenarios.
|
[
{
"created": "Sun, 11 Dec 2016 20:43:57 GMT",
"version": "v1"
}
] |
2018-04-09
|
[
[
"Schürmann",
"Dominik",
""
],
[
"Brüsch",
"Arne",
""
],
[
"Sigg",
"Stephan",
""
],
[
"Wolf",
"Lars",
""
]
] |
Secure spontaneous authentication between devices worn at arbitrary locations on the same body is a challenging, yet unsolved problem. We propose BANDANA, the first-ever implicit secure device-to-device authentication scheme for devices worn on the same body. Our approach leverages instantaneous variation in acceleration patterns from gait sequences to extract always-fresh secure secrets. It enables secure spontaneous pairing of devices worn on the same body or interacted with. The method is robust against noise in sensor readings and active attackers. We demonstrate the robustness of BANDANA on two gait datasets and discuss the discriminability of intra- and inter-body cases, robustness to statistical bias, as well as possible attack scenarios.
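The quantize-and-compare idea behind gait-based pairing can be sketched as follows. The real BANDANA scheme aligns gait cycles and uses fuzzy commitments for key agreement; here, quantizing each acceleration sample against the sequence mean and the sample sequences themselves are illustrative assumptions:

```python
# Sketch of gait-based fingerprinting: quantize an acceleration
# sequence into bits and compare fingerprints from two sensors.

def fingerprint(samples):
    """One bit per sample: 1 if above the sequence mean, else 0."""
    mean = sum(samples) / len(samples)
    return [1 if s > mean else 0 for s in samples]

def agreement(fp_a, fp_b):
    """Fraction of matching bits between two equal-length fingerprints."""
    matches = sum(a == b for a, b in zip(fp_a, fp_b))
    return matches / len(fp_a)

same_body  = [0.1, 0.9, 0.2, 1.1, 0.0, 1.0]   # sensor A on the body
same_noisy = [0.2, 0.8, 0.1, 1.2, 0.1, 0.9]   # sensor B, same gait plus noise
attacker   = [1.0, 0.1, 0.9, 0.2, 1.1, 0.0]   # a different gait
```

Two sensors on the same body see highly correlated acceleration and thus agree on most bits despite noise, while an attacker's gait yields a dissimilar fingerprint; that gap is what the intra- versus inter-body discriminability discussion in the abstract quantifies.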
|
1703.06337
|
Andriy Miranskyy
|
Mefta Sadat and Ayse Basar Bener and Andriy V. Miranskyy
|
Rediscovery Datasets: Connecting Duplicate Reports
| null |
Proceedings of the 14th International Conference on Mining
Software Repositories (MSR '17). IEEE Press, Piscataway, NJ, USA, 527-530,
2017
|
10.1109/MSR.2017.50
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The same defect can be rediscovered by multiple clients, causing unplanned
outages and leading to reduced customer satisfaction. In the case of popular
open source software, a high volume of defects is reported on a regular basis. A
large number of these reports are actually duplicates / rediscoveries of each
other. Researchers have analyzed the factors related to the content of
duplicate defect reports in the past. However, some of the other potentially
important factors, such as the inter-relationships among duplicate defect
reports, are not readily available in defect tracking systems such as Bugzilla.
This information may speed up bug fixing, enable efficient triaging, improve
customer profiles, etc.
In this paper, we present three defect rediscovery datasets mined from
Bugzilla. The datasets capture data for three groups of open source software
projects: Apache, Eclipse, and KDE. The datasets contain information about
approximately 914 thousand defect reports over a period of 18 years
(1999-2017) to capture the inter-relationships among duplicate defects. We
believe that sharing these data with the community will help researchers and
practitioners to better understand the nature of defect rediscovery and enhance
the analysis of defect reports.
|
[
{
"created": "Sat, 18 Mar 2017 19:01:38 GMT",
"version": "v1"
}
] |
2017-06-14
|
[
[
"Sadat",
"Mefta",
""
],
[
"Bener",
"Ayse Basar",
""
],
[
"Miranskyy",
"Andriy V.",
""
]
] |
The same defect can be rediscovered by multiple clients, causing unplanned outages and leading to reduced customer satisfaction. In the case of popular open source software, a high volume of defects is reported on a regular basis. A large number of these reports are actually duplicates / rediscoveries of each other. Researchers have analyzed the factors related to the content of duplicate defect reports in the past. However, some of the other potentially important factors, such as the inter-relationships among duplicate defect reports, are not readily available in defect tracking systems such as Bugzilla. This information may speed up bug fixing, enable efficient triaging, improve customer profiles, etc. In this paper, we present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. The datasets contain information about approximately 914 thousand defect reports over a period of 18 years (1999-2017) to capture the inter-relationships among duplicate defects. We believe that sharing these data with the community will help researchers and practitioners to better understand the nature of defect rediscovery and enhance the analysis of defect reports.
|
2210.06739
|
Ueverton Souza
|
Janio Carlos Nascimento Silva, U\'everton S. Souza
|
Computing the Best Case Energy Complexity of Satisfying Assignments in
Monotone Circuits
| null | null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measures of circuit complexity are usually analyzed to ensure the computation
of Boolean functions with economy and efficiency. One of these measures is
energy complexity, which is related to the number of gates that output true in
a circuit for an assignment. The idea behind energy complexity comes from the
counting of `firing' neurons in a natural neural network. The initial model is
based on threshold circuits, but recent works have also analyzed the energy
complexity of traditional Boolean circuits. In this work, we discuss the time
complexity needed to compute the best-case energy complexity among satisfying
assignments of a monotone Boolean circuit, a problem we call
MinEC$^+_M$. In the MinEC$^+_M$ problem, we are given a monotone Boolean
circuit $C$ and a positive integer $k$, and are asked to determine whether there is a
satisfying assignment $X$ for $C$ such that $EC(C,X) \leq k$, where $EC(C,X)$
is the number of gates that output true in $C$ according to the assignment $X$.
We prove that MinEC$^+_M$ is NP-complete even when the input monotone circuit
is planar. In addition, we show that the problem is W[1]-hard but in XP when
parameterized by the size of the solution. In contrast, we show that when the
size of the solution and the genus of the input circuit are aggregated
parameters, the MinEC$^+_M$ problem becomes fixed-parameter tractable.
|
[
{
"created": "Thu, 13 Oct 2022 05:05:56 GMT",
"version": "v1"
}
] |
2022-10-14
|
[
[
"Silva",
"Janio Carlos Nascimento",
""
],
[
"Souza",
"Uéverton S.",
""
]
] |
Measures of circuit complexity are usually analyzed to ensure the computation of Boolean functions with economy and efficiency. One of these measures is energy complexity, which is related to the number of gates that output true in a circuit for an assignment. The idea behind energy complexity comes from the counting of `firing' neurons in a natural neural network. The initial model is based on threshold circuits, but recent works have also analyzed the energy complexity of traditional Boolean circuits. In this work, we discuss the time complexity needed to compute the best-case energy complexity among satisfying assignments of a monotone Boolean circuit, a problem we call MinEC$^+_M$. In the MinEC$^+_M$ problem, we are given a monotone Boolean circuit $C$ and a positive integer $k$, and are asked to determine whether there is a satisfying assignment $X$ for $C$ such that $EC(C,X) \leq k$, where $EC(C,X)$ is the number of gates that output true in $C$ according to the assignment $X$. We prove that MinEC$^+_M$ is NP-complete even when the input monotone circuit is planar. In addition, we show that the problem is W[1]-hard but in XP when parameterized by the size of the solution. In contrast, we show that when the size of the solution and the genus of the input circuit are aggregated parameters, the MinEC$^+_M$ problem becomes fixed-parameter tractable.
|
2407.19988
|
Yili Jin
|
Yili Jin, Xize Duan, Fangxin Wang, Xue Liu
|
HeadsetOff: Enabling Photorealistic Video Conferencing on Economical VR
Headsets
|
Accepted by ACM Multimedia 2024
| null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual Reality (VR) headsets have become increasingly popular for remote
collaboration, but video conferencing poses challenges when the user's face is
covered by the headset. Existing solutions have limitations in terms of
accessibility. In this paper, we propose HeadsetOff, a novel system that
achieves photorealistic video conferencing on economical VR headsets by
leveraging voice-driven face reconstruction. HeadsetOff consists of three main
components: a multimodal attention-based predictor, a generator, and an
adaptive controller. The predictor effectively predicts the user's future behavior
based on different modalities. The generator employs voice input, head motion,
and eye blinks to animate the human face. The adaptive controller dynamically
selects the appropriate generator model based on the trade-off between video
quality and delay, aiming to maximize Quality of Experience while minimizing
latency. Experimental results demonstrate the effectiveness of HeadsetOff in
achieving high-quality, low-latency video conferencing on economical VR
headsets.
|
[
{
"created": "Mon, 29 Jul 2024 13:20:22 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Jin",
"Yili",
""
],
[
"Duan",
"Xize",
""
],
[
"Wang",
"Fangxin",
""
],
[
"Liu",
"Xue",
""
]
] |
Virtual Reality (VR) headsets have become increasingly popular for remote collaboration, but video conferencing poses challenges when the user's face is covered by the headset. Existing solutions have limitations in terms of accessibility. In this paper, we propose HeadsetOff, a novel system that achieves photorealistic video conferencing on economical VR headsets by leveraging voice-driven face reconstruction. HeadsetOff consists of three main components: a multimodal attention-based predictor, a generator, and an adaptive controller. The predictor effectively predicts the user's future behavior based on different modalities. The generator employs voice input, head motion, and eye blinks to animate the human face. The adaptive controller dynamically selects the appropriate generator model based on the trade-off between video quality and delay, aiming to maximize Quality of Experience while minimizing latency. Experimental results demonstrate the effectiveness of HeadsetOff in achieving high-quality, low-latency video conferencing on economical VR headsets.
|
2110.08259
|
Hangcheng Dong
|
Hangcheng Dong, Jingxiao Liao, Yan Wang, Yixin Chen, Bingguo Liu, Dong
Ye and Guodong Liu
|
Training Neural Networks for Solving 1-D Optimal Piecewise Linear
Approximation
| null | null | null | null |
cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the interpretability of deep learning has attracted a lot of
attention. A plethora of methods have attempted to explain neural networks by
feature visualization, saliency maps, model distillation, and so on. However,
it is hard for these methods to reveal the intrinsic properties of neural
networks. In this work, we studied the 1-D optimal piecewise linear
approximation (PWLA) problem, and associated it with a designed neural network,
named the lattice neural network (LNN). We asked four essential questions: (1)
What are the characteristics of the optimal solution of the PWLA problem? (2)
Can an LNN converge to the global optimum? (3) Can an LNN converge to a local
optimum? (4) Can an LNN solve the PWLA problem? Our main contributions are
theorems characterizing the optimal solution of the PWLA problem and the LNN
method for solving it. We evaluated the proposed LNNs on approximation tasks
and developed an empirical method to improve their performance. The experiments
verified that our LNN method is competitive with the state-of-the-art method.
|
[
{
"created": "Thu, 14 Oct 2021 14:41:17 GMT",
"version": "v1"
}
] |
2021-10-19
|
[
[
"Dong",
"Hangcheng",
""
],
[
"Liao",
"Jingxiao",
""
],
[
"Wang",
"Yan",
""
],
[
"Chen",
"Yixin",
""
],
[
"Liu",
"Bingguo",
""
],
[
"Ye",
"Dong",
""
],
[
"Liu",
"Guodong",
""
]
] |
Recently, the interpretability of deep learning has attracted a lot of attention. A plethora of methods have attempted to explain neural networks by feature visualization, saliency maps, model distillation, and so on. However, it is hard for these methods to reveal the intrinsic properties of neural networks. In this work, we studied the 1-D optimal piecewise linear approximation (PWLA) problem, and associated it with a designed neural network, named the lattice neural network (LNN). We asked four essential questions: (1) What are the characteristics of the optimal solution of the PWLA problem? (2) Can an LNN converge to the global optimum? (3) Can an LNN converge to a local optimum? (4) Can an LNN solve the PWLA problem? Our main contributions are theorems characterizing the optimal solution of the PWLA problem and the LNN method for solving it. We evaluated the proposed LNNs on approximation tasks and developed an empirical method to improve their performance. The experiments verified that our LNN method is competitive with the state-of-the-art method.
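The 1-D PWLA problem the abstract studies can be illustrated with a plain least-squares baseline (not the authors' LNN): once breakpoints are fixed, a continuous piecewise linear function is linear in a hinge basis, so fitting reduces to ordinary least squares. The knot placement and function names below are illustrative assumptions.

```python
import numpy as np

def _hinge_basis(x, knots):
    # Continuous piecewise linear functions with fixed knots are linear in
    # the basis {1, x, max(0, x - t_k)}, so fitting is ordinary least squares.
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - t) for t in knots]
    return np.stack(cols, axis=1)

def fit_pwl(x, y, knots):
    coef, *_ = np.linalg.lstsq(_hinge_basis(x, knots), y, rcond=None)
    return coef

def predict_pwl(x, coef, knots):
    return _hinge_basis(x, knots) @ coef

# Sanity check on a target that is itself piecewise linear with a knot at 0.5,
# so the fit should be exact up to floating-point error.
x = np.linspace(0.0, 1.0, 200)
y = np.abs(x - 0.5)
coef = fit_pwl(x, y, knots=[0.5])
err = float(np.max(np.abs(predict_pwl(x, coef, knots=[0.5]) - y)))
```

The hard part of PWLA, which this sketch sidesteps entirely, is optimizing the knot locations themselves; that is where the paper's LNN formulation comes in.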
|
2307.15988
|
Sascha Kirch
|
Sascha Kirch (1), Valeria Olyunina (2), Jan Ond\v{r}ej (2), Rafael
Pag\'es (2), Sergio Martin (1), Clara P\'erez-Molina (1) ((1) UNED -
Universidad Nacional de Educaci\'on a Distancia, Madrid, Spain, (2) Volograms
ltd, Dublin, Ireland)
|
RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects
| null | null |
10.1109/ACCESS.2023.3312017
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present RGB-D-Fusion, a multi-modal conditional denoising diffusion
probabilistic model to generate high resolution depth maps from low-resolution
monocular RGB images of humanoid subjects. RGB-D-Fusion first generates a
low-resolution depth map using an image conditioned denoising diffusion
probabilistic model and then upsamples the depth map using a second denoising
diffusion probabilistic model conditioned on a low-resolution RGB-D image. We
further introduce a novel augmentation technique, depth noise augmentation, to
increase the robustness of our super-resolution model.
|
[
{
"created": "Sat, 29 Jul 2023 13:47:40 GMT",
"version": "v1"
}
] |
2023-09-25
|
[
[
"Kirch",
"Sascha",
""
],
[
"Olyunina",
"Valeria",
""
],
[
"Ondřej",
"Jan",
""
],
[
"Pagés",
"Rafael",
""
],
[
"Martin",
"Sergio",
""
],
[
"Pérez-Molina",
"Clara",
""
]
] |
We present RGB-D-Fusion, a multi-modal conditional denoising diffusion probabilistic model to generate high resolution depth maps from low-resolution monocular RGB images of humanoid subjects. RGB-D-Fusion first generates a low-resolution depth map using an image conditioned denoising diffusion probabilistic model and then upsamples the depth map using a second denoising diffusion probabilistic model conditioned on a low-resolution RGB-D image. We further introduce a novel augmentation technique, depth noise augmentation, to increase the robustness of our super-resolution model.
|
1804.01834
|
Mehdi Salehi Heydar Abad
|
Mehdi Salehi Heydar Abad, Ozgur Ercetin
|
Finite Horizon Throughput Maximization and Sensing Optimization in
Wireless Powered Devices over Fading Channels
|
Single column, 31 pages
| null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless power transfer (WPT) is a promising technology that gives a network a
way to replenish the batteries of remote devices via RF transmissions. We study
a class of harvest-first-transmit-later WPT
policy, where an access point (AP) first employs RF power transfer to recharge
a wireless powered device (WPD) for a period that is subject to optimization,
after which the harvested energy is used by the WPD to transmit its
data bits back to the AP over a finite horizon. A significant challenge
regarding the studied WPT scenario is the time-varying nature of the wireless
channel linking the WPD to the AP. We first investigate as a benchmark the
offline case where the channel realizations are known non-causally prior to the
start of the horizon. For the offline case, by finding the optimal WPT
duration and power allocations in the data transmission period, we derive an
upper bound on the throughput of the WPD. We then focus on the online
counterpart of the problem where the channel realizations are known causally.
We prove that the optimal WPT duration obeys a time-dependent threshold form
depending on the energy state of the WPD. In the subsequent data transmission
stage, the optimal transmit power allocation for the WPD is shown to be of a
fractional structure where at each time slot a fraction of energy depending on
the current channel and a measure of future channel state expectations is
allocated for data transmission. We numerically show that the online policy
performs almost identically to the upper bound. We then consider a data sensing
application, where the WPD adjusts the sensing resolution to balance between
the quality of the sensed data and the probability of successfully delivering
it. We use Bayesian inference as a reinforcement learning method to provide a
means for the WPD to learn to balance the sensing resolution.
|
[
{
"created": "Sat, 17 Mar 2018 19:40:40 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Sep 2018 21:23:31 GMT",
"version": "v2"
}
] |
2018-09-11
|
[
[
"Abad",
"Mehdi Salehi Heydar",
""
],
[
"Ercetin",
"Ozgur",
""
]
] |
Wireless power transfer (WPT) is a promising technology that gives a network a way to replenish the batteries of remote devices via RF transmissions. We study a class of harvest-first-transmit-later WPT policy, where an access point (AP) first employs RF power transfer to recharge a wireless powered device (WPD) for a period that is subject to optimization, after which the harvested energy is used by the WPD to transmit its data bits back to the AP over a finite horizon. A significant challenge regarding the studied WPT scenario is the time-varying nature of the wireless channel linking the WPD to the AP. We first investigate as a benchmark the offline case where the channel realizations are known non-causally prior to the start of the horizon. For the offline case, by finding the optimal WPT duration and power allocations in the data transmission period, we derive an upper bound on the throughput of the WPD. We then focus on the online counterpart of the problem where the channel realizations are known causally. We prove that the optimal WPT duration obeys a time-dependent threshold form depending on the energy state of the WPD. In the subsequent data transmission stage, the optimal transmit power allocation for the WPD is shown to be of a fractional structure where at each time slot a fraction of energy, depending on the current channel and a measure of future channel state expectations, is allocated for data transmission. We numerically show that the online policy performs almost identically to the upper bound. We then consider a data sensing application, where the WPD adjusts the sensing resolution to balance between the quality of the sensed data and the probability of successfully delivering it. We use Bayesian inference as a reinforcement learning method to provide a means for the WPD to learn to balance the sensing resolution.
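For the offline benchmark with non-causally known channels, allocating a fixed energy budget across fading states takes the classic water-filling form. The sketch below is a generic water-filling routine under an assumed unit noise power, not the paper's exact allocation (which also jointly optimizes the WPT duration); the `gains` and `total_power` values are invented for illustration.

```python
def waterfill(gains, total_power, iters=100):
    # Maximize sum_i log(1 + g_i * p_i) s.t. sum_i p_i = total_power.
    # Bisect on the water level mu; slot i gets p_i = max(0, mu - 1/g_i).
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    mu = hi
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Strong channels get more power; a very weak one may get none at all.
powers = waterfill([2.0, 1.0, 0.1], total_power=3.0)
```

With these gains the water level settles at mu = 2.25, so the weakest slot (1/g = 10) stays dry and the budget splits 1.75/1.25 between the two stronger slots.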
|
1812.04486
|
Eric Benhamou
|
David Saltiel and Eric Benhamou
|
Trade Selection with Supervised Learning and OCA
|
7 pages, 9 figures. arXiv admin note: substantial text overlap with
arXiv:1811.12064
| null | null | null |
cs.LG q-fin.CP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, state-of-the-art methods for supervised learning have
increasingly exploited gradient boosting techniques, with mainstream efficient
implementations such as xgboost or lightgbm. One of the key points in building
proficient methods is Feature Selection (FS): selecting the right, genuinely
effective features. When facing hundreds of features, it becomes critical to
select the best ones. While filter and wrapper methods have come to some
maturity, embedded methods are truly necessary to find the best feature set,
as they are hybrid methods combining feature filtering and wrapping. In this
work, we tackle the problem of finding, through machine learning, the best a
priori trades from an algorithmic strategy. We derive this new method using
coordinate ascent optimization with block variables. We compare our method to
Recursive Feature Elimination (RFE) and Binary Coordinate Ascent (BCA). We show
on a real-life example the capacity of this method to select good trades a
priori. Not only does this method outperform the initial trading strategy by
avoiding losing trades, it also surpasses the other methods, having the
smallest feature set and the highest score at the same time. The interest of
this method goes beyond this simple trade classification problem, as it is a
very general method for determining the optimal feature set using information
about feature relationships as well as coordinate ascent optimization.
|
[
{
"created": "Sun, 9 Dec 2018 21:07:06 GMT",
"version": "v1"
}
] |
2018-12-12
|
[
[
"Saltiel",
"David",
""
],
[
"Benhamou",
"Eric",
""
]
] |
In recent years, state-of-the-art methods for supervised learning have increasingly exploited gradient boosting techniques, with mainstream efficient implementations such as xgboost or lightgbm. One of the key points in building proficient methods is Feature Selection (FS): selecting the right, genuinely effective features. When facing hundreds of features, it becomes critical to select the best ones. While filter and wrapper methods have come to some maturity, embedded methods are truly necessary to find the best feature set, as they are hybrid methods combining feature filtering and wrapping. In this work, we tackle the problem of finding, through machine learning, the best a priori trades from an algorithmic strategy. We derive this new method using coordinate ascent optimization with block variables. We compare our method to Recursive Feature Elimination (RFE) and Binary Coordinate Ascent (BCA). We show on a real-life example the capacity of this method to select good trades a priori. Not only does this method outperform the initial trading strategy by avoiding losing trades, it also surpasses the other methods, having the smallest feature set and the highest score at the same time. The interest of this method goes beyond this simple trade classification problem, as it is a very general method for determining the optimal feature set using information about feature relationships as well as coordinate ascent optimization.
|
1206.0305
|
Ghassan Samara
|
Ghassan Samara
|
Efficient Certificate Management in VANET
|
5 pages. arXiv admin note: text overlap with arXiv:1006.5113, and
with arXiv:1112.2257 by other authors
|
2010 2nd International Conference on Future Computer and
Communication
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular Ad hoc Networks (VANETs) are one of the most challenging research
areas in the field of Mobile Ad Hoc Networks. In this research, we propose a
flexible, simple, and scalable design for VANET certificates, and new methods
for efficient certificate management, which reduce channel overhead by
eliminating the use of CRLs and improve certificate revocation management. The
design also increases the security of the network and helps in identifying
adversary vehicles.
|
[
{
"created": "Fri, 1 Jun 2012 20:40:23 GMT",
"version": "v1"
}
] |
2012-06-05
|
[
[
"Samara",
"Ghassan",
""
]
] |
Vehicular Ad hoc Networks (VANETs) are one of the most challenging research areas in the field of Mobile Ad Hoc Networks. In this research, we propose a flexible, simple, and scalable design for VANET certificates, and new methods for efficient certificate management, which reduce channel overhead by eliminating the use of CRLs and improve certificate revocation management. The design also increases the security of the network and helps in identifying adversary vehicles.
|
2007.05976
|
Saptarshi Ghosh Dr.
|
Shalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra,
Saptarshi Ghosh
|
Stance Detection in Web and Social Media: A Comparative Study
| null |
Proceedings of Conference and Labs of the Evaluation Forum (CLEF)
2019; Lecture Notes in Computer Science, vol 11696, pp. 75-87
|
10.1007/978-3-030-28577-7_4
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online forums and social media platforms are increasingly being used to
discuss topics of varying polarities where different people take different
stances. Several methodologies for automatic stance detection from text have
been proposed in the literature. To our knowledge, there has not been any
systematic investigation into their reproducibility or their comparative
performance. In this work, we explore the reproducibility of several existing
stance detection models, including both neural models and classical
classifier-based models. Through experiments on two datasets -- (i)~the popular
SemEval microblog dataset, and (ii)~a set of health-related online news
articles -- we also perform a detailed comparative analysis of various methods
and explore their shortcomings. Implementations of all algorithms discussed in
this paper are available at
https://github.com/prajwal1210/Stance-Detection-in-Web-and-Social-Media.
|
[
{
"created": "Sun, 12 Jul 2020 12:39:35 GMT",
"version": "v1"
}
] |
2020-07-14
|
[
[
"Ghosh",
"Shalmoli",
""
],
[
"Singhania",
"Prajwal",
""
],
[
"Singh",
"Siddharth",
""
],
[
"Rudra",
"Koustav",
""
],
[
"Ghosh",
"Saptarshi",
""
]
] |
Online forums and social media platforms are increasingly being used to discuss topics of varying polarities where different people take different stances. Several methodologies for automatic stance detection from text have been proposed in the literature. To our knowledge, there has not been any systematic investigation into their reproducibility or their comparative performance. In this work, we explore the reproducibility of several existing stance detection models, including both neural models and classical classifier-based models. Through experiments on two datasets -- (i)~the popular SemEval microblog dataset, and (ii)~a set of health-related online news articles -- we also perform a detailed comparative analysis of various methods and explore their shortcomings. Implementations of all algorithms discussed in this paper are available at https://github.com/prajwal1210/Stance-Detection-in-Web-and-Social-Media.
|
2112.13261
|
Tao Jiang
|
Tao Jiang, Wei Yu
|
Interference Nulling Using Reconfigurable Intelligent Surface
|
This paper is accepted in IEEE Journal on Selected Areas in
Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the interference nulling capability of reconfigurable
intelligent surface (RIS) in a multiuser environment where multiple
single-antenna transceivers communicate simultaneously in a shared spectrum.
From a theoretical perspective, we show that when the channels between the RIS
and the transceivers have line-of-sight and the direct paths are blocked, it is
possible to adjust the phases of the RIS elements to null out all the
interference completely and to achieve the maximum $K$ degrees-of-freedom (DoF)
in the overall $K$-user interference channel, provided that the number of RIS
elements exceeds some finite value that depends on $K$. Algorithmically, for
any fixed channel realization we formulate the interference nulling problem as
a feasibility problem, and propose an alternating projection algorithm to
efficiently solve the resulting nonconvex problem with local convergence
guarantee. Numerical results show that the proposed alternating projection
algorithm can null all the interference if the number of RIS elements is only
slightly larger than a threshold of $2K(K-1)$. For the practical sum-rate
maximization objective, this paper proposes to use the zero-forcing solution
obtained from alternating projection as an initial point for subsequent
Riemannian conjugate gradient optimization and shows that it has a significant
performance advantage over random initializations. For the objective of
maximizing the minimum rate, this paper proposes a subgradient projection
method which is capable of achieving excellent performance at low complexity.
|
[
{
"created": "Sat, 25 Dec 2021 17:21:43 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jan 2022 23:31:45 GMT",
"version": "v2"
}
] |
2022-01-31
|
[
[
"Jiang",
"Tao",
""
],
[
"Yu",
"Wei",
""
]
] |
This paper investigates the interference nulling capability of reconfigurable intelligent surface (RIS) in a multiuser environment where multiple single-antenna transceivers communicate simultaneously in a shared spectrum. From a theoretical perspective, we show that when the channels between the RIS and the transceivers have line-of-sight and the direct paths are blocked, it is possible to adjust the phases of the RIS elements to null out all the interference completely and to achieve the maximum $K$ degrees-of-freedom (DoF) in the overall $K$-user interference channel, provided that the number of RIS elements exceeds some finite value that depends on $K$. Algorithmically, for any fixed channel realization we formulate the interference nulling problem as a feasibility problem, and propose an alternating projection algorithm to efficiently solve the resulting nonconvex problem with local convergence guarantee. Numerical results show that the proposed alternating projection algorithm can null all the interference if the number of RIS elements is only slightly larger than a threshold of $2K(K-1)$. For the practical sum-rate maximization objective, this paper proposes to use the zero-forcing solution obtained from alternating projection as an initial point for subsequent Riemannian conjugate gradient optimization and shows that it has a significant performance advantage over random initializations. For the objective of maximizing the minimum rate, this paper proposes a subgradient projection method which is capable of achieving excellent performance at low complexity.
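The alternating-projection idea can be sketched on a toy version of the feasibility problem: find a unit-modulus vector v (the RIS phase configuration) satisfying a^H v = 0 for a given channel vector a, by alternating between projection onto the affine set {v : a^H v = 0} and entrywise renormalization onto the unit-modulus set. This is a simplified single-constraint illustration with made-up channel values, not the paper's multi-pair algorithm.

```python
import numpy as np

def null_phases(a, v0, iters=200):
    # Alternating projection between {v : a^H v = 0} and |v_i| = 1.
    v = v0.astype(complex)
    for _ in range(iters):
        # Project onto the affine set: remove the component violating a^H v = 0
        v = v - a * (np.vdot(a, v) / np.vdot(a, a))
        # Project back onto the (nonconvex) unit-modulus set, entry by entry
        mag = np.abs(v)
        v = np.where(mag > 0, v / np.maximum(mag, 1e-12), 1.0)
    return v

a = np.ones(4, dtype=complex)  # toy channel vector, not from the paper
v0 = np.exp(1j * np.array([0.3, np.pi, 0.0, np.pi - 0.3]))
v = null_phases(a, v0)
residual = float(abs(np.vdot(a, v)))
```

Because the unit-modulus set is nonconvex, convergence here is only local, which is exactly why the paper can offer a local (not global) convergence guarantee.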
|
1702.03380
|
Guoqiang Zhang
|
Guoqiang Zhang and W. Bastiaan Kleijn
|
Training Deep Neural Networks via Optimization Over Graphs
|
5 pages
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose to train a deep neural network by distributed
optimization over a graph. Two nonlinear functions are considered: the
rectified linear unit (ReLU) and a linear unit with both lower and upper
cutoffs (DCutLU). The problem reformulation over a graph is realized by
explicitly representing ReLU or DCutLU using a set of slack variables. We then
apply the alternating direction method of multipliers (ADMM) to update the
weights of the network layerwise by solving subproblems of the reformulated
problem. Empirical results suggest that the ADMM-based method is less sensitive
to overfitting than the stochastic gradient descent (SGD) and Adam methods.
|
[
{
"created": "Sat, 11 Feb 2017 04:02:40 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Jun 2017 11:18:48 GMT",
"version": "v2"
}
] |
2017-06-20
|
[
[
"Zhang",
"Guoqiang",
""
],
[
"Kleijn",
"W. Bastiaan",
""
]
] |
In this work, we propose to train a deep neural network by distributed optimization over a graph. Two nonlinear functions are considered: the rectified linear unit (ReLU) and a linear unit with both lower and upper cutoffs (DCutLU). The problem reformulation over a graph is realized by explicitly representing ReLU or DCutLU using a set of slack variables. We then apply the alternating direction method of multipliers (ADMM) to update the weights of the network layerwise by solving subproblems of the reformulated problem. Empirical results suggest that the ADMM-based method is less sensitive to overfitting than the stochastic gradient descent (SGD) and Adam methods.
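ADMM itself can be illustrated on a far smaller splitting than the paper's layerwise scheme: the scalar lasso problem min_x 0.5*(x - a)^2 + lam*|x|, split as f(x) + g(z) with the constraint x = z. Its closed-form answer is soft-thresholding, so the iteration can be checked exactly. This is a generic ADMM demo, not the authors' network-training procedure.

```python
def soft_threshold(v, t):
    # Proximal operator of t*|.|: shrink v toward zero by t
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_lasso_1d(a, lam, rho=1.0, iters=200):
    # Solve min_x 0.5*(x - a)**2 + lam*|x| via ADMM with splitting x = z.
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # quadratic f-update
        z = soft_threshold(x + u, lam / rho)    # prox of g
        u += x - z                              # scaled dual update
    return z

# Closed form is soft_threshold(a, lam): for a = 3, lam = 1 the answer is 2.
z_hat = admm_lasso_1d(3.0, 1.0)
```

The paper's scheme follows the same x-update/z-update/dual-update pattern, but with layer weights and slack variables representing the ReLU/DCutLU nonlinearities in place of x and z.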
|
2010.15012
|
Vanlin Sathya
|
Vanlin Sathya, Muhammad Iqbal Rochman, and Monisha Ghosh
|
Measurement-based coexistence studies of LAA & Wi-Fi deployments in
Chicago
|
IEEE Wireless Communication Magazine, October 2020
| null | null | null |
cs.NI cs.PF eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
LTE-Licensed Assisted Access (LAA) networks are beginning to be deployed
widely in major metropolitan areas in the US in the unlicensed 5 GHz bands,
which already have dense deployments of Wi-Fi as well. Various aspects of the
coexistence scenarios that such deployments give rise to have been considered
in a vast body of academic and industry research. However, there is very little data
and research on how these coexisting networks will behave in practice. The
question of fair coexistence between Wi-Fi and LAA has moved from a theoretical
question to reality. The recent roll-out of LAA deployments provides an
opportunity to collect data on the operation of these networks as well as
studying coexistence issues on the ground. In this paper we describe the first
results of a measurement campaign conducted over many months, using custom apps
as well as off-the-shelf tools, in several areas of Chicago where the major
carriers have been expanding LAA deployments. The measurements reveal that
coexistence between LAA and Wi-Fi in dense, urban environments, where both
systems aggregate multiple channels, continues to be a challenging problem that
requires further research.
|
[
{
"created": "Wed, 28 Oct 2020 14:37:40 GMT",
"version": "v1"
}
] |
2020-10-29
|
[
[
"Sathya",
"Vanlin",
""
],
[
"Rochman",
"Muhammad Iqbal",
""
],
[
"Ghosh",
"Monisha",
""
]
] |
LTE-Licensed Assisted Access (LAA) networks are beginning to be deployed widely in major metropolitan areas in the US in the unlicensed 5 GHz bands, which already have dense deployments of Wi-Fi as well. Various aspects of the coexistence scenarios that such deployments give rise to have been considered in a vast body of academic and industry research. However, there is very little data and research on how these coexisting networks will behave in practice. The question of fair coexistence between Wi-Fi and LAA has moved from a theoretical question to reality. The recent roll-out of LAA deployments provides an opportunity to collect data on the operation of these networks as well as studying coexistence issues on the ground. In this paper we describe the first results of a measurement campaign conducted over many months, using custom apps as well as off-the-shelf tools, in several areas of Chicago where the major carriers have been expanding LAA deployments. The measurements reveal that coexistence between LAA and Wi-Fi in dense, urban environments, where both systems aggregate multiple channels, continues to be a challenging problem that requires further research.
|
1107.2554
|
Julia Chuzhoy
|
Julia Chuzhoy
|
Routing in Undirected Graphs with Constant Congestion
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an undirected graph G=(V,E), a collection (s_1,t_1),...,(s_k,t_k) of k
source-sink pairs, and an integer c, the goal in the Edge Disjoint Paths with
Congestion problem is to connect the maximum possible number of source-sink
pairs by paths, so that the maximum load on any edge (called edge congestion)
does not exceed c.
We show an efficient randomized algorithm to route $\Omega(OPT/\poly\log k)$
source-sink pairs with congestion at most 14, where OPT is the maximum number
of pairs that can be simultaneously routed on edge-disjoint paths. The best
previous algorithm that routed $\Omega(OPT/\poly\log n)$ pairs required
congestion $\poly(\log \log n)$, and for the setting where the maximum allowed
congestion is bounded by a constant c, the best previous algorithms could only
guarantee the routing of $OPT/n^{O(1/c)}$ pairs.
|
[
{
"created": "Wed, 13 Jul 2011 14:04:11 GMT",
"version": "v1"
}
] |
2011-07-14
|
[
[
"Chuzhoy",
"Julia",
""
]
] |
Given an undirected graph G=(V,E), a collection (s_1,t_1),...,(s_k,t_k) of k source-sink pairs, and an integer c, the goal in the Edge Disjoint Paths with Congestion problem is to connect the maximum possible number of source-sink pairs by paths, so that the maximum load on any edge (called edge congestion) does not exceed c. We show an efficient randomized algorithm to route $\Omega(OPT/\poly\log k)$ source-sink pairs with congestion at most 14, where OPT is the maximum number of pairs that can be simultaneously routed on edge-disjoint paths. The best previous algorithm that routed $\Omega(OPT/\poly\log n)$ pairs required congestion $\poly(\log \log n)$, and for the setting where the maximum allowed congestion is bounded by a constant c, the best previous algorithms could only guarantee the routing of $OPT/n^{O(1/c)}$ pairs.
|
2309.15970
|
An Thai Le
|
An T. Le, Georgia Chalvatzaki, Armin Biess, Jan Peters
|
Accelerating Motion Planning via Optimal Transport
|
Published as a conference paper at NeurIPS 2023. Project website:
https://sites.google.com/view/sinkhorn-step/
| null | null | null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion planning is still an open problem for many disciplines, e.g.,
robotics, autonomous driving, due to their need for high computational
resources that hinder real-time, efficient decision-making. A class of methods
striving to provide smooth solutions is gradient-based trajectory optimization.
However, those methods usually suffer from bad local minima, while for many
settings, they may be inapplicable due to the absence of easy-to-access
gradients of the optimization objectives. In response to these issues, we
introduce Motion Planning via Optimal Transport (MPOT) -- a
\textit{gradient-free} method that optimizes a batch of smooth trajectories
over highly nonlinear costs, even for high-dimensional tasks, while imposing
smoothness through a Gaussian Process dynamics prior via the
planning-as-inference perspective. To facilitate batch trajectory optimization,
we introduce an original zero-order and highly-parallelizable update rule: the
Sinkhorn Step, which uses the regular polytope family for its search
directions. Each regular polytope, centered on trajectory waypoints, serves as
a local cost-probing neighborhood, acting as a \textit{trust region} where the
Sinkhorn Step "transports" local waypoints toward low-cost regions. We
theoretically show that Sinkhorn Step guides the optimizing parameters toward
local minima regions of non-convex objective functions. We then show the
efficiency of MPOT in a range of problems from low-dimensional point-mass
navigation to high-dimensional whole-body robot motion planning, evincing its
superiority compared to popular motion planners, paving the way for new
applications of optimal transport in motion planning.
|
[
{
"created": "Wed, 27 Sep 2023 19:42:01 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Oct 2023 17:38:59 GMT",
"version": "v2"
}
] |
2023-10-31
|
[
[
"Le",
"An T.",
""
],
[
"Chalvatzaki",
"Georgia",
""
],
[
"Biess",
"Armin",
""
],
[
"Peters",
"Jan",
""
]
] |
Motion planning is still an open problem for many disciplines, e.g., robotics, autonomous driving, due to their need for high computational resources that hinder real-time, efficient decision-making. A class of methods striving to provide smooth solutions is gradient-based trajectory optimization. However, those methods usually suffer from bad local minima, while for many settings, they may be inapplicable due to the absence of easy-to-access gradients of the optimization objectives. In response to these issues, we introduce Motion Planning via Optimal Transport (MPOT) -- a \textit{gradient-free} method that optimizes a batch of smooth trajectories over highly nonlinear costs, even for high-dimensional tasks, while imposing smoothness through a Gaussian Process dynamics prior via the planning-as-inference perspective. To facilitate batch trajectory optimization, we introduce an original zero-order and highly-parallelizable update rule: the Sinkhorn Step, which uses the regular polytope family for its search directions. Each regular polytope, centered on trajectory waypoints, serves as a local cost-probing neighborhood, acting as a \textit{trust region} where the Sinkhorn Step "transports" local waypoints toward low-cost regions. We theoretically show that Sinkhorn Step guides the optimizing parameters toward local minima regions of non-convex objective functions. We then show the efficiency of MPOT in a range of problems from low-dimensional point-mass navigation to high-dimensional whole-body robot motion planning, evincing its superiority compared to popular motion planners, paving the way for new applications of optimal transport in motion planning.
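The zero-order flavor of the Sinkhorn Step can be caricatured in a few lines: probe the cost at the vertices of a regular polygon around the current point and move to the best vertex when it improves. This toy omits everything that makes the actual Sinkhorn Step work at scale (the entropic optimal-transport weighting, batched trajectories, the GP smoothness prior); the cost function and radius are invented for illustration.

```python
import numpy as np

def probe_step(x, cost, radius, n_dirs=8):
    # Evaluate the cost at the vertices of a regular n_dirs-gon around x and
    # move to the best vertex, but only if it actually improves the cost.
    angles = 2.0 * np.pi * np.arange(n_dirs) / n_dirs
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    candidates = x + radius * dirs
    best = candidates[int(np.argmin([cost(c) for c in candidates]))]
    return best if cost(best) < cost(x) else x

cost = lambda p: float(np.sum(p ** 2))   # invented smooth cost for the demo
x = np.array([1.0, 0.5])
for _ in range(200):
    x = probe_step(x, cost, radius=0.05)
final_cost = cost(x)
```

Note that no gradients are ever evaluated: only cost queries at polytope vertices, which is the property that makes the approach usable when objective gradients are unavailable.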
|
1207.4383
|
Zhewei Wei
|
Zhewei Wei, Ke Yi
|
Equivalence between Priority Queues and Sorting in External Memory
|
11 pages, 1 figure
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A priority queue is a fundamental data structure that maintains a dynamic
ordered set of keys and supports the following basic operations: insertion of a
key, deletion of a key, and finding the smallest key. The complexity of the
priority queue is closely related to that of sorting: A priority queue can be
used to implement a sorting algorithm trivially. Thorup
\cite{thorup2007equivalence} proved that the converse is also true in the RAM
model. In particular, he designed a priority queue that uses the sorting
algorithm as a black box, such that the per-operation cost of the priority
queue is asymptotically the same as the per-key cost of sorting. In this paper,
we prove an analogous result in the external memory model, showing that
priority queues are computationally equivalent to sorting in external memory,
under some mild assumptions. The reduction provides a possibility for proving
lower bounds for external sorting via showing a lower bound for priority
queues.
|
[
{
"created": "Wed, 18 Jul 2012 14:32:57 GMT",
"version": "v1"
}
] |
2012-07-19
|
[
[
"Wei",
"Zhewei",
""
],
[
"Yi",
"Ke",
""
]
] |
A priority queue is a fundamental data structure that maintains a dynamic ordered set of keys and supports the following basic operations: insertion of a key, deletion of a key, and finding the smallest key. The complexity of the priority queue is closely related to that of sorting: A priority queue can be used to implement a sorting algorithm trivially. Thorup \cite{thorup2007equivalence} proved that the converse is also true in the RAM model. In particular, he designed a priority queue that uses the sorting algorithm as a black box, such that the per-operation cost of the priority queue is asymptotically the same as the per-key cost of sorting. In this paper, we prove an analogous result in the external memory model, showing that priority queues are computationally equivalent to sorting in external memory, under some mild assumptions. The reduction provides a possibility for proving lower bounds for external sorting via showing a lower bound for priority queues.
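The "trivial" direction of the equivalence, sorting via a priority queue, is just heapsort: insert every key, then repeatedly extract the minimum. (Thorup's converse direction, building a priority queue from a sorting black box, is the hard part and is not sketched here.)

```python
import heapq

def pq_sort(keys):
    # Sorting via a priority queue: insert everything, then extract-min n times.
    heap = list(keys)
    heapq.heapify(heap)  # batched insertion of all n keys
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

Each of the n extract-min operations costs at most the per-operation cost of the priority queue, which is why any priority queue immediately yields a comparison-free reduction from sorting.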
|
2309.11911
|
Shanglin Lei
|
Shanglin Lei, Guanting Dong, Xiaoping Wang, Keheng Wang, Sirui Wang
|
InstructERC: Reforming Emotion Recognition in Conversation with a
Retrieval Multi-task LLMs Framework
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The field of emotion recognition in conversation (ERC) has been focusing on
separating sentence feature encoding and context modeling, lacking exploration
in generative paradigms based on unified designs. In this study, we propose a
novel approach,
\textbf{InstructERC}, to reformulate the ERC task from a discriminative
framework to a generative framework based on Large Language Models (LLMs).
InstructERC makes three significant contributions: (1) it introduces a simple
yet effective retrieval template module, which helps the model explicitly
integrate multi-granularity dialogue supervision information. (2) We introduce
two additional emotion alignment tasks, namely speaker identification and
emotion prediction tasks, to implicitly model the dialogue role relationships
and future emotional tendencies in conversations. (3) Pioneeringly, we unify
emotion labels across benchmarks through the feeling wheel to fit real
application scenarios. InstructERC still performs impressively on this unified
dataset. Our LLM-based plugin framework significantly outperforms all previous
models and achieves comprehensive SOTA on three commonly used ERC datasets.
Extensive analysis of parameter-efficient and data-scaling experiments provides
empirical guidance for applying it in practical scenarios. Our code and aligned
unified dataset (UIME) can be found in the GitHub link.\footnote{You can find
the official realization in the GitHub link:
https://github.com/LIN-SHANG/InstructERC}
|
[
{
"created": "Thu, 21 Sep 2023 09:22:07 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Sep 2023 06:33:53 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Nov 2023 10:41:46 GMT",
"version": "v3"
},
{
"created": "Tue, 12 Mar 2024 12:54:36 GMT",
"version": "v4"
}
] |
2024-03-13
|
[
[
"Lei",
"Shanglin",
""
],
[
"Dong",
"Guanting",
""
],
[
"Wang",
"Xiaoping",
""
],
[
"Wang",
"Keheng",
""
],
[
"Wang",
"Sirui",
""
]
] |
The field of emotion recognition in conversation (ERC) has been focusing on separating sentence feature encoding and context modeling, lacking exploration in generative paradigms based on unified designs. In this study, we propose a novel approach, \textbf{InstructERC}, to reformulate the ERC task from a discriminative framework to a generative framework based on Large Language Models (LLMs). InstructERC makes three significant contributions: (1) it introduces a simple yet effective retrieval template module, which helps the model explicitly integrate multi-granularity dialogue supervision information. (2) We introduce two additional emotion alignment tasks, namely speaker identification and emotion prediction tasks, to implicitly model the dialogue role relationships and future emotional tendencies in conversations. (3) Pioneeringly, we unify emotion labels across benchmarks through the feeling wheel to fit real application scenarios. InstructERC still performs impressively on this unified dataset. Our LLM-based plugin framework significantly outperforms all previous models and achieves comprehensive SOTA on three commonly used ERC datasets. Extensive analysis of parameter-efficient and data-scaling experiments provides empirical guidance for applying it in practical scenarios. Our code and aligned unified dataset (UIME) can be found in the GitHub link.\footnote{You can find the official realization in the GitHub link: https://github.com/LIN-SHANG/InstructERC}
|
2309.14529
|
Yingbo Hua
|
Yingbo Hua
|
Secret-Message Transmission by Echoing Encrypted Probes -- STEEP
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper examines the properties of the lower and upper bounds established
by Maurer, Ahlswede and Csiszar (MAC) for secret-key capacity in the case of
channel probing over single-input and single-output (SISO) channels. Inspired
by the insights into MAC's bounds, a scheme called secret-message transmission
by echoing encrypted probes (STEEP) is proposed. STEEP consists of two phases:
in phase 1, Alice sends random probes over a probing channel to Bob; in phase
2, Bob echoes back an estimated version of the probes, but encrypted by a
secret, over a high-quality return channel. Provided that Eve is unable to
obtain the exact probes transmitted by Alice in phase 1, STEEP guarantees a
positive secrecy rate from Bob to Alice over the return channel even if Eve's
channel strength during channel probing is stronger than Bob's. STEEP is
applicable to both physical layer and upper layers in connected networks.
|
[
{
"created": "Mon, 25 Sep 2023 21:07:17 GMT",
"version": "v1"
}
] |
2023-09-27
|
[
[
"Hua",
"Yingbo",
""
]
] |
This paper examines the properties of the lower and upper bounds established by Maurer, Ahlswede and Csiszar (MAC) for secret-key capacity in the case of channel probing over single-input and single-output (SISO) channels. Inspired by the insights into MAC's bounds, a scheme called secret-message transmission by echoing encrypted probes (STEEP) is proposed. STEEP consists of two phases: in phase 1, Alice sends random probes over a probing channel to Bob; in phase 2, Bob echoes back an estimated version of the probes, but encrypted by a secret, over a high-quality return channel. Provided that Eve is unable to obtain the exact probes transmitted by Alice in phase 1, STEEP guarantees a positive secrecy rate from Bob to Alice over the return channel even if Eve's channel strength during channel probing is stronger than Bob's. STEEP is applicable to both physical layer and upper layers in connected networks.
|
1407.4527
|
Bradford Boyle
|
Bradford D. Boyle and Steven Weber
|
Structural and Optimization Properties for Joint Selection of Source
Rates and Network Flow
|
15 pages, 13 figures, Submitted to IEEE/ACM Transactions on
Networking on 2014-07-16. Correction to Fig. 13. Fixed minor typos
| null | null | null |
cs.NI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the optimal transmission of distributed correlated discrete
memoryless sources across a network with capacity constraints. We present
several previously undiscussed structural properties of the set of feasible
rates and transmission schemes. We extend previous results concerning the
intersection of polymatroids and contrapolymatroids to characterize when all of
the vertices of the Slepian-Wolf rate region are feasible for the capacity
constrained network. An explicit relationship between the conditional
independence relationships of the distributed sources and the number of
vertices for the Slepian-Wolf rate region is given. These properties are then
applied to characterize the optimal transmission rate and scheme and its
connection to the corner points of the Slepian-Wolf rate region. In particular,
we demonstrate that when the per-source compression costs are in tension with
the per-link flow costs the optimal flow/rate point need not coincide with a
vertex of the Slepian-Wolf rate region. Finally, we connect results for the
single-sink problem to the multi-sink problem by extending structural insights
and developing upper and lower bounds on the optimal cost of the multi-sink
problem.
|
[
{
"created": "Wed, 16 Jul 2014 23:24:41 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jul 2014 21:26:58 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Sep 2014 22:23:49 GMT",
"version": "v3"
}
] |
2014-09-11
|
[
[
"Boyle",
"Bradford D.",
""
],
[
"Weber",
"Steven",
""
]
] |
We consider the optimal transmission of distributed correlated discrete memoryless sources across a network with capacity constraints. We present several previously undiscussed structural properties of the set of feasible rates and transmission schemes. We extend previous results concerning the intersection of polymatroids and contrapolymatroids to characterize when all of the vertices of the Slepian-Wolf rate region are feasible for the capacity constrained network. An explicit relationship between the conditional independence relationships of the distributed sources and the number of vertices for the Slepian-Wolf rate region is given. These properties are then applied to characterize the optimal transmission rate and scheme and its connection to the corner points of the Slepian-Wolf rate region. In particular, we demonstrate that when the per-source compression costs are in tension with the per-link flow costs the optimal flow/rate point need not coincide with a vertex of the Slepian-Wolf rate region. Finally, we connect results for the single-sink problem to the multi-sink problem by extending structural insights and developing upper and lower bounds on the optimal cost of the multi-sink problem.
|
1502.01963
|
Peter Mutschke
|
Peter Mutschke, Philipp Mayr, Andrea Scharnhorst
|
Editorial for the Proceedings of the Workshop Knowledge Maps and
Information Retrieval (KMIR2014) at Digital Libraries 2014
|
URL workshop proceedings: http://ceur-ws.org/Vol-1311/
| null | null | null |
cs.IR cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge maps are promising tools for visualizing the structure of
large-scale information spaces, but are still far from being applicable for
searching. The first international workshop on "Knowledge Maps and Information
Retrieval (KMIR)", held as part of the International Conference on Digital
Libraries 2014 in London, aimed at bringing together experts in Information
Retrieval (IR) and knowledge mapping in order to discuss the potential of
interactive knowledge maps for information seeking purposes.
|
[
{
"created": "Fri, 6 Feb 2015 17:35:34 GMT",
"version": "v1"
}
] |
2015-02-09
|
[
[
"Mutschke",
"Peter",
""
],
[
"Mayr",
"Philipp",
""
],
[
"Scharnhorst",
"Andrea",
""
]
] |
Knowledge maps are promising tools for visualizing the structure of large-scale information spaces, but are still far from being applicable for searching. The first international workshop on "Knowledge Maps and Information Retrieval (KMIR)", held as part of the International Conference on Digital Libraries 2014 in London, aimed at bringing together experts in Information Retrieval (IR) and knowledge mapping in order to discuss the potential of interactive knowledge maps for information seeking purposes.
|
1505.04313
|
Erkki Luuk
|
Erkki Luuk
|
A type-theoretical approach to Universal Grammar
| null | null | null | null |
cs.CL math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The idea of Universal Grammar (UG) as the hypothetical linguistic structure
shared by all human languages harkens back at least to the 13th century. The
best known modern elaborations of the idea are due to Chomsky. Following a
devastating critique from theoretical, typological and field linguistics, these
elaborations, the idea of UG itself and the more general idea of language
universals stand untenable and are largely abandoned. The proposal tackles the
hypothetical contents of UG using dependent and polymorphic type theory in a
framework very different from the Chomskyan ones. We introduce a type logic for
a precise, universal and parsimonious representation of natural language
morphosyntax and compositional semantics. The logic handles grammatical
ambiguity (with polymorphic types), selectional restrictions and diverse kinds
of anaphora (with dependent types), and features a partly universal set of
morphosyntactic types (by the Curry-Howard isomorphism).
|
[
{
"created": "Sat, 16 May 2015 19:28:49 GMT",
"version": "v1"
}
] |
2015-05-19
|
[
[
"Luuk",
"Erkki",
""
]
] |
The idea of Universal Grammar (UG) as the hypothetical linguistic structure shared by all human languages harkens back at least to the 13th century. The best known modern elaborations of the idea are due to Chomsky. Following a devastating critique from theoretical, typological and field linguistics, these elaborations, the idea of UG itself and the more general idea of language universals are untenable and have been largely abandoned. The proposal tackles the hypothetical contents of UG using dependent and polymorphic type theory in a framework very different from the Chomskyan ones. We introduce a type logic for a precise, universal and parsimonious representation of natural language morphosyntax and compositional semantics. The logic handles grammatical ambiguity (with polymorphic types), selectional restrictions and diverse kinds of anaphora (with dependent types), and features a partly universal set of morphosyntactic types (by the Curry-Howard isomorphism).
|
2005.11217
|
Prashnna Gyawali
|
Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan
Li, Linwei Wang
|
Semi-supervised Medical Image Classification with Global Latent Mixing
| null | null | null | null |
cs.LG cs.CV eess.IV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Computer-aided diagnosis via deep learning relies on large-scale annotated
data sets, which can be costly when involving expert knowledge. Semi-supervised
learning (SSL) mitigates this challenge by leveraging unlabeled data. One
effective SSL approach is to regularize the local smoothness of neural
functions via perturbations around single data points. In this work, we argue
that regularizing the global smoothness of neural functions by filling the void
in between data points can further improve SSL. We present a novel SSL approach
that trains the neural network on linear mixing of labeled and unlabeled data,
at both the input and latent space in order to regularize different portions of
the network. We evaluated the presented model on two distinct medical image
data sets for semi-supervised classification of thoracic disease and skin
lesion, demonstrating its improved performance over SSL with local
perturbations and SSL with global mixing but at the input space only. Our code
is available at https://github.com/Prasanna1991/LatentMixing.
|
[
{
"created": "Fri, 22 May 2020 14:49:13 GMT",
"version": "v1"
}
] |
2020-05-25
|
[
[
"Gyawali",
"Prashnna Kumar",
""
],
[
"Ghimire",
"Sandesh",
""
],
[
"Bajracharya",
"Pradeep",
""
],
[
"Li",
"Zhiyuan",
""
],
[
"Wang",
"Linwei",
""
]
] |
Computer-aided diagnosis via deep learning relies on large-scale annotated data sets, which can be costly when involving expert knowledge. Semi-supervised learning (SSL) mitigates this challenge by leveraging unlabeled data. One effective SSL approach is to regularize the local smoothness of neural functions via perturbations around single data points. In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL. We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data, at both the input and latent space in order to regularize different portions of the network. We evaluated the presented model on two distinct medical image data sets for semi-supervised classification of thoracic disease and skin lesion, demonstrating its improved performance over SSL with local perturbations and SSL with global mixing but at the input space only. Our code is available at https://github.com/Prasanna1991/LatentMixing.
|
1906.01957
|
Anthony Chen
|
Anthony Chen, John Harwell, Maria Gini
|
Maximizing Energy Battery Efficiency in Swarm Robotics
|
Presented as ARMS Workshop paper at AAMAS 2019 Conference
(http://u.cs.biu.ac.il/~agmon/arms2019/program.html)
| null | null | null |
cs.MA cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Miniaturization and cost, two of the main attractive factors of swarm
robotics, have motivated its use as a solution in object collecting tasks,
search & rescue missions, and other applications. However, in the current
literature only a few papers consider energy allocation efficiency within a
swarm. Generally, robots recharge to their maximum level every time
unconditionally, and do not incorporate estimates of the energy needed for
their next task. In this paper we present an energy efficiency maximization
method that minimizes the overall energy cost within a swarm while
simultaneously maximizing swarm performance on an object gathering task. The
method utilizes dynamic thresholds for upper and lower battery limits. This
method has also been shown to improve the efficiency of existing energy management
methods.
|
[
{
"created": "Wed, 5 Jun 2019 11:52:50 GMT",
"version": "v1"
}
] |
2019-06-06
|
[
[
"Chen",
"Anthony",
""
],
[
"Harwell",
"John",
""
],
[
"Gini",
"Maria",
""
]
] |
Miniaturization and cost, two of the main attractive factors of swarm robotics, have motivated its use as a solution in object collecting tasks, search & rescue missions, and other applications. However, in the current literature only a few papers consider energy allocation efficiency within a swarm. Generally, robots recharge to their maximum level every time unconditionally, and do not incorporate estimates of the energy needed for their next task. In this paper we present an energy efficiency maximization method that minimizes the overall energy cost within a swarm while simultaneously maximizing swarm performance on an object gathering task. The method utilizes dynamic thresholds for upper and lower battery limits. This method has also been shown to improve the efficiency of existing energy management methods.
|
2405.01824
|
Wee Kiat Chan
|
Wee Kiat Chan, PengWei Wang, Raye Chen-Hua Yeow
|
Creation of Novel Soft Robot Designs using Generative AI
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Soft robotics has emerged as a promising field with the potential to
revolutionize industries such as healthcare and manufacturing. However,
designing effective soft robots presents challenges, particularly in managing
the complex interplay of material properties, structural design, and control
strategies. Traditional design methods are often time-consuming and may not
yield optimal designs. In this paper, we explore the use of generative AI to
create 3D models of soft actuators. We create a dataset of over 70 text-shape
pairings of soft pneumatic robot actuator designs, and adapt a latent diffusion
model (SDFusion) to learn the data distribution and generate novel designs from
it. By employing transfer learning and data augmentation techniques, we
significantly improve the performance of the diffusion model. These findings
highlight the potential of generative AI in designing complex soft robotic
systems, paving the way for future advancements in the field.
|
[
{
"created": "Fri, 3 May 2024 02:55:27 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Chan",
"Wee Kiat",
""
],
[
"Wang",
"PengWei",
""
],
[
"Yeow",
"Raye Chen-Hua",
""
]
] |
Soft robotics has emerged as a promising field with the potential to revolutionize industries such as healthcare and manufacturing. However, designing effective soft robots presents challenges, particularly in managing the complex interplay of material properties, structural design, and control strategies. Traditional design methods are often time-consuming and may not yield optimal designs. In this paper, we explore the use of generative AI to create 3D models of soft actuators. We create a dataset of over 70 text-shape pairings of soft pneumatic robot actuator designs, and adapt a latent diffusion model (SDFusion) to learn the data distribution and generate novel designs from it. By employing transfer learning and data augmentation techniques, we significantly improve the performance of the diffusion model. These findings highlight the potential of generative AI in designing complex soft robotic systems, paving the way for future advancements in the field.
|
2011.02417
|
Tristan Thrush
|
Tristan Thrush, Ethan Wilcox, and Roger Levy
|
Investigating Novel Verb Learning in BERT: Selectional Preference
Classes and Alternation-Based Syntactic Generalization
|
Accepted to BlackboxNLP 2020
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous studies investigating the syntactic abilities of deep learning
models have not targeted the relationship between the strength of the
grammatical generalization and the amount of evidence to which the model is
exposed during training. We address this issue by deploying a novel
word-learning paradigm to test BERT's few-shot learning capabilities for two
aspects of English verbs: alternations and classes of selectional preferences.
For the former, we fine-tune BERT on a single frame in a verbal-alternation
pair and ask whether the model expects the novel verb to occur in its sister
frame. For the latter, we fine-tune BERT on an incomplete selectional network
of verbal objects and ask whether it expects unattested but plausible
verb/object pairs. We find that BERT makes robust grammatical generalizations
after just one or two instances of a novel word in fine-tuning. For the verbal
alternation tests, we find that the model displays behavior that is consistent
with a transitivity bias: verbs seen few times are expected to take direct
objects, but verbs seen with direct objects are not expected to occur
intransitively.
|
[
{
"created": "Wed, 4 Nov 2020 17:17:49 GMT",
"version": "v1"
}
] |
2020-11-05
|
[
[
"Thrush",
"Tristan",
""
],
[
"Wilcox",
"Ethan",
""
],
[
"Levy",
"Roger",
""
]
] |
Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training. We address this issue by deploying a novel word-learning paradigm to test BERT's few-shot learning capabilities for two aspects of English verbs: alternations and classes of selectional preferences. For the former, we fine-tune BERT on a single frame in a verbal-alternation pair and ask whether the model expects the novel verb to occur in its sister frame. For the latter, we fine-tune BERT on an incomplete selectional network of verbal objects and ask whether it expects unattested but plausible verb/object pairs. We find that BERT makes robust grammatical generalizations after just one or two instances of a novel word in fine-tuning. For the verbal alternation tests, we find that the model displays behavior that is consistent with a transitivity bias: verbs seen few times are expected to take direct objects, but verbs seen with direct objects are not expected to occur intransitively.
|
2407.17365
|
Roman Bachmann
|
Sogand Salehi, Mahdi Shafiei, Teresa Yeo, Roman Bachmann, Amir Zamir
|
ViPer: Visual Personalization of Generative Models via Individual
Preference Learning
|
Project page at https://viper.epfl.ch/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different users find different images generated for the same prompt
desirable. This gives rise to personalized image generation which involves
creating images aligned with an individual's visual preference. Current
generative models are, however, unpersonalized, as they are tuned to produce
outputs that appeal to a broad audience. Using them to generate images aligned
with individual users relies on iterative manual prompt engineering by the user
which is inefficient and undesirable. We propose to personalize the image
generation process by first capturing the generic preferences of the user in a
one-time process by inviting them to comment on a small selection of images,
explaining why they like or dislike each. Based on these comments, we infer a
user's structured liked and disliked visual attributes, i.e., their visual
preference, using a large language model. These attributes are used to guide a
text-to-image model toward producing images that are tuned towards the
individual user's visual preference. Through a series of user studies and large
language model guided evaluations, we demonstrate that the proposed method
results in generations that are well aligned with individual users' visual
preferences.
|
[
{
"created": "Wed, 24 Jul 2024 15:42:34 GMT",
"version": "v1"
}
] |
2024-07-25
|
[
[
"Salehi",
"Sogand",
""
],
[
"Shafiei",
"Mahdi",
""
],
[
"Yeo",
"Teresa",
""
],
[
"Bachmann",
"Roman",
""
],
[
"Zamir",
"Amir",
""
]
] |
Different users find different images generated for the same prompt desirable. This gives rise to personalized image generation which involves creating images aligned with an individual's visual preference. Current generative models are, however, unpersonalized, as they are tuned to produce outputs that appeal to a broad audience. Using them to generate images aligned with individual users relies on iterative manual prompt engineering by the user which is inefficient and undesirable. We propose to personalize the image generation process by first capturing the generic preferences of the user in a one-time process by inviting them to comment on a small selection of images, explaining why they like or dislike each. Based on these comments, we infer a user's structured liked and disliked visual attributes, i.e., their visual preference, using a large language model. These attributes are used to guide a text-to-image model toward producing images that are tuned towards the individual user's visual preference. Through a series of user studies and large language model guided evaluations, we demonstrate that the proposed method results in generations that are well aligned with individual users' visual preferences.
|
2003.09417
|
Moritz Schubotz
|
Moritz Schubotz and Andr\'e Greiner-Petter and Norman Meuschke and
Olaf Teschke and Bela Gipp
|
Mathematical Formulae in Wikimedia Projects 2020
|
Submitted to JCDL 2020: Proceedings of the ACM/ IEEE Joint Conference
on Digital Libraries in 2020 (JCDL '20), August 1-5, 2020, Virtual Event,
China
| null |
10.1145/3383583.3398557
| null |
cs.DL cs.IR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This poster summarizes our contributions to Wikimedia's processing pipeline
for mathematical formulae. We describe how we have supported the transition
from rendering formulae as coarse-grained PNG images in 2001 to providing
modern semantically enriched language-independent MathML formulae in 2020.
Additionally, we describe our plans to improve the accessibility and
discoverability of mathematical knowledge in Wikimedia projects further.
|
[
{
"created": "Fri, 20 Mar 2020 17:56:26 GMT",
"version": "v1"
},
{
"created": "Wed, 6 May 2020 19:25:19 GMT",
"version": "v2"
}
] |
2020-05-08
|
[
[
"Schubotz",
"Moritz",
""
],
[
"Greiner-Petter",
"André",
""
],
[
"Meuschke",
"Norman",
""
],
[
"Teschke",
"Olaf",
""
],
[
"Gipp",
"Bela",
""
]
] |
This poster summarizes our contributions to Wikimedia's processing pipeline for mathematical formulae. We describe how we have supported the transition from rendering formulae as coarse-grained PNG images in 2001 to providing modern semantically enriched language-independent MathML formulae in 2020. Additionally, we describe our plans to improve the accessibility and discoverability of mathematical knowledge in Wikimedia projects further.
|
2305.03916
|
Jyoti Prakash
|
Jyoti Prakash, Abhishek Tiwari, Christian Hammer
|
Unifying Pointer Analyses for Polyglot Inter-operations through Summary
Specialization
| null | null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Modular analysis of polyglot applications is challenging because heap object
flows across language boundaries must be resolved. The state-of-the-art
analyses for polyglot applications have two fundamental limitations. First,
they assume explicit boundaries between the host and the guest language to
determine inter-language dataflows. Second, they rely on specific analyses of
the host and guest languages. The former assumption is impractical concerning
recent advancements in polyglot programming techniques, while the latter
disregards advances in pointer analysis of the underlying languages. In this
work, we propose to extend existing pointer analyses with a novel summary
specialization technique so that points-to sets across language boundaries can
be unified. Our novel technique leverages various combinations of host and
guest analyses with minor modifications. We demonstrate the efficacy and
generalizability of our approach by evaluating it with two polyglot language
models: Java-C communication via Android's NDK and Java-Python communication in
GraalVM.
|
[
{
"created": "Sat, 6 May 2023 03:40:06 GMT",
"version": "v1"
}
] |
2023-05-09
|
[
[
"Prakash",
"Jyoti",
""
],
[
"Tiwari",
"Abhishek",
""
],
[
"Hammer",
"Christian",
""
]
] |
Modular analysis of polyglot applications is challenging because heap object flows across language boundaries must be resolved. The state-of-the-art analyses for polyglot applications have two fundamental limitations. First, they assume explicit boundaries between the host and the guest language to determine inter-language dataflows. Second, they rely on specific analyses of the host and guest languages. The former assumption is impractical concerning recent advancements in polyglot programming techniques, while the latter disregards advances in pointer analysis of the underlying languages. In this work, we propose to extend existing pointer analyses with a novel summary specialization technique so that points-to sets across language boundaries can be unified. Our novel technique leverages various combinations of host and guest analyses with minor modifications. We demonstrate the efficacy and generalizability of our approach by evaluating it with two polyglot language models: Java-C communication via Android's NDK and Java-Python communication in GraalVM.
|
2307.08881
|
Anton Tsitsulin
|
Mustafa Yasir, John Palowitch, Anton Tsitsulin, Long Tran-Thanh, Bryan
Perozzi
|
Examining the Effects of Degree Distribution and Homophily in Graph
Learning Models
|
Accepted to Workshop on Graph Learning Benchmarks at KDD 2023
| null | null | null |
cs.SI cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Despite a surge in interest in GNN development, homogeneity in benchmarking
datasets still presents a fundamental issue to GNN research. GraphWorld is a
recent solution which uses the Stochastic Block Model (SBM) to generate diverse
populations of synthetic graphs for benchmarking any GNN task. Despite its
success, the SBM imposed fundamental limitations on the kinds of graph
structure GraphWorld could create.
In this work we examine how two additional synthetic graph generators can
improve GraphWorld's evaluation: LFR, a well-established model in the graph
clustering literature and CABAM, a recent adaptation of the Barabasi-Albert
model tailored for GNN benchmarking. By integrating these generators, we
significantly expand the coverage of graph space within the GraphWorld
framework while preserving key graph properties observed in real-world
networks. To demonstrate their effectiveness, we generate 300,000 graphs to
benchmark 11 GNN models on a node classification task. We find GNN performance
variations in response to homophily, degree distribution and feature signal.
Based on these findings, we classify models by their sensitivity to the new
generators under these properties. Additionally, we release the extensions made
to GraphWorld on the GitHub repository, offering further evaluation of GNN
performance on new graphs.
|
[
{
"created": "Mon, 17 Jul 2023 22:35:46 GMT",
"version": "v1"
}
] |
2023-07-19
|
[
[
"Yasir",
"Mustafa",
""
],
[
"Palowitch",
"John",
""
],
[
"Tsitsulin",
"Anton",
""
],
[
"Tran-Thanh",
"Long",
""
],
[
"Perozzi",
"Bryan",
""
]
] |
Despite a surge in interest in GNN development, homogeneity in benchmarking datasets still presents a fundamental issue to GNN research. GraphWorld is a recent solution which uses the Stochastic Block Model (SBM) to generate diverse populations of synthetic graphs for benchmarking any GNN task. Despite its success, the SBM imposed fundamental limitations on the kinds of graph structure GraphWorld could create. In this work we examine how two additional synthetic graph generators can improve GraphWorld's evaluation: LFR, a well-established model in the graph clustering literature and CABAM, a recent adaptation of the Barabasi-Albert model tailored for GNN benchmarking. By integrating these generators, we significantly expand the coverage of graph space within the GraphWorld framework while preserving key graph properties observed in real-world networks. To demonstrate their effectiveness, we generate 300,000 graphs to benchmark 11 GNN models on a node classification task. We find GNN performance variations in response to homophily, degree distribution and feature signal. Based on these findings, we classify models by their sensitivity to the new generators under these properties. Additionally, we release the extensions made to GraphWorld on the GitHub repository, offering further evaluation of GNN performance on new graphs.
|
2010.06155
|
Beixiong Zheng
|
Beixiong Zheng, Changsheng You, Rui Zhang
|
Uplink Channel Estimation for Double-IRS Assisted Multi-User MIMO
|
In this paper, we propose a new and efficient channel estimation
scheme for the double-IRS assisted uplink multiple-input multiple-output
(MIMO) communication system (arXiv:2008.13701) to resolve the cascaded CSI of
both its single- and double-reflection links
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To achieve the more promising passive beamforming gains in the
double-intelligent reflecting surface (IRS) assisted system over the
conventional single-IRS system, channel estimation is practically indispensable
but also a more challenging problem to tackle, due to the presence of not only
the single- but also double-reflection links that are intricately coupled. In
this paper, we propose a new and efficient channel estimation scheme for the
double-IRS assisted uplink multiple-input multiple-output (MIMO) communication
system to resolve the cascaded channel state information (CSI) of both its
single- and double-reflection links. First, for the single-user case, the
higher-dimensional double-reflection channel is efficiently estimated at the
multi-antenna base station (BS) with low training overhead by exploiting the
fact that its cascaded channel coefficients are scaled versions of those of a
lower-dimensional single-reflection channel. Then, the proposed channel
estimation scheme is extended to the multi-user case, where given an arbitrary
user's cascaded channel estimated as in the single-user case, the other users'
cascaded channels are scaled versions of it and thus can be estimated with
reduced training overhead. Simulation results verify the effectiveness of the
proposed channel estimation scheme as compared to the benchmark scheme.
|
[
{
"created": "Tue, 13 Oct 2020 03:58:19 GMT",
"version": "v1"
}
] |
2020-10-14
|
[
[
"Zheng",
"Beixiong",
""
],
[
"You",
"Changsheng",
""
],
[
"Zhang",
"Rui",
""
]
] |
To achieve the more promising passive beamforming gains in the double-intelligent reflecting surface (IRS) assisted system over the conventional single-IRS system, channel estimation is practically indispensable but also a more challenging problem to tackle, due to the presence of not only the single- but also double-reflection links that are intricately coupled. In this paper, we propose a new and efficient channel estimation scheme for the double-IRS assisted uplink multiple-input multiple-output (MIMO) communication system to resolve the cascaded channel state information (CSI) of both its single- and double-reflection links. First, for the single-user case, the higher-dimensional double-reflection channel is efficiently estimated at the multi-antenna base station (BS) with low training overhead by exploiting the fact that its cascaded channel coefficients are scaled versions of those of a lower-dimensional single-reflection channel. Then, the proposed channel estimation scheme is extended to the multi-user case, where given an arbitrary user's cascaded channel estimated as in the single-user case, the other users' cascaded channels are scaled versions of it and thus can be estimated with reduced training overhead. Simulation results verify the effectiveness of the proposed channel estimation scheme as compared to the benchmark scheme.
|
2112.05469
|
Satya Bagchi
|
Haradhan Ghosh, Sanjit Bhowmick, Pramod Kumar Maurya, Satya Bagchi
|
Linear complementary dual code-based Multi-secret sharing scheme
|
12 pages
| null | null | null |
cs.CR math.RA
|
http://creativecommons.org/licenses/by/4.0/
|
Hiding a secret is needed in many situations. Secret sharing plays an
important role in protecting information from getting lost, stolen, or
destroyed, and has found wide application in recent years. A secret sharing scheme is a
cryptographic protocol in which a dealer divides the secret into several shares,
one of which is given to each participant. To recover the secret, the
dealer requires a subset of participants called an access structure. In this
paper, we present a multi-secret sharing scheme over a local ring based on
linear complementary dual codes using Blakley's method. We take a large secret
space over a local ring that is greater than other code-based schemes and
obtain a perfect and almost ideal scheme.
|
[
{
"created": "Fri, 10 Dec 2021 11:52:52 GMT",
"version": "v1"
}
] |
2021-12-13
|
[
[
"Ghosh",
"Haradhan",
""
],
[
"Bhowmick",
"Sanjit",
""
],
[
"Maurya",
"Pramod Kumar",
""
],
[
"Bagchi",
"Satya",
""
]
] |
Hiding a secret is needed in many situations. Secret sharing plays an important role in protecting information from getting lost, stolen, or destroyed, and has found wide application in recent years. A secret sharing scheme is a cryptographic protocol in which a dealer divides the secret into several shares, one of which is given to each participant. To recover the secret, the dealer requires a subset of participants called an access structure. In this paper, we present a multi-secret sharing scheme over a local ring based on linear complementary dual codes using Blakley's method. We take a large secret space over a local ring that is greater than other code-based schemes and obtain a perfect and almost ideal scheme.
|
2301.07520
|
Elisa Luciano
|
Elisa Luciano and Matteo Cattaneo and Ron Kenett
|
Adversarial AI in Insurance: Pervasiveness and Resilience
| null | null | null | null |
cs.LG q-fin.GN
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid and dynamic pace of Artificial Intelligence (AI) and Machine
Learning (ML) is revolutionizing the insurance sector. AI offers significant,
very much welcome advantages to insurance companies, and is fundamental to
their customer-centricity strategy. It also poses challenges in the project
and implementation phases. Among those, we study Adversarial Attacks, which
consist of the creation of modified input data to deceive an AI system and
produce false outputs. We provide examples of attacks on insurance AI
applications, categorize them, and discuss defence methods and precautionary
systems, considering that they can involve few-shot and zero-shot
multilabelling. A related topic, with growing interest, is the validation and
verification of systems incorporating AI and ML components. These topics are
discussed in various sections of this paper.
|
[
{
"created": "Tue, 17 Jan 2023 08:49:54 GMT",
"version": "v1"
}
] |
2023-01-19
|
[
[
"Luciano",
"Elisa",
""
],
[
"Cattaneo",
"Matteo",
""
],
[
"Kenett",
"Ron",
""
]
] |
The rapid and dynamic pace of Artificial Intelligence (AI) and Machine Learning (ML) is revolutionizing the insurance sector. AI offers significant, very much welcome advantages to insurance companies, and is fundamental to their customer-centricity strategy. It also poses challenges in the project and implementation phases. Among those, we study Adversarial Attacks, which consist of the creation of modified input data to deceive an AI system and produce false outputs. We provide examples of attacks on insurance AI applications, categorize them, and discuss defence methods and precautionary systems, considering that they can involve few-shot and zero-shot multilabelling. A related topic, with growing interest, is the validation and verification of systems incorporating AI and ML components. These topics are discussed in various sections of this paper.
|
2012.15543
|
Zheng-Yu Niu
|
Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che,
Ting Liu
|
Discovering Dialog Structure Graph for Open-Domain Dialog Generation
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning interpretable dialog structure from human-human dialogs yields basic
insights into the structure of conversation, and also provides background
knowledge to facilitate dialog generation. In this paper, we conduct
unsupervised discovery of dialog structure from chitchat corpora, and then
leverage it to facilitate dialog generation in downstream systems. To this end,
we present a Discrete Variational Auto-Encoder with Graph Neural Network
(DVAE-GNN), to discover a unified human-readable dialog structure. The
structure is a two-layer directed graph that contains session-level semantics
in the upper-layer vertices, utterance-level semantics in the lower-layer
vertices, and edges among these semantic vertices. In particular, we integrate
GNN into DVAE to fine-tune utterance-level semantics for more effective
recognition of session-level semantic vertex. Furthermore, to alleviate the
difficulty of discovering a large number of utterance-level semantics, we
design a coupling mechanism that binds each utterance-level semantic vertex
with a distinct phrase to provide prior semantics. Experimental results on two
benchmark corpora confirm that DVAE-GNN can discover meaningful dialog
structure, and the use of dialog structure graph as background knowledge can
facilitate a graph grounded conversational system to conduct coherent
multi-turn dialog generation.
|
[
{
"created": "Thu, 31 Dec 2020 10:58:37 GMT",
"version": "v1"
}
] |
2021-01-01
|
[
[
"Xu",
"Jun",
""
],
[
"Lei",
"Zeyang",
""
],
[
"Wang",
"Haifeng",
""
],
[
"Niu",
"Zheng-Yu",
""
],
[
"Wu",
"Hua",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Liu",
"Ting",
""
]
] |
Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation. In this paper, we conduct unsupervised discovery of dialog structure from chitchat corpora, and then leverage it to facilitate dialog generation in downstream systems. To this end, we present a Discrete Variational Auto-Encoder with Graph Neural Network (DVAE-GNN), to discover a unified human-readable dialog structure. The structure is a two-layer directed graph that contains session-level semantics in the upper-layer vertices, utterance-level semantics in the lower-layer vertices, and edges among these semantic vertices. In particular, we integrate GNN into DVAE to fine-tune utterance-level semantics for more effective recognition of session-level semantic vertex. Furthermore, to alleviate the difficulty of discovering a large number of utterance-level semantics, we design a coupling mechanism that binds each utterance-level semantic vertex with a distinct phrase to provide prior semantics. Experimental results on two benchmark corpora confirm that DVAE-GNN can discover meaningful dialog structure, and the use of dialog structure graph as background knowledge can facilitate a graph grounded conversational system to conduct coherent multi-turn dialog generation.
|
2406.11227
|
Silvery D. Fu
|
Silvery D. Fu, Xuewei Chen
|
Compound Schema Registry
|
2 pages, compound ai system workshop 2024
| null | null | null |
cs.DB cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Schema evolution is critical in managing database systems to ensure
compatibility across different data versions. A schema registry typically
addresses the challenges of schema evolution in real-time data streaming by
managing, validating, and ensuring schema compatibility. However, current
schema registries struggle with complex syntactic alterations like field
renaming or type changes, which often require significant manual intervention
and can disrupt service. To enhance the flexibility of schema evolution, we
propose the use of generalized schema evolution (GSE) facilitated by a compound
AI system. This system employs Large Language Models (LLMs) to interpret the
semantics of schema changes, supporting a broader range of syntactic
modifications without interrupting data streams. Our approach includes
developing a task-specific language, Schema Transformation Language (STL), to
generate schema mappings as an intermediate representation (IR), simplifying
the integration of schema changes across different data processing platforms.
Initial results indicate that this approach can improve schema mapping accuracy
and efficiency, demonstrating the potential of GSE in practical applications.
|
[
{
"created": "Mon, 17 Jun 2024 05:50:46 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Fu",
"Silvery D.",
""
],
[
"Chen",
"Xuewei",
""
]
] |
Schema evolution is critical in managing database systems to ensure compatibility across different data versions. A schema registry typically addresses the challenges of schema evolution in real-time data streaming by managing, validating, and ensuring schema compatibility. However, current schema registries struggle with complex syntactic alterations like field renaming or type changes, which often require significant manual intervention and can disrupt service. To enhance the flexibility of schema evolution, we propose the use of generalized schema evolution (GSE) facilitated by a compound AI system. This system employs Large Language Models (LLMs) to interpret the semantics of schema changes, supporting a broader range of syntactic modifications without interrupting data streams. Our approach includes developing a task-specific language, Schema Transformation Language (STL), to generate schema mappings as an intermediate representation (IR), simplifying the integration of schema changes across different data processing platforms. Initial results indicate that this approach can improve schema mapping accuracy and efficiency, demonstrating the potential of GSE in practical applications.
|
2201.11370
|
Hajar Moudoud
|
Hajar Moudoud, Soumaya Cherkaoui and Lyes Khoukhi
|
An IoT Blockchain Architecture Using Oracles and Smart Contracts: the
Use-Case of a Food Supply Chain
|
This paper has been accepted for publication by IEEE 30th Annual
International Symposium on Personal, Indoor and Mobile Radio Communications
(PIMRC). The final version will be published by the IEEE
| null |
10.1109/PIMRC.2019.8904404
| null |
cs.NI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The blockchain is a distributed technology which allows establishing trust
among unreliable users who interact and perform transactions with each other.
While blockchain technology has been mainly used for crypto-currency, it has
emerged as an enabling technology for establishing trust in the realm of the
Internet of Things (IoT). Nevertheless, a naive usage of the blockchain for IoT
leads to high delays and extensive computational power. In this paper, we
propose a blockchain architecture dedicated to being used in a supply chain
which comprises different distributed IoT entities. We propose a lightweight
consensus for this architecture, called LC4IoT. The consensus is evaluated
through extensive simulations. The results show that the proposed consensus
uses low computational power, storage capability and latency.
|
[
{
"created": "Thu, 27 Jan 2022 08:10:37 GMT",
"version": "v1"
}
] |
2022-01-28
|
[
[
"Moudoud",
"Hajar",
""
],
[
"Cherkaoui",
"Soumaya",
""
],
[
"Khoukhi",
"Lyes",
""
]
] |
The blockchain is a distributed technology which allows establishing trust among unreliable users who interact and perform transactions with each other. While blockchain technology has been mainly used for crypto-currency, it has emerged as an enabling technology for establishing trust in the realm of the Internet of Things (IoT). Nevertheless, a naive usage of the blockchain for IoT leads to high delays and extensive computational power. In this paper, we propose a blockchain architecture dedicated to being used in a supply chain which comprises different distributed IoT entities. We propose a lightweight consensus for this architecture, called LC4IoT. The consensus is evaluated through extensive simulations. The results show that the proposed consensus uses low computational power, storage capability and latency.
|
1010.0654
|
Shirin Jalali
|
Michelle Effros, Tracey Ho and Shirin Jalali
|
On Equivalence Between Network Topologies
|
8 pages, 12 figures, 48th Annual Allerton Conference on
Communication, Control, and Computing, 2010
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One major open problem in network coding is to characterize the capacity
region of a general multi-source multi-demand network. There are some existing
computational tools for bounding the capacity of general networks, but their
computational complexity grows very quickly with the size of the network. This
motivates us to propose a new hierarchical approach which finds upper and lower
bounding networks of smaller size for a given network. This approach
sequentially replaces components of the network with simpler structures, i.e.,
with fewer links or nodes, so that the resulting network is more amenable to
computational analysis and its capacity provides an upper or lower bound on the
capacity of the original network. The accuracy of the resulting bounds can be
bounded as a function of the link capacities. Surprisingly, we are able to
simplify some families of network structures without any loss in accuracy.
|
[
{
"created": "Mon, 4 Oct 2010 18:31:44 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
[
"Effros",
"Michelle",
""
],
[
"Ho",
"Tracey",
""
],
[
"Jalali",
"Shirin",
""
]
] |
One major open problem in network coding is to characterize the capacity region of a general multi-source multi-demand network. There are some existing computational tools for bounding the capacity of general networks, but their computational complexity grows very quickly with the size of the network. This motivates us to propose a new hierarchical approach which finds upper and lower bounding networks of smaller size for a given network. This approach sequentially replaces components of the network with simpler structures, i.e., with fewer links or nodes, so that the resulting network is more amenable to computational analysis and its capacity provides an upper or lower bound on the capacity of the original network. The accuracy of the resulting bounds can be bounded as a function of the link capacities. Surprisingly, we are able to simplify some families of network structures without any loss in accuracy.
|
2309.13443
|
Grace Li Zhang
|
Jingcun Wang, Bing Li, Grace Li Zhang
|
Early-Exit with Class Exclusion for Efficient Inference of Neural
Networks
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep neural networks (DNNs) have been successfully applied in various fields.
In DNNs, a large number of multiply-accumulate (MAC) operations are required to
be performed, posing critical challenges in applying them in
resource-constrained platforms, e.g., edge devices. To address this challenge,
in this paper, we propose a class-based early-exit for dynamic inference.
Instead of pushing DNNs to make a dynamic decision at intermediate layers, we
take advantage of the learned features in these layers to exclude as many
irrelevant classes as possible, so that later layers only have to determine the
target class among the remaining classes. When only one class remains at a
layer, this class is the corresponding classification result. Experimental
results demonstrate the computational cost of DNNs in inference can be reduced
significantly with the proposed early-exit technique. The codes can be found at
https://github.com/HWAI-TUDa/EarlyClassExclusion.
|
[
{
"created": "Sat, 23 Sep 2023 18:12:27 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Feb 2024 08:03:00 GMT",
"version": "v2"
}
] |
2024-02-20
|
[
[
"Wang",
"Jingcun",
""
],
[
"Li",
"Bing",
""
],
[
"Zhang",
"Grace Li",
""
]
] |
Deep neural networks (DNNs) have been successfully applied in various fields. In DNNs, a large number of multiply-accumulate (MAC) operations are required to be performed, posing critical challenges in applying them in resource-constrained platforms, e.g., edge devices. To address this challenge, in this paper, we propose a class-based early-exit for dynamic inference. Instead of pushing DNNs to make a dynamic decision at intermediate layers, we take advantage of the learned features in these layers to exclude as many irrelevant classes as possible, so that later layers only have to determine the target class among the remaining classes. When only one class remains at a layer, this class is the corresponding classification result. Experimental results demonstrate the computational cost of DNNs in inference can be reduced significantly with the proposed early-exit technique. The codes can be found at https://github.com/HWAI-TUDa/EarlyClassExclusion.
|
2206.00792
|
Jun Muramatsu
|
Jun Muramatsu
|
Channel Codes for Relayless Networks with General Message Access
Structure
|
(v1) 26 pages, to be submitted to IEEE ITW2023, (v2) 27 pages, Remark 1
and Lemma 9 in v1 are deleted, Lemma 7 in v2 is added, Eq. (13) and the proof
of Lemma 7 in v1 (Eq. (14) and the proof of Lemma 8 in v2) are revised
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Channel codes for relayless networks with the general message access
structure are introduced. It is shown that the multi-letter characterized
capacity region of this network is achievable with this code. The capacity
region is characterized in terms of entropy functions and provides an
alternative to the regions introduced by [Somekh-Baruch and Verd\'u,
ISIT2006][Muramatsu and Miyake, ISITA2018].
|
[
{
"created": "Wed, 1 Jun 2022 22:56:06 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Mar 2023 02:42:53 GMT",
"version": "v2"
}
] |
2023-03-21
|
[
[
"Muramatsu",
"Jun",
""
]
] |
Channel codes for relayless networks with the general message access structure are introduced. It is shown that the multi-letter characterized capacity region of this network is achievable with this code. The capacity region is characterized in terms of entropy functions and provides an alternative to the regions introduced by [Somekh-Baruch and Verd\'u, ISIT2006][Muramatsu and Miyake, ISITA2018].
|
2401.03357
|
Andrea Bedin
|
Dmitry Chizhik, Jinfeng Du, Reinaldo Valenzuela, Andrea Bedin, Martti
Moisio and Rodolfo Feick
|
Measured and Modeled Outdoor Indoor Coverage at 28 GHz into High Thermal
Efficiency Buildings
|
2 pages, 3 figures. Presented at IEEE International Symposium on
Antennas and Propagation and USNC-URSI Radio Science Meeting
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
28 GHz outdoor-indoor coverage into modern office buildings with high thermal
efficiency windows is found to be severely limited due to 46 dB median
penetration loss at normal incidence and additional 15 dB median oblique
incidence loss. The study is based on measurements of path gain over 280
outdoor-indoor links, at ranges up to 100 m. A simple theoretical path gain
model is extended to include building penetration through multiple sides of the
building as well as a reflection from another building. The theoretical model
accounts for the building orientation relative to the source, resulting in 4.9
dB RMSE relative to data, as compared to 5.7 dB RMSE from a linear fit and 14.7
dB RMSE for the 3GPP recommended model. Only a coarse description of the
buildings is required: building orientation and exterior wall composition,
without any interior details. Coverage range for SNR>-8 dB from an outdoor base
to a terminal just inside a high-efficiency building is under 35 m.
|
[
{
"created": "Mon, 4 Sep 2023 06:37:55 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Chizhik",
"Dmitry",
""
],
[
"Du",
"Jinfeng",
""
],
[
"Valenzuela",
"Reinaldo",
""
],
[
"Bedin",
"Andrea",
""
],
[
"Moisio",
"Martti",
""
],
[
"Feick",
"Rodolfo",
""
]
] |
28 GHz outdoor-indoor coverage into modern office buildings with high thermal efficiency windows is found to be severely limited due to 46 dB median penetration loss at normal incidence and additional 15 dB median oblique incidence loss. The study is based on measurements of path gain over 280 outdoor-indoor links, at ranges up to 100 m. A simple theoretical path gain model is extended to include building penetration through multiple sides of the building as well as a reflection from another building. The theoretical model accounts for the building orientation relative to the source, resulting in 4.9 dB RMSE relative to data, as compared to 5.7 dB RMSE from a linear fit and 14.7 dB RMSE for the 3GPP recommended model. Only a coarse description of the buildings is required: building orientation and exterior wall composition, without any interior details. Coverage range for SNR>-8 dB from an outdoor base to a terminal just inside a high-efficiency building is under 35 m.
|
1709.04864
|
Eduardo Aguilar
|
Eduardo Aguilar, Marc Bola\~nos, Petia Radeva
|
Food Recognition using Fusion of Classifiers based on CNNs
| null |
ICIAP 10485 (2017) 213-224
|
10.1007/978-3-319-68548-9_20
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the arrival of convolutional neural networks, the complex problem of
food recognition has experienced an important improvement in recent years. The
best results have been obtained using methods based on very deep convolutional
neural networks, which show that the deeper the model, the better the
classification accuracy that will be obtained. However, very deep neural networks may
suffer from the overfitting problem. In this paper, we propose a combination of
multiple classifiers based on different convolutional models that complement
each other and thus, achieve an improvement in performance. The evaluation of
our approach is done on two public datasets: Food-101 as a dataset with a wide
variety of fine-grained dishes, and Food-11 as a dataset of high-level food
categories, where our approach outperforms the independent CNN models.
|
[
{
"created": "Thu, 14 Sep 2017 16:35:40 GMT",
"version": "v1"
}
] |
2018-01-23
|
[
[
"Aguilar",
"Eduardo",
""
],
[
"Bolaños",
"Marc",
""
],
[
"Radeva",
"Petia",
""
]
] |
With the arrival of convolutional neural networks, the complex problem of food recognition has experienced an important improvement in recent years. The best results have been obtained using methods based on very deep convolutional neural networks, which show that the deeper the model, the better the classification accuracy that will be obtained. However, very deep neural networks may suffer from the overfitting problem. In this paper, we propose a combination of multiple classifiers based on different convolutional models that complement each other and thus, achieve an improvement in performance. The evaluation of our approach is done on two public datasets: Food-101 as a dataset with a wide variety of fine-grained dishes, and Food-11 as a dataset of high-level food categories, where our approach outperforms the independent CNN models.
|
1707.06885
|
Felix Stahlberg
|
Felix Stahlberg, Eva Hasler, Danielle Saunders and Bill Byrne
|
SGNMT -- A Flexible NMT Decoding Platform for Quick Prototyping of New
Models and Search Strategies
|
Accepted as EMNLP 2017 demo paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SGNMT, our experimental platform for machine
translation research. SGNMT provides a generic interface to neural and symbolic
scoring modules (predictors) with left-to-right semantics, such as translation
models like NMT, language models, translation lattices, $n$-best lists or other
kinds of scores and constraints. Predictors can be combined with other
predictors to form complex decoding tasks. SGNMT implements a number of search
strategies for traversing the space spanned by the predictors which are
appropriate for different predictor constellations. Adding new predictors or
decoding strategies is particularly easy, making it a very efficient tool for
prototyping new research ideas. SGNMT is actively being used by students in the
MPhil program in Machine Learning, Speech and Language Technology at the
University of Cambridge for course work and theses, as well as for most of the
research work in our group.
|
[
{
"created": "Fri, 21 Jul 2017 13:14:25 GMT",
"version": "v1"
}
] |
2017-07-24
|
[
[
"Stahlberg",
"Felix",
""
],
[
"Hasler",
"Eva",
""
],
[
"Saunders",
"Danielle",
""
],
[
"Byrne",
"Bill",
""
]
] |
This paper introduces SGNMT, our experimental platform for machine translation research. SGNMT provides a generic interface to neural and symbolic scoring modules (predictors) with left-to-right semantics, such as translation models like NMT, language models, translation lattices, $n$-best lists or other kinds of scores and constraints. Predictors can be combined with other predictors to form complex decoding tasks. SGNMT implements a number of search strategies for traversing the space spanned by the predictors which are appropriate for different predictor constellations. Adding new predictors or decoding strategies is particularly easy, making it a very efficient tool for prototyping new research ideas. SGNMT is actively being used by students in the MPhil program in Machine Learning, Speech and Language Technology at the University of Cambridge for course work and theses, as well as for most of the research work in our group.
|
1906.12091
|
Quanming Yao
|
Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh
|
Efficient Neural Interaction Function Search for Collaborative Filtering
|
Accepted to WWW 2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In collaborative filtering (CF), interaction function (IFC) plays the
important role of capturing interactions among items and users. The most
popular IFC is the inner product, which has been successfully used in low-rank
matrix factorization. However, interactions in real-world applications can be
highly complex. Thus, other operations (such as plus and concatenation), which
may potentially offer better performance, have been proposed. Nevertheless, it
is still hard for existing IFCs to have consistently good performance across
different application scenarios. Motivated by the recent success of automated
machine learning (AutoML), we propose in this paper the search for simple
neural interaction functions (SIF) in CF. By examining and generalizing
existing CF approaches, an expressive SIF search space is designed and
represented as a structured multi-layer perceptron. We propose a one-shot
search algorithm that simultaneously updates both the architecture and learning
parameters. Experimental results demonstrate that the proposed method can be
much more efficient than popular AutoML approaches, can obtain much better
prediction performance than state-of-the-art CF approaches, and can discover
distinct IFCs for different data sets and tasks.
|
[
{
"created": "Fri, 28 Jun 2019 08:37:02 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jan 2020 03:48:44 GMT",
"version": "v2"
},
{
"created": "Sun, 5 Apr 2020 11:45:17 GMT",
"version": "v3"
}
] |
2020-04-07
|
[
[
"Yao",
"Quanming",
""
],
[
"Chen",
"Xiangning",
""
],
[
"Kwok",
"James",
""
],
[
"Li",
"Yong",
""
],
[
"Hsieh",
"Cho-Jui",
""
]
] |
In collaborative filtering (CF), the interaction function (IFC) plays the important role of capturing interactions among items and users. The most popular IFC is the inner product, which has been successfully used in low-rank matrix factorization. However, interactions in real-world applications can be highly complex. Thus, other operations (such as plus and concatenation), which may potentially offer better performance, have been proposed. Nevertheless, it is still hard for existing IFCs to have consistently good performance across different application scenarios. Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF. By examining and generalizing existing CF approaches, an expressive SIF search space is designed and represented as a structured multi-layer perceptron. We propose a one-shot search algorithm that simultaneously updates both the architecture and learning parameters. Experimental results demonstrate that the proposed method can be much more efficient than popular AutoML approaches, can obtain much better prediction performance than state-of-the-art CF approaches, and can discover distinct IFCs for different data sets and tasks.
|
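As a hedged illustration of the search space described in the abstract above (not the authors' SIF implementation), the candidate interaction functions it mentions — inner product, element-wise plus, and concatenation — can be sketched over user/item embeddings. All function names and shapes below are hypothetical:

```python
import numpy as np

# Toy sketch of candidate interaction functions (IFCs) over user/item
# embeddings in collaborative filtering. Illustrative only, not SIF code.
rng = np.random.default_rng(0)
d = 8
u, v = rng.normal(size=d), rng.normal(size=d)

def inner(u, v):
    # classic low-rank matrix-factorization IFC
    return float(u @ v)

def plus(u, v):
    # element-wise sum, scored here by a trivial (all-ones) linear head
    return float((u + v).sum())

def concat(u, v):
    # concatenation fed to a (here all-ones) linear head
    return float(np.concatenate([u, v]) @ np.ones(2 * d))

# a search procedure would choose among such candidates per data set/task
candidates = {"inner": inner, "plus": plus, "concat": concat}
scores = {name: f(u, v) for name, f in candidates.items()}
```

In a real AutoML search, the linear heads would be learned jointly with the choice of operation rather than fixed as above.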
2307.02590
|
Salvatore Spina Dr.
|
Salvatore Spina
|
Homo-Loggatus. The anthropological condition of historians in the
digital world
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Computerization has created a digital ecological niche where humans live in a
state of interconnection that modifies their epigenetics. Within this
hyper-datafied virtual space, the logged-in agent enhances their intellectual
and rational abilities, giving rise to a new cognitive entity. Humans are
evolving towards a new anthropological status that shifts the terms of the
Digital History debate from History to the historian, compelling the latter to
reflect on the positions of Fichte and Schelling regarding the mind-body-world
relationship (ecological niche). This reflection leads to the possibility of
overcoming the crisis of History imposed by presentism and the necessity of
redefining the research methodology based on the new vision of the
interconnection between the mind and the digital niche as an investigative
tool.
|
[
{
"created": "Wed, 5 Jul 2023 18:38:17 GMT",
"version": "v1"
}
] |
2023-07-07
|
[
[
"Spina",
"Salvatore",
""
]
] |
Computerization has created a digital ecological niche where humans live in a state of interconnection that modifies their epigenetics. Within this hyper-datafied virtual space, the logged-in agent enhances their intellectual and rational abilities, giving rise to a new cognitive entity. Humans are evolving towards a new anthropological status that shifts the terms of the Digital History debate from History to the historian, compelling the latter to reflect on the positions of Fichte and Schelling regarding the mind-body-world relationship (ecological niche). This reflection leads to the possibility of overcoming the crisis of History imposed by presentism and the necessity of redefining the research methodology based on the new vision of the interconnection between the mind and the digital niche as an investigative tool.
|
1903.04253
|
Georges Younes Mr.
|
Georges Younes, Daniel Asmar, and John Zelek
|
A Unified Formulation for Visual Odometry
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular Odometry systems can be broadly categorized as being either Direct,
Indirect, or a hybrid of both. While Indirect systems process an alternative
image representation to compute geometric residuals, Direct methods process the
image pixels directly to generate photometric residuals. Both paradigms have
distinct but often complementary properties. This paper presents a Unified
Formulation for Visual Odometry, referred to as UFVO, with the following key
contributions: (1) a tight coupling of photometric (Direct) and geometric
(Indirect) measurements using a joint multi-objective optimization, (2) the use
of a utility function as a decision maker that incorporates prior knowledge on
both paradigms, (3) descriptor sharing, where a feature can have more than one
type of descriptor and its different descriptors are used for tracking and
mapping, (4) the depth estimation of both corner features and pixel features
within the same map using an inverse depth parametrization, and (5) a corner
and pixel selection strategy that extracts both types of information, while
promoting a uniform distribution over the image domain. Experiments show that
our proposed system can handle large inter-frame motions, inherits the
sub-pixel accuracy of direct methods, can run efficiently in real-time, can
generate an Indirect map representation at a marginal computational cost when
compared to traditional Indirect systems, all while outperforming state of the
art in Direct, Indirect and hybrid systems.
|
[
{
"created": "Mon, 11 Mar 2019 12:44:14 GMT",
"version": "v1"
}
] |
2019-03-12
|
[
[
"Younes",
"Georges",
""
],
[
"Asmar",
"Daniel",
""
],
[
"Zelek",
"John",
""
]
] |
Monocular Odometry systems can be broadly categorized as being either Direct, Indirect, or a hybrid of both. While Indirect systems process an alternative image representation to compute geometric residuals, Direct methods process the image pixels directly to generate photometric residuals. Both paradigms have distinct but often complementary properties. This paper presents a Unified Formulation for Visual Odometry, referred to as UFVO, with the following key contributions: (1) a tight coupling of photometric (Direct) and geometric (Indirect) measurements using a joint multi-objective optimization, (2) the use of a utility function as a decision maker that incorporates prior knowledge on both paradigms, (3) descriptor sharing, where a feature can have more than one type of descriptor and its different descriptors are used for tracking and mapping, (4) the depth estimation of both corner features and pixel features within the same map using an inverse depth parametrization, and (5) a corner and pixel selection strategy that extracts both types of information, while promoting a uniform distribution over the image domain. Experiments show that our proposed system can handle large inter-frame motions, inherits the sub-pixel accuracy of direct methods, can run efficiently in real-time, can generate an Indirect map representation at a marginal computational cost when compared to traditional Indirect systems, all while outperforming state of the art in Direct, Indirect and hybrid systems.
|
2311.11289
|
Ping Li PhD
|
Ping Li, Chenhan Zhang, Zheng Yang, Xianghua Xu, Mingli Song
|
Pair-wise Layer Attention with Spatial Masking for Video Prediction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Video prediction yields future frames by employing the historical frames and
has exhibited its great potential in many applications, e.g., meteorological
prediction, and autonomous driving. Previous works often decode the ultimate
high-level semantic features to future frames without texture details, which
deteriorates the prediction quality. Motivated by this, we develop a Pair-wise
Layer Attention (PLA) module to enhance the layer-wise semantic dependency of
the feature maps derived from the U-shape structure in Translator, by coupling
low-level visual cues and high-level features. Hence, the texture details of
predicted frames are enriched. Moreover, most existing methods capture the
spatiotemporal dynamics by Translator, but fail to sufficiently utilize the
spatial features of Encoder. This inspires us to design a Spatial Masking (SM)
module to mask partial encoding features during pretraining, which adds the
visibility of remaining feature pixels by Decoder. To this end, we present a
Pair-wise Layer Attention with Spatial Masking (PLA-SM) framework for video
prediction to capture the spatiotemporal dynamics, which reflect the motion
trend. Extensive experiments and rigorous ablation studies on five benchmarks
demonstrate the advantages of the proposed approach. The code is available at
GitHub.
|
[
{
"created": "Sun, 19 Nov 2023 10:29:05 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Li",
"Ping",
""
],
[
"Zhang",
"Chenhan",
""
],
[
"Yang",
"Zheng",
""
],
[
"Xu",
"Xianghua",
""
],
[
"Song",
"Mingli",
""
]
] |
Video prediction yields future frames by employing the historical frames and has exhibited its great potential in many applications, e.g., meteorological prediction, and autonomous driving. Previous works often decode the ultimate high-level semantic features to future frames without texture details, which deteriorates the prediction quality. Motivated by this, we develop a Pair-wise Layer Attention (PLA) module to enhance the layer-wise semantic dependency of the feature maps derived from the U-shape structure in Translator, by coupling low-level visual cues and high-level features. Hence, the texture details of predicted frames are enriched. Moreover, most existing methods capture the spatiotemporal dynamics by Translator, but fail to sufficiently utilize the spatial features of Encoder. This inspires us to design a Spatial Masking (SM) module to mask partial encoding features during pretraining, which adds the visibility of remaining feature pixels by Decoder. To this end, we present a Pair-wise Layer Attention with Spatial Masking (PLA-SM) framework for video prediction to capture the spatiotemporal dynamics, which reflect the motion trend. Extensive experiments and rigorous ablation studies on five benchmarks demonstrate the advantages of the proposed approach. The code is available at GitHub.
|
2306.00432
|
Shreyas Pai
|
M\'elanie Cambus, Fabian Kuhn, Shreyas Pai and Jara Uitto
|
Time and Space Optimal Massively Parallel Algorithm for the 2-Ruling Set
Problem
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a constant-round algorithm for the $2$-ruling set
problem in the Congested Clique model. As a direct consequence, we obtain a
constant-round algorithm in the MPC model with linear space-per-machine and
optimal total space. Our results improve on the $O(\log \log \log n)$-round
algorithm by [HPS, DISC'14] and the $O(\log \log \Delta)$-round algorithm by
[GGKMR, PODC'18]. Our techniques can also be applied to the semi-streaming
model to obtain an $O(1)$-pass algorithm.
Our main technical contribution is a novel sampling procedure that returns a
small subgraph such that almost all nodes in the input graph are adjacent to
the sampled subgraph. An MIS on the sampled subgraph provides a $2$-ruling set
for a large fraction of the input graph. As a technical challenge, we must
handle the remaining part of the graph, which might still be relatively large.
We overcome this challenge by showing useful structural properties of the
remaining graph and show that running our process twice yields a $2$-ruling set
of the original input graph with high probability.
|
[
{
"created": "Thu, 1 Jun 2023 08:19:19 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Oct 2023 13:15:55 GMT",
"version": "v2"
}
] |
2023-10-11
|
[
[
"Cambus",
"Mélanie",
""
],
[
"Kuhn",
"Fabian",
""
],
[
"Pai",
"Shreyas",
""
],
[
"Uitto",
"Jara",
""
]
] |
In this work, we present a constant-round algorithm for the $2$-ruling set problem in the Congested Clique model. As a direct consequence, we obtain a constant-round algorithm in the MPC model with linear space-per-machine and optimal total space. Our results improve on the $O(\log \log \log n)$-round algorithm by [HPS, DISC'14] and the $O(\log \log \Delta)$-round algorithm by [GGKMR, PODC'18]. Our techniques can also be applied to the semi-streaming model to obtain an $O(1)$-pass algorithm. Our main technical contribution is a novel sampling procedure that returns a small subgraph such that almost all nodes in the input graph are adjacent to the sampled subgraph. An MIS on the sampled subgraph provides a $2$-ruling set for a large fraction of the input graph. As a technical challenge, we must handle the remaining part of the graph, which might still be relatively large. We overcome this challenge by showing useful structural properties of the remaining graph and show that running our process twice yields a $2$-ruling set of the original input graph with high probability.
|
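For readers unfamiliar with the object in the abstract above: a 2-ruling set $S$ is an independent set such that every node is within distance 2 of some node in $S$. The sketch below is a hedged, sequential illustration of that definition only; it is not the paper's constant-round parallel algorithm.

```python
# Toy sequential construction of a 2-ruling set on an adjacency-list graph.
# Illustrative only; the paper's contribution is a constant-round MPC/
# Congested Clique algorithm, which this sketch is not.
def two_ruling_set(adj):
    S = set()

    def near(v):
        # is v within distance <= 2 of the current set S?
        if v in S:
            return True
        for u in adj[v]:
            if u in S or any(w in S for w in adj[u]):
                return True
        return False

    for v in adj:          # any fixed processing order
        if not near(v):
            S.add(v)       # no neighbor of v is in S here, so S stays independent
    return S

# path graph 0-1-2-3-4: nodes 0 and 3 rule every node within distance 2
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ruling = two_ruling_set(path)
```

On the 5-node path this returns `{0, 3}`: the set is independent, and every node lies within two hops of it.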
2304.08967
|
Nabeel Gillani
|
Nabeel Gillani and Doug Beeferman and Cassandra Overney and Christine
Vega-Pourheydarian and Deb Roy
|
All a-board: sharing educational data science research with school
districts
|
In Proceedings of the Tenth ACM Conference on Learning at Scale (L@S
'23)
| null | null | null |
cs.CY stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Educational data scientists often conduct research with the hopes of
translating findings into lasting change through policy, civil society, or
other channels. However, the bridge from research to practice can be fraught
with sociopolitical frictions that impede, or altogether block, such
translations -- especially when they are contentious or otherwise difficult to
achieve. Focusing on one entrenched educational equity issue in US public
schools -- racial and ethnic segregation -- we conduct randomized email
outreach experiments and surveys to explore how local school districts respond
to algorithmically-generated school catchment areas ("attendance boundaries")
designed to foster more diverse and integrated schools. Cold email outreach to
approximately 4,320 elected school board members across over 800 school
districts informing them of potential boundary changes reveals a large average
open rate of nearly 40%, but a relatively small click-through rate of 2.5% to
an interactive dashboard depicting such changes. Board members, however, appear
responsive to different messaging techniques -- particularly those that
dovetail issues of racial and ethnic diversity with other top-of-mind issues
(like school capacity planning). On the other hand, media coverage of the
research drives more dashboard engagement, especially in more segregated
districts. A small but rich set of survey responses from school board and
community members across several districts identify data and operational
bottlenecks to implementing boundary changes to foster more diverse schools,
but also share affirmative comments on the potential viability of such changes.
Together, our findings may support educational data scientists in more
effectively disseminating research that aims to bridge educational inequalities
through systems-level change.
|
[
{
"created": "Tue, 18 Apr 2023 13:03:05 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jul 2023 15:45:30 GMT",
"version": "v2"
}
] |
2023-07-06
|
[
[
"Gillani",
"Nabeel",
""
],
[
"Beeferman",
"Doug",
""
],
[
"Overney",
"Cassandra",
""
],
[
"Vega-Pourheydarian",
"Christine",
""
],
[
"Roy",
"Deb",
""
]
] |
Educational data scientists often conduct research with the hopes of translating findings into lasting change through policy, civil society, or other channels. However, the bridge from research to practice can be fraught with sociopolitical frictions that impede, or altogether block, such translations -- especially when they are contentious or otherwise difficult to achieve. Focusing on one entrenched educational equity issue in US public schools -- racial and ethnic segregation -- we conduct randomized email outreach experiments and surveys to explore how local school districts respond to algorithmically-generated school catchment areas ("attendance boundaries") designed to foster more diverse and integrated schools. Cold email outreach to approximately 4,320 elected school board members across over 800 school districts informing them of potential boundary changes reveals a large average open rate of nearly 40%, but a relatively small click-through rate of 2.5% to an interactive dashboard depicting such changes. Board members, however, appear responsive to different messaging techniques -- particularly those that dovetail issues of racial and ethnic diversity with other top-of-mind issues (like school capacity planning). On the other hand, media coverage of the research drives more dashboard engagement, especially in more segregated districts. A small but rich set of survey responses from school board and community members across several districts identify data and operational bottlenecks to implementing boundary changes to foster more diverse schools, but also share affirmative comments on the potential viability of such changes. Together, our findings may support educational data scientists in more effectively disseminating research that aims to bridge educational inequalities through systems-level change.
|
2110.13214
|
Pan Lu
|
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou
Yu, Xiaodan Liang, Song-Chun Zhu
|
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual
Language Reasoning
|
Corrected typos. Accepted to NeurIPS 2021, 27 pages, 18 figures. Data
and code are available at https://iconqa.github.io
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Current visual question answering (VQA) tasks mainly consider answering
human-annotated questions for natural images. However, aside from natural
images, abstract diagrams with semantic richness are still understudied in
visual understanding and reasoning research. In this work, we introduce a new
challenge of Icon Question Answering (IconQA) with the goal of answering a
question in an icon image context. We release IconQA, a large-scale dataset
that consists of 107,439 questions and three sub-tasks: multi-image-choice,
multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by
real-world diagram word problems that highlight the importance of abstract
diagram understanding and comprehensive cognitive reasoning. Thus, IconQA
requires not only perception skills like object recognition and text
understanding, but also diverse cognitive reasoning skills, such as geometric
reasoning, commonsense reasoning, and arithmetic reasoning. To facilitate
potential IconQA models to learn semantic representations for icon images, we
further release an icon dataset Icon645 which contains 645,687 colored icons on
377 classes. We conduct extensive user studies and blind experiments and
reproduce a wide range of advanced VQA methods to benchmark the IconQA task.
Also, we develop a strong IconQA baseline Patch-TRM that applies a pyramid
cross-modal Transformer with input diagram embeddings pre-trained on the icon
dataset. IconQA and Icon645 are available at https://iconqa.github.io.
|
[
{
"created": "Mon, 25 Oct 2021 18:52:26 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Nov 2021 00:44:53 GMT",
"version": "v2"
},
{
"created": "Sun, 20 Feb 2022 01:09:40 GMT",
"version": "v3"
},
{
"created": "Mon, 25 Jul 2022 04:05:29 GMT",
"version": "v4"
}
] |
2022-07-26
|
[
[
"Lu",
"Pan",
""
],
[
"Qiu",
"Liang",
""
],
[
"Chen",
"Jiaqi",
""
],
[
"Xia",
"Tony",
""
],
[
"Zhao",
"Yizhou",
""
],
[
"Zhang",
"Wei",
""
],
[
"Yu",
"Zhou",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions for natural images. However, aside from natural images, abstract diagrams with semantic richness are still understudied in visual understanding and reasoning research. In this work, we introduce a new challenge of Icon Question Answering (IconQA) with the goal of answering a question in an icon image context. We release IconQA, a large-scale dataset that consists of 107,439 questions and three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by real-world diagram word problems that highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning. Thus, IconQA requires not only perception skills like object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning. To facilitate potential IconQA models to learn semantic representations for icon images, we further release an icon dataset Icon645 which contains 645,687 colored icons on 377 classes. We conduct extensive user studies and blind experiments and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. Also, we develop a strong IconQA baseline Patch-TRM that applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
|
2304.06346
|
Kangliang Liu
|
Kangliang Liu, Xiangcheng Du, Sijie Liu, Yingbin Zheng, Xingjiao Wu,
Cheng Jin
|
DDT: Dual-branch Deformable Transformer for Image Denoising
|
The code is avaliable at: https://github.com/Merenguelkl/DDT
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer is beneficial for image denoising tasks since it can model
long-range dependencies to overcome the limitations presented by inductive
convolutional biases. However, directly applying the transformer structure to
remove noise is challenging because its complexity grows quadratically with the
spatial resolution. In this paper, we propose an efficient Dual-branch
Deformable Transformer (DDT) denoising network which captures both local and
global interactions in parallel. We divide features with a fixed patch size and
a fixed number of patches in local and global branches, respectively. In
addition, we apply deformable attention operation in both branches, which helps
the network focus on more important regions and further reduces computational
complexity. We conduct extensive experiments on real-world and synthetic
denoising tasks, and the proposed DDT achieves state-of-the-art performance
with significantly fewer computational costs.
|
[
{
"created": "Thu, 13 Apr 2023 08:54:44 GMT",
"version": "v1"
}
] |
2023-04-14
|
[
[
"Liu",
"Kangliang",
""
],
[
"Du",
"Xiangcheng",
""
],
[
"Liu",
"Sijie",
""
],
[
"Zheng",
"Yingbin",
""
],
[
"Wu",
"Xingjiao",
""
],
[
"Jin",
"Cheng",
""
]
] |
Transformer is beneficial for image denoising tasks since it can model long-range dependencies to overcome the limitations presented by inductive convolutional biases. However, directly applying the transformer structure to remove noise is challenging because its complexity grows quadratically with the spatial resolution. In this paper, we propose an efficient Dual-branch Deformable Transformer (DDT) denoising network which captures both local and global interactions in parallel. We divide features with a fixed patch size and a fixed number of patches in local and global branches, respectively. In addition, we apply deformable attention operation in both branches, which helps the network focus on more important regions and further reduces computational complexity. We conduct extensive experiments on real-world and synthetic denoising tasks, and the proposed DDT achieves state-of-the-art performance with significantly fewer computational costs.
|
1906.03345
|
Qinbo Li
|
Qinbo Li, Adam G. D'Souza, Cason Schmit, Hye-Chung Kum
|
Increasing Transparent and Accountable Use of Data by Quantifying the
Actual Privacy Risk in Interactive Record Linkage
|
7 pages
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Record linkage refers to the task of integrating data from two or more
databases without a common identifier. MINDFIRL (MInimum Necessary Disclosure
For Interactive Record Linkage) is a software system that demonstrates the
tradeoff between utility and privacy in interactive record linkage. Due to the
need to access personally identifiable information (PII) to accurately assess
whether different records refer to the same person in heterogeneous databases,
privacy is a major concern in interactive record linkage. MINDFIRL supports
interactive record linkage while minimizing the privacy risk by (1) using
pseudonyms to separate the identifying information from the sensitive
information, (2) dynamically disclosing only the minimum necessary information
incrementally, as needed on-demand at the point of decision, and (3)
quantifying the risk due to the needed information disclosure to support
transparency, reasoning, communication, and decisions on the privacy-utility
trade-off.
In this paper we present an overview of the MINDFIRL system and the
k-Anonymized Privacy Risk (KAPR) score used to measure the privacy risk based
on the disclosed information. We prove that the KAPR score is a norm meeting all
the desirable properties for a risk score for interactive record linkage.
|
[
{
"created": "Fri, 7 Jun 2019 22:00:08 GMT",
"version": "v1"
}
] |
2019-06-11
|
[
[
"Li",
"Qinbo",
""
],
[
"D'Souza",
"Adam G.",
""
],
[
"Schmit",
"Cason",
""
],
[
"Kum",
"Hye-Chung",
""
]
] |
Record linkage refers to the task of integrating data from two or more databases without a common identifier. MINDFIRL (MInimum Necessary Disclosure For Interactive Record Linkage) is a software system that demonstrates the tradeoff between utility and privacy in interactive record linkage. Due to the need to access personally identifiable information (PII) to accurately assess whether different records refer to the same person in heterogeneous databases, privacy is a major concern in interactive record linkage. MINDFIRL supports interactive record linkage while minimizing the privacy risk by (1) using pseudonyms to separate the identifying information from the sensitive information, (2) dynamically disclosing only the minimum necessary information incrementally, as needed on-demand at the point of decision, and (3) quantifying the risk due to the needed information disclosure to support transparency, reasoning, communication, and decisions on the privacy-utility trade-off. In this paper we present an overview of the MINDFIRL system and the k-Anonymized Privacy Risk (KAPR) score used to measure the privacy risk based on the disclosed information. We prove that the KAPR score is a norm meeting all the desirable properties for a risk score for interactive record linkage.
|
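Points (1) and (2) of the abstract above — pseudonyms plus incremental, on-demand disclosure — can be illustrated with a toy sketch. This is not the MINDFIRL system; the class and method names are hypothetical:

```python
# Toy sketch of minimum-necessary incremental disclosure: a PII field is
# shown under a pseudonym, fully masked, and reveals one character at a
# time only when the reviewer requests it at the point of decision.
class IncrementalField:
    def __init__(self, pseudonym, value):
        self.pseudonym = pseudonym   # identifier shown instead of real PII
        self._value = value
        self._revealed = 0

    def view(self):
        hidden = len(self._value) - self._revealed
        return self._value[:self._revealed] + "*" * hidden

    def reveal_more(self):
        # each call discloses exactly one more character, never past the end
        self._revealed = min(self._revealed + 1, len(self._value))
        return self.view()

field = IncrementalField(pseudonym="P-1042", value="Johnson")
first = field.view()          # fully masked
second = field.reveal_more()  # one character disclosed
```

A risk score in the spirit of KAPR would then be a function of how many characters have been disclosed across all fields and records.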
2403.16699
|
Yuanming Tian
|
Yuanming Tian, Dongxu Li, Chuan Huang, Qingwen Liu, Shengli Zhou
|
Resonant Beam Communications: A New Design Paradigm and Challenges
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Resonant beam communications (RBCom), which adopt oscillating photons between
two separate retroreflectors for information transmission, exhibit potential
advantages over other types of wireless optical communications (WOC). However,
echo interference generated by the modulated beam reflected from the receiver
affects the transmission of the desired information. To tackle this challenge,
a synchronization-based point-to-point RBCom system is proposed to eliminate
the echo interference, and the design for the transmitter and receiver is
discussed. Subsequently, the performance of the proposed RBCom is evaluated and
compared with that of visible light communications (VLC) and free space optical
communications (FOC). Finally, future research directions are outlined and
several implementation challenges of RBCom systems are highlighted.
|
[
{
"created": "Mon, 25 Mar 2024 12:33:10 GMT",
"version": "v1"
}
] |
2024-03-26
|
[
[
"Tian",
"Yuanming",
""
],
[
"Li",
"Dongxu",
""
],
[
"Huang",
"Chuan",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Zhou",
"Shengli",
""
]
] |
Resonant beam communications (RBCom), which adopt oscillating photons between two separate retroreflectors for information transmission, exhibit potential advantages over other types of wireless optical communications (WOC). However, echo interference generated by the modulated beam reflected from the receiver affects the transmission of the desired information. To tackle this challenge, a synchronization-based point-to-point RBCom system is proposed to eliminate the echo interference, and the design for the transmitter and receiver is discussed. Subsequently, the performance of the proposed RBCom is evaluated and compared with that of visible light communications (VLC) and free space optical communications (FOC). Finally, future research directions are outlined and several implementation challenges of RBCom systems are highlighted.
|
1906.01599
|
Marco Bressan
|
Marco Bressan, Stefano Leucci, Alessandro Panconesi
|
Motivo: fast motif counting via succinct color coding and adaptive
sampling
|
13 pages
| null | null | null |
cs.DB cs.DM cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The randomized technique of color coding is behind state-of-the-art
algorithms for estimating graph motif counts. Those algorithms, however, are
not yet capable of scaling well to very large graphs with billions of edges. In
this paper we develop novel tools for the `motif counting via color coding'
framework. As a result, our new algorithm, Motivo, is able to scale well to
larger graphs while at the same time providing more accurate graphlet counts than
ever before. This is achieved thanks to two types of improvements. First, we
design new succinct data structures that support fast common color coding
operations, and a biased coloring trick that trades accuracy versus running
time and memory usage. These adaptations drastically reduce the time and memory
requirements of color coding. Second, we develop an adaptive graphlet sampling
strategy, based on a fractional set cover problem, that breaks the additive
approximation barrier of standard sampling. This strategy gives multiplicative
approximations for all graphlets at once, allowing us to count not only the
most frequent graphlets but also extremely rare ones.
To give an idea of the improvements, in $40$ minutes Motivo counts $7$-node
motifs on a graph with $65$M nodes and $1.8$B edges; this is $30$ and $500$
times larger than the state of the art, respectively in terms of nodes and
edges. On the accuracy side, in one hour Motivo produces accurate counts of
$\approx \! 10,000$ distinct $8$-node motifs on graphs where state-of-the-art
algorithms fail even to find the second most frequent motif. Our method
requires just a high-end desktop machine. These results show how color coding
can bring motif mining to the realm of truly massive graphs using only ordinary
hardware.
|
[
{
"created": "Tue, 4 Jun 2019 17:22:07 GMT",
"version": "v1"
}
] |
2019-06-05
|
[
[
"Bressan",
"Marco",
""
],
[
"Leucci",
"Stefano",
""
],
[
"Panconesi",
"Alessandro",
""
]
] |
The randomized technique of color coding is behind state-of-the-art algorithms for estimating graph motif counts. Those algorithms, however, are not yet capable of scaling well to very large graphs with billions of edges. In this paper we develop novel tools for the `motif counting via color coding' framework. As a result, our new algorithm, Motivo, is able to scale well to larger graphs while at the same time providing more accurate graphlet counts than ever before. This is achieved thanks to two types of improvements. First, we design new succinct data structures that support fast common color coding operations, and a biased coloring trick that trades accuracy versus running time and memory usage. These adaptations drastically reduce the time and memory requirements of color coding. Second, we develop an adaptive graphlet sampling strategy, based on a fractional set cover problem, that breaks the additive approximation barrier of standard sampling. This strategy gives multiplicative approximations for all graphlets at once, allowing us to count not only the most frequent graphlets but also extremely rare ones. To give an idea of the improvements, in $40$ minutes Motivo counts $7$-node motifs on a graph with $65$M nodes and $1.8$B edges; this is $30$ and $500$ times larger than the state of the art, respectively in terms of nodes and edges. On the accuracy side, in one hour Motivo produces accurate counts of $\approx \! 10,000$ distinct $8$-node motifs on graphs where state-of-the-art algorithms fail even to find the second most frequent motif. Our method requires just a high-end desktop machine. These results show how color coding can bring motif mining to the realm of truly massive graphs using only ordinary hardware.
|
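The color-coding idea behind the abstract above can be sketched in a few lines: randomly color nodes with $k$ colors, then count "colorful" occurrences (all colors distinct) by dynamic programming over color subsets. This is a hedged toy for $k$-node paths only, not Motivo's succinct data structures or adaptive sampling:

```python
import random

def colorful_paths(adj, k, colors=None, seed=0):
    """Count k-node simple paths whose nodes all receive distinct colors.

    Toy sketch of color coding; `colors` maps node -> color in range(k).
    """
    if colors is None:
        rng = random.Random(seed)
        colors = {v: rng.randrange(k) for v in adj}
    # dp[v] maps a frozenset of colors to the number of colorful paths
    # ending at v that use exactly those colors
    dp = {v: {frozenset([colors[v]]): 1} for v in adj}
    for _ in range(k - 1):
        nxt = {v: {} for v in adj}
        for v in adj:
            for u in adj[v]:
                for used, cnt in dp[u].items():
                    if colors[v] not in used:   # extend only if colorful
                        key = used | {colors[v]}
                        nxt[v][key] = nxt[v].get(key, 0) + cnt
        dp = nxt
    # each undirected path is counted once from each of its two endpoints
    return sum(c for v in adj for c in dp[v].values()) // 2

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
# with a rainbow coloring, every 3-node path is colorful; the triangle has 3
n_paths = colorful_paths(triangle, 3, colors={0: 0, 1: 1, 2: 2})
```

In the full technique, counts are averaged over many random colorings to correct for the probability that a given occurrence happens to be colorful.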
2009.14409
|
Seongmin Lee
|
Hyun Dong Lee, Seongmin Lee and U Kang
|
AUBER: Automated BERT Regularization
| null | null |
10.1371/journal.pone.0253241
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we effectively regularize BERT? Although BERT proves its
effectiveness in various downstream natural language processing tasks, it often
overfits when there are only a small number of training instances. A promising
direction for regularizing BERT is to prune its attention heads according to a
proxy score for head importance. However, heuristic-based methods are usually
suboptimal since they predetermine the order by which attention heads are
pruned. In order to overcome such a limitation, we propose AUBER, an effective
regularization method that leverages reinforcement learning to automatically
prune attention heads from BERT. Instead of depending on heuristics or
rule-based policies, AUBER learns a pruning policy that determines which
attention heads should or should not be pruned for regularization. Experimental
results show that AUBER outperforms existing pruning methods by achieving up to
10% better accuracy. In addition, our ablation study empirically demonstrates
the effectiveness of our design choices for AUBER.
|
[
{
"created": "Wed, 30 Sep 2020 03:32:55 GMT",
"version": "v1"
}
] |
2021-09-15
|
[
[
"Lee",
"Hyun Dong",
""
],
[
"Lee",
"Seongmin",
""
],
[
"Kang",
"U",
""
]
] |
How can we effectively regularize BERT? Although BERT proves its effectiveness in various downstream natural language processing tasks, it often overfits when there are only a small number of training instances. A promising direction to regularize BERT is to prune its attention heads using a proxy score for head importance. However, heuristic-based methods are usually suboptimal since they predetermine the order by which attention heads are pruned. In order to overcome such a limitation, we propose AUBER, an effective regularization method that leverages reinforcement learning to automatically prune attention heads from BERT. Instead of depending on heuristics or rule-based policies, AUBER learns a pruning policy that determines which attention heads should or should not be pruned for regularization. Experimental results show that AUBER outperforms existing pruning methods by achieving up to 10% better accuracy. In addition, our ablation study empirically demonstrates the effectiveness of our design choices for AUBER.
|
2104.06645
|
Leon Bergen
|
Leon Bergen, Dzmitry Bahdanau, Timothy J. O'Donnell
|
Jointly Learning Truth-Conditional Denotations and Groundings using
Parallel Attention
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a model that jointly learns the denotations of words together with
their groundings using a truth-conditional semantics. Our model builds on the
neurosymbolic approach of Mao et al. (2019), learning to ground objects in the
CLEVR dataset (Johnson et al., 2017) using a novel parallel attention
mechanism. The model achieves state of the art performance on visual question
answering, learning to detect and ground objects with question performance as
the only training signal. We also show that the model is able to learn flexible
non-canonical groundings just by adjusting answers to questions in the training
set.
|
[
{
"created": "Wed, 14 Apr 2021 06:33:27 GMT",
"version": "v1"
}
] |
2021-04-15
|
[
[
"Bergen",
"Leon",
""
],
[
"Bahdanau",
"Dzmitry",
""
],
[
"O'Donnell",
"Timothy J.",
""
]
] |
We present a model that jointly learns the denotations of words together with their groundings using a truth-conditional semantics. Our model builds on the neurosymbolic approach of Mao et al. (2019), learning to ground objects in the CLEVR dataset (Johnson et al., 2017) using a novel parallel attention mechanism. The model achieves state of the art performance on visual question answering, learning to detect and ground objects with question performance as the only training signal. We also show that the model is able to learn flexible non-canonical groundings just by adjusting answers to questions in the training set.
|
2002.03014
|
Benjamin Stevens
|
Ben Stevens, Tim Colonius
|
FiniteNet: A Fully Convolutional LSTM Network Architecture for
Time-Dependent Partial Differential Equations
|
8 pages, 12 figures. Under review for ICML 2020
| null | null | null |
cs.LG cs.NA math.NA physics.comp-ph physics.flu-dyn stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a machine learning approach for reducing the error
when numerically solving time-dependent partial differential equations (PDE).
We use a fully convolutional LSTM network to exploit the spatiotemporal
dynamics of PDEs. The neural network serves to enhance finite-difference and
finite-volume methods (FDM/FVM) that are commonly used to solve PDEs, allowing
us to maintain guarantees on the order of convergence of our method. We train
the network on simulation data, and show that our network can reduce error by a
factor of 2 to 3 compared to the baseline algorithms. We demonstrate our method
on three PDEs that each feature qualitatively different dynamics. We look at
the linear advection equation, which propagates its initial conditions at a
constant speed, the inviscid Burgers' equation, which develops shockwaves, and
the Kuramoto-Sivashinsky (KS) equation, which is chaotic.
|
[
{
"created": "Fri, 7 Feb 2020 21:18:46 GMT",
"version": "v1"
}
] |
2020-02-11
|
[
[
"Stevens",
"Ben",
""
],
[
"Colonius",
"Tim",
""
]
] |
In this work, we present a machine learning approach for reducing the error when numerically solving time-dependent partial differential equations (PDE). We use a fully convolutional LSTM network to exploit the spatiotemporal dynamics of PDEs. The neural network serves to enhance finite-difference and finite-volume methods (FDM/FVM) that are commonly used to solve PDEs, allowing us to maintain guarantees on the order of convergence of our method. We train the network on simulation data, and show that our network can reduce error by a factor of 2 to 3 compared to the baseline algorithms. We demonstrate our method on three PDEs that each feature qualitatively different dynamics. We look at the linear advection equation, which propagates its initial conditions at a constant speed, the inviscid Burgers' equation, which develops shockwaves, and the Kuramoto-Sivashinsky (KS) equation, which is chaotic.
|
1710.03720
|
Paul Muntean
|
Paul Muntean, Jens Grossklags and Claudia Eckert
|
Practical Integer Overflow Prevention
|
20 pages
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integer overflows in commodity software are a main source for software bugs,
which can result in exploitable memory corruption vulnerabilities and may
eventually contribute to powerful software based exploits, i.e., code reuse
attacks (CRAs).
In this paper, we present IntGuard, a tool that can repair integer overflows
with high-quality source code repairs. Specifically, given the source code of a
program, IntGuard first discovers the location of an integer overflow error by
using static source code analysis and satisfiability modulo theories (SMT)
solving. IntGuard then generates integer multi-precision code repairs based on
modular manipulation of SMT constraints as well as an extensible set of
customizable code repair patterns.
We have implemented and evaluated IntGuard with 2052 C programs (approx. 1
Mil. LOC) available in the currently largest open-source test suite for C/C++
programs and with a benchmark containing large and complex programs. The
evaluation results show that IntGuard can repair programs precisely (i.e., no
false positives are accidentally repaired), with low computational and runtime
overhead and very small binary and source code blow-up. In a controlled
experiment, we show that IntGuard is more time-effective and achieves a higher
repair success rate than manually generated code repairs.
|
[
{
"created": "Tue, 10 Oct 2017 16:46:23 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Oct 2017 12:09:04 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Oct 2017 08:07:28 GMT",
"version": "v3"
},
{
"created": "Mon, 16 Oct 2017 15:06:41 GMT",
"version": "v4"
},
{
"created": "Tue, 17 Oct 2017 09:04:20 GMT",
"version": "v5"
},
{
"created": "Mon, 23 Oct 2017 14:04:06 GMT",
"version": "v6"
},
{
"created": "Sun, 29 Oct 2017 11:08:12 GMT",
"version": "v7"
},
{
"created": "Tue, 31 Oct 2017 07:47:12 GMT",
"version": "v8"
},
{
"created": "Fri, 3 Nov 2017 10:09:05 GMT",
"version": "v9"
}
] |
2017-11-06
|
[
[
"Muntean",
"Paul",
""
],
[
"Grossklags",
"Jens",
""
],
[
"Eckert",
"Claudia",
""
]
] |
Integer overflows in commodity software are a main source for software bugs, which can result in exploitable memory corruption vulnerabilities and may eventually contribute to powerful software based exploits, i.e., code reuse attacks (CRAs). In this paper, we present IntGuard, a tool that can repair integer overflows with high-quality source code repairs. Specifically, given the source code of a program, IntGuard first discovers the location of an integer overflow error by using static source code analysis and satisfiability modulo theories (SMT) solving. IntGuard then generates integer multi-precision code repairs based on modular manipulation of SMT constraints as well as an extensible set of customizable code repair patterns. We have implemented and evaluated IntGuard with 2052 C programs (approx. 1 Mil. LOC) available in the currently largest open-source test suite for C/C++ programs and with a benchmark containing large and complex programs. The evaluation results show that IntGuard can repair programs precisely (i.e., no false positives are accidentally repaired), with low computational and runtime overhead and very small binary and source code blow-up. In a controlled experiment, we show that IntGuard is more time-effective and achieves a higher repair success rate than manually generated code repairs.
|
2401.08867
|
Md Atik Ahamed
|
Md Atik Ahamed and Qiang Cheng
|
MambaTab: A Plug-and-Play Model for Learning Tabular Data
|
Accepted by IEEE 7th International Conference on Multimedia
Information Processing and Retrieval (MIPR), 2024
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the prevalence of images and texts in machine learning, tabular data
remains widely used across various domains. Existing deep learning models, such
as convolutional neural networks and transformers, perform well; however, they
demand extensive preprocessing and tuning, limiting accessibility and scalability. This
work introduces an innovative approach based on a structured state-space model
(SSM), MambaTab, for tabular data. SSMs have strong capabilities for
efficiently extracting effective representations from data with long-range
dependencies. MambaTab leverages Mamba, an emerging SSM variant, for end-to-end
supervised learning on tables. Compared to state-of-the-art baselines, MambaTab
delivers superior performance while requiring significantly fewer parameters,
as empirically validated on diverse benchmark datasets. MambaTab's efficiency,
scalability, generalizability, and predictive gains make it a lightweight,
"plug-and-play" solution for diverse tabular data with promise for
enabling wider practical applications.
|
[
{
"created": "Tue, 16 Jan 2024 22:44:12 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2024 19:58:06 GMT",
"version": "v2"
}
] |
2024-06-26
|
[
[
"Ahamed",
"Md Atik",
""
],
[
"Cheng",
"Qiang",
""
]
] |
Despite the prevalence of images and texts in machine learning, tabular data remains widely used across various domains. Existing deep learning models, such as convolutional neural networks and transformers, perform well; however, they demand extensive preprocessing and tuning, limiting accessibility and scalability. This work introduces an innovative approach based on a structured state-space model (SSM), MambaTab, for tabular data. SSMs have strong capabilities for efficiently extracting effective representations from data with long-range dependencies. MambaTab leverages Mamba, an emerging SSM variant, for end-to-end supervised learning on tables. Compared to state-of-the-art baselines, MambaTab delivers superior performance while requiring significantly fewer parameters, as empirically validated on diverse benchmark datasets. MambaTab's efficiency, scalability, generalizability, and predictive gains make it a lightweight, "plug-and-play" solution for diverse tabular data with promise for enabling wider practical applications.
|
1909.06008
|
Zhao Kang
|
Zhao Kang and Zipeng Guo and Shudong Huang and Siying Wang and Wenyu
Chen and Yuanzhang Su and Zenglin Xu
|
Multiple Partitions Aligned Clustering
|
IJCAI 2019
| null | null | null |
cs.LG cs.AI cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-view clustering is an important yet challenging task due to the
difficulty of integrating the information from multiple representations. Most
existing multi-view clustering methods explore the heterogeneous information in
the space where the data points lie. Such common practice may cause significant
information loss because of unavoidable noise or inconsistency among views.
Since different views admit the same cluster structure, the natural space
should be all partitions. Orthogonal to existing techniques, in this paper, we
propose to leverage the multi-view information by fusing partitions.
Specifically, we align each partition to form a consensus cluster indicator
matrix through a distinct rotation matrix. Moreover, a weight is assigned for
each view to account for the clustering capacity differences of views. Finally,
the basic partitions, weights, and consensus clustering are jointly learned in
a unified framework. We demonstrate the effectiveness of our approach on
several real datasets, where significant improvement is found over other
state-of-the-art multi-view clustering methods.
|
[
{
"created": "Fri, 13 Sep 2019 02:45:13 GMT",
"version": "v1"
}
] |
2019-09-16
|
[
[
"Kang",
"Zhao",
""
],
[
"Guo",
"Zipeng",
""
],
[
"Huang",
"Shudong",
""
],
[
"Wang",
"Siying",
""
],
[
"Chen",
"Wenyu",
""
],
[
"Su",
"Yuanzhang",
""
],
[
"Xu",
"Zenglin",
""
]
] |
Multi-view clustering is an important yet challenging task due to the difficulty of integrating the information from multiple representations. Most existing multi-view clustering methods explore the heterogeneous information in the space where the data points lie. Such common practice may cause significant information loss because of unavoidable noise or inconsistency among views. Since different views admit the same cluster structure, the natural space should be all partitions. Orthogonal to existing techniques, in this paper, we propose to leverage the multi-view information by fusing partitions. Specifically, we align each partition to form a consensus cluster indicator matrix through a distinct rotation matrix. Moreover, a weight is assigned for each view to account for the clustering capacity differences of views. Finally, the basic partitions, weights, and consensus clustering are jointly learned in a unified framework. We demonstrate the effectiveness of our approach on several real datasets, where significant improvement is found over other state-of-the-art multi-view clustering methods.
|
1612.00272
|
Vidak Vujicic
|
Vidak Vujicic, Aravind P. Anthur, Alexander Gazman, Colm Browning, M.
Deseada Gutierrez Pascual, Ziyi Zhu, Keren Bergman and Liam P. Barry
|
Software-Defined Silicon Photonics based Metro Node for Spatial and
Wavelength Superchannel Switching
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the growing popularity of optical superchannels and software defined
networking, reconfigurable optical add-drop multiplexer (ROADM) architectures
for superchannel switching have recently attracted significant attention.
ROADMs based on micro electro-mechanical system (MEMS) and liquid
crystal-on-silicon (LCoS) technologies are predominantly used. Motivated by
requirements for low power, high-speed, small area footprint and compact
switching solutions, we propose and demonstrate spatial and wavelength flexible
superchannel switching using monolithically integrated silicon photonics (SiP)
micro-ring resonators (MRR). We demonstrate the MRRs' capabilities and potential
to be used as a fundamental building block in ROADMs. Unicast and multicast
switching operation of an entire superchannel is demonstrated after
transmission over 50 km of standard single mode fiber. The performance of each
sub-channel from the 120 Gb/s QPSK Nyquist superchannel is analyzed and
degradation in error vector magnitude performance was observed for outer
sub-channels due to the 3-dB bandwidth of the MRRs, which is comparable with
the superchannel bandwidth. However, all sub-channels for all switching cases
(unicast, multicast and bi-directional operation) exhibit performance far below
the 7% FEC limit. The switching time of the SiP MRR chip is such that high
capacity superchannel interconnects between users can be set up and reconfigured
on the microsecond timescale.
|
[
{
"created": "Tue, 29 Nov 2016 10:11:32 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Dec 2016 16:11:45 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Feb 2017 15:03:23 GMT",
"version": "v3"
}
] |
2017-02-10
|
[
[
"Vujicic",
"Vidak",
""
],
[
"Anthur",
"Aravind P.",
""
],
[
"Gazman",
"Alexander",
""
],
[
"Browning",
"Colm",
""
],
[
"Pascual",
"M. Deseada Gutierrez",
""
],
[
"Zhu",
"Ziyi",
""
],
[
"Bergman",
"Keren",
""
],
[
"Barry",
"Liam P.",
""
]
] |
Due to the growing popularity of optical superchannels and software defined networking, reconfigurable optical add-drop multiplexer (ROADM) architectures for superchannel switching have recently attracted significant attention. ROADMs based on micro electro-mechanical system (MEMS) and liquid crystal-on-silicon (LCoS) technologies are predominantly used. Motivated by requirements for low power, high-speed, small area footprint and compact switching solutions, we propose and demonstrate spatial and wavelength flexible superchannel switching using monolithically integrated silicon photonics (SiP) micro-ring resonators (MRR). We demonstrate the MRRs' capabilities and potential to be used as a fundamental building block in ROADMs. Unicast and multicast switching operation of an entire superchannel is demonstrated after transmission over 50 km of standard single mode fiber. The performance of each sub-channel from the 120 Gb/s QPSK Nyquist superchannel is analyzed and degradation in error vector magnitude performance was observed for outer sub-channels due to the 3-dB bandwidth of the MRRs, which is comparable with the superchannel bandwidth. However, all sub-channels for all switching cases (unicast, multicast and bi-directional operation) exhibit performance far below the 7% FEC limit. The switching time of the SiP MRR chip is such that high capacity superchannel interconnects between users can be set up and reconfigured on the microsecond timescale.
|
1703.03442
|
Vanessa Ferdinand PhD
|
Vanessa Ferdinand, Simon Kirby, Kenny Smith
|
The cognitive roots of regularization in language
|
21 pages
| null | null | null |
cs.CL q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regularization occurs when the output a learner produces is less variable
than the linguistic data they observed. In an artificial language learning
experiment, we show that there exist at least two independent sources of
regularization bias in cognition: a domain-general source based on cognitive
load and a domain-specific source triggered by linguistic stimuli. Both of
these factors modulate how frequency information is encoded and produced, but
only the production-side modulations result in regularization (i.e. cause
learners to eliminate variation from the observed input). We formalize the
definition of regularization as the reduction of entropy and find that entropy
measures are better at identifying regularization behavior than frequency-based
analyses. Using our experimental data and a model of cultural transmission, we
generate predictions for the amount of regularity that would develop in each
experimental condition if the artificial language were transmitted over several
generations of learners. Here we find that the effect of cognitive constraints
can become more complex when put into the context of cultural evolution:
although learning biases certainly carry information about the course of
language evolution, we should not expect a one-to-one correspondence between
the micro-level processes that regularize linguistic datasets and the
macro-level evolution of linguistic regularity.
|
[
{
"created": "Thu, 9 Mar 2017 19:50:00 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Oct 2018 21:33:46 GMT",
"version": "v2"
}
] |
2018-10-22
|
[
[
"Ferdinand",
"Vanessa",
""
],
[
"Kirby",
"Simon",
""
],
[
"Smith",
"Kenny",
""
]
] |
Regularization occurs when the output a learner produces is less variable than the linguistic data they observed. In an artificial language learning experiment, we show that there exist at least two independent sources of regularization bias in cognition: a domain-general source based on cognitive load and a domain-specific source triggered by linguistic stimuli. Both of these factors modulate how frequency information is encoded and produced, but only the production-side modulations result in regularization (i.e. cause learners to eliminate variation from the observed input). We formalize the definition of regularization as the reduction of entropy and find that entropy measures are better at identifying regularization behavior than frequency-based analyses. Using our experimental data and a model of cultural transmission, we generate predictions for the amount of regularity that would develop in each experimental condition if the artificial language were transmitted over several generations of learners. Here we find that the effect of cognitive constraints can become more complex when put into the context of cultural evolution: although learning biases certainly carry information about the course of language evolution, we should not expect a one-to-one correspondence between the micro-level processes that regularize linguistic datasets and the macro-level evolution of linguistic regularity.
|
1610.01784
|
Qiang Wang
|
Xinxin Mei, Qiang Wang, Xiaowen Chu
|
A Survey and Measurement Study of GPU DVFS on Energy Conservation
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Energy efficiency has become one of the top design criteria for current
computing systems. The dynamic voltage and frequency scaling (DVFS) has been
widely adopted by laptop computers, servers, and mobile devices to conserve
energy, while GPU DVFS is still at an early stage. This paper aims at
exploring the impact of GPU DVFS on the application performance and power
consumption, and furthermore, on energy conservation. We survey the
state-of-the-art GPU DVFS characterizations, and then summarize recent research
works on GPU power and performance models. We also conduct real GPU DVFS
experiments on NVIDIA Fermi and Maxwell GPUs. According to our experimental
results, GPU DVFS has significant potential for energy saving. The effect of
scaling core voltage/frequency and memory voltage/frequency depends on not only
the GPU architectures, but also the characteristics of GPU applications.
|
[
{
"created": "Thu, 6 Oct 2016 09:21:22 GMT",
"version": "v1"
}
] |
2016-10-07
|
[
[
"Mei",
"Xinxin",
""
],
[
"Wang",
"Qiang",
""
],
[
"Chu",
"Xiaowen",
""
]
] |
Energy efficiency has become one of the top design criteria for current computing systems. The dynamic voltage and frequency scaling (DVFS) has been widely adopted by laptop computers, servers, and mobile devices to conserve energy, while GPU DVFS is still at an early stage. This paper aims at exploring the impact of GPU DVFS on the application performance and power consumption, and furthermore, on energy conservation. We survey the state-of-the-art GPU DVFS characterizations, and then summarize recent research works on GPU power and performance models. We also conduct real GPU DVFS experiments on NVIDIA Fermi and Maxwell GPUs. According to our experimental results, GPU DVFS has significant potential for energy saving. The effect of scaling core voltage/frequency and memory voltage/frequency depends on not only the GPU architectures, but also the characteristics of GPU applications.
|
2006.15000
|
Vadim Malvone
|
Francesco Belardinelli, Catalin Dima, Vadim Malvone, and Ferucio
Tiplea
|
A Hennessy-Milner Theorem for ATL with Imperfect Information
| null | null | null | null |
cs.LO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that a history-based variant of alternating bisimulation with
imperfect information can be related to a variant of Alternating-time
Temporal Logic (ATL) with imperfect information by a full Hennessy-Milner
theorem. The variant of ATL we consider has a common knowledge semantics, which
requires that the uniform strategy available for a coalition to accomplish some
goal must be common knowledge inside the coalition, while other semantic
variants of ATL with imperfect information do not accommodate a Hennessy-Milner
theorem. We also show that the existence of a history-based alternating
bisimulation between two finite Concurrent Game Structures with imperfect
information (iCGS) is undecidable.
|
[
{
"created": "Fri, 26 Jun 2020 14:12:11 GMT",
"version": "v1"
}
] |
2020-06-29
|
[
[
"Belardinelli",
"Francesco",
""
],
[
"Dima",
"Catalin",
""
],
[
"Malvone",
"Vadim",
""
],
[
"Tiplea",
"Ferucio",
""
]
] |
We show that a history-based variant of alternating bisimulation with imperfect information can be related to a variant of Alternating-time Temporal Logic (ATL) with imperfect information by a full Hennessy-Milner theorem. The variant of ATL we consider has a common knowledge semantics, which requires that the uniform strategy available for a coalition to accomplish some goal must be common knowledge inside the coalition, while other semantic variants of ATL with imperfect information do not accommodate a Hennessy-Milner theorem. We also show that the existence of a history-based alternating bisimulation between two finite Concurrent Game Structures with imperfect information (iCGS) is undecidable.
|
2203.09711
|
Sarik Ghazarian
|
Sarik Ghazarian, Nuan Wen, Aram Galstyan, Nanyun Peng
|
DEAM: Dialogue Coherence Evaluation using AMR-based Semantic
Manipulations
|
Association for Computational Linguistics (ACL 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic evaluation metrics are essential for the rapid development of
open-domain dialogue systems as they facilitate hyper-parameter tuning and
comparison between models. Although recently proposed trainable
conversation-level metrics have shown encouraging results, the quality of the
metrics is strongly dependent on the quality of training data. Prior works
mainly resort to heuristic text-level manipulations (e.g. utterances shuffling)
to bootstrap incoherent conversations (negative examples) from coherent
dialogues (positive examples). Such approaches are insufficient to
appropriately reflect the incoherence that occurs in interactions between
advanced dialogue models and humans. To tackle this problem, we propose DEAM, a
Dialogue coherence Evaluation metric that relies on Abstract Meaning
Representation (AMR) to apply semantic-level Manipulations for incoherent
(negative) data generation. AMRs naturally facilitate the injection of various
types of incoherence sources, such as coreference inconsistency, irrelevancy,
contradictions, and decreased engagement, at the semantic level, thus resulting
in more natural incoherent samples. Our experiments show that DEAM achieves
higher correlations with human judgments compared to baseline methods on
several dialog datasets by significant margins. We also show that DEAM can
distinguish between coherent and incoherent dialogues generated by baseline
manipulations, whereas those baseline models cannot detect incoherent examples
generated by DEAM. Our results demonstrate the potential of AMR-based semantic
manipulations for natural negative example generation.
|
[
{
"created": "Fri, 18 Mar 2022 03:11:35 GMT",
"version": "v1"
}
] |
2022-03-21
|
[
[
"Ghazarian",
"Sarik",
""
],
[
"Wen",
"Nuan",
""
],
[
"Galstyan",
"Aram",
""
],
[
"Peng",
"Nanyun",
""
]
] |
Automatic evaluation metrics are essential for the rapid development of open-domain dialogue systems as they facilitate hyper-parameter tuning and comparison between models. Although recently proposed trainable conversation-level metrics have shown encouraging results, the quality of the metrics is strongly dependent on the quality of training data. Prior works mainly resort to heuristic text-level manipulations (e.g. utterances shuffling) to bootstrap incoherent conversations (negative examples) from coherent dialogues (positive examples). Such approaches are insufficient to appropriately reflect the incoherence that occurs in interactions between advanced dialogue models and humans. To tackle this problem, we propose DEAM, a Dialogue coherence Evaluation metric that relies on Abstract Meaning Representation (AMR) to apply semantic-level Manipulations for incoherent (negative) data generation. AMRs naturally facilitate the injection of various types of incoherence sources, such as coreference inconsistency, irrelevancy, contradictions, and decreased engagement, at the semantic level, thus resulting in more natural incoherent samples. Our experiments show that DEAM achieves higher correlations with human judgments compared to baseline methods on several dialog datasets by significant margins. We also show that DEAM can distinguish between coherent and incoherent dialogues generated by baseline manipulations, whereas those baseline models cannot detect incoherent examples generated by DEAM. Our results demonstrate the potential of AMR-based semantic manipulations for natural negative example generation.
|
1712.10107
|
Saeedeh Ziyabari
|
Saeedeh Ziyabari, Vinit Shah, Meysam Golmohammadi, Iyad Obeid and
Joseph Picone
|
Objective evaluation metrics for automatic classification of EEG events
|
22 pages, 11 figures, 9 tables
| null | null | null |
cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The evaluation of machine learning algorithms in biomedical fields for
applications involving sequential data lacks standardization. Common
quantitative scalar evaluation metrics such as sensitivity and specificity can
often be misleading depending on the requirements of the application.
Evaluation metrics must ultimately reflect the needs of users yet be
sufficiently sensitive to guide algorithm development. Feedback from critical
care clinicians who use automated event detection software in clinical
applications has been overwhelmingly emphatic that a low false alarm rate,
typically measured in units of the number of errors per 24 hours, is the single
most important criterion for user acceptance. Though using a single metric is
not often as insightful as examining performance over a range of operating
conditions, there is a need for a single scalar figure of merit. In this paper,
we discuss the deficiencies of existing metrics for a seizure detection task
and propose several new metrics that offer a more balanced view of performance.
We demonstrate these metrics on a seizure detection task based on the TUH EEG
Corpus. We show that two promising metrics are a measure based on a concept
borrowed from the spoken term detection literature, Actual Term-Weighted Value
(ATWV), and a new metric, Time-Aligned Event Scoring (TAES), that accounts for
the temporal alignment of the hypothesis to the reference annotation. We also
demonstrate that state of the art technology based on deep learning, though
impressive in its performance, still needs significant improvement before it
meets very strict user acceptance criteria.
|
[
{
"created": "Fri, 29 Dec 2017 03:36:46 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Oct 2019 07:03:06 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Dec 2019 17:27:35 GMT",
"version": "v3"
}
] |
2019-12-03
|
[
[
"Ziyabari",
"Saeedeh",
""
],
[
"Shah",
"Vinit",
""
],
[
"Golmohammadi",
"Meysam",
""
],
[
"Obeid",
"Iyad",
""
],
[
"Picone",
"Joseph",
""
]
] |
The evaluation of machine learning algorithms in biomedical fields for applications involving sequential data lacks standardization. Common quantitative scalar evaluation metrics such as sensitivity and specificity can often be misleading depending on the requirements of the application. Evaluation metrics must ultimately reflect the needs of users yet be sufficiently sensitive to guide algorithm development. Feedback from critical care clinicians who use automated event detection software in clinical applications has been overwhelmingly emphatic that a low false alarm rate, typically measured in units of the number of errors per 24 hours, is the single most important criterion for user acceptance. Though using a single metric is not often as insightful as examining performance over a range of operating conditions, there is a need for a single scalar figure of merit. In this paper, we discuss the deficiencies of existing metrics for a seizure detection task and propose several new metrics that offer a more balanced view of performance. We demonstrate these metrics on a seizure detection task based on the TUH EEG Corpus. We show that two promising metrics are a measure based on a concept borrowed from the spoken term detection literature, Actual Term-Weighted Value (ATWV), and a new metric, Time-Aligned Event Scoring (TAES), that accounts for the temporal alignment of the hypothesis to the reference annotation. We also demonstrate that state of the art technology based on deep learning, though impressive in its performance, still needs significant improvement before it meets very strict user acceptance criteria.
|
2110.11790
|
Guillaume Baudart
|
Guillaume Baudart and Louis Mandel
|
Automatic Guide Generation for Stan via NumPyro
|
PROBPROG 2021
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Stan is a very popular probabilistic language with a state-of-the-art HMC
sampler, but it only offers a limited choice of algorithms for black-box
variational inference. In this paper, we show that using our recently proposed
compiler from Stan to Pyro, Stan users can easily try the set of algorithms
implemented in Pyro for black-box variational inference. We evaluate our
approach on PosteriorDB, a database of Stan models with corresponding data and
reference posterior samples. Results show that the eight algorithms available
in Pyro offer a range of possible compromises between complexity and accuracy.
This paper illustrates that compiling Stan to another probabilistic language
can be used to leverage new features for Stan users, and give access to a large
set of examples for language developers who implement these new features.
|
[
{
"created": "Fri, 22 Oct 2021 13:42:48 GMT",
"version": "v1"
}
] |
2021-10-25
|
[
[
"Baudart",
"Guillaume",
""
],
[
"Mandel",
"Louis",
""
]
] |
Stan is a very popular probabilistic language with a state-of-the-art HMC sampler, but it only offers a limited choice of algorithms for black-box variational inference. In this paper, we show that using our recently proposed compiler from Stan to Pyro, Stan users can easily try the set of algorithms implemented in Pyro for black-box variational inference. We evaluate our approach on PosteriorDB, a database of Stan models with corresponding data and reference posterior samples. Results show that the eight algorithms available in Pyro offer a range of possible compromises between complexity and accuracy. This paper illustrates that compiling Stan to another probabilistic language can be used to leverage new features for Stan users, and give access to a large set of examples for language developers who implement these new features.
|
2303.12798
|
Yimin Dai
|
Yimin Dai and Xian Shuai and Rui Tan and Guoliang Xing
|
Interpersonal Distance Tracking with mmWave Radar and IMUs
| null | null | null | null |
cs.NI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tracking interpersonal distances is essential for real-time social distancing
management and {\em ex-post} contact tracing to prevent spreads of contagious
diseases. Bluetooth neighbor discovery has been employed for such purposes in
combating COVID-19, but does not provide satisfactory spatiotemporal
resolutions. This paper presents ImmTrack, a system that uses a millimeter wave
radar and exploits the inertial measurement data from user-carried smartphones
or wearables to track interpersonal distances. By matching the movement traces
reconstructed from the radar and inertial data, the pseudo identities of the
inertial data can be transferred to the radar sensing results in the global
coordinate system. The re-identified, radar-sensed movement trajectories are
then used to track interpersonal distances. In a broader sense, ImmTrack is the
first system that fuses data from millimeter wave radar and inertial
measurement units for simultaneous user tracking and re-identification.
Evaluation with up to 27 people in various indoor/outdoor environments shows
ImmTrack's decimeters-seconds spatiotemporal accuracy in contact tracing, which
is similar to that of the privacy-intrusive camera surveillance and
significantly outperforms the Bluetooth neighbor discovery approach.
|
[
{
"created": "Tue, 28 Feb 2023 15:44:17 GMT",
"version": "v1"
}
] |
2023-03-24
|
[
[
"Dai",
"Yimin",
""
],
[
"Shuai",
"Xian",
""
],
[
"Tan",
"Rui",
""
],
[
"Xing",
"Guoliang",
""
]
] |
Tracking interpersonal distances is essential for real-time social distancing management and {\em ex-post} contact tracing to prevent spreads of contagious diseases. Bluetooth neighbor discovery has been employed for such purposes in combating COVID-19, but does not provide satisfactory spatiotemporal resolutions. This paper presents ImmTrack, a system that uses a millimeter wave radar and exploits the inertial measurement data from user-carried smartphones or wearables to track interpersonal distances. By matching the movement traces reconstructed from the radar and inertial data, the pseudo identities of the inertial data can be transferred to the radar sensing results in the global coordinate system. The re-identified, radar-sensed movement trajectories are then used to track interpersonal distances. In a broader sense, ImmTrack is the first system that fuses data from millimeter wave radar and inertial measurement units for simultaneous user tracking and re-identification. Evaluation with up to 27 people in various indoor/outdoor environments shows ImmTrack's decimeters-seconds spatiotemporal accuracy in contact tracing, which is similar to that of the privacy-intrusive camera surveillance and significantly outperforms the Bluetooth neighbor discovery approach.
|
1811.05894
|
Daniil Osokin
|
Alexander Kozlov, Daniil Osokin
|
Development of Real-time ADAS Object Detector for Deployment on CPU
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we outline the set of problems that any object detection CNN
faces when its development reaches the deployment stage, and propose methods to
deal with such difficulties. We show that these practices allow one to obtain an
object detection network that recognizes two classes, vehicles and pedestrians,
and achieves an inference speed of more than 60 frames per second on a
Core$^{TM}$ i5-6500 CPU. The proposed model is built on top of the popular
Single Shot MultiBox Object Detection framework, but with substantial
improvements inspired by the discovered problems. The network has just 1.96 GMAC
complexity and less than 7 MB model size. It is publicly available as a part of
the Intel$\circledR$ OpenVINO$^{TM}$ Toolkit.
|
[
{
"created": "Wed, 14 Nov 2018 16:37:09 GMT",
"version": "v1"
}
] |
2018-11-15
|
[
[
"Kozlov",
"Alexander",
""
],
[
"Osokin",
"Daniil",
""
]
] |
In this work, we outline the set of problems that any object detection CNN faces when its development reaches the deployment stage, and propose methods to deal with such difficulties. We show that these practices allow one to obtain an object detection network that recognizes two classes, vehicles and pedestrians, and achieves an inference speed of more than 60 frames per second on a Core$^{TM}$ i5-6500 CPU. The proposed model is built on top of the popular Single Shot MultiBox Object Detection framework, but with substantial improvements inspired by the discovered problems. The network has just 1.96 GMAC complexity and less than 7 MB model size. It is publicly available as a part of the Intel$\circledR$ OpenVINO$^{TM}$ Toolkit.
|
1210.6157
|
Nishtha Kesswani
|
Vibekananda Dutta, Dr Nishtha Kesswani, Deepti Gahalot
|
Novel Architecture for 3D model in virtual communities from detected
face
|
7 pages
|
http://www.ijascse.in/publications-2012--2
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this research paper we describe how to extract a face from an image, modify
it, characterize it in terms of high-level properties, and apply it to the
creation of a personalized avatar. We implemented and tested the algorithm on
several hundred facial images, including many taken under uncontrolled
acquisition conditions, and found it to exhibit satisfactory performance for
immediate practical use.
|
[
{
"created": "Tue, 23 Oct 2012 07:57:24 GMT",
"version": "v1"
}
] |
2012-10-24
|
[
[
"Dutta",
"Vibekananda",
""
],
[
"Kesswani",
"Dr Nishtha",
""
],
[
"Gahalot",
"Deepti",
""
]
] |
In this research paper we describe how to extract a face from an image, modify it, characterize it in terms of high-level properties, and apply it to the creation of a personalized avatar. We implemented and tested the algorithm on several hundred facial images, including many taken under uncontrolled acquisition conditions, and found it to exhibit satisfactory performance for immediate practical use.
|
2306.00275
|
Changhao Wu
|
Changhao Wu and Yuanchun Li and Mengwei Xu and Chongbin Guo and
Zengshan Yin and Weiwei Gao and Chuanxiu Chi
|
A Comprehensive Survey on Orbital Edge Computing: Systems, Applications,
and Algorithms
|
18 pages, 9 figures and 5 tables
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The number of satellites, especially those operating in low-earth orbit
(LEO), has exploded in recent years. Additionally, the use of COTS hardware
in those satellites enables a new paradigm of computing: orbital edge
computing (OEC). OEC entails more technically advanced steps than
single-satellite computing. This feature allows for vast design spaces with
multiple parameters, rendering several novel approaches feasible. The mobility
of LEO satellites in the network and the limited communication, computation,
and storage resources make it challenging to design an appropriate
scheduling algorithm for specific tasks in comparison to traditional
ground-based edge computing. This article provides the first comprehensive
survey of OEC, covering its significant areas of focus: protocol
optimization, mobility management, and resource allocation. Previous survey
papers have concentrated only on ground-based edge computing or the
integration of space and ground technologies. We review research from
2000 to 2023 on orbital edge computing, covering network design, computation
offloading, resource allocation, performance analysis, and optimization.
Moreover, having discussed several related works, we highlight both
technological challenges and future directions in the field.
|
[
{
"created": "Thu, 1 Jun 2023 01:37:33 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Jun 2023 01:11:40 GMT",
"version": "v2"
}
] |
2023-06-05
|
[
[
"Wu",
"Changhao",
""
],
[
"Li",
"Yuanchun",
""
],
[
"Xu",
"Mengwei",
""
],
[
"Guo",
"Chongbin",
""
],
[
"Yin",
"Zengshan",
""
],
[
"Gao",
"Weiwei",
""
],
[
"Chi",
"Chuanxiu",
""
]
] |
The number of satellites, especially those operating in low-earth orbit (LEO), has exploded in recent years. Additionally, the use of COTS hardware in those satellites enables a new paradigm of computing: orbital edge computing (OEC). OEC entails more technically advanced steps than single-satellite computing. This feature allows for vast design spaces with multiple parameters, rendering several novel approaches feasible. The mobility of LEO satellites in the network and the limited communication, computation, and storage resources make it challenging to design an appropriate scheduling algorithm for specific tasks in comparison to traditional ground-based edge computing. This article provides the first comprehensive survey of OEC, covering its significant areas of focus: protocol optimization, mobility management, and resource allocation. Previous survey papers have concentrated only on ground-based edge computing or the integration of space and ground technologies. We review research from 2000 to 2023 on orbital edge computing, covering network design, computation offloading, resource allocation, performance analysis, and optimization. Moreover, having discussed several related works, we highlight both technological challenges and future directions in the field.
|
1711.06491
|
J. D. Curt\'o
|
J. D. Curt\'o and I. C. Zarza and Fernando de la Torre and Irwin King
and Michael R. Lyu
|
High-resolution Deep Convolutional Generative Adversarial Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The convergence of Generative Adversarial Networks (GANs) [Goodfellow et al.
2014] in a high-resolution setting, under the computational constraint of GPU
memory capacity, has been beset with difficulty due to the known lack of
convergence rate stability. In order to boost the network convergence of DCGAN
(Deep Convolutional Generative Adversarial Networks) [Radford et al. 2016] and
achieve good-looking high-resolution results, we propose a new layered network,
HDCGAN, that incorporates current state-of-the-art techniques for this effect.
Glasses, a mechanism to arbitrarily improve the final GAN generated results by
enlarging the input size by a telescope {\zeta}, is also presented. A novel
bias-free dataset, Curt\'o & Zarza, containing human faces from different
ethnic groups in a wide variety of illumination conditions and image
resolutions, is introduced. Curt\'o is enhanced with HDCGAN synthetic images,
thus being the first GAN-augmented dataset of faces. We conduct extensive
experiments on CelebA [Liu et al. 2015], CelebA-hq [Karras et al. 2018] and
Curt\'o. HDCGAN is the current state of the art in synthetic image generation
on CelebA, achieving an MS-SSIM of 0.1978 and a Fr\'echet Inception Distance of
8.44.
|
[
{
"created": "Fri, 17 Nov 2017 10:47:08 GMT",
"version": "v1"
},
{
"created": "Sat, 26 May 2018 15:26:47 GMT",
"version": "v10"
},
{
"created": "Thu, 31 May 2018 06:25:31 GMT",
"version": "v11"
},
{
"created": "Sun, 10 Jun 2018 12:23:26 GMT",
"version": "v12"
},
{
"created": "Sun, 6 Jan 2019 16:43:48 GMT",
"version": "v13"
},
{
"created": "Thu, 24 Jan 2019 20:03:44 GMT",
"version": "v14"
},
{
"created": "Wed, 20 Feb 2019 16:47:49 GMT",
"version": "v15"
},
{
"created": "Sun, 24 Mar 2019 17:15:00 GMT",
"version": "v16"
},
{
"created": "Tue, 31 Dec 2019 18:47:06 GMT",
"version": "v17"
},
{
"created": "Fri, 17 Apr 2020 16:59:30 GMT",
"version": "v18"
},
{
"created": "Wed, 22 Nov 2017 19:25:17 GMT",
"version": "v2"
},
{
"created": "Sat, 27 Jan 2018 15:03:14 GMT",
"version": "v3"
},
{
"created": "Sat, 3 Feb 2018 10:54:57 GMT",
"version": "v4"
},
{
"created": "Fri, 16 Mar 2018 17:03:28 GMT",
"version": "v5"
},
{
"created": "Tue, 20 Mar 2018 16:53:06 GMT",
"version": "v6"
},
{
"created": "Tue, 27 Mar 2018 18:17:12 GMT",
"version": "v7"
},
{
"created": "Thu, 19 Apr 2018 12:30:42 GMT",
"version": "v8"
},
{
"created": "Thu, 10 May 2018 12:13:59 GMT",
"version": "v9"
}
] |
2020-04-20
|
[
[
"Curtó",
"J. D.",
""
],
[
"Zarza",
"I. C.",
""
],
[
"de la Torre",
"Fernando",
""
],
[
"King",
"Irwin",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
The convergence of Generative Adversarial Networks (GANs) [Goodfellow et al. 2014] in a high-resolution setting, under the computational constraint of GPU memory capacity, has been beset with difficulty due to the known lack of convergence rate stability. In order to boost the network convergence of DCGAN (Deep Convolutional Generative Adversarial Networks) [Radford et al. 2016] and achieve good-looking high-resolution results, we propose a new layered network, HDCGAN, that incorporates current state-of-the-art techniques for this effect. Glasses, a mechanism to arbitrarily improve the final GAN generated results by enlarging the input size by a telescope {\zeta}, is also presented. A novel bias-free dataset, Curt\'o & Zarza, containing human faces from different ethnic groups in a wide variety of illumination conditions and image resolutions, is introduced. Curt\'o is enhanced with HDCGAN synthetic images, thus being the first GAN-augmented dataset of faces. We conduct extensive experiments on CelebA [Liu et al. 2015], CelebA-hq [Karras et al. 2018] and Curt\'o. HDCGAN is the current state of the art in synthetic image generation on CelebA, achieving an MS-SSIM of 0.1978 and a Fr\'echet Inception Distance of 8.44.
|
2103.14232
|
Chi Zhang
|
Chi Zhang, Baoxiong Jia, Mark Edmonds, Song-Chun Zhu, Yixin Zhu
|
ACRE: Abstract Causal REasoning Beyond Covariation
|
CVPR 2021 paper. Supplementary:
http://wellyzhang.github.io/attach/cvpr21zhang_acre_supp.pdf Project:
http://wellyzhang.github.io/project/acre.html
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Causal induction, i.e., identifying unobservable mechanisms that lead to the
observable relations among variables, has played a pivotal role in modern
scientific discovery, especially in scenarios with only sparse and limited
data. Humans, even young toddlers, can induce causal relationships surprisingly
well in various settings despite its notorious difficulty. However, in contrast
to this commonplace trait of human cognition, modern Artificial Intelligence
(AI) systems lack a diagnostic benchmark to measure causal induction.
Therefore, in this work, we introduce the Abstract Causal REasoning
(ACRE) dataset for systematic evaluation of current vision systems in causal
induction. Motivated by the stream of research on causal discovery in Blicket
experiments, we query a visual reasoning system with the following four types
of questions in either an independent scenario or an interventional scenario:
direct, indirect, screening-off, and backward-blocking, intentionally going
beyond the simple strategy of inducing causal relationships by covariation. By
analyzing visual reasoning architectures on this testbed, we notice that pure
neural models tend towards an associative strategy under their chance-level
performance, whereas neuro-symbolic combinations struggle in backward-blocking
reasoning. These deficiencies call for future research in models with a more
comprehensive capability of causal induction.
|
[
{
"created": "Fri, 26 Mar 2021 02:42:38 GMT",
"version": "v1"
}
] |
2021-03-29
|
[
[
"Zhang",
"Chi",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Edmonds",
"Mark",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Zhu",
"Yixin",
""
]
] |
Causal induction, i.e., identifying unobservable mechanisms that lead to the observable relations among variables, has played a pivotal role in modern scientific discovery, especially in scenarios with only sparse and limited data. Humans, even young toddlers, can induce causal relationships surprisingly well in various settings despite its notorious difficulty. However, in contrast to this commonplace trait of human cognition, modern Artificial Intelligence (AI) systems lack a diagnostic benchmark to measure causal induction. Therefore, in this work, we introduce the Abstract Causal REasoning (ACRE) dataset for systematic evaluation of current vision systems in causal induction. Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with the following four types of questions in either an independent scenario or an interventional scenario: direct, indirect, screening-off, and backward-blocking, intentionally going beyond the simple strategy of inducing causal relationships by covariation. By analyzing visual reasoning architectures on this testbed, we notice that pure neural models tend towards an associative strategy under their chance-level performance, whereas neuro-symbolic combinations struggle in backward-blocking reasoning. These deficiencies call for future research in models with a more comprehensive capability of causal induction.
|
2112.11572
|
Qian Yang
|
Maryam Pardakhti, Nila Mandal, Anson W. K. Ma and Qian Yang
|
Practical Active Learning with Model Selection for Small Data
|
Accepted for publication in the Proceedings of the 2021 20th IEEE
International Conference on Machine Learning and Applications (ICMLA)
| null |
10.1109/ICMLA52953.2021.00263
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning is of great interest for many practical applications,
especially in industry and the physical sciences, where there is a strong need
to minimize the number of costly experiments necessary to train predictive
models. However, there remain significant challenges for the adoption of active
learning methods in many practical applications. One important challenge is
that many methods assume a fixed model, where model hyperparameters are chosen
a priori. In practice, it is rarely true that a good model will be known in
advance. Existing methods for active learning with model selection typically
depend on a medium-sized labeling budget. In this work, we focus on the case of
having a very small labeling budget, on the order of a few dozen data points,
and develop a simple and fast method for practical active learning with model
selection. Our method is based on an underlying pool-based active learner for
binary classification using support vector classification with a radial basis
function kernel. First we show empirically that our method is able to find
hyperparameters that lead to the best performance compared to an oracle model
on less separable, difficult to classify datasets, and reasonable performance
on datasets that are more separable and easier to classify. Then, we
demonstrate that it is possible to refine our model selection method using a
weighted approach to trade-off between achieving optimal performance on
datasets that are easy to classify, versus datasets that are difficult to
classify, which can be tuned based on prior domain knowledge about the dataset.
|
[
{
"created": "Tue, 21 Dec 2021 23:11:27 GMT",
"version": "v1"
}
] |
2021-12-23
|
[
[
"Pardakhti",
"Maryam",
""
],
[
"Mandal",
"Nila",
""
],
[
"Ma",
"Anson W. K.",
""
],
[
"Yang",
"Qian",
""
]
] |
Active learning is of great interest for many practical applications, especially in industry and the physical sciences, where there is a strong need to minimize the number of costly experiments necessary to train predictive models. However, there remain significant challenges for the adoption of active learning methods in many practical applications. One important challenge is that many methods assume a fixed model, where model hyperparameters are chosen a priori. In practice, it is rarely true that a good model will be known in advance. Existing methods for active learning with model selection typically depend on a medium-sized labeling budget. In this work, we focus on the case of having a very small labeling budget, on the order of a few dozen data points, and develop a simple and fast method for practical active learning with model selection. Our method is based on an underlying pool-based active learner for binary classification using support vector classification with a radial basis function kernel. First we show empirically that our method is able to find hyperparameters that lead to the best performance compared to an oracle model on less separable, difficult to classify datasets, and reasonable performance on datasets that are more separable and easier to classify. Then, we demonstrate that it is possible to refine our model selection method using a weighted approach to trade-off between achieving optimal performance on datasets that are easy to classify, versus datasets that are difficult to classify, which can be tuned based on prior domain knowledge about the dataset.
|
2401.07586
|
Muhammad Asif Khan
|
Muhammad Asif Khan, Hamid Menouar, Ridha Hamila
|
Curriculum for Crowd Counting -- Is it Worthy?
|
Accepted version of the paper in 19th International Conference on
Computer Vision Theory and Applications (VISAPP), Rome, Italy, 27-19 February
2024
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in deep learning techniques have achieved remarkable
performance in several computer vision problems. A notably intuitive technique
called Curriculum Learning (CL) has been introduced recently for training deep
learning models. Surprisingly, curriculum learning achieves significantly
improved results in some tasks but marginal or no improvement in others. Hence,
there is still a debate about its adoption as a standard method to train
supervised learning models. In this work, we investigate the impact of
curriculum learning in crowd counting using the density estimation method. We
performed detailed investigations by conducting 112 experiments using six
different CL settings using eight different crowd models. Our experiments show
that curriculum learning improves the model learning performance and shortens
the convergence time.
|
[
{
"created": "Mon, 15 Jan 2024 10:46:01 GMT",
"version": "v1"
}
] |
2024-01-17
|
[
[
"Khan",
"Muhammad Asif",
""
],
[
"Menouar",
"Hamid",
""
],
[
"Hamila",
"Ridha",
""
]
] |
Recent advances in deep learning techniques have achieved remarkable performance in several computer vision problems. A notably intuitive technique called Curriculum Learning (CL) has been introduced recently for training deep learning models. Surprisingly, curriculum learning achieves significantly improved results in some tasks but marginal or no improvement in others. Hence, there is still a debate about its adoption as a standard method to train supervised learning models. In this work, we investigate the impact of curriculum learning in crowd counting using the density estimation method. We performed detailed investigations by conducting 112 experiments using six different CL settings using eight different crowd models. Our experiments show that curriculum learning improves the model learning performance and shortens the convergence time.
|
1809.07702
|
Kun Cheng
|
Kun Cheng, Weiyue Liu, Qi Shen and Shengkai Liao
|
Design and Implementation of High-throughput PCIe with DMA Architecture
between FPGA and PowerPC
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We designed and implemented a direct memory access (DMA) architecture for
PCI-Express (PCIe) between a Xilinx Field Programmable Gate Array (FPGA) and a
Freescale PowerPC. The DMA architecture on the FPGA side is compatible with the
Xilinx PCIe core, while the DMA architecture on the PowerPC side is compatible
with the VxBus of VxWorks. The solution provides a high-performance and
low-occupancy alternative to commercial offerings. In order to maximize the
PCIe throughput while minimizing FPGA resource utilization, the DMA engine
adopts a novel strategy where the DMA register list is stored both inside the
FPGA during the initialization phase and inside the central memory of the host
CPU. The FPGA design package is complemented with simple register access to
control the DMA engine by a VxWorks driver. The design is compatible with the
Xilinx Kintex UltraScale FPGA family and operates with the Xilinx PCIe
Generation 1 endpoint in the x8 lane configuration. A data throughput of more
than 666 MBytes/s (memory write with data from FPGA to PowerPC) has been
achieved with the single PCIe Gen1 x8 endpoint of this design, and PowerPC and
FPGA can send memory write requests to each other.
|
[
{
"created": "Mon, 17 Sep 2018 13:22:53 GMT",
"version": "v1"
}
] |
2018-09-21
|
[
[
"Cheng",
"Kun",
""
],
[
"Liu",
"Weiyue",
""
],
[
"Shen",
"Qi",
""
],
[
"Liao",
"Shengkai",
""
]
] |
We designed and implemented a direct memory access (DMA) architecture for PCI-Express (PCIe) between a Xilinx Field Programmable Gate Array (FPGA) and a Freescale PowerPC. The DMA architecture on the FPGA side is compatible with the Xilinx PCIe core, while the DMA architecture on the PowerPC side is compatible with the VxBus of VxWorks. The solution provides a high-performance and low-occupancy alternative to commercial offerings. In order to maximize the PCIe throughput while minimizing FPGA resource utilization, the DMA engine adopts a novel strategy where the DMA register list is stored both inside the FPGA during the initialization phase and inside the central memory of the host CPU. The FPGA design package is complemented with simple register access to control the DMA engine by a VxWorks driver. The design is compatible with the Xilinx Kintex UltraScale FPGA family and operates with the Xilinx PCIe Generation 1 endpoint in the x8 lane configuration. A data throughput of more than 666 MBytes/s (memory write with data from FPGA to PowerPC) has been achieved with the single PCIe Gen1 x8 endpoint of this design, and PowerPC and FPGA can send memory write requests to each other.
|
1807.07363
|
Kleanthis Thramboulidis
|
Kleanthis Thramboulidis, Danai C. Vachtsevanou, Ioanna Kontou
|
CPuS-IoT : A Cyber-Physical Microservice and IoT-based Framework for
Manufacturing Assembly Systems
|
14 pages, 16 figures, Journal submission
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's customers are characterized by individual requirements that lead the
manufacturing industry to increased product variety and reduced volumes.
Manufacturing systems, and more specifically assembly systems (ASs), should
allow quick adaptation of manufacturing assets so as to respond to the evolving
market requirements that lead to mass customization. Meanwhile, the
manufacturing era is changing due to the fourth industrial revolution, i.e.,
Industry 4.0, which will change the traditional manufacturing environment to an
IoT-based one. In this context, this paper introduces the concept of the
cyber-physical microservice in the manufacturing and AS domain and
presents the Cyber-Physical microservice and IoT-based (CPuS-IoT) framework.
The CPuS-IoT framework exploits the benefits of the microservice architectural
style and IoT technologies, but also utilizes the huge existing investment
in traditional technologies in this domain, to support the life cycle of
evolvable ASs in the age of Industry 4.0. It provides a solid basis to capture
domain knowledge that is used by a model-driven engineering (MDE) approach to
semi-automate the development, evolution and operation of ASs, as well as to
establish a common vocabulary for assembly system experts and IoT experts. The
CPuS-IoT approach and framework effectively combine MDE with IoT and the
microservice architectural paradigm. A case study for the assembly of an
everyday-life product is adopted to demonstrate the approach even to
non-experts of this domain.
|
[
{
"created": "Thu, 19 Jul 2018 12:35:03 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Feb 2019 11:38:44 GMT",
"version": "v2"
}
] |
2019-02-13
|
[
[
"Thramboulidis",
"Kleanthis",
""
],
[
"Vachtsevanou",
"Danai C.",
""
],
[
"Kontou",
"Ioanna",
""
]
] |
Today's customers are characterized by individual requirements that lead the manufacturing industry to increased product variety and reduced volumes. Manufacturing systems, and more specifically assembly systems (ASs), should allow quick adaptation of manufacturing assets so as to respond to the evolving market requirements that lead to mass customization. Meanwhile, the manufacturing era is changing due to the fourth industrial revolution, i.e., Industry 4.0, which will change the traditional manufacturing environment to an IoT-based one. In this context, this paper introduces the concept of the cyber-physical microservice in the manufacturing and AS domain and presents the Cyber-Physical microservice and IoT-based (CPuS-IoT) framework. The CPuS-IoT framework exploits the benefits of the microservice architectural style and IoT technologies, but also utilizes the huge existing investment in traditional technologies in this domain, to support the life cycle of evolvable ASs in the age of Industry 4.0. It provides a solid basis to capture domain knowledge that is used by a model-driven engineering (MDE) approach to semi-automate the development, evolution and operation of ASs, as well as to establish a common vocabulary for assembly system experts and IoT experts. The CPuS-IoT approach and framework effectively combine MDE with IoT and the microservice architectural paradigm. A case study for the assembly of an everyday-life product is adopted to demonstrate the approach even to non-experts of this domain.
|
2401.02008
|
Farhad Pourkamali-Anaraki
|
Farhad Pourkamali-Anaraki, Jamal F. Husseini, Evan J. Pineda, Brett A.
Bednarcyk, Scott E. Stapleton
|
Two-Stage Surrogate Modeling for Data-Driven Design Optimization with
Application to Composite Microstructure Generation
|
23 pages, 11 figures
| null | null | null |
cs.LG cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a novel two-stage machine learning-based surrogate
modeling framework to address inverse problems in scientific and engineering
fields. In the first stage of the proposed framework, a machine learning model
termed the "learner" identifies a limited set of candidates within the input
design space whose predicted outputs closely align with desired outcomes.
Subsequently, in the second stage, a separate surrogate model, functioning as
an "evaluator," is employed to assess the reduced candidate space generated in
the first stage. This evaluation process eliminates inaccurate and uncertain
solutions, guided by a user-defined coverage level. The framework's distinctive
contribution is the integration of conformal inference, providing a versatile
and efficient approach that can be widely applicable. To demonstrate the
effectiveness of the proposed framework compared to conventional single-stage
inverse problems, we conduct several benchmark tests and investigate an
engineering application focused on the micromechanical modeling of
fiber-reinforced composites. The results affirm the superiority of our proposed
framework, as it consistently produces more reliable solutions. Therefore, the
introduced framework offers a unique perspective on fostering interactions
between machine learning-based surrogate models in real-world applications.
|
[
{
"created": "Thu, 4 Jan 2024 00:25:12 GMT",
"version": "v1"
}
] |
2024-01-05
|
[
[
"Pourkamali-Anaraki",
"Farhad",
""
],
[
"Husseini",
"Jamal F.",
""
],
[
"Pineda",
"Evan J.",
""
],
[
"Bednarcyk",
"Brett A.",
""
],
[
"Stapleton",
"Scott E.",
""
]
] |
This paper introduces a novel two-stage machine learning-based surrogate modeling framework to address inverse problems in scientific and engineering fields. In the first stage of the proposed framework, a machine learning model termed the "learner" identifies a limited set of candidates within the input design space whose predicted outputs closely align with desired outcomes. Subsequently, in the second stage, a separate surrogate model, functioning as an "evaluator," is employed to assess the reduced candidate space generated in the first stage. This evaluation process eliminates inaccurate and uncertain solutions, guided by a user-defined coverage level. The framework's distinctive contribution is the integration of conformal inference, providing a versatile and efficient approach that can be widely applicable. To demonstrate the effectiveness of the proposed framework compared to conventional single-stage inverse problems, we conduct several benchmark tests and investigate an engineering application focused on the micromechanical modeling of fiber-reinforced composites. The results affirm the superiority of our proposed framework, as it consistently produces more reliable solutions. Therefore, the introduced framework offers a unique perspective on fostering interactions between machine learning-based surrogate models in real-world applications.
|
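The two-stage learner/evaluator pipeline with conformal filtering described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction under simplifying assumptions (scalar outputs, absolute-residual conformal scores); the function names `two_stage_select`, `learner`, and `evaluator` are hypothetical, not the paper's API.

```python
import numpy as np

def two_stage_select(learner, evaluator, candidates, target,
                     calib_X, calib_y, coverage=0.9, top=50):
    """Illustrative two-stage filter: the 'learner' shortlists candidates
    whose predicted output is close to the target, then a conformal
    interval built from the 'evaluator' discards uncertain ones."""
    # Stage 1: shortlist candidates by closeness of learner predictions to target.
    preds = learner(candidates)
    shortlist = candidates[np.argsort(np.abs(preds - target))[:top]]
    # Conformal calibration: residual quantile on held-out data gives an
    # interval half-width q with the user-specified coverage level.
    resid = np.abs(evaluator(calib_X) - calib_y)
    k = int(np.ceil((len(resid) + 1) * coverage))
    q = np.sort(resid)[min(k, len(resid)) - 1]
    # Stage 2: keep only candidates whose interval [pred - q, pred + q]
    # covers the desired target output.
    ev = evaluator(shortlist)
    return shortlist[np.abs(ev - target) <= q]
```

Raising `coverage` widens the conformal interval and so retains more (but less certain) candidate solutions.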
2209.06750
|
Oscar Araque
|
Oscar Araque, Lorenzo Gatti and Kyriaki Kalimeri
|
LibertyMFD: A Lexicon to Assess the Moral Foundation of Liberty
|
GoodIT '22: Proceedings of the 2022 ACM Conference on Information
Technology for Social Good. GoodIT'22, September 7-9, 2022, Limassol, Cyprus
|
Conference on Information Technology for Social Good (GoodIT'22),
September 7-9, 2022, Limassol, Cyprus. ACM, New York, NY, USA, 7 pages
|
10.1145/3524458.3547264
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Quantifying the moral narratives expressed in the user-generated text, news,
or public discourses is fundamental for understanding individuals' concerns and
viewpoints and preventing violent protests and social polarisation. The Moral
Foundation Theory (MFT) was developed to operationalise morality in a
five-dimensional scale system. Recent developments of the theory urged for the
introduction of a new foundation, the Liberty Foundation. Being only recently
added to the theory, there are no available linguistic resources to assess
whether liberty is present in text corpora. Given its importance to current
social issues such as the vaccination debate, we propose two data-driven
approaches, deriving two candidate lexicons generated based on aligned
documents from online news sources with different worldviews. After extensive
experimentation, we contribute to the research community a novel lexicon that
assesses the liberty moral foundation in the way individuals with contrasting
viewpoints express themselves through written text. The LibertyMFD dictionary
can be a valuable tool for policymakers to understand diverse viewpoints on
controversial social issues such as vaccination, abortion, or even uprisings,
as they happen and on a large scale.
|
[
{
"created": "Wed, 14 Sep 2022 16:14:54 GMT",
"version": "v1"
}
] |
2022-09-15
|
[
[
"Araque",
"Oscar",
""
],
[
"Gatti",
"Lorenzo",
""
],
[
"Kalimeri",
"Kyriaki",
""
]
] |
Quantifying the moral narratives expressed in the user-generated text, news, or public discourses is fundamental for understanding individuals' concerns and viewpoints and preventing violent protests and social polarisation. The Moral Foundation Theory (MFT) was developed to operationalise morality in a five-dimensional scale system. Recent developments of the theory called for the introduction of a new foundation, the Liberty Foundation. Because this foundation was only recently added, no linguistic resources are available to assess whether liberty is present in text corpora. Given its importance to current social issues such as the vaccination debate, we propose two data-driven approaches, deriving two candidate lexicons generated based on aligned documents from online news sources with different worldviews. After extensive experimentation, we contribute to the research community a novel lexicon that assesses the liberty moral foundation in the way individuals with contrasting viewpoints express themselves through written text. The LibertyMFD dictionary can be a valuable tool for policymakers to understand diverse viewpoints on controversial social issues such as vaccination, abortion, or even uprisings, as they happen and on a large scale.
|
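A moral-foundation dictionary like the LibertyMFD lexicon described above is typically applied by matching weighted lexicon entries against tokenised text and normalising by length. The sketch below uses a toy stand-in lexicon; the entries and weights are illustrative only and are not taken from the actual LibertyMFD dictionary.

```python
import re

# Toy stand-in for a lexicon such as LibertyMFD: word -> weight.
# These entries are illustrative only, not from the published dictionary.
LIBERTY_LEXICON = {"freedom": 1.0, "liberty": 1.0,
                   "oppression": -0.8, "mandate": -0.5}

def liberty_score(text, lexicon=LIBERTY_LEXICON):
    """Average lexicon weight over all tokens, so longer texts with the
    same number of matches score lower (a common normalisation choice)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(lexicon.get(t, 0.0) for t in tokens) / len(tokens)
```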
1607.05994
|
Omer Gold
|
Omer Gold and Micha Sharir
|
Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic
Barrier
|
Removing the $\log\log\log n$ factor from the runtime bound that
appeared in previous versions
| null | null | null |
cs.DS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Time Warping (DTW) and Geometric Edit Distance (GED) are basic
similarity measures between curves or general temporal sequences (e.g., time
series) that are represented as sequences of points in some metric space $(X,
\mathrm{dist})$. The DTW and GED measures are massively used in various fields
of computer science, computational biology, and engineering. Consequently, the
tasks of computing these measures are among the core problems in P. Despite
extensive efforts to find more efficient algorithms, the best-known algorithms
for computing the DTW or GED between two sequences of points in $X =
\mathbb{R}^d$ are long-standing dynamic programming algorithms that require
quadratic runtime, even for the one-dimensional case $d = 1$, which is perhaps
one of the most used in practice.
In this paper, we break the nearly 50-year-old quadratic time bound for
computing DTW or GED between two sequences of $n$ points in $\mathbb{R}$, by
presenting deterministic algorithms that run in $O\left( n^2 / \log\log n
\right)$ time. Our algorithms can be extended to work also for higher
dimensional spaces $\mathbb{R}^d$, for any constant $d$, when the underlying
distance-metric $\mathrm{dist}$ is polyhedral (e.g., $L_1, L_\infty$).
|
[
{
"created": "Wed, 20 Jul 2016 15:15:44 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Aug 2016 13:53:52 GMT",
"version": "v2"
},
{
"created": "Sat, 5 Nov 2016 11:07:42 GMT",
"version": "v3"
},
{
"created": "Tue, 28 Jan 2020 10:45:55 GMT",
"version": "v4"
}
] |
2020-01-29
|
[
[
"Gold",
"Omer",
""
],
[
"Sharir",
"Micha",
""
]
] |
Dynamic Time Warping (DTW) and Geometric Edit Distance (GED) are basic similarity measures between curves or general temporal sequences (e.g., time series) that are represented as sequences of points in some metric space $(X, \mathrm{dist})$. The DTW and GED measures are massively used in various fields of computer science, computational biology, and engineering. Consequently, the tasks of computing these measures are among the core problems in P. Despite extensive efforts to find more efficient algorithms, the best-known algorithms for computing the DTW or GED between two sequences of points in $X = \mathbb{R}^d$ are long-standing dynamic programming algorithms that require quadratic runtime, even for the one-dimensional case $d = 1$, which is perhaps one of the most used in practice. In this paper, we break the nearly 50-year-old quadratic time bound for computing DTW or GED between two sequences of $n$ points in $\mathbb{R}$, by presenting deterministic algorithms that run in $O\left( n^2 / \log\log n \right)$ time. Our algorithms can be extended to work also for higher dimensional spaces $\mathbb{R}^d$, for any constant $d$, when the underlying distance-metric $\mathrm{dist}$ is polyhedral (e.g., $L_1, L_\infty$).
|
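The quadratic dynamic program that the DTW abstract above refers to is the standard textbook formulation; a minimal sketch for the one-dimensional case follows (this is the classical $O(nm)$ baseline, not the authors' sub-quadratic algorithm).

```python
def dtw(a, b):
    """Classic O(n*m) dynamic program for Dynamic Time Warping between
    two 1-D point sequences a and b, with |x - y| as the point distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = DTW cost of aligning the prefixes a[:i] and b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # repeat b[j-1]
                                  dp[i][j - 1],      # repeat a[i-1]
                                  dp[i - 1][j - 1])  # advance both
    return dp[n][m]
```

Unlike the edit-distance recurrence, every cell pays the matching cost, so points may be aligned many-to-one, which is what makes DTW robust to local time shifts.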
2207.04307
|
Taha Belkhouja
|
Taha Belkhouja, Janardhan Rao Doppa
|
Adversarial Framework with Certified Robustness for Time-Series Domain
via Statistical Features
|
Published at Journal of Artificial Intelligence Research
|
Journal of Artificial Intelligence Research, 73, 1435-1471, 2022
|
10.1613/jair.1.13543
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Time-series data arises in many real-world applications (e.g., mobile health)
and deep neural networks (DNNs) have shown great success in solving them.
Despite their success, little is known about their robustness to adversarial
attacks. In this paper, we propose a novel adversarial framework referred to as
Time-Series Attacks via STATistical Features (TSA-STAT). To address the unique
challenges of time-series domain, TSA-STAT employs constraints on statistical
features of the time-series data to construct adversarial examples. Optimized
polynomial transformations are used to create attacks that are more effective
(in terms of successfully fooling DNNs) than those based on additive
perturbations. We also provide certified bounds on the norm of the statistical
features for constructing adversarial examples. Our experiments on diverse
real-world benchmark datasets show the effectiveness of TSA-STAT in fooling
DNNs for time-series domain and in improving their robustness. The source code
of TSA-STAT algorithms is available at
https://github.com/tahabelkhouja/Time-Series-Attacks-via-STATistical-Features
|
[
{
"created": "Sat, 9 Jul 2022 17:22:34 GMT",
"version": "v1"
}
] |
2022-07-12
|
[
[
"Belkhouja",
"Taha",
""
],
[
"Doppa",
"Janardhan Rao",
""
]
] |
Time-series data arises in many real-world applications (e.g., mobile health) and deep neural networks (DNNs) have shown great success in solving them. Despite their success, little is known about their robustness to adversarial attacks. In this paper, we propose a novel adversarial framework referred to as Time-Series Attacks via STATistical Features (TSA-STAT). To address the unique challenges of time-series domain, TSA-STAT employs constraints on statistical features of the time-series data to construct adversarial examples. Optimized polynomial transformations are used to create attacks that are more effective (in terms of successfully fooling DNNs) than those based on additive perturbations. We also provide certified bounds on the norm of the statistical features for constructing adversarial examples. Our experiments on diverse real-world benchmark datasets show the effectiveness of TSA-STAT in fooling DNNs for time-series domain and in improving their robustness. The source code of TSA-STAT algorithms is available at https://github.com/tahabelkhouja/Time-Series-Attacks-via-STATistical-Features
|
2402.16304
|
Wonbin Kweon
|
Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
|
Top-Personalized-K Recommendation
|
WWW 2024
| null |
10.1145/3589334.3645417
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The conventional top-K recommendation, which presents the top-K items with
the highest ranking scores, is a common practice for generating personalized
ranking lists. However, is this fixed-size top-K recommendation the optimal
approach for every user's satisfaction? Not necessarily. We point out that
providing fixed-size recommendations without taking into account user utility
can be suboptimal, as it may unavoidably include irrelevant items or limit the
exposure to relevant ones. To address this issue, we introduce
Top-Personalized-K Recommendation, a new recommendation task aimed at
generating a personalized-sized ranking list to maximize individual user
satisfaction. As a solution to the proposed task, we develop a model-agnostic
framework named PerK. PerK estimates the expected user utility by leveraging
calibrated interaction probabilities, subsequently selecting the recommendation
size that maximizes this expected utility. Through extensive experiments on
real-world datasets, we demonstrate the superiority of PerK in
Top-Personalized-K recommendation task. We expect that Top-Personalized-K
recommendation has the potential to offer enhanced solutions for various
real-world recommendation scenarios, based on its great compatibility with
existing models.
|
[
{
"created": "Mon, 26 Feb 2024 05:03:54 GMT",
"version": "v1"
}
] |
2024-02-27
|
[
[
"Kweon",
"Wonbin",
""
],
[
"Kang",
"SeongKu",
""
],
[
"Jang",
"Sanghwan",
""
],
[
"Yu",
"Hwanjo",
""
]
] |
The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists. However, is this fixed-size top-K recommendation the optimal approach for every user's satisfaction? Not necessarily. We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones. To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction. As a solution to the proposed task, we develop a model-agnostic framework named PerK. PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility. Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task. We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models.
|
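The core idea in the Top-Personalized-K abstract above, choosing the list size that maximises an expected utility computed from calibrated relevance probabilities, can be sketched with a deliberately simple linear utility (probability mass gained per slot minus a fixed display cost). The utility and the name `personalized_k` are illustrative assumptions, not the PerK framework's actual objective.

```python
def personalized_k(calibrated_probs, slot_cost=0.5, k_max=None):
    """Pick the list size K maximising a simple expected utility:
    the sum of calibrated relevance probabilities of the top-K items
    minus a fixed cost per displayed slot (an illustrative stand-in
    for the utilities used in the paper)."""
    probs = sorted(calibrated_probs, reverse=True)
    k_max = k_max or len(probs)
    best_k, best_u, u = 0, 0.0, 0.0
    for k in range(1, k_max + 1):
        u += probs[k - 1] - slot_cost  # marginal gain of showing item k
        if u > best_u:
            best_k, best_u = k, u
    return best_k
```

Because the marginal gain is decreasing in `k` once the probabilities are sorted, the first `k` at which the running utility peaks is the optimum, so a single pass suffices; a user with uniformly low probabilities may get `K = 0` (show nothing).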
2402.07242
|
Tommaso Boccato
|
Tommaso Boccato, Matteo Ferrante, Nicola Toschi
|
Optimizing Genetically-Driven Synaptogenesis
| null | null | null | null |
cs.NE q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we introduce SynaptoGen, a novel framework that aims to bridge
the gap between genetic manipulations and neuronal network behavior by
simulating synaptogenesis and guiding the development of neuronal networks
capable of solving predetermined computational tasks. Drawing inspiration from
recent advancements in the field, we propose SynaptoGen as a bio-plausible
approach to modeling synaptogenesis through differentiable functions. To
validate SynaptoGen, we conduct a preliminary experiment using reinforcement
learning as a benchmark learning framework, demonstrating its effectiveness in
generating neuronal networks capable of solving the OpenAI Gym's Cart Pole
task, compared to carefully designed baselines. The results highlight the
potential of SynaptoGen to inspire further advancements in neuroscience and
computational modeling, while also acknowledging the need for incorporating
more realistic genetic rules and synaptic conductances in future research.
Overall, SynaptoGen represents a promising avenue for exploring the
intersection of genetics, neuroscience, and artificial intelligence.
|
[
{
"created": "Sun, 11 Feb 2024 16:49:12 GMT",
"version": "v1"
}
] |
2024-02-13
|
[
[
"Boccato",
"Tommaso",
""
],
[
"Ferrante",
"Matteo",
""
],
[
"Toschi",
"Nicola",
""
]
] |
In this paper we introduce SynaptoGen, a novel framework that aims to bridge the gap between genetic manipulations and neuronal network behavior by simulating synaptogenesis and guiding the development of neuronal networks capable of solving predetermined computational tasks. Drawing inspiration from recent advancements in the field, we propose SynaptoGen as a bio-plausible approach to modeling synaptogenesis through differentiable functions. To validate SynaptoGen, we conduct a preliminary experiment using reinforcement learning as a benchmark learning framework, demonstrating its effectiveness in generating neuronal networks capable of solving the OpenAI Gym's Cart Pole task, compared to carefully designed baselines. The results highlight the potential of SynaptoGen to inspire further advancements in neuroscience and computational modeling, while also acknowledging the need for incorporating more realistic genetic rules and synaptic conductances in future research. Overall, SynaptoGen represents a promising avenue for exploring the intersection of genetics, neuroscience, and artificial intelligence.
|
2006.11159
|
Martin Van Harmelen
|
Martin van Harmelen, Jonas Groschwitz
|
Graphs with Multiple Sources per Vertex
|
Supervision by Jonas Groschwitz
| null | null | null |
cs.LO cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several attempts have been made at constructing Abstract Meaning
Representations (AMRs) compositionally, and recently the idea of using s-graphs
with the HR-algebra (Koller, 2015) has been simplified to reduce the number of
options when parsing (Groschwitz et al., 2017). This apply-modify algebra
(AM-algebra) is a linguistically plausible graph algebra with two classes of
operations, both of rank two: the apply operation is used to combine a
predicate with its argument; the modify operation is used to modify a
predicate. While the AM-algebra correctly handles relative clauses and complex
cases of coordination, it cannot parse reflexive sentences like: "The raven
washes herself." To facilitate processing of such reflexive sentences, this
paper proposes to change the definition of s-graphs underlying the AM-algebra
to allow vertices with multiple sources, and additionally proposes an adaptation
to the type system of the algebra to correctly handle such vertices.
|
[
{
"created": "Fri, 19 Jun 2020 14:43:12 GMT",
"version": "v1"
}
] |
2020-06-22
|
[
[
"van Harmelen",
"Martin",
""
],
[
"Groschwitz",
"Jonas",
""
]
] |
Several attempts have been made at constructing Abstract Meaning Representations (AMRs) compositionally, and recently the idea of using s-graphs with the HR-algebra (Koller, 2015) has been simplified to reduce the number of options when parsing (Groschwitz et al., 2017). This apply-modify algebra (AM-algebra) is a linguistically plausible graph algebra with two classes of operations, both of rank two: the apply operation is used to combine a predicate with its argument; the modify operation is used to modify a predicate. While the AM-algebra correctly handles relative clauses and complex cases of coordination, it cannot parse reflexive sentences like: "The raven washes herself." To facilitate processing of such reflexive sentences, this paper proposes to change the definition of s-graphs underlying the AM-algebra to allow vertices with multiple sources, and additionally proposes an adaptation to the type system of the algebra to correctly handle such vertices.
|