id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2103.10206
|
Buyu Li
|
Buyu Li, Yongchi Zhao, Zhelun Shi, Lu Sheng
|
DanceFormer: Music Conditioned 3D Dance Generation with Parametric
Motion Transformer
|
This is the version accepted by AAAI-22
| null | null | null |
cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generating 3D dances from music is an emerged research task that benefits a
lot of applications in vision and graphics. Previous works treat this task as
sequence generation, however, it is challenging to render a music-aligned
long-term sequence with high kinematic complexity and coherent movements. In
this paper, we reformulate it by a two-stage process, ie, a key pose generation
and then an in-between parametric motion curve prediction, where the key poses
are easier to be synchronized with the music beats and the parametric curves
can be efficiently regressed to render fluent rhythm-aligned movements. We
named the proposed method as DanceFormer, which includes two cascading
kinematics-enhanced transformer-guided networks (called DanTrans) that tackle
each stage, respectively. Furthermore, we propose a large-scale music
conditioned 3D dance dataset, called PhantomDance, that is accurately labeled
by experienced animators rather than reconstruction or motion capture. This
dataset also encodes dances as key poses and parametric motion curves apart
from pose sequences, thus benefiting the training of our DanceFormer. Extensive
experiments demonstrate that the proposed method, even trained by existing
datasets, can generate fluent, performative, and music-matched 3D dances that
surpass previous works quantitatively and qualitatively. Moreover, the proposed
DanceFormer, together with the PhantomDance dataset
(https://github.com/libuyu/PhantomDanceDataset), are seamlessly compatible with
industrial animation software, thus facilitating the adaptation for various
downstream applications.
|
[
{
"created": "Thu, 18 Mar 2021 12:17:38 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Mar 2021 07:28:26 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Mar 2021 09:27:22 GMT",
"version": "v3"
},
{
"created": "Wed, 8 Dec 2021 12:15:36 GMT",
"version": "v4"
},
{
"created": "Thu, 27 Jul 2023 08:49:55 GMT",
"version": "v5"
}
] |
2023-07-28
|
[
[
"Li",
"Buyu",
""
],
[
"Zhao",
"Yongchi",
""
],
[
"Shi",
"Zhelun",
""
],
[
"Sheng",
"Lu",
""
]
] |
Generating 3D dances from music is an emerged research task that benefits a lot of applications in vision and graphics. Previous works treat this task as sequence generation, however, it is challenging to render a music-aligned long-term sequence with high kinematic complexity and coherent movements. In this paper, we reformulate it by a two-stage process, ie, a key pose generation and then an in-between parametric motion curve prediction, where the key poses are easier to be synchronized with the music beats and the parametric curves can be efficiently regressed to render fluent rhythm-aligned movements. We named the proposed method as DanceFormer, which includes two cascading kinematics-enhanced transformer-guided networks (called DanTrans) that tackle each stage, respectively. Furthermore, we propose a large-scale music conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators rather than reconstruction or motion capture. This dataset also encodes dances as key poses and parametric motion curves apart from pose sequences, thus benefiting the training of our DanceFormer. Extensive experiments demonstrate that the proposed method, even trained by existing datasets, can generate fluent, performative, and music-matched 3D dances that surpass previous works quantitatively and qualitatively. Moreover, the proposed DanceFormer, together with the PhantomDance dataset (https://github.com/libuyu/PhantomDanceDataset), are seamlessly compatible with industrial animation software, thus facilitating the adaptation for various downstream applications.
|
2103.14074
|
Cesar Augusto Ipanaque Zapata Prof.
|
Cesar A. Ipanaque Zapata and Jes\'us Gonz\'alez
|
Parametrised collision-free optimal motion planning algorithms in
Euclidean spaces
|
16 pages. Final version. To appear in Morfismos
| null | null | null |
cs.RO math.AT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe parametrised motion planning algorithms for systems controlling
objects represented by points that move without collisions in an even
dimensional Euclidean space and in the presence of up to three obstacles with
\emph{a priori} unknown positions. Our algorithms are optimal in the sense that
the parametrised local planners have minimal posible size.
|
[
{
"created": "Thu, 25 Mar 2021 18:51:04 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Jun 2023 05:56:47 GMT",
"version": "v2"
}
] |
2023-06-27
|
[
[
"Zapata",
"Cesar A. Ipanaque",
""
],
[
"González",
"Jesús",
""
]
] |
We describe parametrised motion planning algorithms for systems controlling objects represented by points that move without collisions in an even dimensional Euclidean space and in the presence of up to three obstacles with \emph{a priori} unknown positions. Our algorithms are optimal in the sense that the parametrised local planners have minimal posible size.
|
1809.10804
|
Ivan Girardi
|
Ivan Girardi, Pengfei Ji, An-phi Nguyen, Nora Hollenstein, Adam
Ivankay, Lorenz Kuhn, Chiara Marchiori and Ce Zhang
|
Patient Risk Assessment and Warning Symptom Detection Using Deep
Attention-Based Neural Networks
|
10 pages, 2 figures, EMNLP workshop LOUHI 2018
| null | null | null |
cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an operational component of a real-world patient triage system.
Given a specific patient presentation, the system is able to assess the level
of medical urgency and issue the most appropriate recommendation in terms of
best point of care and time to treat. We use an attention-based convolutional
neural network architecture trained on 600,000 doctor notes in German. We
compare two approaches, one that uses the full text of the medical notes and
one that uses only a selected list of medical entities extracted from the text.
These approaches achieve 79% and 66% precision, respectively, but on a
confidence threshold of 0.6, precision increases to 85% and 75%, respectively.
In addition, a method to detect warning symptoms is implemented to render the
classification task transparent from a medical perspective. The method is based
on the learning of attention scores and a method of automatic validation using
the same data.
|
[
{
"created": "Fri, 28 Sep 2018 00:14:10 GMT",
"version": "v1"
}
] |
2018-10-01
|
[
[
"Girardi",
"Ivan",
""
],
[
"Ji",
"Pengfei",
""
],
[
"Nguyen",
"An-phi",
""
],
[
"Hollenstein",
"Nora",
""
],
[
"Ivankay",
"Adam",
""
],
[
"Kuhn",
"Lorenz",
""
],
[
"Marchiori",
"Chiara",
""
],
[
"Zhang",
"Ce",
""
]
] |
We present an operational component of a real-world patient triage system. Given a specific patient presentation, the system is able to assess the level of medical urgency and issue the most appropriate recommendation in terms of best point of care and time to treat. We use an attention-based convolutional neural network architecture trained on 600,000 doctor notes in German. We compare two approaches, one that uses the full text of the medical notes and one that uses only a selected list of medical entities extracted from the text. These approaches achieve 79% and 66% precision, respectively, but on a confidence threshold of 0.6, precision increases to 85% and 75%, respectively. In addition, a method to detect warning symptoms is implemented to render the classification task transparent from a medical perspective. The method is based on the learning of attention scores and a method of automatic validation using the same data.
|
1911.07701
|
Christian Tiefenau
|
Christian Tiefenau, Maximilian H\"aring, Eva Gerlitz, Emanuel von
Zezschwitz
|
Making Privacy Graspable: Can we Nudge Users to use Privacy Enhancing
Techniques?
|
SOUPS 2019 Poster Session
| null | null | null |
cs.HC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Smart speakers are gaining popularity. However, such devices can put the
user's privacy at risk whenever hot-words are misinterpreted and voice data is
recorded without the user's consent. To mitigate such risks, smart speakers
provide privacy control mechanisms like the build-in mute button.
Unfortunately, previous work indicated that such mute buttons are rarely used.
In this paper, we present the Privacy Hat, a tangible device which can be
placed on the smart speaker to prevent the device from listening. We designed
the Privacy Hat based on the results of a focus group and developed a working
prototype. We hypothesize that the specific user experience of this physical
and tangible token makes the use of privacy-enhancing technology more graspable
for the user. As a consequence, we expect that the Privacy Hat nudges users to
more actively use privacy-enhancing features like the mute button. In addition,
we propose the Privacy Hat as a study tool as we hypothesize that the artifact
supports participants in reflecting their behaviour. We report on the concept,
the prototype and our preliminary results.
|
[
{
"created": "Mon, 18 Nov 2019 15:30:08 GMT",
"version": "v1"
}
] |
2019-11-19
|
[
[
"Tiefenau",
"Christian",
""
],
[
"Häring",
"Maximilian",
""
],
[
"Gerlitz",
"Eva",
""
],
[
"von Zezschwitz",
"Emanuel",
""
]
] |
Smart speakers are gaining popularity. However, such devices can put the user's privacy at risk whenever hot-words are misinterpreted and voice data is recorded without the user's consent. To mitigate such risks, smart speakers provide privacy control mechanisms like the build-in mute button. Unfortunately, previous work indicated that such mute buttons are rarely used. In this paper, we present the Privacy Hat, a tangible device which can be placed on the smart speaker to prevent the device from listening. We designed the Privacy Hat based on the results of a focus group and developed a working prototype. We hypothesize that the specific user experience of this physical and tangible token makes the use of privacy-enhancing technology more graspable for the user. As a consequence, we expect that the Privacy Hat nudges users to more actively use privacy-enhancing features like the mute button. In addition, we propose the Privacy Hat as a study tool as we hypothesize that the artifact supports participants in reflecting their behaviour. We report on the concept, the prototype and our preliminary results.
|
2112.09131
|
Ali Athar
|
Ali Athar, Jonathon Luiten, Alexander Hermans, Deva Ramanan, Bastian
Leibe
|
HODOR: High-level Object Descriptors for Object Re-segmentation in Video
Learned from Static Images
| null | null |
10.1109/CVPR52688.2022.00303
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing state-of-the-art methods for Video Object Segmentation (VOS) learn
low-level pixel-to-pixel correspondences between frames to propagate object
masks across video. This requires a large amount of densely annotated video
data, which is costly to annotate, and largely redundant since frames within a
video are highly correlated. In light of this, we propose HODOR: a novel method
that tackles VOS by effectively leveraging annotated static images for
understanding object appearance and scene context. We encode object instances
and scene information from an image frame into robust high-level descriptors
which can then be used to re-segment those objects in different frames. As a
result, HODOR achieves state-of-the-art performance on the DAVIS and
YouTube-VOS benchmarks compared to existing methods trained without video
annotations. Without any architectural modification, HODOR can also learn from
video context around single annotated video frames by utilizing cyclic
consistency, whereas other methods rely on dense, temporally consistent
annotations. Source code is available at: https://github.com/Ali2500/HODOR
|
[
{
"created": "Thu, 16 Dec 2021 18:59:53 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jul 2022 13:15:16 GMT",
"version": "v2"
}
] |
2022-11-23
|
[
[
"Athar",
"Ali",
""
],
[
"Luiten",
"Jonathon",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Leibe",
"Bastian",
""
]
] |
Existing state-of-the-art methods for Video Object Segmentation (VOS) learn low-level pixel-to-pixel correspondences between frames to propagate object masks across video. This requires a large amount of densely annotated video data, which is costly to annotate, and largely redundant since frames within a video are highly correlated. In light of this, we propose HODOR: a novel method that tackles VOS by effectively leveraging annotated static images for understanding object appearance and scene context. We encode object instances and scene information from an image frame into robust high-level descriptors which can then be used to re-segment those objects in different frames. As a result, HODOR achieves state-of-the-art performance on the DAVIS and YouTube-VOS benchmarks compared to existing methods trained without video annotations. Without any architectural modification, HODOR can also learn from video context around single annotated video frames by utilizing cyclic consistency, whereas other methods rely on dense, temporally consistent annotations. Source code is available at: https://github.com/Ali2500/HODOR
|
1310.7610
|
Adwaitvedant Mathkar
|
Adwaitvedant S. Mathkar and Vivek S. Borkar
|
Distributed Reinforcement Learning via Gossip
|
18 pages, 3 figures, Submitted to Discrete Event Dynamic Systems
| null | null | null |
cs.DC cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the classical TD(0) algorithm implemented on a network of agents
wherein the agents also incorporate the updates received from neighboring
agents using a gossip-like mechanism. The combined scheme is shown to converge
for both discounted and average cost problems.
|
[
{
"created": "Mon, 28 Oct 2013 20:23:57 GMT",
"version": "v1"
}
] |
2013-10-30
|
[
[
"Mathkar",
"Adwaitvedant S.",
""
],
[
"Borkar",
"Vivek S.",
""
]
] |
We consider the classical TD(0) algorithm implemented on a network of agents wherein the agents also incorporate the updates received from neighboring agents using a gossip-like mechanism. The combined scheme is shown to converge for both discounted and average cost problems.
|
1411.5433
|
Alexander Semenov
|
Alexander Semenov, Oleg Zaikin and Ilya Otpuschennikov
|
Using Volunteer Computing for Mounting SAT-based Cryptographic Attacks
| null | null | null | null |
cs.DC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we describe the volunteer computing project SAT@home, developed
and maintained by us. This project is aimed at solving hard instances of the
Boolean satisfiability problem (SAT). We believe that this project can be a
useful tool for computational study of inversion problems of some cryptographic
functions. In particular we describe a series of experiments performed in
SAT@home on the cryptanalysis of the widely known keystream generator A5/1. In
all experiments we analyzed one known burst (114 bits) of keystream produced by
A5/1. Before the cryptanalysis itself there is a stage on which the
partitioning of the original problem to a family of subproblems is carried out.
Each of subproblems should be easy enough so that it could be solved in
relatively small amount of time by volunteer's PC. We construct such
partitioning using the special technique based on the Monte Carlo method and
discrete optimization algorithms for special predictive functions. Besides this
in the paper we describe the technique for reducing inversion problems of
cryptographic functions to SAT.
|
[
{
"created": "Thu, 20 Nov 2014 03:42:13 GMT",
"version": "v1"
}
] |
2014-11-21
|
[
[
"Semenov",
"Alexander",
""
],
[
"Zaikin",
"Oleg",
""
],
[
"Otpuschennikov",
"Ilya",
""
]
] |
In this paper we describe the volunteer computing project SAT@home, developed and maintained by us. This project is aimed at solving hard instances of the Boolean satisfiability problem (SAT). We believe that this project can be a useful tool for computational study of inversion problems of some cryptographic functions. In particular we describe a series of experiments performed in SAT@home on the cryptanalysis of the widely known keystream generator A5/1. In all experiments we analyzed one known burst (114 bits) of keystream produced by A5/1. Before the cryptanalysis itself there is a stage on which the partitioning of the original problem to a family of subproblems is carried out. Each of subproblems should be easy enough so that it could be solved in relatively small amount of time by volunteer's PC. We construct such partitioning using the special technique based on the Monte Carlo method and discrete optimization algorithms for special predictive functions. Besides this in the paper we describe the technique for reducing inversion problems of cryptographic functions to SAT.
|
1911.05347
|
Nandana Rajatheva
|
Nora Boulaioune, Nandana Rajatheva, Matti Latva-aho
|
High Reliability Downlink MU-MIMO: New OSTBC Approach and Superposition
Modulated Side Information
|
Submitted to IEEE VTC 2020 Spring Conference
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a proposal to improve the reliability of a downlink multiuser
(MU) MIMO transmission scheme is investigated with the use of a new approach in
orthogonal space-time block codes (OSTBC) and network coding with a
superposition modulated system and side information. With the new encoded OSTBC
approach, diversity is offered where each user receives all other users'
symbols, which allows the recovery of symbols in several ways. In addition,
multiple users can be accommodated with the same resource, which is quite
useful in a wireless system where resources are always restricted. By employing
superposition modulation, the side information needed for error recovery can be
transmitted over the same resource used for the normal information frame. In
addition, the proposed system exploits diversity through a novel technique of
sub-constellation alignment-based signal combining for efficient side
information dissemination. A detailed analysis of the new OSTBC approach is
carried out. It is shown that the performance of the MU-MIMO system can be
improved significantly in terms of block and frame error rates (BLER, FER)
considered as reliability measures. By accommodating a reasonable number of
multiple users, high reliability is achieved at the expense of the rate. To
compensate for the low rate, conventional OSTBC can be considered and
simulation results are shown, where, as a penalty to pay, multiple orthogonal
resources are required.
|
[
{
"created": "Wed, 13 Nov 2019 08:32:03 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jan 2020 21:51:04 GMT",
"version": "v2"
}
] |
2020-01-22
|
[
[
"Boulaioune",
"Nora",
""
],
[
"Rajatheva",
"Nandana",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
In this paper, a proposal to improve the reliability of a downlink multiuser (MU) MIMO transmission scheme is investigated with the use of a new approach in orthogonal space-time block codes (OSTBC) and network coding with a superposition modulated system and side information. With the new encoded OSTBC approach, diversity is offered where each user receives all other users' symbols, which allows the recovery of symbols in several ways. In addition, multiple users can be accommodated with the same resource, which is quite useful in a wireless system where resources are always restricted. By employing superposition modulation, the side information needed for error recovery can be transmitted over the same resource used for the normal information frame. In addition, the proposed system exploits diversity through a novel technique of sub-constellation alignment-based signal combining for efficient side information dissemination. A detailed analysis of the new OSTBC approach is carried out. It is shown that the performance of the MU-MIMO system can be improved significantly in terms of block and frame error rates (BLER, FER) considered as reliability measures. By accommodating a reasonable number of multiple users, high reliability is achieved at the expense of the rate. To compensate for the low rate, conventional OSTBC can be considered and simulation results are shown, where, as a penalty to pay, multiple orthogonal resources are required.
|
2304.11609
|
Cilin Yan
|
Cilin Yan, Haochen Wang, Jie Liu, Xiaolong Jiang, Yao Hu, Xu Tang,
Guoliang Kang, Efstratios Gavves
|
PiClick: Picking the desired mask from multiple candidates in
click-based interactive segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Click-based interactive segmentation aims to generate target masks via human
clicking, which facilitates efficient pixel-level annotation and image editing.
In such a task, target ambiguity remains a problem hindering the accuracy and
efficiency of segmentation. That is, in scenes with rich context, one click may
correspond to multiple potential targets, while most previous interactive
segmentors only generate a single mask and fail to deal with target ambiguity.
In this paper, we propose a novel interactive segmentation network named
PiClick, to yield all potentially reasonable masks and suggest the most
plausible one for the user. Specifically, PiClick utilizes a Transformer-based
architecture to generate all potential target masks by mutually interactive
mask queries. Moreover, a Target Reasoning module(TRM) is designed in PiClick
to automatically suggest the user-desired mask from all candidates, relieving
target ambiguity and extra-human efforts. Extensive experiments on 9
interactive segmentation datasets demonstrate PiClick performs favorably
against previous state-of-the-arts considering the segmentation results.
Moreover, we show that PiClick effectively reduces human efforts in annotating
and picking the desired masks. To ease the usage and inspire future research,
we release the source code of PiClick together with a plug-and-play annotation
tool at https://github.com/cilinyan/PiClick.
|
[
{
"created": "Sun, 23 Apr 2023 10:46:16 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Aug 2023 02:30:56 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Aug 2023 13:26:52 GMT",
"version": "v3"
},
{
"created": "Mon, 29 Jan 2024 14:33:02 GMT",
"version": "v4"
},
{
"created": "Mon, 17 Jun 2024 06:41:56 GMT",
"version": "v5"
}
] |
2024-06-18
|
[
[
"Yan",
"Cilin",
""
],
[
"Wang",
"Haochen",
""
],
[
"Liu",
"Jie",
""
],
[
"Jiang",
"Xiaolong",
""
],
[
"Hu",
"Yao",
""
],
[
"Tang",
"Xu",
""
],
[
"Kang",
"Guoliang",
""
],
[
"Gavves",
"Efstratios",
""
]
] |
Click-based interactive segmentation aims to generate target masks via human clicking, which facilitates efficient pixel-level annotation and image editing. In such a task, target ambiguity remains a problem hindering the accuracy and efficiency of segmentation. That is, in scenes with rich context, one click may correspond to multiple potential targets, while most previous interactive segmentors only generate a single mask and fail to deal with target ambiguity. In this paper, we propose a novel interactive segmentation network named PiClick, to yield all potentially reasonable masks and suggest the most plausible one for the user. Specifically, PiClick utilizes a Transformer-based architecture to generate all potential target masks by mutually interactive mask queries. Moreover, a Target Reasoning module(TRM) is designed in PiClick to automatically suggest the user-desired mask from all candidates, relieving target ambiguity and extra-human efforts. Extensive experiments on 9 interactive segmentation datasets demonstrate PiClick performs favorably against previous state-of-the-arts considering the segmentation results. Moreover, we show that PiClick effectively reduces human efforts in annotating and picking the desired masks. To ease the usage and inspire future research, we release the source code of PiClick together with a plug-and-play annotation tool at https://github.com/cilinyan/PiClick.
|
2311.15698
|
Federico Galatolo
|
Federico A. Galatolo, Mario G.C.A. Cimino
|
Cerbero-7B: A Leap Forward in Language-Specific LLMs Through Enhanced
Chat Corpus Generation and Evaluation
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study introduces a novel approach for generating high-quality,
language-specific chat corpora using a self-chat mechanism. We combine a
generator LLM for creating new samples and an embedder LLM to ensure diversity.
A new Masked Language Modelling (MLM) model-based quality assessment metric is
proposed for evaluating and filtering the corpora. Utilizing the llama2-70b as
the generator and a multilingual sentence transformer as embedder, we generate
an Italian chat corpus and refine the Fauno corpus, which is based on
translated English ChatGPT self-chat data. The refinement uses structural
assertions and Natural Language Processing techniques. Both corpora undergo a
comprehensive quality evaluation using the proposed MLM model-based quality
metric. The Italian LLM fine-tuned with these corpora demonstrates
significantly enhanced language comprehension and question-answering skills.
The resultant model, cerbero-7b, establishes a new state-of-the-art for Italian
LLMs. This approach marks a substantial advancement in the development of
language-specific LLMs, with a special emphasis on augmenting corpora for
underrepresented languages like Italian.
|
[
{
"created": "Mon, 27 Nov 2023 10:34:55 GMT",
"version": "v1"
}
] |
2023-11-28
|
[
[
"Galatolo",
"Federico A.",
""
],
[
"Cimino",
"Mario G. C. A.",
""
]
] |
This study introduces a novel approach for generating high-quality, language-specific chat corpora using a self-chat mechanism. We combine a generator LLM for creating new samples and an embedder LLM to ensure diversity. A new Masked Language Modelling (MLM) model-based quality assessment metric is proposed for evaluating and filtering the corpora. Utilizing the llama2-70b as the generator and a multilingual sentence transformer as embedder, we generate an Italian chat corpus and refine the Fauno corpus, which is based on translated English ChatGPT self-chat data. The refinement uses structural assertions and Natural Language Processing techniques. Both corpora undergo a comprehensive quality evaluation using the proposed MLM model-based quality metric. The Italian LLM fine-tuned with these corpora demonstrates significantly enhanced language comprehension and question-answering skills. The resultant model, cerbero-7b, establishes a new state-of-the-art for Italian LLMs. This approach marks a substantial advancement in the development of language-specific LLMs, with a special emphasis on augmenting corpora for underrepresented languages like Italian.
|
2009.14073
|
Alessandro Brusaferri Eng.
|
Alessandro Brusaferri and Matteo Matteucci and Stefano Spinelli
|
Estimation of Switched Markov Polynomial NARX models
|
7 pages, 2 figures
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
This work targets the identification of a class of models for hybrid
dynamical systems characterized by nonlinear autoregressive exogenous (NARX)
components, with finite-dimensional polynomial expansions, and by a Markovian
switching mechanism. The estimation of the model parameters is performed under
a probabilistic framework via Expectation Maximization, including submodel
coefficients, hidden state values and transition probabilities. Discrete mode
classification and NARX regression tasks are disentangled within the
iterations. Soft-labels are assigned to latent states on the trajectories by
averaging over the state posteriors and updated using the parametrization
obtained from the previous maximization phase. Then, NARXs parameters are
repeatedly fitted by solving weighted regression subproblems through a cyclical
coordinate descent approach with coordinate-wise minimization. Moreover, we
investigate a two stage selection scheme, based on a l1-norm bridge estimation
followed by hard-thresholding, to achieve parsimonious models through selection
of the polynomial expansion. The proposed approach is demonstrated on a SMNARX
problem composed by three nonlinear sub-models with specific regressors.
|
[
{
"created": "Tue, 29 Sep 2020 15:00:47 GMT",
"version": "v1"
}
] |
2020-09-30
|
[
[
"Brusaferri",
"Alessandro",
""
],
[
"Matteucci",
"Matteo",
""
],
[
"Spinelli",
"Stefano",
""
]
] |
This work targets the identification of a class of models for hybrid dynamical systems characterized by nonlinear autoregressive exogenous (NARX) components, with finite-dimensional polynomial expansions, and by a Markovian switching mechanism. The estimation of the model parameters is performed under a probabilistic framework via Expectation Maximization, including submodel coefficients, hidden state values and transition probabilities. Discrete mode classification and NARX regression tasks are disentangled within the iterations. Soft-labels are assigned to latent states on the trajectories by averaging over the state posteriors and updated using the parametrization obtained from the previous maximization phase. Then, NARXs parameters are repeatedly fitted by solving weighted regression subproblems through a cyclical coordinate descent approach with coordinate-wise minimization. Moreover, we investigate a two stage selection scheme, based on a l1-norm bridge estimation followed by hard-thresholding, to achieve parsimonious models through selection of the polynomial expansion. The proposed approach is demonstrated on a SMNARX problem composed by three nonlinear sub-models with specific regressors.
|
2009.03397
|
Jason Angel
|
Jason Angel, Segun Taofeek Aroyehun, Antonio Tamayo and Alexander
Gelbukh
|
NLP-CIC at SemEval-2020 Task 9: Analysing sentiment in code-switching
language using a simple deep-learning classifier
|
Accepted at SemEval-2020, COLING
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Code-switching is a phenomenon in which two or more languages are used in the
same message. Nowadays, it is quite common to find messages with languages
mixed in social media. This phenomenon presents a challenge for sentiment
analysis. In this paper, we use a standard convolutional neural network model
to predict the sentiment of tweets in a blend of Spanish and English languages.
Our simple approach achieved a F1-score of 0.71 on test set on the competition.
We analyze our best model capabilities and perform error analysis to expose
important difficulties for classifying sentiment in a code-switching setting.
|
[
{
"created": "Mon, 7 Sep 2020 19:57:09 GMT",
"version": "v1"
}
] |
2020-09-09
|
[
[
"Angel",
"Jason",
""
],
[
"Aroyehun",
"Segun Taofeek",
""
],
[
"Tamayo",
"Antonio",
""
],
[
"Gelbukh",
"Alexander",
""
]
] |
Code-switching is a phenomenon in which two or more languages are used in the same message. Nowadays, it is quite common to find messages with languages mixed in social media. This phenomenon presents a challenge for sentiment analysis. In this paper, we use a standard convolutional neural network model to predict the sentiment of tweets in a blend of Spanish and English languages. Our simple approach achieved a F1-score of 0.71 on test set on the competition. We analyze our best model capabilities and perform error analysis to expose important difficulties for classifying sentiment in a code-switching setting.
|
2102.02841
|
Costanza Conforti
|
Stephanie Hirmer, Alycia Leonard, Josephine Tumwesige, Costanza
Conforti
|
Building Representative Corpora from Illiterate Communities: A Review of
Challenges and Mitigation Strategies for Developing Countries
|
Accepted at EACL 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Most well-established data collection methods currently adopted in NLP depend
on the assumption of speaker literacy. Consequently, the collected corpora
largely fail to represent swathes of the global population, which tend to be
some of the most vulnerable and marginalised people in society, and often live
in rural developing areas. Such underrepresented groups are thus not only
ignored when making modeling and system design decisions, but also prevented
from benefiting from development outcomes achieved through data-driven NLP.
This paper aims to address the under-representation of illiterate communities
in NLP corpora: we identify potential biases and ethical issues that might
arise when collecting data from rural communities with high illiteracy rates in
Low-Income Countries, and propose a set of practical mitigation strategies to
help future work.
|
[
{
"created": "Thu, 4 Feb 2021 19:20:35 GMT",
"version": "v1"
}
] |
2021-02-08
|
[
[
"Hirmer",
"Stephanie",
""
],
[
"Leonard",
"Alycia",
""
],
[
"Tumwesige",
"Josephine",
""
],
[
"Conforti",
"Costanza",
""
]
] |
Most well-established data collection methods currently adopted in NLP depend on the assumption of speaker literacy. Consequently, the collected corpora largely fail to represent swathes of the global population, which tend to be some of the most vulnerable and marginalised people in society, and often live in rural developing areas. Such underrepresented groups are thus not only ignored when making modeling and system design decisions, but also prevented from benefiting from development outcomes achieved through data-driven NLP. This paper aims to address the under-representation of illiterate communities in NLP corpora: we identify potential biases and ethical issues that might arise when collecting data from rural communities with high illiteracy rates in Low-Income Countries, and propose a set of practical mitigation strategies to help future work.
|
1906.08138
|
Julian Hammer
|
Julian Hornich, Julian Hammer, Georg Hager, Thomas Gruber, Gerhard
Wellein
|
Collecting and Presenting Reproducible Intranode Stencil Performance:
INSPECT
| null | null |
10.14529/jsfi190301
| null |
cs.PF
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Stencil algorithms have been receiving considerable interest in HPC research
for decades. The techniques used to approach multi-core stencil performance
modeling and engineering span basic runtime measurements, elaborate performance
models, detailed hardware counter analysis, and thorough scaling behavior
evaluation. Due to the plurality of approaches and stencil patterns, we set out
to develop a generalizable methodology for reproducible measurements
accompanied by state-of-the-art performance models. Our open-source toolchain,
and collected results are publicly available in the "Intranode Stencil
Performance Evaluation Collection" (INSPECT). We present the underlying
methodologies, models and tools involved in gathering and documenting the
performance behavior of a collection of typical stencil patterns across
multiple architectures and hardware configuration options. Our aim is to endow
performance-aware application developers with reproducible baseline performance
data and validated models to initiate a well-defined process of performance
assessment and optimization.
|
[
{
"created": "Wed, 19 Jun 2019 15:08:06 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2019 10:08:33 GMT",
"version": "v2"
}
] |
2020-06-25
|
[
[
"Hornich",
"Julian",
""
],
[
"Hammer",
"Julian",
""
],
[
"Hager",
"Georg",
""
],
[
"Gruber",
"Thomas",
""
],
[
"Wellein",
"Gerhard",
""
]
] |
Stencil algorithms have been receiving considerable interest in HPC research for decades. The techniques used to approach multi-core stencil performance modeling and engineering span basic runtime measurements, elaborate performance models, detailed hardware counter analysis, and thorough scaling behavior evaluation. Due to the plurality of approaches and stencil patterns, we set out to develop a generalizable methodology for reproducible measurements accompanied by state-of-the-art performance models. Our open-source toolchain, and collected results are publicly available in the "Intranode Stencil Performance Evaluation Collection" (INSPECT). We present the underlying methodologies, models and tools involved in gathering and documenting the performance behavior of a collection of typical stencil patterns across multiple architectures and hardware configuration options. Our aim is to endow performance-aware application developers with reproducible baseline performance data and validated models to initiate a well-defined process of performance assessment and optimization.
|
2402.07577
|
Thong Nguyen
|
Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Cong-Duy T Nguyen, See-Kiong
Ng, Anh Tuan Luu
|
Topic Modeling as Multi-Objective Contrastive Optimization
|
Accepted at ICLR 2024 (poster)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent representation learning approaches enhance neural topic models by
optimizing the weighted linear combination of the evidence lower bound (ELBO)
of the log-likelihood and the contrastive learning objective that contrasts
pairs of input documents. However, document-level contrastive learning might
capture low-level mutual information, such as word ratio, which disturbs topic
modeling. Moreover, there is a potential conflict between the ELBO loss that
memorizes input details for better reconstruction quality, and the contrastive
loss which attempts to learn topic representations that generalize among input
documents. To address these issues, we first introduce a novel contrastive
learning method oriented towards sets of topic vectors to capture useful
semantics that are shared among a set of input documents. Secondly, we
explicitly cast contrastive topic modeling as a gradient-based multi-objective
optimization problem, with the goal of achieving a Pareto stationary solution
that balances the trade-off between the ELBO and the contrastive objective.
Extensive experiments demonstrate that our framework consistently produces
higher-performing neural topic models in terms of topic coherence, topic
diversity, and downstream performance.
|
[
{
"created": "Mon, 12 Feb 2024 11:18:32 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Mar 2024 05:35:21 GMT",
"version": "v2"
}
] |
2024-03-12
|
[
[
"Nguyen",
"Thong",
""
],
[
"Wu",
"Xiaobao",
""
],
[
"Dong",
"Xinshuai",
""
],
[
"Nguyen",
"Cong-Duy T",
""
],
[
"Ng",
"See-Kiong",
""
],
[
"Luu",
"Anh Tuan",
""
]
] |
Recent representation learning approaches enhance neural topic models by optimizing the weighted linear combination of the evidence lower bound (ELBO) of the log-likelihood and the contrastive learning objective that contrasts pairs of input documents. However, document-level contrastive learning might capture low-level mutual information, such as word ratio, which disturbs topic modeling. Moreover, there is a potential conflict between the ELBO loss that memorizes input details for better reconstruction quality, and the contrastive loss which attempts to learn topic representations that generalize among input documents. To address these issues, we first introduce a novel contrastive learning method oriented towards sets of topic vectors to capture useful semantics that are shared among a set of input documents. Secondly, we explicitly cast contrastive topic modeling as a gradient-based multi-objective optimization problem, with the goal of achieving a Pareto stationary solution that balances the trade-off between the ELBO and the contrastive objective. Extensive experiments demonstrate that our framework consistently produces higher-performing neural topic models in terms of topic coherence, topic diversity, and downstream performance.
|
1411.4116
|
Jack Cheng J
|
Jianpeng Cheng, Dimitri Kartsaklis, Edward Grefenstette
|
Investigating the Role of Prior Disambiguation in Deep-learning
Compositional Models of Meaning
|
NIPS 2014
| null | null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to explore the effect of prior disambiguation on neural
network-based compositional models, with the hope that better semantic
representations for text compounds can be produced. We disambiguate the input
word vectors before they are fed into a compositional deep net. A series of
evaluations shows the positive effect of prior disambiguation for such deep
models.
|
[
{
"created": "Sat, 15 Nov 2014 06:32:49 GMT",
"version": "v1"
}
] |
2014-11-18
|
[
[
"Cheng",
"Jianpeng",
""
],
[
"Kartsaklis",
"Dimitri",
""
],
[
"Grefenstette",
"Edward",
""
]
] |
This paper aims to explore the effect of prior disambiguation on neural network-based compositional models, with the hope that better semantic representations for text compounds can be produced. We disambiguate the input word vectors before they are fed into a compositional deep net. A series of evaluations shows the positive effect of prior disambiguation for such deep models.
|
2108.01441
|
Quanye Jia
|
Quanye Jia, Rui Liu and Jianying Lin
|
Using Query Expansion in Manifold Ranking for Query-Oriented
Multi-Document Summarization
|
https://github.com/homealim2012/QE_Mani_Summary
|
CCL2021
|
10.1007/978-3-030-84186-7_7
| null |
cs.IR cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Manifold ranking has been successfully applied in query-oriented
multi-document summarization. It not only makes use of the relationships among
the sentences, but also the relationships between the given query and the
sentences. However, the information in the original query is often
insufficient, so we present a query expansion method combined with manifold
ranking to resolve this problem. Our method not only utilizes the information of the
query term itself and the knowledge base WordNet to expand it by synonyms, but
also uses the information of the document set itself to expand the query in
various ways (mean expansion, variance expansion and TextRank expansion).
Compared with the previous query expansion methods, our method combines
multiple query expansion methods to better represent query information, and at
the same time, it makes a useful attempt on manifold ranking. In addition, we
use the degree of word overlap and the proximity between words to calculate the
similarity between sentences. We performed experiments on the datasets of DUC
2006 and DUC2007, and the evaluation results show that the proposed query
expansion method can significantly improve the system performance and make our
system comparable to the state-of-the-art systems.
|
[
{
"created": "Sat, 31 Jul 2021 02:20:44 GMT",
"version": "v1"
}
] |
2021-08-30
|
[
[
"Jia",
"Quanye",
""
],
[
"Liu",
"Rui",
""
],
[
"Lin",
"Jianying",
""
]
] |
Manifold ranking has been successfully applied in query-oriented multi-document summarization. It not only makes use of the relationships among the sentences, but also the relationships between the given query and the sentences. However, the information in the original query is often insufficient, so we present a query expansion method combined with manifold ranking to resolve this problem. Our method not only utilizes the information of the query term itself and the knowledge base WordNet to expand it by synonyms, but also uses the information of the document set itself to expand the query in various ways (mean expansion, variance expansion and TextRank expansion). Compared with the previous query expansion methods, our method combines multiple query expansion methods to better represent query information, and at the same time, it makes a useful attempt on manifold ranking. In addition, we use the degree of word overlap and the proximity between words to calculate the similarity between sentences. We performed experiments on the datasets of DUC 2006 and DUC2007, and the evaluation results show that the proposed query expansion method can significantly improve the system performance and make our system comparable to the state-of-the-art systems.
|
2012.05258
|
Siyuan Qiao
|
Siyuan Qiao, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
|
ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic
Segmentation
|
Video: https://youtu.be/XR4HFiwwao0 GitHub:
https://github.com/joe-siyuan-qiao/ViP-DeepLab
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present ViP-DeepLab, a unified model attempting to tackle
the long-standing and challenging inverse projection problem in vision, which
we model as restoring the point clouds from perspective image sequences while
providing each point with instance-level semantic interpretations. Solving this
problem requires the vision models to predict the spatial location, semantic
class, and temporally consistent instance label for each 3D point. ViP-DeepLab
approaches it by jointly performing monocular depth estimation and video
panoptic segmentation. We name this joint task as Depth-aware Video Panoptic
Segmentation, and propose a new evaluation metric along with two derived
datasets for it, which will be made available to the public. On the individual
sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming
previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI
monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The
datasets and the evaluation codes are made publicly available.
|
[
{
"created": "Wed, 9 Dec 2020 19:00:35 GMT",
"version": "v1"
}
] |
2020-12-11
|
[
[
"Qiao",
"Siyuan",
""
],
[
"Zhu",
"Yukun",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Yuille",
"Alan",
""
],
[
"Chen",
"Liang-Chieh",
""
]
] |
In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task as Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The datasets and the evaluation codes are made publicly available.
|
cs/0106013
|
Viacheslav Wolfengagen
|
Larissa Ismailova
|
The Set of Equations to Evaluate Objects
|
5 pages
|
Proceedings of the 3-rd International Workshop on Computer Science
and Information Technologies CSIT'2001, Ufa, Yangantau, Russia
| null | null |
cs.LO cs.PL cs.SC
| null |
The notion of an equational shell is studied to involve the objects and their
environment. Appropriate methods are studied as valid embeddings of refined
objects. The refinement process determines the linkages between the variety of
possible representations giving rise to variants of computations. The case
study is equipped with the adjusted equational systems that validate the
initial applicative framework.
|
[
{
"created": "Fri, 8 Jun 2001 09:28:46 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Ismailova",
"Larissa",
""
]
] |
The notion of an equational shell is studied to involve the objects and their environment. Appropriate methods are studied as valid embeddings of refined objects. The refinement process determines the linkages between the variety of possible representations giving rise to variants of computations. The case study is equipped with the adjusted equational systems that validate the initial applicative framework.
|
1812.00583
|
Xiaobo Zhou
|
Xiaobo Zhou, Shihao Yan, Jinsong Hu, Jiande Sun, Jun Li, and Feng Shu
|
Joint Optimization of a UAV's Trajectory and Transmit Power for Covert
Communications
| null | null |
10.1109/TSP.2019.2928949
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work considers covert communications in the context of unmanned aerial
vehicle (UAV) networks, aiming to hide a UAV for transmitting critical
information out of a scenario that is monitored and where communication is not
allowed. Specifically, the UAV as a transmitter intends to transmit information
to a legitimate receiver (Bob) covertly in order to avoid being detected by a
warden (Willie) with location uncertainties at Bob and/or Willie. In order to
enhance the considered covert communication performance, we prefer to jointly
optimize the UAV's trajectory and transmit power in terms of maximizing the
average covert transmission rate from the UAV to Bob subject to transmission
outage constraint and covertness constraint. The formulated optimization
problem is difficult to tackle directly due to the intractable constraints. As
such, we first employ conservative approximation to transform a constraint into
a deterministic form and then apply the first-order restrictive approximation
to transform the optimization problem into a convex form. By applying the
successive convex approximation (SCA) technique, an efficient iterative
algorithm is developed to solve the optimization problem. Our examination shows
that the developed joint trajectory and transmit power optimization scheme
achieves significantly better covert communication performance as compared to a
benchmark scheme.
|
[
{
"created": "Mon, 3 Dec 2018 07:39:54 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2019 06:22:02 GMT",
"version": "v2"
}
] |
2019-09-04
|
[
[
"Zhou",
"Xiaobo",
""
],
[
"Yan",
"Shihao",
""
],
[
"Hu",
"Jinsong",
""
],
[
"Sun",
"Jiande",
""
],
[
"Li",
"Jun",
""
],
[
"Shu",
"Feng",
""
]
] |
This work considers covert communications in the context of unmanned aerial vehicle (UAV) networks, aiming to hide a UAV for transmitting critical information out of a scenario that is monitored and where communication is not allowed. Specifically, the UAV as a transmitter intends to transmit information to a legitimate receiver (Bob) covertly in order to avoid being detected by a warden (Willie) with location uncertainties at Bob and/or Willie. In order to enhance the considered covert communication performance, we prefer to jointly optimize the UAV's trajectory and transmit power in terms of maximizing the average covert transmission rate from the UAV to Bob subject to transmission outage constraint and covertness constraint. The formulated optimization problem is difficult to tackle directly due to the intractable constraints. As such, we first employ conservative approximation to transform a constraint into a deterministic form and then apply the first-order restrictive approximation to transform the optimization problem into a convex form. By applying the successive convex approximation (SCA) technique, an efficient iterative algorithm is developed to solve the optimization problem. Our examination shows that the developed joint trajectory and transmit power optimization scheme achieves significantly better covert communication performance as compared to a benchmark scheme.
|
1708.05849
|
Nicolas Markey
|
Patrick Gardy, Patricia Bouyer, Nicolas Markey
|
Dependences in Strategy Logic
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strategy Logic (SL) is a very expressive logic for specifying and verifying
properties of multi-agent systems: in SL, one can quantify over strategies,
assign them to agents, and express properties of the resulting plays. Such a
powerful framework has two drawbacks: first, model checking SL has
non-elementary complexity; second, the exact semantics of SL is rather
intricate, and may not correspond to what is expected. In this paper, we focus
on strategy dependences in SL, by tracking how existentially-quantified
strategies in a formula may (or may not) depend on other strategies selected in
the formula. We study different kinds of dependences, refining the approach of
[Mogavero et al., Reasoning about strategies: On the model-checking problem,
2014], and prove that they give rise to different satisfaction relations. In
the setting where strategies may only depend on what they have observed, we
identify a large fragment of SL for which we prove model checking can be
performed in 2EXPTIME.
|
[
{
"created": "Sat, 19 Aug 2017 14:10:22 GMT",
"version": "v1"
}
] |
2017-08-22
|
[
[
"Gardy",
"Patrick",
""
],
[
"Bouyer",
"Patricia",
""
],
[
"Markey",
"Nicolas",
""
]
] |
Strategy Logic (SL) is a very expressive logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express properties of the resulting plays. Such a powerful framework has two drawbacks: first, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula. We study different kinds of dependences, refining the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014], and prove that they give rise to different satisfaction relations. In the setting where strategies may only depend on what they have observed, we identify a large fragment of SL for which we prove model checking can be performed in 2EXPTIME.
|
1202.3721
|
Phan H. Giang
|
Phan H. Giang
|
Dynamic consistency and decision making under vacuous belief
| null | null | null |
UAI-P-2011-PG-230-237
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ideas about decision making under ignorance in economics are combined
with the ideas about uncertainty representation in computer science. The
combination sheds new light on the question of how artificial agents can act in
a dynamically consistent manner. The notion of sequential consistency is
formalized by adapting the law of iterated expectation for plausibility
measures. The necessary and sufficient condition for a certainty equivalence
operator for Nehring-Puppe's preference to be sequentially consistent is given.
This result sheds light on the models of decision making under uncertainty.
|
[
{
"created": "Tue, 14 Feb 2012 16:41:17 GMT",
"version": "v1"
}
] |
2012-02-20
|
[
[
"Giang",
"Phan H.",
""
]
] |
The ideas about decision making under ignorance in economics are combined with the ideas about uncertainty representation in computer science. The combination sheds new light on the question of how artificial agents can act in a dynamically consistent manner. The notion of sequential consistency is formalized by adapting the law of iterated expectation for plausibility measures. The necessary and sufficient condition for a certainty equivalence operator for Nehring-Puppe's preference to be sequentially consistent is given. This result sheds light on the models of decision making under uncertainty.
|
2307.02975
|
Mattia Giovanni Campana
|
Mattia Giovanni Campana, Franca Delmastro, Elena Pagani
|
Transfer Learning for the Efficient Detection of COVID-19 from
Smartphone Audio Data
| null |
Pervasive and Mobile Computing, Volume 89, 2023
|
10.1016/j.pmcj.2023.101754
| null |
cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Disease detection from smartphone data represents an open research challenge
in mobile health (m-health) systems. COVID-19 and its respiratory symptoms are
an important case study in this area and their early detection is a potential
real instrument to counteract the pandemic situation. The efficacy of this
solution mainly depends on the performances of AI algorithms applied to the
collected data and their possible implementation directly on the users' mobile
devices. Considering these issues, and the limited amount of available data, in
this paper we present the experimental evaluation of 3 different deep learning
models, compared also with hand-crafted features, and of two main approaches of
transfer learning in the considered scenario: both feature extraction and
fine-tuning. Specifically, we considered VGGish, YAMNET, and
L\textsuperscript{3}-Net (including 12 different configurations) evaluated
through user-independent experiments on 4 different datasets (13,447 samples in
total). Results clearly show the advantages of L\textsuperscript{3}-Net in all
the experimental settings, as it outperforms the other solutions by 12.3\% in
terms of Precision-Recall AUC as a feature extractor, and by 10\% when the
model is fine-tuned. Moreover, we note that fine-tuning only the
fully-connected layers of the pre-trained models generally leads to worse
performance, with an average drop of 6.6\% with respect to feature extraction.
Finally, we evaluate the memory footprints of the different models for their
possible applications on commercial mobile devices.
|
[
{
"created": "Thu, 6 Jul 2023 13:19:27 GMT",
"version": "v1"
}
] |
2023-07-07
|
[
[
"Campana",
"Mattia Giovanni",
""
],
[
"Delmastro",
"Franca",
""
],
[
"Pagani",
"Elena",
""
]
] |
Disease detection from smartphone data represents an open research challenge in mobile health (m-health) systems. COVID-19 and its respiratory symptoms are an important case study in this area and their early detection is a potential real instrument to counteract the pandemic situation. The efficacy of this solution mainly depends on the performances of AI algorithms applied to the collected data and their possible implementation directly on the users' mobile devices. Considering these issues, and the limited amount of available data, in this paper we present the experimental evaluation of 3 different deep learning models, compared also with hand-crafted features, and of two main approaches of transfer learning in the considered scenario: both feature extraction and fine-tuning. Specifically, we considered VGGish, YAMNET, and L\textsuperscript{3}-Net (including 12 different configurations) evaluated through user-independent experiments on 4 different datasets (13,447 samples in total). Results clearly show the advantages of L\textsuperscript{3}-Net in all the experimental settings, as it outperforms the other solutions by 12.3\% in terms of Precision-Recall AUC as a feature extractor, and by 10\% when the model is fine-tuned. Moreover, we note that fine-tuning only the fully-connected layers of the pre-trained models generally leads to worse performance, with an average drop of 6.6\% with respect to feature extraction. Finally, we evaluate the memory footprints of the different models for their possible applications on commercial mobile devices.
|
0908.3715
|
Moustafa Youssef
|
Moustafa Youssef, Adel Youssef, and Mohamed Younis
|
Overlapping Multi-hop Clustering for Wireless Sensor Networks
| null |
IEEE TPDS 2009
|
10.1109/TPDS.2009.32
|
WINC-TR-1001
|
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clustering is a standard approach for achieving efficient and scalable
performance in wireless sensor networks. Traditionally, clustering algorithms
aim at generating a number of disjoint clusters that satisfy some criteria. In
this paper, we formulate a novel clustering problem that aims at generating
overlapping multi-hop clusters. Overlapping clusters are useful in many sensor
network applications, including inter-cluster routing, node localization, and
time synchronization protocols. We also propose a randomized, distributed
multi-hop clustering algorithm (KOCA) for solving the overlapping clustering
problem. KOCA aims at generating connected overlapping clusters that cover the
entire sensor network with a specific average overlapping degree. Through
analysis and simulation experiments we show how to select the different values
of the parameters to achieve the clustering process objectives. Moreover, the
results show that KOCA produces approximately equal-sized clusters, which
allows distributing the load evenly over different clusters. In addition, KOCA
is scalable; the clustering formation terminates in a constant time regardless
of the network size.
|
[
{
"created": "Wed, 26 Aug 2009 01:43:15 GMT",
"version": "v1"
}
] |
2013-04-09
|
[
[
"Youssef",
"Moustafa",
""
],
[
"Youssef",
"Adel",
""
],
[
"Younis",
"Mohamed",
""
]
] |
Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Traditionally, clustering algorithms aim at generating a number of disjoint clusters that satisfy some criteria. In this paper, we formulate a novel clustering problem that aims at generating overlapping multi-hop clusters. Overlapping clusters are useful in many sensor network applications, including inter-cluster routing, node localization, and time synchronization protocols. We also propose a randomized, distributed multi-hop clustering algorithm (KOCA) for solving the overlapping clustering problem. KOCA aims at generating connected overlapping clusters that cover the entire sensor network with a specific average overlapping degree. Through analysis and simulation experiments we show how to select the different values of the parameters to achieve the clustering process objectives. Moreover, the results show that KOCA produces approximately equal-sized clusters, which allows distributing the load evenly over different clusters. In addition, KOCA is scalable; the clustering formation terminates in a constant time regardless of the network size.
|
2303.16009
|
Parag Khanna
|
Parag Khanna, M{\aa}rten Bj\"orkman and Christian Smith
|
Data-driven Grip Force Variation in Robot-Human Handovers
|
Contributed to "Advances in Close Proximity Human-Robot
Collaboration" Workshop in 2022 IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2022)
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Handovers frequently occur in our social environments, making it imperative
for a collaborative robotic system to master the skill of handover. In this
work, we aim to investigate the relationship between the grip force variation
for a human giver and the sensed interaction force-torque in human-human
handovers, utilizing a data-driven approach. A Long Short-Term Memory (LSTM)
network was trained to use the interaction force-torque in a handover to
predict the human grip force variation in advance. Further, we propose to
utilize the trained network to cause human-like grip force variation for a
robotic giver.
|
[
{
"created": "Tue, 28 Mar 2023 14:37:37 GMT",
"version": "v1"
}
] |
2023-03-29
|
[
[
"Khanna",
"Parag",
""
],
[
"Björkman",
"Mårten",
""
],
[
"Smith",
"Christian",
""
]
] |
Handovers frequently occur in our social environments, making it imperative for a collaborative robotic system to master the skill of handover. In this work, we aim to investigate the relationship between the grip force variation for a human giver and the sensed interaction force-torque in human-human handovers, utilizing a data-driven approach. A Long Short-Term Memory (LSTM) network was trained to use the interaction force-torque in a handover to predict the human grip force variation in advance. Further, we propose to utilize the trained network to cause human-like grip force variation for a robotic giver.
|
2103.11188
|
Isabella Panaccione
|
Isabella Panaccione
|
Attaining Sudan's decoding radius with no genus penalty for algebraic
geometry codes
|
Typo in the title corrected
| null | null | null |
cs.IT math.AG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present a decoding algorithm for algebraic geometry codes
with error-correcting capacity beyond half the designed distance of the code.
This algorithm comes as a fusion of the Power Error Locating Pairs algorithm
for algebraic geometry codes and the technique used by Ehrhard in order to
correct these codes up to half the designed distance. The decoding radius of
this algorithm reaches that of Sudan's algorithm, without any penalty given by
the genus of the curve.
|
[
{
"created": "Sat, 20 Mar 2021 14:24:22 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Mar 2021 14:22:55 GMT",
"version": "v2"
}
] |
2021-03-24
|
[
[
"Panaccione",
"Isabella",
""
]
] |
In this paper we present a decoding algorithm for algebraic geometry codes with error-correcting capacity beyond half the designed distance of the code. This algorithm comes as a fusion of the Power Error Locating Pairs algorithm for algebraic geometry codes and the technique used by Ehrhard in order to correct these codes up to half the designed distance. The decoding radius of this algorithm reaches that of Sudan's algorithm, without any penalty given by the genus of the curve.
|
0710.1482
|
Amey Karkare
|
Amey Karkare, Amitabha Sanyal, Uday Khedker
|
Heap Reference Analysis for Functional Programs
| null | null | null | null |
cs.PL cs.SE
| null |
Current garbage collectors leave a lot of garbage uncollected because they
conservatively approximate liveness by reachability from program variables. In
this paper, we describe a sequence of static analyses that takes as input a
program written in a first-order, eager functional programming language, and
finds at each program point the references to objects that are guaranteed not
to be used in the future. Such references are made null by a transformation
pass. If this makes the object unreachable, it can be collected by the garbage
collector. This causes more garbage to be collected, resulting in fewer
collections. Additionally, for those garbage collectors which scavenge live
objects, it makes each collection faster.
The interesting aspects of our method are both in the identification of the
analyses required to solve the problem and the way they are carried out. We
identify three different analyses -- liveness, sharing and accessibility. In
liveness and sharing analyses, the function definitions are analyzed
independently of the calling context. This is achieved by using a variable to
represent the unknown context of the function being analyzed and setting up
constraints expressing the effect of the function with respect to the variable.
The solution of the constraints is a summary of the function that is
parameterized with respect to a calling context and is used to analyze function
calls. As a result we achieve context sensitivity at call sites without
analyzing the function multiple times.
|
[
{
"created": "Mon, 8 Oct 2007 08:43:58 GMT",
"version": "v1"
}
] |
2007-10-09
|
[
[
"Karkare",
"Amey",
""
],
[
"Sanyal",
"Amitabha",
""
],
[
"Khedker",
"Uday",
""
]
] |
Current garbage collectors leave a lot of garbage uncollected because they conservatively approximate liveness by reachability from program variables. In this paper, we describe a sequence of static analyses that takes as input a program written in a first-order, eager functional programming language, and finds at each program point the references to objects that are guaranteed not to be used in the future. Such references are made null by a transformation pass. If this makes the object unreachable, it can be collected by the garbage collector. This causes more garbage to be collected, resulting in fewer collections. Additionally, for those garbage collectors which scavenge live objects, it makes each collection faster. The interesting aspects of our method are both in the identification of the analyses required to solve the problem and the way they are carried out. We identify three different analyses -- liveness, sharing and accessibility. In liveness and sharing analyses, the function definitions are analyzed independently of the calling context. This is achieved by using a variable to represent the unknown context of the function being analyzed and setting up constraints expressing the effect of the function with respect to the variable. The solution of the constraints is a summary of the function that is parameterized with respect to a calling context and is used to analyze function calls. As a result we achieve context sensitivity at call sites without analyzing the function multiple times.
|
2306.01417
|
Robert Poe
|
Robert Lee Poe and Soumia Zohra El Mestari
|
The Flawed Foundations of Fair Machine Learning
|
This article is a preprint submitted to the Minds and Machines
Special Issue on the (Un)fairness of AI on May 31st, 2023
| null | null | null |
cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The definition and implementation of fairness in automated decisions has been
extensively studied by the research community. Yet, there hides fallacious
reasoning, misleading assertions, and questionable practices at the foundations
of the current fair machine learning paradigm. Those flaws are the result of a
failure to understand that the trade-off between statistically accurate
outcomes and group similar outcomes exists as an independent, external
constraint rather than as a subjective manifestation, as has been commonly argued. First,
we explain that there is only one conception of fairness present in the fair
machine learning literature: group similarity of outcomes based on a sensitive
attribute where the similarity benefits an underprivileged group. Second, we
show that there is, in fact, a trade-off between statistically accurate
outcomes and group similar outcomes in any data setting where group disparities
exist, and that the trade-off presents an existential threat to the equitable,
fair machine learning approach. Third, we introduce a proof-of-concept
evaluation to aid researchers and designers in understanding the relationship
between statistically accurate outcomes and group similar outcomes. Finally,
suggestions for future work aimed at data scientists, legal scholars, and data
ethicists that utilize the conceptual and experimental framework described
throughout this article are provided.
|
[
{
"created": "Fri, 2 Jun 2023 10:07:12 GMT",
"version": "v1"
}
] |
2023-06-05
|
[
[
"Poe",
"Robert Lee",
""
],
[
"Mestari",
"Soumia Zohra El",
""
]
] |
The definition and implementation of fairness in automated decisions has been extensively studied by the research community. Yet, there hides fallacious reasoning, misleading assertions, and questionable practices at the foundations of the current fair machine learning paradigm. Those flaws are the result of a failure to understand that the trade-off between statistically accurate outcomes and group similar outcomes exists as an independent, external constraint rather than as a subjective manifestation, as has been commonly argued. First, we explain that there is only one conception of fairness present in the fair machine learning literature: group similarity of outcomes based on a sensitive attribute where the similarity benefits an underprivileged group. Second, we show that there is, in fact, a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist, and that the trade-off presents an existential threat to the equitable, fair machine learning approach. Third, we introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes. Finally, suggestions for future work aimed at data scientists, legal scholars, and data ethicists that utilize the conceptual and experimental framework described throughout this article are provided.
|
2201.11481
|
B.Sundar Rajan
|
Kanishak Vaidya, and B Sundar Rajan
|
Multi-Access Cache-Aided Multi-User Private Information Retrieval
|
15 pages, 11 figures, 2 tables. Fixed minor errors in the previous
version and the presentation improved
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of multi-access cache-aided multi-user Private
Information Retrieval (MuPIR). In this problem, several files are replicated
across multiple servers. There are $K$ users and $C$ cache nodes. Each user can
access $L$ cache nodes, and every cache node can be accessed by several users.
Each user wants to retrieve one file from the servers, but the users do not
want the servers to know their demands. Before the users decide their
respective demands, servers will fill the cache nodes from the content of the
files. Users will then request their desired files from the servers. Servers
will perform coded transmissions, and all the users should get their desired
files from these transmissions and the content placed in the caches they are
accessing. It is required that any individual server should not get any
information about the demands of the users. This problem is an extension of the
dedicated cache-aided MuPIR problem, which itself generalizes the widely
studied single user PIR setup. In this paper, we propose a MuPIR scheme which
utilizes a multi-access setup of the coded caching problem. The presented
scheme is order optimal when there are $K=\binom{C}{L}$ users. We also characterize the
rate of the scheme for the special case of cyclic wraparound multi-access
setup, where $C=K$ and each user accesses $L$ consecutive cache nodes in cyclic
wraparound fashion.
|
[
{
"created": "Thu, 27 Jan 2022 12:37:22 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Feb 2022 19:37:53 GMT",
"version": "v2"
}
] |
2022-02-11
|
[
[
"Vaidya",
"Kanishak",
""
],
[
"Rajan",
"B Sundar",
""
]
] |
We consider the problem of multi-access cache-aided multi-user Private Information Retrieval (MuPIR). In this problem, several files are replicated across multiple servers. There are $K$ users and $C$ cache nodes. Each user can access $L$ cache nodes, and every cache node can be accessed by several users. Each user wants to retrieve one file from the servers, but the users do not want the servers to know their demands. Before the users decide their respective demands, servers will fill the cache nodes from the content of the files. Users will then request their desired files from the servers. Servers will perform coded transmissions, and all the users should get their desired files from these transmissions and the content placed in the caches they are accessing. It is required that any individual server should not get any information about the demands of the users. This problem is an extension of the dedicated cache-aided MuPIR problem, which itself generalizes the widely studied single user PIR setup. In this paper, we propose a MuPIR scheme which utilizes a multi-access setup of the coded caching problem. The presented scheme is order optimal when there are $K=\binom{C}{L}$ users. We also characterize the rate of the scheme for the special case of cyclic wraparound multi-access setup, where $C=K$ and each user accesses $L$ consecutive cache nodes in cyclic wraparound fashion.
|
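The cyclic wraparound multi-access pattern described in the abstract above ($C=K$, with each user reading $L$ consecutive caches) can be made concrete with a small helper; the function and its name are an illustrative sketch, not code from the paper:

```python
def cyclic_caches(k: int, C: int, L: int) -> list[int]:
    """Indices of the cache nodes accessed by user k when each of the
    K = C users reads L consecutive caches, wrapping around modulo C.
    Illustrative sketch only; not taken from the paper."""
    return [(k + i) % C for i in range(L)]
```

With $C = 4$ and $L = 2$, user 3 wraps around and reads caches 3 and 0, and every cache node ends up shared by exactly $L$ users.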
2110.00613
|
Jesse Dodge
|
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A.
Smith
|
Expected Validation Performance and Estimation of a Random Variable's
Maximum
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Research in NLP is often supported by experimental results, and improved
reporting of such results can lead to better understanding and more
reproducible science. In this paper we analyze three statistical estimators for
expected validation performance, a tool used for reporting performance (e.g.,
accuracy) as a function of computational budget (e.g., number of hyperparameter
tuning experiments). Where previous work analyzing such estimators focused on
the bias, we also examine the variance and mean squared error (MSE). In both
synthetic and realistic scenarios, we evaluate three estimators and find the
unbiased estimator has the highest variance, and the estimator with the
smallest variance has the largest bias; the estimator with the smallest MSE
strikes a balance between bias and variance, displaying a classic bias-variance
tradeoff. We use expected validation performance to compare between different
models, and analyze how frequently each estimator leads to drawing incorrect
conclusions about which of two models performs best. We find that the two
biased estimators lead to the fewest incorrect conclusions, which hints at the
importance of minimizing variance and MSE.
|
[
{
"created": "Fri, 1 Oct 2021 18:48:47 GMT",
"version": "v1"
}
] |
2021-10-05
|
[
[
"Dodge",
"Jesse",
""
],
[
"Gururangan",
"Suchin",
""
],
[
"Card",
"Dallas",
""
],
[
"Schwartz",
"Roy",
""
],
[
"Smith",
"Noah A.",
""
]
] |
Research in NLP is often supported by experimental results, and improved reporting of such results can lead to better understanding and more reproducible science. In this paper we analyze three statistical estimators for expected validation performance, a tool used for reporting performance (e.g., accuracy) as a function of computational budget (e.g., number of hyperparameter tuning experiments). Where previous work analyzing such estimators focused on the bias, we also examine the variance and mean squared error (MSE). In both synthetic and realistic scenarios, we evaluate three estimators and find the unbiased estimator has the highest variance, and the estimator with the smallest variance has the largest bias; the estimator with the smallest MSE strikes a balance between bias and variance, displaying a classic bias-variance tradeoff. We use expected validation performance to compare between different models, and analyze how frequently each estimator leads to drawing incorrect conclusions about which of two models performs best. We find that the two biased estimators lead to the fewest incorrect conclusions, which hints at the importance of minimizing variance and MSE.
|
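The quantity analyzed above, expected validation performance, is the expected maximum of $n$ i.i.d. draws from the distribution of results. A common plug-in construction over $N$ observed scores uses the empirical CDF, $P(\max \le v_{(i)}) = (i/N)^n$; this sketch is a generic illustration of that idea, not necessarily the paper's exact estimators:

```python
def expected_max(values, n):
    """Plug-in estimate of E[max of n i.i.d. draws], computed from the
    N observed scores via the empirical CDF: P(max <= v_(i)) = (i/N)^n.
    A common construction in this line of work; estimators of this kind
    are generally biased, which is part of what the paper analyzes."""
    v = sorted(values)
    N = len(v)
    return sum(((i / N) ** n - ((i - 1) / N) ** n) * v[i - 1]
               for i in range(1, N + 1))
```

For a budget of $n = 1$ the curve starts at the sample mean and rises monotonically toward the sample maximum as $n$ grows.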
2203.01482
|
Baoquan Zhang
|
Baoquan Zhang, Hao Jiang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
|
MetaDT: Meta Decision Tree with Class Hierarchy for Interpretable
Few-Shot Learning
|
10 pages, 7 figures
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Few-Shot Learning (FSL) is a challenging task, which aims to recognize novel
classes with few examples. Recently, many methods have been proposed from
the perspective of meta-learning and representation learning. However, few
works focus on the interpretability of the FSL decision process. In this
paper, we take a step towards interpretable FSL by proposing a novel
meta-learning based decision tree framework, namely, MetaDT. In particular,
the FSL interpretability is achieved from two aspects, i.e., a concept aspect
and a visual aspect. On the concept aspect, we first introduce a tree-like
concept hierarchy as an FSL prior. Then, resorting to the prior, we split each
few-shot task into a set of subtasks with different concept levels and then
perform class prediction via a decision tree model. The advantage of such a
design is that a sequence of high-level concept decisions leading up to a
final class prediction can be obtained, which clarifies the FSL decision
process. On the visual aspect, a set of subtask-specific classifiers with a
visual attention mechanism is designed to perform the decision at each node of
the decision tree. As a result, a subtask-specific heatmap visualization can
be obtained to achieve the decision interpretability of each tree node.
Finally, to alleviate the data scarcity issue of FSL, we regard the prior
concept hierarchy as an undirected graph, and then design a graph
convolution-based decision tree inference network as our meta-learner to infer
the parameters of the decision tree. Extensive experiments on performance
comparison and interpretability analysis show the superiority of our MetaDT.
|
[
{
"created": "Thu, 3 Mar 2022 01:53:47 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2023 00:49:29 GMT",
"version": "v2"
}
] |
2023-07-27
|
[
[
"Zhang",
"Baoquan",
""
],
[
"Jiang",
"Hao",
""
],
[
"Li",
"Xutao",
""
],
[
"Feng",
"Shanshan",
""
],
[
"Ye",
"Yunming",
""
],
[
"Ye",
"Rui",
""
]
] |
Few-Shot Learning (FSL) is a challenging task, which aims to recognize novel classes with few examples. Recently, many methods have been proposed from the perspective of meta-learning and representation learning. However, few works focus on the interpretability of the FSL decision process. In this paper, we take a step towards interpretable FSL by proposing a novel meta-learning based decision tree framework, namely, MetaDT. In particular, the FSL interpretability is achieved from two aspects, i.e., a concept aspect and a visual aspect. On the concept aspect, we first introduce a tree-like concept hierarchy as an FSL prior. Then, resorting to the prior, we split each few-shot task into a set of subtasks with different concept levels and then perform class prediction via a decision tree model. The advantage of such a design is that a sequence of high-level concept decisions leading up to a final class prediction can be obtained, which clarifies the FSL decision process. On the visual aspect, a set of subtask-specific classifiers with a visual attention mechanism is designed to perform the decision at each node of the decision tree. As a result, a subtask-specific heatmap visualization can be obtained to achieve the decision interpretability of each tree node. Finally, to alleviate the data scarcity issue of FSL, we regard the prior concept hierarchy as an undirected graph, and then design a graph convolution-based decision tree inference network as our meta-learner to infer the parameters of the decision tree. Extensive experiments on performance comparison and interpretability analysis show the superiority of our MetaDT.
|
2210.08400
|
Atish Dixit
|
Atish Dixit, Ahmed Elsheikh
|
A Multilevel Reinforcement Learning Framework for PDE-based Control
|
In preparation for submission to a journal
| null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reinforcement learning (RL) is a promising method to solve control problems.
However, model-free RL algorithms are sample inefficient and require thousands
if not millions of samples to learn optimal control policies. A major source of
computational cost in RL corresponds to the transition function, which is
dictated by the model dynamics. This is especially problematic when the model
dynamics are represented by coupled PDEs. In such cases, the transition
function often involves solving a large-scale discretization of the said PDEs.
We propose a multilevel RL framework in order to ease this cost by exploiting
sublevel models that correspond to coarser scale discretization (i.e.
multilevel models). This is done by formulating an approximate multilevel Monte
Carlo estimate of the objective function of the policy and/or value network
instead of Monte Carlo estimates, as done in the classical framework. As a
demonstration of this framework, we present a multilevel version of the
proximal policy optimization (PPO) algorithm. Here, the level refers to the
grid fidelity of the chosen simulation-based environment. We provide two
examples of simulation-based environments that employ stochastic PDEs that are
solved using finite-volume discretization. For the case studies presented, we
observed substantial computational savings using multilevel PPO compared to its
classical counterpart.
|
[
{
"created": "Sat, 15 Oct 2022 23:52:48 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Oct 2022 16:26:04 GMT",
"version": "v2"
}
] |
2022-10-31
|
[
[
"Dixit",
"Atish",
""
],
[
"Elsheikh",
"Ahmed",
""
]
] |
Reinforcement learning (RL) is a promising method to solve control problems. However, model-free RL algorithms are sample inefficient and require thousands if not millions of samples to learn optimal control policies. A major source of computational cost in RL corresponds to the transition function, which is dictated by the model dynamics. This is especially problematic when the model dynamics are represented by coupled PDEs. In such cases, the transition function often involves solving a large-scale discretization of the said PDEs. We propose a multilevel RL framework in order to ease this cost by exploiting sublevel models that correspond to coarser scale discretization (i.e. multilevel models). This is done by formulating an approximate multilevel Monte Carlo estimate of the objective function of the policy and/or value network instead of Monte Carlo estimates, as done in the classical framework. As a demonstration of this framework, we present a multilevel version of the proximal policy optimization (PPO) algorithm. Here, the level refers to the grid fidelity of the chosen simulation-based environment. We provide two examples of simulation-based environments that employ stochastic PDEs that are solved using finite-volume discretization. For the case studies presented, we observed substantial computational savings using multilevel PPO compared to its classical counterpart.
|
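The multilevel Monte Carlo estimate mentioned above rests on the telescoping identity $E[P_L] = E[P_0] + \sum_{l\ge 1} E[P_l - P_{l-1}]$: many cheap samples at coarse levels, few at fine ones. A toy sketch, with a synthetic one-dimensional "model" standing in for the PDE grid levels (none of this is the paper's PPO code):

```python
import random

def mlmc_estimate(payoff, samples_per_level, rng):
    """Telescoping multilevel Monte Carlo estimate of E[P_L]:
    E[P_L] = E[P_0] + sum_{l>=1} E[P_l - P_{l-1}].
    Coarse levels get many cheap samples, fine levels only a few."""
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        total = 0.0
        for _ in range(n):
            x = rng.random()  # shared randomness within each level pair
            if level == 0:
                total += payoff(0, x)
            else:
                total += payoff(level, x) - payoff(level - 1, x)
        estimate += total / n
    return estimate

# Toy stand-in for a solve at grid level l: the exact quantity is
# E[x^2] = 1/3 for x ~ U(0,1); coarser levels carry an O(2^-l) error term.
def payoff(level, x):
    return x * x + (0.5 - x) / 2 ** level

rng = random.Random(0)
estimate = mlmc_estimate(payoff, [4000, 1000, 250, 60], rng)
```

The correction terms $P_l - P_{l-1}$ have small variance, so the few fine-level samples suffice and the estimate lands near $1/3$.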
2306.06865
|
Lichin Chen
|
Li-Chin Chen, Yi-Heng Lin, Li-Ning Peng, Feng-Ming Wang, Yu-Hsin Chen,
Po-Hsun Huang, Shang-Feng Yang, Yu Tsao
|
Deep denoising autoencoder-based non-invasive blood flow detection for
arteriovenous fistula
| null | null | null | null |
cs.LG cs.AI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical guidelines underscore the importance of regularly monitoring and
surveilling arteriovenous fistula (AVF) access in hemodialysis patients to
promptly detect any dysfunction. Although phono-angiography/sound analysis
overcomes the limitations of standardized AVF stenosis diagnosis tools, prior
studies have depended on conventional feature extraction methods, restricting
their applicability in diverse contexts. In contrast, representation learning
captures fundamental underlying factors that can be readily transferred across
different contexts. We propose an approach based on deep denoising autoencoders
(DAEs) that perform dimensionality reduction and reconstruction tasks using the
waveform obtained through one-level discrete wavelet transform, utilizing
representation learning. Our results demonstrate that the latent representation
generated by the DAE surpasses expectations with an accuracy of 0.93. The
incorporation of noise-mixing and the utilization of a noise-to-clean scheme
effectively enhance the discriminative capabilities of the latent
representation. Moreover, when employed to identify patient-specific
characteristics, the latent representation exhibited strong performance,
surpassing an accuracy of 0.92. Appropriate lightweight methods can restore
the detection performance of the excessively reduced dimensionality version
and enable operation on devices with limited computational resources. Our
findings suggest that
representation learning is a more feasible approach for extracting auscultation
features in AVF, leading to improved generalization and applicability across
multiple tasks. The manipulation of latent representations holds immense
potential for future advancements. Further investigations in this area are
promising and warrant continued exploration.
|
[
{
"created": "Mon, 12 Jun 2023 04:46:01 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Chen",
"Li-Chin",
""
],
[
"Lin",
"Yi-Heng",
""
],
[
"Peng",
"Li-Ning",
""
],
[
"Wang",
"Feng-Ming",
""
],
[
"Chen",
"Yu-Hsin",
""
],
[
"Huang",
"Po-Hsun",
""
],
[
"Yang",
"Shang-Feng",
""
],
[
"Tsao",
"Yu",
""
]
] |
Clinical guidelines underscore the importance of regularly monitoring and surveilling arteriovenous fistula (AVF) access in hemodialysis patients to promptly detect any dysfunction. Although phono-angiography/sound analysis overcomes the limitations of standardized AVF stenosis diagnosis tools, prior studies have depended on conventional feature extraction methods, restricting their applicability in diverse contexts. In contrast, representation learning captures fundamental underlying factors that can be readily transferred across different contexts. We propose an approach based on deep denoising autoencoders (DAEs) that perform dimensionality reduction and reconstruction tasks using the waveform obtained through one-level discrete wavelet transform, utilizing representation learning. Our results demonstrate that the latent representation generated by the DAE surpasses expectations with an accuracy of 0.93. The incorporation of noise-mixing and the utilization of a noise-to-clean scheme effectively enhance the discriminative capabilities of the latent representation. Moreover, when employed to identify patient-specific characteristics, the latent representation exhibited strong performance, surpassing an accuracy of 0.92. Appropriate lightweight methods can restore the detection performance of the excessively reduced dimensionality version and enable operation on devices with limited computational resources. Our findings suggest that representation learning is a more feasible approach for extracting auscultation features in AVF, leading to improved generalization and applicability across multiple tasks. The manipulation of latent representations holds immense potential for future advancements. Further investigations in this area are promising and warrant continued exploration.
|
1311.2241
|
Ying Liu
|
Ying Liu and Alan S. Willsky
|
Learning Gaussian Graphical Models with Observed or Latent FVSs
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely
used in many applications, and the trade-off between the modeling capacity and
the efficiency of learning and inference has been an important research
problem. In this paper, we study the family of GGMs with small feedback vertex
sets (FVSs), where an FVS is a set of nodes whose removal breaks all the
cycles. Exact inference such as computing the marginal distributions and the
partition function has complexity $O(k^{2}n)$ using message-passing algorithms,
where k is the size of the FVS, and n is the total number of nodes. We propose
efficient structure learning algorithms for two cases: 1) All nodes are
observed, which is useful in modeling social or flight networks where the FVS
nodes often correspond to a small number of high-degree nodes, or hubs, while
the rest of the network is modeled by a tree. Regardless of the maximum
degree, without knowing the full graph structure, we can exactly compute the
maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known or in
polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes
are latent variables, where structure learning is equivalent to decomposing an
inverse covariance matrix (exactly or approximately) into the sum of a
tree-structured matrix and a low-rank matrix. By incorporating efficient
inference into the learning steps, we can obtain a learning algorithm using
alternating low-rank correction with complexity $O(kn^{2}+n^{2}\log n)$ per
iteration. We also perform experiments using both synthetic data as well as
real data of flight delays to demonstrate the modeling capacity with FVSs of
various sizes.
|
[
{
"created": "Sun, 10 Nov 2013 02:39:48 GMT",
"version": "v1"
}
] |
2013-11-12
|
[
[
"Liu",
"Ying",
""
],
[
"Willsky",
"Alan S.",
""
]
] |
Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely used in many applications, and the trade-off between the modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference such as computing the marginal distributions and the partition function has complexity $O(k^{2}n)$ using message-passing algorithms, where k is the size of the FVS, and n is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are latent variables, where structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning steps, we can obtain a learning algorithm using alternating low-rank correction with complexity $O(kn^{2}+n^{2}\log n)$ per iteration. We also perform experiments using both synthetic data as well as real data of flight delays to demonstrate the modeling capacity with FVSs of various sizes.
|
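The feedback vertex set definition used above (a node set whose removal breaks all cycles) is easy to check with union-find; this verifier is purely illustrative and unrelated to the paper's learning algorithms:

```python
def is_fvs(n, edges, fvs):
    """True iff removing the node set `fvs` from the undirected graph
    (nodes 0..n-1, edge list `edges`) leaves a forest, i.e. no cycles."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in fvs or v in fvs:
            continue  # edge deleted along with its endpoint
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge would close a cycle
        parent[ru] = rv
    return True
```

For a triangle, the empty set is not an FVS but any single vertex is, matching the intuition that small FVSs leave a nearly tree-structured graph.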
2002.10582
|
Jim Samuel
|
Jim Samuel, Richard Holowczak, Raquel Benbunan-Fich, Ilan Levine
|
Automating Discovery of Dominance in Synchronous Computer-Mediated
Communication
| null |
47th Hawaii International Conference on System Sciences, 2014, pp.
1804-1812
|
10.1109/HICSS.2014.636
| null |
cs.SI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advent of electronic interaction, dominance (or the assertion of
control over others) has acquired new dimensions. This study investigates the
dynamics and characteristics of dominance in virtual interaction by analyzing
electronic chat transcripts of groups solving a hidden profile task. We
investigate computer-mediated communication behavior patterns that demonstrate
dominance and identify a number of relevant variables. These indicators are
calculated with automatic and manual coding of text transcripts. A comparison
of both sets of variables indicates that automatic text analysis methods yield
similar conclusions to manual coding. These findings are encouraging for
advancing research in text analysis methods in general, and in the study of
virtual team dominance in particular.
|
[
{
"created": "Mon, 24 Feb 2020 23:07:38 GMT",
"version": "v1"
}
] |
2020-02-26
|
[
[
"Samuel",
"Jim",
""
],
[
"Holowczak",
"Richard",
""
],
[
"Benbunan-Fich",
"Raquel",
""
],
[
"Levine",
"Ilan",
""
]
] |
With the advent of electronic interaction, dominance (or the assertion of control over others) has acquired new dimensions. This study investigates the dynamics and characteristics of dominance in virtual interaction by analyzing electronic chat transcripts of groups solving a hidden profile task. We investigate computer-mediated communication behavior patterns that demonstrate dominance and identify a number of relevant variables. These indicators are calculated with automatic and manual coding of text transcripts. A comparison of both sets of variables indicates that automatic text analysis methods yield similar conclusions to manual coding. These findings are encouraging for advancing research in text analysis methods in general, and in the study of virtual team dominance in particular.
|
1504.00681
|
Alexandra Kolla
|
Guy Kindler, Alexandra Kolla, Luca Trevisan
|
Approximation of non-boolean 2CSP
| null | null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a polynomial-time $\Omega\left(\frac{1}{R}\log R\right)$-approximation
algorithm for Max 2CSP-$R$, the problem where we are given a
collection of constraints, each involving two variables, where each variable
ranges over a set of size $R$, and we want to find an assignment to the
variables that maximizes the number of satisfied constraints. Assuming the
Unique Games Conjecture, this is the best possible approximation up to constant
factors.
Previously, a $1/R$-approximate algorithm was known, based on linear
programming. Our algorithm is based on semidefinite programming (SDP) and on a
novel rounding technique. The SDP that we use has an almost-matching
integrality gap.
|
[
{
"created": "Thu, 2 Apr 2015 20:05:58 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Apr 2015 02:55:27 GMT",
"version": "v2"
}
] |
2015-04-07
|
[
[
"Kindler",
"Guy",
""
],
[
"Kolla",
"Alexandra",
""
],
[
"Trevisan",
"Luca",
""
]
] |
We develop a polynomial time $\Omega\left ( \frac 1R \log R \right)$ approximate algorithm for Max 2CSP-$R$, the problem where we are given a collection of constraints, each involving two variables, where each variable ranges over a set of size $R$, and we want to find an assignment to the variables that maximizes the number of satisfied constraints. Assuming the Unique Games Conjecture, this is the best possible approximation up to constant factors. Previously, a $1/R$-approximate algorithm was known, based on linear programming. Our algorithm is based on semidefinite programming (SDP) and on a novel rounding technique. The SDP that we use has an almost-matching integrality gap.
|
0904.3718
|
Florentina Pintea
|
K. Yermashov, K. H. Siemsen, K. Wolke, R.A. Rasenack
|
Architecture of the Neurath Basic Model View Controller
|
6 pages,exposed on 1st "European Conference on Computer Sciences &
Applications" - XA2006, Timisoara, Romania
|
Ann. Univ. Tibiscus Comp. Sci. Series IV (2006), 277-282
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The idea of the Neurath Basic Model View Controller (NBMVC) appeared during
the discussion of the design of domain-specific modeling tools based on the
Neurath Modeling Language [Yer06]. The NBMVC is the core of the modeling
process within the modeling environment. It removes complexity from the
design process by providing domain-specific interfaces between the developer
and the model. These interfaces help to organize and manipulate the model. The
organization includes, for example, a layer with visual components to drop them
in and filter them out. The control routines include, for example, model
transformations.
|
[
{
"created": "Thu, 23 Apr 2009 15:05:01 GMT",
"version": "v1"
}
] |
2009-04-24
|
[
[
"Yermashov",
"K.",
""
],
[
"Siemsen",
"K. H.",
""
],
[
"Wolke",
"K.",
""
],
[
"Rasenack",
"R. A.",
""
]
] |
The idea of the Neurath Basic Model View Controller (NBMVC) appeared during the discussion of the design of domain-specific modeling tools based on the Neurath Modeling Language [Yer06]. The NBMVC is the core of the modeling process within the modeling environment. It removes complexity from the design process by providing domain-specific interfaces between the developer and the model. These interfaces help to organize and manipulate the model. The organization includes, for example, a layer with visual components to drop them in and filter them out. The control routines include, for example, model transformations.
|
2007.08063
|
Boris Rubinstein
|
Boris Rubinstein
|
A fast noise filtering algorithm for time series prediction using
recurrent neural networks
|
15 pages, 10 figures; typos corrected; the notation table removed; an
appendix added
| null | null | null |
cs.LG math.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research demonstrates that prediction of time series by recurrent
neural networks (RNNs) based on the noisy input generates a smooth anticipated
trajectory. We examine the internal dynamics of RNNs and establish a set of
conditions required for such behavior. Based on this analysis we propose a new
approximate algorithm and show that it significantly speeds up the predictive
process without loss of accuracy.
|
[
{
"created": "Thu, 16 Jul 2020 01:32:48 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Aug 2020 23:44:50 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Oct 2020 14:53:56 GMT",
"version": "v3"
}
] |
2020-10-07
|
[
[
"Rubinstein",
"Boris",
""
]
] |
Recent research demonstrates that prediction of time series by recurrent neural networks (RNNs) based on the noisy input generates a smooth anticipated trajectory. We examine the internal dynamics of RNNs and establish a set of conditions required for such behavior. Based on this analysis we propose a new approximate algorithm and show that it significantly speeds up the predictive process without loss of accuracy.
|
2403.12466
|
Yunhan Ren
|
Yunhan Ren, Bo Li, Chengyang Zhang, Yong Zhang, Baocai Yin
|
Few-shot Object Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing object localization methods are tailored to locate specific classes
of objects, relying heavily on abundant labeled data for model optimization.
However, acquiring large amounts of labeled data is challenging in many
real-world scenarios, significantly limiting the broader application of
localization models. To bridge this research gap, this paper defines a novel
task named Few-Shot Object Localization (FSOL), which aims to achieve precise
localization with limited samples. This task achieves generalized object
localization by leveraging a small number of labeled support samples to query
the positional information of objects within corresponding images. To advance
this field, we design an innovative high-performance baseline model. This model
integrates a dual-path feature augmentation module to enhance shape association
and gradient differences between support and query images, alongside a
self-query module to explore the association between feature maps and query images.
Experimental results demonstrate a significant performance improvement of our
approach in the FSOL task, establishing an efficient benchmark for further
research. All codes and data are available at https://github.com/Ryh1218/FSOL.
|
[
{
"created": "Tue, 19 Mar 2024 05:50:48 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Mar 2024 12:42:25 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 08:10:26 GMT",
"version": "v3"
}
] |
2024-06-06
|
[
[
"Ren",
"Yunhan",
""
],
[
"Li",
"Bo",
""
],
[
"Zhang",
"Chengyang",
""
],
[
"Zhang",
"Yong",
""
],
[
"Yin",
"Baocai",
""
]
] |
Existing object localization methods are tailored to locate specific classes of objects, relying heavily on abundant labeled data for model optimization. However, acquiring large amounts of labeled data is challenging in many real-world scenarios, significantly limiting the broader application of localization models. To bridge this research gap, this paper defines a novel task named Few-Shot Object Localization (FSOL), which aims to achieve precise localization with limited samples. This task achieves generalized object localization by leveraging a small number of labeled support samples to query the positional information of objects within corresponding images. To advance this field, we design an innovative high-performance baseline model. This model integrates a dual-path feature augmentation module to enhance shape association and gradient differences between support and query images, alongside a self-query module to explore the association between feature maps and query images. Experimental results demonstrate a significant performance improvement of our approach in the FSOL task, establishing an efficient benchmark for further research. All codes and data are available at https://github.com/Ryh1218/FSOL.
|
1910.04284
|
Colin Wei
|
Colin Wei, Tengyu Ma
|
Improved Sample Complexities for Deep Networks and Robust Classification
via an All-Layer Margin
|
Code for all-layer margin optimization is available at the following
link: https://github.com/cwein3/all-layer-margin-opt. Version 4: Re-organized
proofs for more clarity
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For linear classifiers, the relationship between (normalized) output margin
and generalization is captured in a clear and simple bound -- a large output
margin implies good generalization. Unfortunately, for deep models, this
relationship is less clear: existing analyses of the output margin give
complicated bounds which sometimes depend exponentially on depth. In this work,
we propose to instead analyze a new notion of margin, which we call the
"all-layer margin." Our analysis reveals that the all-layer margin has a clear
and direct relationship with generalization for deep models. This enables the
following concrete applications of the all-layer margin: 1) by analyzing the
all-layer margin, we obtain tighter generalization bounds for neural nets which
depend on Jacobian and hidden layer norms and remove the exponential dependency
on depth; 2) our neural net results easily translate to the adversarially robust
setting, giving the first direct analysis of robust test error for deep
networks, and 3) we present a theoretically inspired training algorithm for
increasing the all-layer margin. Our algorithm improves both clean and
adversarially robust test performance over strong baselines in practice.
|
[
{
"created": "Wed, 9 Oct 2019 22:45:45 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Apr 2020 08:52:51 GMT",
"version": "v2"
},
{
"created": "Sat, 25 Apr 2020 06:24:54 GMT",
"version": "v3"
},
{
"created": "Sun, 11 Apr 2021 08:30:15 GMT",
"version": "v4"
},
{
"created": "Wed, 16 Jun 2021 05:12:53 GMT",
"version": "v5"
}
] |
2021-06-17
|
[
[
"Wei",
"Colin",
""
],
[
"Ma",
"Tengyu",
""
]
] |
For linear classifiers, the relationship between (normalized) output margin and generalization is captured in a clear and simple bound -- a large output margin implies good generalization. Unfortunately, for deep models, this relationship is less clear: existing analyses of the output margin give complicated bounds which sometimes depend exponentially on depth. In this work, we propose to instead analyze a new notion of margin, which we call the "all-layer margin." Our analysis reveals that the all-layer margin has a clear and direct relationship with generalization for deep models. This enables the following concrete applications of the all-layer margin: 1) by analyzing the all-layer margin, we obtain tighter generalization bounds for neural nets which depend on Jacobian and hidden layer norms and remove the exponential dependency on depth; 2) our neural net results easily translate to the adversarially robust setting, giving the first direct analysis of robust test error for deep networks, and 3) we present a theoretically inspired training algorithm for increasing the all-layer margin. Our algorithm improves both clean and adversarially robust test performance over strong baselines in practice.
|
2306.11950
|
Xundong Wu
|
Xundong Wu, Pengfei Zhao, Zilin Yu, Lei Ma, Ka-Wa Yip, Huajin Tang,
Gang Pan, Tiejun Huang
|
Mitigating Communication Costs in Neural Networks: The Role of Dendritic
Nonlinearity
| null | null | null | null |
cs.NE cs.LG q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our comprehension of biological neuronal networks has profoundly influenced
the evolution of artificial neural networks (ANNs). However, the neurons
employed in ANNs exhibit remarkable deviations from their biological analogs,
mainly due to the absence of complex dendritic trees encompassing local
nonlinearity. Despite such disparities, previous investigations have
demonstrated that point neurons can functionally substitute dendritic neurons
in executing computational tasks. In this study, we scrutinized the importance
of nonlinear dendrites within neural networks. By employing machine-learning
methodologies, we assessed the impact of dendritic structure nonlinearity on
neural network performance. Our findings reveal that integrating dendritic
structures can substantially enhance model capacity and performance while
keeping signal communication costs effectively restrained. This investigation
offers pivotal insights that hold considerable implications for the development
of future neural network accelerators.
|
[
{
"created": "Wed, 21 Jun 2023 00:28:20 GMT",
"version": "v1"
}
] |
2023-06-22
|
[
[
"Wu",
"Xundong",
""
],
[
"Zhao",
"Pengfei",
""
],
[
"Yu",
"Zilin",
""
],
[
"Ma",
"Lei",
""
],
[
"Yip",
"Ka-Wa",
""
],
[
"Tang",
"Huajin",
""
],
[
"Pan",
"Gang",
""
],
[
"Huang",
"Tiejun",
""
]
] |
Our comprehension of biological neuronal networks has profoundly influenced the evolution of artificial neural networks (ANNs). However, the neurons employed in ANNs exhibit remarkable deviations from their biological analogs, mainly due to the absence of complex dendritic trees encompassing local nonlinearity. Despite such disparities, previous investigations have demonstrated that point neurons can functionally substitute dendritic neurons in executing computational tasks. In this study, we scrutinized the importance of nonlinear dendrites within neural networks. By employing machine-learning methodologies, we assessed the impact of dendritic structure nonlinearity on neural network performance. Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance while keeping signal communication costs effectively restrained. This investigation offers pivotal insights that hold considerable implications for the development of future neural network accelerators.
|
2305.16998
|
Min Zhang
|
Zhiyi Xue, Si Liu, Zhaodi Zhang, Yiting Wu, Min Zhang
|
A Tale of Two Approximations: Tightening Over-Approximation for DNN
Robustness Verification via Under-Approximation
|
16 pages, 11 figures, 5 tables, ISSTA 2023. arXiv admin note:
substantial text overlap with arXiv:2211.11186
| null |
10.1145/3597926.3598127
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The robustness of deep neural networks (DNNs) is crucial to the hosting
system's reliability and security. Formal verification has been demonstrated to
be effective in providing provable robustness guarantees. To improve its
scalability, over-approximating the non-linear activation functions in DNNs by
linear constraints has been widely adopted, which transforms the verification
problem into an efficiently solvable linear programming problem. Many efforts
have been dedicated to defining the so-called tightest approximations to reduce
overestimation imposed by over-approximation. In this paper, we study existing
approaches and identify a dominant factor in defining tight approximation,
namely the approximation domain of the activation function. We find that
tight approximations defined on approximation domains may not be as tight as
the ones on their actual domains, yet existing approaches all rely only on
approximation domains. Based on this observation, we propose a novel
dual-approximation approach to tighten over-approximations, leveraging an
activation function's underestimated domain to define tight approximation
bounds. We implement our approach with two complementary algorithms based
respectively on Monte Carlo simulation and gradient descent into a tool called
DualApp. We assess it on a comprehensive benchmark of DNNs with different
architectures. Our experimental results show that DualApp significantly
outperforms the state-of-the-art approaches with 100% - 1000% improvement on
the verified robustness ratio and 10.64% on average (up to 66.53%) on the
certified lower bound.
|
[
{
"created": "Fri, 26 May 2023 14:58:30 GMT",
"version": "v1"
}
] |
2023-05-29
|
[
[
"Xue",
"Zhiyi",
""
],
[
"Liu",
"Si",
""
],
[
"Zhang",
"Zhaodi",
""
],
[
"Wu",
"Yiting",
""
],
[
"Zhang",
"Min",
""
]
] |
The robustness of deep neural networks (DNNs) is crucial to the hosting system's reliability and security. Formal verification has been demonstrated to be effective in providing provable robustness guarantees. To improve its scalability, over-approximating the non-linear activation functions in DNNs by linear constraints has been widely adopted, which transforms the verification problem into an efficiently solvable linear programming problem. Many efforts have been dedicated to defining the so-called tightest approximations to reduce overestimation imposed by over-approximation. In this paper, we study existing approaches and identify a dominant factor in defining tight approximation, namely the approximation domain of the activation function. We find that tight approximations defined on approximation domains may not be as tight as the ones on their actual domains, yet existing approaches all rely only on approximation domains. Based on this observation, we propose a novel dual-approximation approach to tighten over-approximations, leveraging an activation function's underestimated domain to define tight approximation bounds. We implement our approach with two complementary algorithms based respectively on Monte Carlo simulation and gradient descent into a tool called DualApp. We assess it on a comprehensive benchmark of DNNs with different architectures. Our experimental results show that DualApp significantly outperforms the state-of-the-art approaches with 100% - 1000% improvement on the verified robustness ratio and 10.64% on average (up to 66.53%) on the certified lower bound.
|
2010.08542
|
Adrian de Wynter
|
Adrian de Wynter
|
Mischief: A Simple Black-Box Attack Against Transformer Architectures
|
Technical report
| null | null | null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Mischief, a simple and lightweight method to produce a class of
human-readable, realistic adversarial examples for language models. We perform
exhaustive experimentation with our algorithm on four transformer-based
architectures, across a variety of downstream tasks, as well as under varying
concentrations of said examples. Our findings show that the presence of
Mischief-generated adversarial samples in the test set significantly degrades
(by up to $20\%$) the performance of these models with respect to their
reported baselines. Nonetheless, we also demonstrate that, by including similar
examples in the training set, it is possible to restore the baseline scores on
the adversarial test set. Moreover, for certain tasks, the models trained with
the Mischief set show a modest increase in performance with respect to their
original, non-adversarial baseline.
|
[
{
"created": "Fri, 16 Oct 2020 17:52:06 GMT",
"version": "v1"
}
] |
2020-10-19
|
[
[
"de Wynter",
"Adrian",
""
]
] |
We introduce Mischief, a simple and lightweight method to produce a class of human-readable, realistic adversarial examples for language models. We perform exhaustive experimentation with our algorithm on four transformer-based architectures, across a variety of downstream tasks, as well as under varying concentrations of said examples. Our findings show that the presence of Mischief-generated adversarial samples in the test set significantly degrades (by up to $20\%$) the performance of these models with respect to their reported baselines. Nonetheless, we also demonstrate that, by including similar examples in the training set, it is possible to restore the baseline scores on the adversarial test set. Moreover, for certain tasks, the models trained with the Mischief set show a modest increase in performance with respect to their original, non-adversarial baseline.
|
2106.10124
|
Oriel Frigo
|
Oriel Frigo, R\'emy Brossard, David Dehaene
|
Graph Context Encoder: Graph Feature Inpainting for Graph Generation and
Self-supervised Pretraining
|
13 pages, 4 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose the Graph Context Encoder (GCE), a simple but efficient approach
for graph representation learning based on graph feature masking and
reconstruction.
GCE models are trained to efficiently reconstruct input graphs similarly to a
graph autoencoder where node and edge labels are masked. In particular, our
model is also allowed to change graph structures by masking and reconstructing
graphs augmented by random pseudo-edges.
We show that GCE can be used for novel graph generation, with applications
for molecule generation. Used as a pretraining method, we also show that GCE
improves baseline performances in supervised classification tasks tested on
multiple standard benchmark graph datasets.
|
[
{
"created": "Fri, 18 Jun 2021 13:28:11 GMT",
"version": "v1"
}
] |
2021-06-21
|
[
[
"Frigo",
"Oriel",
""
],
[
"Brossard",
"Rémy",
""
],
[
"Dehaene",
"David",
""
]
] |
We propose the Graph Context Encoder (GCE), a simple but efficient approach for graph representation learning based on graph feature masking and reconstruction. GCE models are trained to efficiently reconstruct input graphs similarly to a graph autoencoder where node and edge labels are masked. In particular, our model is also allowed to change graph structures by masking and reconstructing graphs augmented by random pseudo-edges. We show that GCE can be used for novel graph generation, with applications for molecule generation. Used as a pretraining method, we also show that GCE improves baseline performances in supervised classification tasks tested on multiple standard benchmark graph datasets.
|
2309.14720
|
Chen Yu
|
Yu Chen, Gong Chen, Jing Ye, Chenglong Fu, Bin Liang, and Xiang Li
|
Learning to Assist Different Wearers in Multitasks: Efficient and
Individualized Human-In-the-Loop Adaption Framework for Exoskeleton Robots
|
16 pages journal article
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the typical purposes of using lower-limb exoskeleton robots is to
provide assistance to the wearer by supporting their weight and augmenting
their physical capabilities according to a given task and human motion
intentions. The generalizability of robots across different wearers in multiple
tasks is important to ensure that the robot can provide correct and effective
assistance in actual implementation. However, most lower-limb exoskeleton
robots exhibit only limited generalizability. Therefore, this paper proposes a
human-in-the-loop learning and adaptation framework for exoskeleton robots to
improve their performance in various tasks and for different wearers. To suit
different wearers, an individualized walking trajectory is generated online
using dynamic movement primitives and Bayes optimization. To accommodate
various tasks, a task translator is constructed using a neural network to
generalize a trajectory to more complex scenarios. These generalization
techniques are integrated into a unified variable impedance model, which
regulates the exoskeleton to provide assistance while ensuring safety. In
addition, an anomaly detection network is developed to quantitatively evaluate
the wearer's comfort, which is considered in the trajectory learning procedure
and contributes to the relaxation of conflicts in impedance control. The
proposed framework is easy to implement, because it requires proprioceptive
sensors only to perform and deploy data-efficient learning schemes. This makes
the exoskeleton practical for deployment in complex scenarios, accommodating
different walking patterns, habits, tasks, and conflicts. Experiments and
comparative studies on a lower-limb exoskeleton robot are performed to
demonstrate the effectiveness of the proposed framework.
|
[
{
"created": "Tue, 26 Sep 2023 07:26:48 GMT",
"version": "v1"
}
] |
2023-09-27
|
[
[
"Chen",
"Yu",
""
],
[
"Chen",
"Gong",
""
],
[
"Ye",
"Jing",
""
],
[
"Fu",
"Chenglong",
""
],
[
"Liang",
"Bin",
""
],
[
"Li",
"Xiang",
""
]
] |
One of the typical purposes of using lower-limb exoskeleton robots is to provide assistance to the wearer by supporting their weight and augmenting their physical capabilities according to a given task and human motion intentions. The generalizability of robots across different wearers in multiple tasks is important to ensure that the robot can provide correct and effective assistance in actual implementation. However, most lower-limb exoskeleton robots exhibit only limited generalizability. Therefore, this paper proposes a human-in-the-loop learning and adaptation framework for exoskeleton robots to improve their performance in various tasks and for different wearers. To suit different wearers, an individualized walking trajectory is generated online using dynamic movement primitives and Bayes optimization. To accommodate various tasks, a task translator is constructed using a neural network to generalize a trajectory to more complex scenarios. These generalization techniques are integrated into a unified variable impedance model, which regulates the exoskeleton to provide assistance while ensuring safety. In addition, an anomaly detection network is developed to quantitatively evaluate the wearer's comfort, which is considered in the trajectory learning procedure and contributes to the relaxation of conflicts in impedance control. The proposed framework is easy to implement, because it requires proprioceptive sensors only to perform and deploy data-efficient learning schemes. This makes the exoskeleton practical for deployment in complex scenarios, accommodating different walking patterns, habits, tasks, and conflicts. Experiments and comparative studies on a lower-limb exoskeleton robot are performed to demonstrate the effectiveness of the proposed framework.
|
2010.06050
|
Juner Zhu
|
Wei Li and Martin Z. Bazant and Juner Zhu
|
A Physics-Guided Neural Network Framework for Elastic Plates: Comparison
of Governing Equations-Based and Energy-Based Approaches
| null | null |
10.1016/j.cma.2021.113933
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the obstacles hindering the scaling-up of the initial successes of
machine learning in practical engineering applications is the dependence of the
accuracy on the size of the database that "drives" the algorithms.
Incorporating the already-known physical laws into the training process can
significantly reduce the size of the required database. In this study, we
establish a neural network-based computational framework to characterize the
finite deformation of elastic plates, which in classic theories is described by
the F\"oppl--von K\'arm\'an (FvK) equations with a set of boundary conditions
(BCs). A neural network is constructed by taking the spatial coordinates as the
input and the displacement field as the output to approximate the exact
solution of the FvK equations. The physical information (PDEs, BCs, and
potential energies) is then incorporated into the loss function, and a pseudo
dataset is sampled without knowing the exact solution to finally train the
neural network. The prediction accuracy of the modeling framework is carefully
examined by applying it to four different loading cases: in-plane tension with
non-uniformly distributed stretching forces, in-plane central-hole tension,
out-of-plane deflection, and buckling under compression. Three ways of
formulating the loss function are compared: 1) purely data-driven, 2)
PDE-based, and 3) energy-based. Through the comparison with the finite element
simulations, it is found that all the three approaches can characterize the
elastic deformation of plates with a satisfactory accuracy if trained properly.
Compared with incorporating the PDEs and BCs in the loss, using the total
potential energy shows a certain advantage in terms of the simplicity of
hyperparameter tuning and the computational efficiency.
|
[
{
"created": "Mon, 12 Oct 2020 21:51:35 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2020 00:37:33 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Jan 2021 20:45:48 GMT",
"version": "v3"
}
] |
2021-06-09
|
[
[
"Li",
"Wei",
""
],
[
"Bazant",
"Martin Z.",
""
],
[
"Zhu",
"Juner",
""
]
] |
One of the obstacles hindering the scaling-up of the initial successes of machine learning in practical engineering applications is the dependence of the accuracy on the size of the database that "drives" the algorithms. Incorporating the already-known physical laws into the training process can significantly reduce the size of the required database. In this study, we establish a neural network-based computational framework to characterize the finite deformation of elastic plates, which in classic theories is described by the F\"oppl--von K\'arm\'an (FvK) equations with a set of boundary conditions (BCs). A neural network is constructed by taking the spatial coordinates as the input and the displacement field as the output to approximate the exact solution of the FvK equations. The physical information (PDEs, BCs, and potential energies) is then incorporated into the loss function, and a pseudo dataset is sampled without knowing the exact solution to finally train the neural network. The prediction accuracy of the modeling framework is carefully examined by applying it to four different loading cases: in-plane tension with non-uniformly distributed stretching forces, in-plane central-hole tension, out-of-plane deflection, and buckling under compression. Three ways of formulating the loss function are compared: 1) purely data-driven, 2) PDE-based, and 3) energy-based. Through the comparison with the finite element simulations, it is found that all the three approaches can characterize the elastic deformation of plates with a satisfactory accuracy if trained properly. Compared with incorporating the PDEs and BCs in the loss, using the total potential energy shows a certain advantage in terms of the simplicity of hyperparameter tuning and the computational efficiency.
|
2104.01569
|
Endri Kacupaj
|
Endri Kacupaj, Joan Plepi, Kuldeep Singh, Harsh Thakkar, Jens Lehmann,
Maria Maleshkova
|
Conversational Question Answering over Knowledge Graphs with Transformer
and Graph Attention Networks
|
16th conference of the European Chapter of the Association for
Computational Linguistics (EACL 2021)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the task of (complex) conversational question answering
over a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic
parSing with trAnsformer and Graph atteNtion nEtworks). It is the first
approach that employs a transformer architecture extended with Graph
Attention Networks for multi-task neural semantic parsing. LASAGNE uses a
transformer model for generating the base logical forms, while the Graph
Attention model is used to exploit correlations between (entity) types and
predicates to produce node representations. LASAGNE also includes a novel
entity recognition module which detects, links, and ranks all relevant entities
in the question context. We evaluate LASAGNE on a standard dataset for complex
sequential question answering, on which it outperforms existing baseline
averages on all question types. Specifically, we show that LASAGNE improves the
F1-score on eight out of ten question types; in some cases, the increase in
F1-score is more than 20% compared to the state of the art.
|
[
{
"created": "Sun, 4 Apr 2021 09:21:50 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Jun 2021 11:41:05 GMT",
"version": "v2"
}
] |
2021-06-25
|
[
[
"Kacupaj",
"Endri",
""
],
[
"Plepi",
"Joan",
""
],
[
"Singh",
"Kuldeep",
""
],
[
"Thakkar",
"Harsh",
""
],
[
"Lehmann",
"Jens",
""
],
[
"Maleshkova",
"Maria",
""
]
] |
This paper addresses the task of (complex) conversational question answering over a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic parSing with trAnsformer and Graph atteNtion nEtworks). It is the first approach that employs a transformer architecture extended with Graph Attention Networks for multi-task neural semantic parsing. LASAGNE uses a transformer model for generating the base logical forms, while the Graph Attention model is used to exploit correlations between (entity) types and predicates to produce node representations. LASAGNE also includes a novel entity recognition module which detects, links, and ranks all relevant entities in the question context. We evaluate LASAGNE on a standard dataset for complex sequential question answering, on which it outperforms existing baseline averages on all question types. Specifically, we show that LASAGNE improves the F1-score on eight out of ten question types; in some cases, the increase in F1-score is more than 20% compared to the state of the art.
|
1812.05448
|
Mengwei Xu
|
Mengwei Xu, Jiawei Liu, Yuanqiang Liu, Felix Xiaozhu Lin, Yunxin Liu,
and Xuanzhe Liu
|
A First Look at Deep Learning Apps on Smartphones
| null | null | null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We are at the dawn of the deep learning explosion for smartphones. To bridge
the gap between research and practice, we present the first empirical study on
the 16,500 most popular Android apps, demystifying how smartphone apps exploit
deep learning in the wild. To this end, we build a new static tool that
dissects apps and analyzes their deep learning functions. Our study answers
three questions: which apps are the early adopters of deep learning, what do
they use deep learning for, and what do their deep learning models look like.
Our study has strong implications for app developers, smartphone vendors, and
deep learning R\&D. On one hand, our findings paint a promising picture of deep
learning for smartphones, showing the prosperity of mobile deep learning
frameworks as well as the prosperity of apps building their cores atop deep
learning. On the other hand, our findings urge optimizations of deep learning
models deployed on smartphones, the protection of these models, and the
validation of research ideas on these models.
|
[
{
"created": "Thu, 8 Nov 2018 07:59:23 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jan 2019 04:28:12 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Mar 2020 03:50:46 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Jan 2021 01:29:33 GMT",
"version": "v4"
}
] |
2021-01-14
|
[
[
"Xu",
"Mengwei",
""
],
[
"Liu",
"Jiawei",
""
],
[
"Liu",
"Yuanqiang",
""
],
[
"Lin",
"Felix Xiaozhu",
""
],
[
"Liu",
"Yunxin",
""
],
[
"Liu",
"Xuanzhe",
""
]
] |
We are at the dawn of the deep learning explosion for smartphones. To bridge the gap between research and practice, we present the first empirical study on the 16,500 most popular Android apps, demystifying how smartphone apps exploit deep learning in the wild. To this end, we build a new static tool that dissects apps and analyzes their deep learning functions. Our study answers three questions: which apps are the early adopters of deep learning, what do they use deep learning for, and what do their deep learning models look like. Our study has strong implications for app developers, smartphone vendors, and deep learning R\&D. On one hand, our findings paint a promising picture of deep learning for smartphones, showing the prosperity of mobile deep learning frameworks as well as the prosperity of apps building their cores atop deep learning. On the other hand, our findings urge optimizations of deep learning models deployed on smartphones, the protection of these models, and the validation of research ideas on these models.
|
1811.05115
|
Kshitij Gajjar
|
Kshitij Gajjar, Jaikumar Radhakrishnan
|
Parametric Shortest Paths in Planar Graphs
|
39 pages, 4 figures
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct a family of planar graphs $\{G_n\}_{n\geq 4}$, where $G_n$ has
$n$ vertices including a source vertex $s$ and a sink vertex $t$, and edge
weights that change linearly with a parameter $\lambda$ such that, as $\lambda$
varies in $(-\infty,+\infty)$, the piece-wise linear cost of the shortest path
from $s$ to $t$ has $n^{\Omega(\log n)}$ pieces. This shows that lower bounds
obtained earlier by Carstensen (1983) and Mulmuley \& Shah (2001) for general
graphs also hold for planar graphs, thereby refuting a conjecture of Nikolova
(2009). Gusfield (1980) and Dean (2009) showed that the number of pieces for
every $n$-vertex graph with linear edge weights is $n^{\log n + O(1)}$. We
generalize this result in two ways. (i) If the edge weights vary as a
polynomial of degree at most $d$, then the number of pieces is $n^{\log n +
(\alpha(n)+O(1))^d}$, where $\alpha(n)$ is the slowly growing inverse Ackermann
function. (ii) If the edge weights are linear forms of three parameters, then
the number of pieces, appropriately defined for $\mathbb{R}^3$, is $n^{(\log
n)^2+O(\log n)}$.
|
[
{
"created": "Tue, 13 Nov 2018 05:35:47 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Feb 2019 05:49:08 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jun 2019 07:56:19 GMT",
"version": "v3"
}
] |
2019-06-20
|
[
[
"Gajjar",
"Kshitij",
""
],
[
"Radhakrishnan",
"Jaikumar",
""
]
] |
We construct a family of planar graphs $\{G_n\}_{n\geq 4}$, where $G_n$ has $n$ vertices including a source vertex $s$ and a sink vertex $t$, and edge weights that change linearly with a parameter $\lambda$ such that, as $\lambda$ varies in $(-\infty,+\infty)$, the piece-wise linear cost of the shortest path from $s$ to $t$ has $n^{\Omega(\log n)}$ pieces. This shows that lower bounds obtained earlier by Carstensen (1983) and Mulmuley \& Shah (2001) for general graphs also hold for planar graphs, thereby refuting a conjecture of Nikolova (2009). Gusfield (1980) and Dean (2009) showed that the number of pieces for every $n$-vertex graph with linear edge weights is $n^{\log n + O(1)}$. We generalize this result in two ways. (i) If the edge weights vary as a polynomial of degree at most $d$, then the number of pieces is $n^{\log n + (\alpha(n)+O(1))^d}$, where $\alpha(n)$ is the slowly growing inverse Ackermann function. (ii) If the edge weights are linear forms of three parameters, then the number of pieces, appropriately defined for $\mathbb{R}^3$, is $n^{(\log n)^2+O(\log n)}$.
|
1707.01810
|
Varun Ojha
|
Varun Kumar Ojha, Ajith Abraham, Vaclav Snasel
|
Simultaneous Optimization of Neural Network Weights and Active Nodes
using Metaheuristics
| null | null |
10.1109/HIS.2014.7086207
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optimization of a neural network (NN) is significantly influenced by the
transfer function used in its active nodes. It has been observed that
homogeneity in the activation nodes does not provide the best solution.
Therefore, customizable transfer functions, whose underlying parameters are
subjected to optimization, were used to provide heterogeneity to the NN. For
the experiments, a meta-heuristic framework using a combined genotype
representation of connection weights and transfer function parameters was used.
The performance of adaptive Logistic, Tangent-hyperbolic, Gaussian and Beta
functions was analyzed. In the present work, concise comparisons between the
different transfer functions and between the NN optimization algorithms are
presented. A comprehensive analysis of the results obtained over the benchmark
datasets suggests that the Artificial Bee Colony with an adaptive transfer
function provides the best classification accuracy, outperforming particle
swarm optimization and differential evolution.
|
[
{
"created": "Thu, 6 Jul 2017 14:20:50 GMT",
"version": "v1"
}
] |
2017-07-07
|
[
[
"Ojha",
"Varun Kumar",
""
],
[
"Abraham",
"Ajith",
""
],
[
"Snasel",
"Vaclav",
""
]
] |
Optimization of a neural network (NN) is significantly influenced by the transfer function used in its active nodes. It has been observed that homogeneity in the activation nodes does not provide the best solution. Therefore, customizable transfer functions, whose underlying parameters are subjected to optimization, were used to provide heterogeneity to the NN. For the experiments, a meta-heuristic framework using a combined genotype representation of connection weights and transfer function parameters was used. The performance of adaptive Logistic, Tangent-hyperbolic, Gaussian and Beta functions was analyzed. In the present work, concise comparisons between the different transfer functions and between the NN optimization algorithms are presented. A comprehensive analysis of the results obtained over the benchmark datasets suggests that the Artificial Bee Colony with an adaptive transfer function provides the best classification accuracy, outperforming particle swarm optimization and differential evolution.
|
2402.05156
|
Petra Heck
|
Petra Heck
|
What About the Data? A Mapping Study on Data Engineering for AI Systems
|
Preprint, accepted for CAIN24
| null |
10.1145/3644815.3644954
| null |
cs.DL cs.AI cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
AI systems cannot exist without data. Now that AI models (data science and
AI) have matured and are readily available to apply in practice, most
organizations struggle with the data infrastructure to do so. There is a
growing need for data engineers who know how to prepare data for AI systems or
who can set up enterprise-wide data architectures for analytical projects. But
until now, the data engineering part of AI engineering has not received
much attention, in favor of discussing the modeling part. In this paper we aim
to change this by performing a mapping study on data engineering for AI systems,
i.e., AI data engineering. We found 25 relevant papers between January 2019 and
June 2023, explaining AI data engineering activities. We identify which life
cycle phases are covered, which technical solutions or architectures are
proposed, and which lessons learned are presented. We end with an overall
discussion of the papers, including implications for practitioners and researchers.
This paper creates an overview of the body of knowledge on data engineering for
AI. This overview is useful for practitioners to identify solutions and best
practices, as well as for researchers to identify gaps.
|
[
{
"created": "Wed, 7 Feb 2024 16:31:58 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Heck",
"Petra",
""
]
] |
AI systems cannot exist without data. Now that AI models (data science and AI) have matured and are readily available to apply in practice, most organizations struggle with the data infrastructure to do so. There is a growing need for data engineers who know how to prepare data for AI systems or who can set up enterprise-wide data architectures for analytical projects. But until now, the data engineering part of AI engineering has not received much attention, in favor of discussing the modeling part. In this paper we aim to change this by performing a mapping study on data engineering for AI systems, i.e., AI data engineering. We found 25 relevant papers between January 2019 and June 2023, explaining AI data engineering activities. We identify which life cycle phases are covered, which technical solutions or architectures are proposed, and which lessons learned are presented. We end with an overall discussion of the papers, including implications for practitioners and researchers. This paper creates an overview of the body of knowledge on data engineering for AI. This overview is useful for practitioners to identify solutions and best practices, as well as for researchers to identify gaps.
|
2210.10180
|
Mrigank Rochan
|
Mrigank Rochan, Xingxin Chen, Alaap Grandhi, Eduardo R. Corral-Soto,
Bingbing Liu
|
Domain Adaptation in 3D Object Detection with Gradual Batch Alternation
Training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of domain adaptation in LiDAR-based 3D object
detection. Towards this, we propose a simple yet effective training strategy
called Gradual Batch Alternation that can adapt from a large labeled source
domain to an insufficiently labeled target domain. The idea is to start
training with batches of samples from the source and target domains in an
alternating fashion, and then gradually reduce the amount of source domain
data as training progresses. This way the model slowly shifts towards the
target domain and eventually adapts better to it. Domain adaptation
experiments for 3D object detection on four benchmark autonomous driving
datasets, namely ONCE, PandaSet, Waymo, and nuScenes, demonstrate significant
performance gains over prior art and strong baselines.
|
[
{
"created": "Tue, 18 Oct 2022 22:03:37 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Aug 2023 01:29:51 GMT",
"version": "v2"
}
] |
2023-08-08
|
[
[
"Rochan",
"Mrigank",
""
],
[
"Chen",
"Xingxin",
""
],
[
"Grandhi",
"Alaap",
""
],
[
"Corral-Soto",
"Eduardo R.",
""
],
[
"Liu",
"Bingbing",
""
]
] |
We consider the problem of domain adaptation in LiDAR-based 3D object detection. Towards this, we propose a simple yet effective training strategy called Gradual Batch Alternation that can adapt from a large labeled source domain to an insufficiently labeled target domain. The idea is to start training with batches of samples from the source and target domains in an alternating fashion, and then gradually reduce the amount of source domain data as training progresses. This way the model slowly shifts towards the target domain and eventually adapts better to it. Domain adaptation experiments for 3D object detection on four benchmark autonomous driving datasets, namely ONCE, PandaSet, Waymo, and nuScenes, demonstrate significant performance gains over prior art and strong baselines.
|
2403.02902
|
Chengguang Gan
|
Chengguang Gan, Xuzheng He, Qinghao Zhang, Tatsunori Mori
|
Demonstrating Mutual Reinforcement Effect through Information Flow
|
The co-authors have requested that the manuscript be withdrawn. And
the paper has major flaws
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Mutual Reinforcement Effect (MRE) investigates the synergistic
relationship between word-level and text-level classifications in text
classification tasks. It posits that the performance of both classification
levels can be mutually enhanced. However, this mechanism has not been
adequately demonstrated or explained in prior research. To address this gap, we
employ information flow analysis to observe and substantiate the MRE theory.
Our experiments on six MRE hybrid datasets revealed the presence of MRE in the
model and its impact. Additionally, we conducted fine-tuning experiments, whose
results were consistent with those of the information flow experiments. The
convergence of findings from both experiments corroborates the existence of
MRE. Furthermore, we extended the application of MRE to prompt learning,
utilizing word-level information as a verbalizer to bolster the model's
prediction of text-level classification labels. In our final experiment, the
F1-score significantly surpassed the baseline in five out of six datasets,
further validating the notion that word-level information enhances the language
model's comprehension of the text as a whole.
|
[
{
"created": "Tue, 5 Mar 2024 12:11:32 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2024 15:47:38 GMT",
"version": "v2"
}
] |
2024-06-06
|
[
[
"Gan",
"Chengguang",
""
],
[
"He",
"Xuzheng",
""
],
[
"Zhang",
"Qinghao",
""
],
[
"Mori",
"Tatsunori",
""
]
] |
The Mutual Reinforcement Effect (MRE) investigates the synergistic relationship between word-level and text-level classifications in text classification tasks. It posits that the performance of both classification levels can be mutually enhanced. However, this mechanism has not been adequately demonstrated or explained in prior research. To address this gap, we employ information flow analysis to observe and substantiate the MRE theory. Our experiments on six MRE hybrid datasets revealed the presence of MRE in the model and its impact. Additionally, we conducted fine-tuning experiments, whose results were consistent with those of the information flow experiments. The convergence of findings from both experiments corroborates the existence of MRE. Furthermore, we extended the application of MRE to prompt learning, utilizing word-level information as a verbalizer to bolster the model's prediction of text-level classification labels. In our final experiment, the F1-score significantly surpassed the baseline in five out of six datasets, further validating the notion that word-level information enhances the language model's comprehension of the text as a whole.
|
2005.07111
|
Madhumita Sushil
|
Madhumita Sushil and Simon \v{S}uster and Walter Daelemans
|
Distilling neural networks into skipgram-level decision lists
| null | null | null | null |
cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several previous studies on explanation for recurrent neural networks focus
on approaches that find the most important input segments for a network as its
explanations. In that case, the manner in which these input segments combine
with each other to form an explanatory pattern remains unknown. To overcome
this, some previous work tries to find patterns (called rules) in the data that
explain neural outputs. However, their explanations are often insensitive to
model parameters, which limits the scalability of text explanations. To
overcome these limitations, we propose a pipeline to explain RNNs by means of
decision lists (also called rules) over skipgrams. For evaluation of
explanations, we create a synthetic sepsis-identification dataset, as well as
apply our technique on additional clinical and sentiment analysis datasets. We
find that our technique persistently achieves high explanation fidelity and
qualitatively interpretable rules.
|
[
{
"created": "Thu, 14 May 2020 16:25:42 GMT",
"version": "v1"
},
{
"created": "Mon, 18 May 2020 08:43:42 GMT",
"version": "v2"
}
] |
2020-05-19
|
[
[
"Sushil",
"Madhumita",
""
],
[
"Šuster",
"Simon",
""
],
[
"Daelemans",
"Walter",
""
]
] |
Several previous studies on explanation for recurrent neural networks focus on approaches that find the most important input segments for a network as its explanations. In that case, the manner in which these input segments combine with each other to form an explanatory pattern remains unknown. To overcome this, some previous work tries to find patterns (called rules) in the data that explain neural outputs. However, their explanations are often insensitive to model parameters, which limits the scalability of text explanations. To overcome these limitations, we propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams. For evaluation of explanations, we create a synthetic sepsis-identification dataset, as well as apply our technique on additional clinical and sentiment analysis datasets. We find that our technique persistently achieves high explanation fidelity and qualitatively interpretable rules.
|
2101.09810
|
Bilal Ghanem
|
Bilal Ghanem, Simone Paolo Ponzetto, Paolo Rosso, Francisco Rangel
|
FakeFlow: Fake News Detection by Modeling the Flow of Affective
Information
|
9 pages, 6 figures, EACL-2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Fake news articles often stir the readers' attention by means of emotional
appeals that arouse their feelings. Unlike in short news texts, authors of
longer articles can exploit such affective factors to manipulate readers by
adding exaggerations or fabricating events, in order to affect the readers'
emotions. To capture this, we propose in this paper to model the flow of
affective information in fake news articles using a neural architecture. The
proposed model, FakeFlow, learns this flow by combining topic and affective
information extracted from text. We evaluate the model's performance with
several experiments on four real-world datasets. The results show that FakeFlow
achieves superior results when compared against state-of-the-art methods, thus
confirming the importance of capturing the flow of the affective information in
news articles.
|
[
{
"created": "Sun, 24 Jan 2021 21:55:28 GMT",
"version": "v1"
}
] |
2021-01-26
|
[
[
"Ghanem",
"Bilal",
""
],
[
"Ponzetto",
"Simone Paolo",
""
],
[
"Rosso",
"Paolo",
""
],
[
"Rangel",
"Francisco",
""
]
] |
Fake news articles often stir the readers' attention by means of emotional appeals that arouse their feelings. Unlike in short news texts, authors of longer articles can exploit such affective factors to manipulate readers by adding exaggerations or fabricating events, in order to affect the readers' emotions. To capture this, we propose in this paper to model the flow of affective information in fake news articles using a neural architecture. The proposed model, FakeFlow, learns this flow by combining topic and affective information extracted from text. We evaluate the model's performance with several experiments on four real-world datasets. The results show that FakeFlow achieves superior results when compared against state-of-the-art methods, thus confirming the importance of capturing the flow of the affective information in news articles.
|
2407.05858
|
Daliang Xu
|
Daliang Xu, Hao Zhang, Liming Yang, Ruiqi Liu, Gang Huang, Mengwei Xu,
Xuanzhe Liu
|
Empowering 1000 tokens/second on-device LLM prefilling with mllm-NPU
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
On-device large language models (LLMs) are catalyzing novel mobile
applications such as UI task automation and personalized email auto-reply,
without giving away users' private data. However, on-device LLMs still suffer
from unacceptably long inference latency, especially the time to first token
(prefill stage) due to the need of long context for accurate, personalized
content generation, as well as the lack of parallel computing capacity of
mobile CPU/GPU.
To enable practical on-device LLM, we present mllm-NPU, the first-of-its-kind
LLM inference system that efficiently leverages on-device Neural Processing
Unit (NPU) offloading. Essentially, mllm-NPU is an algorithm-system co-design
that tackles a few semantic gaps between the LLM architecture and contemporary
NPU design. Specifically, it re-constructs the prompt and model in three
levels: (1) At prompt level, it divides variable-length prompts into multiple
fixed-sized chunks while maintaining data dependencies; (2) At tensor level, it
identifies and extracts significant outliers to run on the CPU/GPU in parallel
with minimal overhead; (3) At block level, it schedules Transformer blocks in
an out-of-order manner to the CPU/GPU and NPU based on their hardware affinity
and sensitivity to accuracy. Compared to competitive baselines, mllm-NPU
achieves 22.4x faster prefill speed and 30.7x energy savings on average, and up
to 32.8x speedup in an end-to-end real-world application. For the first time,
mllm-NPU achieves more than 1,000 tokens/sec prefilling for a billion-sized
model (Qwen1.5-1.8B), paving the way towards practical on-device LLM.
|
[
{
"created": "Mon, 8 Jul 2024 12:20:45 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Xu",
"Daliang",
""
],
[
"Zhang",
"Hao",
""
],
[
"Yang",
"Liming",
""
],
[
"Liu",
"Ruiqi",
""
],
[
"Huang",
"Gang",
""
],
[
"Xu",
"Mengwei",
""
],
[
"Liu",
"Xuanzhe",
""
]
] |
On-device large language models (LLMs) are catalyzing novel mobile applications such as UI task automation and personalized email auto-reply, without giving away users' private data. However, on-device LLMs still suffer from unacceptably long inference latency, especially the time to first token (prefill stage) due to the need of long context for accurate, personalized content generation, as well as the lack of parallel computing capacity of mobile CPU/GPU. To enable practical on-device LLM, we present mllm-NPU, the first-of-its-kind LLM inference system that efficiently leverages on-device Neural Processing Unit (NPU) offloading. Essentially, mllm-NPU is an algorithm-system co-design that tackles a few semantic gaps between the LLM architecture and contemporary NPU design. Specifically, it re-constructs the prompt and model in three levels: (1) At prompt level, it divides variable-length prompts into multiple fixed-sized chunks while maintaining data dependencies; (2) At tensor level, it identifies and extracts significant outliers to run on the CPU/GPU in parallel with minimal overhead; (3) At block level, it schedules Transformer blocks in an out-of-order manner to the CPU/GPU and NPU based on their hardware affinity and sensitivity to accuracy. Compared to competitive baselines, mllm-NPU achieves 22.4x faster prefill speed and 30.7x energy savings on average, and up to 32.8x speedup in an end-to-end real-world application. For the first time, mllm-NPU achieves more than 1,000 tokens/sec prefilling for a billion-sized model (Qwen1.5-1.8B), paving the way towards practical on-device LLM.
|
2212.05028
|
Tirth Patel
|
Tirth Patel, Niyatiben Salot, Vrusha Parikh
|
A systematic literature review on Security of Unmanned Aerial Vehicle
Systems
|
10 Pages, 4 Figures
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned aerial vehicles (UAVs) are becoming more common, and their
operational range is expanding tremendously, making an inquiry into their
security essential. This study conducts a thorough assessment of the literature
to determine the most common cyberattacks against UAVs and the effects they
have in civilian applications. The cyber threats discussed in this paper are
categorized by the STRIDE attack model, the difficulty they present, and the
tools appropriate for the attack. Spoofing and denial-of-service attacks are
the most prevalent types of UAV cyberattacks and are the most effective. No
attack style demands hard-to-obtain equipment, indicating that the security
environment currently necessitates improvements to the use of UAVs in
civilian applications.
|
[
{
"created": "Fri, 9 Dec 2022 18:21:18 GMT",
"version": "v1"
}
] |
2022-12-12
|
[
[
"Patel",
"Tirth",
""
],
[
"Salot",
"Niyatiben",
""
],
[
"Parikh",
"Vrusha",
""
]
] |
Unmanned aerial vehicles (UAVs) are becoming more common, and their operational range is expanding tremendously, making an inquiry into their security essential. This study conducts a thorough assessment of the literature to determine the most common cyberattacks against UAVs and the effects they have in civilian applications. The cyber threats discussed in this paper are categorized by the STRIDE attack model, the difficulty they present, and the tools appropriate for the attack. Spoofing and denial-of-service attacks are the most prevalent types of UAV cyberattacks and are the most effective. No attack style demands hard-to-obtain equipment, indicating that the security environment currently necessitates improvements to the use of UAVs in civilian applications.
|
2112.07867
|
Aman Madaan
|
Niket Tandon, Aman Madaan, Peter Clark, Keisuke Sakaguchi, Yiming Yang
|
Interscript: A dataset for interactive learning of scripts through error
feedback
|
AAAI'22-Workshop on Interactive Machine Learning
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
How can an end-user provide feedback if a deployed structured prediction
model generates inconsistent output, ignoring the structural complexity of
human language? This is an emerging topic with recent progress in synthetic or
constrained settings, and the next big leap would require testing and tuning
models in real-world settings. We present a new dataset, Interscript,
containing user feedback on a deployed model that generates complex everyday
tasks. Interscript contains 8,466 data points -- the input is a possibly
erroneous script and user feedback, and the output is a modified script. We
posit two use-cases of Interscript that might significantly advance the
state-of-the-art in interactive learning. The dataset is available at:
https://github.com/allenai/interscript.
|
[
{
"created": "Wed, 15 Dec 2021 04:04:03 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Dec 2021 03:31:52 GMT",
"version": "v2"
}
] |
2021-12-17
|
[
[
"Tandon",
"Niket",
""
],
[
"Madaan",
"Aman",
""
],
[
"Clark",
"Peter",
""
],
[
"Sakaguchi",
"Keisuke",
""
],
[
"Yang",
"Yiming",
""
]
] |
How can an end-user provide feedback if a deployed structured prediction model generates inconsistent output, ignoring the structural complexity of human language? This is an emerging topic with recent progress in synthetic or constrained settings, and the next big leap would require testing and tuning models in real-world settings. We present a new dataset, Interscript, containing user feedback on a deployed model that generates complex everyday tasks. Interscript contains 8,466 data points -- the input is a possibly erroneous script and user feedback, and the output is a modified script. We posit two use-cases of Interscript that might significantly advance the state-of-the-art in interactive learning. The dataset is available at: https://github.com/allenai/interscript.
|
2103.13525
|
Jose Vega
|
Jos\'e David Vega S\'anchez, Luis Urquiza-Aguiar, Martha Cecilia
Paredes Paredes, and F. Javier L\'opez-Mart\'inez
|
Expectation-Maximization Learning for Wireless Channel Modeling of
Reconfigurable Intelligent Surfaces
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Channel modeling is a critical issue when designing or evaluating the
performance of reconfigurable intelligent surface (RIS)-assisted
communications. Inspired by the promising potential of learning-based methods
for characterizing the radio environment, we present a general approach to
model the RIS end-to-end equivalent channel using the unsupervised
expectation-maximization (EM) learning algorithm. We show that an EM-based
approximation through a simple mixture of two Nakagami-$m$ distributions
suffices to accurately approximate the equivalent channel, while allowing for
the incorporation of crucial aspects into the RIS channel model, such as
spatial channel correlation, phase-shift errors, arbitrary fading conditions,
and the coexistence of direct and RIS channels. Based on the proposed analytical
coexistence of direct and RIS channels. Based on the proposed analytical
framework, we evaluate the outage probability under different settings of RIS's
channel features and confirm the superiority of this approach compared to
recent results in the literature.
|
[
{
"created": "Wed, 24 Mar 2021 23:21:13 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Aug 2021 19:23:27 GMT",
"version": "v2"
}
] |
2021-08-12
|
[
[
"Sánchez",
"José David Vega",
""
],
[
"Urquiza-Aguiar",
"Luis",
""
],
[
"Paredes",
"Martha Cecilia Paredes",
""
],
[
"López-Martínez",
"F. Javier",
""
]
] |
Channel modeling is a critical issue when designing or evaluating the performance of reconfigurable intelligent surface (RIS)-assisted communications. Inspired by the promising potential of learning-based methods for characterizing the radio environment, we present a general approach to model the RIS end-to-end equivalent channel using the unsupervised expectation-maximization (EM) learning algorithm. We show that an EM-based approximation through a simple mixture of two Nakagami-$m$ distributions suffices to accurately approximate the equivalent channel, while allowing for the incorporation of crucial aspects into the RIS channel model, such as spatial channel correlation, phase-shift errors, arbitrary fading conditions, and the coexistence of direct and RIS channels. Based on the proposed analytical framework, we evaluate the outage probability under different settings of RIS's channel features and confirm the superiority of this approach compared to recent results in the literature.
|
2304.07042
|
Yifang Qin
|
Yifang Qin, Wei Ju, Hongjun Wu, Xiao Luo, Ming Zhang
|
Learning Graph ODE for Continuous-Time Sequential Recommendation
|
Accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE
2024)
| null |
10.1109/TKDE.2024.3349397
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential recommendation aims at understanding user preference by capturing
successive behavior correlations, which are usually represented as the item
purchasing sequences based on their past interactions. Existing efforts
generally predict the next item via modeling the sequential patterns. Despite
effectiveness, there exist two natural deficiencies: (i) user preference is
dynamic in nature, and the evolution of collaborative signals is often ignored;
and (ii) the observed interactions are often irregularly-sampled, while
existing methods model item transitions assuming uniform intervals. Thus, how
to effectively model and predict the underlying dynamics for user preference
becomes a critical research problem. To tackle the above challenges, in this
paper, we focus on continuous-time sequential recommendation and propose a
principled graph ordinary differential equation framework named GDERec.
Technically, GDERec is characterized by an autoregressive graph ordinary
differential equation consisting of two components, which are parameterized by
two tailored graph neural networks (GNNs) respectively to capture user
preference from the perspective of hybrid dynamical systems. The two customized
GNNs are trained alternately in an autoregressive manner to track the evolution
of the underlying system from irregular observations, and thus learn effective
representations of users and items beneficial to the sequential recommendation.
Extensive experiments on five benchmark datasets demonstrate the superiority of
our model over various state-of-the-art recommendation methods.
|
[
{
"created": "Fri, 14 Apr 2023 10:33:56 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Jan 2024 08:58:56 GMT",
"version": "v2"
}
] |
2024-01-23
|
[
[
"Qin",
"Yifang",
""
],
[
"Ju",
"Wei",
""
],
[
"Wu",
"Hongjun",
""
],
[
"Luo",
"Xiao",
""
],
[
"Zhang",
"Ming",
""
]
] |
Sequential recommendation aims at understanding user preference by capturing successive behavior correlations, which are usually represented as the item purchasing sequences based on their past interactions. Existing efforts generally predict the next item via modeling the sequential patterns. Despite effectiveness, there exist two natural deficiencies: (i) user preference is dynamic in nature, and the evolution of collaborative signals is often ignored; and (ii) the observed interactions are often irregularly-sampled, while existing methods model item transitions assuming uniform intervals. Thus, how to effectively model and predict the underlying dynamics for user preference becomes a critical research problem. To tackle the above challenges, in this paper, we focus on continuous-time sequential recommendation and propose a principled graph ordinary differential equation framework named GDERec. Technically, GDERec is characterized by an autoregressive graph ordinary differential equation consisting of two components, which are parameterized by two tailored graph neural networks (GNNs) respectively to capture user preference from the perspective of hybrid dynamical systems. The two customized GNNs are trained alternately in an autoregressive manner to track the evolution of the underlying system from irregular observations, and thus learn effective representations of users and items beneficial to the sequential recommendation. Extensive experiments on five benchmark datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods.
|
2012.14602
|
Oleg Vasilyev
|
Oleg Vasilyev and John Bohannon
|
Is human scoring the best criteria for summary evaluation?
|
7 pages, 5 figures, 1 table
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Normally, summary quality measures are compared with quality scores produced
by human annotators. A higher correlation with human scores is considered to be
a fair indicator of a better measure. We discuss observations that cast doubt
on this view. We attempt to show a possibility of an alternative indicator.
Given a family of measures, we explore a criterion of selecting the best
measure not relying on correlations with human scores. Our observations for the
BLANC family of measures suggest that the criterion is universal across very
different styles of summaries.
|
[
{
"created": "Tue, 29 Dec 2020 04:48:52 GMT",
"version": "v1"
}
] |
2021-01-01
|
[
[
"Vasilyev",
"Oleg",
""
],
[
"Bohannon",
"John",
""
]
] |
Normally, summary quality measures are compared with quality scores produced by human annotators. A higher correlation with human scores is considered to be a fair indicator of a better measure. We discuss observations that cast doubt on this view. We attempt to show a possibility of an alternative indicator. Given a family of measures, we explore a criterion of selecting the best measure not relying on correlations with human scores. Our observations for the BLANC family of measures suggest that the criterion is universal across very different styles of summaries.
|
2304.06484
|
Mark Hamilton
|
Howard Zhong, Mark Hamilton
|
Exploring Gender and Race Biases in the NFT Market
| null | null |
10.1016/j.frl.2023.103651
| null |
cs.CY cs.LG cs.SI econ.GN q-fin.EC stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Non-Fungible Tokens (NFTs) are non-interchangeable assets, usually digital
art, which are stored on the blockchain. Preliminary studies find that female
and darker-skinned NFTs are valued less than their male and lighter-skinned
counterparts. However, these studies analyze only the CryptoPunks collection.
We test the statistical significance of race and gender biases in the prices of
CryptoPunks and present the first study of gender bias in the broader NFT
market. We find evidence of racial bias but not gender bias. Our work also
introduces a dataset of gender-labeled NFT collections to advance the broader
study of social equity in this emerging market.
|
[
{
"created": "Wed, 29 Mar 2023 17:38:11 GMT",
"version": "v1"
}
] |
2023-04-14
|
[
[
"Zhong",
"Howard",
""
],
[
"Hamilton",
"Mark",
""
]
] |
Non-Fungible Tokens (NFTs) are non-interchangeable assets, usually digital art, which are stored on the blockchain. Preliminary studies find that female and darker-skinned NFTs are valued less than their male and lighter-skinned counterparts. However, these studies analyze only the CryptoPunks collection. We test the statistical significance of race and gender biases in the prices of CryptoPunks and present the first study of gender bias in the broader NFT market. We find evidence of racial bias but not gender bias. Our work also introduces a dataset of gender-labeled NFT collections to advance the broader study of social equity in this emerging market.
|
2205.15868
|
Ming Ding
|
Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
|
CogVideo: Large-scale Pretraining for Text-to-Video Generation via
Transformers
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale pretrained transformers have created milestones in text (GPT-3)
and text-to-image (DALL-E and CogView) generation. Its application to video
generation is still facing many challenges: The potential huge computation cost
makes the training from scratch unaffordable; The scarcity and weak relevance
of text-video datasets hinder the model understanding complex movement
semantics. In this work, we present 9B-parameter transformer CogVideo, trained
by inheriting a pretrained text-to-image model, CogView2. We also propose
multi-frame-rate hierarchical training strategy to better align text and video
clips. As (probably) the first open-source large-scale pretrained text-to-video
model, CogVideo outperforms all publicly available models by a large margin in
machine and human evaluations.
|
[
{
"created": "Sun, 29 May 2022 19:02:15 GMT",
"version": "v1"
}
] |
2022-06-01
|
[
[
"Hong",
"Wenyi",
""
],
[
"Ding",
"Ming",
""
],
[
"Zheng",
"Wendi",
""
],
[
"Liu",
"Xinghan",
""
],
[
"Tang",
"Jie",
""
]
] |
Large-scale pretrained transformers have created milestones in text (GPT-3) and text-to-image (DALL-E and CogView) generation. Its application to video generation is still facing many challenges: The potential huge computation cost makes the training from scratch unaffordable; The scarcity and weak relevance of text-video datasets hinder the model understanding complex movement semantics. In this work, we present 9B-parameter transformer CogVideo, trained by inheriting a pretrained text-to-image model, CogView2. We also propose multi-frame-rate hierarchical training strategy to better align text and video clips. As (probably) the first open-source large-scale pretrained text-to-video model, CogVideo outperforms all publicly available models by a large margin in machine and human evaluations.
|
1312.0526
|
Giuseppe Ottaviano
|
Djamal Belazzougui, Paolo Boldi, Giuseppe Ottaviano, Rossano
Venturini, Sebastiano Vigna
|
Cache-Oblivious Peeling of Random Hypergraphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The computation of a peeling order in a randomly generated hypergraph is the
most time-consuming step in a number of constructions, such as perfect hashing
schemes, random $r$-SAT solvers, error-correcting codes, and approximate set
encodings. While there exists a straightforward linear time algorithm, its poor
I/O performance makes it impractical for hypergraphs whose size exceeds the
available internal memory.
We show how to reduce the computation of a peeling order to a small number of
sequential scans and sorts, and analyze its I/O complexity in the
cache-oblivious model. The resulting algorithm requires $O(\mathrm{sort}(n))$
I/Os and $O(n \log n)$ time to peel a random hypergraph with $n$ edges.
We experimentally evaluate the performance of our implementation of this
algorithm in a real-world scenario by using the construction of minimal perfect
hash functions (MPHF) as our test case: our algorithm builds a MPHF of $7.6$
billion keys in less than $21$ hours on a single machine. The resulting data
structure is both more space-efficient and faster than that obtained with the
current state-of-the-art MPHF construction for large-scale key sets.
|
[
{
"created": "Mon, 2 Dec 2013 17:37:51 GMT",
"version": "v1"
}
] |
2013-12-03
|
[
[
"Belazzougui",
"Djamal",
""
],
[
"Boldi",
"Paolo",
""
],
[
"Ottaviano",
"Giuseppe",
""
],
[
"Venturini",
"Rossano",
""
],
[
"Vigna",
"Sebastiano",
""
]
] |
The computation of a peeling order in a randomly generated hypergraph is the most time-consuming step in a number of constructions, such as perfect hashing schemes, random $r$-SAT solvers, error-correcting codes, and approximate set encodings. While there exists a straightforward linear time algorithm, its poor I/O performance makes it impractical for hypergraphs whose size exceeds the available internal memory. We show how to reduce the computation of a peeling order to a small number of sequential scans and sorts, and analyze its I/O complexity in the cache-oblivious model. The resulting algorithm requires $O(\mathrm{sort}(n))$ I/Os and $O(n \log n)$ time to peel a random hypergraph with $n$ edges. We experimentally evaluate the performance of our implementation of this algorithm in a real-world scenario by using the construction of minimal perfect hash functions (MPHF) as our test case: our algorithm builds a MPHF of $7.6$ billion keys in less than $21$ hours on a single machine. The resulting data structure is both more space-efficient and faster than that obtained with the current state-of-the-art MPHF construction for large-scale key sets.
|
1808.08601
|
Zhengqi Li
|
Zhengqi Li and Noah Snavely
|
CGIntrinsics: Better Intrinsic Image Decomposition through
Physically-Based Rendering
|
Paper for 'CGIntrinsics: Better Intrinsic Image Decomposition through
Physically-Based Rendering' published in ECCV, 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intrinsic image decomposition is a challenging, long-standing computer vision
problem for which ground truth data is very difficult to acquire. We explore
the use of synthetic data for training CNN-based intrinsic image decomposition
models, then applying these learned models to real-world images. To that end,
we present \ICG, a new, large-scale dataset of physically-based rendered images
of scenes with full ground truth decompositions. The rendering process we use
is carefully designed to yield high-quality, realistic images, which we find to
be crucial for this problem domain. We also propose a new end-to-end training
method that learns better decompositions by leveraging \ICG, and optionally IIW
and SAW, two recent datasets of sparse annotations on real-world images.
Surprisingly, we find that a decomposition network trained solely on our
synthetic data outperforms the state-of-the-art on both IIW and SAW, and
performance improves even further when IIW and SAW data is added during
training. Our work demonstrates the surprising effectiveness of
carefully-rendered synthetic data for the intrinsic images task.
|
[
{
"created": "Sun, 26 Aug 2018 17:58:46 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Nov 2018 00:53:00 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Dec 2018 22:34:49 GMT",
"version": "v3"
}
] |
2018-12-07
|
[
[
"Li",
"Zhengqi",
""
],
[
"Snavely",
"Noah",
""
]
] |
Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, then applying these learned models to real-world images. To that end, we present \ICG, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging \ICG, and optionally IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state-of-the-art on both IIW and SAW, and performance improves even further when IIW and SAW data is added during training. Our work demonstrates the surprising effectiveness of carefully-rendered synthetic data for the intrinsic images task.
|
2111.09487
|
Bingjie Yan
|
Zhicheng Zhou, Hailong Chen, Kunhua Li, Fei Hu, Bingjie Yan, Jieren
Cheng, Xuyan Wei, Bernie Liu, Xiulai Li, Fuwen Chen, Yongji Sui
|
A Novel Optimized Asynchronous Federated Learning Framework
|
8 pages
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL) has been applied in many fields since it was proposed,
such as credit assessment, medicine, etc. Because of differences in network or
computing resources, the clients may not update their gradients at the same
time, which may lead to long waiting or idle periods. That's why an
Asynchronous Federated Learning (AFL) method is needed. The main bottleneck in
AFL is communication. How to find a balance between model performance and
communication cost is a challenge in AFL. This paper proposes a novel AFL
framework, VAFL, and we verify the performance of the algorithm through
extensive experiments. The experiments show that VAFL can reduce communication
times by about 51.02\% with a 48.23\% average communication compression rate
and allow the model to converge faster. The code is available at
\url{https://github.com/RobAI-Lab/VAFL}
|
[
{
"created": "Thu, 18 Nov 2021 02:52:49 GMT",
"version": "v1"
}
] |
2021-11-19
|
[
[
"Zhou",
"Zhicheng",
""
],
[
"Chen",
"Hailong",
""
],
[
"Li",
"Kunhua",
""
],
[
"Hu",
"Fei",
""
],
[
"Yan",
"Bingjie",
""
],
[
"Cheng",
"Jieren",
""
],
[
"Wei",
"Xuyan",
""
],
[
"Liu",
"Bernie",
""
],
[
"Li",
"Xiulai",
""
],
[
"Chen",
"Fuwen",
""
],
[
"Sui",
"Yongji",
""
]
] |
Federated Learning (FL) has been applied in many fields since it was proposed, such as credit assessment, medicine, etc. Because of differences in network or computing resources, the clients may not update their gradients at the same time, which may lead to long waiting or idle periods. That's why an Asynchronous Federated Learning (AFL) method is needed. The main bottleneck in AFL is communication. How to find a balance between model performance and communication cost is a challenge in AFL. This paper proposes a novel AFL framework, VAFL, and we verify the performance of the algorithm through extensive experiments. The experiments show that VAFL can reduce communication times by about 51.02\% with a 48.23\% average communication compression rate and allow the model to converge faster. The code is available at \url{https://github.com/RobAI-Lab/VAFL}
|
1803.09932
|
Jianbo Wang
|
Dasong Li, Jianbo Wang
|
Image Semantic Transformation: Faster, Lighter and Stronger
|
ECCV 2018 submission, 14 pages
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose Image-Semantic-Transformation-Reconstruction-Circle(ISTRC) model,
a novel and powerful method using facenet's Euclidean latent space to
understand the images. As the name suggests, ISTRC construct the circle, able
to perfectly reconstruct images. One powerful Euclidean latent space embedded
in ISTRC is FaceNet's last layer with the power of distinguishing and
understanding images. Our model will reconstruct the images and manipulate
Euclidean latent vectors to achieve semantic transformations and semantic
image arithmetic calculations. In this paper, we show that ISTRC performs 10
high-level semantic transformations like "Male and female","add smile","open
mouth", "deduct beard or add mustache", "bigger/smaller nose", "make older and
younger", "bigger lips", "bigger eyes", "bigger/smaller mouths" and "more
attractive". It just takes 3 hours(GTX 1080) to train the models of 10 semantic
transformations.
|
[
{
"created": "Tue, 27 Mar 2018 07:20:46 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Li",
"Dasong",
""
],
[
"Wang",
"Jianbo",
""
]
] |
We propose Image-Semantic-Transformation-Reconstruction-Circle(ISTRC) model, a novel and powerful method using FaceNet's Euclidean latent space to understand images. As the name suggests, ISTRC constructs the circle, able to perfectly reconstruct images. One powerful Euclidean latent space embedded in ISTRC is FaceNet's last layer with the power of distinguishing and understanding images. Our model will reconstruct the images and manipulate Euclidean latent vectors to achieve semantic transformations and semantic image arithmetic calculations. In this paper, we show that ISTRC performs 10 high-level semantic transformations like "Male and female","add smile","open mouth", "deduct beard or add mustache", "bigger/smaller nose", "make older and younger", "bigger lips", "bigger eyes", "bigger/smaller mouths" and "more attractive". It just takes 3 hours(GTX 1080) to train the models of 10 semantic transformations.
|
1802.09303
|
Ganzhao Yuan
|
Ganzhao Yuan, Li Shen, Wei-Shi Zheng
|
A Decomposition Algorithm for the Sparse Generalized Eigenvalue Problem
|
To appear in CVPR 2019
| null | null | null |
cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sparse generalized eigenvalue problem arises in a number of standard and
modern statistical learning models, including sparse principal component
analysis, sparse Fisher discriminant analysis, and sparse canonical correlation
analysis. However, this problem is difficult to solve since it is NP-hard. In
this paper, we consider a new decomposition method to tackle this problem.
Specifically, we use random or/and swapping strategies to find a working set
and perform global combinatorial search over the small subset of variables. We
consider a bisection search method and a coordinate descent method for solving
the quadratic fractional programming subproblem. In addition, we provide some
theoretical analysis for the proposed method. Our experiments have shown that
the proposed method significantly and consistently outperforms existing
solutions in terms of accuracy.
|
[
{
"created": "Mon, 26 Feb 2018 14:00:22 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Mar 2019 08:34:02 GMT",
"version": "v2"
}
] |
2019-03-05
|
[
[
"Yuan",
"Ganzhao",
""
],
[
"Shen",
"Li",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] |
The sparse generalized eigenvalue problem arises in a number of standard and modern statistical learning models, including sparse principal component analysis, sparse Fisher discriminant analysis, and sparse canonical correlation analysis. However, this problem is difficult to solve since it is NP-hard. In this paper, we consider a new decomposition method to tackle this problem. Specifically, we use random or/and swapping strategies to find a working set and perform global combinatorial search over the small subset of variables. We consider a bisection search method and a coordinate descent method for solving the quadratic fractional programming subproblem. In addition, we provide some theoretical analysis for the proposed method. Our experiments have shown that the proposed method significantly and consistently outperforms existing solutions in terms of accuracy.
|
2207.01531
|
Giovanni Apruzzese
|
Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova, Pavel Laskov
|
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial
Examples
| null | null |
10.1109/TNSM.2022.3188930
| null |
cs.CR cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Fifth Generation (5G) networks must support billions of heterogeneous devices
while guaranteeing optimal Quality of Service (QoS). Such requirements are
impossible to meet with human effort alone, and Machine Learning (ML)
represents a core asset in 5G. ML, however, is known to be vulnerable to
adversarial examples; moreover, as our paper will show, the 5G context is
exposed to yet another type of adversarial ML attack that cannot be
formalized with existing threat models. Proactive assessment of such risks is
also challenging due to the lack of ML-powered 5G equipment available for
adversarial ML research.
To tackle these problems, we propose a novel adversarial ML threat model that
is particularly suited to 5G scenarios, and is agnostic to the precise function
solved by ML. In contrast to existing ML threat models, our attacks do not
require any compromise of the target 5G system while still being viable due to
the QoS guarantees and the open nature of 5G networks. Furthermore, we propose
an original framework for realistic ML security assessments based on public
data. We proactively evaluate our threat model on 6 applications of ML
envisioned in 5G. Our attacks affect both the training and the inference
stages, can degrade the performance of state-of-the-art ML systems, and have a
lower entry barrier than previous attacks.
|
[
{
"created": "Mon, 4 Jul 2022 15:52:54 GMT",
"version": "v1"
}
] |
2022-10-19
|
[
[
"Apruzzese",
"Giovanni",
""
],
[
"Vladimirov",
"Rodion",
""
],
[
"Tastemirova",
"Aliya",
""
],
[
"Laskov",
"Pavel",
""
]
] |
Fifth Generation (5G) networks must support billions of heterogeneous devices while guaranteeing optimal Quality of Service (QoS). Such requirements are impossible to meet with human effort alone, and Machine Learning (ML) represents a core asset in 5G. ML, however, is known to be vulnerable to adversarial examples; moreover, as our paper will show, the 5G context is exposed to yet another type of adversarial ML attack that cannot be formalized with existing threat models. Proactive assessment of such risks is also challenging due to the lack of ML-powered 5G equipment available for adversarial ML research. To tackle these problems, we propose a novel adversarial ML threat model that is particularly suited to 5G scenarios, and is agnostic to the precise function solved by ML. In contrast to existing ML threat models, our attacks do not require any compromise of the target 5G system while still being viable due to the QoS guarantees and the open nature of 5G networks. Furthermore, we propose an original framework for realistic ML security assessments based on public data. We proactively evaluate our threat model on 6 applications of ML envisioned in 5G. Our attacks affect both the training and the inference stages, can degrade the performance of state-of-the-art ML systems, and have a lower entry barrier than previous attacks.
|
2007.04954
|
Chuang Gan
|
Chuang Gan, Jeremy Schwartz, Seth Alter, Damian Mrowca, Martin
Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek
Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Michael
Lingelbach, Aidan Curtis, Kevin Feigelis, Daniel M. Bear, Dan Gutfreund,
David Cox, Antonio Torralba, James J. DiCarlo, Joshua B. Tenenbaum, Josh H.
McDermott, Daniel L.K. Yamins
|
ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation
|
Oral Presentation at NeurIPS 21 Datasets and Benchmarks Track.
Project page: http://www.threedworld.org
| null | null | null |
cs.CV cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal
physical simulation. TDW enables simulation of high-fidelity sensory data and
physical interactions between mobile agents and objects in rich 3D
environments. Unique properties include: real-time near-photo-realistic image
rendering; a library of objects and environments, and routines for their
customization; generative procedures for efficiently building classes of new
environments; high-fidelity audio rendering; realistic physical interactions
for a variety of material types, including cloths, liquid, and deformable
objects; customizable agents that embody AI agents; and support for human
interactions with VR devices. TDW's API enables multiple agents to interact
within a simulation and returns a range of sensor and physics data representing
the state of the world. We present initial experiments enabled by TDW in
emerging research directions in computer vision, machine learning, and
cognitive science, including multi-modal physical scene understanding, physical
dynamics predictions, multi-agent interactions, models that learn like a child,
and attention studies in humans and neural networks.
|
[
{
"created": "Thu, 9 Jul 2020 17:33:27 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Dec 2021 17:03:21 GMT",
"version": "v2"
}
] |
2021-12-30
|
[
[
"Gan",
"Chuang",
""
],
[
"Schwartz",
"Jeremy",
""
],
[
"Alter",
"Seth",
""
],
[
"Mrowca",
"Damian",
""
],
[
"Schrimpf",
"Martin",
""
],
[
"Traer",
"James",
""
],
[
"De Freitas",
"Julian",
""
],
[
"Kubilius",
"Jonas",
""
],
[
"Bhandwaldar",
"Abhishek",
""
],
[
"Haber",
"Nick",
""
],
[
"Sano",
"Megumi",
""
],
[
"Kim",
"Kuno",
""
],
[
"Wang",
"Elias",
""
],
[
"Lingelbach",
"Michael",
""
],
[
"Curtis",
"Aidan",
""
],
[
"Feigelis",
"Kevin",
""
],
[
"Bear",
"Daniel M.",
""
],
[
"Gutfreund",
"Dan",
""
],
[
"Cox",
"David",
""
],
[
"Torralba",
"Antonio",
""
],
[
"DiCarlo",
"James J.",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"McDermott",
"Josh H.",
""
],
[
"Yamins",
"Daniel L. K.",
""
]
] |
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation. TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments. Unique properties include: real-time near-photo-realistic image rendering; a library of objects and environments, and routines for their customization; generative procedures for efficiently building classes of new environments; high-fidelity audio rendering; realistic physical interactions for a variety of material types, including cloths, liquid, and deformable objects; customizable agents that embody AI agents; and support for human interactions with VR devices. TDW's API enables multiple agents to interact within a simulation and returns a range of sensor and physics data representing the state of the world. We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science, including multi-modal physical scene understanding, physical dynamics predictions, multi-agent interactions, models that learn like a child, and attention studies in humans and neural networks.
|
2211.13258
|
Koorosh Aslansefat
|
Sohag Kabir, Koorosh Aslansefat, Prosanta Gope, Felician Campean,
Yiannis Papadopoulos
|
Online Dynamic Reliability Evaluation of Wind Turbines based on
Drone-assisted Monitoring
|
A modified version of this work has been published in the 2022
International Conference on Computing, Electronics & Communications
Engineering (iCCECE). This work is a draft author version
| null | null | null |
cs.AI cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Offshore wind energy is increasingly becoming an attractive source of
energy due to its lower environmental impact. Effective operation and
maintenance that ensures the maximum availability of the energy generation
process using offshore facilities and minimal production cost are two key
factors to improve the competitiveness of this energy source over other
traditional sources of energy. Condition monitoring systems are widely used for
health management of offshore wind farms to have improved operation and
maintenance. Reliability of the wind farms is increasingly being evaluated to
aid in the maintenance process and thereby to improve the availability of the
farms. However, much of the reliability analysis is performed offline based on
statistical data. In this article, we propose a drone-assisted monitoring based
method for online reliability evaluation of wind turbines. A blade system of a
wind turbine is used as an illustrative example to demonstrate the proposed
approach.
|
[
{
"created": "Wed, 23 Nov 2022 19:11:33 GMT",
"version": "v1"
}
] |
2022-11-28
|
[
[
"Kabir",
"Sohag",
""
],
[
"Aslansefat",
"Koorosh",
""
],
[
"Gope",
"Prosanta",
""
],
[
"Campean",
"Felician",
""
],
[
"Papadopoulos",
"Yiannis",
""
]
] |
Offshore wind energy is increasingly becoming an attractive source of energy due to its lower environmental impact. Effective operation and maintenance that ensures the maximum availability of the energy generation process using offshore facilities and minimal production cost are two key factors to improve the competitiveness of this energy source over other traditional sources of energy. Condition monitoring systems are widely used for health management of offshore wind farms to have improved operation and maintenance. Reliability of the wind farms is increasingly being evaluated to aid in the maintenance process and thereby to improve the availability of the farms. However, much of the reliability analysis is performed offline based on statistical data. In this article, we propose a drone-assisted monitoring based method for online reliability evaluation of wind turbines. A blade system of a wind turbine is used as an illustrative example to demonstrate the proposed approach.
|
1602.01641
|
Emeric Gioan
|
Emeric Gioan, Kevin Sol, G\'erard Subsol
|
Orientations of Simplices Determined by Orderings on the Coordinates of
their Vertices
|
Full length paper submitted to a journal. A short conference version
has been published [5]
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Provided n points in an (n-1)-dimensional affine space, and one ordering of
the points for each coordinate, we address the problem of testing whether these
orderings determine if the points are the vertices of a simplex (i.e. are
affinely independent), regardless of the real values of the coordinates. We
also attempt to determine the orientation of this simplex. In other words,
given a matrix whose columns correspond to affine points, we want to know when
the sign (or the non-nullity) of its determinant is implied by orderings given
to each row for the values of the row. We completely solve the problem in
dimensions 2 and 3. We provide a direct combinatorial characterization, along
with a formal calculus method. It can also be viewed as a decision algorithm,
and is based on testing the existence of a suitable inductive cofactor
expansion of the determinant. We conjecture that our method generalizes in
higher dimensions. This work aims to be part of a study on how oriented
matroids encode shapes of 3-dimensional landmark-based objects. Specifically,
applications include the analysis of anatomical data for physical anthropology
and clinical research.
|
[
{
"created": "Thu, 4 Feb 2016 11:26:14 GMT",
"version": "v1"
}
] |
2016-02-05
|
[
[
"Gioan",
"Emeric",
""
],
[
"Sol",
"Kevin",
""
],
[
"Subsol",
"Gérard",
""
]
] |
Provided n points in an (n-1)-dimensional affine space, and one ordering of the points for each coordinate, we address the problem of testing whether these orderings determine if the points are the vertices of a simplex (i.e. are affinely independent), regardless of the real values of the coordinates. We also attempt to determine the orientation of this simplex. In other words, given a matrix whose columns correspond to affine points, we want to know when the sign (or the non-nullity) of its determinant is implied by orderings given to each row for the values of the row. We completely solve the problem in dimensions 2 and 3. We provide a direct combinatorial characterization, along with a formal calculus method. It can also be viewed as a decision algorithm, and is based on testing the existence of a suitable inductive cofactor expansion of the determinant. We conjecture that our method generalizes in higher dimensions. This work aims to be part of a study on how oriented matroids encode shapes of 3-dimensional landmark-based objects. Specifically, applications include the analysis of anatomical data for physical anthropology and clinical research.
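As a hypothetical illustration of the determinant test underlying this problem (a plain numerical check on given coordinates, not the paper's combinatorial decision algorithm, which works from the orderings alone):

```python
import numpy as np

def simplex_orientation(points):
    """Orientation of n points in (n-1)-dimensional affine space: the sign
    of the determinant of the matrix whose columns are the points with an
    appended row of ones. Returns +1 or -1 for a proper simplex, and 0 if
    the points are affinely dependent."""
    pts = np.asarray(points, dtype=float)        # shape (n, n-1)
    mat = np.vstack([pts.T, np.ones(len(pts))])  # homogeneous coordinates
    d = np.linalg.det(mat)
    return 0 if abs(d) < 1e-9 else (1 if d > 0 else -1)
```

Swapping two vertices flips the sign; the question the paper studies is when this sign is already forced by the per-coordinate orderings of the points, regardless of the actual coordinate values.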
|
1607.08821
|
Sina Sajadmanesh
|
Sina Sajadmanesh, Hamid R. Rabiee and Ali Khodadadi
|
Predicting Anchor Links between Heterogeneous Social Networks
|
To be published in "Proceedings of the 2016 IEEE/ACM International
Conference on Advances in Social Networks Analysis and Mining (ASONAM)"
| null |
10.1109/ASONAM.2016.7752228
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People usually get involved in multiple social networks to enjoy new services
or to fulfill their needs. Many new social networks try to attract users of
other existing networks to increase the number of their users. Once a user
(called source user) of a social network (called source network) joins a new
social network (called target network), a new inter-network link (called anchor
link) is formed between the source and target networks. In this paper, we
concentrate on predicting the formation of such anchor links between
heterogeneous social networks. Unlike conventional link prediction problems in
which the formation of a link between two existing users within a single
network is predicted, in anchor link prediction, the target user is missing and
will be added to the target network once the anchor link is created. To solve
this problem, we use meta-paths as a powerful tool for utilizing heterogeneous
information in both the source and target networks. To this end, we propose an
effective general meta-path-based approach called Connector and Recursive
Meta-Paths (CRMP). By using those two different categories of meta-paths, we
model different aspects of social factors that may affect a source user to join
the target network, resulting in the formation of a new anchor link. Extensive
experiments on real-world heterogeneous social networks demonstrate the
effectiveness of the proposed method against the recent methods.
|
[
{
"created": "Fri, 29 Jul 2016 14:20:52 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Aug 2016 04:45:12 GMT",
"version": "v2"
}
] |
2017-10-04
|
[
[
"Sajadmanesh",
"Sina",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Khodadadi",
"Ali",
""
]
] |
People usually get involved in multiple social networks to enjoy new services or to fulfill their needs. Many new social networks try to attract users of other existing networks to increase the number of their users. Once a user (called source user) of a social network (called source network) joins a new social network (called target network), a new inter-network link (called anchor link) is formed between the source and target networks. In this paper, we concentrate on predicting the formation of such anchor links between heterogeneous social networks. Unlike conventional link prediction problems in which the formation of a link between two existing users within a single network is predicted, in anchor link prediction, the target user is missing and will be added to the target network once the anchor link is created. To solve this problem, we use meta-paths as a powerful tool for utilizing heterogeneous information in both the source and target networks. To this end, we propose an effective general meta-path-based approach called Connector and Recursive Meta-Paths (CRMP). By using those two different categories of meta-paths, we model different aspects of social factors that may affect a source user to join the target network, resulting in the formation of a new anchor link. Extensive experiments on real-world heterogeneous social networks demonstrate the effectiveness of the proposed method against the recent methods.
|
2401.06582
|
Lynnette Hui Xian Ng
|
Lynnette Hui Xian Ng, Dawn C. Robertson, Kathleen M. Carley
|
Cyborgs for strategic communication on social media
|
To appear in Big Data and Society
| null |
10.1177/20539517241231275
| null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media platforms are a key ground of information consumption and
dissemination. Key figures like politicians, celebrities and activists have
leveraged its wide user base for strategic communication. Strategic
communications, or StratCom, is the deliberate act of information creation and
distribution. Its techniques are used by these key figures for establishing
their brand and amplifying their messages. Automated scripts are used on top of
personal touches to quickly and effectively perform these tasks. The
combination of automation and manual online posting creates a Cyborg social
media profile, which is a hybrid between bot and human. In this study, we
establish a quantitative definition for a Cyborg account: an account that is
detected as a bot in one time window and identified as a human in another.
This definition makes use of frequent changes of bot classification labels
and large differences in bot likelihood scores to identify Cyborgs. We
perform a large-scale analysis across over 3.1 million users from Twitter
collected from two key events, the 2020 Coronavirus pandemic and 2020 US
Elections. We extract Cyborgs from two datasets and employ tools from network
science, natural language processing and manual annotation to characterize
Cyborg accounts. Our analyses show that Cyborg accounts are mostly constructed
for strategic communication, have a strong duality in their bot/human
classification, and are tactically positioned in the social media network,
which helps them promote their desired content. Cyborgs are also
discovered to have long online lives, indicating their ability to evade bot
detectors, or the graciousness of platforms to allow their operations.
|
[
{
"created": "Fri, 12 Jan 2024 13:57:55 GMT",
"version": "v1"
}
] |
2024-02-23
|
[
[
"Ng",
"Lynnette Hui Xian",
""
],
[
"Robertson",
"Dawn C.",
""
],
[
"Carley",
"Kathleen M.",
""
]
] |
Social media platforms are a key ground of information consumption and dissemination. Key figures like politicians, celebrities and activists have leveraged its wide user base for strategic communication. Strategic communications, or StratCom, is the deliberate act of information creation and distribution. Its techniques are used by these key figures for establishing their brand and amplifying their messages. Automated scripts are used on top of personal touches to quickly and effectively perform these tasks. The combination of automation and manual online posting creates a Cyborg social media profile, which is a hybrid between bot and human. In this study, we establish a quantitative definition for a Cyborg account: an account that is detected as a bot in one time window and identified as a human in another. This definition makes use of frequent changes of bot classification labels and large differences in bot likelihood scores to identify Cyborgs. We perform a large-scale analysis across over 3.1 million users from Twitter collected from two key events, the 2020 Coronavirus pandemic and 2020 US Elections. We extract Cyborgs from two datasets and employ tools from network science, natural language processing and manual annotation to characterize Cyborg accounts. Our analyses show that Cyborg accounts are mostly constructed for strategic communication, have a strong duality in their bot/human classification, and are tactically positioned in the social media network, which helps them promote their desired content. Cyborgs are also discovered to have long online lives, indicating their ability to evade bot detectors, or the graciousness of platforms to allow their operations.
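The windowed flip criterion described above can be sketched as follows; the threshold and gap values here are illustrative placeholders, not the study's parameters:

```python
def is_cyborg(bot_scores, threshold=0.5, min_gap=0.3):
    """Flag an account as a Cyborg candidate: its per-window bot-likelihood
    scores cross the bot/human threshold in different time windows AND the
    difference between its scores is large."""
    labels = [s >= threshold for s in bot_scores]  # bot label per time window
    flips = any(labels) and not all(labels)        # bot in one window, human in another
    gap = max(bot_scores) - min(bot_scores)        # size of the score swing
    return flips and gap >= min_gap
```

An account scoring 0.9 in one window and 0.1 in the next would be flagged; a consistently high or consistently low scorer would not.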
|
2402.16909
|
Kianoosh Kazemi
|
Kianoosh Kazemi, Iina Ryht\"a, Iman Azimi, Hannakaisa Niela-Vilen,
Anna Axelin, Amir M. Rahmani, Pasi Liljeberg
|
Impact of Physical Activity on Quality of Life During Pregnancy: A
Causal ML Approach
| null | null | null | null |
cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
The concept of Quality of Life (QoL) refers to a holistic measurement of an
individual's well-being, incorporating psychological and social aspects.
Pregnant women, especially those with obesity and stress, often experience
lower QoL. Physical activity (PA) has shown the potential to enhance the QoL.
However, pregnant women who are overweight and obese rarely meet the
recommended level of PA. Studies have investigated the relationship between PA
and QoL during pregnancy using correlation-based approaches. These methods aim
to discover spurious correlations between variables rather than causal
relationships. Besides, the existing methods mainly rely on physical activity
parameters and neglect the use of different factors such as maternal (medical)
history and context data, leading to biased estimates. Furthermore, the
estimations lack an understanding of mediators and counterfactual scenarios
that might affect them. In this paper, we investigate the causal relationship
between being physically active (treatment variable) and the QoL (outcome)
during pregnancy and postpartum. To estimate the causal effect, we develop a
Causal Machine Learning method, integrating causal discovery and causal
inference components. The data for our investigation is derived from a
long-term wearable-based health monitoring study focusing on overweight and
obese pregnant women. The machine learning (meta-learner) estimation technique
is used to estimate the causal effect. Our result shows that performing
adequate physical activity during pregnancy and postpartum improves the QoL
by 7.3 and 3.4 units on average in the physical health and psychological
domains, respectively. In the final step, four refutation analysis techniques
are
employed to validate our estimation.
|
[
{
"created": "Sun, 25 Feb 2024 12:07:32 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Kazemi",
"Kianoosh",
""
],
[
"Ryhtä",
"Iina",
""
],
[
"Azimi",
"Iman",
""
],
[
"Niela-Vilen",
"Hannakaisa",
""
],
[
"Axelin",
"Anna",
""
],
[
"Rahmani",
"Amir M.",
""
],
[
"Liljeberg",
"Pasi",
""
]
] |
The concept of Quality of Life (QoL) refers to a holistic measurement of an individual's well-being, incorporating psychological and social aspects. Pregnant women, especially those with obesity and stress, often experience lower QoL. Physical activity (PA) has shown the potential to enhance the QoL. However, pregnant women who are overweight and obese rarely meet the recommended level of PA. Studies have investigated the relationship between PA and QoL during pregnancy using correlation-based approaches. These methods aim to discover spurious correlations between variables rather than causal relationships. Besides, the existing methods mainly rely on physical activity parameters and neglect the use of different factors such as maternal (medical) history and context data, leading to biased estimates. Furthermore, the estimations lack an understanding of mediators and counterfactual scenarios that might affect them. In this paper, we investigate the causal relationship between being physically active (treatment variable) and the QoL (outcome) during pregnancy and postpartum. To estimate the causal effect, we develop a Causal Machine Learning method, integrating causal discovery and causal inference components. The data for our investigation is derived from a long-term wearable-based health monitoring study focusing on overweight and obese pregnant women. The machine learning (meta-learner) estimation technique is used to estimate the causal effect. Our result shows that performing adequate physical activity during pregnancy and postpartum improves the QoL by 7.3 and 3.4 units on average in the physical health and psychological domains, respectively. In the final step, four refutation analysis techniques are employed to validate our estimation.
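The meta-learner idea can be sketched with a minimal T-learner on synthetic data; plain least squares stands in for the study's ML models, and all variable names are illustrative:

```python
import numpy as np

def t_learner_ate(X, t, y):
    """T-learner: fit separate outcome models on treated (t == 1) and
    control (t == 0) units, then average the difference of their
    predictions over all units to estimate the average treatment effect."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
    b0, *_ = np.linalg.lstsq(X1[t == 0], y[t == 0], rcond=None)
    b1, *_ = np.linalg.lstsq(X1[t == 1], y[t == 1], rcond=None)
    return float(np.mean(X1 @ b1 - X1 @ b0))
```

On noiseless data generated as y = 2x + 3t, this estimator recovers the true treatment effect of 3.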
|
2109.12814
|
Leyang Cui
|
Leyang Cui, Sen Yang, Yue Zhang
|
Investigating Non-local Features for Neural Constituency Parsing
|
ACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Thanks to the strong representation power of neural encoders, neural
chart-based parsers have achieved highly competitive performance by using local
features. Recently, it has been shown that non-local features in CRF structures
lead to improvements. In this paper, we investigate injecting non-local
features into the training process of a local span-based parser, by predicting
constituent n-gram non-local patterns and ensuring consistency between
non-local patterns and local constituents. Results show that our simple method
gives better results than the self-attentive parser on both PTB and CTB.
Besides, our method achieves state-of-the-art BERT-based performance on PTB
(95.92 F1) and strong performance on CTB (92.31 F1). Our parser also achieves
better or competitive performance in multilingual and zero-shot cross-domain
settings compared with the baseline.
|
[
{
"created": "Mon, 27 Sep 2021 06:14:30 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2022 06:43:32 GMT",
"version": "v2"
}
] |
2022-03-29
|
[
[
"Cui",
"Leyang",
""
],
[
"Yang",
"Sen",
""
],
[
"Zhang",
"Yue",
""
]
] |
Thanks to the strong representation power of neural encoders, neural chart-based parsers have achieved highly competitive performance by using local features. Recently, it has been shown that non-local features in CRF structures lead to improvements. In this paper, we investigate injecting non-local features into the training process of a local span-based parser, by predicting constituent n-gram non-local patterns and ensuring consistency between non-local patterns and local constituents. Results show that our simple method gives better results than the self-attentive parser on both PTB and CTB. Besides, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1). Our parser also achieves better or competitive performance in multilingual and zero-shot cross-domain settings compared with the baseline.
|
cs/0702057
|
Salman Beigi
|
Mohsen Bahramgiri, Salman Beigi
|
An Efficient Algorithm to Recognize Locally Equivalent Graphs in
Non-Binary Case
|
21 pages, no figures, minor corrections
| null | null | null |
cs.DS
| null |
Let $v$ be a vertex of a graph $G$. By the local complementation of $G$ at
$v$ we mean to complement the subgraph induced by the neighbors of $v$. This
operator can be generalized as follows. Assume that each edge of $G$ has a
label in the finite field $\mathbf{F}_q$. Let $(g_{ij})$ be the set of labels
($g_{ij}$ is the label of edge $ij$). We define two types of operators. For the
first one, let $v$ be a vertex of $G$ and $a\in \mathbf{F}_q$, and obtain the
graph with labels $g'_{ij}=g_{ij}+ag_{vi}g_{vj}$. For the second, if $0\neq
b\in \mathbf{F}_q$, the resulting graph is the graph with labels
$g''_{vi}=bg_{vi}$ and $g''_{ij}=g_{ij}$ for $i,j$ unequal to $v$. It is clear
that if the field is binary, these operators are just the local
complementations described above. The problem of whether two graphs are
equivalent under local complementations has been studied \cite{bouchalg}. Here
we consider the general case and, assuming that $q$ is odd, present the first
known efficient algorithm to verify whether two graphs are locally equivalent
or not.
|
[
{
"created": "Fri, 9 Feb 2007 15:42:46 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Jul 2007 21:01:46 GMT",
"version": "v2"
}
] |
2007-07-02
|
[
[
"Bahramgiri",
"Mohsen",
""
],
[
"Beigi",
"Salman",
""
]
] |
Let $v$ be a vertex of a graph $G$. By the local complementation of $G$ at $v$ we mean to complement the subgraph induced by the neighbors of $v$. This operator can be generalized as follows. Assume that each edge of $G$ has a label in the finite field $\mathbf{F}_q$. Let $(g_{ij})$ be the set of labels ($g_{ij}$ is the label of edge $ij$). We define two types of operators. For the first one, let $v$ be a vertex of $G$ and $a\in \mathbf{F}_q$, and obtain the graph with labels $g'_{ij}=g_{ij}+ag_{vi}g_{vj}$. For the second, if $0\neq b\in \mathbf{F}_q$, the resulting graph is the graph with labels $g''_{vi}=bg_{vi}$ and $g''_{ij}=g_{ij}$ for $i,j$ unequal to $v$. It is clear that if the field is binary, these operators are just the local complementations described above. The problem of whether two graphs are equivalent under local complementations has been studied \cite{bouchalg}. Here we consider the general case and, assuming that $q$ is odd, present the first known efficient algorithm to verify whether two graphs are locally equivalent or not.
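A hypothetical sketch of the two operators on a labeled graph; the dictionary representation and function names are ours, not the paper's:

```python
from itertools import combinations

def local_complement(labels, vertices, v, a, q):
    """First operator: g'_{ij} = g_{ij} + a * g_{vi} * g_{vj} (mod q) for
    all pairs i, j distinct from v. `labels` maps sorted edge tuples (i, j)
    to values in F_q (q prime); a missing edge has label 0."""
    new = dict(labels)
    for i, j in combinations(sorted(u for u in vertices if u != v), 2):
        gvi = labels.get(tuple(sorted((v, i))), 0)
        gvj = labels.get(tuple(sorted((v, j))), 0)
        new[(i, j)] = (labels.get((i, j), 0) + a * gvi * gvj) % q
    return new

def scale_at_vertex(labels, vertices, v, b, q):
    """Second operator: g''_{vi} = b * g_{vi} for b != 0; all labels not
    touching v are kept unchanged."""
    assert b % q != 0
    new = dict(labels)
    for i in vertices:
        e = tuple(sorted((v, i)))
        if i != v and e in labels:
            new[e] = (b * labels[e]) % q
    return new
```

With q = 2 and a = 1 the first operator reduces to ordinary local complementation: it toggles the edges among the neighbors of v.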
|
2406.05027
|
Jamie Lohoff
|
Jamie Lohoff and Emre Neftci
|
Optimizing Automatic Differentiation with Deep Reinforcement Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Computing Jacobians with automatic differentiation is ubiquitous in many
scientific domains such as machine learning, computational fluid dynamics,
robotics and finance. Even small savings in the number of computations or
memory usage in Jacobian computations can already incur massive savings in
energy consumption and runtime. While there exist many methods that allow for
such savings, they generally trade computational efficiency for approximations
of the exact Jacobian. In this paper, we present a novel method to optimize the
number of necessary multiplications for Jacobian computation by leveraging deep
reinforcement learning (RL) and a concept called cross-country elimination
while still computing the exact Jacobian. Cross-country elimination is a
framework for automatic differentiation that phrases Jacobian accumulation as
ordered elimination of all vertices on the computational graph where every
elimination incurs a certain computational cost. We formulate the search for
the optimal elimination order that minimizes the number of necessary
multiplications as a single player game which is played by an RL agent. We
demonstrate that this method achieves up to 33% improvements over
state-of-the-art methods on several relevant tasks taken from diverse domains.
Furthermore, we show that these theoretical gains translate into actual runtime
improvements by providing a cross-country elimination interpreter in JAX that
can efficiently execute the obtained elimination orders.
|
[
{
"created": "Fri, 7 Jun 2024 15:44:33 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2024 00:54:09 GMT",
"version": "v2"
}
] |
2024-06-18
|
[
[
"Lohoff",
"Jamie",
""
],
[
"Neftci",
"Emre",
""
]
] |
Computing Jacobians with automatic differentiation is ubiquitous in many scientific domains such as machine learning, computational fluid dynamics, robotics and finance. Even small savings in the number of computations or memory usage in Jacobian computations can already incur massive savings in energy consumption and runtime. While there exist many methods that allow for such savings, they generally trade computational efficiency for approximations of the exact Jacobian. In this paper, we present a novel method to optimize the number of necessary multiplications for Jacobian computation by leveraging deep reinforcement learning (RL) and a concept called cross-country elimination while still computing the exact Jacobian. Cross-country elimination is a framework for automatic differentiation that phrases Jacobian accumulation as ordered elimination of all vertices on the computational graph where every elimination incurs a certain computational cost. We formulate the search for the optimal elimination order that minimizes the number of necessary multiplications as a single player game which is played by an RL agent. We demonstrate that this method achieves up to 33% improvements over state-of-the-art methods on several relevant tasks taken from diverse domains. Furthermore, we show that these theoretical gains translate into actual runtime improvements by providing a cross-country elimination interpreter in JAX that can efficiently execute the obtained elimination orders.
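The cost model behind cross-country elimination can be sketched as follows, with a simplified count of one multiplication per predecessor-successor pair rather than the paper's RL machinery:

```python
def elimination_cost(edges, order):
    """Total multiplications for eliminating the intermediate vertices of a
    computational graph in the given order. Eliminating v connects each of
    its predecessors to each of its successors (one multiply per pair) and
    then removes v from the graph."""
    edges = set(edges)  # directed edges (u, w) of the computational graph
    total = 0
    for v in order:
        preds = {u for (u, w) in edges if w == v}
        succs = {w for (u, w) in edges if u == v}
        total += len(preds) * len(succs)
        edges -= {(u, v) for u in preds} | {(v, w) for w in succs}
        edges |= {(u, w) for u in preds for w in succs if u != w}
    return total
```

Different elimination orders can give different totals on the same graph; finding the cheapest order is exactly the search problem posed to the RL agent.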
|
2208.08255
|
Ashraf Tantawy
|
Ashraf Tantawy
|
On the Elements of Datasets for Cyber Physical Systems Security
|
Submitted for peer review
| null | null | null |
cs.CR cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Datasets are essential to apply AI algorithms to Cyber Physical System (CPS)
Security. Due to scarcity of real CPS datasets, researchers elected to generate
their own datasets using either real or virtualized testbeds. However, unlike
other AI domains, a CPS is a complex system with many interfaces that determine
its behavior. A dataset that comprises merely a collection of sensor
measurements and network traffic may not be sufficient to develop resilient AI
defensive or offensive agents. In this paper, we study the \emph{elements} of
CPS security datasets required to capture the system behavior and interactions,
and propose a dataset architecture that has the potential to enhance the
performance of AI algorithms in securing cyber physical systems. The framework
includes dataset elements, attack representation, and required dataset
features. We compare existing datasets to the proposed architecture to identify
the current limitations and discuss the future of CPS dataset generation using
testbeds.
|
[
{
"created": "Wed, 17 Aug 2022 12:20:57 GMT",
"version": "v1"
}
] |
2022-08-18
|
[
[
"Tantawy",
"Ashraf",
""
]
] |
Datasets are essential to apply AI algorithms to Cyber Physical System (CPS) Security. Due to scarcity of real CPS datasets, researchers elected to generate their own datasets using either real or virtualized testbeds. However, unlike other AI domains, a CPS is a complex system with many interfaces that determine its behavior. A dataset that comprises merely a collection of sensor measurements and network traffic may not be sufficient to develop resilient AI defensive or offensive agents. In this paper, we study the \emph{elements} of CPS security datasets required to capture the system behavior and interactions, and propose a dataset architecture that has the potential to enhance the performance of AI algorithms in securing cyber physical systems. The framework includes dataset elements, attack representation, and required dataset features. We compare existing datasets to the proposed architecture to identify the current limitations and discuss the future of CPS dataset generation using testbeds.
|
1901.04846
|
Felix M. Riese
|
Felix M. Riese, Sina Keller
|
Soil Texture Classification with 1D Convolutional Neural Networks based
on Hyperspectral Data
|
Accepted to the ISPRS Geospatial Week 2019 in Enschede (NL)
| null |
10.5194/isprs-annals-IV-2-W5-615-2019
| null |
cs.CV cs.LG physics.geo-ph stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soil texture is important for many environmental processes. In this paper, we
study the classification of soil texture based on hyperspectral data. We
develop and implement three 1-dimensional (1D) convolutional neural networks
(CNN): the LucasCNN, the LucasResNet which contains an identity block as
residual network, and the LucasCoordConv with an additional coordinates layer.
Furthermore, we modify two existing 1D CNN approaches for the presented
classification task. The code of all five CNN approaches is available on GitHub
(Riese, 2019). We evaluate the performance of the CNN approaches and compare
them to a random forest classifier. Thereby, we rely on the freely available
LUCAS topsoil dataset. The CNN approach with the least depth turns out to be
the best performing classifier. The LucasCoordConv achieves the best
performance regarding the average accuracy. In future work, we can further
enhance the introduced LucasCNN, LucasResNet and LucasCoordConv and include
additional variables of the rich LUCAS dataset.
|
[
{
"created": "Tue, 15 Jan 2019 14:29:04 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2019 16:14:04 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Mar 2019 13:57:12 GMT",
"version": "v3"
}
] |
2019-07-02
|
[
[
"Riese",
"Felix M.",
""
],
[
"Keller",
"Sina",
""
]
] |
Soil texture is important for many environmental processes. In this paper, we study the classification of soil texture based on hyperspectral data. We develop and implement three 1-dimensional (1D) convolutional neural networks (CNN): the LucasCNN, the LucasResNet which contains an identity block as residual network, and the LucasCoordConv with an additional coordinates layer. Furthermore, we modify two existing 1D CNN approaches for the presented classification task. The code of all five CNN approaches is available on GitHub (Riese, 2019). We evaluate the performance of the CNN approaches and compare them to a random forest classifier. Thereby, we rely on the freely available LUCAS topsoil dataset. The CNN approach with the least depth turns out to be the best performing classifier. The LucasCoordConv achieves the best performance regarding the average accuracy. In future work, we can further enhance the introduced LucasCNN, LucasResNet and LucasCoordConv and include additional variables of the rich LUCAS dataset.
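The core operation of such 1D CNNs, a convolution along the spectral axis, can be sketched in plain NumPy; shapes are illustrative and the paper's models are of course full multi-layer networks:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1D convolution: x has shape (channels_in, length), kernels has
    shape (channels_out, channels_in, k); returns (channels_out, out_length)."""
    c_out, c_in, k = kernels.shape
    out_len = (x.shape[1] - k) // stride + 1
    out = np.empty((c_out, out_len))
    for o in range(c_out):
        for t in range(out_len):
            # dot product of the o-th filter with a length-k window of x
            out[o, t] = np.sum(kernels[o] * x[:, t * stride : t * stride + k])
    return out
```

Stacking a few such layers with nonlinearities and a dense classifier head gives a network of the general shape used for per-sample spectral classification.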
|
2211.08799
|
Eyad Kannout
|
Eyad Kannout, Hung Son Nguyen, Marek Grzegorowski
|
Speeding Up Recommender Systems Using Association Rules
|
13 pages, 3 figures, 1 table, 14th Asian Conference on Intelligent
Information and Database Systems (ACIIDS)
| null |
10.1007/978-3-031-21967-2_14
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems are considered one of the most rapidly growing branches
of Artificial Intelligence. The demand for finding more efficient techniques to
generate recommendations becomes urgent. However, many recommendations become
useless if there is a delay in generating and showing them to the user.
Therefore, we focus on improving the speed of recommendation systems without
impacting the accuracy. In this paper, we suggest a novel recommender system
based on Factorization Machines and Association Rules (FMAR). We introduce an
approach to generate association rules using two algorithms: (i) apriori and
(ii) frequent pattern (FP) growth. These association rules will be utilized to
reduce the number of items passed to the factorization machines recommendation
model. We show that FMAR has significantly decreased the number of new items
that the recommender system has to predict and hence, decreased the required
time for generating the recommendations. On the other hand, while building the
FMAR tool, we concentrate on making a balance between prediction time and
accuracy of generated recommendations to ensure that the accuracy is not
significantly impacted compared to the accuracy of using factorization machines
without association rules.
|
[
{
"created": "Wed, 16 Nov 2022 09:55:15 GMT",
"version": "v1"
}
] |
2022-11-17
|
[
[
"Kannout",
"Eyad",
""
],
[
"Nguyen",
"Hung Son",
""
],
[
"Grzegorowski",
"Marek",
""
]
] |
Recommender systems are considered one of the most rapidly growing branches of Artificial Intelligence. The demand for finding more efficient techniques to generate recommendations becomes urgent. However, many recommendations become useless if there is a delay in generating and showing them to the user. Therefore, we focus on improving the speed of recommendation systems without impacting the accuracy. In this paper, we suggest a novel recommender system based on Factorization Machines and Association Rules (FMAR). We introduce an approach to generate association rules using two algorithms: (i) apriori and (ii) frequent pattern (FP) growth. These association rules will be utilized to reduce the number of items passed to the factorization machines recommendation model. We show that FMAR has significantly decreased the number of new items that the recommender system has to predict and hence, decreased the required time for generating the recommendations. On the other hand, while building the FMAR tool, we concentrate on making a balance between prediction time and accuracy of generated recommendations to ensure that the accuracy is not significantly impacted compared to the accuracy of using factorization machines without association rules.
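A minimal pairwise version of the idea, hypothetical and much simpler than FMAR's apriori/FP-growth pipeline, shows how mined rules can prefilter the items passed to a downstream model:

```python
from collections import Counter
from itertools import combinations

def pair_rules(transactions, min_support, min_conf):
    """Mine rules x -> y between single items: the pair {x, y} must reach
    min_support over all transactions, and the rule confidence
    count(x, y) / count(x) must reach min_conf."""
    n = len(transactions)
    item_cnt = Counter(i for t in transactions for i in set(t))
    pair_cnt = Counter(frozenset(p) for t in transactions
                       for p in combinations(sorted(set(t)), 2))
    rules = {}
    for pair, c in pair_cnt.items():
        if c / n < min_support:
            continue  # pair not frequent enough
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):
            if c / item_cnt[x] >= min_conf:
                rules.setdefault(x, set()).add(y)
    return rules
```

The candidate set for a user is then the union of rule consequents over the user's history, instead of the whole catalog, which is what shrinks the prediction workload.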
|
2210.10771
|
Martin Engilberge
|
Martin Engilberge, Weizhe Liu, Pascal Fua
|
Multi-view Tracking Using Weakly Supervised Human Motion Prediction
|
Accepted at WACV 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multi-view approaches to people-tracking have the potential to better handle
occlusions than single-view ones in crowded scenes. They often rely on the
tracking-by-detection paradigm, which involves detecting people first and then
connecting the detections. In this paper, we argue that an even more effective
approach is to predict people's motion over time and infer their presence in
individual frames from these predictions. This makes it possible to enforce
consistency both over time and across views of a single temporal frame. We
validate our approach on
the PETS2009 and WILDTRACK datasets and demonstrate that it outperforms
state-of-the-art methods.
|
[
{
"created": "Wed, 19 Oct 2022 17:58:23 GMT",
"version": "v1"
}
] |
2022-10-20
|
[
[
"Engilberge",
"Martin",
""
],
[
"Liu",
"Weizhe",
""
],
[
"Fua",
"Pascal",
""
]
] |
Multi-view approaches to people-tracking have the potential to better handle occlusions than single-view ones in crowded scenes. They often rely on the tracking-by-detection paradigm, which involves detecting people first and then connecting the detections. In this paper, we argue that an even more effective approach is to predict people's motion over time and infer their presence in individual frames from these predictions. This makes it possible to enforce consistency both over time and across views of a single temporal frame. We validate our approach on the PETS2009 and WILDTRACK datasets and demonstrate that it outperforms state-of-the-art methods.
|
1911.04058
|
Yiming Xu
|
Yiming Xu, Lin Chen, Zhongwei Cheng, Lixin Duan, Jiebo Luo
|
Open-Ended Visual Question Answering by Multi-Modal Domain Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of visual question answering (VQA) in images by
exploiting supervised domain adaptation, where there is a large amount of
labeled data in the source domain but only limited labeled data in the target
domain with the goal to train a good target model. A straightforward solution
is to fine-tune a pre-trained source model by using those limited labeled
target data, but it usually cannot work well due to the considerable difference
between the data distributions of the source and target domains. Moreover, the
availability of multiple modalities (i.e., images, questions and answers) in
VQA poses further challenges to model the transferability between those
different modalities. In this paper, we tackle the above issues by proposing a
novel supervised multi-modal domain adaptation method for VQA to learn joint
feature embeddings across different domains and modalities. Specifically, we
align the data distributions of the source and target domains by considering
all modalities together as well as separately for each individual modality.
Based on the extensive experiments on the benchmark VQA 2.0 and VizWiz datasets
for the realistic open-ended VQA task, we demonstrate that our proposed method
outperforms the existing state-of-the-art approaches in this challenging domain
adaptation setting for VQA.
|
[
{
"created": "Mon, 11 Nov 2019 03:26:58 GMT",
"version": "v1"
}
] |
2019-11-12
|
[
[
"Xu",
"Yiming",
""
],
[
"Chen",
"Lin",
""
],
[
"Cheng",
"Zhongwei",
""
],
[
"Duan",
"Lixin",
""
],
[
"Luo",
"Jiebo",
""
]
] |
We study the problem of visual question answering (VQA) in images by exploiting supervised domain adaptation, where there is a large amount of labeled data in the source domain but only limited labeled data in the target domain, with the goal of training a good target model. A straightforward solution is to fine-tune a pre-trained source model by using those limited labeled target data, but it usually cannot work well due to the considerable difference between the data distributions of the source and target domains. Moreover, the availability of multiple modalities (i.e., images, questions and answers) in VQA poses further challenges to model the transferability between those different modalities. In this paper, we tackle the above issues by proposing a novel supervised multi-modal domain adaptation method for VQA to learn joint feature embeddings across different domains and modalities. Specifically, we align the data distributions of the source and target domains by considering all modalities together as well as separately for each individual modality. Based on the extensive experiments on the benchmark VQA 2.0 and VizWiz datasets for the realistic open-ended VQA task, we demonstrate that our proposed method outperforms the existing state-of-the-art approaches in this challenging domain adaptation setting for VQA.
|
2207.06400
|
Hongwen Zhang
|
Hongwen Zhang, Yating Tian, Yuxiang Zhang, Mengcheng Li, Liang An,
Zhenan Sun, Yebin Liu
|
PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular
Images
|
Accepted to IEEE TPAMI, Project page:
https://www.liuyebin.com/pymaf-x, An eXpressive extension of PyMAF
[arXiv:2103.16507] for monocular human/hand/face/full-body mesh recovery
| null |
10.1109/TPAMI.2023.3271691
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present PyMAF-X, a regression-based approach to recovering parametric
full-body models from monocular images. This task is very challenging since
minor parametric deviation may lead to noticeable misalignment between the
estimated mesh and the input image. Moreover, when integrating part-specific
estimations into the full-body model, existing solutions tend to either degrade
the alignment or produce unnatural wrist poses. To address these issues, we
propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression
network for well-aligned human mesh recovery and extend it as PyMAF-X for the
recovery of expressive full-body models. The core idea of PyMAF is to leverage
a feature pyramid and rectify the predicted parameters explicitly based on the
mesh-image alignment status. Specifically, given the currently predicted
parameters, mesh-aligned evidence will be extracted from finer-resolution
features accordingly and fed back for parameter rectification. To enhance the
alignment perception, an auxiliary dense supervision is employed to provide
mesh-image correspondence guidance while spatial alignment attention is
introduced to enable the awareness of the global contexts for our network. When
extending PyMAF for full-body mesh recovery, an adaptive integration strategy
is proposed in PyMAF-X to produce natural wrist poses while maintaining the
well-aligned performance of the part-specific estimations. The efficacy of our
approach is validated on several benchmark datasets for body, hand, face, and
full-body mesh recovery, where PyMAF and PyMAF-X effectively improve the
mesh-image alignment and achieve new state-of-the-art results. The project page
with code and video results can be found at https://www.liuyebin.com/pymaf-x.
|
[
{
"created": "Wed, 13 Jul 2022 17:58:33 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jul 2022 17:41:15 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Apr 2023 02:33:10 GMT",
"version": "v3"
}
] |
2023-05-01
|
[
[
"Zhang",
"Hongwen",
""
],
[
"Tian",
"Yating",
""
],
[
"Zhang",
"Yuxiang",
""
],
[
"Li",
"Mengcheng",
""
],
[
"An",
"Liang",
""
],
[
"Sun",
"Zhenan",
""
],
[
"Liu",
"Yebin",
""
]
] |
We present PyMAF-X, a regression-based approach to recovering parametric full-body models from monocular images. This task is very challenging since minor parametric deviation may lead to noticeable misalignment between the estimated mesh and the input image. Moreover, when integrating part-specific estimations into the full-body model, existing solutions tend to either degrade the alignment or produce unnatural wrist poses. To address these issues, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery and extend it as PyMAF-X for the recovery of expressive full-body models. The core idea of PyMAF is to leverage a feature pyramid and rectify the predicted parameters explicitly based on the mesh-image alignment status. Specifically, given the currently predicted parameters, mesh-aligned evidence will be extracted from finer-resolution features accordingly and fed back for parameter rectification. To enhance the alignment perception, an auxiliary dense supervision is employed to provide mesh-image correspondence guidance while spatial alignment attention is introduced to enable the awareness of the global contexts for our network. When extending PyMAF for full-body mesh recovery, an adaptive integration strategy is proposed in PyMAF-X to produce natural wrist poses while maintaining the well-aligned performance of the part-specific estimations. The efficacy of our approach is validated on several benchmark datasets for body, hand, face, and full-body mesh recovery, where PyMAF and PyMAF-X effectively improve the mesh-image alignment and achieve new state-of-the-art results. The project page with code and video results can be found at https://www.liuyebin.com/pymaf-x.
|
2406.10829
|
Yuxi Liu
|
Yuxi Liu and Mingyu Xiao
|
Solving Co-Path/Cycle Packing Faster than $3^k$
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The \textsc{Co-Path/Cycle Packing} problem asks whether we can delete at most
$k$ vertices from the input graph such that the remaining graph is a collection
of induced paths and cycles. \textsc{Co-Path/Cycle Packing} is a fundamental
graph problem that has important applications in bioinformatics. Although this
problem has been extensively studied in parameterized algorithms, it seems hard
to break the running time bound $3^k$. In 2015, Feng et al. provided an
$O^*(3^k)$-time randomized algorithm. Recently, Tsur showed that this problem
can be solved in $O^*(3^k)$ time deterministically. In this paper, by combining
several techniques such as path decomposition, dynamic programming, and
branch-and-search methods, we show that \textsc{Co-Path/Cycle Packing} can be
solved in $O^*(2.8192^k)$ time. As a by-product, we also show that the
\textsc{$d$-Bounded-Degree Vertex Deletion} problem, a generalization of
\textsc{Co-Path/Cycle Packing}, can be solved in $O^*((d + 2)^p)$ time if a
path decomposition of width $p$ is given, which implies that
\textsc{$d$-Bounded-Degree Vertex Deletion} is FPT with parameter $p+d$.
|
[
{
"created": "Sun, 16 Jun 2024 07:45:14 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Liu",
"Yuxi",
""
],
[
"Xiao",
"Mingyu",
""
]
] |
The \textsc{Co-Path/Cycle Packing} problem asks whether we can delete at most $k$ vertices from the input graph such that the remaining graph is a collection of induced paths and cycles. \textsc{Co-Path/Cycle Packing} is a fundamental graph problem that has important applications in bioinformatics. Although this problem has been extensively studied in parameterized algorithms, it seems hard to break the running time bound $3^k$. In 2015, Feng et al. provided an $O^*(3^k)$-time randomized algorithm. Recently, Tsur showed that this problem can be solved in $O^*(3^k)$ time deterministically. In this paper, by combining several techniques such as path decomposition, dynamic programming, and branch-and-search methods, we show that \textsc{Co-Path/Cycle Packing} can be solved in $O^*(2.8192^k)$ time. As a by-product, we also show that the \textsc{$d$-Bounded-Degree Vertex Deletion} problem, a generalization of \textsc{Co-Path/Cycle Packing}, can be solved in $O^*((d + 2)^p)$ time if a path decomposition of width $p$ is given, which implies that \textsc{$d$-Bounded-Degree Vertex Deletion} is FPT with parameter $p+d$.
|
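For intuition about the problem defined in the abstract above: a simple graph is a disjoint union of paths and cycles exactly when every vertex has degree at most 2, so membership after deletion is easy to test, and a naive baseline can try all deletion sets of size at most $k$. This brute force is far slower than the paper's $O^*(2.8192^k)$ algorithm; the example graph and function names are illustrative only.

```python
from itertools import combinations

def is_paths_and_cycles(adj, removed):
    """A simple graph is a disjoint union of paths and cycles
    iff every remaining vertex has degree at most 2."""
    return all(
        sum(1 for u in adj[v] if u not in removed) <= 2
        for v in adj if v not in removed
    )

def co_path_cycle_packing(adj, k):
    """Naive search over all deletion sets of size <= k."""
    vertices = list(adj)
    for size in range(k + 1):
        for removed in combinations(vertices, size):
            if is_paths_and_cycles(adj, set(removed)):
                return True
    return False

# The star K_{1,3}: its centre has degree 3, so one deletion is needed.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(co_path_cycle_packing(star, 0))  # False
print(co_path_cycle_packing(star, 1))  # True
```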
1807.04912
|
Francisco Silva
|
Francisco Silva, Mikel Sanz, Jo\~ao Seixas, Enrique Solano, and Yasser
Omar
|
Perceptrons from Memristors
|
Added new result on universality of memristors, minor changes in the
introduction and algorithm, references updated
|
Neural Networks, Volume 122, 273-278 (2020)
|
10.1016/j.neunet.2019.10.013
| null |
cs.ET cs.NE quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memristors, resistors with memory whose outputs depend on the history of
their inputs, have been used with success in neuromorphic architectures,
particularly as synapses and non-volatile memories. However, to the best of our
knowledge, no model for a network in which both the synapses and the neurons
are implemented using memristors has been proposed so far. In the present work
we introduce models for single and multilayer perceptrons based exclusively on
memristors. We adapt the delta rule to the memristor-based single-layer
perceptron and the backpropagation algorithm to the memristor-based multilayer
perceptron. Our results show that both perform as expected for perceptrons,
including satisfying Minsky-Papert's theorem. As a consequence of the Universal
Approximation Theorem, they also show that memristors are universal function
approximators. By using memristors for both the neurons and the synapses, our
models pave the way for novel memristor-based neural network architectures and
algorithms. A neural network based on memristors could show advantages in terms
of energy conservation and open up possibilities for other learning systems to
be adapted to a memristor-based paradigm, both in the classical and quantum
learning realms.
|
[
{
"created": "Fri, 13 Jul 2018 04:54:29 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Dec 2018 11:07:41 GMT",
"version": "v2"
}
] |
2020-01-14
|
[
[
"Silva",
"Francisco",
""
],
[
"Sanz",
"Mikel",
""
],
[
"Seixas",
"João",
""
],
[
"Solano",
"Enrique",
""
],
[
"Omar",
"Yasser",
""
]
] |
Memristors, resistors with memory whose outputs depend on the history of their inputs, have been used with success in neuromorphic architectures, particularly as synapses and non-volatile memories. However, to the best of our knowledge, no model for a network in which both the synapses and the neurons are implemented using memristors has been proposed so far. In the present work we introduce models for single and multilayer perceptrons based exclusively on memristors. We adapt the delta rule to the memristor-based single-layer perceptron and the backpropagation algorithm to the memristor-based multilayer perceptron. Our results show that both perform as expected for perceptrons, including satisfying Minsky-Papert's theorem. As a consequence of the Universal Approximation Theorem, they also show that memristors are universal function approximators. By using memristors for both the neurons and the synapses, our models pave the way for novel memristor-based neural network architectures and algorithms. A neural network based on memristors could show advantages in terms of energy conservation and open up possibilities for other learning systems to be adapted to a memristor-based paradigm, both in the classical and quantum learning realms.
|
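The delta rule mentioned in the abstract above can be sketched in its standard software form; this models no memristor physics, and the toy AND task, learning rate, and epoch count are illustrative assumptions.

```python
# Delta-rule training of a single-layer perceptron on the AND function.

def step(x):
    return 1.0 if x >= 0.0 else 0.0

def train(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y          # delta rule: move weights along the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0.0, 0.0, 0.0, 1.0]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule settles on a correct separating hyperplane; Minsky-Papert's theorem (cited in the abstract) says no such single-layer solution exists for XOR.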
2208.11857
|
Mengnan Du
|
Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao and Xia Hu
|
Shortcut Learning of Large Language Models in Natural Language
Understanding
|
Accepted by Communications of the ACM (CACM), Review Article
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have achieved state-of-the-art performance on a
series of natural language understanding tasks. However, these LLMs might rely
on dataset bias and artifacts as shortcuts for prediction. This has
significantly affected their generalizability and adversarial robustness. In
this paper, we provide a review of recent developments that address the
shortcut learning and robustness challenge of LLMs. We first introduce the
concept of shortcut learning in language models. We then present methods to
identify shortcut learning behavior in language models, characterize its
causes, and describe mitigation solutions.
Finally, we discuss key research challenges and potential research directions
in order to advance the field of LLMs.
|
[
{
"created": "Thu, 25 Aug 2022 03:51:39 GMT",
"version": "v1"
},
{
"created": "Sun, 7 May 2023 23:55:09 GMT",
"version": "v2"
}
] |
2023-05-09
|
[
[
"Du",
"Mengnan",
""
],
[
"He",
"Fengxiang",
""
],
[
"Zou",
"Na",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Hu",
"Xia",
""
]
] |
Large language models (LLMs) have achieved state-of-the-art performance on a series of natural language understanding tasks. However, these LLMs might rely on dataset bias and artifacts as shortcuts for prediction. This has significantly affected their generalizability and adversarial robustness. In this paper, we provide a review of recent developments that address the shortcut learning and robustness challenge of LLMs. We first introduce the concept of shortcut learning in language models. We then present methods to identify shortcut learning behavior in language models, characterize its causes, and describe mitigation solutions. Finally, we discuss key research challenges and potential research directions in order to advance the field of LLMs.
|
1505.05643
|
Johann Prankl
|
Aitor Aldoma, Johann Prankl, Alexander Svejda and Markus Vincze
|
Object Modelling with a Handheld RGB-D Camera
|
Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
| null | null |
OAGM/2015/08
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a flexible system to reconstruct 3D models of objects
captured with an RGB-D sensor. A major advantage of the method is that our
reconstruction pipeline allows the user to acquire a full 3D model of the
object. This is achieved by acquiring several partial 3D models in different
sessions that are automatically merged together to reconstruct a full model. In
addition, the 3D models acquired by our system can be directly used by
state-of-the-art object instance recognition and object tracking modules,
providing object-perception capabilities for different applications, such as
human-object interaction analysis or robot grasping. The system imposes no
constraints on the appearance of objects (textured or untextured) or on the
modelling setup (a moving camera with a static object, or a turn-table setup). The
proposed reconstruction system has been used to model a large number of objects
resulting in metrically accurate and visually appealing 3D models.
|
[
{
"created": "Thu, 21 May 2015 08:24:21 GMT",
"version": "v1"
}
] |
2015-05-22
|
[
[
"Aldoma",
"Aitor",
""
],
[
"Prankl",
"Johann",
""
],
[
"Svejda",
"Alexander",
""
],
[
"Vincze",
"Markus",
""
]
] |
This work presents a flexible system to reconstruct 3D models of objects captured with an RGB-D sensor. A major advantage of the method is that our reconstruction pipeline allows the user to acquire a full 3D model of the object. This is achieved by acquiring several partial 3D models in different sessions that are automatically merged together to reconstruct a full model. In addition, the 3D models acquired by our system can be directly used by state-of-the-art object instance recognition and object tracking modules, providing object-perception capabilities for different applications, such as human-object interaction analysis or robot grasping. The system imposes no constraints on the appearance of objects (textured or untextured) or on the modelling setup (a moving camera with a static object, or a turn-table setup). The proposed reconstruction system has been used to model a large number of objects resulting in metrically accurate and visually appealing 3D models.
|
2210.04013
|
Jichen Sun
|
Shuyuan Zhang, Jichen Sun, Shengkang Chen
|
Constrained Optimal Querying: Huffman Coding and Beyond
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Huffman coding is well known to be useful in certain decision problems
involving minimizing the average number of (freely chosen) queries to determine
an unknown random variable. However, in problems where the queries are more
constrained, the original Huffman coding no longer works. In this paper, we
propose a general model to describe such problems and two coding schemes: one is
Huffman-based, and the other is called GBSC (Greedy Binary Separation Coding). We
prove the optimality of GBSC by induction on a binary decision tree, showing
in particular that GBSC is at least as good as Shannon coding. We then compare the two
algorithms based on these codes by testing them on two problems, DNA
detection and 1-player Battleship, and find both to be decent approximation
algorithms, with the Huffman-based algorithm giving an expected length 1.1 times
the true optimum in the DNA detection problem, and GBSC yielding an average number
of queries 1.4 times the theoretical optimum in 1-player Battleship.
|
[
{
"created": "Sat, 8 Oct 2022 12:54:00 GMT",
"version": "v1"
}
] |
2022-10-11
|
[
[
"Zhang",
"Shuyuan",
""
],
[
"Sun",
"Jichen",
""
],
[
"Chen",
"Shengkang",
""
]
] |
Huffman coding is well known to be useful in certain decision problems involving minimizing the average number of (freely chosen) queries to determine an unknown random variable. However, in problems where the queries are more constrained, the original Huffman coding no longer works. In this paper, we propose a general model to describe such problems and two coding schemes: one is Huffman-based, and the other is called GBSC (Greedy Binary Separation Coding). We prove the optimality of GBSC by induction on a binary decision tree, showing in particular that GBSC is at least as good as Shannon coding. We then compare the two algorithms based on these codes by testing them on two problems, DNA detection and 1-player Battleship, and find both to be decent approximation algorithms, with the Huffman-based algorithm giving an expected length 1.1 times the true optimum in the DNA detection problem, and GBSC yielding an average number of queries 1.4 times the theoretical optimum in 1-player Battleship.
|
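The unconstrained baseline the abstract above starts from is easy to make concrete: build a Huffman code and measure the expected number of binary queries. This sketch shows only classical Huffman coding, not the paper's GBSC scheme, and the example distribution is illustrative.

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code for symbol probabilities {symbol: p}."""
    # Heap entries: (probability, tiebreak id, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)      # two least likely subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def expected_length(probs, code):
    """Average number of bits (= freely chosen yes/no queries) per symbol."""
    return sum(p * len(code[s]) for s, p in probs.items())

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
print(expected_length(probs, code))  # 1.75, matching the entropy of this dyadic source
```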
1807.11419
|
Tselil Schramm
|
Prasad Raghavendra, Tselil Schramm, David Steurer
|
High-dimensional estimation via sum-of-squares proofs
| null | null | null | null |
cs.DS cs.CC cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimation is the computational task of recovering a hidden parameter $x$
associated with a distribution $D_x$, given a measurement $y$ sampled from the
distribution. High dimensional estimation problems arise naturally in
statistics, machine learning, and complexity theory.
Many high dimensional estimation problems can be formulated as systems of
polynomial equations and inequalities, and thus give rise to natural
probability distributions over polynomial systems. Sum-of-squares proofs
provide a powerful framework to reason about polynomial systems, and further
there exist efficient algorithms to search for low-degree sum-of-squares
proofs.
Understanding and characterizing the power of sum-of-squares proofs for
estimation problems has been a subject of intense study in recent years. On one
hand, there is a growing body of work utilizing sum-of-squares proofs for
recovering solutions to polynomial systems when the system is feasible. On the
other hand, a general technique referred to as pseudocalibration has been
developed towards showing lower bounds on the degree of sum-of-squares proofs.
Finally, the existence of sum-of-squares refutations of a polynomial system has
been shown to be intimately connected to the existence of spectral algorithms.
In this article we survey these developments.
|
[
{
"created": "Mon, 30 Jul 2018 16:13:57 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2019 00:56:07 GMT",
"version": "v2"
}
] |
2019-08-07
|
[
[
"Raghavendra",
"Prasad",
""
],
[
"Schramm",
"Tselil",
""
],
[
"Steurer",
"David",
""
]
] |
Estimation is the computational task of recovering a hidden parameter $x$ associated with a distribution $D_x$, given a measurement $y$ sampled from the distribution. High dimensional estimation problems arise naturally in statistics, machine learning, and complexity theory. Many high dimensional estimation problems can be formulated as systems of polynomial equations and inequalities, and thus give rise to natural probability distributions over polynomial systems. Sum-of-squares proofs provide a powerful framework to reason about polynomial systems, and further there exist efficient algorithms to search for low-degree sum-of-squares proofs. Understanding and characterizing the power of sum-of-squares proofs for estimation problems has been a subject of intense study in recent years. On one hand, there is a growing body of work utilizing sum-of-squares proofs for recovering solutions to polynomial systems when the system is feasible. On the other hand, a general technique referred to as pseudocalibration has been developed towards showing lower bounds on the degree of sum-of-squares proofs. Finally, the existence of sum-of-squares refutations of a polynomial system has been shown to be intimately connected to the existence of spectral algorithms. In this article we survey these developments.
|
2007.10300
|
Or Litany
|
Shubham Tulsiani, Or Litany, Charles R. Qi, He Wang, Leonidas J.
Guibas
|
Object-Centric Multi-View Aggregation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach for aggregating a sparse set of views of an object in
order to compute a semi-implicit 3D representation in the form of a volumetric
feature grid. Key to our approach is an object-centric canonical 3D coordinate
system into which views can be lifted, without explicit camera pose estimation,
and then combined -- in a manner that can accommodate a variable number of
views and is view order independent. We show that computing a symmetry-aware
mapping from pixels to the canonical coordinate system allows us to better
propagate information to unseen regions, as well as to robustly overcome pose
ambiguities during inference. Our aggregate representation enables us to
perform 3D inference tasks like volumetric reconstruction and novel view
synthesis, and we use these tasks to demonstrate the benefits of our
aggregation approach as compared to implicit or camera-centric alternatives.
|
[
{
"created": "Mon, 20 Jul 2020 17:38:31 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2020 05:17:19 GMT",
"version": "v2"
}
] |
2020-07-22
|
[
[
"Tulsiani",
"Shubham",
""
],
[
"Litany",
"Or",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Wang",
"He",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
We present an approach for aggregating a sparse set of views of an object in order to compute a semi-implicit 3D representation in the form of a volumetric feature grid. Key to our approach is an object-centric canonical 3D coordinate system into which views can be lifted, without explicit camera pose estimation, and then combined -- in a manner that can accommodate a variable number of views and is view order independent. We show that computing a symmetry-aware mapping from pixels to the canonical coordinate system allows us to better propagate information to unseen regions, as well as to robustly overcome pose ambiguities during inference. Our aggregate representation enables us to perform 3D inference tasks like volumetric reconstruction and novel view synthesis, and we use these tasks to demonstrate the benefits of our aggregation approach as compared to implicit or camera-centric alternatives.
|
1907.12501
|
Matthias Knorr
|
Matti Berthold, Ricardo Gon\c{c}alves, Matthias Knorr, Jo\~ao Leite
|
A Syntactic Operator for Forgetting that Satisfies Strong Persistence
|
Paper presented at the 35th International Conference on Logic
Programming (ICLP 2019), Las Cruces, New Mexico, USA, 20-25 September 2019,
16 pages
|
Theory and Practice of Logic Programming 19 (2019) 1038-1055
|
10.1017/S1471068419000346
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Whereas the operation of forgetting has recently seen a considerable amount
of attention in the context of Answer Set Programming (ASP), most of it has
focused on theoretical aspects, leaving the practical issues largely untouched.
Recent studies include results about what sets of properties operators should
satisfy, as well as the abstract characterization of several operators and
their theoretical limits. However, no concrete operators have been
investigated.
In this paper, we address this issue by presenting the first concrete
operator that satisfies strong persistence - a property that seems to best
capture the essence of forgetting in the context of ASP - whenever this is
possible, and many other important properties. The operator is syntactic,
limiting the computation of the forgetting result to manipulating the rules in
which the atoms to be forgotten occur, naturally yielding a forgetting result
that is close to the original program.
This paper is under consideration for acceptance in TPLP.
|
[
{
"created": "Mon, 29 Jul 2019 16:03:48 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2019 11:32:06 GMT",
"version": "v2"
}
] |
2020-02-19
|
[
[
"Berthold",
"Matti",
""
],
[
"Gonçalves",
"Ricardo",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Leite",
"João",
""
]
] |
Whereas the operation of forgetting has recently seen a considerable amount of attention in the context of Answer Set Programming (ASP), most of it has focused on theoretical aspects, leaving the practical issues largely untouched. Recent studies include results about what sets of properties operators should satisfy, as well as the abstract characterization of several operators and their theoretical limits. However, no concrete operators have been investigated. In this paper, we address this issue by presenting the first concrete operator that satisfies strong persistence - a property that seems to best capture the essence of forgetting in the context of ASP - whenever this is possible, and many other important properties. The operator is syntactic, limiting the computation of the forgetting result to manipulating the rules in which the atoms to be forgotten occur, naturally yielding a forgetting result that is close to the original program. This paper is under consideration for acceptance in TPLP.
|
2310.13291
|
Ruixiang Tang
|
Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A Inan, Janardhan
Kulkarni, Xia Hu
|
Assessing Privacy Risks in Language Models: A Case Study on
Summarization Tasks
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have revolutionized the field of NLP by achieving
state-of-the-art performance on various tasks. However, there is a concern that
these models may disclose information in the training data. In this study, we
focus on the summarization task and investigate the membership inference (MI)
attack: given a sample and black-box access to a model's API, determine whether
the sample was part of the training data. We exploit text
similarity and the model's resistance to document modifications as potential MI
signals and evaluate their effectiveness on widely used datasets. Our results
demonstrate that summarization models are at risk of exposing data membership,
even in cases where the reference summary is not available. Furthermore, we
discuss several safeguards for training summarization models to protect against
MI attacks and discuss the inherent trade-off between privacy and utility.
|
[
{
"created": "Fri, 20 Oct 2023 05:44:39 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Tang",
"Ruixiang",
""
],
[
"Lueck",
"Gord",
""
],
[
"Quispe",
"Rodolfo",
""
],
[
"Inan",
"Huseyin A",
""
],
[
"Kulkarni",
"Janardhan",
""
],
[
"Hu",
"Xia",
""
]
] |
Large language models have revolutionized the field of NLP by achieving state-of-the-art performance on various tasks. However, there is a concern that these models may disclose information in the training data. In this study, we focus on the summarization task and investigate the membership inference (MI) attack: given a sample and black-box access to a model's API, determine whether the sample was part of the training data. We exploit text similarity and the model's resistance to document modifications as potential MI signals and evaluate their effectiveness on widely used datasets. Our results demonstrate that summarization models are at risk of exposing data membership, even in cases where the reference summary is not available. Furthermore, we discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
|
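The text-similarity signal described in the abstract above can be sketched with a crude unigram-overlap score: if a model's output for a sample is unusually close to that sample's reference text, the sample may have been memorized during training. The score, threshold, and example texts are illustrative assumptions, not the paper's exact attack.

```python
def overlap_score(a, b):
    """Unigram F1 overlap between two texts (a crude ROUGE-1 stand-in)."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum(min(ta.count(w), tb.count(w)) for w in set(ta))
    if common == 0:
        return 0.0
    precision = common / len(ta)
    recall = common / len(tb)
    return 2 * precision * recall / (precision + recall)

def infer_membership(model_output, reference, threshold=0.7):
    """Flag a sample as a likely training member when the model's output
    is suspiciously similar to the sample's reference summary."""
    return overlap_score(model_output, reference) >= threshold

ref = "the cat sat on the mat"
print(infer_membership("the cat sat on the mat", ref))  # True: near-verbatim
print(infer_membership("a dog ran in the park", ref))   # False: unrelated text
```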
2311.10389
|
Wenhao Wang
|
Wenhao Wang, Guyue Li, Zhiming Chu, Haobo Li and Daniele Faccio
|
Two-Factor Authentication Approach Based on Behavior Patterns for
Defeating Puppet Attacks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fingerprint traits are widely recognized for their unique qualities and
security benefits. Despite their extensive use, fingerprint features can be
vulnerable to puppet attacks, where attackers manipulate a reluctant but
genuine user into completing the authentication process. Defending against such
attacks is challenging due to the coexistence of a legitimate identity and an
illegitimate intent. In this paper, we propose PUPGUARD, a solution designed to
guard against puppet attacks. The method is based on user behavioral patterns:
specifically, the user must press the capture device twice in succession
with different fingers during the authentication process. PUPGUARD leverages
both the image features of fingerprints and the timing characteristics of the
pressing intervals to establish two-factor authentication. More specifically,
after extracting image features and timing characteristics, and performing
feature selection on the image features, PUPGUARD fuses these two features into
a one-dimensional feature vector, and feeds it into a one-class classifier to
obtain the classification result. This two-factor authentication method
emphasizes dynamic behavioral patterns during the authentication process,
thereby enhancing security against puppet attacks. To assess PUPGUARD's
effectiveness, we conducted experiments on datasets collected from 31 subjects,
including image features and timing characteristics. Our experimental results
demonstrate that PUPGUARD achieves an impressive accuracy rate of 97.87% and a
remarkably low false positive rate (FPR) of 1.89%. Furthermore, we conducted
comparative experiments to validate the superiority of combining image features
and timing characteristics within PUPGUARD for enhancing resistance against
puppet attacks.
|
[
{
"created": "Fri, 17 Nov 2023 08:35:02 GMT",
"version": "v1"
}
] |
2023-11-20
|
[
[
"Wang",
"Wenhao",
""
],
[
"Li",
"Guyue",
""
],
[
"Chu",
"Zhiming",
""
],
[
"Li",
"Haobo",
""
],
[
"Faccio",
"Daniele",
""
]
] |
Fingerprint traits are widely recognized for their unique qualities and security benefits. Despite their extensive use, fingerprint features can be vulnerable to puppet attacks, where attackers manipulate a reluctant but genuine user into completing the authentication process. Defending against such attacks is challenging due to the coexistence of a legitimate identity and an illegitimate intent. In this paper, we propose PUPGUARD, a solution designed to guard against puppet attacks. This method is based on user behavioral patterns, specifically, the user needs to press the capture device twice successively with different fingers during the authentication process. PUPGUARD leverages both the image features of fingerprints and the timing characteristics of the pressing intervals to establish two-factor authentication. More specifically, after extracting image features and timing characteristics, and performing feature selection on the image features, PUPGUARD fuses these two features into a one-dimensional feature vector, and feeds it into a one-class classifier to obtain the classification result. This two-factor authentication method emphasizes dynamic behavioral patterns during the authentication process, thereby enhancing security against puppet attacks. To assess PUPGUARD's effectiveness, we conducted experiments on datasets collected from 31 subjects, including image features and timing characteristics. Our experimental results demonstrate that PUPGUARD achieves an impressive accuracy rate of 97.87% and a remarkably low false positive rate (FPR) of 1.89%. Furthermore, we conducted comparative experiments to validate the superiority of combining image features and timing characteristics within PUPGUARD for enhancing resistance against puppet attacks.
|
cs/9311102
| null |
J. C. Schlimmer, L. A. Hermens
|
Software Agents: Completing Patterns and Constructing User Interfaces
|
See http://www.jair.org/ for an online appendix and other files
accompanying this article
|
Journal of Artificial Intelligence Research, Vol 1, (1993), 61-89
| null | null |
cs.AI
| null |
To support the goal of allowing users to record and retrieve information,
this paper describes an interactive note-taking system for pen-based computers
with two distinctive features. First, it actively predicts what the user is
going to write. Second, it automatically constructs a custom, button-box user
interface on request. The system is an example of a learning-apprentice
software agent. A machine learning component characterizes the syntax and
semantics of the user's information. A performance system uses this learned
information to generate completion strings and construct a user interface.
Description of Online Appendix: People like to record information. Doing this
on paper is initially efficient, but lacks flexibility. Recording information
on a computer is less efficient but more powerful. In our new note taking
software, the user records information directly on a computer. Behind the
interface, an agent acts for the user. To help, it provides defaults and
constructs a custom user interface. The demonstration is a QuickTime movie of
the note taking agent in action. The file is a binhexed self-extracting
archive. Macintosh utilities for binhex are available from
mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the
dts/mac/sys.soft/quicktime.
|
[
{
"created": "Mon, 1 Nov 1993 00:00:00 GMT",
"version": "v1"
}
] |
2009-09-25
|
[
[
"Schlimmer",
"J. C.",
""
],
[
"Hermens",
"L. A.",
""
]
] |
To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface. Description of Online Appendix: People like to record information. Doing this on paper is initially efficient, but lacks flexibility. Recording information on a computer is less efficient but more powerful. In our new note taking software, the user records information directly on a computer. Behind the interface, an agent acts for the user. To help, it provides defaults and constructs a custom user interface. The demonstration is a QuickTime movie of the note taking agent in action. The file is a binhexed self-extracting archive. Macintosh utilities for binhex are available from mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the dts/mac/sys.soft/quicktime.
|
2012.12472
|
Howard H. Yang
|
Howard H. Yang, Chao Xu, Xijun Wang, Daquan Feng, and Tony Q. S. Quek
|
Understanding Age of Information in Large-Scale Wireless Networks
| null | null | null | null |
cs.IT cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The notion of age-of-information (AoI) is investigated in the context of
large-scale wireless networks, in which transmitters need to send a sequence of
information packets, which are generated as independent Bernoulli processes, to
their intended receivers over a shared spectrum. Due to interference, the rate
of packet depletion at any given node is entangled with both the spatial
configurations, which determine the path loss, and temporal dynamics, which
influence the active states, of the other transmitters, resulting in the queues
to interact with each other in both space and time over the entire network. To
that end, variants in the packet update frequency affect not just the
inter-arrival time but also the departure process, and the impact of such
phenomena on the AoI is not well understood. In this paper, we establish a
theoretical framework to characterize the AoI performance in the aforementioned
setting. Particularly, tractable expressions are derived for both the peak and
average AoI under two different transmission protocols, namely the FCFS and the
LCFS-PR. Based on the theoretical outcomes, we find that: i) networks operating
under LCFS-PR are able to attain smaller values of peak and average AoI than
that under FCFS, whereas the gain is more pronounced when the infrastructure is
densely deployed, ii) in sparsely deployed networks, ALOHA with a universally
designed channel access probability is not instrumental in reducing the AoI,
thus calling for more advanced channel access approaches, and iii) when the
infrastructure is densely rolled out, there exists a non-trivial ALOHA channel
access probability that minimizes the peak and average AoI under both FCFS and
LCFS-PR.
|
[
{
"created": "Wed, 23 Dec 2020 03:57:50 GMT",
"version": "v1"
}
] |
2020-12-24
|
[
[
"Yang",
"Howard H.",
""
],
[
"Xu",
"Chao",
""
],
[
"Wang",
"Xijun",
""
],
[
"Feng",
"Daquan",
""
],
[
"Quek",
"Tony Q. S.",
""
]
] |
The notion of age-of-information (AoI) is investigated in the context of large-scale wireless networks, in which transmitters need to send a sequence of information packets, which are generated as independent Bernoulli processes, to their intended receivers over a shared spectrum. Due to interference, the rate of packet depletion at any given node is entangled with both the spatial configurations, which determine the path loss, and temporal dynamics, which influence the active states, of the other transmitters, resulting in the queues to interact with each other in both space and time over the entire network. To that end, variants in the packet update frequency affect not just the inter-arrival time but also the departure process, and the impact of such phenomena on the AoI is not well understood. In this paper, we establish a theoretical framework to characterize the AoI performance in the aforementioned setting. Particularly, tractable expressions are derived for both the peak and average AoI under two different transmission protocols, namely the FCFS and the LCFS-PR. Based on the theoretical outcomes, we find that: i) networks operating under LCFS-PR are able to attain smaller values of peak and average AoI than that under FCFS, whereas the gain is more pronounced when the infrastructure is densely deployed, ii) in sparsely deployed networks, ALOHA with a universally designed channel access probability is not instrumental in reducing the AoI, thus calling for more advanced channel access approaches, and iii) when the infrastructure is densely rolled out, there exists a non-trivial ALOHA channel access probability that minimizes the peak and average AoI under both FCFS and LCFS-PR.
|
2305.06951
|
Viet-Man Le
|
Viet-Man Le, Cristian Vidal Silva, Alexander Felfernig, David
Benavides, José Galindo, Thi Ngoc Trang Tran
|
FastDiagP: An Algorithm for Parallelized Direct Diagnosis
|
presented at The 37th AAAI Conference on Artificial Intelligence,
AAAI'23, Washington DC, USA
| null |
10.1609/aaai.v37i5.25792
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constraint-based applications attempt to identify a solution that meets all
defined user requirements. If the requirements are inconsistent with the
underlying constraint set, algorithms that compute diagnoses for inconsistent
constraints should be implemented to help users resolve the "no solution could
be found" dilemma. FastDiag is a typical direct diagnosis algorithm that
supports diagnosis calculation without predetermining conflicts. However, this
approach faces runtime performance issues, especially when analyzing complex
and large-scale knowledge bases. In this paper, we propose a novel algorithm,
so-called FastDiagP, which is based on the idea of speculative programming.
This algorithm extends FastDiag by integrating a parallelization mechanism that
anticipates and pre-calculates consistency checks requested by FastDiag. This
mechanism helps to provide consistency checks with fast answers and boosts the
algorithm's runtime performance. The performance improvements of our proposed
algorithm have been shown through empirical results using the Linux-2.6.3.33
configuration knowledge base.
|
[
{
"created": "Thu, 11 May 2023 16:26:23 GMT",
"version": "v1"
}
] |
2023-08-15
|
[
[
"Le",
"Viet-Man",
""
],
[
"Silva",
"Cristian Vidal",
""
],
[
"Felfernig",
"Alexander",
""
],
[
"Benavides",
"David",
""
],
[
"Galindo",
"José",
""
],
[
"Tran",
"Thi Ngoc Trang",
""
]
] |
Constraint-based applications attempt to identify a solution that meets all defined user requirements. If the requirements are inconsistent with the underlying constraint set, algorithms that compute diagnoses for inconsistent constraints should be implemented to help users resolve the "no solution could be found" dilemma. FastDiag is a typical direct diagnosis algorithm that supports diagnosis calculation without predetermining conflicts. However, this approach faces runtime performance issues, especially when analyzing complex and large-scale knowledge bases. In this paper, we propose a novel algorithm, so-called FastDiagP, which is based on the idea of speculative programming. This algorithm extends FastDiag by integrating a parallelization mechanism that anticipates and pre-calculates consistency checks requested by FastDiag. This mechanism helps to provide consistency checks with fast answers and boosts the algorithm's runtime performance. The performance improvements of our proposed algorithm have been shown through empirical results using the Linux-2.6.3.33 configuration knowledge base.
|
1207.1700
|
Nadeem Javaid
|
Z. A. Khan, N. Javaid, M. H. Arshad, A. Bibi, B. Qasim
|
Performance Evaluation of Widely used Portknocking Algorithms
|
3rd WNM in conjunction with 14th HPCC-2012, Liverpool, UK
| null | null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Port knocking is a technique by which only a single packet or special
sequence will permit the firewall to open a port on a machine where all ports
are blocked by default. It is a passive authorization technique which offers
firewall-level authentication to ensure authorized access to potentially
vulnerable network services. In this paper, we present performance evaluation
and analytical comparison of three widely used port knocking (PK) algorithms,
Aldaba, FWKNOP and SIG-2. Comparative analysis is based upon ten selected
parameters; Platforms (Supported OS), Implementation (PK, SPA or both),
Protocols (UDP, TCP, ICMP), Out of Order packet delivery, NAT (Network Address
Translation), Encryption Algorithms, Root privileges (For installation and
operation), Weak Passwords, Replay Attacks and IPv6 compatibility. Based upon
these parameters, relative performance score has been given to each algorithm.
Finally, we deduce that FWKNOP due to compatibility with windows client is the
most efficient among chosen PK implementations.
|
[
{
"created": "Fri, 6 Jul 2012 18:12:08 GMT",
"version": "v1"
}
] |
2012-07-09
|
[
[
"Khan",
"Z. A.",
""
],
[
"Javaid",
"N.",
""
],
[
"Arshad",
"M. H.",
""
],
[
"Bibi",
"A.",
""
],
[
"Qasim",
"B.",
""
]
] |
Port knocking is a technique by which only a single packet or special sequence will permit the firewall to open a port on a machine where all ports are blocked by default. It is a passive authorization technique which offers firewall-level authentication to ensure authorized access to potentially vulnerable network services. In this paper, we present performance evaluation and analytical comparison of three widely used port knocking (PK) algorithms, Aldaba, FWKNOP and SIG-2. Comparative analysis is based upon ten selected parameters; Platforms (Supported OS), Implementation (PK, SPA or both), Protocols (UDP, TCP, ICMP), Out of Order packet delivery, NAT (Network Address Translation), Encryption Algorithms, Root privileges (For installation and operation), Weak Passwords, Replay Attacks and IPv6 compatibility. Based upon these parameters, relative performance score has been given to each algorithm. Finally, we deduce that FWKNOP due to compatibility with windows client is the most efficient among chosen PK implementations.
|
1903.05625
|
Tim Meinhardt
|
Philipp Bergmann, Tim Meinhardt, Laura Leal-Taixe
|
Tracking without bells and whistles
| null | null |
10.1109/ICCV.2019.00103
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of tracking multiple objects in a video sequence poses several
challenging tasks. For tracking-by-detection, these include object
re-identification, motion prediction and dealing with occlusions. We present a
tracker (without bells and whistles) that accomplishes tracking without
specifically targeting any of these tasks, in particular, we perform no
training or optimization on tracking data. To this end, we exploit the bounding
box regression of an object detector to predict the position of an object in
the next frame, thereby converting a detector into a Tracktor. We demonstrate
the potential of Tracktor and provide a new state-of-the-art on three
multi-object tracking benchmarks by extending it with a straightforward
re-identification and camera motion compensation. We then perform an analysis
on the performance and failure cases of several state-of-the-art tracking
methods in comparison to our Tracktor. Surprisingly, none of the dedicated
tracking methods are considerably better in dealing with complex tracking
scenarios, namely, small and occluded objects or missing detections. However,
our approach tackles most of the easy tracking scenarios. Therefore, we
motivate our approach as a new tracking paradigm and point out promising future
research directions. Overall, Tracktor yields superior tracking performance
than any current tracking method and our analysis exposes remaining and
unsolved tracking challenges to inspire future research directions.
|
[
{
"created": "Wed, 13 Mar 2019 17:45:49 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2019 15:33:57 GMT",
"version": "v2"
},
{
"created": "Sat, 17 Aug 2019 14:40:56 GMT",
"version": "v3"
}
] |
2021-04-30
|
[
[
"Bergmann",
"Philipp",
""
],
[
"Meinhardt",
"Tim",
""
],
[
"Leal-Taixe",
"Laura",
""
]
] |
The problem of tracking multiple objects in a video sequence poses several challenging tasks. For tracking-by-detection, these include object re-identification, motion prediction and dealing with occlusions. We present a tracker (without bells and whistles) that accomplishes tracking without specifically targeting any of these tasks, in particular, we perform no training or optimization on tracking data. To this end, we exploit the bounding box regression of an object detector to predict the position of an object in the next frame, thereby converting a detector into a Tracktor. We demonstrate the potential of Tracktor and provide a new state-of-the-art on three multi-object tracking benchmarks by extending it with a straightforward re-identification and camera motion compensation. We then perform an analysis on the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor. Surprisingly, none of the dedicated tracking methods are considerably better in dealing with complex tracking scenarios, namely, small and occluded objects or missing detections. However, our approach tackles most of the easy tracking scenarios. Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions. Overall, Tracktor yields superior tracking performance than any current tracking method and our analysis exposes remaining and unsolved tracking challenges to inspire future research directions.
|
1705.10281
|
Haichuan Ding
|
Haichuan Ding, Chi Zhang, Xuanheng Li, Jianqing Liu, Miao Pan, Yuguang
Fang, Shigang Chen
|
Session-Based Cooperation in Cognitive Radio Networks: A Network-Level
Approach
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In cognitive radio networks (CRNs), secondary users (SUs) can proactively
obtain spectrum access opportunities by helping with primary users' (PUs') data
transmissions. Currently, such kind of spectrum access is implemented via a
cooperative communications based link-level frame-based cooperative (LLC)
approach where individual SUs independently serve as relays for PUs in order to
gain spectrum access opportunities. Unfortunately, this LLC approach cannot
fully exploit spectrum access opportunities to enhance the throughput of CRNs
and fails to motivate PUs to join the spectrum sharing processes. To address
these challenges, we propose a network-level session-based cooperative (NLC)
approach where SUs are grouped together to cooperate with PUs session by
session, instead of frame by frame as what has been done in existing works, for
spectrum access opportunities of the corresponding group. Thanks to our
group-based session-by-session cooperating strategy, our NLC approach is able
to address all those challenges in the LLC approach. To articulate our NLC
approach, we further develop an NLC scheme under a cognitive capacity
harvesting network (CCHN) architecture. We formulate the cooperative mechanism
design as a cross-layer optimization problem with constraints on primary
session selection, flow routing and link scheduling. To search for solutions to
the optimization problem, we propose an augmented scheduling index ordering
based (SIO-based) algorithm to identify maximal independent sets. Through
extensive simulations, we demonstrate the effectiveness of the proposed NLC
approach and the superiority of the augmented SIO-based algorithm over the
traditional method.
|
[
{
"created": "Mon, 29 May 2017 16:27:12 GMT",
"version": "v1"
}
] |
2017-05-30
|
[
[
"Ding",
"Haichuan",
""
],
[
"Zhang",
"Chi",
""
],
[
"Li",
"Xuanheng",
""
],
[
"Liu",
"Jianqing",
""
],
[
"Pan",
"Miao",
""
],
[
"Fang",
"Yuguang",
""
],
[
"Chen",
"Shigang",
""
]
] |
In cognitive radio networks (CRNs), secondary users (SUs) can proactively obtain spectrum access opportunities by helping with primary users' (PUs') data transmissions. Currently, such kind of spectrum access is implemented via a cooperative communications based link-level frame-based cooperative (LLC) approach where individual SUs independently serve as relays for PUs in order to gain spectrum access opportunities. Unfortunately, this LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing processes. To address these challenges, we propose a network-level session-based cooperative (NLC) approach where SUs are grouped together to cooperate with PUs session by session, instead of frame by frame as what has been done in existing works, for spectrum access opportunities of the corresponding group. Thanks to our group-based session-by-session cooperating strategy, our NLC approach is able to address all those challenges in the LLC approach. To articulate our NLC approach, we further develop an NLC scheme under a cognitive capacity harvesting network (CCHN) architecture. We formulate the cooperative mechanism design as a cross-layer optimization problem with constraints on primary session selection, flow routing and link scheduling. To search for solutions to the optimization problem, we propose an augmented scheduling index ordering based (SIO-based) algorithm to identify maximal independent sets. Through extensive simulations, we demonstrate the effectiveness of the proposed NLC approach and the superiority of the augmented SIO-based algorithm over the traditional method.
|