Column summary (from the dataset viewer): id string (9–10) · submitter string (1–64, nullable) · authors string (4–20.7k) · title string (4–246) · comments string (1–523, nullable) · journal-ref string (4–404, nullable) · doi string (11–153, nullable) · report-no string (2–254, nullable) · categories string (5–98) · license string (9 classes) · orig_abstract string (14–3.35k) · versions list (1–60) · update_date string (10) · authors_parsed list (1–1.35k) · abstract string (11–3.34k)

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.14399
|
Liang Ding
|
Changtong Zan, Liang Ding, Li Shen, Yibing Zhen, Weifeng Liu, Dacheng
Tao
|
Building Accurate Translation-Tailored LLMs with Language Aware
Instruction Tuning
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Translation-tailored Large language models (LLMs) exhibit remarkable
translation capabilities, even competing with supervised-trained commercial
translation systems. However, off-target translation remains an unsolved
problem, especially for low-resource languages, hindering us from developing
accurate LLMs-based translation models. To mitigate the off-target translation
problem and enhance the performance of LLMs on translation, recent works have
either designed advanced prompting strategies to highlight the functionality of
translation instructions or exploited the in-context learning ability of LLMs
by feeding few-shot demonstrations. However, these methods essentially do not
improve LLM's ability to follow translation instructions, especially the
language direction information. In this work, we design a two-stage fine-tuning
algorithm to improve the instruction-following ability (especially the
translation direction) of LLMs. Specifically, we first tune LLMs with the
maximum likelihood estimation loss on the translation dataset to elicit the
basic translation capabilities. In the second stage, we construct
instruction-conflicting samples by randomly replacing the translation
directions with a wrong one within the instruction, and then introduce an extra
unlikelihood loss to learn those samples. Experiments on IWSLT and WMT
benchmarks upon the LLaMA model spanning 16 zero-shot directions show that,
compared to the competitive baseline -- translation-finetuned LLaMA, our method
could effectively reduce the off-target translation ratio (averagely -53.3\%),
thus improving translation quality with average +5.7 SacreBLEU and +16.4
BLEURT. Analysis shows that our method could preserve the model's general task
performance on AlpacaEval. Code and models will be released at
\url{https://github.com/alphadl/LanguageAware_Tuning}.
|
[
{
"created": "Thu, 21 Mar 2024 13:47:40 GMT",
"version": "v1"
}
] |
2024-03-22
|
[
[
"Zan",
"Changtong",
""
],
[
"Ding",
"Liang",
""
],
[
"Shen",
"Li",
""
],
[
"Zhen",
"Yibing",
""
],
[
"Liu",
"Weifeng",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Translation-tailored Large language models (LLMs) exhibit remarkable translation capabilities, even competing with supervised-trained commercial translation systems. However, off-target translation remains an unsolved problem, especially for low-resource languages, hindering us from developing accurate LLMs-based translation models. To mitigate the off-target translation problem and enhance the performance of LLMs on translation, recent works have either designed advanced prompting strategies to highlight the functionality of translation instructions or exploited the in-context learning ability of LLMs by feeding few-shot demonstrations. However, these methods essentially do not improve LLM's ability to follow translation instructions, especially the language direction information. In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability (especially the translation direction) of LLMs. Specifically, we first tune LLMs with the maximum likelihood estimation loss on the translation dataset to elicit the basic translation capabilities. In the second stage, we construct instruction-conflicting samples by randomly replacing the translation directions with a wrong one within the instruction, and then introduce an extra unlikelihood loss to learn those samples. Experiments on IWSLT and WMT benchmarks upon the LLaMA model spanning 16 zero-shot directions show that, compared to the competitive baseline -- translation-finetuned LLama, our method could effectively reduce the off-target translation ratio (averagely -53.3\%), thus improving translation quality with average +5.7 SacreBLEU and +16.4 BLEURT. Analysis shows that our method could preserve the model's general task performance on AlpacaEval. Code and models will be released at \url{https://github.com/alphadl/LanguageAware_Tuning}.
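The two-stage objective this abstract describes can be illustrated with a toy sketch. This is not the authors' released code (see their repository for that); it is a minimal illustration of the likelihood vs. unlikelihood terms, with all function names and the `alpha` weight hypothetical:

```python
import math

def mle_loss(p_target):
    # stage 1: standard negative log-likelihood on a correct
    # (instruction, translation) pair
    return -math.log(p_target)

def unlikelihood_loss(p_target):
    # stage 2: push probability mass *away* from the output of an
    # instruction-conflicting sample (wrong translation direction
    # substituted into the instruction)
    return -math.log(1.0 - p_target)

def stage2_loss(p_correct, p_conflicting, alpha=1.0):
    # combined stage-2 objective: keep likelihood on correct samples,
    # add the unlikelihood term on conflicting ones
    return mle_loss(p_correct) + alpha * unlikelihood_loss(p_conflicting)
```

A model that confidently follows a wrong-direction instruction pays a large unlikelihood penalty, which is the mechanism the abstract credits for the reduced off-target ratio.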
|
2003.12327
|
Lei Huang
|
Lei Huang, Lei Zhao, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
|
An Investigation into the Stochasticity of Batch Whitening
|
Accepted to CVPR 2020. The Code is available at
https://github.com/huangleiBuaa/StochasticityBW
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Batch Normalization (BN) is extensively employed in various network
architectures by performing standardization within mini-batches.
A full understanding of the process has been a central target in the deep
learning communities.
Unlike existing works, which usually only analyze the standardization
operation, this paper investigates the more general Batch Whitening (BW). Our
work originates from the observation that while various whitening
transformations equivalently improve the conditioning, they show significantly
different behaviors in discriminative scenarios and training Generative
Adversarial Networks (GANs).
We attribute this phenomenon to the stochasticity that BW introduces.
We quantitatively investigate the stochasticity of different whitening
transformations and show that it correlates well with the optimization
behaviors during training.
We also investigate how stochasticity relates to the estimation of population
statistics during inference.
Based on our analysis, we provide a framework for designing and comparing BW
algorithms in different scenarios.
Our proposed BW algorithm improves the residual networks by a significant
margin on ImageNet classification.
Besides, we show that the stochasticity of BW can improve the GAN's
performance with, however, the sacrifice of the training stability.
|
[
{
"created": "Fri, 27 Mar 2020 11:06:32 GMT",
"version": "v1"
}
] |
2020-03-30
|
[
[
"Huang",
"Lei",
""
],
[
"Zhao",
"Lei",
""
],
[
"Zhou",
"Yi",
""
],
[
"Zhu",
"Fan",
""
],
[
"Liu",
"Li",
""
],
[
"Shao",
"Ling",
""
]
] |
Batch Normalization (BN) is extensively employed in various network architectures by performing standardization within mini-batches. A full understanding of the process has been a central target in the deep learning communities. Unlike existing works, which usually only analyze the standardization operation, this paper investigates the more general Batch Whitening (BW). Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and training Generative Adversarial Networks (GANs). We attribute this phenomenon to the stochasticity that BW introduces. We quantitatively investigate the stochasticity of different whitening transformations and show that it correlates well with the optimization behaviors during training. We also investigate how stochasticity relates to the estimation of population statistics during inference. Based on our analysis, we provide a framework for designing and comparing BW algorithms in different scenarios. Our proposed BW algorithm improves the residual networks by a significant margin on ImageNet classification. Besides, we show that the stochasticity of BW can improve the GAN's performance with, however, the sacrifice of the training stability.
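ZCA whitening is one of the batch whitening transformations this line of work compares. A minimal sketch of whitening a mini-batch (not the paper's proposed algorithm, just the basic BW operation it analyzes):

```python
import numpy as np

def zca_whiten(x, eps=1e-5):
    # Whiten a mini-batch x of shape (N, D) so its features are
    # decorrelated with unit variance. The covariance is estimated
    # from the mini-batch itself, which is exactly the source of the
    # stochasticity the abstract investigates.
    xc = x - x.mean(axis=0, keepdims=True)          # batch centering
    cov = xc.T @ xc / x.shape[0]                    # batch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    inv_sqrt = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return xc @ inv_sqrt                            # whitened batch
```

Because each mini-batch yields a different covariance estimate, the effective transform fluctuates across batches, unlike standardization-only BN which fluctuates only in per-feature mean and variance.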
|
2403.05488
|
Fortun\'e Kponou
|
D. Fortune Kponou, Frejus A. A. Laleye, Eugene C. Ezin
|
FFSTC: Fongbe to French Speech Translation Corpus
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce the Fongbe to French Speech Translation Corpus
(FFSTC) for the first time. This corpus encompasses approximately 31 hours of
collected Fongbe language content, featuring both French transcriptions and
corresponding Fongbe voice recordings. FFSTC represents a comprehensive dataset
compiled through various collection methods and the efforts of dedicated
individuals. Furthermore, we conduct baseline experiments using Fairseq's
transformer_s and conformer models to evaluate data quality and validity. Our
results indicate a score of 8.96 for the transformer_s model and 8.14 for the
conformer model, establishing a baseline for the FFSTC corpus.
|
[
{
"created": "Fri, 8 Mar 2024 17:53:58 GMT",
"version": "v1"
}
] |
2024-03-11
|
[
[
"Kponou",
"D. Fortune",
""
],
[
"Laleye",
"Frejus A. A.",
""
],
[
"Ezin",
"Eugene C.",
""
]
] |
In this paper, we introduce the Fongbe to French Speech Translation Corpus (FFSTC) for the first time. This corpus encompasses approximately 31 hours of collected Fongbe language content, featuring both French transcriptions and corresponding Fongbe voice recordings. FFSTC represents a comprehensive dataset compiled through various collection methods and the efforts of dedicated individuals. Furthermore, we conduct baseline experiments using Fairseq's transformer_s and conformer models to evaluate data quality and validity. Our results indicate a score of 8.96 for the transformer_s model and 8.14 for the conformer model, establishing a baseline for the FFSTC corpus.
|
1908.03237
|
Andong Cao
|
Andong Cao, Ali Dhanaliwala, Jianbo Shi, Terence Gade, Brian Park
|
Image-based marker tracking and registration for intraoperative 3D
image-guided interventions using augmented reality
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Augmented reality has the potential to improve operating room workflow by
allowing physicians to "see" inside a patient through the projection of imaging
directly onto the surgical field. For this to be useful, the acquired imaging
must be quickly and accurately registered with the patient, and the registration
must be maintained. Here we describe a method for projecting a CT scan with
Microsoft Hololens and then aligning that projection to a set of fiduciary
markers. Radio-opaque stickers with unique QR-codes are placed on an object
prior to acquiring a CT scan. The location of the markers in the CT scan are
extracted and the CT scan is converted into a 3D surface object. The 3D object
is then projected using the Hololens onto a table on which the same markers are
placed. We designed an algorithm that aligns the markers on the 3D object with
the markers on the table. To extract the markers and convert the CT into a 3D
object took less than 5 seconds. To align three markers, it took $0.9 \pm 0.2$
seconds to achieve an accuracy of $5 \pm 2$ mm. These findings show that it is
feasible to use a combined radio-opaque optical marker, placed on a patient
prior to a CT scan, to subsequently align the acquired CT scan with the
patient.
|
[
{
"created": "Thu, 8 Aug 2019 18:57:34 GMT",
"version": "v1"
}
] |
2019-08-12
|
[
[
"Cao",
"Andong",
""
],
[
"Dhanaliwala",
"Ali",
""
],
[
"Shi",
"Jianbo",
""
],
[
"Gade",
"Terence",
""
],
[
"Park",
"Brian",
""
]
] |
Augmented reality has the potential to improve operating room workflow by allowing physicians to "see" inside a patient through the projection of imaging directly onto the surgical field. For this to be useful the acquired imaging must be quickly and accurately registered with patient and the registration must be maintained. Here we describe a method for projecting a CT scan with Microsoft Hololens and then aligning that projection to a set of fiduciary markers. Radio-opaque stickers with unique QR-codes are placed on an object prior to acquiring a CT scan. The location of the markers in the CT scan are extracted and the CT scan is converted into a 3D surface object. The 3D object is then projected using the Hololens onto a table on which the same markers are placed. We designed an algorithm that aligns the markers on the 3D object with the markers on the table. To extract the markers and convert the CT into a 3D object took less than 5 seconds. To align three markers, it took $0.9 \pm 0.2$ seconds to achieve an accuracy of $5 \pm 2$ mm. These findings show that it is feasible to use a combined radio-opaque optical marker, placed on a patient prior to a CT scan, to subsequently align the acquired CT scan with the patient.
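Aligning the markers extracted from the CT scan with the markers seen on the table is a rigid point-set registration problem. A standard least-squares solution is the Kabsch algorithm; the sketch below is an assumed implementation of that generic step, not the authors' HoloLens code:

```python
import numpy as np

def rigid_align(src, dst):
    # Least-squares rigid transform (Kabsch) mapping src marker
    # positions onto dst marker positions: dst_i ~ R @ src_i + t.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)                  # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dc - r @ sc
    return r, t
```

With the three (or more) radio-opaque markers identified in both coordinate frames, this transform registers the projected CT surface to the physical markers.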
|
1803.10039
|
Lei He
|
Lei He, Guanghui Wang and Zhanyi Hu
|
Learning Depth from Single Images with Deep Neural Network Embedding
Focal Length
| null | null |
10.1109/TIP.2018.2832296
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning depth from a single image, as an important issue in scene
understanding, has attracted a lot of attention in the past decade. The
accuracy of the depth estimation has been improved from conditional Markov
random fields, non-parametric methods, to deep convolutional neural networks
most recently. However, there exist inherent ambiguities in recovering 3D from
a single 2D image. In this paper, we first prove the ambiguity between the
focal length and monocular depth learning, and verify the result using
experiments, showing that the focal length has a great influence on accurate
depth recovery. In order to learn monocular depth by embedding the focal
length, we propose a method to generate synthetic varying-focal-length dataset
from fixed-focal-length datasets, and a simple and effective method is
implemented to fill the holes in the newly generated images. For the sake of
accurate depth recovery, we propose a novel deep neural network to infer depth
through effectively fusing the middle-level information on the
fixed-focal-length dataset, which outperforms the state-of-the-art methods
built on pre-trained VGG. Furthermore, the newly generated varying-focal-length
dataset is taken as input to the proposed network in both learning and
inference phases. Extensive experiments on the fixed- and varying-focal-length
datasets demonstrate that the learned monocular depth with embedded focal
length is significantly improved compared to that without embedding the focal
length information.
|
[
{
"created": "Tue, 27 Mar 2018 12:26:15 GMT",
"version": "v1"
}
] |
2018-08-01
|
[
[
"He",
"Lei",
""
],
[
"Wang",
"Guanghui",
""
],
[
"Hu",
"Zhanyi",
""
]
] |
Learning depth from a single image, as an important issue in scene understanding, has attracted a lot of attention in the past decade. The accuracy of the depth estimation has been improved from conditional Markov random fields, non-parametric methods, to deep convolutional neural networks most recently. However, there exist inherent ambiguities in recovering 3D from a single 2D image. In this paper, we first prove the ambiguity between the focal length and monocular depth learning, and verify the result using experiments, showing that the focal length has a great influence on accurate depth recovery. In order to learn monocular depth by embedding the focal length, we propose a method to generate synthetic varying-focal-length dataset from fixed-focal-length datasets, and a simple and effective method is implemented to fill the holes in the newly generated images. For the sake of accurate depth recovery, we propose a novel deep neural network to infer depth through effectively fusing the middle-level information on the fixed-focal-length dataset, which outperforms the state-of-the-art methods built on pre-trained VGG. Furthermore, the newly generated varying-focal-length dataset is taken as input to the proposed network in both learning and inference phases. Extensive experiments on the fixed- and varying-focal-length datasets demonstrate that the learned monocular depth with embedded focal length is significantly improved compared to that without embedding the focal length information.
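The focal-length/depth ambiguity the paper proves can be seen directly from the pinhole model: scaling the focal length and the depth by the same factor leaves the image coordinates unchanged. A tiny sketch (point coordinates are arbitrary illustrative values):

```python
def project(f, x, y, z):
    # pinhole projection of a 3D point (x, y, z) with focal length f
    return (f * x / z, f * y / z)

# scaling f and the depth z by the same factor s produces the exact
# same image point -- so depth cannot be recovered from a single image
# without knowing (or embedding) the focal length
s = 2.0
u1 = project(1.0, x=0.3, y=0.4, z=2.0)
u2 = project(1.0 * s, x=0.3, y=0.4, z=2.0 * s)
```

This is why the paper argues for embedding the focal length into the network input rather than learning depth from pixels alone.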
|
1712.03757
|
Lei You
|
Lei You, Lei Lei, Di Yuan, Sumei Sun, Symeon Chatzinotas, Bj\"orn
Ottersten
|
A Framework for Optimizing Multi-cell NOMA: Delivering Demand with Less
Resource
|
7 pages
|
IEEE GLOBECOM 2017
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-orthogonal multiple access (NOMA) allows multiple users to simultaneously
access the same time-frequency resource by using superposition coding and
successive interference cancellation (SIC). Thus far, most papers on NOMA have
focused on performance gain for one or sometimes two base stations. In this
paper, we study multi-cell NOMA and provide a general framework for user
clustering and power allocation, taking into account inter-cell interference,
for optimizing resource allocation of NOMA in multi-cell networks of arbitrary
topology. We provide a series of theoretical analysis, to algorithmically
enable optimization approaches. The resulting algorithmic notion is very
general. Namely, we prove that for any performance metric that monotonically
increases in the cells' resource consumption, we have convergence guarantee for
global optimum. We apply the framework with its algorithmic concept to a
multi-cell scenario to demonstrate the gain of NOMA in achieving significantly
higher efficiency.
|
[
{
"created": "Mon, 11 Dec 2017 13:05:15 GMT",
"version": "v1"
}
] |
2017-12-12
|
[
[
"You",
"Lei",
""
],
[
"Lei",
"Lei",
""
],
[
"Yuan",
"Di",
""
],
[
"Sun",
"Sumei",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ottersten",
"Björn",
""
]
] |
Non-orthogonal multiple access (NOMA) allows multiple users to simultaneously access the same time-frequency resource by using superposition coding and successive interference cancellation (SIC). Thus far, most papers on NOMA have focused on performance gain for one or sometimes two base stations. In this paper, we study multi-cell NOMA and provide a general framework for user clustering and power allocation, taking into account inter-cell interference, for optimizing resource allocation of NOMA in multi-cell networks of arbitrary topology. We provide a series of theoretical analysis, to algorithmically enable optimization approaches. The resulting algorithmic notion is very general. Namely, we prove that for any performance metric that monotonically increases in the cells' resource consumption, we have convergence guarantee for global optimum. We apply the framework with its algorithmic concept to a multi-cell scenario to demonstrate the gain of NOMA in achieving significantly higher efficiency.
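The superposition-coding/SIC mechanism the abstract builds on can be made concrete for a single two-user downlink pair. The rate formulas below are the textbook ones; power and channel-gain values are hypothetical, and the paper's framework generalizes far beyond this single-cell sketch:

```python
import math

def noma_rates(p_weak, p_strong, g_weak, g_strong, noise=1.0):
    # Two-user downlink NOMA: both signals share one time-frequency
    # resource via superposition coding. The weak user decodes its own
    # signal treating the strong user's as interference; the strong
    # user first decodes and cancels the weak user's signal (SIC),
    # then decodes its own signal interference-free.
    r_weak = math.log2(1.0 + p_weak * g_weak / (p_strong * g_weak + noise))
    r_strong = math.log2(1.0 + p_strong * g_strong / noise)
    return r_weak, r_strong
```

Multi-cell NOMA adds inter-cell interference terms to both denominators, which is what makes the joint clustering and power-allocation problem in the paper nontrivial.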
|
1711.05159
|
Mathys Rennela
|
Mathys Rennela and Sam Staton
|
Classical Control, Quantum Circuits and Linear Logic in Enriched
Category Theory
| null |
Logical Methods in Computer Science, Volume 16, Issue 1 (March 10,
2020) lmcs:4068
|
10.23638/LMCS-16(1:30)2020
| null |
cs.LO cs.PL math.CT math.OA quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We describe categorical models of a circuit-based (quantum) functional
programming language. We show that enriched categories play a crucial role.
Following earlier work on QWire by Paykin et al., we consider both a simple
first-order linear language for circuits, and a more powerful host language,
such that the circuit language is embedded inside the host language. Our
categorical semantics for the host language is standard, and involves cartesian
closed categories and monads. We interpret the circuit language not in an
ordinary category, but in a category that is enriched in the host category. We
show that this structure is also related to linear/non-linear models. As an
extended example, we recall an earlier result that the category of W*-algebras
is dcpo-enriched, and we use this model to extend the circuit language with
some recursive types.
|
[
{
"created": "Tue, 14 Nov 2017 15:59:21 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Aug 2019 17:49:53 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Aug 2019 13:19:22 GMT",
"version": "v3"
},
{
"created": "Fri, 17 Jan 2020 21:38:22 GMT",
"version": "v4"
},
{
"created": "Mon, 9 Mar 2020 16:39:50 GMT",
"version": "v5"
}
] |
2023-06-22
|
[
[
"Rennela",
"Mathys",
""
],
[
"Staton",
"Sam",
""
]
] |
We describe categorical models of a circuit-based (quantum) functional programming language. We show that enriched categories play a crucial role. Following earlier work on QWire by Paykin et al., we consider both a simple first-order linear language for circuits, and a more powerful host language, such that the circuit language is embedded inside the host language. Our categorical semantics for the host language is standard, and involves cartesian closed categories and monads. We interpret the circuit language not in an ordinary category, but in a category that is enriched in the host category. We show that this structure is also related to linear/non-linear models. As an extended example, we recall an earlier result that the category of W*-algebras is dcpo-enriched, and we use this model to extend the circuit language with some recursive types.
|
2211.10344
|
Paul Escapil-Inchausp\'e
|
Paul Escapil-Inchausp\'e and Gonzalo A. Ruz
|
Physics-informed neural networks for operator equations with stochastic
data
| null | null | null | null |
cs.LG cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the computation of statistical moments to operator equations with
stochastic data. We remark that application of PINNs -- referred to as TPINNs
-- allows to solve the induced tensor operator equations under minimal changes
of existing PINNs code, and enabling handling of non-linear and time-dependent
operators. We propose two types of architectures, referred to as vanilla and
multi-output TPINNs, and investigate their benefits and limitations. Exhaustive
numerical experiments are performed; demonstrating applicability and
performance; raising a variety of new promising research avenues.
|
[
{
"created": "Tue, 15 Nov 2022 20:52:01 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2024 21:35:02 GMT",
"version": "v2"
}
] |
2024-05-07
|
[
[
"Escapil-Inchauspé",
"Paul",
""
],
[
"Ruz",
"Gonzalo A.",
""
]
] |
We consider the computation of statistical moments to operator equations with stochastic data. We remark that application of PINNs -- referred to as TPINNs -- allows to solve the induced tensor operator equations under minimal changes of existing PINNs code, and enabling handling of non-linear and time-dependent operators. We propose two types of architectures, referred to as vanilla and multi-output TPINNs, and investigate their benefits and limitations. Exhaustive numerical experiments are performed; demonstrating applicability and performance; raising a variety of new promising research avenues.
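The "induced tensor operator equation" has a simple finite-dimensional analogue for a linear operator: if A u = f with stochastic data f, the second moment M = E[u uᵀ] satisfies A M Aᵀ = E[f fᵀ]. The sketch below solves this tensorized equation directly (a TPINN would instead learn M); the matrices are illustrative values, not from the paper:

```python
import numpy as np

# Finite-dimensional analogue of the tensor operator equation:
# for A u = f with random right-hand side f, the second moment
# M = E[u u^T] satisfies A M A^T = Cf, where Cf = E[f f^T].
a = np.array([[2.0, 1.0],
              [0.0, 3.0]])
cf = np.array([[1.0, 0.2],           # assumed covariance of the data f
               [0.2, 0.5]])
a_inv = np.linalg.inv(a)
m = a_inv @ cf @ a_inv.T             # closed-form solution of A M A^T = Cf
```

The appeal noted in the abstract is that the tensorized problem has the same operator structure as the original one, so an existing PINN residual needs only minimal changes to target M instead of u.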
|
2306.10420
|
Cihat Ke\c{c}eci
|
Cihat Ke\c{c}eci, Katherine R. Davis, Erchin Serpedin
|
Federated Learning Based Distributed Localization of False Data
Injection Attacks on Smart Grids
|
9 pages, 6 figures
| null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data analysis and monitoring on smart grids are jeopardized by attacks on
cyber-physical systems. False data injection attack (FDIA) is one of the
classes of those attacks that target the smart measurement devices by injecting
malicious data. The employment of machine learning techniques in the detection
and localization of FDIA is proven to provide effective results. Training of
such models requires centralized processing of sensitive user data that may not
be plausible in a practical scenario. By employing federated learning for the
detection of FDIA attacks, it is possible to train a model for the detection
and localization of the attacks while preserving the privacy of sensitive user
data. However, federated learning introduces new problems such as the
personalization of the detectors in each node. In this paper, we propose a
federated learning-based scheme combined with a hybrid deep neural network
architecture that exploits the local correlations between the connected power
buses by employing graph neural networks as well as the temporal patterns in
the data by using LSTM layers. The proposed mechanism offers flexible and
efficient training of an FDIA detector in a distributed setup while preserving
the privacy of the clients. We validate the proposed architecture by extensive
simulations on the IEEE 57, 118, and 300 bus systems and real electricity load
data.
|
[
{
"created": "Sat, 17 Jun 2023 20:29:55 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Keçeci",
"Cihat",
""
],
[
"Davis",
"Katherine R.",
""
],
[
"Serpedin",
"Erchin",
""
]
] |
Data analysis and monitoring on smart grids are jeopardized by attacks on cyber-physical systems. False data injection attack (FDIA) is one of the classes of those attacks that target the smart measurement devices by injecting malicious data. The employment of machine learning techniques in the detection and localization of FDIA is proven to provide effective results. Training of such models requires centralized processing of sensitive user data that may not be plausible in a practical scenario. By employing federated learning for the detection of FDIA attacks, it is possible to train a model for the detection and localization of the attacks while preserving the privacy of sensitive user data. However, federated learning introduces new problems such as the personalization of the detectors in each node. In this paper, we propose a federated learning-based scheme combined with a hybrid deep neural network architecture that exploits the local correlations between the connected power buses by employing graph neural networks as well as the temporal patterns in the data by using LSTM layers. The proposed mechanism offers flexible and efficient training of an FDIA detector in a distributed setup while preserving the privacy of the clients. We validate the proposed architecture by extensive simulations on the IEEE 57, 118, and 300 bus systems and real electricity load data.
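The privacy-preserving training setup rests on a federated aggregation step. The sketch below shows standard FedAvg-style weighted averaging, which is one common choice for such a scheme; it is not necessarily the authors' exact aggregation rule, and the parameter lists stand in for the GNN/LSTM weights:

```python
def fedavg(client_params, client_sizes):
    # FedAvg-style aggregation: average each model parameter across
    # clients, weighted by local dataset size. Only parameters are
    # exchanged, so raw measurement data never leaves the client.
    total = float(sum(client_sizes))
    n = len(client_params[0])
    return [sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
            for i in range(n)]
```

The personalization problem the abstract mentions arises exactly here: a single averaged model may fit none of the heterogeneous local buses well, motivating per-node adaptation on top of the aggregate.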
|
2206.09262
|
Shanshan Wu
|
Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ziyu Liu, Zheng Xu,
Virginia Smith
|
Motley: Benchmarking Heterogeneity and Personalization in Federated
Learning
|
40 pages, 10 figures, 7 tables. EMNIST and Landmarks fine-tuning
results are corrected in (and after) v5. Code:
https://github.com/google-research/federated/tree/master/personalization_benchmark
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personalized federated learning considers learning models unique to each
client in a heterogeneous network. The resulting client-specific models have
been purported to improve metrics such as accuracy, fairness, and robustness in
federated networks. However, despite a plethora of work in this area, it
remains unclear: (1) which personalization techniques are most effective in
various settings, and (2) how important personalization truly is for realistic
federated applications. To better answer these questions, we propose Motley, a
benchmark for personalized federated learning. Motley consists of a suite of
cross-device and cross-silo federated datasets from varied problem domains, as
well as thorough evaluation metrics for better understanding the possible
impacts of personalization. We establish baselines on the benchmark by
comparing a number of representative personalized federated learning methods.
These initial results highlight strengths and weaknesses of existing
approaches, and raise several open questions for the community. Motley aims to
provide a reproducible means with which to advance developments in personalized
and heterogeneity-aware federated learning, as well as the related areas of
transfer learning, meta-learning, and multi-task learning.
|
[
{
"created": "Sat, 18 Jun 2022 18:18:49 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jul 2022 21:27:05 GMT",
"version": "v2"
},
{
"created": "Sun, 10 Jul 2022 02:29:32 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Jul 2022 23:54:01 GMT",
"version": "v4"
},
{
"created": "Thu, 18 Aug 2022 17:39:44 GMT",
"version": "v5"
},
{
"created": "Mon, 26 Sep 2022 04:57:41 GMT",
"version": "v6"
}
] |
2022-09-27
|
[
[
"Wu",
"Shanshan",
""
],
[
"Li",
"Tian",
""
],
[
"Charles",
"Zachary",
""
],
[
"Xiao",
"Yu",
""
],
[
"Liu",
"Ziyu",
""
],
[
"Xu",
"Zheng",
""
],
[
"Smith",
"Virginia",
""
]
] |
Personalized federated learning considers learning models unique to each client in a heterogeneous network. The resulting client-specific models have been purported to improve metrics such as accuracy, fairness, and robustness in federated networks. However, despite a plethora of work in this area, it remains unclear: (1) which personalization techniques are most effective in various settings, and (2) how important personalization truly is for realistic federated applications. To better answer these questions, we propose Motley, a benchmark for personalized federated learning. Motley consists of a suite of cross-device and cross-silo federated datasets from varied problem domains, as well as thorough evaluation metrics for better understanding the possible impacts of personalization. We establish baselines on the benchmark by comparing a number of representative personalized federated learning methods. These initial results highlight strengths and weaknesses of existing approaches, and raise several open questions for the community. Motley aims to provide a reproducible means with which to advance developments in personalized and heterogeneity-aware federated learning, as well as the related areas of transfer learning, meta-learning, and multi-task learning.
|
2309.08008
|
Yanshan Wang
|
Sonish Sivarajkumar, Mark Kelley, Alyssa Samolyk-Mazzanti, Shyam
Visweswaran, Yanshan Wang
|
An Empirical Evaluation of Prompting Strategies for Large Language
Models in Zero-Shot Clinical Natural Language Processing
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have shown remarkable capabilities in Natural
Language Processing (NLP), especially in domains where labeled data is scarce
or expensive, such as clinical domain. However, to unlock the clinical
knowledge hidden in these LLMs, we need to design effective prompts that can
guide them to perform specific clinical NLP tasks without any task-specific
training data. This is known as in-context learning, which is an art and
science that requires understanding the strengths and weaknesses of different
LLMs and prompt engineering approaches. In this paper, we present a
comprehensive and systematic experimental study on prompt engineering for five
clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence
Extraction, Coreference Resolution, Medication Status Extraction, and
Medication Attribute Extraction. We assessed the prompts proposed in recent
literature, including simple prefix, simple cloze, chain of thought, and
anticipatory prompts, and introduced two new types of prompts, namely heuristic
prompting and ensemble prompting. We evaluated the performance of these prompts
on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted
zero-shot prompting with few-shot prompting, and provide novel insights and
guidelines for prompt engineering for LLMs in clinical NLP. To the best of our
knowledge, this is one of the first works on the empirical evaluation of
different prompt engineering approaches for clinical NLP in this era of
generative AI, and we hope that it will inspire and inform future research in
this area.
|
[
{
"created": "Thu, 14 Sep 2023 19:35:00 GMT",
"version": "v1"
}
] |
2023-09-18
|
[
[
"Sivarajkumar",
"Sonish",
""
],
[
"Kelley",
"Mark",
""
],
[
"Samolyk-Mazzanti",
"Alyssa",
""
],
[
"Visweswaran",
"Shyam",
""
],
[
"Wang",
"Yanshan",
""
]
] |
Large language models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), especially in domains where labeled data is scarce or expensive, such as the clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. In this paper, we present a comprehensive and systematic experimental study on prompt engineering for five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. We assessed the prompts proposed in recent literature, including simple prefix, simple cloze, chain of thought, and anticipatory prompts, and introduced two new types of prompts, namely heuristic prompting and ensemble prompting. We evaluated the performance of these prompts on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted zero-shot prompting with few-shot prompting, and provided novel insights and guidelines for prompt engineering for LLMs in clinical NLP. To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.
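As a hypothetical illustration of the zero-shot versus few-shot distinction studied in this abstract (the task wording, note text, and function names are invented for the sketch, not taken from the paper):

```python
# Hypothetical sketch of zero-shot vs. few-shot prompt construction for a
# clinical extraction task; all wording and examples here are invented.
def zero_shot_prompt(note):
    """Zero-shot: a task instruction plus the query note, no demonstrations."""
    return ("Extract all medications and their status from the clinical note.\n"
            f"Note: {note}\nMedications:")

def few_shot_prompt(note, demos):
    """Few-shot: labeled demonstrations are prepended before the query note."""
    shots = "\n\n".join(f"Note: {n}\nMedications: {m}" for n, m in demos)
    return f"{shots}\n\n{zero_shot_prompt(note)}"

demos = [("Started lisinopril 10 mg daily.", "lisinopril (active)")]
prompt = few_shot_prompt("Metformin was discontinued last week.", demos)
print(prompt)
```

The only structural difference between the two conditions is the block of in-context demonstrations; everything else in the prompt stays fixed so that comparisons isolate the effect of the shots.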
|
2402.10059
|
Jovan Komatovic
|
Pierre Civit, Muhammad Ayaz Dzulfikar, Seth Gilbert, Rachid Guerraoui,
Jovan Komatovic, Manuel Vidigueira, Igor Zablotchi
|
Partial Synchrony for Free? New Upper Bounds for Byzantine Agreement
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Byzantine agreement allows n processes to decide on a common value, in spite
of arbitrary failures. The seminal Dolev-Reischuk bound states that any
deterministic solution to Byzantine agreement exchanges Omega(n^2) bits. In
synchronous networks, solutions with optimal O(n^2) bit complexity, optimal
fault tolerance, and no cryptography have been established for over three
decades. However, these solutions lack robustness under adverse network
conditions. Therefore, research has increasingly focused on Byzantine agreement
for partially synchronous networks. Numerous solutions have been proposed for
the partially synchronous setting. However, these solutions are notoriously
hard to prove correct, and the most efficient cryptography-free algorithms
still require O(n^3) exchanged bits in the worst case. In this paper, we
introduce Oper, the first generic transformation of deterministic Byzantine
agreement algorithms from synchrony to partial synchrony. Oper requires no
cryptography, is optimally resilient (n >= 3t+1, where t is the maximum number
of failures), and preserves the worst-case per-process bit complexity of the
transformed synchronous algorithm. Leveraging Oper, we present the first
partially synchronous Byzantine agreement algorithm that (1) achieves optimal
O(n^2) bit complexity, (2) requires no cryptography, and (3) is optimally
resilient (n >= 3t+1), thus showing that the Dolev-Reischuk bound is tight even
in partial synchrony. Moreover, we adapt Oper for long values and obtain
several new partially synchronous algorithms with improved complexity and
weaker (or completely absent) cryptographic assumptions.
|
[
{
"created": "Thu, 15 Feb 2024 16:29:24 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 08:40:21 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Feb 2024 12:06:57 GMT",
"version": "v3"
},
{
"created": "Fri, 5 Apr 2024 08:07:16 GMT",
"version": "v4"
}
] |
2024-04-08
|
[
[
"Civit",
"Pierre",
""
],
[
"Dzulfikar",
"Muhammad Ayaz",
""
],
[
"Gilbert",
"Seth",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Komatovic",
"Jovan",
""
],
[
"Vidigueira",
"Manuel",
""
],
[
"Zablotchi",
"Igor",
""
]
] |
Byzantine agreement allows n processes to decide on a common value, in spite of arbitrary failures. The seminal Dolev-Reischuk bound states that any deterministic solution to Byzantine agreement exchanges Omega(n^2) bits. In synchronous networks, solutions with optimal O(n^2) bit complexity, optimal fault tolerance, and no cryptography have been established for over three decades. However, these solutions lack robustness under adverse network conditions. Therefore, research has increasingly focused on Byzantine agreement for partially synchronous networks. Numerous solutions have been proposed for the partially synchronous setting. However, these solutions are notoriously hard to prove correct, and the most efficient cryptography-free algorithms still require O(n^3) exchanged bits in the worst case. In this paper, we introduce Oper, the first generic transformation of deterministic Byzantine agreement algorithms from synchrony to partial synchrony. Oper requires no cryptography, is optimally resilient (n >= 3t+1, where t is the maximum number of failures), and preserves the worst-case per-process bit complexity of the transformed synchronous algorithm. Leveraging Oper, we present the first partially synchronous Byzantine agreement algorithm that (1) achieves optimal O(n^2) bit complexity, (2) requires no cryptography, and (3) is optimally resilient (n >= 3t+1), thus showing that the Dolev-Reischuk bound is tight even in partial synchrony. Moreover, we adapt Oper for long values and obtain several new partially synchronous algorithms with improved complexity and weaker (or completely absent) cryptographic assumptions.
|
1507.02099
|
Ludo Waltman
|
Ludo Waltman
|
A review of the literature on citation impact indicators
| null | null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
|
[
{
"created": "Wed, 8 Jul 2015 11:21:58 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Dec 2015 16:20:02 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Feb 2016 15:11:58 GMT",
"version": "v3"
}
] |
2016-02-26
|
[
[
"Waltman",
"Ludo",
""
]
] |
Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
|
2109.03429
|
Friedrich Sommer
|
E. Paxon Frady, Denis Kleyko, Christopher J. Kymn, Bruno A. Olshausen,
Friedrich T. Sommer
|
Computing on Functions Using Randomized Vector Representations
|
33 pages, 18 Figures
| null | null | null |
cs.LG cs.NE q-bio.NC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Vector space models for symbolic processing that encode symbols by random
vectors have been proposed in cognitive science and connectionist communities
under the names Vector Symbolic Architecture (VSA), and, synonymously,
Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function
spaces by mapping continuous-valued data into a vector space such that the
inner product between the representations of any two data points represents a
similarity kernel. By analogy to VSA, we call this new function encoding and
computing framework Vector Function Architecture (VFA). In VFAs, vectors can
represent individual data points as well as elements of a function space (a
reproducing kernel Hilbert space). The algebraic vector operations, inherited
from VSA, correspond to well-defined operations in function space. Furthermore,
we study a previously proposed method for encoding continuous data, fractional
power encoding (FPE), which uses exponentiation of a random base vector to
produce randomized representations of data points and fulfills the kernel
properties for inducing a VFA. We show that the distribution from which
elements of the base vector are sampled determines the shape of the FPE kernel,
which in turn induces a VFA for computing with band-limited functions. In
particular, VFAs provide an algebraic framework for implementing large-scale
kernel machines with random features, extending Rahimi and Recht, 2007.
Finally, we demonstrate several applications of VFA models to problems in image
recognition, density estimation and nonlinear regression. Our analyses and
results suggest that VFAs constitute a powerful new framework for representing
and manipulating functions in distributed neural systems, with myriad
applications in artificial intelligence.
|
[
{
"created": "Wed, 8 Sep 2021 04:39:48 GMT",
"version": "v1"
}
] |
2021-09-09
|
[
[
"Frady",
"E. Paxon",
""
],
[
"Kleyko",
"Denis",
""
],
[
"Kymn",
"Christopher J.",
""
],
[
"Olshausen",
"Bruno A.",
""
],
[
"Sommer",
"Friedrich T.",
""
]
] |
Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA), and, synonymously, Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points represents a similarity kernel. By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA). In VFAs, vectors can represent individual data points as well as elements of a function space (a reproducing kernel Hilbert space). The algebraic vector operations, inherited from VSA, correspond to well-defined operations in function space. Furthermore, we study a previously proposed method for encoding continuous data, fractional power encoding (FPE), which uses exponentiation of a random base vector to produce randomized representations of data points and fulfills the kernel properties for inducing a VFA. We show that the distribution from which elements of the base vector are sampled determines the shape of the FPE kernel, which in turn induces a VFA for computing with band-limited functions. In particular, VFAs provide an algebraic framework for implementing large-scale kernel machines with random features, extending Rahimi and Recht, 2007. Finally, we demonstrate several applications of VFA models to problems in image recognition, density estimation and nonlinear regression. Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems, with myriad applications in artificial intelligence.
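A minimal numerical sketch of fractional power encoding with a phasor base vector (the dimensionality and the uniform phase distribution are illustrative choices; for i.i.d. uniform phases the induced kernel concentrates around sinc(x - y)):

```python
import numpy as np

# Fractional power encoding (FPE) sketch: a random unit-modulus (phasor) base
# vector is raised to the scalar power x to encode x; the normalized inner
# product between two encodings then behaves as a similarity kernel.
rng = np.random.default_rng(0)
D = 10_000                                  # illustrative dimensionality
phi = rng.uniform(-np.pi, np.pi, D)         # i.i.d. phases of the base vector

def encode(x):
    """Componentwise power z_j**x of the base vector z_j = exp(i * phi_j)."""
    return np.exp(1j * phi * x)

def kernel(x, y):
    """Normalized inner product; approximates sinc(x - y) for uniform phases."""
    return np.real(np.vdot(encode(x), encode(y))) / D

print(kernel(0.0, 0.0))   # exactly 1: identical encodings
print(kernel(0.0, 1.0))   # near 0, since sinc(1) = sin(pi)/pi = 0
```

Sampling the phases from a different distribution changes the kernel shape, which is the mechanism the abstract describes for inducing different VFAs.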
|
1809.00530
|
Ruidan He
|
Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
|
Adaptive Semi-supervised Learning for Cross-domain Sentiment
Classification
|
Accepted to EMNLP2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the cross-domain sentiment classification problem, where a
sentiment classifier is to be learned from a source domain and to be
generalized to a target domain. Our approach explicitly minimizes the distance
between the source and the target instances in an embedded feature space. With
the difference between source and target minimized, we then exploit additional
information from the target domain by consolidating the idea of semi-supervised
learning, for which we jointly employ two regularizations -- entropy
minimization and self-ensemble bootstrapping -- to incorporate the unlabeled
target data for classifier refinement. Our experimental results demonstrate
that the proposed approach can better leverage unlabeled data from the target
domain and achieve substantial improvements over baseline methods in various
experimental settings.
|
[
{
"created": "Mon, 3 Sep 2018 10:15:04 GMT",
"version": "v1"
}
] |
2018-09-05
|
[
[
"He",
"Ruidan",
""
],
[
"Lee",
"Wee Sun",
""
],
[
"Ng",
"Hwee Tou",
""
],
[
"Dahlmeier",
"Daniel",
""
]
] |
We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and to be generalized to a target domain. Our approach explicitly minimizes the distance between the source and the target instances in an embedded feature space. With the difference between source and target minimized, we then exploit additional information from the target domain by consolidating the idea of semi-supervised learning, for which we jointly employ two regularizations -- entropy minimization and self-ensemble bootstrapping -- to incorporate the unlabeled target data for classifier refinement. Our experimental results demonstrate that the proposed approach can better leverage unlabeled data from the target domain and achieve substantial improvements over baseline methods in various experimental settings.
|
1902.11266
|
M. Saquib Sarfraz
|
M. Saquib Sarfraz, Vivek Sharma, Rainer Stiefelhagen
|
Efficient Parameter-free Clustering Using First Neighbor Relations
|
CVPR 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new clustering method in the form of a single clustering
equation that is able to directly discover groupings in the data. The main
proposition is that the first neighbor of each sample is all one needs to
discover large chains and find the groups in the data. In contrast to most
existing clustering algorithms, our method does not require any
hyper-parameters, distance thresholds and/or the need to specify the number of
clusters. The proposed algorithm belongs to the family of hierarchical
agglomerative methods. The technique has a very low computational overhead, is
easily scalable and applicable to large practical problems. Evaluation on well
known datasets from different domains ranging between 1077 and 8.1 million
samples shows substantial performance gains when compared to the existing
clustering techniques.
|
[
{
"created": "Thu, 28 Feb 2019 18:12:57 GMT",
"version": "v1"
}
] |
2019-03-01
|
[
[
"Sarfraz",
"M. Saquib",
""
],
[
"Sharma",
"Vivek",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
We present a new clustering method in the form of a single clustering equation that is able to directly discover groupings in the data. The main proposition is that the first neighbor of each sample is all one needs to discover large chains and find the groups in the data. In contrast to most existing clustering algorithms, our method does not require any hyper-parameters, distance thresholds and/or the need to specify the number of clusters. The proposed algorithm belongs to the family of hierarchical agglomerative methods. The technique has a very low computational overhead, is easily scalable and applicable to large practical problems. Evaluation on well known datasets from different domains ranging between 1077 and 8.1 million samples shows substantial performance gains when compared to the existing clustering techniques.
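The core first-neighbor idea, linking every sample to its nearest neighbor and reading off connected components, can be sketched as follows (a single linking round for illustration; the full hierarchical algorithm applies this recursively, and the brute-force distance matrix here is an assumption for small data, not the paper's scalable implementation):

```python
import numpy as np

def first_neighbor_clusters(X):
    """One round of first-neighbor linking: join every sample to its nearest
    neighbor and return the connected components of the resulting graph."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a sample is not its own neighbor
    nn = d.argmin(axis=1)                # index of each sample's first neighbor

    parent = list(range(len(X)))         # union-find over the links i -- nn[i]
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in enumerate(nn):
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(X))])

# two well-separated blobs: first neighbors never cross between them
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
labels = first_neighbor_clusters(X)
```

Note that no threshold or cluster count appears anywhere: the grouping emerges entirely from the first-neighbor links, which is the parameter-free property the abstract claims.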
|
2401.15048
|
Dmytro Zakharov
|
Dmytro Zakharov, Oleksandr Kuznetsov, Emanuele Frontoni
|
Unrecognizable Yet Identifiable: Image Distortion with Preserved
Embeddings
| null | null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the realm of security applications, biometric authentication systems play
a crucial role, yet developing one raises challenges concerning privacy and
security. One of the most fundamental challenges lies in avoiding storing
biometrics directly in storage while still achieving
decently high accuracy. Addressing this issue, we contribute to both artificial
intelligence and engineering fields. We introduce an innovative image
distortion technique that effectively renders facial images unrecognizable to
the eye while maintaining their identifiability by neural network models. From
the theoretical perspective, we explore how reliable state-of-the-art
biometrics recognition neural networks are by checking the maximal degree of
image distortion, which leaves the predicted identity unchanged. On the other
hand, applying this technique demonstrates a practical solution to the
engineering challenge of balancing security, precision, and performance in
biometric authentication systems. Through experiments on widely used
datasets, we assess the effectiveness of our method in preserving AI feature
representation and distorting relative to conventional metrics. We also compare
our method with previously used approaches.
|
[
{
"created": "Fri, 26 Jan 2024 18:20:53 GMT",
"version": "v1"
}
] |
2024-01-29
|
[
[
"Zakharov",
"Dmytro",
""
],
[
"Kuznetsov",
"Oleksandr",
""
],
[
"Frontoni",
"Emanuele",
""
]
] |
In the realm of security applications, biometric authentication systems play a crucial role, yet developing one raises challenges concerning privacy and security. One of the most fundamental challenges lies in avoiding storing biometrics directly in storage while still achieving decently high accuracy. Addressing this issue, we contribute to both artificial intelligence and engineering fields. We introduce an innovative image distortion technique that effectively renders facial images unrecognizable to the eye while maintaining their identifiability by neural network models. From the theoretical perspective, we explore how reliable state-of-the-art biometrics recognition neural networks are by checking the maximal degree of image distortion, which leaves the predicted identity unchanged. On the other hand, applying this technique demonstrates a practical solution to the engineering challenge of balancing security, precision, and performance in biometric authentication systems. Through experiments on widely used datasets, we assess the effectiveness of our method in preserving AI feature representation and distorting relative to conventional metrics. We also compare our method with previously used approaches.
|
2010.04970
|
Yingxue Zhang
|
Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie Zhou
|
MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for
Answer Selection
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As conventional answer selection (AS) methods generally match the question
with each candidate answer independently, they suffer from the lack of matching
information between the question and the candidate. To address this problem, we
propose a novel reinforcement learning (RL) based multi-step ranking model,
named MS-Ranker, which accumulates information from potentially correct
candidate answers as extra evidence for matching the question with a candidate.
Specifically, we explicitly consider the potential correctness of candidates and
update the evidence with a gating mechanism. Moreover, as we use a listwise
ranking reward, our model learns to pay more attention to the overall
performance. Experiments on two benchmarks, namely WikiQA and SemEval-2016 CQA,
show that our model significantly outperforms existing methods that do not rely
on external resources.
|
[
{
"created": "Sat, 10 Oct 2020 10:36:58 GMT",
"version": "v1"
}
] |
2020-10-13
|
[
[
"Zhang",
"Yingxue",
""
],
[
"Meng",
"Fandong",
""
],
[
"Li",
"Peng",
""
],
[
"Jian",
"Ping",
""
],
[
"Zhou",
"Jie",
""
]
] |
As conventional answer selection (AS) methods generally match the question with each candidate answer independently, they suffer from the lack of matching information between the question and the candidate. To address this problem, we propose a novel reinforcement learning (RL) based multi-step ranking model, named MS-Ranker, which accumulates information from potentially correct candidate answers as extra evidence for matching the question with a candidate. Specifically, we explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism. Moreover, as we use a listwise ranking reward, our model learns to pay more attention to the overall performance. Experiments on two benchmarks, namely WikiQA and SemEval-2016 CQA, show that our model significantly outperforms existing methods that do not rely on external resources.
|
2006.13326
|
Mohammad Fereydounian
|
Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi,
Hamed Hassani
|
Safe Learning under Uncertain Objectives and Constraints
|
42 pages, 2 figures
| null | null | null |
cs.LG cs.AI math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider non-convex optimization problems under
\textit{unknown} yet safety-critical constraints. Such problems naturally arise
in a variety of domains including robotics, manufacturing, and medical
procedures, where it is infeasible to know or identify all the constraints.
Therefore, the parameter space should be explored in a conservative way to
ensure that none of the constraints are violated during the optimization
process once we start from a safe initialization point. To this end, we develop
an algorithm called Reliable Frank-Wolfe (Reliable-FW). Given a general
non-convex function and an unknown polytope constraint, Reliable-FW
simultaneously learns the landscape of the objective function and the boundary
of the safety polytope. More precisely, by assuming that Reliable-FW has access
to a (stochastic) gradient oracle of the objective function and a noisy
feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate
first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$
gradient oracle complexity (resp. $\tilde{\mathcal{O}}({1}/{\epsilon^3})$ (also
optimal) in the stochastic gradient setting), while ensuring the safety of all
the iterates. Rather surprisingly, Reliable-FW only makes
$\tilde{\mathcal{O}}(({d^2}/{\epsilon^2})\log 1/\delta)$ queries to the noisy
feasibility oracle (resp. $\tilde{\mathcal{O}}(({d^2}/{\epsilon^4})\log
1/\delta)$ in the stochastic gradient setting) where $d$ is the dimension and
$\delta$ is the reliability parameter, tightening the existing bounds even for
safe minimization of convex functions. We further specialize our results to the
case that the objective function is convex. A crucial component of our analysis
is to introduce and apply a technique called geometric shrinkage in the context
of safe optimization.
|
[
{
"created": "Tue, 23 Jun 2020 20:51:00 GMT",
"version": "v1"
}
] |
2020-06-25
|
[
[
"Fereydounian",
"Mohammad",
""
],
[
"Shen",
"Zebang",
""
],
[
"Mokhtari",
"Aryan",
""
],
[
"Karbasi",
"Amin",
""
],
[
"Hassani",
"Hamed",
""
]
] |
In this paper, we consider non-convex optimization problems under \textit{unknown} yet safety-critical constraints. Such problems naturally arise in a variety of domains including robotics, manufacturing, and medical procedures, where it is infeasible to know or identify all the constraints. Therefore, the parameter space should be explored in a conservative way to ensure that none of the constraints are violated during the optimization process once we start from a safe initialization point. To this end, we develop an algorithm called Reliable Frank-Wolfe (Reliable-FW). Given a general non-convex function and an unknown polytope constraint, Reliable-FW simultaneously learns the landscape of the objective function and the boundary of the safety polytope. More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$ gradient oracle complexity (resp. $\tilde{\mathcal{O}}({1}/{\epsilon^3})$ (also optimal) in the stochastic gradient setting), while ensuring the safety of all the iterates. Rather surprisingly, Reliable-FW only makes $\tilde{\mathcal{O}}(({d^2}/{\epsilon^2})\log 1/\delta)$ queries to the noisy feasibility oracle (resp. $\tilde{\mathcal{O}}(({d^2}/{\epsilon^4})\log 1/\delta)$ in the stochastic gradient setting) where $d$ is the dimension and $\delta$ is the reliability parameter, tightening the existing bounds even for safe minimization of convex functions. We further specialize our results to the case that the objective function is convex. A crucial component of our analysis is to introduce and apply a technique called geometric shrinkage in the context of safe optimization.
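For orientation, a textbook Frank-Wolfe iteration on a known polytope (the probability simplex) can be sketched as follows. This is the standard method with exact gradients, not the authors' Reliable-FW, which additionally handles stochastic gradients and learns the unknown polytope through a noisy feasibility oracle; the step count and test objective are illustrative.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=5000):
    """Textbook Frank-Wolfe on the probability simplex with step 2/(t+2).
    The linear minimization oracle over the simplex is simply the vertex
    (coordinate) with the smallest gradient entry, so iterates stay feasible."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s  # convex combination: still feasible
    return x

# minimize ||x - c||^2 over the simplex; c lies inside it, so x approaches c
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of polytope vertices, feasibility never has to be re-established by projection, which is why Frank-Wolfe is a natural starting point for safe optimization: the safety question reduces to knowing the feasible set, which is exactly what the paper's noisy feasibility oracle addresses.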
|
2007.02736
|
Frank Wolter
|
Alessandro Artale and Jean Christoph Jung and Andrea Mazzullo and Ana
Ozaki and Frank Wolter
|
Living Without Beth and Craig: Definitions and Interpolants in
Description and Modal Logics with Nominals and Role Inclusions
|
We have revised a few sections from the previous version. The
relationship between interpolants and explicit definitions is analysed in
more detail now
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Craig interpolation property (CIP) states that an interpolant for an
implication exists iff it is valid. The projective Beth definability property
(PBDP) states that an explicit definition exists iff a formula stating implicit
definability is valid. Thus, the CIP and PBDP reduce potentially hard existence
problems to entailment in the underlying logic. Description (and modal) logics
with nominals and/or role inclusions enjoy neither the CIP nor the PBDP, but
interpolants and explicit definitions have many applications, in particular in
concept learning, ontology engineering, and ontology-based data management. In
this article we show that, even without Beth and Craig, the existence of
interpolants and explicit definitions is decidable in description logics with
nominals and/or role inclusions such as ALCO, ALCH and ALCHOI and corresponding
hybrid modal logics. However, living without Beth and Craig makes this problem
harder than entailment: the existence problems become 2ExpTime-complete in the
presence of an ontology or the universal modality, and coNExpTime-complete
otherwise. We also analyze explicit definition existence if all symbols (except
the one that is defined) are admitted in the definition. In this case the
complexity depends on whether one considers individual or concept names.
Finally, we consider the problem of computing interpolants and explicit
definitions if they exist and turn the complexity upper bound proof into an
algorithm computing them, at least for description logics with role inclusions.
|
[
{
"created": "Mon, 6 Jul 2020 13:16:12 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Aug 2020 08:52:24 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Dec 2020 11:48:02 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Aug 2022 15:49:55 GMT",
"version": "v4"
},
{
"created": "Fri, 28 Apr 2023 13:27:43 GMT",
"version": "v5"
}
] |
2023-05-01
|
[
[
"Artale",
"Alessandro",
""
],
[
"Jung",
"Jean Christoph",
""
],
[
"Mazzullo",
"Andrea",
""
],
[
"Ozaki",
"Ana",
""
],
[
"Wolter",
"Frank",
""
]
] |
The Craig interpolation property (CIP) states that an interpolant for an implication exists iff it is valid. The projective Beth definability property (PBDP) states that an explicit definition exists iff a formula stating implicit definability is valid. Thus, the CIP and PBDP reduce potentially hard existence problems to entailment in the underlying logic. Description (and modal) logics with nominals and/or role inclusions enjoy neither the CIP nor the PBDP, but interpolants and explicit definitions have many applications, in particular in concept learning, ontology engineering, and ontology-based data management. In this article we show that, even without Beth and Craig, the existence of interpolants and explicit definitions is decidable in description logics with nominals and/or role inclusions such as ALCO, ALCH and ALCHOI and corresponding hybrid modal logics. However, living without Beth and Craig makes this problem harder than entailment: the existence problems become 2ExpTime-complete in the presence of an ontology or the universal modality, and coNExpTime-complete otherwise. We also analyze explicit definition existence if all symbols (except the one that is defined) are admitted in the definition. In this case the complexity depends on whether one considers individual or concept names. Finally, we consider the problem of computing interpolants and explicit definitions if they exist and turn the complexity upper bound proof into an algorithm computing them, at least for description logics with role inclusions.
|
2204.06297
|
Giorgio Visani Mr
|
Giorgio Visani, Giacomo Graffi, Mattia Alfero, Enrico Bagli, Davide
Capuzzo, Federico Chesani
|
Enabling Synthetic Data adoption in regulated domains
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The switch from a Model-Centric to a Data-Centric mindset is putting emphasis
on data and its quality rather than algorithms, bringing forward new
challenges. In particular, the sensitive nature of the information in highly
regulated scenarios needs to be accounted for. Specific approaches to address
the privacy issue have been developed, such as Privacy Enhancing Technologies.
However, they frequently cause loss of information, putting forward a crucial
trade-off between data quality and privacy. A clever way to bypass such a
conundrum relies on Synthetic Data: data obtained from a generative process,
learning the real data properties. Both Academia and Industry realized the
importance of evaluating synthetic data quality: without all-round reliable
metrics, the innovative data generation task has no proper objective function
to maximize. Despite that, the topic remains under-explored. For this reason,
we systematically catalog the important traits of synthetic data quality and
privacy, and devise a specific methodology to test them. The result is DAISYnt
(aDoption of Artificial Intelligence SYnthesis): a comprehensive suite of
advanced tests, which sets a de facto standard for synthetic data evaluation.
As a practical use-case, a variety of generative algorithms have been trained
on real-world Credit Bureau Data. The best model has been assessed, using
DAISYnt on the different synthetic replicas. Further potential uses, among
others, entail auditing and fine-tuning of generative models or ensuring high
quality of a given synthetic dataset. From a prescriptive viewpoint,
eventually, DAISYnt may pave the way to synthetic data adoption in highly
regulated domains, ranging from Finance to Healthcare, through Insurance and
Education.
|
[
{
"created": "Wed, 13 Apr 2022 10:53:54 GMT",
"version": "v1"
}
] |
2022-04-14
|
[
[
"Visani",
"Giorgio",
""
],
[
"Graffi",
"Giacomo",
""
],
[
"Alfero",
"Mattia",
""
],
[
"Bagli",
"Enrico",
""
],
[
"Capuzzo",
"Davide",
""
],
[
"Chesani",
"Federico",
""
]
] |
The switch from a Model-Centric to a Data-Centric mindset is putting emphasis on data and its quality rather than algorithms, bringing forward new challenges. In particular, the sensitive nature of the information in highly regulated scenarios needs to be accounted for. Specific approaches to address the privacy issue have been developed, such as Privacy Enhancing Technologies. However, they frequently cause loss of information, putting forward a crucial trade-off between data quality and privacy. A clever way to bypass such a conundrum relies on Synthetic Data: data obtained from a generative process, learning the real data properties. Both Academia and Industry realized the importance of evaluating synthetic data quality: without all-round reliable metrics, the innovative data generation task has no proper objective function to maximize. Despite that, the topic remains under-explored. For this reason, we systematically catalog the important traits of synthetic data quality and privacy, and devise a specific methodology to test them. The result is DAISYnt (aDoption of Artificial Intelligence SYnthesis): a comprehensive suite of advanced tests, which sets a de facto standard for synthetic data evaluation. As a practical use-case, a variety of generative algorithms have been trained on real-world Credit Bureau Data. The best model has been assessed, using DAISYnt on the different synthetic replicas. Further potential uses, among others, entail auditing and fine-tuning of generative models or ensuring high quality of a given synthetic dataset. From a prescriptive viewpoint, eventually, DAISYnt may pave the way to synthetic data adoption in highly regulated domains, ranging from Finance to Healthcare, through Insurance and Education.
|
1209.3644
|
Petrus H Potgieter
|
Petrus H. Potgieter
|
Availability of titles on peer-to-peer file sharing networks
|
8 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
File sharing, typically involving video or audio material in which copyright
may persist and using peer-to-peer (P2P) networks like BitTorrent, has been
reported to make up the bulk of Internet traffic. The free-riding problem
appears in this "digital gift economy" but its users exhibit rational
behaviour, subject to the characteristics of the particular network. The high
demand for the Internet as a delivery channel for entertainment underlines the
importance of understanding the dynamics of this market, especially when
considering possible business models for future pricing or licensing regimes
and for the provisioning of network capacity to support future services. The
availability of specific titles on file sharing networks is the focus of this
paper, with a special emphasis on the P2P protocol BitTorrent. The paper
compares the incentives provided in BitTorrent to those in other file-sharing
communities, including file hosting, and discusses the number of titles
available in the community at any given time, with an emphasis on popular video
items with ambiguous legal status.
|
[
{
"created": "Mon, 17 Sep 2012 13:12:04 GMT",
"version": "v1"
}
] |
2012-09-18
|
[
[
"Potgieter",
"Petrus H.",
""
]
] |
File sharing, typically involving video or audio material in which copyright may persist and using peer-to-peer (P2P) networks like BitTorrent, has been reported to make up the bulk of Internet traffic. The free-riding problem appears in this "digital gift economy" but its users exhibit rational behaviour, subject to the characteristics of the particular network. The high demand for the Internet as a delivery channel for entertainment underlines the importance of understanding the dynamics of this market, especially when considering possible business models for future pricing or licensing regimes and for the provisioning of network capacity to support future services. The availability of specific titles on file sharing networks is the focus of this paper, with a special emphasis on the P2P protocol BitTorrent. The paper compares the incentives provided in BitTorrent to those in other file-sharing communities, including file hosting, and discusses the number of titles available in the community at any given time, with an emphasis on popular video items with ambiguous legal status.
|
2405.18803
|
Minyu Feng
|
Minyu Feng, Ziyan Zeng, Qin Li, Matja\v{z} Perc, J\"urgen Kurths
|
Information Dynamics in Evolving Networks Based on the Birth-Death
Process: Random Drift and Natural Selection Perspective
|
14 pages, 9 figures
| null |
10.1109/TSMC.2024.3389095
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic processes in complex networks are crucial for better understanding
collective behavior in human societies, biological systems, and the internet.
In this paper, we first focus on the continuous Markov-based modeling of
evolving networks with the birth-death of individuals. A new individual arrives
at the group according to a Poisson process, while new links are established in the
network through either uniform connection or preferential attachment. Moreover,
an existing individual has a limited lifespan before leaving the network. We
determine stationary topological properties of these networks, including their
size and mean degree. To address the effect of the birth-death evolution, we
further study the information dynamics in the proposed network model from the
random drift and natural selection perspective, based on assumptions of
total-stochastic and fitness-driven evolution, respectively. In simulations, we
analyze the fixation probability of individual information and find that means
of new connections affect the random drift process but do not affect the
natural selection process.
|
[
{
"created": "Wed, 29 May 2024 06:47:00 GMT",
"version": "v1"
}
] |
2024-05-30
|
[
[
"Feng",
"Minyu",
""
],
[
"Zeng",
"Ziyan",
""
],
[
"Li",
"Qin",
""
],
[
"Perc",
"Matjaž",
""
],
[
"Kurths",
"Jürgen",
""
]
] |
Dynamic processes in complex networks are crucial for better understanding collective behavior in human societies, biological systems, and the internet. In this paper, we first focus on the continuous Markov-based modeling of evolving networks with the birth-death of individuals. A new individual arrives at the group according to a Poisson process, while new links are established in the network through either uniform connection or preferential attachment. Moreover, an existing individual has a limited lifespan before leaving the network. We determine stationary topological properties of these networks, including their size and mean degree. To address the effect of the birth-death evolution, we further study the information dynamics in the proposed network model from the random drift and natural selection perspective, based on assumptions of total-stochastic and fitness-driven evolution, respectively. In simulations, we analyze the fixation probability of individual information and find that means of new connections affect the random drift process but do not affect the natural selection process.
|
2201.06310
|
I\~nigo Martinez
|
I\~nigo Martinez, Elisabeth Viles, Igor G. Olaizola
|
A survey study of success factors in data science projects
|
6 pages, 7 figures, 2 tables, accepted at IEEE Big Data 2021,
International Workshop on Methods to Improve Big Data Science Projects
|
2021 IEEE International Conference on Big Data, pages 2313-2318
|
10.1109/BigData52589.2021.9671588
| null |
cs.DB cs.GL cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the data science community has pursued excellence and made
significant research efforts to develop advanced analytics, focusing on solving
technical problems at the expense of organizational and socio-technical
challenges. According to previous surveys on the state of data science project
management, there is a significant gap between technical and organizational
processes. In this article we present new empirical data from a survey of 237
data science professionals on the use of project management methodologies for
data science. We provide additional profiling of the survey respondents' roles
and their priorities when executing data science projects. Based on this survey
study, the main findings are: (1) Agile data science lifecycle is the most
widely used framework, but only 25% of the survey participants state that they
follow a data science project methodology. (2) The most important success
factors are
precisely describing stakeholders' needs, communicating the results to
end-users, and team collaboration and coordination. (3) Professionals who
adhere to a project methodology place greater emphasis on the project's
potential risks and pitfalls, version control, the deployment pipeline to
production, and data security and privacy.
|
[
{
"created": "Mon, 17 Jan 2022 09:50:46 GMT",
"version": "v1"
}
] |
2022-01-19
|
[
[
"Martinez",
"Iñigo",
""
],
[
"Viles",
"Elisabeth",
""
],
[
"Olaizola",
"Igor G.",
""
]
] |
In recent years, the data science community has pursued excellence and made significant research efforts to develop advanced analytics, focusing on solving technical problems at the expense of organizational and socio-technical challenges. According to previous surveys on the state of data science project management, there is a significant gap between technical and organizational processes. In this article we present new empirical data from a survey of 237 data science professionals on the use of project management methodologies for data science. We provide additional profiling of the survey respondents' roles and their priorities when executing data science projects. Based on this survey study, the main findings are: (1) Agile data science lifecycle is the most widely used framework, but only 25% of the survey participants state that they follow a data science project methodology. (2) The most important success factors are precisely describing stakeholders' needs, communicating the results to end-users, and team collaboration and coordination. (3) Professionals who adhere to a project methodology place greater emphasis on the project's potential risks and pitfalls, version control, the deployment pipeline to production, and data security and privacy.
|
1904.08144
|
Jaechang Lim
|
Jaechang Lim, Seongok Ryu, Kyubyong Park, Yo Joong Choe, Jiyeon Ham,
and Woo Youn Kim
|
Predicting drug-target interaction using 3D structure-embedded graph
representations from graph neural networks
|
20 pages, 2 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate prediction of drug-target interaction (DTI) is essential for in
silico drug design. For this purpose, we propose a novel approach for predicting
DTI using a GNN that directly incorporates the 3D structure of a protein-ligand
complex. We also apply a distance-aware graph attention algorithm with gate
augmentation to increase the performance of our model. As a result, our model
shows better performance than docking and other deep learning methods for both
virtual screening and pose prediction. In addition, our model can reproduce the
natural population distribution of active molecules and inactive molecules.
|
[
{
"created": "Wed, 17 Apr 2019 09:03:54 GMT",
"version": "v1"
}
] |
2019-04-18
|
[
[
"Lim",
"Jaechang",
""
],
[
"Ryu",
"Seongok",
""
],
[
"Park",
"Kyubyong",
""
],
[
"Choe",
"Yo Joong",
""
],
[
"Ham",
"Jiyeon",
""
],
[
"Kim",
"Woo Youn",
""
]
] |
Accurate prediction of drug-target interaction (DTI) is essential for in silico drug design. For this purpose, we propose a novel approach for predicting DTI using a GNN that directly incorporates the 3D structure of a protein-ligand complex. We also apply a distance-aware graph attention algorithm with gate augmentation to increase the performance of our model. As a result, our model shows better performance than docking and other deep learning methods for both virtual screening and pose prediction. In addition, our model can reproduce the natural population distribution of active molecules and inactive molecules.
|
2007.07053
|
Qiang Nie
|
Qiang Nie, Ziwei Liu, Yunhui Liu
|
Unsupervised 3D Human Pose Representation with Viewpoint and Pose
Disentanglement
|
To appear in ECCV 2020. Code and models are available at:
https://github.com/NIEQiang001/unsupervised-human-pose.git
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning a good 3D human pose representation is important for human pose
related tasks, e.g. human 3D pose estimation and action recognition. Within all
these problems, preserving the intrinsic pose information and adapting to view
variations are two critical issues. In this work, we propose a novel Siamese
denoising autoencoder to learn a 3D pose representation by disentangling the
pose-dependent and view-dependent feature from the human skeleton data, in a
fully unsupervised manner. These two disentangled features are utilized
together as the representation of the 3D pose. To consider both the kinematic
and geometric dependencies, a sequential bidirectional recursive network
(SeBiReNet) is further proposed to model the human skeleton data. Extensive
experiments demonstrate that the learned representation 1) preserves the
intrinsic information of human pose, 2) shows good transferability across
datasets and tasks. Notably, our approach achieves state-of-the-art performance
on two inherently different tasks: pose denoising and unsupervised action
recognition. Code and models are available at:
\url{https://github.com/NIEQiang001/unsupervised-human-pose.git}
|
[
{
"created": "Tue, 14 Jul 2020 14:25:22 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 03:23:43 GMT",
"version": "v2"
}
] |
2021-11-03
|
[
[
"Nie",
"Qiang",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Liu",
"Yunhui",
""
]
] |
Learning a good 3D human pose representation is important for human pose related tasks, e.g. human 3D pose estimation and action recognition. Within all these problems, preserving the intrinsic pose information and adapting to view variations are two critical issues. In this work, we propose a novel Siamese denoising autoencoder to learn a 3D pose representation by disentangling the pose-dependent and view-dependent feature from the human skeleton data, in a fully unsupervised manner. These two disentangled features are utilized together as the representation of the 3D pose. To consider both the kinematic and geometric dependencies, a sequential bidirectional recursive network (SeBiReNet) is further proposed to model the human skeleton data. Extensive experiments demonstrate that the learned representation 1) preserves the intrinsic information of human pose, 2) shows good transferability across datasets and tasks. Notably, our approach achieves state-of-the-art performance on two inherently different tasks: pose denoising and unsupervised action recognition. Code and models are available at: \url{https://github.com/NIEQiang001/unsupervised-human-pose.git}
|
1608.04529
|
Stephan Druskat
|
Stephan Druskat
|
A Proposal for the Measurement and Documentation of Research Software
Sustainability in Interactive Metadata Repositories
|
2 pages, submitted to and accepted for the 4th Workshop on
Sustainable Software for Science: Practice and Experiences (WSSSPE4), 12-14
September 2016, Manchester, UK
| null | null | null |
cs.SE cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes an interactive repository type for research software
metadata which measures and documents software sustainability by accumulating
metadata, and computing sustainability metrics over them. Such a repository
would help to overcome technical barriers to software sustainability by
furthering the discovery and identification of sustainable software, thereby
also facilitating documentation of research software within the framework of
software management plans.
|
[
{
"created": "Tue, 16 Aug 2016 09:41:23 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Aug 2016 10:29:58 GMT",
"version": "v2"
}
] |
2016-08-18
|
[
[
"Druskat",
"Stephan",
""
]
] |
This paper proposes an interactive repository type for research software metadata which measures and documents software sustainability by accumulating metadata, and computing sustainability metrics over them. Such a repository would help to overcome technical barriers to software sustainability by furthering the discovery and identification of sustainable software, thereby also facilitating documentation of research software within the framework of software management plans.
|
2005.13654
|
Matija Pretnar
|
\v{Z}iga Luk\v{s}i\v{c} and Matija Pretnar
|
Local Algebraic Effect Theories
|
24 pages, 8 figures
|
LUKSIC, Z, & PRETNAR, M. (2020). Local algebraic effect theories.
Journal of Functional Programming, 30, E13
|
10.1017/S0956796819000212
| null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algebraic effects are computational effects that can be described with a set
of basic operations and equations between them. As many interesting effect
handlers do not respect these equations, most approaches assume a trivial
theory, sacrificing both reasoning power and safety.
We present an alternative approach where the type system tracks equations
that are observed in subparts of the program, yielding a sound and flexible
logic, and paving a way for practical optimizations and reasoning tools.
|
[
{
"created": "Wed, 27 May 2020 21:08:54 GMT",
"version": "v1"
}
] |
2020-05-29
|
[
[
"Lukšič",
"Žiga",
""
],
[
"Pretnar",
"Matija",
""
]
] |
Algebraic effects are computational effects that can be described with a set of basic operations and equations between them. As many interesting effect handlers do not respect these equations, most approaches assume a trivial theory, sacrificing both reasoning power and safety. We present an alternative approach where the type system tracks equations that are observed in subparts of the program, yielding a sound and flexible logic, and paving a way for practical optimizations and reasoning tools.
|
2204.00350
|
Zheng Zhao
|
Zheng Zhao and Bonnie Webber
|
Revisiting Shallow Discourse Parsing in the PDTB-3: Handling
Intra-sentential Implicits
|
Accepted to CODI 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the PDTB-3, several thousand implicit discourse relations were newly
annotated \textit{within} individual sentences, adding to the over 15,000
implicit relations annotated \textit{across} adjacent sentences in the PDTB-2.
Given that the position of the arguments to these \textit{intra-sentential
implicits} is no longer as well-defined as with \textit{inter-sentential
implicits}, a discourse parser must identify both their location and their
sense. That is the focus of the current work. The paper provides a
comprehensive analysis of our results, showcasing model performance under
different scenarios, pointing out limitations and noting future directions.
|
[
{
"created": "Fri, 1 Apr 2022 10:56:25 GMT",
"version": "v1"
}
] |
2022-04-04
|
[
[
"Zhao",
"Zheng",
""
],
[
"Webber",
"Bonnie",
""
]
] |
In the PDTB-3, several thousand implicit discourse relations were newly annotated \textit{within} individual sentences, adding to the over 15,000 implicit relations annotated \textit{across} adjacent sentences in the PDTB-2. Given that the position of the arguments to these \textit{intra-sentential implicits} is no longer as well-defined as with \textit{inter-sentential implicits}, a discourse parser must identify both their location and their sense. That is the focus of the current work. The paper provides a comprehensive analysis of our results, showcasing model performance under different scenarios, pointing out limitations and noting future directions.
|
2212.05613
|
Filip Ilievski
|
Aravinda Kolla, Filip Ilievski, H\^ong-\^An Sandlin and Alain Mermoud
|
A Study of Slang Representation Methods
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Considering the large amount of content created online by the minute,
slang-aware automatic tools are critically needed to promote social good, and
assist policymakers and moderators in restricting the spread of offensive
language, abuse, and hate speech. Despite the success of large language models
and the spontaneous emergence of slang dictionaries, it is unclear how far
their combination goes in terms of slang understanding for downstream social
good tasks. In this paper, we provide a framework to study different
combinations of representation learning models and knowledge resources for a
variety of downstream tasks that rely on slang understanding. Our experiments
show the superiority of models that have been pre-trained on social media data,
while the impact of dictionaries is positive only for static word embeddings.
Our error analysis identifies core challenges for slang representation
learning, including out-of-vocabulary words, polysemy, variance, and annotation
disagreements, which can be traced to characteristics of slang as a quickly
evolving and highly subjective language.
|
[
{
"created": "Sun, 11 Dec 2022 21:56:44 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Jan 2023 17:06:45 GMT",
"version": "v2"
},
{
"created": "Tue, 31 Jan 2023 19:37:52 GMT",
"version": "v3"
}
] |
2023-02-02
|
[
[
"Kolla",
"Aravinda",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Sandlin",
"Hông-Ân",
""
],
[
"Mermoud",
"Alain",
""
]
] |
Considering the large amount of content created online by the minute, slang-aware automatic tools are critically needed to promote social good, and assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech. Despite the success of large language models and the spontaneous emergence of slang dictionaries, it is unclear how far their combination goes in terms of slang understanding for downstream social good tasks. In this paper, we provide a framework to study different combinations of representation learning models and knowledge resources for a variety of downstream tasks that rely on slang understanding. Our experiments show the superiority of models that have been pre-trained on social media data, while the impact of dictionaries is positive only for static word embeddings. Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements, which can be traced to characteristics of slang as a quickly evolving and highly subjective language.
|
0801.0938
|
Sang-Woon Jeon
|
Sang-Woon Jeon, Natasha Devroye, Mai Vu, Sae-Young Chung, Vahid Tarokh
|
Cognitive Networks Achieve Throughput Scaling of a Homogeneous Network
|
28 pages, 12 figures, submitted to IEEE Trans. on Information Theory
|
IEEE Transactions on Information Theory, vol. 57, no. 8, pp.
5103-5115, Aug. 2011
|
10.1109/TIT.2011.2158874
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study two distinct, but overlapping, networks that operate at the same
time, space, and frequency. The first network consists of $n$ randomly
distributed \emph{primary users}, which form either an ad hoc network, or an
infrastructure-supported ad hoc network with $l$ additional base stations. The
second network consists of $m$ randomly distributed, ad hoc secondary users or
cognitive users. The primary users have priority access to the spectrum and do
not need to change their communication protocol in the presence of secondary
users. The secondary users, however, need to adjust their protocol based on
knowledge about the locations of the primary nodes to bring little loss to the
primary network's throughput. By introducing preservation regions around
primary receivers and avoidance regions around primary base stations, we
propose two modified multihop routing protocols for the cognitive users. Based
on percolation theory, we show that when the secondary network is denser than
the primary network, both networks can simultaneously achieve the same
throughput scaling law as a stand-alone network. Furthermore, the primary
network throughput is subject to only a vanishingly small fractional loss.
Specifically, for the ad hoc and the infrastructure-supported primary models,
the primary network achieves sum throughputs of order $n^{1/2}$ and
$\max\{n^{1/2},l\}$, respectively. For both primary network models, for any
$\delta>0$, the secondary network can achieve sum throughput of order
$m^{1/2-\delta}$ with an arbitrarily small fraction of outage. Thus, almost all
secondary source-destination pairs can communicate at a rate of order
$m^{-1/2-\delta}$.
|
[
{
"created": "Mon, 7 Jan 2008 10:52:39 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jul 2009 05:24:13 GMT",
"version": "v2"
}
] |
2016-11-17
|
[
[
"Jeon",
"Sang-Woon",
""
],
[
"Devroye",
"Natasha",
""
],
[
"Vu",
"Mai",
""
],
[
"Chung",
"Sae-Young",
""
],
[
"Tarokh",
"Vahid",
""
]
] |
We study two distinct, but overlapping, networks that operate at the same time, space, and frequency. The first network consists of $n$ randomly distributed \emph{primary users}, which form either an ad hoc network, or an infrastructure-supported ad hoc network with $l$ additional base stations. The second network consists of $m$ randomly distributed, ad hoc secondary users or cognitive users. The primary users have priority access to the spectrum and do not need to change their communication protocol in the presence of secondary users. The secondary users, however, need to adjust their protocol based on knowledge about the locations of the primary nodes to bring little loss to the primary network's throughput. By introducing preservation regions around primary receivers and avoidance regions around primary base stations, we propose two modified multihop routing protocols for the cognitive users. Based on percolation theory, we show that when the secondary network is denser than the primary network, both networks can simultaneously achieve the same throughput scaling law as a stand-alone network. Furthermore, the primary network throughput is subject to only a vanishingly small fractional loss. Specifically, for the ad hoc and the infrastructure-supported primary models, the primary network achieves sum throughputs of order $n^{1/2}$ and $\max\{n^{1/2},l\}$, respectively. For both primary network models, for any $\delta>0$, the secondary network can achieve sum throughput of order $m^{1/2-\delta}$ with an arbitrarily small fraction of outage. Thus, almost all secondary source-destination pairs can communicate at a rate of order $m^{-1/2-\delta}$.
|
cs/0603047
|
Simone Severini
|
Stefano Mancini and Simone Severini
|
The Quantum Separability Problem for Gaussian States
|
8 pages
|
Electronic Notes in Theoretical Computer Science, Volume 169 ,
March 2007, pp. 121-131
|
10.1016/j.entcs.2006.07.034
| null |
cs.CC quant-ph
| null |
Determining whether a quantum state is separable or entangled is a problem of
fundamental importance in quantum information science. This is a brief review
in which we consider the problem for states in infinite dimensional Hilbert
spaces. We show how the problem becomes tractable for a class of Gaussian
states.
|
[
{
"created": "Sun, 12 Mar 2006 14:33:41 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Mar 2006 12:38:49 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Mancini",
"Stefano",
""
],
[
"Severini",
"Simone",
""
]
] |
Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. This is a brief review in which we consider the problem for states in infinite dimensional Hilbert spaces. We show how the problem becomes tractable for a class of Gaussian states.
|
2211.11039
|
Jag Mohan Singh
|
Jag Mohan Singh, Raghavendra Ramachandra
|
Deep Composite Face Image Attacks: Generation, Vulnerability and
Detection
|
The submitted paper is accepted in IEEE Access 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face manipulation attacks have drawn the attention of biometric researchers
because of the threat they pose to Face Recognition Systems (FRS). This paper
proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based
on facial attributes using Generative Adversarial Networks (GANs). Given the
face images corresponding to two unique data subjects, the proposed CFIA method
will independently generate the segmented facial attributes, then blend them
using transparent masks to generate the CFIA samples. We generate $526$ unique
CFIA combinations of facial attributes for each pair of contributory data
subjects. Extensive experiments are carried out on our newly generated CFIA
dataset consisting of 1000 unique identities with 2000 bona fide samples and
526000 CFIA samples, thus resulting in an overall 528000 face image samples.
{{We present a sequence of experiments to benchmark the attack potential of
CFIA samples using four different automatic FRS}}. We introduce a new metric
named Generalized Morphing Attack Potential (G-MAP) to benchmark the
vulnerability of generated attacks on FRS effectively. Additional experiments
are performed on the representative subset of the CFIA dataset to benchmark
both perceptual quality and human observer response. Finally, the CFIA
detection performance is benchmarked using three different single image based
face Morphing Attack Detection (MAD) algorithms. The source code of the
proposed method together with CFIA dataset will be made publicly available:
\url{https://github.com/jagmohaniiit/LatentCompositionCode}
|
[
{
"created": "Sun, 20 Nov 2022 17:31:52 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Feb 2023 17:41:25 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Mar 2023 20:06:42 GMT",
"version": "v3"
}
] |
2023-03-22
|
[
[
"Singh",
"Jag Mohan",
""
],
[
"Ramachandra",
"Raghavendra",
""
]
] |
Face manipulation attacks have drawn the attention of biometric researchers because of the threat they pose to Face Recognition Systems (FRS). This paper proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based on facial attributes using Generative Adversarial Networks (GANs). Given the face images corresponding to two unique data subjects, the proposed CFIA method will independently generate the segmented facial attributes, then blend them using transparent masks to generate the CFIA samples. We generate $526$ unique CFIA combinations of facial attributes for each pair of contributory data subjects. Extensive experiments are carried out on our newly generated CFIA dataset consisting of 1000 unique identities with 2000 bona fide samples and 526000 CFIA samples, thus resulting in an overall 528000 face image samples. {{We present a sequence of experiments to benchmark the attack potential of CFIA samples using four different automatic FRS}}. We introduce a new metric named Generalized Morphing Attack Potential (G-MAP) to benchmark the vulnerability of generated attacks on FRS effectively. Additional experiments are performed on the representative subset of the CFIA dataset to benchmark both perceptual quality and human observer response. Finally, the CFIA detection performance is benchmarked using three different single image based face Morphing Attack Detection (MAD) algorithms. The source code of the proposed method together with CFIA dataset will be made publicly available: \url{https://github.com/jagmohaniiit/LatentCompositionCode}
|
1611.04455
|
Hongyi Liu
|
Vaidehi Dalmia, Hongyi Liu, Shih-Fu Chang
|
Columbia MVSO Image Sentiment Dataset
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Multilingual Visual Sentiment Ontology (MVSO) consists of 15,600 concepts
in 12 different languages that are strongly related to emotions and sentiments
expressed in images. These concepts are defined in the form of Adjective-Noun
Pairs (ANPs), which were crawled and discovered from the online image forum
Flickr. In this work, we used Amazon Mechanical Turk as a crowd-sourcing
platform to collect human judgments on sentiments expressed in images that are
uniformly sampled over 3,911 English ANPs extracted from a tag-restricted
subset of MVSO. Our goal is to use the dataset as a benchmark for the
evaluation of systems that automatically predict sentiments in images or ANPs.
|
[
{
"created": "Mon, 14 Nov 2016 16:48:12 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Dalmia",
"Vaidehi",
""
],
[
"Liu",
"Hongyi",
""
],
[
"Chang",
"Shih-Fu",
""
]
] |
The Multilingual Visual Sentiment Ontology (MVSO) consists of 15,600 concepts in 12 different languages that are strongly related to emotions and sentiments expressed in images. These concepts are defined in the form of Adjective-Noun Pairs (ANPs), which were crawled and discovered from the online image forum Flickr. In this work, we used Amazon Mechanical Turk as a crowd-sourcing platform to collect human judgments on sentiments expressed in images that are uniformly sampled over 3,911 English ANPs extracted from a tag-restricted subset of MVSO. Our goal is to use the dataset as a benchmark for the evaluation of systems that automatically predict sentiments in images or ANPs.
|
1901.03489
|
Chen Qu
|
Chen Qu, Liu Yang, Bruce Croft, Yongfeng Zhang, Johanne R. Trippas and
Minghui Qiu
|
User Intent Prediction in Information-seeking Conversations
|
Accepted to CHIIR 2019
| null |
10.1145/3295750.3298924
| null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational assistants are being progressively adopted by the general
population. However, they are not capable of handling complicated
information-seeking tasks that involve multiple turns of information exchange.
Due to the limited communication bandwidth in conversational search, it is
important for conversational assistants to accurately detect and predict user
intent in information-seeking conversations. In this paper, we investigate two
aspects of user intent prediction in an information-seeking setting. First, we
extract features based on the content, structural, and sentiment
characteristics of a given utterance, and use classic machine learning methods
to perform user intent prediction. We then conduct an in-depth feature
importance analysis to identify key features in this prediction task. We find
that structural features contribute most to the prediction performance. Given
this finding, we construct neural classifiers to incorporate context
information and achieve better performance without feature engineering. Our
findings can provide insights into the important factors and effective methods
of user intent prediction in information-seeking conversations.
|
[
{
"created": "Fri, 11 Jan 2019 05:53:13 GMT",
"version": "v1"
}
] |
2019-01-14
|
[
[
"Qu",
"Chen",
""
],
[
"Yang",
"Liu",
""
],
[
"Croft",
"Bruce",
""
],
[
"Zhang",
"Yongfeng",
""
],
[
"Trippas",
"Johanne R.",
""
],
[
"Qiu",
"Minghui",
""
]
] |
Conversational assistants are being progressively adopted by the general population. However, they are not capable of handling complicated information-seeking tasks that involve multiple turns of information exchange. Due to the limited communication bandwidth in conversational search, it is important for conversational assistants to accurately detect and predict user intent in information-seeking conversations. In this paper, we investigate two aspects of user intent prediction in an information-seeking setting. First, we extract features based on the content, structural, and sentiment characteristics of a given utterance, and use classic machine learning methods to perform user intent prediction. We then conduct an in-depth feature importance analysis to identify key features in this prediction task. We find that structural features contribute most to the prediction performance. Given this finding, we construct neural classifiers to incorporate context information and achieve better performance without feature engineering. Our findings can provide insights into the important factors and effective methods of user intent prediction in information-seeking conversations.
|
1701.02710
|
Subhro Das
|
Subhro Das and Jos\'e M. F. Moura
|
Distributed Estimation of Dynamic Fields over Multi-agent Networks
|
Accepted for publication at ICASSP 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents distributed algorithms for estimation of time-varying
random fields over multi-agent/sensor networks. A network of sensors makes
sparse and noisy local measurements of the dynamic field. Each sensor aims to
obtain unbiased distributed estimates of the entire field with bounded
mean-squared error (MSE) based on its own local observations and its neighbors'
estimates. This work develops three novel distributed estimators:
Pseudo-Innovations Kalman Filter (PIKF), Distributed Information Kalman Filter
(DIKF) and Consensus+Innovations Kalman Filter (CIKF). We design the gain
matrices such that the estimators achieve unbiased estimates with bounded MSE
under minimal assumptions on the local observation and network communication
models. This work establishes trade-offs between these three distributed
estimators and demonstrates how they outperform existing solutions. We validate
our results through extensive numerical evaluations.
|
[
{
"created": "Tue, 10 Jan 2017 18:04:16 GMT",
"version": "v1"
}
] |
2017-01-11
|
[
[
"Das",
"Subhro",
""
],
[
"Moura",
"José M. F.",
""
]
] |
This work presents distributed algorithms for estimation of time-varying random fields over multi-agent/sensor networks. A network of sensors makes sparse and noisy local measurements of the dynamic field. Each sensor aims to obtain unbiased distributed estimates of the entire field with bounded mean-squared error (MSE) based on its own local observations and its neighbors' estimates. This work develops three novel distributed estimators: Pseudo-Innovations Kalman Filter (PIKF), Distributed Information Kalman Filter (DIKF) and Consensus+Innovations Kalman Filter (CIKF). We design the gain matrices such that the estimators achieve unbiased estimates with bounded MSE under minimal assumptions on the local observation and network communication models. This work establishes trade-offs between these three distributed estimators and demonstrates how they outperform existing solutions. We validate our results through extensive numerical evaluations.
|
1504.04799
|
Qinghua Guo
|
Qinghua Guo and Jiangtao Xi
|
Approximate Message Passing with Unitary Transformation
|
5 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate message passing (AMP) and its variants, developed based on loopy
belief propagation, are attractive for estimating a vector x from a noisy
version of z = Ax, which arises in many applications. For a large A with
i.i.d. elements, AMP can be characterized by the state evolution and exhibits
fast convergence. However, it has been shown that AMP may easily diverge for a
generic A. In this work, we develop a new variant of AMP based on a unitary
transformation of the original model (hence the variant is called UT-AMP),
where the unitary matrix is available for any matrix A, e.g., the conjugate
transpose of the left singular matrix of A, or a normalized DFT (discrete
Fourier transform) matrix for any circulant A. We prove that, in the case of
Gaussian priors, UT-AMP always converges for any matrix A. It is observed that
UT-AMP is much more robust than the original AMP for difficult A and exhibits
fast convergence.
A special form of UT-AMP with a circulant A was used in our previous work
[13] for turbo equalization. This work extends it to a generic A and provides
a theoretical investigation of the convergence.
|
[
{
"created": "Sun, 19 Apr 2015 06:06:46 GMT",
"version": "v1"
}
] |
2015-04-21
|
[
[
"Guo",
"Qinghua",
""
],
[
"Xi",
"Jiangtao",
""
]
] |
Approximate message passing (AMP) and its variants, developed based on loopy belief propagation, are attractive for estimating a vector x from a noisy version of z = Ax, which arises in many applications. For a large A with i.i.d. elements, AMP can be characterized by the state evolution and exhibits fast convergence. However, it has been shown that AMP may easily diverge for a generic A. In this work, we develop a new variant of AMP based on a unitary transformation of the original model (hence the variant is called UT-AMP), where the unitary matrix is available for any matrix A, e.g., the conjugate transpose of the left singular matrix of A, or a normalized DFT (discrete Fourier transform) matrix for any circulant A. We prove that, in the case of Gaussian priors, UT-AMP always converges for any matrix A. It is observed that UT-AMP is much more robust than the original AMP for difficult A and exhibits fast convergence. A special form of UT-AMP with a circulant A was used in our previous work [13] for turbo equalization. This work extends it to a generic A and provides a theoretical investigation of the convergence.
|
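The unitary transformation at the heart of the UT-AMP abstract can be made concrete in a few lines: left-multiplying the noisy model y = Ax + w by the conjugate transpose of A's left singular matrix yields an equivalent model, since a unitary transform loses no information and keeps white Gaussian noise white. The sketch below is illustrative only; the toy dimensions and variable names are ours, and it shows only the model transformation, not the AMP iterations themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x + 0.01 * rng.standard_normal(m)   # noisy model y = A x + w

# Unitary transformation of the model: with the SVD A = U S V^H,
# left-multiplying y by U^H gives the equivalent model r = Phi x + U^H w,
# with Phi = S V^H.
U, s, Vh = np.linalg.svd(A, full_matrices=True)
r = U.conj().T @ y
S = np.zeros((m, n))
S[:n, :n] = np.diag(s)
Phi = S @ Vh

# Equivalence check: both models have the same least-squares solution,
# because U is unitary.
x_ls = np.linalg.lstsq(Phi, r, rcond=None)[0]
```

An AMP-style solver would then be run on the transformed pair (r, Phi) instead of (y, A).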
2404.18928
|
Michael Luo Zhiyu
|
Michael Luo, Justin Wong, Brandon Trabucco, Yanping Huang, Joseph E.
Gonzalez, Zhifeng Chen, Ruslan Salakhutdinov, Ion Stoica
|
Stylus: Automatic Adapter Selection for Diffusion Models
|
Project Website: https://stylus-diffusion.github.io
| null | null | null |
cs.CV cs.AI cs.CL cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Beyond scaling base models with more data or parameters, fine-tuned adapters
provide an alternative way to generate high-fidelity, custom images at reduced
costs. As such, adapters have been widely adopted by open-source communities,
accumulating a database of over 100K adapters, most of which are highly
customized with insufficient descriptions. This paper explores the problem of
matching the prompt to a set of relevant adapters, built on recent work that
highlights the performance gains of composing adapters. We introduce Stylus,
which efficiently selects and automatically composes task-specific adapters
based on a prompt's keywords. Stylus outlines a three-stage approach that first
summarizes adapters with improved descriptions and embeddings, retrieves
relevant adapters, and then further assembles adapters based on prompts'
keywords by checking how well they fit the prompt. To evaluate Stylus, we
developed StylusDocs, a curated dataset featuring 75K adapters with
pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion
checkpoints, Stylus achieves greater CLIP-FID Pareto efficiency and is twice as
preferred, with humans and multimodal models as evaluators, over the base
model. See stylus-diffusion.github.io for more.
|
[
{
"created": "Mon, 29 Apr 2024 17:59:16 GMT",
"version": "v1"
}
] |
2024-04-30
|
[
[
"Luo",
"Michael",
""
],
[
"Wong",
"Justin",
""
],
[
"Trabucco",
"Brandon",
""
],
[
"Huang",
"Yanping",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Chen",
"Zhifeng",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Stoica",
"Ion",
""
]
] |
Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high-fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters, most of which are highly customized with insufficient descriptions. This paper explores the problem of matching the prompt to a set of relevant adapters, built on recent work that highlights the performance gains of composing adapters. We introduce Stylus, which efficiently selects and automatically composes task-specific adapters based on a prompt's keywords. Stylus outlines a three-stage approach that first summarizes adapters with improved descriptions and embeddings, retrieves relevant adapters, and then further assembles adapters based on prompts' keywords by checking how well they fit the prompt. To evaluate Stylus, we developed StylusDocs, a curated dataset featuring 75K adapters with pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion checkpoints, Stylus achieves greater CLIP-FID Pareto efficiency and is twice as preferred, with humans and multimodal models as evaluators, over the base model. See stylus-diffusion.github.io for more.
|
2301.08811
|
Cyrus Neary
|
Bo Chen and Calvin Hawkins and Mustafa O. Karabag and Cyrus Neary and
Matthew Hale and Ufuk Topcu
|
Differential Privacy in Cooperative Multiagent Planning
| null | null | null | null |
cs.MA cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Privacy-aware multiagent systems must protect agents' sensitive data while
simultaneously ensuring that agents accomplish their shared objectives. Towards
this goal, we propose a framework to privatize inter-agent communications in
cooperative multiagent decision-making problems. We study sequential
decision-making problems formulated as cooperative Markov games with
reach-avoid objectives. We apply a differential privacy mechanism to privatize
agents' communicated symbolic state trajectories, and then we analyze tradeoffs
between the strength of privacy and the team's performance. For a given level
of privacy, this tradeoff is shown to depend critically upon the total
correlation among agents' state-action processes. We synthesize policies that
are robust to privacy by reducing the value of the total correlation. Numerical
experiments demonstrate that the team's performance under these policies
decreases by only 3 percent when comparing private versus non-private
implementations of communication. By contrast, the team's performance decreases
by roughly 86 percent when using baseline policies that ignore total
correlation and only optimize team performance.
|
[
{
"created": "Fri, 20 Jan 2023 21:36:57 GMT",
"version": "v1"
}
] |
2023-01-24
|
[
[
"Chen",
"Bo",
""
],
[
"Hawkins",
"Calvin",
""
],
[
"Karabag",
"Mustafa O.",
""
],
[
"Neary",
"Cyrus",
""
],
[
"Hale",
"Matthew",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
Privacy-aware multiagent systems must protect agents' sensitive data while simultaneously ensuring that agents accomplish their shared objectives. Towards this goal, we propose a framework to privatize inter-agent communications in cooperative multiagent decision-making problems. We study sequential decision-making problems formulated as cooperative Markov games with reach-avoid objectives. We apply a differential privacy mechanism to privatize agents' communicated symbolic state trajectories, and then we analyze tradeoffs between the strength of privacy and the team's performance. For a given level of privacy, this tradeoff is shown to depend critically upon the total correlation among agents' state-action processes. We synthesize policies that are robust to privacy by reducing the value of the total correlation. Numerical experiments demonstrate that the team's performance under these policies decreases by only 3 percent when comparing private versus non-private implementations of communication. By contrast, the team's performance decreases by roughly 86 percent when using baseline policies that ignore total correlation and only optimize team performance.
|
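The privatized symbolic state trajectories described in the abstract above can be illustrated with the simplest per-symbol differential privacy mechanism, randomized response over a finite state alphabet. The abstract does not specify the authors' mechanism, so this is a generic sketch under our own assumptions, not the paper's construction.

```python
import math
import random

def randomized_response(symbol, alphabet, epsilon, rng=random):
    # epsilon-differentially-private release of a single symbol from a
    # finite alphabet: keep the true symbol with probability
    # e^eps / (e^eps + |alphabet| - 1), otherwise report one of the other
    # symbols uniformly at random.
    k = len(alphabet)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return symbol
    return rng.choice([s for s in alphabet if s != symbol])

def privatize_trajectory(trajectory, alphabet, epsilon, rng=random):
    # Privatize each symbol independently; by sequential composition the
    # released trajectory is (len(trajectory) * epsilon)-differentially
    # private with respect to a single changed symbol per position.
    return [randomized_response(s, alphabet, epsilon, rng) for s in trajectory]
```

Smaller epsilon flips more symbols (stronger privacy, worse team coordination), which is the privacy/performance tradeoff the abstract analyzes.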
2004.04242
|
Matheus Gadelha
|
Matheus Gadelha, Rui Wang, Subhransu Maji
|
Deep Manifold Prior
|
22 pages, 12 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a prior for manifold structured data, such as surfaces of 3D
shapes, where deep neural networks are adopted to reconstruct a target shape
using gradient descent starting from a random initialization. We show that
surfaces generated this way are smooth, with limiting behavior characterized by
Gaussian processes, and we mathematically derive such properties for
fully-connected as well as convolutional networks. We demonstrate our method in
a variety of manifold reconstruction applications, such as point cloud
denoising and interpolation, achieving considerably better results against
competitive baselines while requiring no training data. We also show that when
training data is available, our method allows developing alternate
parametrizations of surfaces under the framework of AtlasNet, leading to a
compact network architecture and better reconstruction results on standard
image to shape reconstruction benchmarks.
|
[
{
"created": "Wed, 8 Apr 2020 20:47:56 GMT",
"version": "v1"
}
] |
2020-04-10
|
[
[
"Gadelha",
"Matheus",
""
],
[
"Wang",
"Rui",
""
],
[
"Maji",
"Subhransu",
""
]
] |
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent starting from a random initialization. We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks. We demonstrate our method in a variety of manifold reconstruction applications, such as point cloud denoising and interpolation, achieving considerably better results against competitive baselines while requiring no training data. We also show that when training data is available, our method allows developing alternate parametrizations of surfaces under the framework of AtlasNet, leading to a compact network architecture and better reconstruction results on standard image to shape reconstruction benchmarks.
|
2309.07264
|
Oliver Johnson
|
Vivekanand Paligadu and Oliver Johnson and Matthew Aldridge
|
Small error algorithms for tropical group testing
| null | null | null | null |
cs.IT math.IT math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
We consider a version of the classical group testing problem motivated by PCR
testing for COVID-19. In the so-called tropical group testing model, the
outcome of a test is the lowest cycle threshold (Ct) level of the individuals
pooled within it, rather than a simple binary indicator variable. We introduce
the tropical counterparts of three classical non-adaptive algorithms (COMP, DD
and SCOMP), and analyse their behaviour through both simulations and bounds on
error probabilities. By comparing the results of the tropical and classical
algorithms, we gain insight into the extra information provided by learning the
outcomes (Ct levels) of the tests. We show that in a limiting regime the
tropical COMP algorithm requires as many tests as its classical counterpart,
but that for sufficiently dense problems tropical DD can recover more
information with fewer tests, and can be viewed as essentially optimal in
certain regimes.
|
[
{
"created": "Wed, 13 Sep 2023 18:56:38 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Aug 2024 05:51:13 GMT",
"version": "v2"
}
] |
2024-08-16
|
[
[
"Paligadu",
"Vivekanand",
""
],
[
"Johnson",
"Oliver",
""
],
[
"Aldridge",
"Matthew",
""
]
] |
We consider a version of the classical group testing problem motivated by PCR testing for COVID-19. In the so-called tropical group testing model, the outcome of a test is the lowest cycle threshold (Ct) level of the individuals pooled within it, rather than a simple binary indicator variable. We introduce the tropical counterparts of three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through both simulations and bounds on error probabilities. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information provided by learning the outcomes (Ct levels) of the tests. We show that in a limiting regime the tropical COMP algorithm requires as many tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and can be viewed as essentially optimal in certain regimes.
|
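The tropical testing model in the abstract above lends itself to a compact sketch: a pool's outcome is the minimum Ct level among its members, with infinity encoding a negative test, and a COMP-style decoder declares non-defective anyone who appears in a negative pool. This is an illustrative reading of the model only; the pool design, the Ct encoding, and the max-rule for estimating a defective's Ct level are our assumptions, not the paper's exact tropical COMP decoder.

```python
import math

def tropical_test_outcomes(pools, ct):
    # A pool's outcome is the lowest Ct level among its members;
    # math.inf encodes a negative test (no infected individual present).
    return [min((ct[i] for i in pool), default=math.inf) for pool in pools]

def tropical_comp(pools, outcomes, n):
    # COMP-style decoding: anyone appearing in a negative (infinite-outcome)
    # pool is declared non-defective; everyone else is declared defective,
    # with Ct estimated as the largest outcome among their pools.
    estimates = []
    for i in range(n):
        my_tests = [t for t, pool in enumerate(pools) if i in pool]
        if any(math.isinf(outcomes[t]) for t in my_tests):
            estimates.append(math.inf)   # non-defective
        else:
            estimates.append(max(outcomes[t] for t in my_tests))
    return estimates

# Toy instance: individuals 0 and 3 infected, with Ct levels 20 and 30.
pools = [{0, 1}, {1, 2}, {0, 3}, {2, 3}]
ct_true = [20.0, math.inf, math.inf, 30.0]
outcomes = tropical_test_outcomes(pools, ct_true)
decoded = tropical_comp(pools, outcomes, n=4)
```

On this toy instance the decoder recovers the true Ct vector; the extra information in the real-valued outcomes, relative to binary tests, is what the paper quantifies.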
1911.11376
|
Markus Knecht
|
Markus Knecht
|
Mandala: A Smart Contract Programming Language
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart contracts on a blockchain behave precisely as specified by their code.
A vulnerability in this code can lead to unexpected behaviour, which is hard
to fix because a blockchain does not allow smart contract code to be changed
after deployment. Such vulnerabilities have led to several incidents; in the
aftermath of one such event, a hard fork between Ethereum and Ethereum
Classic was the result. This thesis proposes to develop a new smart contract
programming language with a primary focus on safety and auditability, and
with the intention of preventing as many of the known categories of
vulnerabilities by design as possible. The programming language's code is
validated during deployment and afterwards isolated from other smart
contracts running on the same blockchain to enforce compile-time guarantees
at runtime. The designed programming language evaluates new concepts and
paradigms rarely used in non-smart-contract environments for their potential
benefit in a smart contract environment.
|
[
{
"created": "Tue, 26 Nov 2019 07:33:22 GMT",
"version": "v1"
}
] |
2019-11-27
|
[
[
"Knecht",
"Markus",
""
]
] |
Smart contracts on a blockchain behave precisely as specified by their code. A vulnerability in this code can lead to unexpected behaviour, which is hard to fix because a blockchain does not allow smart contract code to be changed after deployment. Such vulnerabilities have led to several incidents; in the aftermath of one such event, a hard fork between Ethereum and Ethereum Classic was the result. This thesis proposes to develop a new smart contract programming language with a primary focus on safety and auditability, and with the intention of preventing as many of the known categories of vulnerabilities by design as possible. The programming language's code is validated during deployment and afterwards isolated from other smart contracts running on the same blockchain to enforce compile-time guarantees at runtime. The designed programming language evaluates new concepts and paradigms rarely used in non-smart-contract environments for their potential benefit in a smart contract environment.
|
2103.00902
|
Bamdev Mishra
|
Bamdev Mishra, N T V Satyadev, Hiroyuki Kasai, and Pratik Jawanpuria
|
Manifold optimization for non-linear optimal transport problems
|
technical report, change in title, addition of experiments
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optimal transport (OT) has recently found widespread interest in machine
learning. It makes it possible to define novel distances between probability
measures, which have shown promise in several applications. In this work, we
discuss how to computationally approach general non-linear OT problems within
the framework of Riemannian manifold optimization. The basis of this is the
manifold of doubly stochastic matrices (and their generalization). Even though
the manifold geometry is not new, surprisingly, its usefulness for solving
general non-linear OT problems has not been widely exploited. To this end, we
specifically discuss optimization-related ingredients that allow modeling the
OT problem on smooth Riemannian manifolds by exploiting the geometry of the
search space. We also discuss extensions where we reuse the developed
optimization ingredients. We make available the Manifold optimization-based
Optimal Transport, or MOT, repository with codes useful for solving OT
problems in Python and Matlab. The codes are available at
\url{https://github.com/SatyadevNtv/MOT}.
|
[
{
"created": "Mon, 1 Mar 2021 10:49:19 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Oct 2021 06:01:03 GMT",
"version": "v2"
}
] |
2021-10-11
|
[
[
"Mishra",
"Bamdev",
""
],
[
"Satyadev",
"N T V",
""
],
[
"Kasai",
"Hiroyuki",
""
],
[
"Jawanpuria",
"Pratik",
""
]
] |
Optimal transport (OT) has recently found widespread interest in machine learning. It makes it possible to define novel distances between probability measures, which have shown promise in several applications. In this work, we discuss how to computationally approach general non-linear OT problems within the framework of Riemannian manifold optimization. The basis of this is the manifold of doubly stochastic matrices (and their generalization). Even though the manifold geometry is not new, surprisingly, its usefulness for solving general non-linear OT problems has not been widely exploited. To this end, we specifically discuss optimization-related ingredients that allow modeling the OT problem on smooth Riemannian manifolds by exploiting the geometry of the search space. We also discuss extensions where we reuse the developed optimization ingredients. We make available the Manifold optimization-based Optimal Transport, or MOT, repository with codes useful for solving OT problems in Python and Matlab. The codes are available at \url{https://github.com/SatyadevNtv/MOT}.
|
1705.01226
|
EPTCS
|
David M. Russinoff (ARM Ltd.)
|
A Computationally Surveyable Proof of the Group Properties of an
Elliptic Curve
|
In Proceedings ACL2Workshop 2017, arXiv:1705.00766
|
EPTCS 249, 2017, pp. 30-46
|
10.4204/EPTCS.249.3
| null |
cs.CR math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an elementary proof of the group properties of the elliptic curve
known as "Curve25519", as a component of a comprehensive proof of correctness
of a hardware implementation of the associated Diffie-Hellman key agreement
algorithm. The entire proof has been formalized and mechanically verified with
ACL2, and is computationally surveyable in the sense that all steps that
require mechanical support are presented in such a way that they may readily
be reproduced in any suitable programming language.
|
[
{
"created": "Wed, 3 May 2017 01:48:57 GMT",
"version": "v1"
}
] |
2017-05-04
|
[
[
"Russinoff",
"David M.",
"",
"ARM Ltd."
]
] |
We present an elementary proof of the group properties of the elliptic curve known as "Curve25519", as a component of a comprehensive proof of correctness of a hardware implementation of the associated Diffie-Hellman key agreement algorithm. The entire proof has been formalized and mechanically verified with ACL2, and is computationally surveyable in the sense that all steps that require mechanical support are presented in such a way that they may readily be reproduced in any suitable programming language.
|
1202.2037
|
Lianlin Li Dr
|
Lianlin Li
|
Note on RIP-based Co-sparse Analysis
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years there has been increasing interest in recovering signals from
undersampled data when such signals are sparse under some orthogonal
dictionary or tight frame, which is referred to as the sparse synthesis model.
More recently, its counterpart, the sparse analysis model, has also attracted
researchers' attention, since many practical signals of interest are sparse in
a truly redundant dictionary. This short paper presents an important
complement to the results in the existing literature on the sparse analysis
model. First, we give a natural generalization of the well-known restricted
isometry property (RIP) to the sparse analysis model, where a truly arbitrary
incoherent dictionary is considered. Second, we study the theoretical
guarantee for the accurate recovery of signals that are sparse in general
redundant dictionaries through solving an l1-norm sparsity-promoting
optimization problem. This work shows not only that compressed sensing is
viable in the context of sparse analysis, but also that accurate recovery is
possible by solving an l1-minimization problem.
|
[
{
"created": "Thu, 9 Feb 2012 16:34:47 GMT",
"version": "v1"
}
] |
2012-02-10
|
[
[
"Li",
"Lianlin",
""
]
] |
In recent years there has been increasing interest in recovering signals from undersampled data when such signals are sparse under some orthogonal dictionary or tight frame, which is referred to as the sparse synthesis model. More recently, its counterpart, the sparse analysis model, has also attracted researchers' attention, since many practical signals of interest are sparse in a truly redundant dictionary. This short paper presents an important complement to the results in the existing literature on the sparse analysis model. First, we give a natural generalization of the well-known restricted isometry property (RIP) to the sparse analysis model, where a truly arbitrary incoherent dictionary is considered. Second, we study the theoretical guarantee for the accurate recovery of signals that are sparse in general redundant dictionaries through solving an l1-norm sparsity-promoting optimization problem. This work shows not only that compressed sensing is viable in the context of sparse analysis, but also that accurate recovery is possible by solving an l1-minimization problem.
|
1903.07904
|
Sadaf Zuhra Ms
|
Sadaf ul Zuhra, Prasanna Chaporkar, Abhay Karandikar
|
Resource Allocation for Loss Tolerant Video Streaming in eMBMS
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bandwidth-hungry video content has become the dominant contributor to data
traffic the world over. Cellular networks are constantly evolving to meet the
growing traffic demands. Over the past few years, wireless multicast has been
garnering a lot of attention as a means of efficient resource utilization.
Multicast transmission lets spectral resources be shared between users
streaming the same content. Even though multicast transmission allows multiple
users to be served on the same resources, in order to serve all these users
successfully, the base station cannot transmit the content at a rate greater
than that decodable by the user with the worst channel conditions. In this
paper, we propose a way to overcome this bottleneck. Video streaming services
can sustain a certain amount of packet loss without any significant
degradation in the quality experienced by the users. We leverage this
loss-tolerant nature of video streaming applications to improve the
performance of multicast video services in LTE and 5G. We convert the problem
of resource allocation for loss-tolerant multicasting into the problem of
stabilizing a queueing system. We then propose two throughput-optimal Maximum
Weight (MW) policies that successfully stabilize the constructed queueing
system. However, brute-force implementation of MW policies is mostly NP-hard.
To overcome this, we propose a maximum-weight bipartite matching approach that
results in a polynomial-time implementation of the proposed policies. We also
evaluate the performance of our policies via extensive simulations.
|
[
{
"created": "Tue, 19 Mar 2019 09:39:42 GMT",
"version": "v1"
}
] |
2019-03-20
|
[
[
"Zuhra",
"Sadaf ul",
""
],
[
"Chaporkar",
"Prasanna",
""
],
[
"Karandikar",
"Abhay",
""
]
] |
Bandwidth-hungry video content has become the dominant contributor to data traffic the world over. Cellular networks are constantly evolving to meet the growing traffic demands. Over the past few years, wireless multicast has been garnering a lot of attention as a means of efficient resource utilization. Multicast transmission lets spectral resources be shared between users streaming the same content. Even though multicast transmission allows multiple users to be served on the same resources, in order to serve all these users successfully, the base station cannot transmit the content at a rate greater than that decodable by the user with the worst channel conditions. In this paper, we propose a way to overcome this bottleneck. Video streaming services can sustain a certain amount of packet loss without any significant degradation in the quality experienced by the users. We leverage this loss-tolerant nature of video streaming applications to improve the performance of multicast video services in LTE and 5G. We convert the problem of resource allocation for loss-tolerant multicasting into the problem of stabilizing a queueing system. We then propose two throughput-optimal Maximum Weight (MW) policies that successfully stabilize the constructed queueing system. However, brute-force implementation of MW policies is mostly NP-hard. To overcome this, we propose a maximum-weight bipartite matching approach that results in a polynomial-time implementation of the proposed policies. We also evaluate the performance of our policies via extensive simulations.
|
2405.11905
|
Jaewon Son
|
Jaewon Son, Jaehun Park, Kwangsu Kim
|
CSTA: CNN-based Spatiotemporal Attention for Video Summarization
|
Accepted at CVPR 2024
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video summarization aims to generate a concise representation of a video,
capturing its essential content and key moments while reducing its overall
length. Although several methods employ attention mechanisms to handle
long-term dependencies, they often fail to capture the visual significance
inherent in frames. To address this limitation, we propose a CNN-based
SpatioTemporal Attention (CSTA) method that stacks each feature of frames from
a single video to form image-like frame representations and applies 2D CNN to
these frame features. Our methodology relies on CNN to comprehend the inter and
intra-frame relations and to find crucial attributes in videos by exploiting
its ability to learn absolute positions within images. In contrast to previous
work compromising efficiency by designing additional modules to focus on
spatial importance, CSTA requires minimal computational overhead as it uses CNN
as a sliding window. Extensive experiments on two benchmark datasets (SumMe and
TVSum) demonstrate that our proposed approach achieves state-of-the-art
performance with fewer MACs compared to previous methods. Codes are available
at https://github.com/thswodnjs3/CSTA.
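The "CNN as a sliding window" idea can be sketched with a naive 2D convolution over an image-like stack of frame features. This is an illustrative reimplementation of the basic operation only, not the authors' model; the feature values and kernel are made up, and the released code at the linked repository is the authoritative reference.

```python
def conv2d_valid(x, kernel):
    """Naive 'valid' 2D convolution over a 2D list: the sliding-window
    operation applied to frame features stacked into an image-like map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(x), len(x[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(x[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Toy stack: 4 frames x 4 feature dimensions (made-up values).
feats = [[float(4 * i + j) for j in range(4)] for i in range(4)]
avg_kernel = [[0.25, 0.25], [0.25, 0.25]]  # 2x2 local averaging window
attn = conv2d_valid(feats, avg_kernel)
print(len(attn), len(attn[0]))  # 3 3
```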
|
[
{
"created": "Mon, 20 May 2024 09:38:37 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 07:04:23 GMT",
"version": "v2"
}
] |
2024-05-22
|
[
[
"Son",
"Jaewon",
""
],
[
"Park",
"Jaehun",
""
],
[
"Kim",
"Kwangsu",
""
]
] |
Video summarization aims to generate a concise representation of a video, capturing its essential content and key moments while reducing its overall length. Although several methods employ attention mechanisms to handle long-term dependencies, they often fail to capture the visual significance inherent in frames. To address this limitation, we propose a CNN-based SpatioTemporal Attention (CSTA) method that stacks each feature of frames from a single video to form image-like frame representations and applies 2D CNN to these frame features. Our methodology relies on CNN to comprehend the inter and intra-frame relations and to find crucial attributes in videos by exploiting its ability to learn absolute positions within images. In contrast to previous work compromising efficiency by designing additional modules to focus on spatial importance, CSTA requires minimal computational overhead as it uses CNN as a sliding window. Extensive experiments on two benchmark datasets (SumMe and TVSum) demonstrate that our proposed approach achieves state-of-the-art performance with fewer MACs compared to previous methods. Codes are available at https://github.com/thswodnjs3/CSTA.
|
1908.02555
|
Damien Chablat
|
Yang Zhang (LS2N, RoMas), Vigen Arakelian (DGMA, LS2N, RoMas),
Baptiste Veron (IRT Jules Verne), Damien Chablat (LS2N, ReV)
|
Key Features of the Coupled Hand-operated Balanced Manipulator (HOBM)
and Lightweight Robot (LWR)
| null |
Advances in Mechanism and Machine Science, pp.2289-2298, 2019
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper deals with coupled systems including hand-operated balanced
manipulators and lightweight robots. The aim of such a cooperation is to
displace heavy payloads with less powerful robots. In other words, in the
coupled system for handling heavy payloads, the operator of a HOBM is replaced
by a LWR. The advantages of the coupled HOBM and LWR are disclosed and the
optimal design of the cooperative workspace is discussed. The behavior of the
coupled system in a static mode, when the velocities of the HOBM are limited,
does not present any special problems. In this mode, the inertial forces are
significantly lower than the gravitational ones. The payload is completely
balanced by the HOBM and the LWR performs the prescribed displacements with
low load. However, in a dynamic mode, the HOBM with massive links creates
additional loads on the LWR, which can be significant. The present study
considers a method for determining the inertia effects of the HOBM on the LWR.
The numerical simulations show a significant increase of the input torques due
to the inertia forces of the HOBM. The behavior of the HOBM with cable lift
and the LWR is also examined.
|
[
{
"created": "Wed, 7 Aug 2019 12:21:45 GMT",
"version": "v1"
}
] |
2019-08-08
|
[
[
"Zhang",
"Yang",
"",
"LS2N, RoMas"
],
[
"Arakelian",
"Vigen",
"",
"DGMA, LS2N, RoMas"
],
[
"Veron",
"Baptiste",
"",
"IRT Jules Verne"
],
[
"Chablat",
"Damien",
"",
"LS2N, ReV"
]
] |
The paper deals with coupled systems including hand-operated balanced manipulators and lightweight robots. The aim of such a cooperation is to displace heavy payloads with less powerful robots. In other words, in the coupled system for handling heavy payloads, the operator of a HOBM is replaced by a LWR. The advantages of the coupled HOBM and LWR are disclosed and the optimal design of the cooperative workspace is discussed. The behavior of the coupled system in a static mode, when the velocities of the HOBM are limited, does not present any special problems. In this mode, the inertial forces are significantly lower than the gravitational ones. The payload is completely balanced by the HOBM and the LWR performs the prescribed displacements with low load. However, in a dynamic mode, the HOBM with massive links creates additional loads on the LWR, which can be significant. The present study considers a method for determining the inertia effects of the HOBM on the LWR. The numerical simulations show a significant increase of the input torques due to the inertia forces of the HOBM. The behavior of the HOBM with cable lift and the LWR is also examined.
|
2202.06262
|
Rajni Dabas
|
Rajni Dabas and Neelima Gupta
|
Locating Charging Stations: Connected, Capacitated and Prize-Collecting
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study the charging station location problem as a facility
location problem and its variants ($k$-Median, $k$-Facility Location and
$k$-Center). We study the connectivity and the capacity constraints in these
problems.
Capacity and connectivity constraints have been studied in the literature
separately for all these problems. We give the first constant-factor
approximations when both constraints are present. Extending/modifying the
techniques used for connected variants of the problem to include capacities,
or for capacitated variants to include connectivity, is a tedious and
challenging task. In this paper, we combine the two constraints by reducing
the problem to underlying well-studied problems, solving them as black boxes,
and combining the obtained solutions. We also combine the two constraints in
the prize-collecting setup.
In the prize-collecting setup, the problems have not even been studied when
only one of the constraints is present. We present constant-factor
approximations for them as well.
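For orientation on the base problems, here is the classic farthest-point (Gonzalez) heuristic for plain $k$-center, a 2-approximation. It handles neither capacities, connectivity, nor prizes, so it only illustrates the underlying well-studied problem that the paper's reductions use as a black box; the points are made up.

```python
import math

def greedy_k_center(points, k):
    """Gonzalez's farthest-point heuristic: repeatedly open a center at the
    point farthest from all centers opened so far. A 2-approximation for the
    unconstrained k-center objective (max client-to-center distance)."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points,
                       key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
centers, radius = greedy_k_center(pts, 3)
print(centers, radius)
```

On this instance the heuristic opens centers at (0, 0), (10, 1) and (5, 5), leaving a covering radius of 1.0.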
|
[
{
"created": "Sun, 13 Feb 2022 08:55:13 GMT",
"version": "v1"
}
] |
2022-02-15
|
[
[
"Dabas",
"Rajni",
""
],
[
"Gupta",
"Neelima",
""
]
] |
In this paper, we study the charging station location problem as a facility location problem and its variants ($k$-Median, $k$-Facility Location and $k$-Center). We study the connectivity and the capacity constraints in these problems. Capacity and connectivity constraints have been studied in the literature separately for all these problems. We give the first constant-factor approximations when both constraints are present. Extending/modifying the techniques used for connected variants of the problem to include capacities, or for capacitated variants to include connectivity, is a tedious and challenging task. In this paper, we combine the two constraints by reducing the problem to underlying well-studied problems, solving them as black boxes, and combining the obtained solutions. We also combine the two constraints in the prize-collecting setup. In the prize-collecting setup, the problems have not even been studied when only one of the constraints is present. We present constant-factor approximations for them as well.
|
1912.00191
|
Jiankai Sun
|
Junning Huang, Sirui Xie, Jiankai Sun, Qiurui Ma, Chunxiao Liu,
Jianping Shi, Dahua Lin, Bolei Zhou
|
Learning a Decision Module by Imitating Driver's Control Behaviors
|
Proceedings of the Conference on Robot Learning (CoRL) 2020
| null | null | null |
cs.AI cs.RO eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving systems have a pipeline of perception, decision, planning,
and control. The decision module processes information from the perception
module and directs the execution of downstream planning and control modules. On
the other hand, the recent success of deep learning suggests that this pipeline
could be replaced by end-to-end neural control policies; however, safety cannot
be well guaranteed for data-driven neural networks. In this work, we
propose a hybrid framework to learn neural decisions in the classical modular
pipeline through end-to-end imitation learning. This hybrid framework can
preserve the merits of the classical pipeline such as the strict enforcement of
physical and logical constraints while learning complex driving decisions from
data. To circumvent the ambiguous annotation of human driving decisions, our
method learns high-level driving decisions by imitating low-level control
behaviors. We show in the simulation experiments that our modular driving agent
can generalize its driving decision and control to various complex scenarios
where the rule-based programs fail. It can also generate smoother and safer
driving trajectories than end-to-end neural policies.
|
[
{
"created": "Sat, 30 Nov 2019 11:57:35 GMT",
"version": "v1"
},
{
"created": "Wed, 5 May 2021 02:06:49 GMT",
"version": "v2"
},
{
"created": "Thu, 6 May 2021 01:38:16 GMT",
"version": "v3"
}
] |
2021-05-07
|
[
[
"Huang",
"Junning",
""
],
[
"Xie",
"Sirui",
""
],
[
"Sun",
"Jiankai",
""
],
[
"Ma",
"Qiurui",
""
],
[
"Liu",
"Chunxiao",
""
],
[
"Shi",
"Jianping",
""
],
[
"Lin",
"Dahua",
""
],
[
"Zhou",
"Bolei",
""
]
] |
Autonomous driving systems have a pipeline of perception, decision, planning, and control. The decision module processes information from the perception module and directs the execution of downstream planning and control modules. On the other hand, the recent success of deep learning suggests that this pipeline could be replaced by end-to-end neural control policies; however, safety cannot be well guaranteed for data-driven neural networks. In this work, we propose a hybrid framework to learn neural decisions in the classical modular pipeline through end-to-end imitation learning. This hybrid framework can preserve the merits of the classical pipeline such as the strict enforcement of physical and logical constraints while learning complex driving decisions from data. To circumvent the ambiguous annotation of human driving decisions, our method learns high-level driving decisions by imitating low-level control behaviors. We show in the simulation experiments that our modular driving agent can generalize its driving decision and control to various complex scenarios where the rule-based programs fail. It can also generate smoother and safer driving trajectories than end-to-end neural policies.
|
2310.06947
|
Roc\'io Carratal\'a-S\'aez
|
Roc\'io Carratal\'a-S\'aez and Francisco J. and\'ujar and Yuri Torres
and Arturo Gonzalez-Escribano and Diego R. Llanos
|
Open SYCL on heterogeneous GPU systems: A case of study
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Computational platforms for high-performance scientific applications are
becoming more heterogeneous, including hardware accelerators such as multiple
GPUs. Applications in a wide variety of scientific fields require an efficient
and careful management of the computational resources of this type of hardware
to obtain the best possible performance. However, there are currently different
GPU vendors, architectures and families that can be found in heterogeneous
clusters or machines. Programming with the vendor provided languages or
frameworks, and optimizing for specific devices, may become cumbersome and
compromise portability to other systems. To overcome this problem, several
proposals for high-level heterogeneous programming have appeared, trying to
reduce the development effort and increase functional and performance
portability, specifically when using GPU hardware accelerators.
This paper evaluates the SYCL programming model, using the Open SYCL
compiler, from two different perspectives: The performance it offers when
dealing with single or multiple GPU devices from the same or different vendors,
and the development effort required to implement the code. We use as a case
study the Finite Time Lyapunov Exponent calculation over two real-world
scenarios and compare the performance and the development effort of its Open
SYCL-based version against the equivalent versions that use CUDA or HIP.
Based on the experimental results, we observe that the use of SYCL does not
lead to a remarkable overhead in terms of the GPU kernels execution time. In
general terms, the Open SYCL development effort for the host code is lower than
that observed with CUDA or HIP. Moreover, the SYCL version can take advantage
of both CUDA and AMD GPU devices simultaneously much more easily than by
directly using the vendor-specific programming solutions.
|
[
{
"created": "Tue, 10 Oct 2023 19:07:52 GMT",
"version": "v1"
}
] |
2023-10-12
|
[
[
"Carratalá-Sáez",
"Rocío",
""
],
[
"andújar",
"Francisco J.",
""
],
[
"Torres",
"Yuri",
""
],
[
"Gonzalez-Escribano",
"Arturo",
""
],
[
"Llanos",
"Diego R.",
""
]
] |
Computational platforms for high-performance scientific applications are becoming more heterogeneous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require an efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, there are currently different GPU vendors, architectures and families that can be found in heterogeneous clusters or machines. Programming with the vendor provided languages or frameworks, and optimizing for specific devices, may become cumbersome and compromise portability to other systems. To overcome this problem, several proposals for high-level heterogeneous programming have appeared, trying to reduce the development effort and increase functional and performance portability, specifically when using GPU hardware accelerators. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: The performance it offers when dealing with single or multiple GPU devices from the same or different vendors, and the development effort required to implement the code. We use as a case study the Finite Time Lyapunov Exponent calculation over two real-world scenarios and compare the performance and the development effort of its Open SYCL-based version against the equivalent versions that use CUDA or HIP. Based on the experimental results, we observe that the use of SYCL does not lead to a remarkable overhead in terms of the GPU kernels execution time. In general terms, the Open SYCL development effort for the host code is lower than that observed with CUDA or HIP. Moreover, the SYCL version can take advantage of both CUDA and AMD GPU devices simultaneously much more easily than by directly using the vendor-specific programming solutions.
|
1609.00214
|
Wojciech Czerwi\'nski
|
Lorenzo Clemente, Wojciech Czerwi\'nski, S{\l}awomir Lasota, Charles
Paperman
|
Separability of Reachability Sets of Vector Addition Systems
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given two families of sets $\mathcal{F}$ and $\mathcal{G}$, the $\mathcal{F}$
separability problem for $\mathcal{G}$ asks whether for two given sets $U, V
\in \mathcal{G}$ there exists a set $S \in \mathcal{F}$, such that $U$ is
included in $S$ and $V$ is disjoint with $S$. We consider two families of sets
$\mathcal{F}$: modular sets $S \subseteq \mathbb{N}^d$, defined as unions of
equivalence classes modulo some natural number $n \in \mathbb{N}$, and unary
sets. Our main result is decidability of modular and unary separability for the
class $\mathcal{G}$ of reachability sets of Vector Addition Systems, Petri
Nets, Vector Addition Systems with States, and for sections thereof.
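On finite sets the notion is easy to make concrete: a modular separator exists when some modulus sends the coordinate-wise residues of $U$ and $V$ to disjoint sets. The sketch below is a toy search over small moduli, purely to illustrate the definition; it is not the paper's decision procedure, which must handle infinite reachability sets.

```python
def modular_separator(U, V, max_n=50):
    """Look for n such that the coordinate-wise residues (mod n) of U and V
    are disjoint; S = the union of U's residue classes mod n then contains
    U and misses V. Toy illustration over finite sets only."""
    for n in range(2, max_n + 1):
        res_u = {tuple(x % n for x in u) for u in U}
        res_v = {tuple(x % n for x in v) for v in V}
        if res_u.isdisjoint(res_v):
            return n, res_u
    return None

U = [(0, 0), (2, 4), (4, 2)]   # every coordinate even
V = [(1, 0), (3, 3)]           # some coordinate odd
print(modular_separator(U, V))  # n = 2 already separates
```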
|
[
{
"created": "Thu, 1 Sep 2016 12:42:18 GMT",
"version": "v1"
}
] |
2016-09-02
|
[
[
"Clemente",
"Lorenzo",
""
],
[
"Czerwiński",
"Wojciech",
""
],
[
"Lasota",
"Sławomir",
""
],
[
"Paperman",
"Charles",
""
]
] |
Given two families of sets $\mathcal{F}$ and $\mathcal{G}$, the $\mathcal{F}$ separability problem for $\mathcal{G}$ asks whether for two given sets $U, V \in \mathcal{G}$ there exists a set $S \in \mathcal{F}$, such that $U$ is included in $S$ and $V$ is disjoint with $S$. We consider two families of sets $\mathcal{F}$: modular sets $S \subseteq \mathbb{N}^d$, defined as unions of equivalence classes modulo some natural number $n \in \mathbb{N}$, and unary sets. Our main result is decidability of modular and unary separability for the class $\mathcal{G}$ of reachability sets of Vector Addition Systems, Petri Nets, Vector Addition Systems with States, and for sections thereof.
|
2403.11069
|
Mohammad Heydari
|
Mohammad Heydari, Mohsen Khazeni, Mohammad Ali Soltanshahi
|
Deep Learning-based Sentiment Analysis in Persian Language
|
7 pages, 5 figures, 4 tables
| null |
10.1109/ICWR51868.2021.9443152
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been a growing interest in the use of deep learning
techniques for tasks in natural language processing (NLP), with sentiment
analysis being one of the most challenging areas, particularly in the Persian
language. The vast amounts of content generated by Persian users on thousands
of websites, blogs, and social networks such as Telegram, Instagram, and
Twitter present a rich resource of information. Deep learning techniques have
become increasingly favored for extracting insights from this extensive pool of
raw data, although they face several challenges. In this study, we introduced
and implemented a hybrid deep learning-based model for sentiment analysis,
using customer review data from the Digikala Online Retailer website. We
employed a variety of deep learning networks and regularization techniques as
classifiers. Ultimately, our hybrid approach yielded an impressive performance,
achieving an F1 score of 78.3 across three sentiment categories: positive,
negative, and neutral.
|
[
{
"created": "Sun, 17 Mar 2024 03:15:29 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Heydari",
"Mohammad",
""
],
[
"Khazeni",
"Mohsen",
""
],
[
"Soltanshahi",
"Mohammad Ali",
""
]
] |
Recently, there has been a growing interest in the use of deep learning techniques for tasks in natural language processing (NLP), with sentiment analysis being one of the most challenging areas, particularly in the Persian language. The vast amounts of content generated by Persian users on thousands of websites, blogs, and social networks such as Telegram, Instagram, and Twitter present a rich resource of information. Deep learning techniques have become increasingly favored for extracting insights from this extensive pool of raw data, although they face several challenges. In this study, we introduced and implemented a hybrid deep learning-based model for sentiment analysis, using customer review data from the Digikala Online Retailer website. We employed a variety of deep learning networks and regularization techniques as classifiers. Ultimately, our hybrid approach yielded an impressive performance, achieving an F1 score of 78.3 across three sentiment categories: positive, negative, and neutral.
|
1710.04313
|
Marc Zeitoun
|
Thomas Place and Marc Zeitoun
|
Generic Results for Concatenation Hierarchies
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the theory of formal languages, the understanding of concatenation
hierarchies of regular languages is one of the most fundamental and challenging
topics. In this paper, we survey progress made in the comprehension of this
problem since 1971, and we establish new generic statements regarding this
problem.
|
[
{
"created": "Wed, 11 Oct 2017 21:38:35 GMT",
"version": "v1"
}
] |
2017-10-13
|
[
[
"Place",
"Thomas",
""
],
[
"Zeitoun",
"Marc",
""
]
] |
In the theory of formal languages, the understanding of concatenation hierarchies of regular languages is one of the most fundamental and challenging topics. In this paper, we survey progress made in the comprehension of this problem since 1971, and we establish new generic statements regarding this problem.
|
2211.13219
|
Karolis Martinkus
|
Jeremia Geiger, Karolis Martinkus, Oliver Richter and Roger
Wattenhofer
|
Automating Rigid Origami Design
|
IJCAI 2023 AI, Arts & Creativity Special Track
| null | null | null |
cs.GR cs.AI cs.LG cs.NE cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Rigid origami has shown potential in a large diversity of practical
applications. However, current rigid origami crease pattern design mostly
relies on known tessellations. This strongly limits the diversity and novelty
of patterns that can be created. In this work, we build upon the recently
developed principle of three units method to formulate rigid origami design as
a discrete optimization problem, the rigid origami game. Our implementation
allows for a simple definition of diverse objectives and thereby expands the
potential of rigid origami further to optimized, application-specific crease
patterns. We showcase the flexibility of our formulation through the use of a
diverse set of search methods in several illustrative case studies. We are not
only able to construct various patterns that approximate given target shapes,
but also to specify abstract, function-based rewards which result in novel,
foldable and functional designs for everyday objects.
|
[
{
"created": "Sun, 20 Nov 2022 17:13:50 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2023 12:45:57 GMT",
"version": "v2"
}
] |
2023-05-01
|
[
[
"Geiger",
"Jeremia",
""
],
[
"Martinkus",
"Karolis",
""
],
[
"Richter",
"Oliver",
""
],
[
"Wattenhofer",
"Roger",
""
]
] |
Rigid origami has shown potential in a large diversity of practical applications. However, current rigid origami crease pattern design mostly relies on known tessellations. This strongly limits the diversity and novelty of patterns that can be created. In this work, we build upon the recently developed principle of three units method to formulate rigid origami design as a discrete optimization problem, the rigid origami game. Our implementation allows for a simple definition of diverse objectives and thereby expands the potential of rigid origami further to optimized, application-specific crease patterns. We showcase the flexibility of our formulation through the use of a diverse set of search methods in several illustrative case studies. We are not only able to construct various patterns that approximate given target shapes, but also to specify abstract, function-based rewards which result in novel, foldable and functional designs for everyday objects.
|
2004.11814
|
Feng Li
|
Feng Li, Runming Cong, Huihui Bai, and Yifan He
|
Deep Interleaved Network for Image Super-Resolution With Asymmetric
Co-Attention
|
Accepted by the IJCAI-PRICAI 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Convolutional Neural Network (CNN)-based image super-resolution
(SR) has shown significant success in the literature. However, these methods
are implemented as a single-path stream to enrich feature maps from the input for
the final prediction, which fail to fully incorporate former low-level features
into later high-level features. In this paper, to tackle this problem, we
propose a deep interleaved network (DIN) to learn how information at different
states should be combined for image SR where shallow information guides deep
representative features prediction. Our DIN follows a multi-branch pattern
allowing multiple interconnected branches to interleave and fuse at different
states. Besides, the asymmetric co-attention (AsyCA) is proposed and attached
to the interleaved nodes to adaptively emphasize informative features from
different states and improve the discriminative ability of networks. Extensive
experiments demonstrate the superiority of our proposed DIN in comparison with
the state-of-the-art SR methods.
|
[
{
"created": "Fri, 24 Apr 2020 15:49:18 GMT",
"version": "v1"
}
] |
2020-04-27
|
[
[
"Li",
"Feng",
""
],
[
"Cong",
"Runming",
""
],
[
"Bai",
"Huihui",
""
],
[
"He",
"Yifan",
""
]
] |
Recently, Convolutional Neural Network (CNN)-based image super-resolution (SR) has shown significant success in the literature. However, these methods are implemented as a single-path stream to enrich feature maps from the input for the final prediction, which fail to fully incorporate former low-level features into later high-level features. In this paper, to tackle this problem, we propose a deep interleaved network (DIN) to learn how information at different states should be combined for image SR where shallow information guides deep representative features prediction. Our DIN follows a multi-branch pattern allowing multiple interconnected branches to interleave and fuse at different states. Besides, the asymmetric co-attention (AsyCA) is proposed and attached to the interleaved nodes to adaptively emphasize informative features from different states and improve the discriminative ability of networks. Extensive experiments demonstrate the superiority of our proposed DIN in comparison with the state-of-the-art SR methods.
|
1807.06132
|
Fatemeh Sadat Saleh
|
Fatemeh Sadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann,
Lars Petersson, Jose M. Alvarez
|
Effective Use of Synthetic Data for Urban Scene Semantic Segmentation
|
Accepted in European Conference on Computer Vision (ECCV), 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training a deep network to perform semantic segmentation requires large
amounts of labeled data. To alleviate the manual effort of annotating real
images, researchers have investigated the use of synthetic data, which can be
labeled automatically. Unfortunately, a network trained on synthetic data
performs relatively poorly on real images. While this can be addressed by
domain adaptation, existing methods all require having access to real images
during training. In this paper, we introduce a drastically different way to
handle synthetic images that does not require seeing any real images at
training time. Our approach builds on the observation that foreground and
background classes are not affected in the same manner by the domain shift, and
thus should be treated differently. In particular, the former should be handled
in a detection-based manner to better account for the fact that, while their
texture in synthetic images is not photo-realistic, their shape looks natural.
Our experiments evidence the effectiveness of our approach on Cityscapes and
CamVid with models trained on synthetic data only.
|
[
{
"created": "Mon, 16 Jul 2018 22:10:09 GMT",
"version": "v1"
}
] |
2018-07-18
|
[
[
"Saleh",
"Fatemeh Sadat",
""
],
[
"Aliakbarian",
"Mohammad Sadegh",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"Petersson",
"Lars",
""
],
[
"Alvarez",
"Jose M.",
""
]
] |
Training a deep network to perform semantic segmentation requires large amounts of labeled data. To alleviate the manual effort of annotating real images, researchers have investigated the use of synthetic data, which can be labeled automatically. Unfortunately, a network trained on synthetic data performs relatively poorly on real images. While this can be addressed by domain adaptation, existing methods all require having access to real images during training. In this paper, we introduce a drastically different way to handle synthetic images that does not require seeing any real images at training time. Our approach builds on the observation that foreground and background classes are not affected in the same manner by the domain shift, and thus should be treated differently. In particular, the former should be handled in a detection-based manner to better account for the fact that, while their texture in synthetic images is not photo-realistic, their shape looks natural. Our experiments evidence the effectiveness of our approach on Cityscapes and CamVid with models trained on synthetic data only.
|
2102.06228
|
Vidyadhar Upadhya
|
Vidyadhar Upadhya and P S Sastry
|
Learning Gaussian-Bernoulli RBMs using Difference of Convex Functions
Optimization
|
Submitted to IEEE-TNNLS
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM) is a useful
generative model that captures meaningful features from the given
$n$-dimensional continuous data. The difficulties associated with learning
GB-RBM are reported extensively in earlier studies. They indicate that the
training of the GB-RBM using the current standard algorithms, namely,
contrastive divergence (CD) and persistent contrastive divergence (PCD), needs
a carefully chosen small learning rate to avoid divergence which, in turn,
results in slow learning. In this work, we alleviate such difficulties by
showing that the negative log-likelihood for a GB-RBM can be expressed as a
difference of convex functions if we keep the variance of the conditional
distribution of visible units (given hidden unit states) and the biases of the
visible units constant. Using this, we propose a stochastic {\em difference of
convex functions} (DC) programming (S-DCP) algorithm for learning the GB-RBM.
We present extensive empirical studies on several benchmark datasets to
validate the performance of this S-DCP algorithm. It is seen that S-DCP is
better than the CD and PCD algorithms in terms of speed of learning and the
quality of the generative model learnt.
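The difference-of-convex idea itself can be seen on a one-dimensional toy: write $f(x) = x^4 - 2x^2$ as $g(x) - h(x)$ with $g(x) = x^4$ and $h(x) = 2x^2$, both convex, and at each step minimize $g$ minus the linearization of $h$. This is only an illustration of DC iteration, not the paper's S-DCP algorithm for GB-RBMs.

```python
def dca_minimize(x0, iters=60):
    """DCA on f(x) = x**4 - 2*x**2 with g(x) = x**4 and h(x) = 2*x**2.
    Step: x_{k+1} = argmin_x g(x) - h'(x_k) * x. Setting the derivative
    4*x**3 - 4*x_k to zero gives the closed form x_{k+1} = cbrt(x_k),
    which converges to a minimizer of f at +1 or -1."""
    x = x0
    for _ in range(iters):
        x = abs(x) ** (1 / 3) * (1 if x >= 0 else -1)  # real cube root
    return x

print(dca_minimize(0.3), dca_minimize(-0.5))  # close to 1.0 and -1.0
```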
|
[
{
"created": "Thu, 11 Feb 2021 19:15:54 GMT",
"version": "v1"
}
] |
2021-02-15
|
[
[
"Upadhya",
"Vidyadhar",
""
],
[
"Sastry",
"P S",
""
]
] |
The Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM) is a useful generative model that captures meaningful features from the given $n$-dimensional continuous data. The difficulties associated with learning GB-RBM are reported extensively in earlier studies. They indicate that the training of the GB-RBM using the current standard algorithms, namely, contrastive divergence (CD) and persistent contrastive divergence (PCD), needs a carefully chosen small learning rate to avoid divergence which, in turn, results in slow learning. In this work, we alleviate such difficulties by showing that the negative log-likelihood for a GB-RBM can be expressed as a difference of convex functions if we keep the variance of the conditional distribution of visible units (given hidden unit states) and the biases of the visible units constant. Using this, we propose a stochastic {\em difference of convex functions} (DC) programming (S-DCP) algorithm for learning the GB-RBM. We present extensive empirical studies on several benchmark datasets to validate the performance of this S-DCP algorithm. It is seen that S-DCP is better than the CD and PCD algorithms in terms of speed of learning and the quality of the generative model learnt.
|
2010.13198
|
Klaus Schneider
|
Klaus Schneider, Beichuan Zhang, Van Sy Mai, Lotfi Benmohamed
|
The Case for Hop-by-Hop Traffic Engineering
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art Internet traffic engineering uses source-based explicit
routing via MPLS or Segment Routing. Though widely adopted in practice, source
routing can face certain inefficiencies and operational issues, caused by its
use of bandwidth reservations.
In this work, we make the case for Hop-by-Hop (HBH) Traffic Engineering:
splitting traffic among nexthops at every router, rather than splitting traffic
among paths only at edge routers. We show that HBH traffic engineering can
achieve the original goals of MPLS (i.e., efficient use of network resources),
with a much simpler design that does not need bandwidth reservations or
predictions of traffic demand.
We implement a prototype in the ns-3 network simulator, to investigate the
cost imposed by 1) the restricted path choice of loop-free HBH multipath
routing, and 2) the distributed decisions of each router, based on its local
network view. We show that the former is more important than the latter, but
that, other than a few outliers, our design shows a performance (= aggregate
user utility) close to the theoretical optimum.
|
[
{
"created": "Sun, 25 Oct 2020 19:14:33 GMT",
"version": "v1"
}
] |
2020-10-27
|
[
[
"Schneider",
"Klaus",
""
],
[
"Zhang",
"Beichuan",
""
],
[
"Mai",
"Van Sy",
""
],
[
"Benmohamed",
"Lotfi",
""
]
] |
State-of-the-art Internet traffic engineering uses source-based explicit routing via MPLS or Segment Routing. Though widely adopted in practice, source routing can face certain inefficiencies and operational issues, caused by its use of bandwidth reservations. In this work, we make the case for Hop-by-Hop (HBH) Traffic Engineering: splitting traffic among nexthops at every router, rather than splitting traffic among paths only at edge routers. We show that HBH traffic engineering can achieve the original goals of MPLS (i.e., efficient use of network resources), with a much simpler design that does not need bandwidth reservations or predictions of traffic demand. We implement a prototype in the ns-3 network simulator, to investigate the cost imposed by 1) the restricted path choice of loop-free HBH multipath routing, and 2) the distributed decisions of each router, based on its local network view. We show that the former is more important than the latter, but that, other than a few outliers, our design shows a performance (= aggregate user utility) close to the theoretical optimum.
|
2406.18459
|
Younghyun Kim
|
Younghyun Kim, Geunmin Hwang, Junyu Zhang, Eunbyung Park
|
DiffuseHigh: Training-free Progressive High-Resolution Image Synthesis
through Structure Guidance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent surge in large-scale generative models has spurred the development
of vast fields in computer vision. In particular, text-to-image diffusion
models have garnered widespread adoption across diverse domains due to their potential
for high-fidelity image generation. Nonetheless, existing large-scale diffusion
models are confined to generating images of up to 1K resolution, which is far
from meeting the demands of contemporary commercial applications. Directly
sampling higher-resolution images often yields results marred by artifacts such
as object repetition and distorted shapes. Addressing the aforementioned issues
typically necessitates training or fine-tuning models on higher resolution
datasets. However, this undertaking poses a formidable challenge due to the
difficulty of collecting large-scale high-resolution content and the
substantial computational resources required. While several preceding works have proposed
alternatives, they often fail to produce convincing results. In this work, we
probe the generative ability of diffusion models at higher resolutions beyond
their original capability and propose a novel progressive approach that fully
utilizes the generated low-resolution image to guide the generation of a
higher-resolution image. Our method obviates the need for additional training
or fine-tuning, which significantly lowers computational costs.
Extensive experiments and results validate the efficiency and efficacy of our
method. Project page: https://yhyun225.github.io/DiffuseHigh/
|
[
{
"created": "Wed, 26 Jun 2024 16:10:31 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jul 2024 06:06:53 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jul 2024 06:18:13 GMT",
"version": "v3"
},
{
"created": "Thu, 11 Jul 2024 15:03:44 GMT",
"version": "v4"
}
] |
2024-07-12
|
[
[
"Kim",
"Younghyun",
""
],
[
"Hwang",
"Geunmin",
""
],
[
"Zhang",
"Junyu",
""
],
[
"Park",
"Eunbyung",
""
]
] |
The recent surge in large-scale generative models has spurred the development of vast fields in computer vision. In particular, text-to-image diffusion models have garnered widespread adoption across diverse domains due to their potential for high-fidelity image generation. Nonetheless, existing large-scale diffusion models are confined to generating images of up to 1K resolution, which is far from meeting the demands of contemporary commercial applications. Directly sampling higher-resolution images often yields results marred by artifacts such as object repetition and distorted shapes. Addressing the aforementioned issues typically necessitates training or fine-tuning models on higher-resolution datasets. However, this undertaking poses a formidable challenge due to the difficulty of collecting large-scale high-resolution content and the substantial computational resources required. While several preceding works have proposed alternatives, they often fail to produce convincing results. In this work, we probe the generative ability of diffusion models at higher resolutions beyond their original capability and propose a novel progressive approach that fully utilizes the generated low-resolution image to guide the generation of a higher-resolution image. Our method obviates the need for additional training or fine-tuning, which significantly lowers computational costs. Extensive experiments and results validate the efficiency and efficacy of our method. Project page: https://yhyun225.github.io/DiffuseHigh/
|
2303.07470
|
Shrihari Sridharan
|
Shrihari Sridharan, Jacob R. Stevens, Kaushik Roy and Anand
Raghunathan
|
X-Former: In-Memory Acceleration of Transformers
| null | null | null | null |
cs.LG cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have achieved great success in a wide variety of natural
language processing (NLP) tasks due to the attention mechanism, which assigns
an importance score for every word relative to other words in a sequence.
However, these models are very large, often reaching hundreds of billions of
parameters, and therefore require a large number of DRAM accesses. Hence,
traditional deep neural network (DNN) accelerators such as GPUs and TPUs face
limitations in processing Transformers efficiently. In-memory accelerators
based on non-volatile memory promise to be an effective solution to this
challenge, since they provide high storage density while performing massively
parallel matrix vector multiplications within memory arrays. However, attention
score computations, which are frequently used in Transformers (unlike CNNs and
RNNs), require matrix vector multiplications (MVM) where both operands change
dynamically for each input. As a result, conventional NVM-based accelerators
incur high write latency and write energy when used for Transformers, and
further suffer from the low endurance of most NVM technologies. To address
these challenges, we present X-Former, a hybrid in-memory hardware accelerator
that consists of both NVM and CMOS processing elements to execute transformer
workloads efficiently. To improve the hardware utilization of X-Former, we also
propose a sequence blocking dataflow, which overlaps the computations of the
two processing elements and reduces execution time. Across several benchmarks,
we show that X-Former achieves up to 85x and 7.5x improvements in latency and
energy over an NVIDIA GeForce GTX 1060 GPU and up to 10.7x and 4.6x improvements
in latency and energy over a state-of-the-art in-memory NVM accelerator.
|
[
{
"created": "Mon, 13 Mar 2023 21:11:54 GMT",
"version": "v1"
}
] |
2023-03-15
|
[
[
"Sridharan",
"Shrihari",
""
],
[
"Stevens",
"Jacob R.",
""
],
[
"Roy",
"Kaushik",
""
],
[
"Raghunathan",
"Anand",
""
]
] |
Transformers have achieved great success in a wide variety of natural language processing (NLP) tasks due to the attention mechanism, which assigns an importance score for every word relative to other words in a sequence. However, these models are very large, often reaching hundreds of billions of parameters, and therefore require a large number of DRAM accesses. Hence, traditional deep neural network (DNN) accelerators such as GPUs and TPUs face limitations in processing Transformers efficiently. In-memory accelerators based on non-volatile memory promise to be an effective solution to this challenge, since they provide high storage density while performing massively parallel matrix vector multiplications within memory arrays. However, attention score computations, which are frequently used in Transformers (unlike CNNs and RNNs), require matrix vector multiplications (MVM) where both operands change dynamically for each input. As a result, conventional NVM-based accelerators incur high write latency and write energy when used for Transformers, and further suffer from the low endurance of most NVM technologies. To address these challenges, we present X-Former, a hybrid in-memory hardware accelerator that consists of both NVM and CMOS processing elements to execute transformer workloads efficiently. To improve the hardware utilization of X-Former, we also propose a sequence blocking dataflow, which overlaps the computations of the two processing elements and reduces execution time. Across several benchmarks, we show that X-Former achieves up to 85x and 7.5x improvements in latency and energy over an NVIDIA GeForce GTX 1060 GPU and up to 10.7x and 4.6x improvements in latency and energy over a state-of-the-art in-memory NVM accelerator.
|
2102.00849
|
Muhammad Umer Gurchani
|
Muhammad Umer Gurchani
|
Crawling political communities in Twitter and extracting political
affiliations
|
22 Pages
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In theory, a major advantage to the big data approach in studying online
communities is that it should be possible to collect a representative random
sample from a broadly defined population. However, in practice, data collection
processes are not formalized, even for famous social media platforms such as
Twitter and Facebook. As a result, there is ambiguity left on questions such as
"how much data is enough?" and how representative are the samples of the
broader population being studied in online social networks. In this paper, I
propose a focused back-and-forth crawl approach and a validated seed choice
method for collecting network-level data from Twitter. The proposed crawl
method can extract community structures without needing a complete network
graph for the Twitter network and validate its size using "reference score". It
also takes care of the sampling size problem in Twitter by tracking the
percentage of known nodes that have been included in the data. It thus solves
most major problems in Twitter data collection procedures and moves a step
further toward formalizing data collection methods for the platform. Once the
communities are crawled and the network graph is clean and complete, it is
then possible to train Machine Learning classifiers using communities as
features to predict political affiliations of users on a larger scale. As a
case, I used the proposed method for separating French political communities on
Twitter from the global Twitter community and knowing the political
affiliations of users on a continuous scale.
|
[
{
"created": "Mon, 1 Feb 2021 13:57:48 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Gurchani",
"Muhammad Umer",
""
]
] |
In theory, a major advantage to the big data approach in studying online communities is that it should be possible to collect a representative random sample from a broadly defined population. However, in practice, data collection processes are not formalized, even for famous social media platforms such as Twitter and Facebook. As a result, there is ambiguity left on questions such as "how much data is enough?" and how representative are the samples of the broader population being studied in online social networks. In this paper, I propose a focused back-and-forth crawl approach and a validated seed choice method for collecting network-level data from Twitter. The proposed crawl method can extract community structures without needing a complete network graph for the Twitter network and validate its size using "reference score". It also takes care of the sampling size problem in Twitter by tracking the percentage of known nodes that have been included in the data. It thus solves most major problems in Twitter data collection procedures and moves a step further toward formalizing data collection methods for the platform. Once the communities are crawled and the network graph is clean and complete, it is then possible to train Machine Learning classifiers using communities as features to predict political affiliations of users on a larger scale. As a case, I used the proposed method for separating French political communities on Twitter from the global Twitter community and knowing the political affiliations of users on a continuous scale.
|
2304.05632
|
Haozhi Wang
|
Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao
|
Multi-agent Policy Reciprocity with Theoretical Guarantee
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern multi-agent reinforcement learning (RL) algorithms hold great
potential for solving a variety of real-world problems. However, they do not
fully exploit cross-agent knowledge to reduce sample complexity and improve
performance. Although transfer RL supports knowledge sharing, it is
hyperparameter sensitive and complex. To solve this problem, we propose a novel
multi-agent policy reciprocity (PR) framework, where each agent can fully
exploit cross-agent policies even in mismatched states. We then define an
adjacency space for mismatched states and design a plug-and-play module for
value iteration, which enables agents to infer more precise returns. To improve
the scalability of PR, deep PR is proposed for continuous control tasks.
Moreover, theoretical analysis shows that agents can asymptotically reach
consensus through individual perceived rewards and converge to an optimal value
function, which implies the stability and effectiveness of PR, respectively.
Experimental results on discrete and continuous environments demonstrate that
PR outperforms various existing RL and transfer RL methods.
|
[
{
"created": "Wed, 12 Apr 2023 06:27:10 GMT",
"version": "v1"
}
] |
2023-04-13
|
[
[
"Wang",
"Haozhi",
""
],
[
"Li",
"Yinchuan",
""
],
[
"Wang",
"Qing",
""
],
[
"Shao",
"Yunfeng",
""
],
[
"Hao",
"Jianye",
""
]
] |
Modern multi-agent reinforcement learning (RL) algorithms hold great potential for solving a variety of real-world problems. However, they do not fully exploit cross-agent knowledge to reduce sample complexity and improve performance. Although transfer RL supports knowledge sharing, it is hyperparameter sensitive and complex. To solve this problem, we propose a novel multi-agent policy reciprocity (PR) framework, where each agent can fully exploit cross-agent policies even in mismatched states. We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration, which enables agents to infer more precise returns. To improve the scalability of PR, deep PR is proposed for continuous control tasks. Moreover, theoretical analysis shows that agents can asymptotically reach consensus through individual perceived rewards and converge to an optimal value function, which implies the stability and effectiveness of PR, respectively. Experimental results on discrete and continuous environments demonstrate that PR outperforms various existing RL and transfer RL methods.
|
2309.05550
|
Levent Aksoy
|
Levent Aksoy and Debapriya Basu Roy and Malik Imran and Samuel
Pagliarini
|
Multiplierless Design of High-Speed Very Large Constant Multiplications
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In cryptographic algorithms, the constants to be multiplied by a variable can
be very large due to security requirements. Thus, the hardware complexity of
such algorithms heavily depends on the design architecture handling large
constants. In this paper, we introduce an electronic design automation tool,
called LEIGER, which can automatically generate the realizations of very large
constant multiplications for low-complexity and high-speed applications,
targeting the ASIC design platform. LEIGER can utilize the shift-adds
architecture and use 3-input operations, i.e., carry-save adders (CSAs), where
the number of CSAs is reduced using a prominent optimization algorithm. It can
also generate constant multiplications under a hybrid design architecture,
where 2- and 3-input operations are used at different stages. Moreover, it can
describe constant multiplications under a design architecture using compressor
trees. As a case study, high-speed Montgomery multiplication, which is a
fundamental operation in cryptographic algorithms, is designed with its
constant multiplication block realized under the proposed architectures.
Experimental results indicate that LEIGER enables a designer to explore the
trade-off between area and delay of the very large constant and Montgomery
multiplications and leads to designs with area-delay product, latency, and
energy consumption values significantly better than those obtained by a
recently proposed algorithm.
|
[
{
"created": "Mon, 11 Sep 2023 15:35:02 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 06:30:00 GMT",
"version": "v2"
}
] |
2023-09-13
|
[
[
"Aksoy",
"Levent",
""
],
[
"Roy",
"Debapriya Basu",
""
],
[
"Imran",
"Malik",
""
],
[
"Pagliarini",
"Samuel",
""
]
] |
In cryptographic algorithms, the constants to be multiplied by a variable can be very large due to security requirements. Thus, the hardware complexity of such algorithms heavily depends on the design architecture handling large constants. In this paper, we introduce an electronic design automation tool, called LEIGER, which can automatically generate the realizations of very large constant multiplications for low-complexity and high-speed applications, targeting the ASIC design platform. LEIGER can utilize the shift-adds architecture and use 3-input operations, i.e., carry-save adders (CSAs), where the number of CSAs is reduced using a prominent optimization algorithm. It can also generate constant multiplications under a hybrid design architecture, where 2- and 3-input operations are used at different stages. Moreover, it can describe constant multiplications under a design architecture using compressor trees. As a case study, high-speed Montgomery multiplication, which is a fundamental operation in cryptographic algorithms, is designed with its constant multiplication block realized under the proposed architectures. Experimental results indicate that LEIGER enables a designer to explore the trade-off between area and delay of the very large constant and Montgomery multiplications and leads to designs with area-delay product, latency, and energy consumption values significantly better than those obtained by a recently proposed algorithm.
|
2007.05196
|
Matthias Hutsebaut-Buysse
|
Matthias Hutsebaut-Buysse, Kevin Mets, Steven Latr\'e
|
Pre-trained Word Embeddings for Goal-conditional Transfer Learning in
Reinforcement Learning
|
Paper accepted to the ICML 2020 Language in Reinforcement Learning
(LaReL) Workshop
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) algorithms typically start tabula rasa, without
any prior knowledge of the environment, and without any prior skills. This,
however, often leads to low sample efficiency, requiring a large amount of
interaction with the environment. This is especially true in a lifelong
learning setting, in which the agent needs to continually extend its
capabilities. In this paper, we examine how a pre-trained task-independent
language model can make a goal-conditional RL agent more sample efficient. We
do this by facilitating transfer learning between different related tasks. We
experimentally demonstrate our approach on a set of object navigation tasks.
|
[
{
"created": "Fri, 10 Jul 2020 06:42:00 GMT",
"version": "v1"
}
] |
2020-07-13
|
[
[
"Hutsebaut-Buysse",
"Matthias",
""
],
[
"Mets",
"Kevin",
""
],
[
"Latré",
"Steven",
""
]
] |
Reinforcement learning (RL) algorithms typically start tabula rasa, without any prior knowledge of the environment, and without any prior skills. This, however, often leads to low sample efficiency, requiring a large amount of interaction with the environment. This is especially true in a lifelong learning setting, in which the agent needs to continually extend its capabilities. In this paper, we examine how a pre-trained task-independent language model can make a goal-conditional RL agent more sample efficient. We do this by facilitating transfer learning between different related tasks. We experimentally demonstrate our approach on a set of object navigation tasks.
|
1901.00707
|
Huaiping Ming
|
Huaiping Ming, Lei He, Haohan Guo, Frank K. Soong
|
Feature reinforcement with word embedding and parsing information in
neural TTS
| null | null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a feature reinforcement method under the
sequence-to-sequence neural text-to-speech (TTS) synthesis framework. The
proposed method utilizes a multiple-input encoder to take three levels of
text information, i.e., the phoneme sequence, a pre-trained word embedding, and
the grammatical structure of sentences from a parser, as input features for the
neural TTS system. The added word- and sentence-level information can be viewed
as a feature-based pre-training strategy, which clearly enhances the model's
generalization ability. The proposed method not only improves the system
robustness significantly but also improves the synthesized speech to near
recording quality in our experiments for out-of-domain text.
|
[
{
"created": "Thu, 3 Jan 2019 13:15:19 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2019 15:24:38 GMT",
"version": "v2"
}
] |
2019-03-07
|
[
[
"Ming",
"Huaiping",
""
],
[
"He",
"Lei",
""
],
[
"Guo",
"Haohan",
""
],
[
"Soong",
"Frank K.",
""
]
] |
In this paper, we propose a feature reinforcement method under the sequence-to-sequence neural text-to-speech (TTS) synthesis framework. The proposed method utilizes a multiple-input encoder to take three levels of text information, i.e., the phoneme sequence, a pre-trained word embedding, and the grammatical structure of sentences from a parser, as input features for the neural TTS system. The added word- and sentence-level information can be viewed as a feature-based pre-training strategy, which clearly enhances the model's generalization ability. The proposed method not only improves the system robustness significantly but also improves the synthesized speech to near recording quality in our experiments for out-of-domain text.
|
2304.00813
|
Chi Zhang
|
Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
|
Model-Agnostic Reachability Analysis on Deep Neural Networks
|
PAKDD 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Verification plays an essential role in the formal analysis of
safety-critical systems. Most current verification methods have specific
requirements when working on Deep Neural Networks (DNNs). They either target
one particular network category, e.g., Feedforward Neural Networks (FNNs), or
networks with specific activation functions, e.g., ReLU. In this paper, we
develop a model-agnostic verification framework, called DeepAgn, and show that
it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of
both. Under the assumption of Lipschitz continuity, DeepAgn analyses the
reachability of DNNs based on a novel optimisation scheme with a global
convergence guarantee. It does not require access to the network's internal
structures, such as layers and parameters. Through reachability analysis,
DeepAgn can tackle several well-known robustness problems, including computing
the maximum safe radius for a given input, and generating the ground-truth
adversarial examples. We also empirically demonstrate DeepAgn's superior
capability and efficiency in handling a broader class of deep neural networks,
including both FNNs, and RNNs with very deep layers and millions of neurons,
than other state-of-the-art verification approaches.
|
[
{
"created": "Mon, 3 Apr 2023 09:01:59 GMT",
"version": "v1"
}
] |
2023-04-04
|
[
[
"Zhang",
"Chi",
""
],
[
"Ruan",
"Wenjie",
""
],
[
"Wang",
"Fu",
""
],
[
"Xu",
"Peipei",
""
],
[
"Min",
"Geyong",
""
],
[
"Huang",
"Xiaowei",
""
]
] |
Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods have specific requirements when working on Deep Neural Networks (DNNs). They either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structures, such as layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input, and generating the ground-truth adversarial examples. We also empirically demonstrate DeepAgn's superior capability and efficiency in handling a broader class of deep neural networks, including both FNNs, and RNNs with very deep layers and millions of neurons, than other state-of-the-art verification approaches.
|
2111.04525
|
Wentao Lu
|
Wentao Lu, Claude Sammut
|
D-Flow: A Real Time Spatial Temporal Model for Target Area Segmentation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic segmentation has attracted a large amount of attention in recent
years. In robotics, segmentation can be used to identify a region of interest,
or \emph{target area}. For example, in the RoboCup Standard Platform League
(SPL), segmentation separates the soccer field from the background and from
players on the field. For satellite or vehicle applications, it is often
necessary to find certain regions such as roads, bodies of water or kinds of
terrain. In this paper, we propose a novel approach to real-time target area
segmentation based on a newly designed spatial temporal network. The method
operates under domain constraints defined by both the robot's hardware and its
operating environment. The proposed network is able to run in real-time,
working within the constraints of limited run time and computing power. This
work is compared against other real time segmentation methods on a dataset
generated by a Nao V6 humanoid robot simulating the RoboCup SPL competition. In
this case, the target area is defined as the artificial grass field. The method
is also tested on a maritime dataset collected by a moving vessel, where the
aim is to separate the ocean region from the rest of the image. This dataset
demonstrates that the proposed model can generalise to a variety of vision
problems.
|
[
{
"created": "Mon, 8 Nov 2021 14:18:45 GMT",
"version": "v1"
}
] |
2021-11-09
|
[
[
"Lu",
"Wentao",
""
],
[
"Sammut",
"Claude",
""
]
] |
Semantic segmentation has attracted a large amount of attention in recent years. In robotics, segmentation can be used to identify a region of interest, or \emph{target area}. For example, in the RoboCup Standard Platform League (SPL), segmentation separates the soccer field from the background and from players on the field. For satellite or vehicle applications, it is often necessary to find certain regions such as roads, bodies of water or kinds of terrain. In this paper, we propose a novel approach to real-time target area segmentation based on a newly designed spatial temporal network. The method operates under domain constraints defined by both the robot's hardware and its operating environment. The proposed network is able to run in real-time, working within the constraints of limited run time and computing power. This work is compared against other real time segmentation methods on a dataset generated by a Nao V6 humanoid robot simulating the RoboCup SPL competition. In this case, the target area is defined as the artificial grass field. The method is also tested on a maritime dataset collected by a moving vessel, where the aim is to separate the ocean region from the rest of the image. This dataset demonstrates that the proposed model can generalise to a variety of vision problems.
|
1910.04907
|
Gourav Datta
|
Gourav Datta, Haolin Cong, Souvik Kundu, Peter A. Beerel
|
Metastability-Resilient Synchronization FIFO for SFQ Logic
|
Accepted in ISEC 2019
| null | null | null |
cs.ET physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital single-flux quantum (SFQ) technology promises to meet the demands of
ultra low power and high speed computing needed for future exascale
supercomputing systems. The combination of ultra high clock frequencies,
gate-level pipelines, and numerous sources of variability in SFQ circuits,
however, makes low-skew global clock distribution a challenge. This motivates
the support of multiple independent clock domains and related clock domain
crossing circuits that enable reliable communication across domains. Existing
J-SIM simulation models indicate that setup violations can cause clock-to-Q
increases of up to 100%. This paper first shows that naive SFQ clock domain
crossing (CDC) first-in-first-out buffers (FIFOs) are vulnerable to these delay
increases, motivating the need for more robust CDC FIFOs. Inspired by CMOS
multi-flip-flop asynchronous FIFO synchronizers, we then propose a novel 1-bit
metastability-resilient SFQ CDC FIFO that simulations show delivers over a
1000x reduction in logical error rate at 30 GHz. Moreover, for a 10-stage FIFO, the
Josephson junction (JJ) area of our proposed design is only 7.5% larger than
the non-resilient counterpart. Finally, we propose design guidelines that
define the minimal FIFO depth subject to both throughput and burstiness
constraints.
|
[
{
"created": "Thu, 10 Oct 2019 23:17:06 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Oct 2019 21:47:03 GMT",
"version": "v2"
}
] |
2019-10-25
|
[
[
"Datta",
"Gourav",
""
],
[
"Cong",
"Haolin",
""
],
[
"Kundu",
"Souvik",
""
],
[
"Beerel",
"Peter A.",
""
]
] |
Digital single-flux quantum (SFQ) technology promises to meet the demands of ultra low power and high speed computing needed for future exascale supercomputing systems. The combination of ultra high clock frequencies, gate-level pipelines, and numerous sources of variability in SFQ circuits, however, makes low-skew global clock distribution a challenge. This motivates the support of multiple independent clock domains and related clock domain crossing circuits that enable reliable communication across domains. Existing J-SIM simulation models indicate that setup violations can cause clock-to-Q increases of up to 100%. This paper first shows that naive SFQ clock domain crossing (CDC) first-in-first-out buffers (FIFOs) are vulnerable to these delay increases, motivating the need for more robust CDC FIFOs. Inspired by CMOS multi-flip-flop asynchronous FIFO synchronizers, we then propose a novel 1-bit metastability-resilient SFQ CDC FIFO that simulations show delivers over a 1000x reduction in logical error rate at 30 GHz. Moreover, for a 10-stage FIFO, the Josephson junction (JJ) area of our proposed design is only 7.5% larger than the non-resilient counterpart. Finally, we propose design guidelines that define the minimal FIFO depth subject to both throughput and burstiness constraints.
|
1906.04547
|
Alex Hern\'andez Garc\'ia
|
Alex Hern\'andez-Garc\'ia, Peter K\"onig, Tim C. Kietzmann
|
Learning robust visual representations using data augmentation
invariance
|
6 pages, 2 figures, work in progress
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep convolutional neural networks trained for image object categorization
have shown remarkable similarities with representations found across the
primate ventral visual stream. Yet, artificial and biological networks still
exhibit important differences. Here we investigate one such property:
increasing invariance to identity-preserving image transformations found along
the ventral stream. Despite theoretical evidence that invariance should emerge
naturally from the optimization process, we present empirical evidence that the
activations of convolutional neural networks trained for object categorization
are not robust to identity-preserving image transformations commonly used in
data augmentation. As a solution, we propose data augmentation invariance, an
unsupervised learning objective which improves the robustness of the learned
representations by promoting the similarity between the activations of
augmented image samples. Our results show that this approach is a simple, yet
effective and efficient (10 % increase in training time) way of increasing the
invariance of the models while obtaining similar categorization performance.
|
[
{
"created": "Tue, 11 Jun 2019 13:03:19 GMT",
"version": "v1"
}
] |
2019-06-12
|
[
[
"Hernández-García",
"Alex",
""
],
[
"König",
"Peter",
""
],
[
"Kietzmann",
"Tim C.",
""
]
] |
Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (10 % increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance.
|
1809.10336
|
Tao Ma
|
Tao Ma
|
Multi-task Learning for Financial Forecasting
|
The methods and results of this paper have been proved to be wrong.
So we want to withdraw it to keep others from following the wrong results
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Financial forecasting is challenging and attractive in machine learning.
There are many classic solutions, as well as many deep learning based methods,
proposed to deal with it yielding encouraging performance. Stock time series
forecasting is the most representative problem in financial forecasting. Due to
the strong connections among stocks, the information valuable for forecasting
is not only included in individual stocks, but also included in the stocks
related to them. However, most previous works focus on one single stock, which
easily ignore the valuable information in others. To leverage more information,
in this paper, we propose a jointly forecasting approach to process multiple
time series of related stocks simultaneously, using multi-task learning
framework. Compared to the previous works, we use multiple networks to forecast
multiple related stocks, using the shared and private information of them
simultaneously through multi-task learning. Moreover, we propose an attention
method learning an optimized weighted combination of shared and private
information based on the idea of Capital Asset Pricing Model (CAPM) to help
forecast. Experimental results on various data show improved forecasting
performance over baseline methods.
|
[
{
"created": "Thu, 27 Sep 2018 04:03:03 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Dec 2018 14:25:41 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Jan 2019 09:19:22 GMT",
"version": "v3"
}
] |
2019-01-23
|
[
[
"Ma",
"Tao",
""
]
] |
Financial forecasting is challenging and attractive in machine learning. There are many classic solutions, as well as many deep learning based methods, proposed to deal with it yielding encouraging performance. Stock time series forecasting is the most representative problem in financial forecasting. Due to the strong connections among stocks, the information valuable for forecasting is not only included in individual stocks, but also included in the stocks related to them. However, most previous works focus on one single stock, which easily ignore the valuable information in others. To leverage more information, in this paper, we propose a jointly forecasting approach to process multiple time series of related stocks simultaneously, using multi-task learning framework. Compared to the previous works, we use multiple networks to forecast multiple related stocks, using the shared and private information of them simultaneously through multi-task learning. Moreover, we propose an attention method learning an optimized weighted combination of shared and private information based on the idea of Capital Asset Pricing Model (CAPM) to help forecast. Experimental results on various data show improved forecasting performance over baseline methods.
|
1607.00501
|
Le Dong
|
Le Dong, Na Lv, Qianni Zhang, Shanshan Xie, Ling He, Mengdie Mao
|
A Distributed Deep Representation Learning Model for Big Image Data
Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes an effective and efficient image classification
framework nominated distributed deep representation learning model (DDRL). The
aim is to strike the balance between the computational intensive deep learning
approaches (tuned parameters) which are intended for distributed computing, and
the approaches that focused on the designed parameters but often limited by
sequential computing and cannot scale up. In the evaluation of our approach, it
is shown that DDRL is able to achieve state-of-art classification accuracy
efficiently on both medium and large datasets. The result implies that our
approach is more efficient than the conventional deep learning approaches, and
can be applied to big data that is too complex for parameter designing focused
approaches. More specifically, DDRL contains two main components, i.e., feature
extraction and selection. A hierarchical distributed deep representation
learning algorithm is designed to extract image statistics and a nonlinear
mapping algorithm is used to map the inherent statistics into abstract
features. Both algorithms are carefully designed to avoid millions of
parameters tuning. This leads to a more compact solution for image
classification of big data. We note that the proposed approach is designed to
be friendly with parallel computing. It is generic and easy to be deployed to
different distributed computing resources. In the experiments, the largescale
image datasets are classified with a DDRM implementation on Hadoop MapReduce,
which shows high scalability and resilience.
|
[
{
"created": "Sat, 2 Jul 2016 12:33:12 GMT",
"version": "v1"
}
] |
2016-07-05
|
[
[
"Dong",
"Le",
""
],
[
"Lv",
"Na",
""
],
[
"Zhang",
"Qianni",
""
],
[
"Xie",
"Shanshan",
""
],
[
"He",
"Ling",
""
],
[
"Mao",
"Mengdie",
""
]
] |
This paper describes an effective and efficient image classification framework nominated distributed deep representation learning model (DDRL). The aim is to strike the balance between the computational intensive deep learning approaches (tuned parameters) which are intended for distributed computing, and the approaches that focused on the designed parameters but often limited by sequential computing and cannot scale up. In the evaluation of our approach, it is shown that DDRL is able to achieve state-of-art classification accuracy efficiently on both medium and large datasets. The result implies that our approach is more efficient than the conventional deep learning approaches, and can be applied to big data that is too complex for parameter designing focused approaches. More specifically, DDRL contains two main components, i.e., feature extraction and selection. A hierarchical distributed deep representation learning algorithm is designed to extract image statistics and a nonlinear mapping algorithm is used to map the inherent statistics into abstract features. Both algorithms are carefully designed to avoid millions of parameters tuning. This leads to a more compact solution for image classification of big data. We note that the proposed approach is designed to be friendly with parallel computing. It is generic and easy to be deployed to different distributed computing resources. In the experiments, the largescale image datasets are classified with a DDRM implementation on Hadoop MapReduce, which shows high scalability and resilience.
|
2002.10420
|
Fahad Sohrab
|
Fahad Sohrab, Jenni Raitoharju
|
Boosting rare benthic macroinvertebrates taxa identification with
one-class classification
|
5 pages, 1 figure, 2 tables
|
2020 IEEE Symposium Series on Computational Intelligence (SSCI)
|
10.1109/SSCI47803.2020.9308359
| null |
cs.CV cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Insect monitoring is crucial for understanding the consequences of rapid
ecological changes, but taxa identification currently requires tedious manual
expert work and cannot be scaled-up efficiently. Deep convolutional neural
networks (CNNs), provide a viable way to significantly increase the
biomonitoring volumes. However, taxa abundances are typically very imbalanced
and the amounts of training images for the rarest classes are simply too low
for deep CNNs. As a result, the samples from the rare classes are often
completely missed, while detecting them has biological importance. In this
paper, we propose combining the trained deep CNN with one-class classifiers to
improve the rare species identification. One-class classification models are
traditionally trained with much fewer samples and they can provide a mechanism
to indicate samples potentially belonging to the rare classes for human
inspection. Our experiments confirm that the proposed approach may indeed
support moving towards partial automation of the taxa identification task.
|
[
{
"created": "Wed, 12 Feb 2020 09:46:24 GMT",
"version": "v1"
}
] |
2021-01-07
|
[
[
"Sohrab",
"Fahad",
""
],
[
"Raitoharju",
"Jenni",
""
]
] |
Insect monitoring is crucial for understanding the consequences of rapid ecological changes, but taxa identification currently requires tedious manual expert work and cannot be scaled-up efficiently. Deep convolutional neural networks (CNNs), provide a viable way to significantly increase the biomonitoring volumes. However, taxa abundances are typically very imbalanced and the amounts of training images for the rarest classes are simply too low for deep CNNs. As a result, the samples from the rare classes are often completely missed, while detecting them has biological importance. In this paper, we propose combining the trained deep CNN with one-class classifiers to improve the rare species identification. One-class classification models are traditionally trained with much fewer samples and they can provide a mechanism to indicate samples potentially belonging to the rare classes for human inspection. Our experiments confirm that the proposed approach may indeed support moving towards partial automation of the taxa identification task.
|
1907.00967
|
David Hofbauer MSc
|
David Hofbauer, Christoph Schmittner, Manuela Brandstetter, Markus
Tauber
|
Autonomous CPS mobility securely designed
|
The 5th IEEE International workshop on Communication, Computing, and
Networking in Cyber Physical Systems (CCNCPS 2019) in association with 20th
IEEE International Symposium on a World of Wireless, Mobile and Multimedia
Networks (IEEE WoWMoM 2019) - Washington D.C., USA;
| null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last years the interconnection and ongoing development of physical
systems combined with cyber resources has led to increasing automation. Through
this progress in technology, autonomous vehicles, especially autonomous trains
are getting more attention from industry and are already under test. The use of
autonomous trains is known for increasing operation efficiency and reduction of
personnel and infrastructure costs, which is mostly considered for main tracks.
However, for less-used secondary lines, autonomous trains and their underlying
sensor infrastructure are not yet considered. Thus, a system needs to be
developed, which is less expensive for installation and operation of these
trains and underlying infrastructure for secondary lines. Therefore, this
position paper describes the process of how to derive an approach to help
develop a digital interlocking system at design time for the use with secondary
railway lines. In this work, we motivate the necessary research by
investigating gaps in existing work as well as presenting a possible solution
for this problem, a meta-model. The model considers safety, security as well as
interoperability like 5G and socio-technical aspects to provide a holistic
modeling approach for the development of the interlocking system for industrial
secondary line use cases.
|
[
{
"created": "Tue, 2 Jul 2019 11:22:34 GMT",
"version": "v1"
}
] |
2019-07-03
|
[
[
"Hofbauer",
"David",
""
],
[
"Schmittner",
"Christoph",
""
],
[
"Brandstetter",
"Manuela",
""
],
[
"Tauber",
"Markus",
""
]
] |
In the last years the interconnection and ongoing development of physical systems combined with cyber resources has led to increasing automation. Through this progress in technology, autonomous vehicles, especially autonomous trains are getting more attention from industry and are already under test. The use of autonomous trains is known for increasing operation efficiency and reduction of personnel and infrastructure costs, which is mostly considered for main tracks. However, for less-used secondary lines, autonomous trains and their underlying sensor infrastructure are not yet considered. Thus, a system needs to be developed, which is less expensive for installation and operation of these trains and underlying infrastructure for secondary lines. Therefore, this position paper describes the process of how to derive an approach to help develop a digital interlocking system at design time for the use with secondary railway lines. In this work, we motivate the necessary research by investigating gaps in existing work as well as presenting a possible solution for this problem, a meta-model. The model considers safety, security as well as interoperability like 5G and socio-technical aspects to provide a holistic modeling approach for the development of the interlocking system for industrial secondary line use cases.
|
2407.15964
|
David Keaton
|
David Keaton, Amol S. Joshi, Jeremy Dawson, Nasser M. Nasrabadi
|
FDWST: Fingerphoto Deblurring using Wavelet Style Transfer
|
Accepted by IJCB 2024
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The challenge of deblurring fingerphoto images, or generating a sharp
fingerphoto from a given blurry one, is a significant problem in the realm of
computer vision. To address this problem, we propose a fingerphoto deblurring
architecture referred to as Fingerphoto Deblurring using Wavelet Style Transfer
(FDWST), which aims to utilize the information transmission of Style Transfer
techniques to deblur fingerphotos. Additionally, we incorporate the Discrete
Wavelet Transform (DWT) for its ability to split images into different
frequency bands. By combining these two techniques, we can perform Style
Transfer over a wide array of wavelet frequency bands, thereby increasing the
quality and variety of sharpness information transferred from sharp to blurry
images. Using this technique, our model was able to drastically increase the
quality of the generated fingerphotos compared to their originals, and achieve
a peak matching accuracy of 0.9907 when tasked with matching a deblurred
fingerphoto to its sharp counterpart, outperforming multiple other
state-of-the-art deblurring and style transfer techniques.
|
[
{
"created": "Mon, 22 Jul 2024 18:26:43 GMT",
"version": "v1"
}
] |
2024-07-24
|
[
[
"Keaton",
"David",
""
],
[
"Joshi",
"Amol S.",
""
],
[
"Dawson",
"Jeremy",
""
],
[
"Nasrabadi",
"Nasser M.",
""
]
] |
The challenge of deblurring fingerphoto images, or generating a sharp fingerphoto from a given blurry one, is a significant problem in the realm of computer vision. To address this problem, we propose a fingerphoto deblurring architecture referred to as Fingerphoto Deblurring using Wavelet Style Transfer (FDWST), which aims to utilize the information transmission of Style Transfer techniques to deblur fingerphotos. Additionally, we incorporate the Discrete Wavelet Transform (DWT) for its ability to split images into different frequency bands. By combining these two techniques, we can perform Style Transfer over a wide array of wavelet frequency bands, thereby increasing the quality and variety of sharpness information transferred from sharp to blurry images. Using this technique, our model was able to drastically increase the quality of the generated fingerphotos compared to their originals, and achieve a peak matching accuracy of 0.9907 when tasked with matching a deblurred fingerphoto to its sharp counterpart, outperforming multiple other state-of-the-art deblurring and style transfer techniques.
|
1911.13162
|
Alexander Preuhs
|
Alexander Preuhs, Michael Manhart, Philipp Roser, Bernhard Stimpel,
Christopher Syben, Marios Psychogios, Markus Kowarschik, Andreas Maier
|
Deep autofocus with cone-beam CT consistency constraint
|
Accepted at BVM 2020, review score under Top-6 of the conference
| null | null | null |
cs.LG cs.CV eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High quality reconstruction with interventional C-arm cone-beam computed
tomography (CBCT) requires exact geometry information. If the geometry
information is corrupted, e. g., by unexpected patient or system movement, the
measured signal is misplaced in the backprojection operation. With prolonged
acquisition times of interventional C-arm CBCT the likelihood of rigid patient
motion increases. To adapt the backprojection operation accordingly, a motion
estimation strategy is necessary. Recently, a novel learning-based approach was
proposed, capable of compensating motions within the acquisition plane. We
extend this method by a CBCT consistency constraint, which was proven to be
efficient for motions perpendicular to the acquisition plane. By the
synergistic combination of these two measures, in and out-plane motion is well
detectable, achieving an average artifact suppression of 93 [percent]. This
outperforms the entropy-based state-of-the-art autofocus measure which achieves
on average an artifact suppression of 54 [percent].
|
[
{
"created": "Fri, 29 Nov 2019 15:54:38 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Dec 2019 12:17:44 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Dec 2019 21:43:50 GMT",
"version": "v3"
}
] |
2019-12-06
|
[
[
"Preuhs",
"Alexander",
""
],
[
"Manhart",
"Michael",
""
],
[
"Roser",
"Philipp",
""
],
[
"Stimpel",
"Bernhard",
""
],
[
"Syben",
"Christopher",
""
],
[
"Psychogios",
"Marios",
""
],
[
"Kowarschik",
"Markus",
""
],
[
"Maier",
"Andreas",
""
]
] |
High quality reconstruction with interventional C-arm cone-beam computed tomography (CBCT) requires exact geometry information. If the geometry information is corrupted, e. g., by unexpected patient or system movement, the measured signal is misplaced in the backprojection operation. With prolonged acquisition times of interventional C-arm CBCT the likelihood of rigid patient motion increases. To adapt the backprojection operation accordingly, a motion estimation strategy is necessary. Recently, a novel learning-based approach was proposed, capable of compensating motions within the acquisition plane. We extend this method by a CBCT consistency constraint, which was proven to be efficient for motions perpendicular to the acquisition plane. By the synergistic combination of these two measures, in and out-plane motion is well detectable, achieving an average artifact suppression of 93 [percent]. This outperforms the entropy-based state-of-the-art autofocus measure which achieves on average an artifact suppression of 54 [percent].
|
2012.14349
|
Filip Biljecki
|
Abraham Noah Wu, Filip Biljecki
|
Roofpedia: Automatic mapping of green and solar roofs for an open
roofscape registry and evaluation of urban sustainability
| null |
Landscape and Urban Planning 214: 104167, 2021
|
10.1016/j.landurbplan.2021.104167
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sustainable roofs, such as those with greenery and photovoltaic panels,
contribute to the roadmap for reducing the carbon footprint of cities. However,
research on sustainable urban roofscapes is rather focused on their potential
and it is hindered by the scarcity of data, limiting our understanding of their
current content, spatial distribution, and temporal evolution. To tackle this
issue, we introduce Roofpedia, a set of three contributions: (i) automatic
mapping of relevant urban roof typology from satellite imagery; (ii) an open
roof registry mapping the spatial distribution and area of solar and green
roofs of more than one million buildings across 17 cities; and (iii) the
Roofpedia Index, a derivative of the registry, to benchmark the cities by the
extent of sustainable roofscape in term of solar and green roof penetration.
This project, partly inspired by its street greenery counterpart `Treepedia',
is made possible by a multi-step pipeline that combines deep learning and
geospatial techniques, demonstrating the feasibility of an automated
methodology that generalises successfully across cities with an accuracy of
detecting sustainable roofs of up to 100% in some cities. We offer our results
as an interactive map and open dataset so that our work could aid researchers,
local governments, and the public to uncover the pattern of sustainable
rooftops across cities, track and monitor the current use of rooftops,
complement studies on their potential, evaluate the effectiveness of existing
incentives, verify the use of subsidies and fulfilment of climate pledges,
estimate carbon offset capacities of cities, and ultimately support better
policies and strategies to increase the adoption of instruments contributing to
the sustainable development of cities.
|
[
{
"created": "Tue, 22 Dec 2020 13:34:50 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Apr 2021 23:59:10 GMT",
"version": "v2"
},
{
"created": "Fri, 14 May 2021 08:02:59 GMT",
"version": "v3"
},
{
"created": "Thu, 24 Jun 2021 12:20:27 GMT",
"version": "v4"
}
] |
2021-06-25
|
[
[
"Wu",
"Abraham Noah",
""
],
[
"Biljecki",
"Filip",
""
]
] |
Sustainable roofs, such as those with greenery and photovoltaic panels, contribute to the roadmap for reducing the carbon footprint of cities. However, research on sustainable urban roofscapes is rather focused on their potential and it is hindered by the scarcity of data, limiting our understanding of their current content, spatial distribution, and temporal evolution. To tackle this issue, we introduce Roofpedia, a set of three contributions: (i) automatic mapping of relevant urban roof typology from satellite imagery; (ii) an open roof registry mapping the spatial distribution and area of solar and green roofs of more than one million buildings across 17 cities; and (iii) the Roofpedia Index, a derivative of the registry, to benchmark the cities by the extent of sustainable roofscape in term of solar and green roof penetration. This project, partly inspired by its street greenery counterpart `Treepedia', is made possible by a multi-step pipeline that combines deep learning and geospatial techniques, demonstrating the feasibility of an automated methodology that generalises successfully across cities with an accuracy of detecting sustainable roofs of up to 100% in some cities. We offer our results as an interactive map and open dataset so that our work could aid researchers, local governments, and the public to uncover the pattern of sustainable rooftops across cities, track and monitor the current use of rooftops, complement studies on their potential, evaluate the effectiveness of existing incentives, verify the use of subsidies and fulfilment of climate pledges, estimate carbon offset capacities of cities, and ultimately support better policies and strategies to increase the adoption of instruments contributing to the sustainable development of cities.
|
1602.05741
|
Alexis Decurninge
|
Alexis Decurninge and Maxime Guillaud and Dirk Slock
|
Channel Covariance Estimation in Massive MIMO Frequency Division Duplex
Systems
|
6 pages in Globecom, "From theory to practice" workshop, December
2015
| null | null | null |
cs.IT math.IT stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Channel covariance is emerging as a critical ingredient of the acquisition of
instantaneous channel state information (CSI) in multi-user Massive MIMO
systems operating in frequency division duplex (FDD) mode. In this context,
channel reciprocity does not hold, and it is generally expected that covariance
information about the downlink channel must be estimated and fed back by the
user equipment (UE). As an alternative CSI acquisition technique, we propose to
infer the downlink covariance based on the observed uplink covariance. This
inference process relies on a dictionary of uplink/downlink covariance
matrices, and on interpolation in the corresponding Riemannian space; once the
dictionary is known, the estimation does not rely on any form of feedback from
the UE. In this article, we present several variants of the interpolation
method, and benchmark them through simulations.
|
[
{
"created": "Thu, 18 Feb 2016 10:15:14 GMT",
"version": "v1"
}
] |
2016-02-19
|
[
[
"Decurninge",
"Alexis",
""
],
[
"Guillaud",
"Maxime",
""
],
[
"Slock",
"Dirk",
""
]
] |
Channel covariance is emerging as a critical ingredient of the acquisition of instantaneous channel state information (CSI) in multi-user Massive MIMO systems operating in frequency division duplex (FDD) mode. In this context, channel reciprocity does not hold, and it is generally expected that covariance information about the downlink channel must be estimated and fed back by the user equipment (UE). As an alternative CSI acquisition technique, we propose to infer the downlink covariance based on the observed uplink covariance. This inference process relies on a dictionary of uplink/downlink covariance matrices, and on interpolation in the corresponding Riemannian space; once the dictionary is known, the estimation does not rely on any form of feedback from the UE. In this article, we present several variants of the interpolation method, and benchmark them through simulations.
|
1303.1418
|
Merrick McCracken
|
Merrick McCracken, Maurizio Bocca, Neal Patwari
|
Joint Ultra-wideband and Signal Strength-based Through-building Tracking
for Tactical Operations
|
9 pages, conference submission
| null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate device free localization (DFL) based on received signal strength
(RSS) measurements requires placement of radio transceivers on all sides of the
target area. Accuracy degrades dramatically if sensors do not surround the
area. However, law enforcement officers sometimes face situations where it is
not possible or practical to place sensors on all sides of the target room or
building. For example, for an armed subject barricaded in a motel room, police
may be able to place sensors in adjacent rooms, but not in front of the room,
where the subject would see them. In this paper, we show that using two
ultra-wideband (UWB) impulse radios, in addition to multiple RSS sensors,
improves the localization accuracy, particularly on the axis where no sensors
are placed (which we call the x-axis). We introduce three methods for combining
the RSS and UWB data. By using UWB radios together with RSS sensors, it is
still possible to localize a person through walls even when the devices are
placed only on two sides of the target area. Including the data from the UWB
radios can reduce the localization area of uncertainty by more than 60%.
|
[
{
"created": "Wed, 6 Mar 2013 18:38:48 GMT",
"version": "v1"
}
] |
2013-03-07
|
[
[
"McCracken",
"Merrick",
""
],
[
"Bocca",
"Maurizio",
""
],
[
"Patwari",
"Neal",
""
]
] |
Accurate device free localization (DFL) based on received signal strength (RSS) measurements requires placement of radio transceivers on all sides of the target area. Accuracy degrades dramatically if sensors do not surround the area. However, law enforcement officers sometimes face situations where it is not possible or practical to place sensors on all sides of the target room or building. For example, for an armed subject barricaded in a motel room, police may be able to place sensors in adjacent rooms, but not in front of the room, where the subject would see them. In this paper, we show that using two ultra-wideband (UWB) impulse radios, in addition to multiple RSS sensors, improves the localization accuracy, particularly on the axis where no sensors are placed (which we call the x-axis). We introduce three methods for combining the RSS and UWB data. By using UWB radios together with RSS sensors, it is still possible to localize a person through walls even when the devices are placed only on two sides of the target area. Including the data from the UWB radios can reduce the localization area of uncertainty by more than 60%.
|
2308.08934
|
Limin Wang
|
Limin Wang, Masatoshi Hanai, Toyotaro Suzumura, Shun Takashige,
Kenjiro Taura
|
On Data Imbalance in Molecular Property Prediction with Pre-training
| null | null | null | null |
cs.LG cond-mat.mtrl-sci
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Revealing and analyzing the various properties of materials is an essential
and critical issue in the development of materials, including batteries,
semiconductors, catalysts, and pharmaceuticals. Traditionally, these properties
have been determined through theoretical calculations and simulations. However,
it is not practical to perform such calculations on every single candidate
material. Recently, a method combining theoretical calculation and machine
learning has emerged; it involves training machine learning models on a subset
of theoretical calculation results to construct a surrogate model that can be
applied to the remaining materials. Separately, a technique called pre-training
is used to improve the accuracy of machine learning models. Pre-training
involves training the model on a pretext task, which is different from the
target task, before training it on the target task. This process aims to
extract features of the input data, stabilizing the learning process and
improving its accuracy. However, in the case of molecular property prediction,
there is a strong imbalance in the distribution of input data and features,
which may lead to biased learning towards frequently occurring data during
pre-training. In this study, we propose an effective pre-training method that
addresses this imbalance in the input data. We aim to improve the final
accuracy by modifying the loss function of node masking, a representative
existing pre-training method, to compensate for the imbalance. We investigate
and assess the impact of the proposed imbalance compensation on pre-training
and on the final prediction accuracy through experiments and evaluations on
benchmarks for molecular property prediction.
|
[
{
"created": "Thu, 17 Aug 2023 12:04:14 GMT",
"version": "v1"
}
] |
2023-08-21
|
[
[
"Wang",
"Limin",
""
],
[
"Hanai",
"Masatoshi",
""
],
[
"Suzumura",
"Toyotaro",
""
],
[
"Takashige",
"Shun",
""
],
[
"Taura",
"Kenjiro",
""
]
] |
Revealing and analyzing the various properties of materials is an essential and critical issue in the development of materials, including batteries, semiconductors, catalysts, and pharmaceuticals. Traditionally, these properties have been determined through theoretical calculations and simulations. However, it is not practical to perform such calculations on every single candidate material. Recently, a method combining theoretical calculation and machine learning has emerged; it involves training machine learning models on a subset of theoretical calculation results to construct a surrogate model that can be applied to the remaining materials. Separately, a technique called pre-training is used to improve the accuracy of machine learning models. Pre-training involves training the model on a pretext task, which is different from the target task, before training it on the target task. This process aims to extract features of the input data, stabilizing the learning process and improving its accuracy. However, in the case of molecular property prediction, there is a strong imbalance in the distribution of input data and features, which may lead to biased learning towards frequently occurring data during pre-training. In this study, we propose an effective pre-training method that addresses this imbalance in the input data. We aim to improve the final accuracy by modifying the loss function of node masking, a representative existing pre-training method, to compensate for the imbalance. We investigate and assess the impact of the proposed imbalance compensation on pre-training and on the final prediction accuracy through experiments and evaluations on benchmarks for molecular property prediction.
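The imbalance compensation described above can be sketched as inverse-frequency weighting of the node-masking cross-entropy, so that rare atom types are not drowned out by frequent ones. The weighting scheme, function name, and toy numbers are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def balanced_masking_loss(logits, targets, class_counts):
    # Inverse-frequency class weights, normalized so the mean weight is 1:
    # rare classes (atom types) contribute more to the masking loss.
    counts = np.asarray(class_counts, dtype=float)
    weights = 1.0 / np.clip(counts, 1, None)
    weights = weights / weights.sum() * len(counts)
    # Numerically stable log-softmax, then a weighted negative log-likelihood.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = weights[targets]
    return -(w * logp[np.arange(len(targets)), targets]).sum() / w.sum()

# Toy example: 3 atom types, type 0 dominating the corpus; with uniform
# logits every sample has the same cross-entropy, so the loss equals ln(3).
logits = np.zeros((4, 3))
targets = np.array([0, 0, 1, 2])
loss = balanced_masking_loss(logits, targets, class_counts=[900, 90, 10])
```

With non-uniform predictions the weighting shifts the gradient toward the rare classes, which is the intended compensation effect.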
|
2211.07844
|
Michael Murray
|
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montufar
|
Characterizing the Spectrum of the NTK via a Power Series Expansion
|
55 pages, 3 Figures, 1 Table
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Under mild conditions on the network initialization we derive a power series
expansion for the Neural Tangent Kernel (NTK) of arbitrarily deep feedforward
networks in the infinite width limit. We provide expressions for the
coefficients of this power series which depend on both the Hermite coefficients
of the activation function as well as the depth of the network. We observe
that faster decay of the Hermite coefficients leads to faster decay in the NTK
coefficients, and we explore the role of depth. Using this series, first we relate
the effective rank of the NTK to the effective rank of the input-data Gram.
Second, for data drawn uniformly on the sphere we study the eigenvalues of the
NTK, analyzing the impact of the choice of activation function. Finally, for
generic data and activation functions with sufficiently fast Hermite
coefficient decay, we derive an asymptotic upper bound on the spectrum of the
NTK.
|
[
{
"created": "Tue, 15 Nov 2022 02:01:41 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Jan 2023 23:47:25 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Feb 2023 19:49:18 GMT",
"version": "v3"
},
{
"created": "Tue, 28 Feb 2023 19:27:25 GMT",
"version": "v4"
}
] |
2023-03-02
|
[
[
"Murray",
"Michael",
""
],
[
"Jin",
"Hui",
""
],
[
"Bowman",
"Benjamin",
""
],
[
"Montufar",
"Guido",
""
]
] |
Under mild conditions on the network initialization we derive a power series expansion for the Neural Tangent Kernel (NTK) of arbitrarily deep feedforward networks in the infinite width limit. We provide expressions for the coefficients of this power series which depend on both the Hermite coefficients of the activation function as well as the depth of the network. We observe that faster decay of the Hermite coefficients leads to faster decay in the NTK coefficients, and we explore the role of depth. Using this series, first we relate the effective rank of the NTK to the effective rank of the input-data Gram. Second, for data drawn uniformly on the sphere we study the eigenvalues of the NTK, analyzing the impact of the choice of activation function. Finally, for generic data and activation functions with sufficiently fast Hermite coefficient decay, we derive an asymptotic upper bound on the spectrum of the NTK.
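The shape of the expansion can be sketched schematically; the notation below is an illustrative paraphrase, not the paper's exact statement or coefficients:

```latex
% For unit-norm inputs the infinite-width NTK is a dot-product kernel
% admitting a power series
K(x, x') \;=\; \sum_{k \ge 0} c_k \,\langle x, x' \rangle^{k},
\qquad c_k \;\ge\; 0,
% where each coefficient c_k is determined by the Hermite coefficients
% a_j of the activation \sigma = \sum_j a_j h_j and by the network depth;
% faster decay of (a_j) forces faster decay of (c_k), which in turn
% governs the decay of the NTK eigenvalues on spherical data.
```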
|
2111.02529
|
Theodore Heiser
|
Theodore James Thibault Heiser, Mari-Liis Allikivi, Meelis Kull
|
Shift Happens: Adjusting Classifiers
|
ECML PKDD 2019 conference paper, 16 pages
|
ECML PKDD 2019. Lecture Notes in Computer Science, vol 11907.
Springer, Cham (2020)
|
10.1007/978-3-030-46147-8_4
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Minimizing expected loss measured by a proper scoring rule, such as Brier
score or log-loss (cross-entropy), is a common objective while training a
probabilistic classifier. If the data have experienced dataset shift where the
class distributions change post-training, then often the model's performance
will decrease, over-estimating the probabilities of some classes while
under-estimating the others on average. We propose unbounded and bounded
general adjustment (UGA and BGA) methods that transform all predictions to
(re-)equalize the average prediction and the class distribution. These methods
act differently depending on which proper scoring rule is to be minimized, and
we have a theoretical guarantee of reducing loss on test data, if the exact
class distribution is known. We also demonstrate experimentally that, when in
practice the class distribution is known only approximately, there is often
still a reduction in loss depending on the amount of shift and the precision to
which the class distribution is known.
|
[
{
"created": "Wed, 3 Nov 2021 21:27:27 GMT",
"version": "v1"
}
] |
2021-11-05
|
[
[
"Heiser",
"Theodore James Thibault",
""
],
[
"Allikivi",
"Mari-Liis",
""
],
[
"Kull",
"Meelis",
""
]
] |
Minimizing expected loss measured by a proper scoring rule, such as Brier score or log-loss (cross-entropy), is a common objective while training a probabilistic classifier. If the data have experienced dataset shift where the class distributions change post-training, then often the model's performance will decrease, over-estimating the probabilities of some classes while under-estimating the others on average. We propose unbounded and bounded general adjustment (UGA and BGA) methods that transform all predictions to (re-)equalize the average prediction and the class distribution. These methods act differently depending on which proper scoring rule is to be minimized, and we have a theoretical guarantee of reducing loss on test data, if the exact class distribution is known. We also demonstrate experimentally that, when in practice the class distribution is known only approximately, there is often still a reduction in loss depending on the amount of shift and the precision to which the class distribution is known.
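A minimal sketch of the additive flavour of such an adjustment (the exact UGA/BGA transformations depend on the proper scoring rule and are not reproduced here): shift each class column of the prediction matrix so the average prediction matches the known post-shift class distribution.

```python
import numpy as np

def additive_adjustment(P, pi):
    # Shift each class column by a constant so that the column means
    # equal the known class distribution pi. If pi sums to 1 and each
    # row of P sums to 1, the adjusted rows still sum to 1 (though
    # entries are not clipped to [0, 1], hence "unbounded" in spirit).
    P = np.asarray(P, dtype=float)
    shift = np.asarray(pi) - P.mean(axis=0)
    return P + shift

# Toy example: the classifier over-predicts class 0 on average.
P = np.array([[0.7, 0.3],
              [0.5, 0.5]])      # average prediction: (0.6, 0.4)
pi = np.array([0.5, 0.5])       # true class distribution after shift
Q = additive_adjustment(P, pi)  # column means now equal pi
```

This illustrates the (re-)equalization idea; the paper's guarantee of loss reduction additionally requires matching the adjustment to the scoring rule being minimized.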
|
1510.02836
|
Mauricio Toro
|
Mauricio Toro and Myriam Desainte-Catherine and Pascal Baltazar
|
A Model for Interactive Scores with Temporal Constraints and Conditional
Branching
|
14 pages, extended version of conference paper on Journ\'ees de
INformatique Musicale 2010
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive Scores (IS) are a formalism for the design and performance of
interactive multimedia scenarios. IS provide temporal relations (TR), but they
cannot represent conditional branching and TRs simultaneously. We propose an
extension to Allombert et al.'s IS model by including a condition on the TRs.
We found that, in order to have a coherent model in all possible scenarios,
durations must be flexible; however, sometimes it is possible to have fixed
durations. To show the relevance of our model, we modeled an existing
multimedia installation called Mariona. In Mariona there is choice, random
durations and loops. Whether all the TRs available in Allombert et al.'s
model can be represented in ours, or whether one must choose between a timed
conditional branching model and a pure temporal model before writing a
scenario, remains an open question.
|
[
{
"created": "Fri, 9 Oct 2015 22:08:08 GMT",
"version": "v1"
}
] |
2015-10-13
|
[
[
"Toro",
"Mauricio",
""
],
[
"Desainte-Catherine",
"Myriam",
""
],
[
"Baltazar",
"Pascal",
""
]
] |
Interactive Scores (IS) are a formalism for the design and performance of interactive multimedia scenarios. IS provide temporal relations (TR), but they cannot represent conditional branching and TRs simultaneously. We propose an extension to Allombert et al.'s IS model by including a condition on the TRs. We found that, in order to have a coherent model in all possible scenarios, durations must be flexible; however, sometimes it is possible to have fixed durations. To show the relevance of our model, we modeled an existing multimedia installation called Mariona. In Mariona there is choice, random durations and loops. Whether all the TRs available in Allombert et al.'s model can be represented in ours, or whether one must choose between a timed conditional branching model and a pure temporal model before writing a scenario, remains an open question.
|
1107.4684
|
Bas van Vlijmen
|
J.A. Bergstra, G.P.A.J. Delen, S.F.M. van Vlijmen
|
Introducing Sourcements
| null | null | null | null |
cs.SE cs.GL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sourcing processes are discussed at a high abstraction level. A dedicated
terminology is developed concerning general aspects of sourcing. The term
sourcement is coined to denote a building block for sourcing. Notions of
allocation, functional architecture and allocational architecture, equilibrium,
and configuration are discussed. Limitations of the concept of outsourcing are
outlined. This theoretical work is meant to serve as a point of departure for
the subsequent development of a detailed theory of sourcing and sourcing
transformations, which can be a tool for dealing with practical applications.
|
[
{
"created": "Sat, 23 Jul 2011 12:12:12 GMT",
"version": "v1"
}
] |
2011-07-26
|
[
[
"Bergstra",
"J. A.",
""
],
[
"Delen",
"G. P. A. J.",
""
],
[
"van Vlijmen",
"S. F. M.",
""
]
] |
Sourcing processes are discussed at a high abstraction level. A dedicated terminology is developed concerning general aspects of sourcing. The term sourcement is coined to denote a building block for sourcing. Notions of allocation, functional architecture and allocational architecture, equilibrium, and configuration are discussed. Limitations of the concept of outsourcing are outlined. This theoretical work is meant to serve as a point of departure for the subsequent development of a detailed theory of sourcing and sourcing transformations, which can be a tool for dealing with practical applications.
|
1801.00254
|
Youngsam Kim
|
Youngsam Kim, Hyopil Shin
|
A New Approach for Measuring Sentiment Orientation based on
Multi-Dimensional Vector Space
|
8 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study implements a vector space model approach to measure the sentiment
orientations of words. Two representative vectors for positive/negative
polarity are constructed using a high-dimensional vector space in both an
unsupervised and a semi-supervised manner. A sentiment orientation value per
word is determined by taking the difference between the cosine distances
against the two reference vectors. These two conditions (unsupervised and
semi-supervised) are compared against an existing unsupervised method (Turney,
2002). As a result of our experiment, we demonstrate that this novel approach
significantly outperforms the previous unsupervised approach and is more
practical and data efficient as well.
|
[
{
"created": "Sun, 31 Dec 2017 08:44:51 GMT",
"version": "v1"
}
] |
2018-01-03
|
[
[
"Kim",
"Youngsam",
""
],
[
"Shin",
"Hyopil",
""
]
] |
This study implements a vector space model approach to measure the sentiment orientations of words. Two representative vectors for positive/negative polarity are constructed using a high-dimensional vector space in both an unsupervised and a semi-supervised manner. A sentiment orientation value per word is determined by taking the difference between the cosine distances against the two reference vectors. These two conditions (unsupervised and semi-supervised) are compared against an existing unsupervised method (Turney, 2002). As a result of our experiment, we demonstrate that this novel approach significantly outperforms the previous unsupervised approach and is more practical and data efficient as well.
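The cosine-difference score can be sketched in a few lines. Here similarity (rather than distance) is differenced, which flips only the sign convention; the reference vectors and word vectors are toy values standing in for the high-dimensional space:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def sentiment_orientation(word_vec, pos_ref, neg_ref):
    # Positive score: the word leans toward the positive reference vector.
    return cosine(word_vec, pos_ref) - cosine(word_vec, neg_ref)

# Toy 3-dimensional reference vectors (illustrative, not from the paper).
pos_ref = np.array([1.0, 0.2, 0.0])
neg_ref = np.array([0.0, 0.2, 1.0])
s_good = sentiment_orientation(np.array([0.9, 0.3, 0.1]), pos_ref, neg_ref)
s_awful = sentiment_orientation(np.array([0.1, 0.3, 0.9]), pos_ref, neg_ref)
```

A word vector near the positive reference yields a positive orientation value and vice versa, which is the sign test the method relies on.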
|
1705.03242
|
Farbod Kayhan
|
Guido Montorsi and Farbod Kayhan
|
Low Complexity Two-Stage Soft/Hard Decoders
|
Submitted to IEEE Transaction on Communications, 18 pages, 7 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Next generation wireless systems will need higher spectral efficiency as the
expected traffic volumes per unit bandwidth and dimension will inevitably grow.
As a consequence, it is necessary to design coding schemes with performances
close to the theoretical limits, having high flexibility and low complexity
requirements at transmitter and receiver. In this paper, we point out some of
the limitations of the Bit Interleaved Code Modulation (BICM) technique which
is the state of the art adopted in several standards and then propose some new
lower complexity alternatives. These low complexity alternatives are obtained
by applying the recently introduced Analog Digital Belief Propagation (ADBP)
algorithm to a two stage encoding scheme embedding a hard decoding stage. First
we show that for PAM$^2$ type constellations over the AWGN channel, the
performance loss caused by using a hard decoded stage for all modulation bits
except the two least protected is negligible. Next, we consider the application
of two stage decoders to more challenging Rician channels, showing that in this
case the number of bits needed to be soft decoded depends on the Rician factor
and increases to a maximum of three bits per dimension for the Rayleigh
channel. Finally, we apply the ADBP algorithm to further reduce the detection
and decoding complexity.
|
[
{
"created": "Tue, 9 May 2017 09:19:58 GMT",
"version": "v1"
}
] |
2017-05-10
|
[
[
"Montorsi",
"Guido",
""
],
[
"Kayhan",
"Farbod",
""
]
] |
Next generation wireless systems will need higher spectral efficiency as the expected traffic volumes per unit bandwidth and dimension will inevitably grow. As a consequence, it is necessary to design coding schemes with performances close to the theoretical limits, having high flexibility and low complexity requirements at transmitter and receiver. In this paper, we point out some of the limitations of the Bit Interleaved Code Modulation (BICM) technique which is the state of the art adopted in several standards and then propose some new lower complexity alternatives. These low complexity alternatives are obtained by applying the recently introduced Analog Digital Belief Propagation (ADBP) algorithm to a two stage encoding scheme embedding a hard decoding stage. First we show that for PAM$^2$ type constellations over the AWGN channel, the performance loss caused by using a hard decoded stage for all modulation bits except the two least protected is negligible. Next, we consider the application of two stage decoders to more challenging Rician channels, showing that in this case the number of bits needed to be soft decoded depends on the Rician factor and increases to a maximum of three bits per dimension for the Rayleigh channel. Finally, we apply the ADBP algorithm to further reduce the detection and decoding complexity.
|
1911.07657
|
Minjia Shi
|
Minjia Shi, Tor Helleseth and Patrick Sole
|
Two-weight codes over the integers modulo a prime power
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $p$ be a prime number. Irreducible cyclic codes of length $p^2-1$ and
dimension $2$ over the integers modulo $p^h$ are shown to have exactly two
nonzero Hamming weights. The construction uses the Galois ring of
characteristic $p^h$ and order $p^{2h}.$ When the check polynomial is
primitive, the code meets the Griesmer bound of Shiromoto and Storme (2012).
By puncturing, some projective codes are constructed. Those of length $p+1$
meet a Singleton-like bound of Shiromoto (2000). An infinite family of strongly
regular graphs is constructed as coset graphs of the duals of these projective
codes. A common cover of all these graphs, for fixed $p$, is provided by
considering the Hensel lifting of these cyclic codes over the $p$-adic numbers.
|
[
{
"created": "Fri, 15 Nov 2019 02:01:54 GMT",
"version": "v1"
}
] |
2019-11-19
|
[
[
"Shi",
"Minjia",
""
],
[
"Helleseth",
"Tor",
""
],
[
"Sole",
"Patrick",
""
]
] |
Let $p$ be a prime number. Irreducible cyclic codes of length $p^2-1$ and dimension $2$ over the integers modulo $p^h$ are shown to have exactly two nonzero Hamming weights. The construction uses the Galois ring of characteristic $p^h$ and order $p^{2h}.$ When the check polynomial is primitive, the code meets the Griesmer bound of Shiromoto and Storme (2012). By puncturing, some projective codes are constructed. Those of length $p+1$ meet a Singleton-like bound of Shiromoto (2000). An infinite family of strongly regular graphs is constructed as coset graphs of the duals of these projective codes. A common cover of all these graphs, for fixed $p$, is provided by considering the Hensel lifting of these cyclic codes over the $p$-adic numbers.
|
1511.01293
|
Leonardo Parisi
|
Andrea Cavagna, Chiara Creato, Lorenzo Del Castello, Stefania Melillo,
Leonardo Parisi, Massimiliano Viale
|
Towards a tracking algorithm based on the clustering of spatio-temporal
clouds of points
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The interest in 3D dynamical tracking is growing in fields such as robotics,
biology and fluid dynamics. Recently, a major source of progress in 3D tracking
has been the study of collective behaviour in biological systems, where the
trajectories of individual animals moving within large and dense groups need to
be reconstructed to understand the behavioural interaction rules. Experimental
data in this field are generally noisy and at low spatial resolution, so that
individuals appear as small featureless objects and trajectories must be
retrieved by making use of epipolar information only. Moreover, optical
occlusions often occur: in a multi-camera system one or more objects become
indistinguishable in one view, potentially jeopardizing the conservation of
identity over long-time trajectories. The most advanced 3D tracking algorithms
overcome optical occlusions making use of set-cover techniques, which however
have to solve NP-hard optimization problems. Moreover, current methods are not
able to cope with occlusions arising from actual physical proximity of objects
in 3D space. Here, we present a new method designed to work directly in 3D
space and time, creating (3D+1) clouds of points representing the full
spatio-temporal evolution of the moving targets. We can then use a simple
connected components labeling routine, which is linear in time, to solve
optical occlusions, hence lowering from NP to P the complexity of the problem.
Finally, we use normalized cut spectral clustering to tackle 3D physical
proximity.
|
[
{
"created": "Wed, 4 Nov 2015 11:20:51 GMT",
"version": "v1"
}
] |
2015-11-05
|
[
[
"Cavagna",
"Andrea",
""
],
[
"Creato",
"Chiara",
""
],
[
"Del Castello",
"Lorenzo",
""
],
[
"Melillo",
"Stefania",
""
],
[
"Parisi",
"Leonardo",
""
],
[
"Viale",
"Massimiliano",
""
]
] |
The interest in 3D dynamical tracking is growing in fields such as robotics, biology and fluid dynamics. Recently, a major source of progress in 3D tracking has been the study of collective behaviour in biological systems, where the trajectories of individual animals moving within large and dense groups need to be reconstructed to understand the behavioural interaction rules. Experimental data in this field are generally noisy and at low spatial resolution, so that individuals appear as small featureless objects and trajectories must be retrieved by making use of epipolar information only. Moreover, optical occlusions often occur: in a multi-camera system one or more objects become indistinguishable in one view, potentially jeopardizing the conservation of identity over long-time trajectories. The most advanced 3D tracking algorithms overcome optical occlusions making use of set-cover techniques, which however have to solve NP-hard optimization problems. Moreover, current methods are not able to cope with occlusions arising from actual physical proximity of objects in 3D space. Here, we present a new method designed to work directly in 3D space and time, creating (3D+1) clouds of points representing the full spatio-temporal evolution of the moving targets. We can then use a simple connected components labeling routine, which is linear in time, to solve optical occlusions, hence lowering from NP to P the complexity of the problem. Finally, we use normalized cut spectral clustering to tackle 3D physical proximity.
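The core idea of labeling a (3D+1) cloud can be sketched with a naive connected-components pass: two points share a label when their 4D (space plus scaled time) distance is below a threshold. This toy O(n^2) version stands in for the paper's linear-time labeling routine; the threshold and points are illustrative:

```python
import numpy as np

def connected_components(points, eps):
    # Union-find over (x, y, z, t) points: link every pair closer than
    # eps in 4D, then return one root label per point.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Two well-separated trajectories, each sampled at two instants.
pts = np.array([[0.0, 0.0, 0.0, 0.0], [0.1, 0.0, 0.0, 0.1],   # target A
                [5.0, 5.0, 5.0, 0.0], [5.0, 5.1, 5.0, 0.1]])  # target B
labels = connected_components(pts, eps=1.0)
```

Each spatio-temporal cloud gets a single label, after which a clustering step (spectral, in the paper) would split clouds merged by physical proximity.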
|
2212.08658
|
Frederic Guerrero-Sol\'e
|
Frederic Guerrero-Sole
|
IMAGINE: An Integrated Model of Artificial Intelligence-Mediated
Communication Effects
|
29 pages
| null | null | null |
cs.AI cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial Intelligence (AI) is transforming all fields of knowledge and
production. From surgery, autonomous driving, to image and video creation, AI
seems to make possible hitherto unimaginable processes of automation and
efficient creation. Media and communication are not an exception, and we are
currently witnessing the dawn of powerful AI tools capable of creating artistic
images from simple keywords or of capturing emotions from facial expressions.
These examples may be only the beginning of what can be in the future the
engines for automatic AI real time creation of media content linked to the
emotional and behavioural responses of individuals. Although it may seem we are
still far from there, it is already the moment to adapt our theories about
media to the hypothetical scenario in which content production can be done
without human intervention and is governed by the measured reactions of the
individual to the media content. Following that, I propose the
definition of the Integrated Model of Artificial Intelligence-Mediated
Communication Effects (IMAGINE), and its consequences on the way we understand
media evolution (Scolari, 2012) and we think about media effects (Potter,
2010). The conceptual framework proposed is aimed to help scholars theorizing
and doing research in a scenario of continuous real-time connection between AI
measurement of people's responses to media, and the AI creation of content,
with the objective of optimizing and maximizing the processes of influence.
Parasocial interaction and real-time beautification are used as examples to
model the functioning of the IMAGINE process.
|
[
{
"created": "Tue, 13 Dec 2022 19:48:38 GMT",
"version": "v1"
}
] |
2022-12-20
|
[
[
"Guerrero-Sole",
"Frederic",
""
]
] |
Artificial Intelligence (AI) is transforming all fields of knowledge and production. From surgery, autonomous driving, to image and video creation, AI seems to make possible hitherto unimaginable processes of automation and efficient creation. Media and communication are not an exception, and we are currently witnessing the dawn of powerful AI tools capable of creating artistic images from simple keywords or of capturing emotions from facial expressions. These examples may be only the beginning of what can be in the future the engines for automatic AI real-time creation of media content linked to the emotional and behavioural responses of individuals. Although it may seem we are still far from there, it is already the moment to adapt our theories about media to the hypothetical scenario in which content production can be done without human intervention and is governed by the measured reactions of the individual to the media content. Following that, I propose the definition of the Integrated Model of Artificial Intelligence-Mediated Communication Effects (IMAGINE), and its consequences on the way we understand media evolution (Scolari, 2012) and we think about media effects (Potter, 2010). The conceptual framework proposed is aimed to help scholars theorizing and doing research in a scenario of continuous real-time connection between AI measurement of people's responses to media, and the AI creation of content, with the objective of optimizing and maximizing the processes of influence. Parasocial interaction and real-time beautification are used as examples to model the functioning of the IMAGINE process.
|
2308.16382
|
Xiao Wang
|
Xiao Wang, Fang Dai, Wenyan Guo, Junfeng Wang
|
A stochastic block model for community detection in attributed networks
| null | null | null | null |
cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Community detection is an important topic in complex network analysis.
Existing community detection methods for attributed networks mostly use the
network structure alone, while methods that integrate node attributes mainly
target traditional community structures and cannot detect multipartite or
mixture structures in a network. In addition, the model-based community
detection methods currently proposed for attributed networks do not fully
consider node-specific topological information such as betweenness centrality
and the clustering coefficient. Therefore, a stochastic block model that
integrates the betweenness centrality and clustering coefficient of nodes for
community detection in attributed networks, named BCSBM, is proposed in this
paper. Different from other generative models for attributed networks, the
generation of links and attributes in the BCSBM model follows a Poisson
distribution, and the probability between communities is modeled following the
stochastic block model. Moreover, the betweenness centrality and clustering
coefficient of nodes are introduced into the generation of links and
attributes. Finally, the expectation-maximization algorithm is employed to
estimate the parameters of the BCSBM model, and the node-community memberships
are obtained through a hard partition, completing the community detection.
Experiments on six real-world networks containing different network
structures, compared against five algorithms, show that the BCSBM model not
only inherits the advantages of the stochastic block model and can detect
various network structures, but also fits the data well thanks to the
betweenness centrality and clustering coefficient of nodes. Overall, the
performance of this model is superior to that of the five compared algorithms.
|
[
{
"created": "Thu, 31 Aug 2023 01:00:24 GMT",
"version": "v1"
}
] |
2023-09-01
|
[
[
"Wang",
"Xiao",
""
],
[
"Dai",
"Fang",
""
],
[
"Guo",
"Wenyan",
""
],
[
"Wang",
"Junfeng",
""
]
] |
Community detection is an important topic in complex network analysis. Existing community detection methods for attributed networks mostly use the network structure alone, while methods that integrate node attributes mainly target traditional community structures and cannot detect multipartite or mixture structures in a network. In addition, the model-based community detection methods currently proposed for attributed networks do not fully consider node-specific topological information such as betweenness centrality and the clustering coefficient. Therefore, a stochastic block model that integrates the betweenness centrality and clustering coefficient of nodes for community detection in attributed networks, named BCSBM, is proposed in this paper. Different from other generative models for attributed networks, the generation of links and attributes in the BCSBM model follows a Poisson distribution, and the probability between communities is modeled following the stochastic block model. Moreover, the betweenness centrality and clustering coefficient of nodes are introduced into the generation of links and attributes. Finally, the expectation-maximization algorithm is employed to estimate the parameters of the BCSBM model, and the node-community memberships are obtained through a hard partition, completing the community detection. Experiments on six real-world networks containing different network structures, compared against five algorithms, show that the BCSBM model not only inherits the advantages of the stochastic block model and can detect various network structures, but also fits the data well thanks to the betweenness centrality and clustering coefficient of nodes. Overall, the performance of this model is superior to that of the five compared algorithms.
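The Poisson link-generation core shared with the BCSBM model can be sketched as sampling each edge count from a rate set by the endpoints' blocks. The paper's centrality/clustering terms, attribute generation, and EM fit are omitted; block sizes and rates below are toy values:

```python
import numpy as np

def sample_poisson_sbm(groups, theta, rng):
    # A_ij ~ Poisson(theta[g_i, g_j]); symmetrized, no self-loops.
    g = np.asarray(groups)
    rates = theta[np.ix_(g, g)]          # per-pair Poisson rates
    A = rng.poisson(rates)
    upper = np.triu(A, 1)                # keep one draw per pair
    return upper + upper.T

rng = np.random.default_rng(0)
groups = [0] * 20 + [1] * 20             # two communities of 20 nodes
theta = np.array([[3.0, 0.1],
                  [0.1, 3.0]])           # dense within, sparse between
A = sample_poisson_sbm(groups, theta, rng)
```

Fitting reverses this process: EM alternates between soft block assignments and re-estimating the rate matrix theta from the observed counts.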
|
1803.05396
|
Carla Groenland
|
Carla Groenland and Karolina Okrasa and Pawel Rz\k{a}\.zewski and Alex
Scott and Paul Seymour and Sophie Spirkl
|
$H$-colouring $P_t$-free graphs in subexponential time
|
Fixed some typo's
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph is called $P_t$-free if it does not contain the path on $t$ vertices
as an induced subgraph. Let $H$ be a multigraph with the property that any two
distinct vertices share at most one common neighbour. We show that the
generating function for (list) graph homomorphisms from $G$ to $H$ can be
calculated in subexponential time $2^{O\left(\sqrt{tn\log(n)}\right)}$ for
$n=|V(G)|$ in the class of $P_t$-free graphs $G$. As a corollary, we show that
the number of 3-colourings of a $P_t$-free graph $G$ can be found in
subexponential time. On the other hand, no subexponential time algorithm exists
for 4-colourability of $P_t$-free graphs assuming the Exponential Time
Hypothesis. Along the way, we prove that $P_t$-free graphs have pathwidth that
is linear in their maximum degree.
|
[
{
"created": "Wed, 14 Mar 2018 16:45:27 GMT",
"version": "v1"
},
{
"created": "Thu, 10 May 2018 07:18:57 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Mar 2019 15:33:04 GMT",
"version": "v3"
}
] |
2019-03-25
|
[
[
"Groenland",
"Carla",
""
],
[
"Okrasa",
"Karolina",
""
],
[
"Rzążewski",
"Pawel",
""
],
[
"Scott",
"Alex",
""
],
[
"Seymour",
"Paul",
""
],
[
"Spirkl",
"Sophie",
""
]
] |
A graph is called $P_t$-free if it does not contain the path on $t$ vertices as an induced subgraph. Let $H$ be a multigraph with the property that any two distinct vertices share at most one common neighbour. We show that the generating function for (list) graph homomorphisms from $G$ to $H$ can be calculated in subexponential time $2^{O\left(\sqrt{tn\log(n)}\right)}$ for $n=|V(G)|$ in the class of $P_t$-free graphs $G$. As a corollary, we show that the number of 3-colourings of a $P_t$-free graph $G$ can be found in subexponential time. On the other hand, no subexponential time algorithm exists for 4-colourability of $P_t$-free graphs assuming the Exponential Time Hypothesis. Along the way, we prove that $P_t$-free graphs have pathwidth that is linear in their maximum degree.
|
2212.06146
|
Audrey Der
|
Audrey Der, Chin-Chia Michael Yeh, Renjie Wu, Junpeng Wang, Yan Zheng,
Zhongfang Zhuang, Liang Wang, Wei Zhang, Eamonn Keogh
|
Matrix Profile XXVII: A Novel Distance Measure for Comparing Long Time
Series
|
Accepted at IEEE ICKG 2022. (Previously entitled IEEE ICBK.) Abridged
abstract as per arxiv's requirements
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The most useful data mining primitives are distance measures. With an
effective distance measure, it is possible to perform classification,
clustering, anomaly detection, segmentation, etc. For single-event time series
Euclidean Distance and Dynamic Time Warping distance are known to be extremely
effective. However, for time series containing cyclical behaviors, the semantic
meaningfulness of such comparisons is less clear. For example, on two separate
days the telemetry from an athlete's workout routine might be very similar. The
second day may change the order of performing push-ups and squats, add
repetitions of pull-ups, or completely omit dumbbell curls. Any of these
minor changes would defeat existing time series distance measures. Some
bag-of-features methods have been proposed to address this problem, but we
argue that in many cases, similarity is intimately tied to the shapes of
subsequences within these longer time series. In such cases, summative features
will lack discrimination ability. In this work we introduce PRCIS, which stands
for Pattern Representation Comparison in Series. PRCIS is a distance measure
for long time series, which exploits recent progress in our ability to
summarize time series with dictionaries. We will demonstrate the utility of our
ideas on diverse tasks and datasets.
|
[
{
"created": "Fri, 9 Dec 2022 23:02:23 GMT",
"version": "v1"
}
] |
2022-12-14
|
[
[
"Der",
"Audrey",
""
],
[
"Yeh",
"Chin-Chia Michael",
""
],
[
"Wu",
"Renjie",
""
],
[
"Wang",
"Junpeng",
""
],
[
"Zheng",
"Yan",
""
],
[
"Zhuang",
"Zhongfang",
""
],
[
"Wang",
"Liang",
""
],
[
"Zhang",
"Wei",
""
],
[
"Keogh",
"Eamonn",
""
]
] |
The most useful data mining primitives are distance measures. With an effective distance measure, it is possible to perform classification, clustering, anomaly detection, segmentation, etc. For single-event time series Euclidean Distance and Dynamic Time Warping distance are known to be extremely effective. However, for time series containing cyclical behaviors, the semantic meaningfulness of such comparisons is less clear. For example, on two separate days the telemetry from an athlete's workout routine might be very similar. The second day may change the order of performing push-ups and squats, add repetitions of pull-ups, or completely omit dumbbell curls. Any of these minor changes would defeat existing time series distance measures. Some bag-of-features methods have been proposed to address this problem, but we argue that in many cases, similarity is intimately tied to the shapes of subsequences within these longer time series. In such cases, summative features will lack discrimination ability. In this work we introduce PRCIS, which stands for Pattern Representation Comparison in Series. PRCIS is a distance measure for long time series, which exploits recent progress in our ability to summarize time series with dictionaries. We will demonstrate the utility of our ideas on diverse tasks and datasets.
|
2104.06127
|
Maartje De Graaf
|
Sven Y. Neuteboom and Maartje M.A. de Graaf
|
Cobbler Stick With Your Reads: People's Perceptions of Gendered Robots
Performing Gender Stereotypical Tasks
|
in TRAITS Workshop Proceedings (arXiv:2103.12679) held in conjunction
with Companion of the 2021 ACM/IEEE International Conference on Human-Robot
Interaction, March 2021, Pages 709-711
| null | null |
TRAITS/2021/01
|
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Previous research found that robots should best be designed to fit their
given task, whilst others identified gender effects in people's evaluations of
robots. This study combines this knowledge to investigate stereotyping effects
of robot genderedness and assigned tasks in an online experiment (n = 89)
manipulating robot gender (male vs. female) and task type (analytical vs.
social) in a between-subjects design in terms of trust, social perception, and
humanness. People deem robots more competent and have higher trust in their
capacity when they perform analytical tasks compared to social tasks,
independent of the robot's gender. Furthermore, we observed a trend in the data
indicating that people seem to dehumanize female robots (regardless of task
performed) to animals lacking higher-level mental processes, and additionally
that people seem to dehumanize robots to emotionless objects only when gendered
robots perform tasks contradicting the stereotypes of their gender.
|
[
{
"created": "Tue, 13 Apr 2021 12:00:17 GMT",
"version": "v1"
}
] |
2021-04-14
|
[
[
"Neuteboom",
"Sven Y.",
""
],
[
"de Graaf",
"Maartje M. A.",
""
]
] |
Previous research found that robots should best be designed to fit their given task, whilst others identified gender effects in people's evaluations of robots. This study combines this knowledge to investigate stereotyping effects of robot genderedness and assigned tasks in an online experiment (n = 89) manipulating robot gender (male vs. female) and task type (analytical vs. social) in a between-subjects design in terms of trust, social perception, and humanness. People deem robots more competent and have higher trust in their capacity when they perform analytical tasks compared to social tasks, independent of the robot's gender. Furthermore, we observed a trend in the data indicating that people seem to dehumanize female robots (regardless of task performed) to animals lacking higher-level mental processes, and additionally that people seem to dehumanize robots to emotionless objects only when gendered robots perform tasks contradicting the stereotypes of their gender.
|
1309.7313
|
Jiwei Li
|
Jiwei Li and Claire Cardie
|
Timeline Generation: Tracking individuals on Twitter
| null | null | null | null |
cs.SI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an unsupervised framework to reconstruct a person's
life history by creating a chronological list of {\it personal important
events} (PIE) of individuals based on the tweets they published. By analyzing
individual tweet collections, we find that tweets suitable for inclusion in
the personal timeline are those discussing personal (as opposed to
public) and time-specific (as opposed to time-general) topics. To further
extract these types of topics, we introduce a non-parametric multi-level
Dirichlet Process model to recognize four types of tweets: personal
time-specific (PersonTS), personal time-general (PersonTG), public
time-specific (PublicTS) and public time-general (PublicTG) topics, which, in
turn, are used for further personal event extraction and timeline generation.
To the best of our knowledge, this is the first work focused on the generation
of timelines for individuals from Twitter data. For evaluation, we have built a
new gold-standard timeline dataset based on Twitter and Wikipedia that contains
PIE-related events from 20 {\it ordinary Twitter users} and 20 {\it celebrities}.
Experiments on real Twitter data quantitatively demonstrate the effectiveness
of our approach.
|
[
{
"created": "Fri, 27 Sep 2013 17:56:35 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Oct 2013 00:16:14 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Feb 2014 19:10:58 GMT",
"version": "v3"
}
] |
2014-02-10
|
[
[
"Li",
"Jiwei",
""
],
[
"Cardie",
"Claire",
""
]
] |
In this paper, we propose an unsupervised framework to reconstruct a person's life history by creating a chronological list of {\it personal important events} (PIE) of individuals based on the tweets they published. By analyzing individual tweet collections, we find that tweets suitable for inclusion in the personal timeline are those discussing personal (as opposed to public) and time-specific (as opposed to time-general) topics. To further extract these types of topics, we introduce a non-parametric multi-level Dirichlet Process model to recognize four types of tweets: personal time-specific (PersonTS), personal time-general (PersonTG), public time-specific (PublicTS) and public time-general (PublicTG) topics, which, in turn, are used for further personal event extraction and timeline generation. To the best of our knowledge, this is the first work focused on the generation of timelines for individuals from Twitter data. For evaluation, we have built a new gold-standard timeline dataset based on Twitter and Wikipedia that contains PIE-related events from 20 {\it ordinary Twitter users} and 20 {\it celebrities}. Experiments on real Twitter data quantitatively demonstrate the effectiveness of our approach.
|
1101.4236
|
Yashwanth Vijay Kothapalli
|
Yashwanth Kothapalli
|
Indexing Properties of Primitive Pythagorean Triples for Cryptography
Applications
|
8 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents new properties of Primitive Pythagorean Triples (PPT)
that are relevant to cryptography and to applications where events of
different probabilities need to be generated.
|
[
{
"created": "Fri, 21 Jan 2011 21:53:18 GMT",
"version": "v1"
}
] |
2011-01-25
|
[
[
"Kothapalli",
"Yashwanth",
""
]
] |
This paper presents new properties of Primitive Pythagorean Triples (PPT) that are relevant to cryptography and to applications where events of different probabilities need to be generated.
|
2303.08562
|
Weijian Huang
|
Weijian Huang, Hao Yang, Cheng Li, Mingtong Dai, Rui Yang, Shanshan
Wang
|
MGA: Medical generalist agent through text-guided knowledge
transformation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal representation methods have achieved advanced performance in
medical applications by extracting more robust features from multi-domain data.
However, existing methods usually need to train additional branches for
downstream tasks, which may increase the model complexities in clinical
applications as well as introduce additional human inductive bias. Besides,
very few studies exploit the rich clinical knowledge embedded in clinical daily
reports. To this end, we propose a novel medical generalist agent, MGA, that
can address three kinds of common clinical tasks via clinical reports knowledge
transformation. Unlike the existing methods, MGA can easily adapt to different
tasks without specific downstream branches when their corresponding annotations
are missing. More importantly, ours is the first attempt to use medical
professional language guidance as a transmission medium to guide the agent's
behavior. The proposed method is implemented on four well-known X-ray
open-source datasets, MIMIC-CXR, CheXpert, MIMIC-CXR-JPG, and MIMIC-CXR-MS.
Promising results are obtained, which validate the effectiveness of our
proposed MGA. Code is available at: https://github.com/SZUHvern/MGA
|
[
{
"created": "Wed, 15 Mar 2023 12:28:31 GMT",
"version": "v1"
}
] |
2023-03-16
|
[
[
"Huang",
"Weijian",
""
],
[
"Yang",
"Hao",
""
],
[
"Li",
"Cheng",
""
],
[
"Dai",
"Mingtong",
""
],
[
"Yang",
"Rui",
""
],
[
"Wang",
"Shanshan",
""
]
] |
Multi-modal representation methods have achieved advanced performance in medical applications by extracting more robust features from multi-domain data. However, existing methods usually need to train additional branches for downstream tasks, which may increase the model complexities in clinical applications as well as introduce additional human inductive bias. Besides, very few studies exploit the rich clinical knowledge embedded in clinical daily reports. To this end, we propose a novel medical generalist agent, MGA, that can address three kinds of common clinical tasks via clinical reports knowledge transformation. Unlike the existing methods, MGA can easily adapt to different tasks without specific downstream branches when their corresponding annotations are missing. More importantly, ours is the first attempt to use medical professional language guidance as a transmission medium to guide the agent's behavior. The proposed method is implemented on four well-known X-ray open-source datasets, MIMIC-CXR, CheXpert, MIMIC-CXR-JPG, and MIMIC-CXR-MS. Promising results are obtained, which validate the effectiveness of our proposed MGA. Code is available at: https://github.com/SZUHvern/MGA
|
2403.11451
|
Haolan Chen
|
Haolan Chen, Jinhua Hao, Kai Zhao, Kun Yuan, Ming Sun, Chao Zhou and
Wei Hu
|
CasSR: Activating Image Power for Real-World Image Super-Resolution
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of image super-resolution is to generate clean and
high-resolution images from degraded versions. Recent advancements in diffusion
modeling have led to the emergence of various image super-resolution techniques
that leverage pretrained text-to-image (T2I) models. Nevertheless, due to the
prevalent severe degradation in low-resolution images and the inherent
characteristics of diffusion models, achieving high-fidelity image restoration
remains challenging. Existing methods often exhibit issues including semantic
loss, artifacts, and the introduction of spurious content not present in the
original image. To tackle this challenge, we propose Cascaded diffusion for
Super-Resolution, CasSR, a novel method designed to produce highly detailed
and realistic images. In particular, we develop a cascaded controllable
diffusion model that aims to optimize the extraction of information from
low-resolution images. This model generates a preliminary reference image to
facilitate initial information extraction and degradation mitigation.
Furthermore, we propose a multi-attention mechanism to enhance the T2I model's
capability in maximizing the restoration of the original image content. Through
a comprehensive blend of qualitative and quantitative analyses, we substantiate
the efficacy and superiority of our approach.
|
[
{
"created": "Mon, 18 Mar 2024 03:59:43 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Chen",
"Haolan",
""
],
[
"Hao",
"Jinhua",
""
],
[
"Zhao",
"Kai",
""
],
[
"Yuan",
"Kun",
""
],
[
"Sun",
"Ming",
""
],
[
"Zhou",
"Chao",
""
],
[
"Hu",
"Wei",
""
]
] |
The objective of image super-resolution is to generate clean and high-resolution images from degraded versions. Recent advancements in diffusion modeling have led to the emergence of various image super-resolution techniques that leverage pretrained text-to-image (T2I) models. Nevertheless, due to the prevalent severe degradation in low-resolution images and the inherent characteristics of diffusion models, achieving high-fidelity image restoration remains challenging. Existing methods often exhibit issues including semantic loss, artifacts, and the introduction of spurious content not present in the original image. To tackle this challenge, we propose Cascaded diffusion for Super-Resolution, CasSR, a novel method designed to produce highly detailed and realistic images. In particular, we develop a cascaded controllable diffusion model that aims to optimize the extraction of information from low-resolution images. This model generates a preliminary reference image to facilitate initial information extraction and degradation mitigation. Furthermore, we propose a multi-attention mechanism to enhance the T2I model's capability in maximizing the restoration of the original image content. Through a comprehensive blend of qualitative and quantitative analyses, we substantiate the efficacy and superiority of our approach.
|
2204.14059
|
Siyuan Liu
|
Yi Lin (1), Siyuan Liu (1), Lei Yan (1 and 2), Kai Yan (3), Yelu Zeng
(4) and Bin Yang (5) ((1) Institute of Remote Sensing and Geographic
Information System, School of Earth and Space Sciences, Peking University.
(2) Guangxi Key Laboratory of Remote Measuring System, Guiling University of
Aerospace Technology. (3) School of Land Science and Techniques, China
University of Geosciences. (4) College of Land Science and Technology, China
Agricultural University. (5) College of Electrical and Information
Engineering, Hunan University.)
|
Improving the estimation of directional area scattering factor (DASF)
from canopy reflectance: theoretical basis and validation
| null | null | null | null |
cs.CE eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Directional area scattering factor (DASF) is a critical canopy structural
parameter for vegetation monitoring. It provides an efficient tool for
decoupling of canopy structure and leaf optics from canopy reflectance. The
current standard approach to estimate DASF from canopy bidirectional reflectance factor
(BRF) is based on the assumption that in the weakly absorbing 710 to 790 nm
spectral interval, leaf scattering does not change much with the concentration
of dry matter and thus its variation can be neglected. This results in biased
estimates of DASF and consequently leads to uncertainty in DASF-related
applications. This study proposes a new approach to account for variations in
concentrations of this biochemical constituent, which additionally uses the
canopy BRF at 2260 nm. In silico analysis of the proposed approach suggests a
significant increase in accuracy over the standard technique by a relative root
mean square error (rRMSE) of 49% and 34% for one- and three-dimensional scenes,
respectively. When compared with indoor multi-angular hyperspectral
measurements reported in the literature, the mean absolute error is reduced by 68%
for needle leaf and 20% for broadleaf canopies. Thus, the proposed DASF
estimation approach outperforms the current one and can be used more reliably
in DASF-related applications, such as vegetation monitoring of functional
traits, dynamics, and radiation budget.
|
[
{
"created": "Thu, 28 Apr 2022 02:54:24 GMT",
"version": "v1"
}
] |
2022-05-02
|
[
[
"Lin",
"Yi",
"",
"1 and 2"
],
[
"Liu",
"Siyuan",
"",
"1 and 2"
],
[
"Yan",
"Lei",
"",
"1 and 2"
],
[
"Yan",
"Kai",
""
],
[
"Zeng",
"Yelu",
""
],
[
"Yang",
"Bin",
""
]
] |
Directional area scattering factor (DASF) is a critical canopy structural parameter for vegetation monitoring. It provides an efficient tool for decoupling of canopy structure and leaf optics from canopy reflectance. The current standard approach to estimate DASF from canopy bidirectional reflectance factor (BRF) is based on the assumption that in the weakly absorbing 710 to 790 nm spectral interval, leaf scattering does not change much with the concentration of dry matter and thus its variation can be neglected. This results in biased estimates of DASF and consequently leads to uncertainty in DASF-related applications. This study proposes a new approach to account for variations in concentrations of this biochemical constituent, which additionally uses the canopy BRF at 2260 nm. In silico analysis of the proposed approach suggests a significant increase in accuracy over the standard technique by a relative root mean square error (rRMSE) of 49% and 34% for one- and three-dimensional scenes, respectively. When compared with indoor multi-angular hyperspectral measurements reported in the literature, the mean absolute error is reduced by 68% for needle leaf and 20% for broadleaf canopies. Thus, the proposed DASF estimation approach outperforms the current one and can be used more reliably in DASF-related applications, such as vegetation monitoring of functional traits, dynamics, and radiation budget.
|
2007.04604
|
Sao Mai Nguyen
|
Linda Nanan Vall\'ee (ESATIC), Christophe Lohr, Sao Mai Nguyen (IMT
Atlantique), Ioannis Kanellos (IMT Atlantique - INFO), O. Asseu (ESATIC)
|
Building an Automated Gesture Imitation Game for Teenagers with ASD
| null |
Far East Journal of Electronics and Communications, 2019, 22,
pp.19 - 28
|
10.17654/EC023010001
| null |
cs.HC cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autism spectrum disorder is a neurodevelopmental condition that includes
issues with communication and social interactions. People with ASD also often
have restricted interests and repetitive behaviors. In this paper we build the
preliminary building blocks of an automated gesture imitation game aimed at
improving social interactions with teenagers with ASD. The structure of the
game is presented, as well as support tools and methods for skeleton detection
and imitation learning. The game shall later be implemented using an
interactive robot.
|
[
{
"created": "Thu, 9 Jul 2020 07:27:43 GMT",
"version": "v1"
}
] |
2020-07-10
|
[
[
"Vallée",
"Linda Nanan",
"",
"ESATIC"
],
[
"Lohr",
"Christophe",
"",
"IMT\n Atlantique"
],
[
"Nguyen",
"Sao Mai",
"",
"IMT\n Atlantique"
],
[
"Kanellos",
"Ioannis",
"",
"IMT Atlantique - INFO"
],
[
"Asseu",
"O.",
"",
"ESATIC"
]
] |
Autism spectrum disorder is a neurodevelopmental condition that includes issues with communication and social interactions. People with ASD also often have restricted interests and repetitive behaviors. In this paper we build the preliminary building blocks of an automated gesture imitation game aimed at improving social interactions with teenagers with ASD. The structure of the game is presented, as well as support tools and methods for skeleton detection and imitation learning. The game shall later be implemented using an interactive robot.
|
1802.08150
|
Diego Moussallem
|
Diego Moussallem, Thiago Castro Ferreira, Marcos Zampieri, Maria
Claudia Cavalcanti, Geraldo Xex\'eo, Mariana Neves, Axel-Cyrille Ngonga Ngomo
|
RDF2PT: Generating Brazilian Portuguese Texts from RDF Data
|
Accepted for publication in Language Resources and Evaluation
Conference (LREC) 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generation of natural language from Resource Description Framework (RDF)
data has recently gained significant attention due to the continuous growth of
Linked Data. A number of these approaches generate natural language in
languages other than English; however, no work has been proposed to generate
Brazilian Portuguese texts out of RDF. We address this research gap by
presenting RDF2PT, an approach that verbalizes RDF data into Brazilian
Portuguese. We evaluated RDF2PT in an open questionnaire with 44 native speakers
divided into experts and non-experts. Our results suggest that RDF2PT is able
to generate text which is similar to that generated by humans and can hence be
easily understood.
|
[
{
"created": "Thu, 22 Feb 2018 16:41:56 GMT",
"version": "v1"
}
] |
2018-02-23
|
[
[
"Moussallem",
"Diego",
""
],
[
"Ferreira",
"Thiago Castro",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"Cavalcanti",
"Maria Claudia",
""
],
[
"Xexéo",
"Geraldo",
""
],
[
"Neves",
"Mariana",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
The generation of natural language from Resource Description Framework (RDF) data has recently gained significant attention due to the continuous growth of Linked Data. A number of these approaches generate natural language in languages other than English; however, no work has been proposed to generate Brazilian Portuguese texts out of RDF. We address this research gap by presenting RDF2PT, an approach that verbalizes RDF data into Brazilian Portuguese. We evaluated RDF2PT in an open questionnaire with 44 native speakers divided into experts and non-experts. Our results suggest that RDF2PT is able to generate text which is similar to that generated by humans and can hence be easily understood.
|