| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1305.3803
|
Hassan Mansour
|
Hassan Mansour and Ozgur Yilmaz
|
A fast randomized Kaczmarz algorithm for sparse solutions of consistent
linear systems
| null | null | null | null |
cs.NA cs.IT math.IT math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Kaczmarz algorithm is a popular solver for overdetermined linear systems
due to its simplicity and speed. In this paper, we propose a modification that
speeds up the convergence of the randomized Kaczmarz algorithm for systems of
linear equations with sparse solutions. The speedup is achieved by projecting
every iterate onto a weighted row of the linear system while maintaining the
random row selection criterion of Strohmer and Vershynin. The weights are chosen
to attenuate the contribution of row elements that lie outside of the estimated
support of the sparse solution. While the Kaczmarz algorithm and its variants
can only find solutions to overdetermined linear systems, our algorithm
surprisingly succeeds in finding sparse solutions to underdetermined linear
systems as well. We present empirical studies which demonstrate the
acceleration in convergence to the sparse solution using this modified approach
in the overdetermined case. We also demonstrate the sparse recovery
capabilities of our approach in the underdetermined case and compare the
performance with that of $\ell_1$ minimization.
|
[
{
"created": "Thu, 16 May 2013 13:44:42 GMT",
"version": "v1"
}
] |
2013-05-17
|
[
[
"Mansour",
"Hassan",
""
],
[
"Yilmaz",
"Ozgur",
""
]
] |
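The weighted-projection step described above can be sketched in a few lines; the support estimate and attenuation schedule below are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def sparse_randomized_kaczmarz(A, b, n_iters=3000, sparsity=None, seed=0):
    """Sketch of a sparse randomized Kaczmarz solver for A x = b.

    Rows are sampled with probability proportional to ||a_i||^2, as in
    Strohmer-Vershynin.  Each iterate is projected onto a *weighted* row:
    entries outside the estimated support of the solution are attenuated.
    The support estimate and attenuation schedule here are illustrative
    assumptions, not the exact construction of the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    k = sparsity if sparsity is not None else max(1, n // 4)
    for t in range(n_iters):
        i = rng.choice(m, p=probs)
        # estimated support: indices of the k largest-magnitude entries
        support = np.argpartition(np.abs(x), -k)[-k:]
        # attenuate off-support entries, more strongly as t grows
        # (illustrative schedule)
        w = np.full(n, 1.0 / np.sqrt(t / 100 + 1))
        w[support] = 1.0
        a_w = A[i] * w
        # project the current iterate onto the hyperplane <a_w, x> = b_i
        x += (b[i] - a_w @ x) / (a_w @ a_w) * a_w
    return x
```

Early iterations behave like plain randomized Kaczmarz (all weights near one); as the attenuation takes hold, off-support coordinates are driven toward zero while on-support coordinates keep converging.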
|
2306.12457
|
Ruhan Liu
|
Ruhan Liu, Jiajia Li, Yang Wen, Huating Li, Ping Zhang, Bin Sheng,
David Dagan Feng
|
Deep Dynamic Epidemiological Modelling for COVID-19 Forecasting in
Multi-level Districts
| null | null | null | null |
cs.LG cs.AI q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Objective: COVID-19 has spread worldwide and had a major impact across the
globe. Modeling the spread of COVID-19 is essential to understand the current
situation and to formulate intervention measures. Epidemiological equations
based on the SEIR model simulate disease development. Traditional parameter
estimation methods for solving the SEIR equations cannot precisely fit
real-world data under varying conditions, such as social distancing policies
and intervention strategies. Additionally, learning-based models achieve
outstanding fitting performance but cannot visualize mechanisms. Methods: We
therefore propose a deep dynamic epidemiological (DDE) method that combines
epidemiological equations with the advantages of deep learning to obtain both
high accuracy and visualization. The DDE uses deep networks to fit the effect
function that simulates ever-changing situations, based on the neural ODE
method for solving the variants' equations, ensuring fitting performance
across multi-level areas. Results: We introduce four SEIR variants to fit
different situations in different countries and regions. We compare our DDE
method with traditional parameter estimation methods (Nelder-Mead, BFGS,
Powell, Truncated Newton Conjugate-Gradient, Neural ODE) in fitting real-world
data for countries (the USA, Colombia, South Africa) and regions (Wuhan in
China, Piedmont in Italy). Our DDE method achieves the best Mean Square Error
and Pearson coefficient in all five areas. Furthermore, compared with
state-of-the-art learning-based approaches, the DDE outperforms all
techniques, including LSTM, RNN, GRU, Random Forest, Extremely Randomized
Trees, and Decision Tree. Conclusion: The DDE presents outstanding predictive
ability and a visualized display of the changes in infection rates in
different regions and countries.
|
[
{
"created": "Wed, 21 Jun 2023 06:30:02 GMT",
"version": "v1"
}
] |
2023-06-23
|
[
[
"Liu",
"Ruhan",
""
],
[
"Li",
"Jiajia",
""
],
[
"Wen",
"Yang",
""
],
[
"Li",
"Huating",
""
],
[
"Zhang",
"Ping",
""
],
[
"Sheng",
"Bin",
""
],
[
"Feng",
"David Dagan",
""
]
] |
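The SEIR backbone referred to above can be written down directly. Below is a minimal explicit-Euler discretization with a constant contact rate; in the DDE method, beta would instead be a learned, time-varying effect function, and the parameter values here are purely illustrative.

```python
import numpy as np

def seir_simulate(S0, E0, I0, R0, beta, sigma, gamma, dt=0.1, steps=1000):
    """Explicit-Euler integration of the basic SEIR equations:

        dS/dt = -beta * S * I / N
        dE/dt =  beta * S * I / N - sigma * E
        dI/dt =  sigma * E - gamma * I
        dR/dt =  gamma * I

    A constant beta is used here for illustration; the DDE method would
    replace it with a neural-network-fitted effect function.
    """
    N = S0 + E0 + I0 + R0
    S, E, I, R = float(S0), float(E0), float(I0), float(R0)
    traj = [(S, E, I, R)]
    for _ in range(steps):
        new_exposed    = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_removed    = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_removed
        R += new_removed
        traj.append((S, E, I, R))
    return np.array(traj)
```

By construction each step conserves the total population N, which is a useful sanity check on any discretization of these equations.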
|
2110.05702
|
Homanga Bharadhwaj
|
Homanga Bharadhwaj
|
Auditing Robot Learning for Safety and Compliance during Deployment
|
Blue Sky paper at the 5th Conference on Robot Learning (CoRL 2021)
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Robots of the future are going to exhibit increasingly human-like and
super-human intelligence in a myriad of different tasks. They are also likely
going to fail and be noncompliant with human preferences in increasingly subtle
ways. Towards the goal of achieving autonomous robots, the robot learning
community has made rapid strides in applying machine learning techniques to
train robots through data and interaction. This makes the study of how best to
audit these algorithms for compatibility with humans both pertinent and
urgent. In this paper, we draw inspiration from the AI Safety and Alignment
communities and make the case that we need to urgently consider ways in which
we can best audit our robot learning algorithms to check for failure modes, and
ensure that when operating autonomously, they are indeed behaving in ways that
the human algorithm designers intend them to. We believe that this is a
challenging problem that will require efforts from the entire robot learning
community, and do not attempt to provide a concrete framework for auditing.
Instead, we outline high-level guidance and a possible approach towards
formulating this framework which we hope will serve as a useful starting point
for thinking about auditing in the context of robot learning.
|
[
{
"created": "Tue, 12 Oct 2021 02:40:11 GMT",
"version": "v1"
}
] |
2021-10-13
|
[
[
"Bharadhwaj",
"Homanga",
""
]
] |
|
2304.07514
|
Ahmad Khan
|
Ahmad Faraz Khan, Xinran Wang, Qi Le, Azal Ahmad Khan, Haider Ali, Jie
Ding, Ali Butt, Ali Anwar
|
PI-FL: Personalized and Incentivized Federated Learning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personalized FL is widely used to address the heterogeneity challenges posed
by non-IID data. A primary obstacle is considering the personalization
process from the client's perspective to preserve their autonomy. Allowing the
clients to participate in personalized FL decisions becomes significant due to
privacy and security concerns, where the clients may not be at liberty to share
private information necessary for producing good quality personalized models.
Moreover, clients with high-quality data and resources are reluctant to
participate in the FL process without reasonable incentive. In this paper, we
propose PI-FL, a one-shot personalization solution complemented by a
token-based incentive mechanism that rewards personalized training. PI-FL
outperforms other state-of-the-art approaches and can generate good-quality
personalized models while respecting clients' privacy.
|
[
{
"created": "Sat, 15 Apr 2023 09:02:06 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Apr 2023 15:36:55 GMT",
"version": "v2"
}
] |
2023-04-28
|
[
[
"Khan",
"Ahmad Faraz",
""
],
[
"Wang",
"Xinran",
""
],
[
"Le",
"Qi",
""
],
[
"Khan",
"Azal Ahmad",
""
],
[
"Ali",
"Haider",
""
],
[
"Ding",
"Jie",
""
],
[
"Butt",
"Ali",
""
],
[
"Anwar",
"Ali",
""
]
] |
|
1808.09479
|
Mohammadzaman Zamani
|
Mohammadzaman Zamani, H. Andrew Schwartz, Veronica E. Lynn, Salvatore
Giorgi and Niranjan Balasubramanian
|
Residualized Factor Adaptation for Community Social Media Prediction
Tasks
|
Conference on Empirical Methods in Natural Language Processing (EMNLP
2018)
|
Proceedings of the 2018 Conference on Empirical Methods in Natural
Language Processing, pages 3560-3569, 2018
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predictive models over social media language have shown promise in capturing
community outcomes, but approaches thus far largely neglect the
socio-demographic context (e.g. age, education rates, race) of the community
from which the language originates. For example, it may be inaccurate to assume
people in Mobile, Alabama, where the population is relatively older, will use
words the same way as those from San Francisco, where the median age is younger
with a higher rate of college education. In this paper, we present residualized
factor adaptation, a novel approach to community prediction tasks that both
(a) effectively integrates community attributes and (b) adapts
linguistic features to community attributes (factors). We use eleven
demographic and socioeconomic attributes, and evaluate our approach over five
different community-level predictive tasks, spanning health (heart disease
mortality, percent fair/poor health), psychology (life satisfaction), and
economics (percent housing price increase, foreclosure rate). Our evaluation
shows that residualized factor adaptation significantly improves 4 out of 5
community-level outcome predictions over prior state-of-the-art for
incorporating socio-demographic contexts.
|
[
{
"created": "Tue, 28 Aug 2018 18:23:56 GMT",
"version": "v1"
}
] |
2019-04-17
|
[
[
"Zamani",
"Mohammadzaman",
""
],
[
"Schwartz",
"H. Andrew",
""
],
[
"Lynn",
"Veronica E.",
""
],
[
"Giorgi",
"Salvatore",
""
],
[
"Balasubramanian",
"Niranjan",
""
]
] |
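The two ingredients, integrating community attributes and adapting linguistic features to them, can be sketched with ordinary least squares. The concrete model below (residualize the outcome on factors, then fit the residual with language-by-factor interaction features) is an illustrative reading, not the paper's exact specification.

```python
import numpy as np

def residualized_factor_adaptation(X_lang, X_fac, y):
    """Illustrative sketch of residualized factor adaptation:
      (1) regress the outcome on socio-demographic factors alone;
      (2) build factor-adapted language features by interacting each
          language feature with each factor;
      (3) fit the residual of the factor model with the adapted features.
    Ordinary least squares throughout; the exact form is an assumption."""

    def adapt(L, F):
        # language features plus language-by-factor interaction terms
        return np.hstack([L] + [L * F[:, [j]] for j in range(F.shape[1])])

    w_fac, *_ = np.linalg.lstsq(X_fac, y, rcond=None)              # (1)
    residual = y - X_fac @ w_fac
    w_adapt, *_ = np.linalg.lstsq(adapt(X_lang, X_fac), residual,
                                  rcond=None)                      # (3)

    def predict(L, F):
        return F @ w_fac + adapt(L, F) @ w_adapt
    return predict
```

The interaction terms let the weight on a language feature vary with community attributes, which is the "adaptation" the abstract describes.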
|
2105.12479
|
Celia Cintas
|
Celia Cintas, Skyler Speakman, Girmaw Abebe Tadesse, Victor Akinwande,
Edward McFowland III, Komminist Weldemariam
|
Pattern Detection in the Activation Space for Identifying Synthesized
Content
|
The paper is under consideration at Pattern Recognition Letters
| null | null | null |
cs.CV cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Generative Adversarial Networks (GANs) have recently achieved unprecedented
success in photo-realistic image synthesis from low-dimensional random noise.
The ability to synthesize high-quality content at a large scale brings
potential risks as the generated samples may lead to misinformation that can
create severe social, political, health, and business hazards. We propose
SubsetGAN to identify generated content by detecting a subset of anomalous
node-activations in the inner layers of pre-trained neural networks. These
nodes, as a group, maximize a non-parametric measure of divergence away from
the expected distribution of activations created from real data. This enables
us to identify synthesized images without prior knowledge of their distribution.
SubsetGAN efficiently scores subsets of nodes and returns the group of nodes
within the pre-trained classifier that contributed to the maximum score. The
classifier can be a general fake classifier trained over samples from multiple
sources or the discriminator network from different GANs. Our approach shows
consistently higher detection power than existing detection methods across
several state-of-the-art GANs (PGGAN, StarGAN, and CycleGAN) and over different
proportions of generated content.
|
[
{
"created": "Wed, 26 May 2021 11:28:36 GMT",
"version": "v1"
},
{
"created": "Thu, 27 May 2021 08:40:27 GMT",
"version": "v2"
}
] |
2021-05-28
|
[
[
"Cintas",
"Celia",
""
],
[
"Speakman",
"Skyler",
""
],
[
"Tadesse",
"Girmaw Abebe",
""
],
[
"Akinwande",
"Victor",
""
],
[
"McFowland",
"Edward",
"III"
],
[
"Weldemariam",
"Komminist",
""
]
] |
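The activation-space scan can be illustrated with a toy version: node activations of a test sample are converted to empirical p-values against activations from real data, then significance thresholds are scanned for the subset of nodes with unexpectedly many small p-values. This is a simplified, Berk-Jones-style stand-in for the paper's scoring function, not its exact statistic.

```python
import numpy as np

def subset_scan_score(null_acts, test_acts, alpha_max=0.5):
    """Toy sketch of scoring an anomalous subset of node activations.

    null_acts: (n_samples, n_nodes) activations from real data.
    test_acts: (n_nodes,) activations of the sample under test.
    Returns (score, subset): a Berk-Jones-style likelihood-ratio score
    and the highest-scoring subset of node indices (simplified stand-in
    for the paper's non-parametric scan).
    """
    n_null = null_acts.shape[0]
    n_nodes = test_acts.shape[0]
    # one-sided empirical p-value per node
    pvals = (np.sum(null_acts >= test_acts[None, :], axis=0) + 1) / (n_null + 1)

    def kl(p, q):  # Bernoulli KL divergence KL(p || q)
        out = 0.0
        if p > 0:
            out += p * np.log(p / q)
        if p < 1:
            out += (1 - p) * np.log((1 - p) / (1 - q))
        return out

    best_score, best_subset = 0.0, np.array([], dtype=int)
    for alpha in np.unique(pvals[pvals <= alpha_max]):
        subset = np.flatnonzero(pvals <= alpha)
        frac = len(subset) / n_nodes
        if frac > alpha:  # more small p-values than chance predicts
            score = n_nodes * kl(frac, alpha)
            if score > best_score:
                best_score, best_subset = score, subset
    return best_score, best_subset
```

A sample whose generator leaves a coordinated trace in some group of nodes yields a much larger score than a real sample, and the returned subset points at the responsible nodes.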
|
1409.7476
|
Michel Fliess
|
Cédric Join (INRIA Lille - Nord Europe, CRAN, AL.I.E.N.), Cyril
Voyant (SPE), Michel Fliess (AL.I.E.N., LIX), Marc Muselli (SPE), Marie Laure
Nivet (SPE), Christophe Paoli, Frédéric Chaxel (CRAN)
|
Short-term solar irradiance and irradiation forecasts via different time
series techniques: A preliminary study
| null |
3rd International Symposium on Environment-Friendly Energies and
Applications (EFEA 2014), Paris, France (2014)
| null | null |
cs.LG physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This communication is devoted to solar irradiance and irradiation short-term
forecasts, which are useful for electricity production. Several different time
series approaches are employed. Our results and the corresponding numerical
simulations show that techniques that do not need a large amount of historical
data perform better than those that do, especially when the data are quite
noisy.
|
[
{
"created": "Fri, 26 Sep 2014 06:27:30 GMT",
"version": "v1"
}
] |
2014-09-29
|
[
[
"Join",
"Cédric",
"",
"INRIA Lille - Nord Europe, CRAN, AL.I.E.N."
],
[
"Voyant",
"Cyril",
"",
"SPE"
],
[
"Fliess",
"Michel",
"",
"AL.I.E.N., LIX"
],
[
"Muselli",
"Marc",
"",
"SPE"
],
[
"Nivet",
"Marie Laure",
"",
"SPE"
],
[
"Paoli",
"Christophe",
"",
"CRAN"
],
[
"Chaxel",
"Frédéric",
"",
"CRAN"
]
] |
|
1703.00835
|
Simon Hangl
|
Simon Hangl, Sebastian Stabinger, Justus Piater
|
Autonomous Skill-centric Testing using Deep Learning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software testing is an important tool to ensure software quality. This is a
hard task in robotics due to dynamic environments and the expensive development
and time-consuming execution of test cases. Most testing approaches use
model-based and/or simulation-based testing to overcome these problems. We
propose model-free skill-centric testing in which a robot autonomously executes
skills in the real world and compares the outcomes to previous experiences. The skills
are selected by maximising the expected information gain on the distribution of
erroneous software functions. We use deep learning to model the sensor data
observed during previous successful skill executions and to detect
irregularities. Sensor data is connected to function call profiles such that
certain misbehaviour can be related to specific functions. We evaluate our
approach in simulation and in experiments with a KUKA LWR 4+ robot by
purposefully introducing bugs to the software. We demonstrate that these bugs
can be detected with high accuracy and without the need for the implementation
of specific tests or task-specific models.
|
[
{
"created": "Thu, 2 Mar 2017 15:41:48 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Mar 2017 15:07:04 GMT",
"version": "v2"
},
{
"created": "Sun, 13 Aug 2017 11:40:32 GMT",
"version": "v3"
}
] |
2017-08-15
|
[
[
"Hangl",
"Simon",
""
],
[
"Stabinger",
"Sebastian",
""
],
[
"Piater",
"Justus",
""
]
] |
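The skill-selection criterion above (maximising expected information gain over a belief about which function is faulty) can be sketched with a simple Bayesian observation model. The failure model below is an assumption for illustration, not the paper's.

```python
import numpy as np

def expected_information_gain(prior, covered, p_fail_if_buggy=0.9):
    """Sketch of information-gain-based skill selection.

    prior:   belief over which software function is faulty.
    covered: indices of the functions the candidate skill exercises.
    Executing the skill is assumed (illustratively) to fail with
    probability p_fail_if_buggy when the faulty function is covered,
    and to succeed otherwise.  Returns the expected reduction in
    entropy of the belief after observing success or failure.
    """
    prior = np.asarray(prior, float)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    mask = np.zeros(len(prior), dtype=bool)
    mask[list(covered)] = True
    # likelihood of a failure, for each candidate faulty function
    lik_fail = np.where(mask, p_fail_if_buggy, 0.0)
    p_fail = float(np.sum(prior * lik_fail))
    if p_fail in (0.0, 1.0):
        return 0.0  # observation is deterministic: no information
    post_fail = prior * lik_fail / p_fail
    post_ok = prior * (1.0 - lik_fail) / (1.0 - p_fail)
    return entropy(prior) - (p_fail * entropy(post_fail)
                             + (1 - p_fail) * entropy(post_ok))
```

Note that a skill covering every function (or none) is uninformative, while one covering part of the uncertain set splits the belief and yields positive gain, which is why the selection criterion matters.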
|
2305.04572
|
Grzegorz Chrupała
|
Grzegorz Chrupała
|
Putting Natural in Natural Language Processing
|
Findings of the ACL 2023
| null | null | null |
cs.CL cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Human language is firstly spoken and only secondarily written. Text, however,
is a very convenient and efficient representation of language, and modern
civilization has made it ubiquitous. Thus the field of NLP has overwhelmingly
focused on processing written rather than spoken language. Work on spoken
language, on the other hand, has been siloed off within the largely separate
speech processing community which has been inordinately preoccupied with
transcribing speech into text. Recent advances in deep learning have led to a
fortuitous convergence in methods between speech processing and mainstream NLP.
Arguably, the time is ripe for a unification of these two fields, and for
starting to take spoken language seriously as the primary mode of human
communication. Truly natural language processing could lead to better
integration with the rest of language science and could lead to systems which
are more data-efficient and more human-like, and which can communicate beyond
the textual modality.
|
[
{
"created": "Mon, 8 May 2023 09:29:31 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2023 14:15:00 GMT",
"version": "v2"
}
] |
2023-05-24
|
[
[
"Chrupała",
"Grzegorz",
""
]
] |
|
2404.18170
|
Ianna Osborne
|
Ianna Osborne, Jim Pivarski, Jerry Ling
|
Bridging Worlds: Achieving Language Interoperability between Julia and
Python in Scientific Computing
|
8 pages, 1 figure, ACAT2024 workshop
| null | null | null |
cs.PL physics.data-an
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the realm of scientific computing, both Julia and Python have established
themselves as powerful tools. Within the context of High Energy Physics (HEP)
data analysis, Python has been traditionally favored, yet there exists a
compelling case for migrating legacy software to Julia. This article focuses on
language interoperability, specifically exploring how Awkward Array data
structures can bridge the gap between Julia and Python. It offers
insights into key considerations such as memory management, data buffer copies,
and dependency handling. It delves into the performance enhancements achieved
by invoking Julia from Python and vice versa, particularly for intensive
array-oriented calculations involving large-scale, though not excessively
dimensional, arrays of HEP data. The advantages and challenges inherent in
achieving interoperability between Julia and Python in the domain of scientific
computing are discussed.
|
[
{
"created": "Sun, 28 Apr 2024 12:58:15 GMT",
"version": "v1"
}
] |
2024-04-30
|
[
[
"Osborne",
"Ianna",
""
],
[
"Pivarski",
"Jim",
""
],
[
"Ling",
"Jerry",
""
]
] |
|
1801.07402
|
Mohammad Vahid Jamali
|
Mohammad Vahid Jamali, Ali Mirani, Alireza Parsay, Bahman Abolhassani,
Pooya Nabavi, Ata Chizari, Pirazh Khorramshahi, Sajjad Abdollahramezani, and
Jawad A. Salehi
|
Statistical Studies of Fading in Underwater Wireless Optical Channels in
the Presence of Air Bubble, Temperature, and Salinity Random Variations (Long
Version)
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical signal propagation through underwater channels is affected by three
main degrading phenomena, namely absorption, scattering, and fading. In this
paper, we experimentally study the statistical distribution of intensity
fluctuations in underwater wireless optical channels with random temperature
and salinity variations as well as the presence of air bubbles. In particular,
we define different scenarios to produce random fluctuations on the water
refractive index across the propagation path, and then examine the accuracy of
various statistical distributions in terms of their goodness of fit to the
experimental data. We also obtain the channel coherence time to address the
average period of fading temporal variations. The scenarios under consideration
cover a wide range of scintillation index from weak to strong turbulence.
Moreover, the effects of a beam-collimator at the transmitter side and an
aperture averaging lens at the receiver side are experimentally investigated. We show
that the use of a transmitter beam-collimator and/or a receiver aperture
averaging lens suits single-lobe distributions such that the generalized Gamma
and exponential Weibull distributions can excellently match the histograms of
the acquired data. Our experimental results further reveal that the channel
coherence time is on the order of $10^{-3}$ seconds or larger, which implies
slow-fading turbulent channels.
|
[
{
"created": "Tue, 23 Jan 2018 06:00:36 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Feb 2018 00:16:47 GMT",
"version": "v2"
}
] |
2018-02-06
|
[
[
"Jamali",
"Mohammad Vahid",
""
],
[
"Mirani",
"Ali",
""
],
[
"Parsay",
"Alireza",
""
],
[
"Abolhassani",
"Bahman",
""
],
[
"Nabavi",
"Pooya",
""
],
[
"Chizari",
"Ata",
""
],
[
"Khorramshahi",
"Pirazh",
""
],
[
"Abdollahramezani",
"Sajjad",
""
],
[
"Salehi",
"Jawad A.",
""
]
] |
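The scintillation index used above to grade turbulence regimes has a standard definition that is easy to compute from recorded intensity samples; a minimal helper:

```python
import numpy as np

def scintillation_index(intensity):
    """Scintillation index sigma_I^2 = E[I^2] / E[I]^2 - 1, i.e. the
    normalized variance of the received intensity.  Values well below 1
    are conventionally associated with weak turbulence; values near or
    above 1 with strong turbulence."""
    I = np.asarray(intensity, dtype=float)
    return np.mean(I ** 2) / np.mean(I) ** 2 - 1.0
```

For an exponentially distributed intensity the index is exactly 1, a common reference point for saturated fading.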
|
1410.4095
|
Ana Salagean
|
Ana S\u{a}l\u{a}gean, Matei Mandache-S\u{a}l\u{a}gean, Richard Winter,
Raphael C.-W. Phan
|
Higher Order Differentiation over Finite Fields with Applications to
Generalising the Cube Attack
|
submitted to a journal
|
Designs Codes Cryptography 84, 425-449 (2017)
|
10.1007/s10623-016-0277-5
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Higher order differentiation was introduced in a cryptographic context by
Lai. Several attacks can be viewed in the context of higher order
differentiations, amongst them the cube attack and the AIDA attack. All of the
above have been developed for the binary case.
We examine differentiation in larger fields, starting with the field $GF(p)$
of integers modulo a prime $p$. We prove a number of results on differentiating
polynomials over such fields and then apply these techniques to generalising
the cube attack to $GF(p)$. The crucial difference is that now the degree in
each variable can be higher than one, and our proposed attack will
differentiate several times with respect to each variable (unlike the classical
cube attack and its larger field version described by Dinur and Shamir, both of
which differentiate at most once with respect to each variable).
Finally we describe differentiation over finite fields $GF(p^m)$ with $p^m$
elements and prove that it can be reduced to differentiation over $GF(p)$, so a
cube attack over $GF(p^m)$ would be equivalent to cube attacks over $GF(p)$.
|
[
{
"created": "Wed, 15 Oct 2014 15:13:08 GMT",
"version": "v1"
}
] |
2023-06-22
|
[
[
"Sălăgean",
"Ana",
""
],
[
"Mandache-Sălăgean",
"Matei",
""
],
[
"Winter",
"Richard",
""
],
[
"Phan",
"Raphael C. -W.",
""
]
] |
Higher order differentiation was introduced in a cryptographic context by Lai. Several attacks can be viewed in the context of higher order differentiations, amongst them the cube attack and the AIDA attack. All of the above have been developed for the binary case. We examine differentiation in larger fields, starting with the field $GF(p)$ of integers modulo a prime $p$. We prove a number of results on differentiating polynomials over such fields and then apply these techniques to generalising the cube attack to $GF(p)$. The crucial difference is that now the degree in each variable can be higher than one, and our proposed attack will differentiate several times with respect to each variable (unlike the classical cube attack and its larger field version described by Dinur and Shamir, both of which differentiate at most once with respect to each variable). Finally we describe differentiation over finite fields $GF(p^m)$ with $p^m$ elements and prove that it can be reduced to differentiation over $GF(p)$, so a cube attack over $GF(p^m)$ would be equivalent to cube attacks over $GF(p)$.
|
2407.16062
|
Nina Deliu
|
Nina Deliu and Bibhas Chakraborty
|
Artificial Intelligence-based Decision Support Systems for Precision and
Digital Health
|
arXiv admin note: substantial text overlap with arXiv:2203.02605
| null | null | null |
cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Precision health, increasingly supported by digital technologies, is a domain
of research that broadens the paradigm of precision medicine, advancing
everyday healthcare. This vision goes hand in hand with the groundbreaking
advent of artificial intelligence (AI), which is reshaping the way we diagnose,
treat, and monitor both clinical subjects and the general population. AI tools
powered by machine learning have shown considerable improvements in a variety
of healthcare domains. In particular, reinforcement learning (RL) holds great
promise for sequential and dynamic problems such as dynamic treatment regimes
and just-in-time adaptive interventions in digital health. In this work, we
discuss the opportunity offered by AI, more specifically RL, to current trends
in healthcare, providing a methodological survey of RL methods in the context
of precision and digital health. Focusing on the area of adaptive
interventions, we expand the methodological survey with illustrative case
studies that used RL in real practice.
This invited article has undergone anonymous review and is intended as a book
chapter for the volume "Frontiers of Statistics and Data Science" edited by
Subhashis Ghoshal and Anindya Roy for the International Indian Statistical
Association Series on Statistics and Data Science, published by Springer. It
covers the material from a short course titled "Artificial Intelligence in
Precision and Digital Health" taught by the author Bibhas Chakraborty at the
IISA 2022 Conference, December 26-30 2022, at the Indian Institute of Science,
Bengaluru.
|
[
{
"created": "Mon, 22 Jul 2024 21:39:34 GMT",
"version": "v1"
}
] |
2024-07-24
|
[
[
"Deliu",
"Nina",
""
],
[
"Chakraborty",
"Bibhas",
""
]
] |
Precision health, increasingly supported by digital technologies, is a domain of research that broadens the paradigm of precision medicine, advancing everyday healthcare. This vision goes hand in hand with the groundbreaking advent of artificial intelligence (AI), which is reshaping the way we diagnose, treat, and monitor both clinical subjects and the general population. AI tools powered by machine learning have shown considerable improvements in a variety of healthcare domains. In particular, reinforcement learning (RL) holds great promise for sequential and dynamic problems such as dynamic treatment regimes and just-in-time adaptive interventions in digital health. In this work, we discuss the opportunity offered by AI, more specifically RL, to current trends in healthcare, providing a methodological survey of RL methods in the context of precision and digital health. Focusing on the area of adaptive interventions, we expand the methodological survey with illustrative case studies that used RL in real practice. This invited article has undergone anonymous review and is intended as a book chapter for the volume "Frontiers of Statistics and Data Science" edited by Subhashis Ghoshal and Anindya Roy for the International Indian Statistical Association Series on Statistics and Data Science, published by Springer. It covers the material from a short course titled "Artificial Intelligence in Precision and Digital Health" taught by the author Bibhas Chakraborty at the IISA 2022 Conference, December 26-30 2022, at the Indian Institute of Science, Bengaluru.
|
1311.2503
|
Stefan Richthofer
|
Stefan Richthofer, Laurenz Wiskott
|
Predictable Feature Analysis
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Every organism in an environment, whether biological, robotic or virtual,
must be able to predict certain aspects of its environment in order to survive
or perform whatever task is intended. It needs a model that is capable of
estimating the consequences of possible actions, so that planning, control, and
decision-making become feasible. For scientific purposes, such models are
usually created in a problem-specific manner using differential equations and
other techniques from control and system theory. In contrast, we aim
for an unsupervised approach that builds up the desired model in a
self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach
is to extract sub-signals from the input that behave as predictably as
possible. These "predictable features" are highly relevant for modeling,
because predictability is a desired property of the needed
consequence-estimating model by definition. In our approach, we measure
predictability with respect to a certain prediction model. We focus here on the
solution of the arising optimization problem and present a tractable algorithm
based on algebraic methods which we call Predictable Feature Analysis (PFA). We
prove that the algorithm finds the globally optimal signal, if this signal can
be predicted with low error. To deal with cases where the optimal signal has a
significant prediction error, we provide a robust, heuristically motivated
variant of the algorithm and verify it empirically. Additionally, we give
formal criteria a prediction-model must meet to be suitable for measuring
predictability in the PFA setting and also provide a suitable default-model
along with a formal proof that it meets these criteria.
|
[
{
"created": "Mon, 11 Nov 2013 17:05:22 GMT",
"version": "v1"
}
] |
2013-11-12
|
[
[
"Richthofer",
"Stefan",
""
],
[
"Wiskott",
"Laurenz",
""
]
] |
Every organism in an environment, whether biological, robotic or virtual, must be able to predict certain aspects of its environment in order to survive or perform whatever task is intended. It needs a model that is capable of estimating the consequences of possible actions, so that planning, control, and decision-making become feasible. For scientific purposes, such models are usually created in a problem-specific manner using differential equations and other techniques from control and system theory. In contrast, we aim for an unsupervised approach that builds up the desired model in a self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach is to extract sub-signals from the input that behave as predictably as possible. These "predictable features" are highly relevant for modeling, because predictability is a desired property of the needed consequence-estimating model by definition. In our approach, we measure predictability with respect to a certain prediction model. We focus here on the solution of the arising optimization problem and present a tractable algorithm based on algebraic methods which we call Predictable Feature Analysis (PFA). We prove that the algorithm finds the globally optimal signal, if this signal can be predicted with low error. To deal with cases where the optimal signal has a significant prediction error, we provide a robust, heuristically motivated variant of the algorithm and verify it empirically. Additionally, we give formal criteria a prediction-model must meet to be suitable for measuring predictability in the PFA setting and also provide a suitable default-model along with a formal proof that it meets these criteria.
|
1610.08262
|
Matija Pi\v{s}korec
|
Matija Pi\v{s}korec, Nino Antulov-Fantulin, Iva Miholi\'c, Tomislav
\v{S}muc, Mile \v{S}iki\'c
|
Modeling peer and external influence in online social networks
| null | null |
10.1007/978-3-319-72150-7_82
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Opinion polls mediated through a social network can give us, in addition to
the usual demographic data like age, gender and geographic location, a friendship
structure between voters and the temporal dynamics of their activity during the
voting process. Using a Facebook application we collected friendship
relationships, demographics and votes of over ten thousand users on the
referendum on the definition of marriage in Croatia held on 1st of December
2013. We also collected data on online news articles mentioning our
application. The publication of these articles aligns closely with large peaks of
voting activity, indicating that these external events have a crucial influence
in engaging the voters. Also, the existence of strongly connected friendship
communities where the majority of users vote during a short time period, and the
fact that the majority of users in general tend to friend users that voted the
same way, suggest that peer influence also plays a role in engaging the voters.
As we are
not able to track activity of our users at all times, and we do not know their
motivations for expressing their votes through our application, the question is
whether we can infer peer and external influence using the friendship network of
users and the times of their voting. We propose a new method for estimating the
magnitude of peer and external influence in a friendship network and demonstrate
its validity on both simulated and actual data.
|
[
{
"created": "Wed, 26 Oct 2016 10:03:07 GMT",
"version": "v1"
}
] |
2018-11-28
|
[
[
"Piškorec",
"Matija",
""
],
[
"Antulov-Fantulin",
"Nino",
""
],
[
"Miholić",
"Iva",
""
],
[
"Šmuc",
"Tomislav",
""
],
[
"Šikić",
"Mile",
""
]
] |
Opinion polls mediated through a social network can give us, in addition to the usual demographic data like age, gender and geographic location, a friendship structure between voters and the temporal dynamics of their activity during the voting process. Using a Facebook application we collected friendship relationships, demographics and votes of over ten thousand users on the referendum on the definition of marriage in Croatia held on 1st of December 2013. We also collected data on online news articles mentioning our application. The publication of these articles aligns closely with large peaks of voting activity, indicating that these external events have a crucial influence in engaging the voters. Also, the existence of strongly connected friendship communities where the majority of users vote during a short time period, and the fact that the majority of users in general tend to friend users that voted the same way, suggest that peer influence also plays a role in engaging the voters. As we are not able to track activity of our users at all times, and we do not know their motivations for expressing their votes through our application, the question is whether we can infer peer and external influence using the friendship network of users and the times of their voting. We propose a new method for estimating the magnitude of peer and external influence in a friendship network and demonstrate its validity on both simulated and actual data.
|
2405.04883
|
Zehan Wang
|
Zehan Wang, Ziang Zhang, Xize Cheng, Rongjie Huang, Luping Liu,
Zhenhui Ye, Haifeng Huang, Yang Zhao, Tao Jin, Peng Gao, Zhou Zhao
|
FreeBind: Free Lunch in Unified Multimodal Space via Knowledge Fusion
|
Accepted by ICML 2024. The code and checkpoints will be released at
https://github.com/zehanwang01/FreeBind
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Unified multimodal representation spaces are the foundation of multimodal
understanding and generation. However, the billions of model parameters and
catastrophic forgetting problems make it challenging to further enhance
pre-trained unified spaces. In this work, we propose FreeBind, an idea that
treats multimodal representation spaces as basic units, and freely augments
pre-trained unified space by integrating knowledge from extra expert spaces via
"space bonds". Specifically, we introduce two kinds of basic space bonds: 1)
Space Displacement Bond and 2) Space Combination Bond. Based on these basic
bonds, we design Complex Sequential & Parallel Bonds to effectively integrate
multiple spaces simultaneously. Benefiting from the modularization concept, we
further propose a coarse-to-fine customized inference strategy to flexibly
adjust the enhanced unified space for different purposes. Experimentally, we
bind ImageBind with extra image-text and audio-text expert spaces, resulting in
three main variants: ImageBind++, InternVL_IB, and InternVL_IB++. These
resulting spaces outperform ImageBind on 5 audio-image-text downstream tasks
across 9 datasets. Moreover, via customized inference, it even surpasses the
advanced audio-text and image-text expert spaces.
|
[
{
"created": "Wed, 8 May 2024 08:32:34 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 07:18:00 GMT",
"version": "v2"
}
] |
2024-05-13
|
[
[
"Wang",
"Zehan",
""
],
[
"Zhang",
"Ziang",
""
],
[
"Cheng",
"Xize",
""
],
[
"Huang",
"Rongjie",
""
],
[
"Liu",
"Luping",
""
],
[
"Ye",
"Zhenhui",
""
],
[
"Huang",
"Haifeng",
""
],
[
"Zhao",
"Yang",
""
],
[
"Jin",
"Tao",
""
],
[
"Gao",
"Peng",
""
],
[
"Zhao",
"Zhou",
""
]
] |
Unified multimodal representation spaces are the foundation of multimodal understanding and generation. However, the billions of model parameters and catastrophic forgetting problems make it challenging to further enhance pre-trained unified spaces. In this work, we propose FreeBind, an idea that treats multimodal representation spaces as basic units, and freely augments pre-trained unified space by integrating knowledge from extra expert spaces via "space bonds". Specifically, we introduce two kinds of basic space bonds: 1) Space Displacement Bond and 2) Space Combination Bond. Based on these basic bonds, we design Complex Sequential & Parallel Bonds to effectively integrate multiple spaces simultaneously. Benefiting from the modularization concept, we further propose a coarse-to-fine customized inference strategy to flexibly adjust the enhanced unified space for different purposes. Experimentally, we bind ImageBind with extra image-text and audio-text expert spaces, resulting in three main variants: ImageBind++, InternVL_IB, and InternVL_IB++. These resulting spaces outperform ImageBind on 5 audio-image-text downstream tasks across 9 datasets. Moreover, via customized inference, it even surpasses the advanced audio-text and image-text expert spaces.
|
cs/0512039
|
Jianqin Zhou
|
Jianqin Zhou, Xirong Xu
|
An algorithm for the k-error linear complexity of a sequence with period
2p^n over GF(q)
|
6 pages
| null | null | null |
cs.CR
| null |
The union cost is used, so that an efficient algorithm for computing the
k-error linear complexity of a sequence with period 2p^n over GF(q) is
presented, where p and q are odd primes, and q is a primitive root modulo
p^2.
|
[
{
"created": "Sat, 10 Dec 2005 00:21:59 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Zhou",
"Jianqin",
""
],
[
"Xu",
"Xirong",
""
]
] |
The union cost is used, so that an efficient algorithm for computing the k-error linear complexity of a sequence with period 2p^n over GF(q) is presented, where p and q are odd primes, and q is a primitive root modulo p^2.
|
2404.19513
|
Sho Ueda
|
Sho Ueda, Xujun Ye
|
A Smartphone-Based Method for Assessing Tomato Nutrient Status through
Trichome Density Measurement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately assessing tomato plant nutrient status is crucial for maintaining
high yields. Consequently, accurately identifying fertilizer-induced stress
through the morphological traits of tomato plants has become a critical
agricultural challenge. Research and development efforts have focused on
developing noninvasive diagnostic tools for nutrition that leverage a
combination of morphological traits and advanced sensor technologies. Given
these advancements, detecting fertilizer stress by observing morphological
traits near the growth points of tomatoes is still a significant challenge. To
address this challenge, we developed a simple and cost-effective
smartphone-based method for measuring trichome density. This method involves
transferring trichomes from the surface of a leaf onto cellophane tape and
capturing images using a smartphone. The images are processed using computer
vision techniques to calculate the trichome density. To assess the efficacy of
this method, we performed experiments on hydroponically grown tomato plants
subjected to varying fertilizer concentrations. Our results indicate that our
novel method for measuring trichome density accurately reflects fertilizer
stress in tomato plants. The predictive performance of our model, as evaluated
by the mean area under the precision-recall curve, was 0.824, despite
variations in the measurement data caused by differences in optical conditions.
This study introduces an innovative approach for designing diagnostic devices
for detecting fertilizer stress in plants by considering the surface structures
of plants. Our proposed method represents a straightforward, efficient, and
economical approach for evaluating the nutrient status of tomato plants and has
the potential to overcome the limitations of conventional noncontact optical
methods.
|
[
{
"created": "Tue, 30 Apr 2024 12:45:41 GMT",
"version": "v1"
}
] |
2024-05-01
|
[
[
"Ueda",
"Sho",
""
],
[
"Ye",
"Xujun",
""
]
] |
Accurately assessing tomato plant nutrient status is crucial for maintaining high yields. Consequently, accurately identifying fertilizer-induced stress through the morphological traits of tomato plants has become a critical agricultural challenge. Research and development efforts have focused on developing noninvasive diagnostic tools for nutrition that leverage a combination of morphological traits and advanced sensor technologies. Given these advancements, detecting fertilizer stress by observing morphological traits near the growth points of tomatoes is still a significant challenge. To address this challenge, we developed a simple and cost-effective smartphone-based method for measuring trichome density. This method involves transferring trichomes from the surface of a leaf onto cellophane tape and capturing images using a smartphone. The images are processed using computer vision techniques to calculate the trichome density. To assess the efficacy of this method, we performed experiments on hydroponically grown tomato plants subjected to varying fertilizer concentrations. Our results indicate that our novel method for measuring trichome density accurately reflects fertilizer stress in tomato plants. The predictive performance of our model, as evaluated by the mean area under the precision-recall curve, was 0.824, despite variations in the measurement data caused by differences in optical conditions. This study introduces an innovative approach for designing diagnostic devices for detecting fertilizer stress in plants by considering the surface structures of plants. Our proposed method represents a straightforward, efficient, and economical approach for evaluating the nutrient status of tomato plants and has the potential to overcome the limitations of conventional noncontact optical methods.
|
1706.04353
|
Alexey Abramov
|
Alexey Abramov, Christopher Bayer, Claudio Heller, Claudia Loy
|
Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An extensive, precise and robust recognition and modeling of the environment
is a key factor for next generations of Advanced Driver Assistance Systems and
development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM and the result constitutes the basis for a multilane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. In comparison to serial
lane detection, an increase in the detection range of the ego lane and a
continuous perception of neighboring lanes is achieved. The method can
potentially be utilized for the longitudinal and lateral control of
self-driving vehicles.
|
[
{
"created": "Wed, 14 Jun 2017 08:17:03 GMT",
"version": "v1"
}
] |
2017-06-15
|
[
[
"Abramov",
"Alexey",
""
],
[
"Bayer",
"Christopher",
""
],
[
"Heller",
"Claudio",
""
],
[
"Loy",
"Claudia",
""
]
] |
An extensive, precise and robust recognition and modeling of the environment is a key factor for next generations of Advanced Driver Assistance Systems and development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM and the result constitutes the basis for a multilane clothoid model. To allow incorporation of additional information sources, input data is processed in a generic format. Evaluation of the method is performed by comparing real data, collected with an experimental vehicle on highways, to a ground truth map. The results show that ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes is achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
|
2202.09685
|
Jovan Blanu\v{s}a
|
Jovan Blanu\v{s}a, Paolo Ienne, and Kubilay Atasu
|
Scalable Fine-Grained Parallel Cycle Enumeration Algorithms
|
To be published in Proceedings of the 34th ACM Symposium on
Parallelism in Algorithms and Architectures (SPAA '22). The source codes of
all the algorithms evaluated in our experiments are available here
https://github.com/IBM/parallel-cycle-enumeration
| null |
10.1145/3490148.3538585
| null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enumerating simple cycles has important applications in computational
biology, network science, and financial crime analysis. In this work, we focus
on parallelising the state-of-the-art simple cycle enumeration algorithms by
Johnson and Read-Tarjan along with their applications to temporal graphs. To
our knowledge, we are the first ones to parallelise these two algorithms in a
fine-grained manner. We are also the first to demonstrate experimentally a
linear performance scaling. Such a scaling is made possible by our
decomposition of long sequential searches into fine-grained tasks, which are
then dynamically scheduled across CPU cores, enabling an optimal load
balancing. Furthermore, we show that coarse-grained parallel versions of the
Johnson and the Read-Tarjan algorithms that exploit edge- or vertex-level
parallelism are not scalable. On a cluster of four multi-core CPUs with $256$
physical cores, our fine-grained parallel algorithms are, on average, an order
of magnitude faster than their coarse-grained parallel counterparts. The
performance gap between the fine-grained and the coarse-grained parallel
algorithms widens as we use more CPU cores. When using all 256 CPU cores, our
parallel algorithms enumerate temporal cycles, on average, $260\times$ faster
than the serial algorithm of Kumar and Calders.
|
[
{
"created": "Sat, 19 Feb 2022 21:55:17 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2022 08:02:48 GMT",
"version": "v2"
}
] |
2022-06-01
|
[
[
"Blanuša",
"Jovan",
""
],
[
"Ienne",
"Paolo",
""
],
[
"Atasu",
"Kubilay",
""
]
] |
Enumerating simple cycles has important applications in computational biology, network science, and financial crime analysis. In this work, we focus on parallelising the state-of-the-art simple cycle enumeration algorithms by Johnson and Read-Tarjan along with their applications to temporal graphs. To our knowledge, we are the first ones to parallelise these two algorithms in a fine-grained manner. We are also the first to demonstrate experimentally a linear performance scaling. Such a scaling is made possible by our decomposition of long sequential searches into fine-grained tasks, which are then dynamically scheduled across CPU cores, enabling an optimal load balancing. Furthermore, we show that coarse-grained parallel versions of the Johnson and the Read-Tarjan algorithms that exploit edge- or vertex-level parallelism are not scalable. On a cluster of four multi-core CPUs with $256$ physical cores, our fine-grained parallel algorithms are, on average, an order of magnitude faster than their coarse-grained parallel counterparts. The performance gap between the fine-grained and the coarse-grained parallel algorithms widens as we use more CPU cores. When using all 256 CPU cores, our parallel algorithms enumerate temporal cycles, on average, $260\times$ faster than the serial algorithm of Kumar and Calders.
|
1302.6562
|
Daniel Cullina
|
Daniel Cullina and Negar Kiyavash
|
An Improvement to Levenshtein's Upper Bound on the Cardinality of
Deletion Correcting Codes
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider deletion correcting codes over a q-ary alphabet. It is well known
that any code capable of correcting s deletions can also correct any
combination of s total insertions and deletions. To obtain asymptotic upper
bounds on code size, we apply a packing argument to channels that perform
different mixtures of insertions and deletions. Even though the set of codes is
identical for all of these channels, the bounds that we obtain vary. Prior to
this work, only the bounds corresponding to the all insertion case and the all
deletion case were known. We recover these as special cases. The bound from the
all deletion case, due to Levenshtein, has been the best known for more than
forty-five years. Our generalized bound is better than Levenshtein's bound
whenever the number of deletions to be corrected is larger than the alphabet
size.
|
[
{
"created": "Tue, 26 Feb 2013 20:12:59 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2013 17:50:50 GMT",
"version": "v2"
}
] |
2013-07-30
|
[
[
"Cullina",
"Daniel",
""
],
[
"Kiyavash",
"Negar",
""
]
] |
We consider deletion correcting codes over a q-ary alphabet. It is well known that any code capable of correcting s deletions can also correct any combination of s total insertions and deletions. To obtain asymptotic upper bounds on code size, we apply a packing argument to channels that perform different mixtures of insertions and deletions. Even though the set of codes is identical for all of these channels, the bounds that we obtain vary. Prior to this work, only the bounds corresponding to the all insertion case and the all deletion case were known. We recover these as special cases. The bound from the all deletion case, due to Levenshtein, has been the best known for more than forty-five years. Our generalized bound is better than Levenshtein's bound whenever the number of deletions to be corrected is larger than the alphabet size.
|
1905.13183
|
Minjie Xu
|
Minjie Xu and Gary Kazantsev
|
Understanding Goal-Oriented Active Learning via Influence Functions
|
14 pages, to be presented at the NeurIPS 2019 workshop on "ML with
Guarantees"
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning (AL) concerns itself with learning a model from as few
labelled data as possible through actively and iteratively querying an oracle
with selected unlabelled samples. In this paper, we focus on analyzing a
popular type of AL in which the utility of a sample is measured by a specified
goal achieved by the retrained model after accounting for the sample's marginal
influence. Such AL strategies attract a lot of attention thanks to their
intuitive motivations, yet they also suffer from impractically high
computational costs due to their need for many iterations of model retraining.
With the help of influence functions, we present an effective approximation
that bypasses model retraining altogether, and propose a general efficient
implementation that makes such AL strategies applicable in practice, both in
the serial and the more challenging batch-mode setting. Additionally, we
present both theoretical and empirical findings which call into question a few
common practices and beliefs about such AL strategies.
|
[
{
"created": "Thu, 30 May 2019 17:09:39 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Oct 2019 18:14:52 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Nov 2019 22:10:07 GMT",
"version": "v3"
}
] |
2019-12-03
|
[
[
"Xu",
"Minjie",
""
],
[
"Kazantsev",
"Gary",
""
]
] |
Active learning (AL) concerns itself with learning a model from as few labelled data as possible through actively and iteratively querying an oracle with selected unlabelled samples. In this paper, we focus on analyzing a popular type of AL in which the utility of a sample is measured by a specified goal achieved by the retrained model after accounting for the sample's marginal influence. Such AL strategies attract a lot of attention thanks to their intuitive motivations, yet they also suffer from impractically high computational costs due to their need for many iterations of model retraining. With the help of influence functions, we present an effective approximation that bypasses model retraining altogether, and propose a general efficient implementation that makes such AL strategies applicable in practice, both in the serial and the more challenging batch-mode setting. Additionally, we present both theoretical and empirical findings which call into question a few common practices and beliefs about such AL strategies.
|
2005.10438
|
Haohan Guo
|
Haohan Guo, Shaofei Zhang, Frank K. Soong, Lei He, Lei Xie
|
Conversational End-to-End TTS for Voice Agent
|
Accepted by SLT 2021; 7 pages
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end neural TTS has achieved superior performance on reading style
speech synthesis. However, it's still a challenge to build a high-quality
conversational TTS due to the limitations of the corpus and modeling
capability. This study aims at building a conversational TTS for a voice agent
under sequence to sequence modeling framework. We firstly construct a
spontaneous conversational speech corpus well designed for the voice agent with
a new recording scheme ensuring both recording quality and conversational
speaking style. Secondly, we propose a conversation context-aware end-to-end
TTS approach which has an auxiliary encoder and a conversational context
encoder to reinforce the information about the current utterance and its
context in a conversation as well. Experimental results show that the proposed
methods produce more natural prosody in accordance with the conversational
context, with significant preference gains at both utterance-level and
conversation-level. Moreover, we find that the model has the ability to express
some spontaneous behaviors, like fillers and repeated words, which makes the
conversational speaking style more realistic.
|
[
{
"created": "Thu, 21 May 2020 02:52:25 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Nov 2020 10:02:10 GMT",
"version": "v2"
}
] |
2020-11-17
|
[
[
"Guo",
"Haohan",
""
],
[
"Zhang",
"Shaofei",
""
],
[
"Soong",
"Frank K.",
""
],
[
"He",
"Lei",
""
],
[
"Xie",
"Lei",
""
]
] |
End-to-end neural TTS has achieved superior performance on reading style speech synthesis. However, it's still a challenge to build a high-quality conversational TTS due to the limitations of the corpus and modeling capability. This study aims at building a conversational TTS for a voice agent under a sequence-to-sequence modeling framework. We first construct a spontaneous conversational speech corpus well designed for the voice agent with a new recording scheme ensuring both recording quality and conversational speaking style. Second, we propose a conversation context-aware end-to-end TTS approach which has an auxiliary encoder and a conversational context encoder to reinforce the information about both the current utterance and its context in the conversation. Experimental results show that the proposed methods produce more natural prosody in accordance with the conversational context, with significant preference gains at both utterance-level and conversation-level. Moreover, we find that the model has the ability to express some spontaneous behaviors, like fillers and repeated words, which makes the conversational speaking style more realistic.
|
2204.10905
|
Jing Gao
|
Jing Gao, Tilo Burghardt, and Neill W. Campbell
|
Label a Herd in Minutes: Individual Holstein-Friesian Cattle
Identification
|
ICIAP Workshop on Learning in Precision Livestock Farming (accepted).
10 pages, 7 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We describe a practically evaluated approach for training visual cattle ID
systems for a whole farm requiring only ten minutes of labelling effort. In
particular, for the task of automatic identification of individual
Holstein-Friesians in real-world farm CCTV, we show that self-supervision,
metric learning, cluster analysis, and active learning can complement each
other to significantly reduce the annotation requirements usually needed to
train cattle identification frameworks. Evaluating the approach on the test
portion of the publicly available Cows2021 dataset, for training we use 23,350
frames across 435 single individual tracklets generated by automated oriented
cattle detection and tracking in operational farm footage. Self-supervised
metric learning is first employed to initialise a candidate identity space
where each tracklet is considered a distinct entity. Grouping entities into
equivalence classes representing cattle identities is then performed by
automated merging via cluster analysis and active learning. Critically, we
identify the inflection point at which automated choices cannot replicate
improvements based on human intervention to reduce annotation to a minimum.
Experimental results show that cluster analysis and a few minutes of labelling
after automated self-supervision can improve the test identification accuracy
of 153 identities to 92.44% (ARI=0.93) from the 74.9% (ARI=0.754) obtained by
self-supervision only. These promising results indicate that a tailored
combination of human and machine reasoning in visual cattle ID pipelines can be
highly effective whilst requiring only minimal labelling effort. We provide all
key source code and network weights with this paper for easy result
reproduction.
|
[
{
"created": "Fri, 22 Apr 2022 19:41:47 GMT",
"version": "v1"
}
] |
2022-04-26
|
[
[
"Gao",
"Jing",
""
],
[
"Burghardt",
"Tilo",
""
],
[
"Campbell",
"Neill W.",
""
]
] |
We describe a practically evaluated approach for training visual cattle ID systems for a whole farm requiring only ten minutes of labelling effort. In particular, for the task of automatic identification of individual Holstein-Friesians in real-world farm CCTV, we show that self-supervision, metric learning, cluster analysis, and active learning can complement each other to significantly reduce the annotation requirements usually needed to train cattle identification frameworks. Evaluating the approach on the test portion of the publicly available Cows2021 dataset, for training we use 23,350 frames across 435 single individual tracklets generated by automated oriented cattle detection and tracking in operational farm footage. Self-supervised metric learning is first employed to initialise a candidate identity space where each tracklet is considered a distinct entity. Grouping entities into equivalence classes representing cattle identities is then performed by automated merging via cluster analysis and active learning. Critically, we identify the inflection point at which automated choices cannot replicate improvements based on human intervention to reduce annotation to a minimum. Experimental results show that cluster analysis and a few minutes of labelling after automated self-supervision can improve the test identification accuracy of 153 identities to 92.44% (ARI=0.93) from the 74.9% (ARI=0.754) obtained by self-supervision only. These promising results indicate that a tailored combination of human and machine reasoning in visual cattle ID pipelines can be highly effective whilst requiring only minimal labelling effort. We provide all key source code and network weights with this paper for easy result reproduction.
|
1401.6336
|
Jean-Marc Kelif
|
Jean-Marc Kelif, Stephane Senecal, Constant Bridon, Marceau Coupechoux
|
A Fluid Approach for Poisson Wireless Networks
|
6 pages, 7 figures
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among the different network models usually considered, the hexagonal
network model is the most popular. However, it requires extensive numerical
computations. The Poisson network model, for which the base station (BS)
locations form a spatial Poisson process, allows one to consider a non-constant
distance between base stations. Therefore, it may characterize operational
networks more realistically. The Fluid network model, for which the
interfering BSs are replaced by a continuum of infinitesimal interferers, allows
one to establish a closed-form formula for the SINR (Signal to Interference plus
Noise Ratio). This model was validated by comparison with a hexagonal network.
The two models yield very close results. As a consequence, the Fluid
network model can be used to analyze hexagonal networks. In this paper, we show
that the Fluid network model can also be used to analyze Poisson networks.
Therefore, the analysis of performance and quality of service becomes very
easy, whatever the type of network model, by using the analytical expression of
the SINR established by considering the Fluid network model.
|
[
{
"created": "Fri, 24 Jan 2014 13:25:03 GMT",
"version": "v1"
}
] |
2014-01-27
|
[
[
"Kelif",
"Jean-Marc",
""
],
[
"Senecal",
"Stephane",
""
],
[
"Bridon",
"Constant",
""
],
[
"Coupechoux",
"Marceau",
""
]
] |
Among the different network models usually considered, the hexagonal network model is the most popular. However, it requires extensive numerical computations. The Poisson network model, for which the base station (BS) locations form a spatial Poisson process, allows one to consider a non-constant distance between base stations. Therefore, it may characterize operational networks more realistically. The Fluid network model, for which the interfering BSs are replaced by a continuum of infinitesimal interferers, allows one to establish a closed-form formula for the SINR (Signal to Interference plus Noise Ratio). This model was validated by comparison with a hexagonal network. The two models yield very close results. As a consequence, the Fluid network model can be used to analyze hexagonal networks. In this paper, we show that the Fluid network model can also be used to analyze Poisson networks. Therefore, the analysis of performance and quality of service becomes very easy, whatever the type of network model, by using the analytical expression of the SINR established by considering the Fluid network model.
|
2403.14729
|
Xidong Wu
|
Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu
Zhang, Xiaoqian Wang, Heng Huang
|
Auto-Train-Once: Controller Network Guided Automatic Network Pruning
from Scratch
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current techniques for deep neural network (DNN) pruning often involve
intricate multi-step processes that require domain-specific expertise, making
their widespread adoption challenging. To address this limitation,
Only-Train-Once (OTO) and OTOv2 were proposed to eliminate the need for
additional fine-tuning steps by directly training and compressing a general DNN
from scratch. Nevertheless, the static design of optimizers (in OTO) can lead
to convergence issues of local optima. In this paper, we propose
Auto-Train-Once (ATO), an innovative network pruning algorithm designed to
automatically reduce the computational and storage costs of DNNs. During the
model training phase, our approach not only trains the target model but also
leverages a controller network as an architecture generator to guide the
learning of target model weights. Furthermore, we develop a novel stochastic
gradient algorithm that enhances the coordination between model training and
controller network training, thereby improving pruning performance. We provide
a comprehensive convergence analysis as well as extensive experiments, and the
results show that our approach achieves state-of-the-art performance across
various model architectures (including ResNet18, ResNet34, ResNet50, ResNet56,
and MobileNetv2) on standard benchmark datasets (CIFAR-10, CIFAR-100, and
ImageNet).
|
[
{
"created": "Thu, 21 Mar 2024 02:33:37 GMT",
"version": "v1"
}
] |
2024-03-25
|
[
[
"Wu",
"Xidong",
""
],
[
"Gao",
"Shangqian",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Li",
"Zhenzhen",
""
],
[
"Bao",
"Runxue",
""
],
[
"Zhang",
"Yanfu",
""
],
[
"Wang",
"Xiaoqian",
""
],
[
"Huang",
"Heng",
""
]
] |
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging. To address this limitation, Only-Train-Once (OTO) and OTOv2 were proposed to eliminate the need for additional fine-tuning steps by directly training and compressing a general DNN from scratch. Nevertheless, the static design of optimizers (in OTO) can lead to convergence issues of local optima. In this paper, we propose Auto-Train-Once (ATO), an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs. During the model training phase, our approach not only trains the target model but also leverages a controller network as an architecture generator to guide the learning of target model weights. Furthermore, we develop a novel stochastic gradient algorithm that enhances the coordination between model training and controller network training, thereby improving pruning performance. We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures (including ResNet18, ResNet34, ResNet50, ResNet56, and MobileNetv2) on standard benchmark datasets (CIFAR-10, CIFAR-100, and ImageNet).
|
2312.13510
|
Aaron Lye
|
Hans-J\"org Kreowski, Aaron Lye, Aljoscha Windhorst
|
Moving a Derivation Along a Derivation Preserves the Spine in Adhesive
Categories
|
Extended version of the paper published in the proceedings of the
16th International Conference on Graph Transformation; 30 pages
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we investigate the relationship between two elementary
operations on derivations in adhesive high-level replacement systems that are
well-known in the context of graph transformation: moving a derivation along a
derivation based on parallel and sequential independence on the one hand, and
restriction of a derivation with respect to a monomorphism into the start
object on the other hand. Intuitively, a restriction clips off parts of the
start object that are never matched by a rule application throughout the
derivation. As the main result, it is shown that moving a
derivation preserves its spine, i.e., the minimal restriction.
|
[
{
"created": "Thu, 21 Dec 2023 01:12:25 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jul 2024 21:29:40 GMT",
"version": "v2"
}
] |
2024-08-01
|
[
[
"Kreowski",
"Hans-Jörg",
""
],
[
"Lye",
"Aaron",
""
],
[
"Windhorst",
"Aljoscha",
""
]
] |
In this paper, we investigate the relationship between two elementary operations on derivations in adhesive high-level replacement systems that are well-known in the context of graph transformation: moving a derivation along a derivation based on parallel and sequential independence on the one hand, and restriction of a derivation with respect to a monomorphism into the start object on the other hand. Intuitively, a restriction clips off parts of the start object that are never matched by a rule application throughout the derivation. As the main result, it is shown that moving a derivation preserves its spine, i.e., the minimal restriction.
|
2407.14571
|
K. Selcuk Candan
|
Fahim Tasneema Azad, Javier Redondo Anton, Shubhodeep Mitra, Fateh
Singh, Hans Behrens, Mao-Lin Li, Bilgehan Arslan, K. Sel\c{c}uk Candan, Maria
Luisa Sapino
|
DataStorm-EM: Exploration of Alternative Timelines within
Continuous-Coupled Simulation Ensembles
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Many socio-economically critical domains (such as sustainability, public
health, and disasters) are characterized by highly complex and dynamic systems,
requiring data- and model-driven simulations to support decision-making. Due to
a large number of unknowns, decision-makers usually need to generate ensembles
of stochastic scenarios, requiring hundreds or thousands of individual
simulation instances, each with different parameter settings corresponding to
distinct scenarios. As the number of model parameters increases, the number of
potential timelines one can simulate increases exponentially. Consequently,
simulation ensembles are inherently sparse, even when they are extremely large.
This necessitates a platform for (a) deciding which simulation instances to
execute and (b) given a large simulation ensemble, enabling decision-makers to
explore the resulting alternative timelines, by extracting and visualizing
consistent, yet diverse timelines from continuous-coupled simulation ensembles.
In this article, we present the DataStorm-EM platform for data- and model-driven
simulation ensemble management, optimization, analysis, and exploration,
describe the underlying challenges, and present our solution.
|
[
{
"created": "Fri, 19 Jul 2024 09:12:59 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Azad",
"Fahim Tasneema",
""
],
[
"Anton",
"Javier Redondo",
""
],
[
"Mitra",
"Shubhodeep",
""
],
[
"Singh",
"Fateh",
""
],
[
"Behrens",
"Hans",
""
],
[
"Li",
"Mao-Lin",
""
],
[
"Arslan",
"Bilgehan",
""
],
[
"Candan",
"K. Selçuk",
""
],
[
"Sapino",
"Maria Luisa",
""
]
] |
Many socio-economically critical domains (such as sustainability, public health, and disasters) are characterized by highly complex and dynamic systems, requiring data- and model-driven simulations to support decision-making. Due to a large number of unknowns, decision-makers usually need to generate ensembles of stochastic scenarios, requiring hundreds or thousands of individual simulation instances, each with different parameter settings corresponding to distinct scenarios. As the number of model parameters increases, the number of potential timelines one can simulate increases exponentially. Consequently, simulation ensembles are inherently sparse, even when they are extremely large. This necessitates a platform for (a) deciding which simulation instances to execute and (b) given a large simulation ensemble, enabling decision-makers to explore the resulting alternative timelines, by extracting and visualizing consistent, yet diverse timelines from continuous-coupled simulation ensembles. In this article, we present the DataStorm-EM platform for data- and model-driven simulation ensemble management, optimization, analysis, and exploration, describe the underlying challenges, and present our solution.
|
2301.13288
|
Loren Rieffer-Champlin
|
Loren Rieffer-Champlin
|
Team Plan Recognition: A Review of the State of the Art
|
10 pages, 1 figure, 1 table. Abstract accepted, paper submitted to
14th International Conference on Applied Human Factors and Ergonomics (AHFE
2023)
| null |
10.54941/ahfe1003557
| null |
cs.AI cs.MA
|
http://creativecommons.org/licenses/by-sa/4.0/
|
There is an increasing need to develop artificial intelligence systems that
assist groups of humans working on coordinated tasks. These systems must
recognize and understand the plans and relationships between actions for a team
of humans working toward a common objective. This article reviews the
literature on team plan recognition and surveys the most recent logic-based
approaches for implementing it. First, we provide some background knowledge,
including a general definition of plan recognition in a team setting and a
discussion of implementation challenges. Next, we explain our reasoning for
focusing on logic-based methods. Finally, we survey recent approaches from two
primary classes of logic-based methods (plan library-based and domain
theory-based). We aim to bring more attention to this sparse but vital topic
and inspire new directions for implementing team plan recognition.
|
[
{
"created": "Mon, 30 Jan 2023 21:01:14 GMT",
"version": "v1"
}
] |
2023-08-01
|
[
[
"Rieffer-Champlin",
"Loren",
""
]
] |
There is an increasing need to develop artificial intelligence systems that assist groups of humans working on coordinated tasks. These systems must recognize and understand the plans and relationships between actions for a team of humans working toward a common objective. This article reviews the literature on team plan recognition and surveys the most recent logic-based approaches for implementing it. First, we provide some background knowledge, including a general definition of plan recognition in a team setting and a discussion of implementation challenges. Next, we explain our reasoning for focusing on logic-based methods. Finally, we survey recent approaches from two primary classes of logic-based methods (plan library-based and domain theory-based). We aim to bring more attention to this sparse but vital topic and inspire new directions for implementing team plan recognition.
|
2108.10378
|
Zhang Yuxiang
|
Yuxiang Zhang, Zhe Li, Liang An, Mengcheng Li, Tao Yu, Yebin Liu
|
Lightweight Multi-person Total Motion Capture Using Sparse Multi-view
Cameras
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-person total motion capture is extremely challenging when it comes to
handling severe occlusions, different reconstruction granularities from body to
face and hands, drastically changing observation scales and fast body
movements. To overcome these challenges, we contribute a lightweight
total motion capture system for multi-person interactive scenarios using only
sparse multi-view cameras. By contributing a novel hand and face bootstrapping
algorithm, our method is capable of efficient localization and accurate
association of the hands and faces even under severe occlusion. We
leverage both pose regression and keypoints detection methods and further
propose a unified two-stage parametric fitting method for achieving
pixel-aligned accuracy. Moreover, for extremely self-occluded poses and close
interactions, a novel feedback mechanism is proposed to propagate the
pixel-aligned reconstructions into the next frame for more accurate
association. Overall, we propose the first lightweight total capture system
and achieve fast, robust and accurate multi-person total motion capture
performance. The results and experiments show that our method achieves more
accurate results than existing methods under sparse-view setups.
|
[
{
"created": "Mon, 23 Aug 2021 19:23:35 GMT",
"version": "v1"
}
] |
2021-08-25
|
[
[
"Zhang",
"Yuxiang",
""
],
[
"Li",
"Zhe",
""
],
[
"An",
"Liang",
""
],
[
"Li",
"Mengcheng",
""
],
[
"Yu",
"Tao",
""
],
[
"Liu",
"Yebin",
""
]
] |
Multi-person total motion capture is extremely challenging when it comes to handling severe occlusions, different reconstruction granularities from body to face and hands, drastically changing observation scales and fast body movements. To overcome these challenges, we contribute a lightweight total motion capture system for multi-person interactive scenarios using only sparse multi-view cameras. By contributing a novel hand and face bootstrapping algorithm, our method is capable of efficient localization and accurate association of the hands and faces even under severe occlusion. We leverage both pose regression and keypoints detection methods and further propose a unified two-stage parametric fitting method for achieving pixel-aligned accuracy. Moreover, for extremely self-occluded poses and close interactions, a novel feedback mechanism is proposed to propagate the pixel-aligned reconstructions into the next frame for more accurate association. Overall, we propose the first lightweight total capture system and achieve fast, robust and accurate multi-person total motion capture performance. The results and experiments show that our method achieves more accurate results than existing methods under sparse-view setups.
|
2312.03705
|
Diego Salda\~na Ulloa
|
Diego Salda\~na Ulloa
|
A Process for Topic Modelling Via Word Embeddings
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This work combines algorithms based on word embeddings, dimensionality
reduction, and clustering. The objective is to obtain topics from a set of
unclassified texts. The algorithm to obtain the word embeddings is the BERT
model, a neural network architecture widely used in NLP tasks. Due to the high
dimensionality, a dimensionality reduction technique called UMAP is used. This
method manages to reduce the dimensions while preserving part of the local and
global information of the original data. K-Means is used as the clustering
algorithm to obtain the topics. Then, the topics are evaluated using the TF-IDF
statistics, Topic Diversity, and Topic Coherence to get the meaning of the
words in the clusters. The results show good values, indicating that this
topic modeling process is a viable option for classifying or clustering
texts without labels.
|
[
{
"created": "Fri, 6 Oct 2023 15:10:35 GMT",
"version": "v1"
}
] |
2023-12-08
|
[
[
"Ulloa",
"Diego Saldaña",
""
]
] |
This work combines algorithms based on word embeddings, dimensionality reduction, and clustering. The objective is to obtain topics from a set of unclassified texts. The algorithm to obtain the word embeddings is the BERT model, a neural network architecture widely used in NLP tasks. Due to the high dimensionality, a dimensionality reduction technique called UMAP is used. This method manages to reduce the dimensions while preserving part of the local and global information of the original data. K-Means is used as the clustering algorithm to obtain the topics. Then, the topics are evaluated using the TF-IDF statistics, Topic Diversity, and Topic Coherence to get the meaning of the words in the clusters. The results show good values, indicating that this topic modeling process is a viable option for classifying or clustering texts without labels.
|
2311.16261
|
Sotirios Karapiperis
|
Sotiris Karapiperis, Markos Diomataris, Vassilis Pitsikalis
|
RelVAE: Generative Pretraining for few-shot Visual Relationship
Detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Visual relations are complex, multimodal concepts that play an important role
in the way humans perceive the world. As a result of their complexity,
high-quality, diverse and large scale datasets for visual relations are still
absent. In an attempt to overcome this data barrier, we choose to focus on the
problem of few-shot Visual Relationship Detection (VRD), a setting that has
been so far neglected by the community. In this work we present the first
pretraining method for few-shot predicate classification that does not require
any annotated relations. We achieve this by introducing a generative model that
is able to capture the variation of semantic, visual and spatial information of
relations inside a latent space and later exploiting its representations in
order to achieve efficient few-shot classification. We construct few-shot
training splits and show quantitative experiments on VG200 and VRD datasets
where our model outperforms the baselines. Lastly we attempt to interpret the
decisions of the model by conducting various qualitative experiments.
|
[
{
"created": "Mon, 27 Nov 2023 19:08:08 GMT",
"version": "v1"
}
] |
2023-11-29
|
[
[
"Karapiperis",
"Sotiris",
""
],
[
"Diomataris",
"Markos",
""
],
[
"Pitsikalis",
"Vassilis",
""
]
] |
Visual relations are complex, multimodal concepts that play an important role in the way humans perceive the world. As a result of their complexity, high-quality, diverse and large scale datasets for visual relations are still absent. In an attempt to overcome this data barrier, we choose to focus on the problem of few-shot Visual Relationship Detection (VRD), a setting that has been so far neglected by the community. In this work we present the first pretraining method for few-shot predicate classification that does not require any annotated relations. We achieve this by introducing a generative model that is able to capture the variation of semantic, visual and spatial information of relations inside a latent space and later exploiting its representations in order to achieve efficient few-shot classification. We construct few-shot training splits and show quantitative experiments on VG200 and VRD datasets where our model outperforms the baselines. Lastly we attempt to interpret the decisions of the model by conducting various qualitative experiments.
|
1606.02319
|
Mark Newman
|
M. E. J. Newman
|
Community detection in networks: Modularity optimization and maximum
likelihood are equivalent
|
8 pages, 1 figure, 1 table
|
Phys. Rev. E 94, 052315 (2016)
|
10.1103/PhysRevE.94.052315
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate an exact equivalence between two widely used methods of
community detection in networks, the method of modularity maximization in its
generalized form which incorporates a resolution parameter controlling the size
of the communities discovered, and the method of maximum likelihood applied to
the special case of the stochastic block model known as the planted partition
model, in which all communities in a network are assumed to have statistically
similar properties. Among other things, this equivalence provides a
mathematically principled derivation of the modularity function, clarifies the
conditions and assumptions of its use, and gives an explicit formula for the
optimal value of the resolution parameter.
|
[
{
"created": "Tue, 7 Jun 2016 20:10:20 GMT",
"version": "v1"
}
] |
2016-11-24
|
[
[
"Newman",
"M. E. J.",
""
]
] |
We demonstrate an exact equivalence between two widely used methods of community detection in networks, the method of modularity maximization in its generalized form which incorporates a resolution parameter controlling the size of the communities discovered, and the method of maximum likelihood applied to the special case of the stochastic block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
|
2401.15366
|
Shiv Ram Dubey
|
Trinetra Devkatte, Shiv Ram Dubey, Satish Kumar Singh, Abdenour Hadid
|
Face to Cartoon Incremental Super-Resolution using Knowledge
Distillation
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial super-resolution/hallucination is an important area of research that
seeks to enhance low-resolution facial images for a variety of applications.
While Generative Adversarial Networks (GANs) have shown promise in this area,
their ability to adapt to new, unseen data remains a challenge. This paper
addresses this problem by proposing an incremental super-resolution using GANs
with knowledge distillation (ISR-KD) for face to cartoon. Previous research in
this area has not investigated incremental learning, which is critical for
real-world applications where new data is continually being generated. The
proposed ISR-KD aims to develop a novel unified framework for facial
super-resolution that can handle different settings, including different types
of faces such as cartoon face and various levels of detail. To achieve this, a
GAN-based super-resolution network was pre-trained on the CelebA dataset and
then incrementally trained on the iCartoonFace dataset, using knowledge
distillation to retain performance on the CelebA test set while improving the
performance on the iCartoonFace test set. Our experiments demonstrate the
effectiveness of knowledge distillation in incrementally adding capability to
the model for cartoon face super-resolution while retaining the learned
knowledge for facial hallucination tasks in GANs.
|
[
{
"created": "Sat, 27 Jan 2024 10:06:52 GMT",
"version": "v1"
}
] |
2024-01-30
|
[
[
"Devkatte",
"Trinetra",
""
],
[
"Dubey",
"Shiv Ram",
""
],
[
"Singh",
"Satish Kumar",
""
],
[
"Hadid",
"Abdenour",
""
]
] |
Facial super-resolution/hallucination is an important area of research that seeks to enhance low-resolution facial images for a variety of applications. While Generative Adversarial Networks (GANs) have shown promise in this area, their ability to adapt to new, unseen data remains a challenge. This paper addresses this problem by proposing an incremental super-resolution using GANs with knowledge distillation (ISR-KD) for face to cartoon. Previous research in this area has not investigated incremental learning, which is critical for real-world applications where new data is continually being generated. The proposed ISR-KD aims to develop a novel unified framework for facial super-resolution that can handle different settings, including different types of faces such as cartoon face and various levels of detail. To achieve this, a GAN-based super-resolution network was pre-trained on the CelebA dataset and then incrementally trained on the iCartoonFace dataset, using knowledge distillation to retain performance on the CelebA test set while improving the performance on the iCartoonFace test set. Our experiments demonstrate the effectiveness of knowledge distillation in incrementally adding capability to the model for cartoon face super-resolution while retaining the learned knowledge for facial hallucination tasks in GANs.
|
2211.12551
|
Meihua Dang
|
Meihua Dang, Anji Liu, Guy Van den Broeck
|
Sparse Probabilistic Circuits via Pruning and Growing
|
36th Conference on Neural Information Processing Systems (NeurIPS
2022)
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Probabilistic circuits (PCs) are a tractable representation of probability
distributions allowing for exact and efficient computation of likelihoods and
marginals. There has been significant recent progress on improving the scale
and expressiveness of PCs. However, PC training performance plateaus as model
size increases. We discover that most capacity in existing large PC structures
is wasted: fully-connected parameter layers are only sparsely used. We propose
two operations: pruning and growing, that exploit the sparsity of PC
structures. Specifically, the pruning operation removes unimportant
sub-networks of the PC for model compression and comes with theoretical
guarantees. The growing operation increases model capacity by increasing the
size of the latent space. By alternatingly applying pruning and growing, we
increase the capacity that is meaningfully used, allowing us to significantly
scale up PC learning. Empirically, our learner achieves state-of-the-art
likelihoods on MNIST-family image datasets and on Penn Tree Bank language data
compared to other PC learners and less tractable deep generative models such as
flow-based models and variational autoencoders (VAEs).
|
[
{
"created": "Tue, 22 Nov 2022 19:54:52 GMT",
"version": "v1"
}
] |
2022-11-24
|
[
[
"Dang",
"Meihua",
""
],
[
"Liu",
"Anji",
""
],
[
"Broeck",
"Guy Van den",
""
]
] |
Probabilistic circuits (PCs) are a tractable representation of probability distributions allowing for exact and efficient computation of likelihoods and marginals. There has been significant recent progress on improving the scale and expressiveness of PCs. However, PC training performance plateaus as model size increases. We discover that most capacity in existing large PC structures is wasted: fully-connected parameter layers are only sparsely used. We propose two operations: pruning and growing, that exploit the sparsity of PC structures. Specifically, the pruning operation removes unimportant sub-networks of the PC for model compression and comes with theoretical guarantees. The growing operation increases model capacity by increasing the size of the latent space. By alternatingly applying pruning and growing, we increase the capacity that is meaningfully used, allowing us to significantly scale up PC learning. Empirically, our learner achieves state-of-the-art likelihoods on MNIST-family image datasets and on Penn Tree Bank language data compared to other PC learners and less tractable deep generative models such as flow-based models and variational autoencoders (VAEs).
|
1804.08302
|
Boitumelo Ruf
|
Boitumelo Ruf, Laurenz Thiel, Martin Weinmann
|
Deep cross-domain building extraction for selective depth estimation
from oblique aerial imagery
|
Accepted in the ISPRS Annals of the Photogrammetry, Remote Sensing
and Spatial Information Science
|
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-1,
125-132, 2018
|
10.5194/isprs-annals-IV-1-125-2018
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the technological advancements of aerial imagery and accurate 3d
reconstruction of urban environments, more and more attention has been paid to
the automated analyses of urban areas. In our work, we examine two important
aspects that allow live analysis of building structures in city models given
oblique aerial imagery, namely automatic building extraction with convolutional
neural networks (CNNs) and selective real-time depth estimation from aerial
imagery. We use transfer learning to train the Faster R-CNN method for
real-time deep object detection, by combining a large ground-based dataset for
urban scene understanding with a smaller number of images from an aerial
dataset. We achieve an average precision (AP) of about 80% for the task of
building extraction on a selected evaluation dataset. Our evaluation focuses on
both dataset-specific learning and transfer learning. Furthermore, we present
an algorithm that allows for multi-view depth estimation from aerial imagery in
real-time. We adopt the semi-global matching (SGM) optimization strategy to
preserve sharp edges at object boundaries. In combination with the Faster
R-CNN, it allows a selective reconstruction of buildings, identified with
regions of interest (RoIs), from oblique aerial imagery.
|
[
{
"created": "Mon, 23 Apr 2018 09:22:55 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Jul 2018 07:49:31 GMT",
"version": "v2"
},
{
"created": "Sat, 21 Sep 2019 20:24:52 GMT",
"version": "v3"
}
] |
2019-09-24
|
[
[
"Ruf",
"Boitumelo",
""
],
[
"Thiel",
"Laurenz",
""
],
[
"Weinmann",
"Martin",
""
]
] |
With the technological advancements of aerial imagery and accurate 3d reconstruction of urban environments, more and more attention has been paid to the automated analyses of urban areas. In our work, we examine two important aspects that allow live analysis of building structures in city models given oblique aerial imagery, namely automatic building extraction with convolutional neural networks (CNNs) and selective real-time depth estimation from aerial imagery. We use transfer learning to train the Faster R-CNN method for real-time deep object detection, by combining a large ground-based dataset for urban scene understanding with a smaller number of images from an aerial dataset. We achieve an average precision (AP) of about 80% for the task of building extraction on a selected evaluation dataset. Our evaluation focuses on both dataset-specific learning and transfer learning. Furthermore, we present an algorithm that allows for multi-view depth estimation from aerial imagery in real-time. We adopt the semi-global matching (SGM) optimization strategy to preserve sharp edges at object boundaries. In combination with the Faster R-CNN, it allows a selective reconstruction of buildings, identified with regions of interest (RoIs), from oblique aerial imagery.
|
2103.03717
|
Flavio de Barros Vidal
|
Andre da Silva Abade, Lucas Faria Porto, Paulo Afonso Ferreira, Flavio
de Barros Vidal
|
NemaNet: A convolutional neural network model for identification of
nematodes soybean crop in brazil
|
21 pages, 13 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phytoparasitic nematodes (or phytonematodes) are causing severe damage to
crops and generating large-scale economic losses worldwide. In soybean crops,
annual losses are estimated at 10.6% of world production. Besides, identifying
these species through microscopic analysis by an expert with taxonomy knowledge
is often laborious, time-consuming, and susceptible to failure. In this
perspective, robust and automatic approaches are necessary for identifying
phytonematodes capable of providing correct diagnoses for the classification of
species and subsidizing the taking of all control and prevention measures. This
work presents a new public data set called NemaDataset containing 3,063
microscopic images from five nematode species with the most significant damage
relevance for the soybean crop. Additionally, we propose a new Convolutional
Neural Network (CNN) model defined as NemaNet and a comparative assessment with
thirteen popular CNN models, all representing the state of the art in
classification and recognition. Averaged across folds, NemaNet reached 96.99%
accuracy in from-scratch training, with the best evaluation fold reaching
98.03%. With transfer learning, the average accuracy reached 98.88% and the
best evaluation fold reached 99.34%, an overall accuracy improvement of over
6.83% and 4.1% for from-scratch and transfer learning training, respectively,
compared to other popular models.
|
[
{
"created": "Fri, 5 Mar 2021 14:47:00 GMT",
"version": "v1"
}
] |
2021-03-08
|
[
[
"Abade",
"Andre da Silva",
""
],
[
"Porto",
"Lucas Faria",
""
],
[
"Ferreira",
"Paulo Afonso",
""
],
[
"Vidal",
"Flavio de Barros",
""
]
] |
Phytoparasitic nematodes (or phytonematodes) are causing severe damage to crops and generating large-scale economic losses worldwide. In soybean crops, annual losses are estimated at 10.6% of world production. Besides, identifying these species through microscopic analysis by an expert with taxonomy knowledge is often laborious, time-consuming, and susceptible to failure. In this perspective, robust and automatic approaches are necessary for identifying phytonematodes capable of providing correct diagnoses for the classification of species and subsidizing the taking of all control and prevention measures. This work presents a new public data set called NemaDataset containing 3,063 microscopic images from five nematode species with the most significant damage relevance for the soybean crop. Additionally, we propose a new Convolutional Neural Network (CNN) model defined as NemaNet and a comparative assessment with thirteen popular CNN models, all representing the state of the art in classification and recognition. Averaged across folds, NemaNet reached 96.99% accuracy in from-scratch training, with the best evaluation fold reaching 98.03%. With transfer learning, the average accuracy reached 98.88% and the best evaluation fold reached 99.34%, an overall accuracy improvement of over 6.83% and 4.1% for from-scratch and transfer learning training, respectively, compared to other popular models.
|
2009.09140
|
Jindong Gu
|
Jindong Gu and Zhiliang Wu and Volker Tresp
|
Introspective Learning by Distilling Knowledge from Online
Self-explanation
| null |
15th Asian Conference on Computer Vision (ACCV) 2020
| null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, many explanation methods have been proposed to explain
individual classifications of deep neural networks. However, how to leverage
the created explanations to improve the learning process has been less
explored. As the privileged information, the explanations of a model can be
used to guide the learning process of the model itself. In the community,
another intensively investigated privileged information used to guide the
training of a model is the knowledge from a powerful teacher model. The goal of
this work is to leverage the self-explanation to improve the learning process
by borrowing ideas from knowledge distillation. We start by investigating the
effective components of the knowledge transferred from the teacher network to
the student network. Our investigation reveals that both the responses in
non-ground-truth classes and class-similarity information in teacher's outputs
contribute to the success of the knowledge distillation. Motivated by the
conclusion, we propose an implementation of introspective learning by
distilling knowledge from online self-explanations. The models trained with the
introspective learning procedure outperform the ones trained with the standard
learning procedure, as well as the ones trained with different regularization
methods. When compared to the models learned from peer networks or teacher
networks, our models also show competitive performance and require neither
peers nor teachers.
|
[
{
"created": "Sat, 19 Sep 2020 02:05:32 GMT",
"version": "v1"
}
] |
2020-09-22
|
[
[
"Gu",
"Jindong",
""
],
[
"Wu",
"Zhiliang",
""
],
[
"Tresp",
"Volker",
""
]
] |
In recent years, many explanation methods have been proposed to explain individual classifications of deep neural networks. However, how to leverage the created explanations to improve the learning process has been less explored. As the privileged information, the explanations of a model can be used to guide the learning process of the model itself. In the community, another intensively investigated privileged information used to guide the training of a model is the knowledge from a powerful teacher model. The goal of this work is to leverage the self-explanation to improve the learning process by borrowing ideas from knowledge distillation. We start by investigating the effective components of the knowledge transferred from the teacher network to the student network. Our investigation reveals that both the responses in non-ground-truth classes and class-similarity information in teacher's outputs contribute to the success of the knowledge distillation. Motivated by the conclusion, we propose an implementation of introspective learning by distilling knowledge from online self-explanations. The models trained with the introspective learning procedure outperform the ones trained with the standard learning procedure, as well as the ones trained with different regularization methods. When compared to the models learned from peer networks or teacher networks, our models also show competitive performance and require neither peers nor teachers.
|
cs/0701187
|
Christian Schallhart
|
Sagar Chaki, Christian Schallhart, Helmut Veith
|
Verification Across Intellectual Property Boundaries
| null | null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many industries, the importance of software components provided by
third-party suppliers is steadily increasing. As the suppliers seek to secure
their intellectual property (IP) rights, the customer usually has no direct
access to the suppliers' source code, and is able to enforce the use of
verification tools only by legal requirements. In turn, the supplier has no
means to convince the customer about successful verification without revealing
the source code. This paper presents an approach to resolve the conflict
between the IP interests of the supplier and the quality interests of the
customer. We introduce a protocol in which a dedicated server (called the
"amanat") is controlled by both parties: the customer controls the verification
task performed by the amanat, while the supplier controls the communication
channels of the amanat to ensure that the amanat does not leak information
about the source code. We argue that the protocol is both practically useful
and mathematically sound. As the protocol is based on well-known (and
relatively lightweight) cryptographic primitives, it allows a straightforward
implementation on top of existing verification tool chains. To substantiate our
security claims, we establish the correctness of the protocol by cryptographic
reduction proofs.
|
[
{
"created": "Mon, 29 Jan 2007 17:09:17 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Nov 2011 17:41:16 GMT",
"version": "v2"
}
] |
2011-12-01
|
[
[
"Chaki",
"Sagar",
""
],
[
"Schallhart",
"Christian",
""
],
[
"Veith",
"Helmut",
""
]
] |
In many industries, the importance of software components provided by third-party suppliers is steadily increasing. As the suppliers seek to secure their intellectual property (IP) rights, the customer usually has no direct access to the suppliers' source code, and is able to enforce the use of verification tools only by legal requirements. In turn, the supplier has no means to convince the customer about successful verification without revealing the source code. This paper presents an approach to resolve the conflict between the IP interests of the supplier and the quality interests of the customer. We introduce a protocol in which a dedicated server (called the "amanat") is controlled by both parties: the customer controls the verification task performed by the amanat, while the supplier controls the communication channels of the amanat to ensure that the amanat does not leak information about the source code. We argue that the protocol is both practically useful and mathematically sound. As the protocol is based on well-known (and relatively lightweight) cryptographic primitives, it allows a straightforward implementation on top of existing verification tool chains. To substantiate our security claims, we establish the correctness of the protocol by cryptographic reduction proofs.
|
2402.11633
|
Arian Askari
|
Arian Askari, Roxana Petcu, Chuan Meng, Mohammad Aliannejadi, Amin
Abolghasemi, Evangelos Kanoulas, Suzan Verberne
|
Self-seeding and Multi-intent Self-instructing LLMs for Generating
Intent-aware Information-Seeking dialogs
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying user intents in information-seeking dialogs is crucial for a
system to meet users' information needs. Intent prediction (IP) is challenging
and demands sufficient dialogs with human-labeled intents for training.
However, manually annotating intents is resource-intensive. While large
language models (LLMs) have been shown to be effective in generating synthetic
data, there is no study on using LLMs to generate intent-aware
information-seeking dialogs. In this paper, we focus on leveraging LLMs for
zero-shot generation of large-scale, open-domain, and intent-aware
information-seeking dialogs. We propose SOLID, which has novel self-seeding and
multi-intent self-instructing schemes. The former improves the generation
quality by using the LLM's own knowledge scope to initiate dialog generation;
the latter prompts the LLM to generate utterances sequentially, and mitigates
the need for manual prompt design by asking the LLM to autonomously adapt its
prompt instruction when generating complex multi-intent utterances.
Furthermore, we propose SOLID-RL, which is further trained to generate a dialog
in one step on the data generated by SOLID. We propose a length-based quality
estimation mechanism to assign varying weights to SOLID-generated dialogs based
on their quality during the training process of SOLID-RL. We use SOLID and
SOLID-RL to generate more than 300k intent-aware dialogs, surpassing the size
of existing datasets. Experiments show that IP methods trained on dialogs
generated by SOLID and SOLID-RL achieve better IP quality than ones trained on
human-generated dialogs.
|
[
{
"created": "Sun, 18 Feb 2024 16:20:43 GMT",
"version": "v1"
}
] |
2024-02-20
|
[
[
"Askari",
"Arian",
""
],
[
"Petcu",
"Roxana",
""
],
[
"Meng",
"Chuan",
""
],
[
"Aliannejadi",
"Mohammad",
""
],
[
"Abolghasemi",
"Amin",
""
],
[
"Kanoulas",
"Evangelos",
""
],
[
"Verberne",
"Suzan",
""
]
] |
Identifying user intents in information-seeking dialogs is crucial for a system to meet users' information needs. Intent prediction (IP) is challenging and demands sufficient dialogs with human-labeled intents for training. However, manually annotating intents is resource-intensive. While large language models (LLMs) have been shown to be effective in generating synthetic data, there is no study on using LLMs to generate intent-aware information-seeking dialogs. In this paper, we focus on leveraging LLMs for zero-shot generation of large-scale, open-domain, and intent-aware information-seeking dialogs. We propose SOLID, which has novel self-seeding and multi-intent self-instructing schemes. The former improves the generation quality by using the LLM's own knowledge scope to initiate dialog generation; the latter prompts the LLM to generate utterances sequentially, and mitigates the need for manual prompt design by asking the LLM to autonomously adapt its prompt instruction when generating complex multi-intent utterances. Furthermore, we propose SOLID-RL, which is further trained to generate a dialog in one step on the data generated by SOLID. We propose a length-based quality estimation mechanism to assign varying weights to SOLID-generated dialogs based on their quality during the training process of SOLID-RL. We use SOLID and SOLID-RL to generate more than 300k intent-aware dialogs, surpassing the size of existing datasets. Experiments show that IP methods trained on dialogs generated by SOLID and SOLID-RL achieve better IP quality than ones trained on human-generated dialogs.
|
2210.06071
|
Ruiyuan Kang
|
Ruiyuan Kang, Dimitrios C. Kyritsis, Panos Liatsis
|
Self-Validated Physics-Embedding Network: A General Framework for
Inverse Modelling
|
32 pages, 25 figures, four tables
| null | null | null |
cs.NE cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Physics-based inverse modeling techniques are typically restricted to
particular research fields, whereas popular machine-learning-based ones are too
data-dependent to guarantee the physical compatibility of the solution. In this
paper, Self-Validated Physics-Embedding Network (SVPEN), a general neural
network framework for inverse modeling is proposed. As its name suggests, the
embedded physical forward model ensures that any solution that successfully
passes its validation is physically reasonable. SVPEN operates in two modes:
(a) the inverse function mode offers rapid state estimation as conventional
supervised learning, and (b) the optimization mode offers a way to iteratively
correct estimations that fail the validation process. Furthermore, the
optimization mode provides SVPEN with reconfigurability i.e., replacing
components like neural networks, physical models, and error calculations at
will to solve a series of distinct inverse problems without pretraining. More
than ten case studies in two highly nonlinear and entirely distinct
applications: molecular absorption spectroscopy and Turbofan cycle analysis,
demonstrate the generality, physical reliability, and reconfigurability of
SVPEN. More importantly, SVPEN offers a solid foundation to use existing
physical models within the context of AI, so as to strike a balance between
data-driven and physics-driven models.
|
[
{
"created": "Wed, 12 Oct 2022 10:31:36 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Oct 2022 11:36:26 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Jan 2023 06:48:07 GMT",
"version": "v3"
}
] |
2023-01-18
|
[
[
"Kang",
"Ruiyuan",
""
],
[
"Kyritsis",
"Dimitrios C.",
""
],
[
"Liatsis",
"Panos",
""
]
] |
Physics-based inverse modeling techniques are typically restricted to particular research fields, whereas popular machine-learning-based ones are too data-dependent to guarantee the physical compatibility of the solution. In this paper, Self-Validated Physics-Embedding Network (SVPEN), a general neural network framework for inverse modeling is proposed. As its name suggests, the embedded physical forward model ensures that any solution that successfully passes its validation is physically reasonable. SVPEN operates in two modes: (a) the inverse function mode offers rapid state estimation as conventional supervised learning, and (b) the optimization mode offers a way to iteratively correct estimations that fail the validation process. Furthermore, the optimization mode provides SVPEN with reconfigurability i.e., replacing components like neural networks, physical models, and error calculations at will to solve a series of distinct inverse problems without pretraining. More than ten case studies in two highly nonlinear and entirely distinct applications: molecular absorption spectroscopy and Turbofan cycle analysis, demonstrate the generality, physical reliability, and reconfigurability of SVPEN. More importantly, SVPEN offers a solid foundation to use existing physical models within the context of AI, so as to strike a balance between data-driven and physics-driven models.
|
2311.04503
|
Thibault Simonetto
|
Thibault Simonetto, Salah Ghamizi, Antoine Desjardins, Maxime Cordy,
Yves Le Traon
|
Constrained Adaptive Attacks: Realistic Evaluation of Adversarial
Examples and Robust Training of Deep Neural Networks for Tabular Data
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art deep learning models for tabular data have recently achieved
acceptable performance to be deployed in industrial settings. However, the
robustness of these models remains scarcely explored. Contrary to computer
vision, there is to date no realistic protocol to properly evaluate the
adversarial robustness of deep tabular models due to intrinsic properties of
tabular data such as categorical features, immutability, and feature
relationship constraints. To fill this gap, we propose CAA, the first efficient
evasion attack for constrained tabular deep learning models. CAA is an
iterative parameter-free attack that combines gradient and search attacks to
generate adversarial examples under constraints. We leverage CAA to build a
benchmark of deep tabular models across three popular use cases: credit
scoring, phishing, and botnet attack detection. Our benchmark supports ten
threat models with increasing capabilities of the attacker, and reflects
real-world attack scenarios for each use case. Overall, our results demonstrate
how domain knowledge, adversarial training, and attack budgets impact the
robustness assessment of deep tabular models and provide security practitioners
with a set of recommendations to improve the robustness of deep tabular models
against various evasion attack scenarios.
|
[
{
"created": "Wed, 8 Nov 2023 07:35:28 GMT",
"version": "v1"
}
] |
2023-11-09
|
[
[
"Simonetto",
"Thibault",
""
],
[
"Ghamizi",
"Salah",
""
],
[
"Desjardins",
"Antoine",
""
],
[
"Cordy",
"Maxime",
""
],
[
"Traon",
"Yves Le",
""
]
] |
State-of-the-art deep learning models for tabular data have recently achieved acceptable performance to be deployed in industrial settings. However, the robustness of these models remains scarcely explored. Contrary to computer vision, there is to date no realistic protocol to properly evaluate the adversarial robustness of deep tabular models due to intrinsic properties of tabular data such as categorical features, immutability, and feature relationship constraints. To fill this gap, we propose CAA, the first efficient evasion attack for constrained tabular deep learning models. CAA is an iterative parameter-free attack that combines gradient and search attacks to generate adversarial examples under constraints. We leverage CAA to build a benchmark of deep tabular models across three popular use cases: credit scoring, phishing, and botnet attack detection. Our benchmark supports ten threat models with increasing capabilities of the attacker, and reflects real-world attack scenarios for each use case. Overall, our results demonstrate how domain knowledge, adversarial training, and attack budgets impact the robustness assessment of deep tabular models and provide security practitioners with a set of recommendations to improve the robustness of deep tabular models against various evasion attack scenarios.
|
2007.12144
|
Oladapo Oyebode
|
Oladapo Oyebode, Chinenye Ndulue, Ashfaq Adib, Dinesh Mulchandani,
Banuchitra Suruliraj, Fidelia Anulika Orji, Christine Chambers, Sandra Meier,
and Rita Orji
|
Health, Psychosocial, and Social issues emanating from COVID-19 pandemic
based on Social Media Comments using Natural Language Processing
| null |
JMIR Medical Informatics. 2021. 9(4):e22734
|
10.2196/22734
| null |
cs.CL cs.IR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has caused a global health crisis that affects many
aspects of human lives. In the absence of vaccines and antivirals, several
behavioural change and policy initiatives, such as physical distancing, have
been implemented to control the spread of the coronavirus. Social media data
can reveal public perceptions toward how governments and health agencies across
the globe are handling the pandemic, as well as the impact of the disease on
people regardless of their geographic locations in line with various factors
that hinder or facilitate the efforts to control the spread of the pandemic
globally. This paper aims to investigate the impact of the COVID-19 pandemic on
people globally using social media data. We apply natural language processing
(NLP) and thematic analysis to understand public opinions, experiences, and
issues with respect to the COVID-19 pandemic using social media data. First, we
collect over 47 million COVID-19-related comments from Twitter, Facebook,
YouTube, and three online discussion forums. Second, we perform data
preprocessing which involves applying NLP techniques to clean and prepare the
data for automated theme extraction. Third, we apply a context-aware NLP
to extract meaningful keyphrases or themes from over 1 million randomly
selected comments, as well as compute sentiment scores for each theme and
assign sentiment polarity based on the scores using a lexicon-based technique.
Fourth, we categorize related themes into broader themes. A total of 34
negative themes emerged, out of which 15 are health-related issues,
psychosocial issues, and social issues related to the COVID-19 pandemic from
the public perspective. In addition, 20 positive themes emerged from our
results. Finally, we recommend interventions that can help address the negative
issues based on the positive themes and other remedial ideas rooted in
research.
|
[
{
"created": "Thu, 23 Jul 2020 17:19:50 GMT",
"version": "v1"
}
] |
2021-04-09
|
[
[
"Oyebode",
"Oladapo",
""
],
[
"Ndulue",
"Chinenye",
""
],
[
"Adib",
"Ashfaq",
""
],
[
"Mulchandani",
"Dinesh",
""
],
[
"Suruliraj",
"Banuchitra",
""
],
[
"Orji",
"Fidelia Anulika",
""
],
[
"Chambers",
"Christine",
""
],
[
"Meier",
"Sandra",
""
],
[
"Orji",
"Rita",
""
]
] |
The COVID-19 pandemic has caused a global health crisis that affects many aspects of human lives. In the absence of vaccines and antivirals, several behavioural change and policy initiatives, such as physical distancing, have been implemented to control the spread of the coronavirus. Social media data can reveal public perceptions toward how governments and health agencies across the globe are handling the pandemic, as well as the impact of the disease on people regardless of their geographic locations in line with various factors that hinder or facilitate the efforts to control the spread of the pandemic globally. This paper aims to investigate the impact of the COVID-19 pandemic on people globally using social media data. We apply natural language processing (NLP) and thematic analysis to understand public opinions, experiences, and issues with respect to the COVID-19 pandemic using social media data. First, we collect over 47 million COVID-19-related comments from Twitter, Facebook, YouTube, and three online discussion forums. Second, we perform data preprocessing which involves applying NLP techniques to clean and prepare the data for automated theme extraction. Third, we apply a context-aware NLP approach to extract meaningful keyphrases or themes from over 1 million randomly selected comments, as well as compute sentiment scores for each theme and assign sentiment polarity based on the scores using a lexicon-based technique. Fourth, we categorize related themes into broader themes. A total of 34 negative themes emerged, out of which 15 are health-related issues, psychosocial issues, and social issues related to the COVID-19 pandemic from the public perspective. In addition, 20 positive themes emerged from our results. Finally, we recommend interventions that can help address the negative issues based on the positive themes and other remedial ideas rooted in research.
|
1412.7479
|
Sudheendra Vijayanarasimhan
|
Sudheendra Vijayanarasimhan and Jonathon Shlens and Rajat Monga and
Jay Yagnik
|
Deep Networks With Large Output Spaces
| null | null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks have been extremely successful at various image, speech, and
video recognition tasks because of their ability to model deep structures
within the data. However, they are still prohibitively expensive to train and
apply for problems containing millions of classes in the output layer. Based on
the observation that the key computation common to most neural network layers
is a vector/matrix product, we propose a fast locality-sensitive hashing
technique to approximate the actual dot product enabling us to scale up the
training and inference to millions of output classes. We evaluate our technique
on three diverse large-scale recognition tasks and show that our approach can
train large-scale models at a faster rate (in terms of steps/total time)
compared to baseline methods.
|
[
{
"created": "Tue, 23 Dec 2014 19:22:59 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Dec 2014 18:45:36 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Feb 2015 01:12:58 GMT",
"version": "v3"
},
{
"created": "Fri, 10 Apr 2015 19:53:21 GMT",
"version": "v4"
}
] |
2015-04-13
|
[
[
"Vijayanarasimhan",
"Sudheendra",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Monga",
"Rajat",
""
],
[
"Yagnik",
"Jay",
""
]
] |
Deep neural networks have been extremely successful at various image, speech, and video recognition tasks because of their ability to model deep structures within the data. However, they are still prohibitively expensive to train and apply for problems containing millions of classes in the output layer. Based on the observation that the key computation common to most neural network layers is a vector/matrix product, we propose a fast locality-sensitive hashing technique to approximate the actual dot product, enabling us to scale up the training and inference to millions of output classes. We evaluate our technique on three diverse large-scale recognition tasks and show that our approach can train large-scale models at a faster rate (in terms of steps/total time) compared to baseline methods.
|
2011.15038
|
Rafi Trad
|
Rafi Trad, Myra Spiliopoulou
|
A Framework for Authorial Clustering of Shorter Texts in Latent Semantic
Spaces
|
8 pages including references
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Authorial clustering involves the grouping of documents written by the same
author or team of authors without any prior positive examples of an author's
writing style or thematic preferences. For authorial clustering on shorter
texts (paragraph-length texts that are typically shorter than conventional
documents), the document representation is particularly important: very
high-dimensional feature spaces lead to data sparsity and suffer from serious
consequences like the curse of dimensionality, while feature selection may lead
to information loss. We propose a high-level framework which utilizes a compact
data representation in a latent feature space derived with non-parametric topic
modeling. Authorial clusters are identified thereafter in two scenarios: (a)
fully unsupervised and (b) semi-supervised, where a small number of shorter
texts are known to belong to the same author (must-link constraints) or not
(cannot-link constraints). We report on experiments with 120 collections in
three languages and two genres and show that the topic-based latent feature
space provides a promising level of performance while reducing the
dimensionality by a factor of 1500 compared to state-of-the-art approaches. We
also demonstrate that, while prior knowledge of the precise number of authors
(i.e. authorial clusters) does not contribute much additional quality, little
knowledge of constraints on authorial cluster memberships leads to clear
performance improvements on this difficult task. Thorough experimentation with
standard metrics indicates that there still remains ample room for improvement
for authorial clustering, especially with shorter texts.
|
[
{
"created": "Mon, 30 Nov 2020 17:39:44 GMT",
"version": "v1"
}
] |
2020-12-01
|
[
[
"Trad",
"Rafi",
""
],
[
"Spiliopoulou",
"Myra",
""
]
] |
Authorial clustering involves the grouping of documents written by the same author or team of authors without any prior positive examples of an author's writing style or thematic preferences. For authorial clustering on shorter texts (paragraph-length texts that are typically shorter than conventional documents), the document representation is particularly important: very high-dimensional feature spaces lead to data sparsity and suffer from serious consequences like the curse of dimensionality, while feature selection may lead to information loss. We propose a high-level framework which utilizes a compact data representation in a latent feature space derived with non-parametric topic modeling. Authorial clusters are identified thereafter in two scenarios: (a) fully unsupervised and (b) semi-supervised, where a small number of shorter texts are known to belong to the same author (must-link constraints) or not (cannot-link constraints). We report on experiments with 120 collections in three languages and two genres and show that the topic-based latent feature space provides a promising level of performance while reducing the dimensionality by a factor of 1500 compared to state-of-the-art approaches. We also demonstrate that, while prior knowledge of the precise number of authors (i.e. authorial clusters) does not contribute much additional quality, little knowledge of constraints on authorial cluster memberships leads to clear performance improvements on this difficult task. Thorough experimentation with standard metrics indicates that there still remains ample room for improvement for authorial clustering, especially with shorter texts.
|
2006.16541
|
Jiaxuan Wang
|
Jiaxuan Wang, Jenna Wiens
|
AdaSGD: Bridging the gap between SGD and Adam
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of stochastic gradient descent (SGD) and adaptive moment
estimation (Adam), researchers have recently proposed optimization techniques
that transition from Adam to SGD with the goal of improving both convergence
and generalization performance. However, precisely how each approach trades off
early progress and generalization is not well understood; thus, it is unclear
when, or even if, one should transition from one approach to the other. In this
work, by first studying the convex setting, we identify potential contributors
to observed differences in performance between SGD and Adam. In particular, we
provide theoretical insights for when and why Adam outperforms SGD and vice
versa. We address the performance gap by adapting a single global learning
rate for SGD, which we refer to as AdaSGD. We justify this proposed approach
with empirical analyses in non-convex settings. On several datasets that span
three different domains, we demonstrate how AdaSGD combines the benefits of
both SGD and Adam, eliminating the need for approaches that transition from
Adam to SGD.
|
[
{
"created": "Tue, 30 Jun 2020 05:44:19 GMT",
"version": "v1"
}
] |
2020-07-01
|
[
[
"Wang",
"Jiaxuan",
""
],
[
"Wiens",
"Jenna",
""
]
] |
In the context of stochastic gradient descent (SGD) and adaptive moment estimation (Adam), researchers have recently proposed optimization techniques that transition from Adam to SGD with the goal of improving both convergence and generalization performance. However, precisely how each approach trades off early progress and generalization is not well understood; thus, it is unclear when, or even if, one should transition from one approach to the other. In this work, by first studying the convex setting, we identify potential contributors to observed differences in performance between SGD and Adam. In particular, we provide theoretical insights for when and why Adam outperforms SGD and vice versa. We address the performance gap by adapting a single global learning rate for SGD, which we refer to as AdaSGD. We justify this proposed approach with empirical analyses in non-convex settings. On several datasets that span three different domains, we demonstrate how AdaSGD combines the benefits of both SGD and Adam, eliminating the need for approaches that transition from Adam to SGD.
|
2404.01954
|
Kang Min Yoo
|
Kang Min Yoo, Jaegeun Han, Sookyo In, Heewon Jeon, Jisu Jeong, Jaewook
Kang, Hyunwook Kim, Kyung-Min Kim, Munhyong Kim, Sungju Kim, Donghyun Kwak,
Hanock Kwak, Se Jung Kwon, Bado Lee, Dongsoo Lee, Gichang Lee, Jooho Lee,
Baeseong Park, Seongjin Shin, Joonsang Yu, Seolki Baek, Sumin Byeon, Eungsup
Cho, Dooseok Choe, Jeesung Han, Youngkyun Jin, Hyein Jun, Jaeseung Jung,
Chanwoong Kim, Jinhong Kim, Jinuk Kim, Dokyeong Lee, Dongwook Park, Jeong Min
Sohn, Sujung Han, Jiae Heo, Sungju Hong, Mina Jeon, Hyunhoon Jung, Jungeun
Jung, Wangkyo Jung, Chungjoon Kim, Hyeri Kim, Jonghyun Kim, Min Young Kim,
Soeun Lee, Joonhee Park, Jieun Shin, Sojin Yang, Jungsoon Yoon, Hwaran Lee,
Sanghwan Bae, Jeehwan Cha, Karl Gylleus, Donghoon Ham, Mihak Hong, Youngki
Hong, Yunki Hong, Dahyun Jang, Hyojun Jeon, Yujin Jeon, Yeji Jeong, Myunggeun
Ji, Yeguk Jin, Chansong Jo, Shinyoung Joo, Seunghwan Jung, Adrian Jungmyung
Kim, Byoung Hoon Kim, Hyomin Kim, Jungwhan Kim, Minkyoung Kim, Minseung Kim,
Sungdong Kim, Yonghee Kim, Youngjun Kim, Youngkwan Kim, Donghyeon Ko, Dughyun
Lee, Ha Young Lee, Jaehong Lee, Jieun Lee, Jonghyun Lee, Jongjin Lee, Min
Young Lee, Yehbin Lee, Taehong Min, Yuri Min, Kiyoon Moon, Hyangnam Oh,
Jaesun Park, Kyuyon Park, Younghun Park, Hanbae Seo, Seunghyun Seo, Mihyun
Sim, Gyubin Son, Matt Yeo, Kyung Hoon Yeom, Wonjoon Yoo, Myungin You, Doheon
Ahn, Homin Ahn, Joohee Ahn, Seongmin Ahn, Chanwoo An, Hyeryun An, Junho An,
Sang-Min An, Boram Byun, Eunbin Byun, Jongho Cha, Minji Chang, Seunggyu
Chang, Haesong Cho, Youngdo Cho, Dalnim Choi, Daseul Choi, Hyoseok Choi,
Minseong Choi, Sangho Choi, Seongjae Choi, Wooyong Choi, Sewhan Chun, Dong
Young Go, Chiheon Ham, Danbi Han, Jaemin Han, Moonyoung Hong, Sung Bum Hong,
Dong-Hyun Hwang, Seongchan Hwang, Jinbae Im, Hyuk Jin Jang, Jaehyung Jang,
Jaeni Jang, Sihyeon Jang, Sungwon Jang, Joonha Jeon, Daun Jeong, Joonhyun
Jeong, Kyeongseok Jeong, Mini Jeong, Sol Jin, Hanbyeol Jo, Hanju Jo, Minjung
Jo, Chaeyoon Jung, Hyungsik Jung, Jaeuk Jung, Ju Hwan Jung, Kwangsun Jung,
Seungjae Jung, Soonwon Ka, Donghan Kang, Soyoung Kang, Taeho Kil, Areum Kim,
Beomyoung Kim, Byeongwook Kim, Daehee Kim, Dong-Gyun Kim, Donggook Kim,
Donghyun Kim, Euna Kim, Eunchul Kim, Geewook Kim, Gyu Ri Kim, Hanbyul Kim,
Heesu Kim, Isaac Kim, Jeonghoon Kim, Jihye Kim, Joonghoon Kim, Minjae Kim,
Minsub Kim, Pil Hwan Kim, Sammy Kim, Seokhun Kim, Seonghyeon Kim, Soojin Kim,
Soong Kim, Soyoon Kim, Sunyoung Kim, Taeho Kim, Wonho Kim, Yoonsik Kim, You
Jin Kim, Yuri Kim, Beomseok Kwon, Ohsung Kwon, Yoo-Hwan Kwon, Anna Lee,
Byungwook Lee, Changho Lee, Daun Lee, Dongjae Lee, Ha-Ram Lee, Hodong Lee,
Hwiyeong Lee, Hyunmi Lee, Injae Lee, Jaeung Lee, Jeongsang Lee, Jisoo Lee,
Jongsoo Lee, Joongjae Lee, Juhan Lee, Jung Hyun Lee, Junghoon Lee, Junwoo
Lee, Se Yun Lee, Sujin Lee, Sungjae Lee, Sungwoo Lee, Wonjae Lee, Zoo Hyun
Lee, Jong Kun Lim, Kun Lim, Taemin Lim, Nuri Na, Jeongyeon Nam, Kyeong-Min
Nam, Yeonseog Noh, Biro Oh, Jung-Sik Oh, Solgil Oh, Yeontaek Oh, Boyoun Park,
Cheonbok Park, Dongju Park, Hyeonjin Park, Hyun Tae Park, Hyunjung Park,
Jihye Park, Jooseok Park, Junghwan Park, Jungsoo Park, Miru Park, Sang Hee
Park, Seunghyun Park, Soyoung Park, Taerim Park, Wonkyeong Park, Hyunjoon
Ryu, Jeonghun Ryu, Nahyeon Ryu, Soonshin Seo, Suk Min Seo, Yoonjeong Shim,
Kyuyong Shin, Wonkwang Shin, Hyun Sim, Woongseob Sim, Hyejin Soh, Bokyong
Son, Hyunjun Son, Seulah Son, Chi-Yun Song, Chiyoung Song, Ka Yeon Song,
Minchul Song, Seungmin Song, Jisung Wang, Yonggoo Yeo, Myeong Yeon Yi, Moon
Bin Yim, Taehwan Yoo, Youngjoon Yoo, Sungmin Yoon, Young Jin Yoon, Hangyeol
Yu, Ui Seon Yu, Xingdong Zuo, Jeongin Bae, Joungeun Bae, Hyunsoo Cho,
Seonghyun Cho, Yongjin Cho, Taekyoon Choi, Yera Choi, Jiwan Chung, Zhenghui
Han, Byeongho Heo, Euisuk Hong, Taebaek Hwang, Seonyeol Im, Sumin Jegal,
Sumin Jeon, Yelim Jeong, Yonghyun Jeong, Can Jiang, Juyong Jiang, Jiho Jin,
Ara Jo, Younghyun Jo, Hoyoun Jung, Juyoung Jung, Seunghyeong Kang, Dae Hee
Kim, Ginam Kim, Hangyeol Kim, Heeseung Kim, Hyojin Kim, Hyojun Kim, Hyun-Ah
Kim, Jeehye Kim, Jin-Hwa Kim, Jiseon Kim, Jonghak Kim, Jung Yoon Kim, Rak
Yeong Kim, Seongjin Kim, Seoyoon Kim, Sewon Kim, Sooyoung Kim, Sukyoung Kim,
Taeyong Kim, Naeun Ko, Bonseung Koo, Heeyoung Kwak, Haena Kwon, Youngjin
Kwon, Boram Lee, Bruce W. Lee, Dagyeong Lee, Erin Lee, Euijin Lee, Ha Gyeong
Lee, Hyojin Lee, Hyunjeong Lee, Jeeyoon Lee, Jeonghyun Lee, Jongheok Lee,
Joonhyung Lee, Junhyuk Lee, Mingu Lee, Nayeon Lee, Sangkyu Lee, Se Young Lee,
Seulgi Lee, Seung Jin Lee, Suhyeon Lee, Yeonjae Lee, Yesol Lee, Youngbeom
Lee, Yujin Lee, Shaodong Li, Tianyu Liu, Seong-Eun Moon, Taehong Moon,
Max-Lasse Nihlenramstroem, Wonseok Oh, Yuri Oh, Hongbeen Park, Hyekyung Park,
Jaeho Park, Nohil Park, Sangjin Park, Jiwon Ryu, Miru Ryu, Simo Ryu, Ahreum
Seo, Hee Seo, Kangdeok Seo, Jamin Shin, Seungyoun Shin, Heetae Sin, Jiangping
Wang, Lei Wang, Ning Xiang, Longxiang Xiao, Jing Xu, Seonyeong Yi, Haanju
Yoo, Haneul Yoo, Hwanhee Yoo, Liang Yu, Youngjae Yu, Weijie Yuan, Bo Zeng,
Qian Zhou, Kyunghyun Cho, Jung-Woo Ha, Joonsuk Park, Jihyun Hwang, Hyoung Jo
Kwon, Soonyong Kwon, Jungyeon Lee, Seungho Lee, Seonghyeon Lim, Hyunkyung
Noh, Seungho Choi, Sang-Woo Lee, Jung Hwa Lim, Nako Sung
|
HyperCLOVA X Technical Report
|
44 pages; updated authors list and fixed author names
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored
to the Korean language and culture, along with competitive capabilities in
English, math, and coding. HyperCLOVA X was trained on a balanced mix of
Korean, English, and code data, followed by instruction-tuning with
high-quality human-annotated datasets while abiding by strict safety guidelines
reflecting our commitment to responsible AI. The model is evaluated across
various benchmarks, including comprehensive reasoning, knowledge, commonsense,
factuality, coding, math, chatting, instruction-following, and harmlessness, in
both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in
Korean backed by a deep understanding of the language and cultural nuances.
Further analysis of the inherent bilingual nature and its extension to
multilingualism highlights the model's cross-lingual proficiency and strong
generalization ability to untargeted languages, including machine translation
between several language pairs and cross-lingual inference tasks. We believe
that HyperCLOVA X can provide helpful guidance for regions or countries in
developing their sovereign LLMs.
|
[
{
"created": "Tue, 2 Apr 2024 13:48:49 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Apr 2024 15:06:19 GMT",
"version": "v2"
}
] |
2024-04-16
|
[
[
"Yoo",
"Kang Min",
""
],
[
"Han",
"Jaegeun",
""
],
[
"In",
"Sookyo",
""
],
[
"Jeon",
"Heewon",
""
],
[
"Jeong",
"Jisu",
""
],
[
"Kang",
"Jaewook",
""
],
[
"Kim",
"Hyunwook",
""
],
[
"Kim",
"Kyung-Min",
""
],
[
"Kim",
"Munhyong",
""
],
[
"Kim",
"Sungju",
""
],
[
"Kwak",
"Donghyun",
""
],
[
"Kwak",
"Hanock",
""
],
[
"Kwon",
"Se Jung",
""
],
[
"Lee",
"Bado",
""
],
[
"Lee",
"Dongsoo",
""
],
[
"Lee",
"Gichang",
""
],
[
"Lee",
"Jooho",
""
],
[
"Park",
"Baeseong",
""
],
[
"Shin",
"Seongjin",
""
],
[
"Yu",
"Joonsang",
""
],
[
"Baek",
"Seolki",
""
],
[
"Byeon",
"Sumin",
""
],
[
"Cho",
"Eungsup",
""
],
[
"Choe",
"Dooseok",
""
],
[
"Han",
"Jeesung",
""
],
[
"Jin",
"Youngkyun",
""
],
[
"Jun",
"Hyein",
""
],
[
"Jung",
"Jaeseung",
""
],
[
"Kim",
"Chanwoong",
""
],
[
"Kim",
"Jinhong",
""
],
[
"Kim",
"Jinuk",
""
],
[
"Lee",
"Dokyeong",
""
],
[
"Park",
"Dongwook",
""
],
[
"Sohn",
"Jeong Min",
""
],
[
"Han",
"Sujung",
""
],
[
"Heo",
"Jiae",
""
],
[
"Hong",
"Sungju",
""
],
[
"Jeon",
"Mina",
""
],
[
"Jung",
"Hyunhoon",
""
],
[
"Jung",
"Jungeun",
""
],
[
"Jung",
"Wangkyo",
""
],
[
"Kim",
"Chungjoon",
""
],
[
"Kim",
"Hyeri",
""
],
[
"Kim",
"Jonghyun",
""
],
[
"Kim",
"Min Young",
""
],
[
"Lee",
"Soeun",
""
],
[
"Park",
"Joonhee",
""
],
[
"Shin",
"Jieun",
""
],
[
"Yang",
"Sojin",
""
],
[
"Yoon",
"Jungsoon",
""
],
[
"Lee",
"Hwaran",
""
],
[
"Bae",
"Sanghwan",
""
],
[
"Cha",
"Jeehwan",
""
],
[
"Gylleus",
"Karl",
""
],
[
"Ham",
"Donghoon",
""
],
[
"Hong",
"Mihak",
""
],
[
"Hong",
"Youngki",
""
],
[
"Hong",
"Yunki",
""
],
[
"Jang",
"Dahyun",
""
],
[
"Jeon",
"Hyojun",
""
],
[
"Jeon",
"Yujin",
""
],
[
"Jeong",
"Yeji",
""
],
[
"Ji",
"Myunggeun",
""
],
[
"Jin",
"Yeguk",
""
],
[
"Jo",
"Chansong",
""
],
[
"Joo",
"Shinyoung",
""
],
[
"Jung",
"Seunghwan",
""
],
[
"Kim",
"Adrian Jungmyung",
""
],
[
"Kim",
"Byoung Hoon",
""
],
[
"Kim",
"Hyomin",
""
],
[
"Kim",
"Jungwhan",
""
],
[
"Kim",
"Minkyoung",
""
],
[
"Kim",
"Minseung",
""
],
[
"Kim",
"Sungdong",
""
],
[
"Kim",
"Yonghee",
""
],
[
"Kim",
"Youngjun",
""
],
[
"Kim",
"Youngkwan",
""
],
[
"Ko",
"Donghyeon",
""
],
[
"Lee",
"Dughyun",
""
],
[
"Lee",
"Ha Young",
""
],
[
"Lee",
"Jaehong",
""
],
[
"Lee",
"Jieun",
""
],
[
"Lee",
"Jonghyun",
""
],
[
"Lee",
"Jongjin",
""
],
[
"Lee",
"Min Young",
""
],
[
"Lee",
"Yehbin",
""
],
[
"Min",
"Taehong",
""
],
[
"Min",
"Yuri",
""
],
[
"Moon",
"Kiyoon",
""
],
[
"Oh",
"Hyangnam",
""
],
[
"Park",
"Jaesun",
""
],
[
"Park",
"Kyuyon",
""
],
[
"Park",
"Younghun",
""
],
[
"Seo",
"Hanbae",
""
],
[
"Seo",
"Seunghyun",
""
],
[
"Sim",
"Mihyun",
""
],
[
"Son",
"Gyubin",
""
],
[
"Yeo",
"Matt",
""
],
[
"Yeom",
"Kyung Hoon",
""
],
[
"Yoo",
"Wonjoon",
""
],
[
"You",
"Myungin",
""
],
[
"Ahn",
"Doheon",
""
],
[
"Ahn",
"Homin",
""
],
[
"Ahn",
"Joohee",
""
],
[
"Ahn",
"Seongmin",
""
],
[
"An",
"Chanwoo",
""
],
[
"An",
"Hyeryun",
""
],
[
"An",
"Junho",
""
],
[
"An",
"Sang-Min",
""
],
[
"Byun",
"Boram",
""
],
[
"Byun",
"Eunbin",
""
],
[
"Cha",
"Jongho",
""
],
[
"Chang",
"Minji",
""
],
[
"Chang",
"Seunggyu",
""
],
[
"Cho",
"Haesong",
""
],
[
"Cho",
"Youngdo",
""
],
[
"Choi",
"Dalnim",
""
],
[
"Choi",
"Daseul",
""
],
[
"Choi",
"Hyoseok",
""
],
[
"Choi",
"Minseong",
""
],
[
"Choi",
"Sangho",
""
],
[
"Choi",
"Seongjae",
""
],
[
"Choi",
"Wooyong",
""
],
[
"Chun",
"Sewhan",
""
],
[
"Go",
"Dong Young",
""
],
[
"Ham",
"Chiheon",
""
],
[
"Han",
"Danbi",
""
],
[
"Han",
"Jaemin",
""
],
[
"Hong",
"Moonyoung",
""
],
[
"Hong",
"Sung Bum",
""
],
[
"Hwang",
"Dong-Hyun",
""
],
[
"Hwang",
"Seongchan",
""
],
[
"Im",
"Jinbae",
""
],
[
"Jang",
"Hyuk Jin",
""
],
[
"Jang",
"Jaehyung",
""
],
[
"Jang",
"Jaeni",
""
],
[
"Jang",
"Sihyeon",
""
],
[
"Jang",
"Sungwon",
""
],
[
"Jeon",
"Joonha",
""
],
[
"Jeong",
"Daun",
""
],
[
"Jeong",
"Joonhyun",
""
],
[
"Jeong",
"Kyeongseok",
""
],
[
"Jeong",
"Mini",
""
],
[
"Jin",
"Sol",
""
],
[
"Jo",
"Hanbyeol",
""
],
[
"Jo",
"Hanju",
""
],
[
"Jo",
"Minjung",
""
],
[
"Jung",
"Chaeyoon",
""
],
[
"Jung",
"Hyungsik",
""
],
[
"Jung",
"Jaeuk",
""
],
[
"Jung",
"Ju Hwan",
""
],
[
"Jung",
"Kwangsun",
""
],
[
"Jung",
"Seungjae",
""
],
[
"Ka",
"Soonwon",
""
],
[
"Kang",
"Donghan",
""
],
[
"Kang",
"Soyoung",
""
],
[
"Kil",
"Taeho",
""
],
[
"Kim",
"Areum",
""
],
[
"Kim",
"Beomyoung",
""
],
[
"Kim",
"Byeongwook",
""
],
[
"Kim",
"Daehee",
""
],
[
"Kim",
"Dong-Gyun",
""
],
[
"Kim",
"Donggook",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Kim",
"Euna",
""
],
[
"Kim",
"Eunchul",
""
],
[
"Kim",
"Geewook",
""
],
[
"Kim",
"Gyu Ri",
""
],
[
"Kim",
"Hanbyul",
""
],
[
"Kim",
"Heesu",
""
],
[
"Kim",
"Isaac",
""
],
[
"Kim",
"Jeonghoon",
""
],
[
"Kim",
"Jihye",
""
],
[
"Kim",
"Joonghoon",
""
],
[
"Kim",
"Minjae",
""
],
[
"Kim",
"Minsub",
""
],
[
"Kim",
"Pil Hwan",
""
],
[
"Kim",
"Sammy",
""
],
[
"Kim",
"Seokhun",
""
],
[
"Kim",
"Seonghyeon",
""
],
[
"Kim",
"Soojin",
""
],
[
"Kim",
"Soong",
""
],
[
"Kim",
"Soyoon",
""
],
[
"Kim",
"Sunyoung",
""
],
[
"Kim",
"Taeho",
""
],
[
"Kim",
"Wonho",
""
],
[
"Kim",
"Yoonsik",
""
],
[
"Kim",
"You Jin",
""
],
[
"Kim",
"Yuri",
""
],
[
"Kwon",
"Beomseok",
""
],
[
"Kwon",
"Ohsung",
""
],
[
"Kwon",
"Yoo-Hwan",
""
],
[
"Lee",
"Anna",
""
],
[
"Lee",
"Byungwook",
""
],
[
"Lee",
"Changho",
""
],
[
"Lee",
"Daun",
""
],
[
"Lee",
"Dongjae",
""
],
[
"Lee",
"Ha-Ram",
""
],
[
"Lee",
"Hodong",
""
],
[
"Lee",
"Hwiyeong",
""
],
[
"Lee",
"Hyunmi",
""
],
[
"Lee",
"Injae",
""
],
[
"Lee",
"Jaeung",
""
],
[
"Lee",
"Jeongsang",
""
],
[
"Lee",
"Jisoo",
""
],
[
"Lee",
"Jongsoo",
""
],
[
"Lee",
"Joongjae",
""
],
[
"Lee",
"Juhan",
""
],
[
"Lee",
"Jung Hyun",
""
],
[
"Lee",
"Junghoon",
""
],
[
"Lee",
"Junwoo",
""
],
[
"Lee",
"Se Yun",
""
],
[
"Lee",
"Sujin",
""
],
[
"Lee",
"Sungjae",
""
],
[
"Lee",
"Sungwoo",
""
],
[
"Lee",
"Wonjae",
""
],
[
"Lee",
"Zoo Hyun",
""
],
[
"Lim",
"Jong Kun",
""
],
[
"Lim",
"Kun",
""
],
[
"Lim",
"Taemin",
""
],
[
"Na",
"Nuri",
""
],
[
"Nam",
"Jeongyeon",
""
],
[
"Nam",
"Kyeong-Min",
""
],
[
"Noh",
"Yeonseog",
""
],
[
"Oh",
"Biro",
""
],
[
"Oh",
"Jung-Sik",
""
],
[
"Oh",
"Solgil",
""
],
[
"Oh",
"Yeontaek",
""
],
[
"Park",
"Boyoun",
""
],
[
"Park",
"Cheonbok",
""
],
[
"Park",
"Dongju",
""
],
[
"Park",
"Hyeonjin",
""
],
[
"Park",
"Hyun Tae",
""
],
[
"Park",
"Hyunjung",
""
],
[
"Park",
"Jihye",
""
],
[
"Park",
"Jooseok",
""
],
[
"Park",
"Junghwan",
""
],
[
"Park",
"Jungsoo",
""
],
[
"Park",
"Miru",
""
],
[
"Park",
"Sang Hee",
""
],
[
"Park",
"Seunghyun",
""
],
[
"Park",
"Soyoung",
""
],
[
"Park",
"Taerim",
""
],
[
"Park",
"Wonkyeong",
""
],
[
"Ryu",
"Hyunjoon",
""
],
[
"Ryu",
"Jeonghun",
""
],
[
"Ryu",
"Nahyeon",
""
],
[
"Seo",
"Soonshin",
""
],
[
"Seo",
"Suk Min",
""
],
[
"Shim",
"Yoonjeong",
""
],
[
"Shin",
"Kyuyong",
""
],
[
"Shin",
"Wonkwang",
""
],
[
"Sim",
"Hyun",
""
],
[
"Sim",
"Woongseob",
""
],
[
"Soh",
"Hyejin",
""
],
[
"Son",
"Bokyong",
""
],
[
"Son",
"Hyunjun",
""
],
[
"Son",
"Seulah",
""
],
[
"Song",
"Chi-Yun",
""
],
[
"Song",
"Chiyoung",
""
],
[
"Song",
"Ka Yeon",
""
],
[
"Song",
"Minchul",
""
],
[
"Song",
"Seungmin",
""
],
[
"Wang",
"Jisung",
""
],
[
"Yeo",
"Yonggoo",
""
],
[
"Yi",
"Myeong Yeon",
""
],
[
"Yim",
"Moon Bin",
""
],
[
"Yoo",
"Taehwan",
""
],
[
"Yoo",
"Youngjoon",
""
],
[
"Yoon",
"Sungmin",
""
],
[
"Yoon",
"Young Jin",
""
],
[
"Yu",
"Hangyeol",
""
],
[
"Yu",
"Ui Seon",
""
],
[
"Zuo",
"Xingdong",
""
],
[
"Bae",
"Jeongin",
""
],
[
"Bae",
"Joungeun",
""
],
[
"Cho",
"Hyunsoo",
""
],
[
"Cho",
"Seonghyun",
""
],
[
"Cho",
"Yongjin",
""
],
[
"Choi",
"Taekyoon",
""
],
[
"Choi",
"Yera",
""
],
[
"Chung",
"Jiwan",
""
],
[
"Han",
"Zhenghui",
""
],
[
"Heo",
"Byeongho",
""
],
[
"Hong",
"Euisuk",
""
],
[
"Hwang",
"Taebaek",
""
],
[
"Im",
"Seonyeol",
""
],
[
"Jegal",
"Sumin",
""
],
[
"Jeon",
"Sumin",
""
],
[
"Jeong",
"Yelim",
""
],
[
"Jeong",
"Yonghyun",
""
],
[
"Jiang",
"Can",
""
],
[
"Jiang",
"Juyong",
""
],
[
"Jin",
"Jiho",
""
],
[
"Jo",
"Ara",
""
],
[
"Jo",
"Younghyun",
""
],
[
"Jung",
"Hoyoun",
""
],
[
"Jung",
"Juyoung",
""
],
[
"Kang",
"Seunghyeong",
""
],
[
"Kim",
"Dae Hee",
""
],
[
"Kim",
"Ginam",
""
],
[
"Kim",
"Hangyeol",
""
],
[
"Kim",
"Heeseung",
""
],
[
"Kim",
"Hyojin",
""
],
[
"Kim",
"Hyojun",
""
],
[
"Kim",
"Hyun-Ah",
""
],
[
"Kim",
"Jeehye",
""
],
[
"Kim",
"Jin-Hwa",
""
],
[
"Kim",
"Jiseon",
""
],
[
"Kim",
"Jonghak",
""
],
[
"Kim",
"Jung Yoon",
""
],
[
"Kim",
"Rak Yeong",
""
],
[
"Kim",
"Seongjin",
""
],
[
"Kim",
"Seoyoon",
""
],
[
"Kim",
"Sewon",
""
],
[
"Kim",
"Sooyoung",
""
],
[
"Kim",
"Sukyoung",
""
],
[
"Kim",
"Taeyong",
""
],
[
"Ko",
"Naeun",
""
],
[
"Koo",
"Bonseung",
""
],
[
"Kwak",
"Heeyoung",
""
],
[
"Kwon",
"Haena",
""
],
[
"Kwon",
"Youngjin",
""
],
[
"Lee",
"Boram",
""
],
[
"Lee",
"Bruce W.",
""
],
[
"Lee",
"Dagyeong",
""
],
[
"Lee",
"Erin",
""
],
[
"Lee",
"Euijin",
""
],
[
"Lee",
"Ha Gyeong",
""
],
[
"Lee",
"Hyojin",
""
],
[
"Lee",
"Hyunjeong",
""
],
[
"Lee",
"Jeeyoon",
""
],
[
"Lee",
"Jeonghyun",
""
],
[
"Lee",
"Jongheok",
""
],
[
"Lee",
"Joonhyung",
""
],
[
"Lee",
"Junhyuk",
""
],
[
"Lee",
"Mingu",
""
],
[
"Lee",
"Nayeon",
""
],
[
"Lee",
"Sangkyu",
""
],
[
"Lee",
"Se Young",
""
],
[
"Lee",
"Seulgi",
""
],
[
"Lee",
"Seung Jin",
""
],
[
"Lee",
"Suhyeon",
""
],
[
"Lee",
"Yeonjae",
""
],
[
"Lee",
"Yesol",
""
],
[
"Lee",
"Youngbeom",
""
],
[
"Lee",
"Yujin",
""
],
[
"Li",
"Shaodong",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Moon",
"Seong-Eun",
""
],
[
"Moon",
"Taehong",
""
],
[
"Nihlenramstroem",
"Max-Lasse",
""
],
[
"Oh",
"Wonseok",
""
],
[
"Oh",
"Yuri",
""
],
[
"Park",
"Hongbeen",
""
],
[
"Park",
"Hyekyung",
""
],
[
"Park",
"Jaeho",
""
],
[
"Park",
"Nohil",
""
],
[
"Park",
"Sangjin",
""
],
[
"Ryu",
"Jiwon",
""
],
[
"Ryu",
"Miru",
""
],
[
"Ryu",
"Simo",
""
],
[
"Seo",
"Ahreum",
""
],
[
"Seo",
"Hee",
""
],
[
"Seo",
"Kangdeok",
""
],
[
"Shin",
"Jamin",
""
],
[
"Shin",
"Seungyoun",
""
],
[
"Sin",
"Heetae",
""
],
[
"Wang",
"Jiangping",
""
],
[
"Wang",
"Lei",
""
],
[
"Xiang",
"Ning",
""
],
[
"Xiao",
"Longxiang",
""
],
[
"Xu",
"Jing",
""
],
[
"Yi",
"Seonyeong",
""
],
[
"Yoo",
"Haanju",
""
],
[
"Yoo",
"Haneul",
""
],
[
"Yoo",
"Hwanhee",
""
],
[
"Yu",
"Liang",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Yuan",
"Weijie",
""
],
[
"Zeng",
"Bo",
""
],
[
"Zhou",
"Qian",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Ha",
"Jung-Woo",
""
],
[
"Park",
"Joonsuk",
""
],
[
"Hwang",
"Jihyun",
""
],
[
"Kwon",
"Hyoung Jo",
""
],
[
"Kwon",
"Soonyong",
""
],
[
"Lee",
"Jungyeon",
""
],
[
"Lee",
"Seungho",
""
],
[
"Lim",
"Seonghyeon",
""
],
[
"Noh",
"Hyunkyung",
""
],
[
"Choi",
"Seungho",
""
],
[
"Lee",
"Sang-Woo",
""
],
[
"Lim",
"Jung Hwa",
""
],
[
"Sung",
"Nako",
""
]
] |
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, along with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction-tuning with high-quality human-annotated datasets while abiding by strict safety guidelines reflecting our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean backed by a deep understanding of the language and cultural nuances. Further analysis of the inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization ability to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries in developing their sovereign LLMs.
|
2310.18630
|
Xu Chen
|
Xu Chen, XinXin He, Zhiyong Feng, Zhiqing Wei, Qixun Zhang, Xin Yuan,
and Ping Zhang
|
Joint Localization and Communication Enhancement in Uplink Integrated
Sensing and Communications System with Clock Asynchronism
|
13 pages, 11 figures, submitted to JSAC special issue "Positioning
and Sensing Over Wireless Networks"
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we propose a joint single-base localization and communication
enhancement scheme for the uplink (UL) integrated sensing and communications
(ISAC) system with asynchronism, which can achieve accurate single-base
localization of user equipment (UE) and significantly improve the communication
reliability despite the existence of timing offset (TO) due to the clock
asynchronism between UE and base station (BS). Our proposed scheme integrates
the CSI enhancement into the multiple signal classification (MUSIC)-based AoA
estimation and thus imposes no extra complexity on the ISAC system. We further
exploit a MUSIC-based range estimation method and prove that it can suppress
the time-varying TO-related phase terms. Exploiting the AoA and range
estimation of UE, we can estimate the location of UE. Finally, we propose a
joint CSI and data signals-based localization scheme that can coherently
exploit the data and the CSI signals to improve the AoA and range estimation,
which further enhances the single-base localization of UE. The extensive
simulation results show that the enhanced CSI can achieve equivalent bit error
rate performance to the minimum mean square error (MMSE) CSI estimator. The
proposed joint CSI and data signals-based localization scheme can achieve
decimeter-level localization accuracy despite the existing clock asynchronism
and improve the localization mean square error (MSE) by about 8 dB compared
with the maximum likelihood (ML)-based benchmark method.
|
[
{
"created": "Sat, 28 Oct 2023 07:57:35 GMT",
"version": "v1"
}
] |
2023-10-31
|
[
[
"Chen",
"Xu",
""
],
[
"He",
"XinXin",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Wei",
"Zhiqing",
""
],
[
"Zhang",
"Qixun",
""
],
[
"Yuan",
"Xin",
""
],
[
"Zhang",
"Ping",
""
]
] |
In this paper, we propose a joint single-base localization and communication enhancement scheme for the uplink (UL) integrated sensing and communications (ISAC) system with asynchronism, which can achieve accurate single-base localization of user equipment (UE) and significantly improve the communication reliability despite the existence of timing offset (TO) due to the clock asynchronism between UE and base station (BS). Our proposed scheme integrates the CSI enhancement into the multiple signal classification (MUSIC)-based AoA estimation and thus imposes no extra complexity on the ISAC system. We further exploit a MUSIC-based range estimation method and prove that it can suppress the time-varying TO-related phase terms. Exploiting the AoA and range estimation of UE, we can estimate the location of UE. Finally, we propose a joint CSI and data signals-based localization scheme that can coherently exploit the data and the CSI signals to improve the AoA and range estimation, which further enhances the single-base localization of UE. The extensive simulation results show that the enhanced CSI can achieve bit error rate performance equivalent to the minimum mean square error (MMSE) CSI estimator. The proposed joint CSI and data signals-based localization scheme can achieve decimeter-level localization accuracy despite the clock asynchronism and improve the localization mean square error (MSE) by about 8 dB compared with the maximum likelihood (ML)-based benchmark method.
|
1509.04806
|
Francisco Su\'arez-Ruiz
|
Francisco Su\'arez-Ruiz and Quang-Cuong Pham
|
A Framework for Fine Robotic Assembly
|
8 pages, 7 figures, 2 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine robotic assembly, in which the parts to be assembled are small and
fragile and lie in an unstructured environment, is still out of reach of
today's industrial robots. The main difficulties arise in the precise
localization of the parts in an unstructured environment and the control of
contact interactions. Our contribution in this paper is twofold. First, we
propose a taxonomy of the manipulation primitives that are specifically
involved in fine assembly. Such a taxonomy is crucial for designing a scalable
robotic system (both hardware and software) given the complexity of real-world
assembly tasks. Second, we present a hardware and software architecture where
we have addressed, in an integrated way, a number of issues arising in fine
assembly, such as workspace optimization, external wrench compensation,
position-based force control, etc. Finally, we show the above taxonomy and
architecture in action on a highly dexterous task -- bimanual pin insertion --
which is one of the key steps in our long term project, the autonomous assembly
of an IKEA chair.
|
[
{
"created": "Wed, 16 Sep 2015 03:55:33 GMT",
"version": "v1"
}
] |
2015-09-17
|
[
[
"Suárez-Ruiz",
"Francisco",
""
],
[
"Pham",
"Quang-Cuong",
""
]
] |
Fine robotic assembly, in which the parts to be assembled are small and fragile and lie in an unstructured environment, is still out of reach of today's industrial robots. The main difficulties arise in the precise localization of the parts in an unstructured environment and the control of contact interactions. Our contribution in this paper is twofold. First, we propose a taxonomy of the manipulation primitives that are specifically involved in fine assembly. Such a taxonomy is crucial for designing a scalable robotic system (both hardware and software) given the complexity of real-world assembly tasks. Second, we present a hardware and software architecture where we have addressed, in an integrated way, a number of issues arising in fine assembly, such as workspace optimization, external wrench compensation, position-based force control, etc. Finally, we show the above taxonomy and architecture in action on a highly dexterous task -- bimanual pin insertion -- which is one of the key steps in our long term project, the autonomous assembly of an IKEA chair.
|
1601.05187
|
Ron van der Meyden
|
Sebastian Eggert and Ron van der Meyden
|
Dynamic Intransitive Noninterference Revisited
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper studies dynamic information flow security policies in an
automaton-based model. Two semantic interpretations of such policies are
developed, both of which generalize the notion of TA-security [van der Meyden
ESORICS 2007] for static intransitive noninterference policies. One of the
interpretations focuses on information flows permitted by policy edges, the
other focuses on prohibitions implied by absence of policy edges. In general,
the two interpretations differ, but necessary and sufficient conditions are
identified for the two interpretations to be equivalent. Sound and complete
proof techniques are developed for both interpretations. Two applications of
the theory are presented. The first is a general result showing that access
control mechanisms are able to enforce a dynamic information flow policy. The
second is a simple capability system motivated by the Flume operating system.
|
[
{
"created": "Wed, 20 Jan 2016 07:04:28 GMT",
"version": "v1"
}
] |
2016-01-21
|
[
[
"Eggert",
"Sebastian",
""
],
[
"van der Meyden",
"Ron",
""
]
] |
The paper studies dynamic information flow security policies in an automaton-based model. Two semantic interpretations of such policies are developed, both of which generalize the notion of TA-security [van der Meyden ESORICS 2007] for static intransitive noninterference policies. One of the interpretations focuses on information flows permitted by policy edges, the other focuses on prohibitions implied by absence of policy edges. In general, the two interpretations differ, but necessary and sufficient conditions are identified for the two interpretations to be equivalent. Sound and complete proof techniques are developed for both interpretations. Two applications of the theory are presented. The first is a general result showing that access control mechanisms are able to enforce a dynamic information flow policy. The second is a simple capability system motivated by the Flume operating system.
|
2311.03225
|
Yuto Okada
|
Tatsuya Gima, Soh Kumabe, Kazuhiro Kurita, Yuto Okada, Yota Otachi
|
Dichotomies for Tree Minor Containment with Structural Parameters
|
25 pages, 4 figures, WALCOM 2024
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of determining whether a graph $G$ contains another graph $H$ as
a minor, referred to as the minor containment problem, is a fundamental problem
in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are
general graphs, it is sometimes tractable on more restricted graph classes.
This study focuses on the case where both $G$ and $H$ are trees, known as the
tree minor containment problem. Even in this case, the problem is known to be
NP-complete. In contrast, polynomial-time algorithms are known for the case
when both trees are caterpillars or when the maximum degree of $H$ is a
constant. Our research aims to clarify the boundary of tractability and
intractability for the tree minor containment problem. Specifically, we provide
dichotomies for the computational complexities of the problem based on three
structural parameters: the diameter, pathwidth, and path eccentricity.
|
[
{
"created": "Mon, 6 Nov 2023 16:11:37 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Gima",
"Tatsuya",
""
],
[
"Kumabe",
"Soh",
""
],
[
"Kurita",
"Kazuhiro",
""
],
[
"Okada",
"Yuto",
""
],
[
"Otachi",
"Yota",
""
]
] |
The problem of determining whether a graph $G$ contains another graph $H$ as a minor, referred to as the minor containment problem, is a fundamental problem in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are general graphs, it is sometimes tractable on more restricted graph classes. This study focuses on the case where both $G$ and $H$ are trees, known as the tree minor containment problem. Even in this case, the problem is known to be NP-complete. In contrast, polynomial-time algorithms are known for the case when both trees are caterpillars or when the maximum degree of $H$ is a constant. Our research aims to clarify the boundary of tractability and intractability for the tree minor containment problem. Specifically, we provide dichotomies for the computational complexities of the problem based on three structural parameters: the diameter, pathwidth, and path eccentricity.
|
1312.6173
|
Karl Moritz Hermann
|
Karl Moritz Hermann and Phil Blunsom
|
Multilingual Distributed Representations without Word Alignment
|
To appear at ICLR 2014
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed representations of meaning are a natural way to encode covariance
relationships between words and phrases in NLP. By overcoming data sparsity
problems, as well as providing information about semantic relatedness which is
not available in discrete representations, distributed representations have
proven useful in many NLP tasks. Recent work has shown how compositional
semantic representations can successfully be applied to a number of monolingual
applications such as sentiment analysis. At the same time, there has been some
initial success in work on learning shared word-level representations across
languages. We combine these two approaches by proposing a method for learning
distributed representations in a multilingual setup. Our model learns to assign
similar embeddings to aligned sentences and dissimilar ones to sentences which
are not aligned, while not requiring word alignments. We show that our
representations are semantically informative and apply them to a cross-lingual
document classification task where we outperform the previous state of the art.
Further, by employing parallel corpora of multiple language pairs we find that
our model learns representations that capture semantic relationships across
languages for which no parallel data was used.
|
[
{
"created": "Fri, 20 Dec 2013 23:13:38 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Feb 2014 20:24:06 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Mar 2014 17:52:13 GMT",
"version": "v3"
},
{
"created": "Thu, 20 Mar 2014 13:55:02 GMT",
"version": "v4"
}
] |
2014-03-21
|
[
[
"Hermann",
"Karl Moritz",
""
],
[
"Blunsom",
"Phil",
""
]
] |
Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentences which are not aligned, while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.
|
2302.10801
|
Halid Ziya Yerebakan
|
Halid Ziya Yerebakan, Gerardo Hermosillo Valadez
|
Deep Generative Neural Embeddings for High Dimensional Data
Visualization
|
High Dimensional Data Visualization
| null | null | null |
cs.LG cs.CV cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose a visualization technique that utilizes neural network embeddings
and a generative network to reconstruct original data. This method allows for
independent manipulation of individual image embeddings through its
non-parametric structure, providing more flexibility than traditional
autoencoder approaches. We have evaluated the effectiveness of this technique
in data visualization and compared it to t-SNE and VAE methods. Furthermore, we
have demonstrated the scalability of our method through visualizations on the
ImageNet dataset. Our technique has potential applications in human-in-the-loop
training, as it allows for independent editing of embedding locations without
affecting the optimization process.
|
[
{
"created": "Wed, 25 Jan 2023 14:18:09 GMT",
"version": "v1"
}
] |
2023-02-22
|
[
[
"Yerebakan",
"Halid Ziya",
""
],
[
"Valadez",
"Gerardo Hermosillo",
""
]
] |
We propose a visualization technique that utilizes neural network embeddings and a generative network to reconstruct original data. This method allows for independent manipulation of individual image embeddings through its non-parametric structure, providing more flexibility than traditional autoencoder approaches. We have evaluated the effectiveness of this technique in data visualization and compared it to t-SNE and VAE methods. Furthermore, we have demonstrated the scalability of our method through visualizations on the ImageNet dataset. Our technique has potential applications in human-in-the-loop training, as it allows for independent editing of embedding locations without affecting the optimization process.
|
2209.10155
|
Zihui Guo
|
Zihui Guo, Yonghong Hou, Pichao Wang, Zhimin Gao, Mingliang Xu, and
Wanqing Li
|
FT-HID: A Large Scale RGB-D Dataset for First and Third Person Human
Interaction Analysis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analysis of human interaction is an important research topic in human motion
analysis. It has been studied either using first person vision (FPV) or third
person vision (TPV). However, the joint learning of both types of vision has so
far attracted little attention. One of the reasons is the lack of suitable
datasets that cover both FPV and TPV. In addition, existing benchmark datasets
of either FPV or TPV have several limitations, including the limited number of
samples, participant subjects, interaction categories, and modalities. In this
work, we contribute a large-scale human interaction dataset, namely, FT-HID
dataset. FT-HID contains pair-aligned samples of first person and third person
visions. The dataset was collected from 109 distinct subjects and has more than
90K samples for three modalities. The dataset has been validated by using
several existing action recognition methods. In addition, we introduce a novel
multi-view interaction mechanism for skeleton sequences, and a joint learning
multi-stream framework for first person and third person visions. Both methods
yield promising results on the FT-HID dataset. It is expected that the
introduction of this vision-aligned large-scale dataset will promote the
development of both FPV and TPV, and their joint learning techniques for human
action analysis. The dataset and code are available at
\href{https://github.com/ENDLICHERE/FT-HID}{here}.
|
[
{
"created": "Wed, 21 Sep 2022 07:24:15 GMT",
"version": "v1"
}
] |
2022-09-22
|
[
[
"Guo",
"Zihui",
""
],
[
"Hou",
"Yonghong",
""
],
[
"Wang",
"Pichao",
""
],
[
"Gao",
"Zhimin",
""
],
[
"Xu",
"Mingliang",
""
],
[
"Li",
"Wanqing",
""
]
] |
Analysis of human interaction is an important research topic in human motion analysis. It has been studied either using first person vision (FPV) or third person vision (TPV). However, the joint learning of both types of vision has so far attracted little attention. One of the reasons is the lack of suitable datasets that cover both FPV and TPV. In addition, existing benchmark datasets of either FPV or TPV have several limitations, including the limited number of samples, participant subjects, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, namely, FT-HID dataset. FT-HID contains pair-aligned samples of first person and third person visions. The dataset was collected from 109 distinct subjects and has more than 90K samples for three modalities. The dataset has been validated by using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint learning multi-stream framework for first person and third person visions. Both methods yield promising results on the FT-HID dataset. It is expected that the introduction of this vision-aligned large-scale dataset will promote the development of both FPV and TPV, and their joint learning techniques for human action analysis. The dataset and code are available at \href{https://github.com/ENDLICHERE/FT-HID}{here}.
|
1109.1314
|
Tom Schaul
|
Tom Schaul, Julian Togelius, J\"urgen Schmidhuber
|
Measuring Intelligence through Games
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial general intelligence (AGI) refers to research aimed at tackling
the full problem of artificial intelligence, that is, creating truly intelligent
agents. This sets it apart from most AI research which aims at solving
relatively narrow domains, such as character recognition, motion planning, or
increasing player satisfaction in games. But how do we know when an agent is
truly intelligent? A common point of reference in the AGI community is Legg and
Hutter's formal definition of universal intelligence, which has the appeal of
simplicity and generality but is unfortunately incomputable. Games of various
kinds are commonly used as benchmarks for "narrow" AI research, as they are
considered to have many important properties. We argue that many of these
properties carry over to the testing of general intelligence as well. We then
sketch how such testing could practically be carried out. The central part of
this sketch is an extension of universal intelligence to deal with finite time,
and the use of sampling of the space of games expressed in a suitably biased
game description language.
|
[
{
"created": "Tue, 6 Sep 2011 22:13:30 GMT",
"version": "v1"
}
] |
2011-09-08
|
[
[
"Schaul",
"Tom",
""
],
[
"Togelius",
"Julian",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] |
Artificial general intelligence (AGI) refers to research aimed at tackling the full problem of artificial intelligence, that is, creating truly intelligent agents. This sets it apart from most AI research which aims at solving relatively narrow domains, such as character recognition, motion planning, or increasing player satisfaction in games. But how do we know when an agent is truly intelligent? A common point of reference in the AGI community is Legg and Hutter's formal definition of universal intelligence, which has the appeal of simplicity and generality but is unfortunately incomputable. Games of various kinds are commonly used as benchmarks for "narrow" AI research, as they are considered to have many important properties. We argue that many of these properties carry over to the testing of general intelligence as well. We then sketch how such testing could practically be carried out. The central part of this sketch is an extension of universal intelligence to deal with finite time, and the use of sampling of the space of games expressed in a suitably biased game description language.
|
2202.06407
|
En Yen Puang
|
En Yen Puang, Hao Zhang, Hongyuan Zhu, Wei Jing
|
Hierarchical Point Cloud Encoding and Decoding with Lightweight
Self-Attention based Model
|
Accepted by RA-Letters and ICRA 2022
| null |
10.1109/LRA.2022.3149569
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present SA-CNN, a hierarchical and lightweight
self-attention based encoding and decoding architecture for representation
learning of point cloud data. The proposed SA-CNN introduces convolution and
transposed convolution stacks to capture and generate contextual information
among unordered 3D points. Following a conventional hierarchical pipeline, the
encoding process extracts features in a local-to-global manner, while the
decoding process generates features and point clouds in coarse-to-fine, multi-resolution
stages. We demonstrate that SA-CNN is capable of a wide range of applications,
namely classification, part segmentation, reconstruction, shape retrieval, and
unsupervised classification. While achieving the state-of-the-art or comparable
performance in the benchmarks, SA-CNN maintains its model complexity several
orders of magnitude lower than the others. In terms of qualitative results, we
visualize the multi-stage point cloud reconstructions and latent walks on rigid
objects as well as deformable non-rigid human and robot models.
|
[
{
"created": "Sun, 13 Feb 2022 21:10:06 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Puang",
"En Yen",
""
],
[
"Zhang",
"Hao",
""
],
[
"Zhu",
"Hongyuan",
""
],
[
"Jing",
"Wei",
""
]
] |
In this paper we present SA-CNN, a hierarchical and lightweight self-attention based encoding and decoding architecture for representation learning of point cloud data. The proposed SA-CNN introduces convolution and transposed convolution stacks to capture and generate contextual information among unordered 3D points. Following a conventional hierarchical pipeline, the encoding process extracts features in a local-to-global manner, while the decoding process generates features and point clouds in coarse-to-fine, multi-resolution stages. We demonstrate that SA-CNN is capable of a wide range of applications, namely classification, part segmentation, reconstruction, shape retrieval, and unsupervised classification. While achieving the state-of-the-art or comparable performance in the benchmarks, SA-CNN maintains its model complexity several orders of magnitude lower than the others. In terms of qualitative results, we visualize the multi-stage point cloud reconstructions and latent walks on rigid objects as well as deformable non-rigid human and robot models.
|
2307.00610
|
Inna Vogel
|
Raphael Frick, Inna Vogel
|
Fraunhofer SIT at CheckThat! 2023: Mixing Single-Modal Classifiers to
Estimate the Check-Worthiness of Multi-Modal Tweets
|
8 pages
|
CLEF 2023
| null | null |
cs.LG cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The option of sharing images, videos and audio files on social media opens up
new possibilities for distinguishing between false information and fake news on
the Internet. Due to the vast amount of data shared every second on social
media, not all data can be verified by a computer or a human expert. Here, a
check-worthiness analysis can be used as a first step in the fact-checking
pipeline and as a filtering mechanism to improve efficiency. This paper
proposes a novel way of detecting the check-worthiness in multi-modal tweets.
It takes advantage of two classifiers, each trained on a single modality. For
image data, extracting the embedded text with an OCR analysis has been shown to
perform best. By combining the two classifiers, the proposed solution was able
to place first in the CheckThat! 2023 Task 1A with an F1 score of 0.7297
achieved on the private test set.
|
[
{
"created": "Sun, 2 Jul 2023 16:35:54 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jul 2023 14:54:13 GMT",
"version": "v2"
}
] |
2023-07-28
|
[
[
"Frick",
"Raphael",
""
],
[
"Vogel",
"Inna",
""
]
] |
The option of sharing images, videos and audio files on social media opens up new possibilities for distinguishing between false information and fake news on the Internet. Due to the vast amount of data shared every second on social media, not all data can be verified by a computer or a human expert. Here, a check-worthiness analysis can be used as a first step in the fact-checking pipeline and as a filtering mechanism to improve efficiency. This paper proposes a novel way of detecting the check-worthiness in multi-modal tweets. It takes advantage of two classifiers, each trained on a single modality. For image data, extracting the embedded text with an OCR analysis has been shown to perform best. By combining the two classifiers, the proposed solution was able to place first in the CheckThat! 2023 Task 1A with an F1 score of 0.7297 achieved on the private test set.
|
1704.06611
|
Jonathon Cai
|
Jonathon Cai, Richard Shin, Dawn Song
|
Making Neural Programming Architectures Generalize via Recursion
|
Published in ICLR 2017
| null | null | null |
cs.LG cs.NE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Empirically, neural networks that attempt to learn programs from data have
exhibited poor generalizability. Moreover, it has traditionally been difficult
to reason about the behavior of these models beyond a certain level of input
complexity. In order to address these issues, we propose augmenting neural
architectures with a key abstraction: recursion. As an application, we
implement recursion in the Neural Programmer-Interpreter framework on four
tasks: grade-school addition, bubble sort, topological sort, and quicksort. We
demonstrate superior generalizability and interpretability with small amounts
of training data. Recursion divides the problem into smaller pieces and
drastically reduces the domain of each neural network component, making it
tractable to prove guarantees about the overall system's behavior. Our
experience suggests that in order for neural architectures to robustly learn
program semantics, it is necessary to incorporate a concept like recursion.
|
[
{
"created": "Fri, 21 Apr 2017 16:02:26 GMT",
"version": "v1"
}
] |
2017-04-24
|
[
[
"Cai",
"Jonathon",
""
],
[
"Shin",
"Richard",
""
],
[
"Song",
"Dawn",
""
]
] |
Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.
|
1905.13127
|
Xiao Zhou
|
Xiao Zhou, Cecilia Mascolo and Zhongxiang Zhao
|
Topic-Enhanced Memory Networks for Personalised Point-of-Interest
Recommendation
|
11 pages, 6 figures, The 25th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD '19)
| null |
10.1145/3292500.3330781
| null |
cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point-of-Interest (POI) recommender systems play a vital role in people's
lives by recommending unexplored POIs to users and have drawn extensive
attention from both academia and industry. Despite their value, however, they
still suffer from the challenges of capturing complicated user preferences and
fine-grained user-POI relationship for spatio-temporal sensitive POI
recommendation. Existing recommendation algorithms, including both shallow and
deep approaches, usually embed the visiting records of a user into a single
latent vector to model user preferences: this has limited power of
representation and interpretability. In this paper, we propose a novel
topic-enhanced memory network (TEMN), a deep architecture to integrate the
topic model and memory network capitalising on the strengths of both the global
structure of latent patterns and local neighbourhood-based features in a
nonlinear fashion. We further incorporate a geographical module to exploit
user-specific spatial preference and POI-specific spatial influence to enhance
recommendations. The proposed unified hybrid model is widely applicable to
various POI recommendation scenarios. Extensive experiments on real-world
WeChat datasets demonstrate its effectiveness (improvement ratio of 3.25% and
29.95% for context-aware and sequential recommendation, respectively). Also,
qualitative analysis of the attention weights and topic modeling provides
insight into the model's recommendation process and results.
|
[
{
"created": "Sun, 19 May 2019 18:00:05 GMT",
"version": "v1"
}
] |
2019-05-31
|
[
[
"Zhou",
"Xiao",
""
],
[
"Mascolo",
"Cecilia",
""
],
[
"Zhao",
"Zhongxiang",
""
]
] |
Point-of-Interest (POI) recommender systems play a vital role in people's lives by recommending unexplored POIs to users and have drawn extensive attention from both academia and industry. Despite their value, however, they still suffer from the challenges of capturing complicated user preferences and fine-grained user-POI relationship for spatio-temporal sensitive POI recommendation. Existing recommendation algorithms, including both shallow and deep approaches, usually embed the visiting records of a user into a single latent vector to model user preferences: this has limited power of representation and interpretability. In this paper, we propose a novel topic-enhanced memory network (TEMN), a deep architecture to integrate the topic model and memory network capitalising on the strengths of both the global structure of latent patterns and local neighbourhood-based features in a nonlinear fashion. We further incorporate a geographical module to exploit user-specific spatial preference and POI-specific spatial influence to enhance recommendations. The proposed unified hybrid model is widely applicable to various POI recommendation scenarios. Extensive experiments on real-world WeChat datasets demonstrate its effectiveness (improvement ratio of 3.25% and 29.95% for context-aware and sequential recommendation, respectively). Also, qualitative analysis of the attention weights and topic modeling provides insight into the model's recommendation process and results.
|
1401.2405
|
Ghassan Samara
|
Ghassan Samara, Tareq Alhmiedat, Amer O. Abu Salem
|
Dynamic Safety Message Power Control in VANET Using PSO
|
9 pages. arXiv admin note: substantial text overlap with
arXiv:1311.2364
|
The World of Computer Science and Information Technology Journal
(WSCIT). 2013, Volume 3, Issue 10. pp. 176.184
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, Vehicular Ad hoc Networks (VANET) have become one of the most
challenging research areas in the field of Mobile Ad hoc Networks (MANET).
Vehicles in a VANET send emergency and periodic safety messages through one
control channel with limited bandwidth, which causes growing collisions on the
channel, especially in dense traffic situations. In this paper, a protocol
called Particle swarm optimization Beacon Power Control (PBPC) is proposed,
which performs dynamic transmission power control to adjust the transmission
power of the periodic safety messages that are aggressively sent by all
vehicles on the road 10 times per second. The proposed protocol aims to
decrease the packet collisions resulting from periodic safety messages, which
in turn controls the load on the channel while ensuring a high probability of
message reception within the safety distance of the sender vehicle.
|
[
{
"created": "Fri, 10 Jan 2014 17:10:20 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jan 2014 13:22:00 GMT",
"version": "v2"
}
] |
2014-01-22
|
[
[
"Samara",
"Ghassan",
""
],
[
"Alhmiedat",
"Tareq",
""
],
[
"Salem",
"Amer O. Abu",
""
]
] |
In recent years, Vehicular Ad hoc Networks (VANET) have become one of the most challenging research areas in the field of Mobile Ad hoc Networks (MANET). Vehicles in a VANET send emergency and periodic safety messages through one control channel with limited bandwidth, which causes growing collisions on the channel, especially in dense traffic situations. In this paper, a protocol called Particle swarm optimization Beacon Power Control (PBPC) is proposed, which performs dynamic transmission power control to adjust the transmission power of the periodic safety messages that are aggressively sent by all vehicles on the road 10 times per second. The proposed protocol aims to decrease the packet collisions resulting from periodic safety messages, which in turn controls the load on the channel while ensuring a high probability of message reception within the safety distance of the sender vehicle.
|
1911.02373
|
Davood Mohajerani
|
Alexander Brandt, Davood Mohajerani, Marc Moreno Maza, Jeeva Paudel,
Linxiao Wang
|
KLARAPTOR: A Tool for Dynamically Finding Optimal Kernel Launch
Parameters Targeting CUDA Programs
|
10 pages. arXiv admin note: text overlap with arXiv:1906.00142
| null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present KLARAPTOR (Kernel LAunch parameters RAtional Program
estimaTOR), a new tool built on top of the LLVM Pass Framework and NVIDIA CUPTI
API to dynamically determine the optimal values of kernel launch parameters of
a CUDA program P. To be precise, we describe a novel technique to statically
build (at the compile time of P) a so-called rational program R. Using a
performance prediction model, and knowing particular data and hardware
parameters of P at runtime, the program R can automatically and dynamically
determine the values of launch parameters of P that will yield optimal
performance. Our technique can be applied to parallel programs in general, as
well as to generic performance prediction models which account for program and
hardware parameters. We are particularly interested in programs targeting
manycore accelerators. We have implemented and successfully tested our
technique in the context of GPU kernels written in CUDA using the MWP-CWP
performance prediction model.
|
[
{
"created": "Tue, 5 Nov 2019 00:24:56 GMT",
"version": "v1"
}
] |
2019-11-07
|
[
[
"Brandt",
"Alexander",
""
],
[
"Mohajerani",
"Davood",
""
],
[
"Maza",
"Marc Moreno",
""
],
[
"Paudel",
"Jeeva",
""
],
[
"Wang",
"Linxiao",
""
]
] |
In this paper we present KLARAPTOR (Kernel LAunch parameters RAtional Program estimaTOR), a new tool built on top of the LLVM Pass Framework and NVIDIA CUPTI API to dynamically determine the optimal values of kernel launch parameters of a CUDA program P. To be precise, we describe a novel technique to statically build (at the compile time of P) a so-called rational program R. Using a performance prediction model, and knowing particular data and hardware parameters of P at runtime, the program R can automatically and dynamically determine the values of launch parameters of P that will yield optimal performance. Our technique can be applied to parallel programs in general, as well as to generic performance prediction models which account for program and hardware parameters. We are particularly interested in programs targeting manycore accelerators. We have implemented and successfully tested our technique in the context of GPU kernels written in CUDA using the MWP-CWP performance prediction model.
|
2103.16345
|
Nicolas Moes
|
Nicolas Moes and Nicolas Chevaugeon
|
Lipschitz regularization for softening material models: the Lip-field
approach
|
21 pages
|
Comptes Rendus. M\'ecanique, Tome 349 (2021) no. 2, pp. 415-434
|
10.5802/crmeca.91
| null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Softening material models are known to trigger spurious localizations. This
may be shown theoretically by the existence of solutions with zero dissipation
when localization occurs and numerically with spurious mesh dependency and
localization in a single layer of elements. We introduce in this paper a new
way to avoid spurious localization. The idea is to enforce a Lipschitz
regularity on the internal variables responsible for the material softening.
The regularity constraint introduces the needed length scale in the material
formulation. Moreover, we prove bounds on the domain affected by this
constraint. A first one-dimensional finite element implementation is proposed
for softening elasticity and softening plasticity.
|
[
{
"created": "Tue, 30 Mar 2021 13:43:05 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Jul 2021 12:40:26 GMT",
"version": "v2"
}
] |
2021-08-10
|
[
[
"Moes",
"Nicolas",
""
],
[
"Chevaugeon",
"Nicolas",
""
]
] |
Softening material models are known to trigger spurious localizations. This may be shown theoretically by the existence of solutions with zero dissipation when localization occurs and numerically with spurious mesh dependency and localization in a single layer of elements. We introduce in this paper a new way to avoid spurious localization. The idea is to enforce a Lipschitz regularity on the internal variables responsible for the material softening. The regularity constraint introduces the needed length scale in the material formulation. Moreover, we prove bounds on the domain affected by this constraint. A first one-dimensional finite element implementation is proposed for softening elasticity and softening plasticity.
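As an editorial illustration of the abstract above: enforcing a Lipschitz bound on a softening internal variable can be sketched in a few lines of NumPy. This uses the classical McShane/Whitney Lipschitz envelopes on a 1D nodal field, not the authors' finite element formulation; the node count, length scale, and spike placement are invented for the demo.

```python
import numpy as np

def lipschitz_envelopes(x, d, L):
    """McShane/Whitney envelopes: the largest L-Lipschitz field below d
    and the smallest L-Lipschitz field above d, evaluated at the nodes x."""
    dist = np.abs(x[:, None] - x[None, :])          # pairwise |x_i - x_j|
    lower = np.min(d[None, :] + L * dist, axis=1)   # largest L-Lipschitz field <= d
    upper = np.max(d[None, :] - L * dist, axis=1)   # smallest L-Lipschitz field >= d
    return lower, upper

x = np.linspace(0.0, 1.0, 101)
d = np.zeros_like(x)
d[50] = 1.0      # spurious single-point localization of a damage-like variable
L = 5.0          # Lipschitz bound, i.e. 1 / (length scale of 0.2)

low, up = lipschitz_envelopes(x, d, L)
dx = x[1] - x[0]
# both envelopes satisfy the Lipschitz constraint ...
max_slope = max(np.max(np.abs(np.diff(low))), np.max(np.abs(np.diff(up)))) / dx
# ... and the constraint only affects nodes within the length scale of the spike
affected = np.flatnonzero(up > 1e-12)
```

The upper envelope spreads the one-point spike over the length scale 1/L, which mirrors the abstract's claim that the constraint introduces a length scale and affects only a bounded domain around the localization.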
|
1308.4469
|
Craig Dillabaugh Dr
|
Craig Dillabaugh
|
External Memory Algorithms For Path Traversal in Graphs
|
181 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This thesis presents a number of results related to path traversal in trees
and graphs. In particular, we focus on data structures which allow such
traversals to be performed efficiently in the external memory setting. In
addition, for trees and planar graphs the data structures we present are
succinct. Our tree structures permit efficient bottom-up path traversal in
rooted trees of arbitrary degree and efficient top-down path traversal in
binary trees. In the graph setting, we permit efficient traversal of an
arbitrary path in bounded degree planar graphs. Our data structures for both
trees and graphs match or slightly improve current best results for external
memory path traversal in these settings while at the same time improving space
bounds due to the succinct nature of our data structures. Employing our path
traversal structure for bounded degree planar graphs, we describe a number of
useful applications of this technique for triangular meshes in R^2. As an
extension of the R^2 representation for triangular meshes we also present an
efficient external memory representation for well-shaped tetrahedral meshes in
R^3. The external memory representation we present is based on a partitioning
scheme that matches the current best-known results for well-shaped tetrahedral
meshes. We describe applications of path traversal in tetrahedral meshes which
are made efficient in the external memory setting using our structure. Finally,
we present a result on using jump-and-walk point location in well-shaped meshes
in both R^2 and R^3. We demonstrate that, given an approximate nearest
neighbour from among the vertices of a mesh, locating the simplex containing
the query point involves a constant length walk (path traversal) in the mesh.
|
[
{
"created": "Wed, 21 Aug 2013 01:56:50 GMT",
"version": "v1"
}
] |
2013-08-22
|
[
[
"Dillabaugh",
"Craig",
""
]
] |
This thesis presents a number of results related to path traversal in trees and graphs. In particular, we focus on data structures which allow such traversals to be performed efficiently in the external memory setting. In addition, for trees and planar graphs the data structures we present are succinct. Our tree structures permit efficient bottom-up path traversal in rooted trees of arbitrary degree and efficient top-down path traversal in binary trees. In the graph setting, we permit efficient traversal of an arbitrary path in bounded degree planar graphs. Our data structures for both trees and graphs match or slightly improve current best results for external memory path traversal in these settings while at the same time improving space bounds due to the succinct nature of our data structures. Employing our path traversal structure for bounded degree planar graphs, we describe a number of useful applications of this technique for triangular meshes in R^2. As an extension of the R^2 representation for triangular meshes we also present an efficient external memory representation for well-shaped tetrahedral meshes in R^3. The external memory representation we present is based on a partitioning scheme that matches the current best-known results for well-shaped tetrahedral meshes. We describe applications of path traversal in tetrahedral meshes which are made efficient in the external memory setting using our structure. Finally, we present a result on using jump-and-walk point location in well-shaped meshes in both R^2 and R^3. We demonstrate that, given an approximate nearest neighbour from among the vertices of a mesh, locating the simplex containing the query point involves a constant length walk (path traversal) in the mesh.
|
1711.05557
|
Chee Seng Chan
|
Ying Hua Tan, Chee Seng Chan
|
Phrase-based Image Captioning with Hierarchical LSTM Model
|
17 pages, 12 figures, ACCV2016 extension, phrase-based image
captioning
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic generation of captions to describe the content of an image has been
gaining a lot of research interest recently, where most existing works treat
the image caption as pure sequential data. Natural language, however,
possesses a temporal hierarchical structure, with complex dependencies between
each subsequence. In this paper, we propose a phrase-based hierarchical Long
Short-Term Memory (phi-LSTM) model to generate image description. In contrast
to the conventional solutions that generate caption in a pure sequential
manner, our proposed model decodes image caption from phrase to sentence. It
consists of a phrase decoder at the bottom hierarchy to decode noun phrases of
variable length, and an abbreviated sentence decoder at the upper hierarchy to
decode an abbreviated form of the image description. A complete image caption
is formed by combining the generated phrases with the sentence during the
inference stage. Empirically, our proposed model shows better or competitive
results on the Flickr8k, Flickr30k and MS-COCO datasets in comparison to
state-of-the-art models. We also show that our proposed model is able to
generate more novel captions (not seen in the training data) which are richer
in word content on all three datasets.
|
[
{
"created": "Sat, 11 Nov 2017 10:48:59 GMT",
"version": "v1"
}
] |
2017-11-16
|
[
[
"Tan",
"Ying Hua",
""
],
[
"Chan",
"Chee Seng",
""
]
] |
Automatic generation of captions to describe the content of an image has been gaining a lot of research interest recently, where most existing works treat the image caption as pure sequential data. Natural language, however, possesses a temporal hierarchical structure, with complex dependencies between each subsequence. In this paper, we propose a phrase-based hierarchical Long Short-Term Memory (phi-LSTM) model to generate image description. In contrast to the conventional solutions that generate caption in a pure sequential manner, our proposed model decodes image caption from phrase to sentence. It consists of a phrase decoder at the bottom hierarchy to decode noun phrases of variable length, and an abbreviated sentence decoder at the upper hierarchy to decode an abbreviated form of the image description. A complete image caption is formed by combining the generated phrases with the sentence during the inference stage. Empirically, our proposed model shows better or competitive results on the Flickr8k, Flickr30k and MS-COCO datasets in comparison to state-of-the-art models. We also show that our proposed model is able to generate more novel captions (not seen in the training data) which are richer in word content on all three datasets.
|
2406.07042
|
Yining Shi
|
Yining Shi, Kun Jiang, Ke Wang, Kangan Qian, Yunlong Wang, Jiusi Li,
Tuopu Wen, Mengmeng Yang, Yiliang Xu, Diange Yang
|
EFFOcc: A Minimal Baseline for EFficient Fusion-based 3D Occupancy
Network
|
preprint under review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D occupancy prediction (Occ) is a rapidly rising, challenging perception task
in the field of autonomous driving that represents the driving scene as
uniformly partitioned 3D voxel grids with semantics. Compared to 3D object
detection, grid perception has the great advantage of better recognizing
irregularly shaped, unknown-category, or partially occluded general objects.
However, existing 3D occupancy networks (occnets) are both computationally
heavy and label-hungry. In terms of model complexity, occnets are commonly
composed of heavy Conv3D modules or transformers at the voxel level. In terms
of label annotation requirements, occnets are supervised with large-scale,
expensive dense voxel labels. Model and data inefficiency, caused by excessive
network parameters and label annotation requirements, severely hinder the
onboard deployment of occnets. This paper proposes an efficient 3D occupancy
network (EFFOcc) that targets minimal network complexity and label
requirements while achieving state-of-the-art accuracy. EFFOcc only uses
simple 2D operators, and improves Occ accuracy to the state-of-the-art on
multiple large-scale benchmarks: Occ3D-nuScenes, Occ3D-Waymo, and
OpenOccupancy-nuScenes. On the Occ3D-nuScenes benchmark, EFFOcc has only 18.4M
parameters and achieves 50.46 mean IoU (mIoU); to our knowledge, it is the
occnet with the fewest parameters among related occnets. Moreover, we propose
a two-stage active learning strategy to reduce the requirements for labelled
data. Active EFFOcc trained with 6% labelled voxels achieves 47.19 mIoU, which
is 95.7% of the fully supervised performance. The proposed EFFOcc also
supports improved vision-only occupancy prediction with the aid of
region-decomposed distillation. Code and demo videos will be available at
https://github.com/synsin0/EFFOcc.
|
[
{
"created": "Tue, 11 Jun 2024 08:01:02 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Shi",
"Yining",
""
],
[
"Jiang",
"Kun",
""
],
[
"Wang",
"Ke",
""
],
[
"Qian",
"Kangan",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Li",
"Jiusi",
""
],
[
"Wen",
"Tuopu",
""
],
[
"Yang",
"Mengmeng",
""
],
[
"Xu",
"Yiliang",
""
],
[
"Yang",
"Diange",
""
]
] |
3D occupancy prediction (Occ) is a rapidly rising, challenging perception task in the field of autonomous driving that represents the driving scene as uniformly partitioned 3D voxel grids with semantics. Compared to 3D object detection, grid perception has the great advantage of better recognizing irregularly shaped, unknown-category, or partially occluded general objects. However, existing 3D occupancy networks (occnets) are both computationally heavy and label-hungry. In terms of model complexity, occnets are commonly composed of heavy Conv3D modules or transformers at the voxel level. In terms of label annotation requirements, occnets are supervised with large-scale, expensive dense voxel labels. Model and data inefficiency, caused by excessive network parameters and label annotation requirements, severely hinder the onboard deployment of occnets. This paper proposes an efficient 3D occupancy network (EFFOcc) that targets minimal network complexity and label requirements while achieving state-of-the-art accuracy. EFFOcc only uses simple 2D operators, and improves Occ accuracy to the state-of-the-art on multiple large-scale benchmarks: Occ3D-nuScenes, Occ3D-Waymo, and OpenOccupancy-nuScenes. On the Occ3D-nuScenes benchmark, EFFOcc has only 18.4M parameters and achieves 50.46 mean IoU (mIoU); to our knowledge, it is the occnet with the fewest parameters among related occnets. Moreover, we propose a two-stage active learning strategy to reduce the requirements for labelled data. Active EFFOcc trained with 6% labelled voxels achieves 47.19 mIoU, which is 95.7% of the fully supervised performance. The proposed EFFOcc also supports improved vision-only occupancy prediction with the aid of region-decomposed distillation. Code and demo videos will be available at https://github.com/synsin0/EFFOcc.
|
cs/0607099
|
Syed Jafar
|
Syed A. Jafar, Shlomo Shamai (Shitz)
|
Degrees of Freedom Region for the MIMO X Channel
|
31 pages
| null | null | null |
cs.IT math.IT
| null |
We provide achievability as well as converse results for the degrees of
freedom region of a MIMO $X$ channel, i.e., a system with two transmitters, two
receivers, each equipped with multiple antennas, where independent messages
need to be conveyed over fixed channels from each transmitter to each receiver.
With M=1 antennas at each node, we find that the total (sum rate) degrees of
freedom are bounded above and below as $1 \leq\eta_X^\star \leq {4/3}$. If
$M>1$ and channel matrices are non-degenerate then the precise degrees of
freedom $\eta_X^\star = {4/3}M$. Simple zero forcing without dirty paper
encoding or successive decoding, suffices to achieve the ${4/3}M$ degrees of
freedom. With equal number of antennas at all nodes, we explore the increase in
degrees of freedom when some of the messages are made available to a
transmitter or receiver in the manner of cognitive radio. With a cognitive
transmitter we show that the number of degrees of freedom $\eta = {3/2}M$ (for
$M>1$) on the MIMO $X$ channel. The same degrees of freedom are obtained on the
MIMO $X$ channel with a cognitive receiver as well. In contrast to the $X$
channel result, we show that for the MIMO \emph{interference} channel, the
degrees of freedom are not increased even if both the transmitter and the
receiver of one user know the other user's message. However, the interference
channel can achieve the full $2M$ degrees of freedom if \emph{each} user has
either a cognitive transmitter or a cognitive receiver. Lastly, if the channels
vary with time/frequency then the $X$ channel with single antennas $(M=1)$ at
all nodes has exactly 4/3 degrees of freedom with no shared messages and
exactly 3/2 degrees of freedom with a cognitive transmitter or a cognitive
receiver.
|
[
{
"created": "Fri, 21 Jul 2006 07:50:42 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2006 23:55:53 GMT",
"version": "v2"
},
{
"created": "Fri, 11 May 2007 20:48:40 GMT",
"version": "v3"
}
] |
2007-07-13
|
[
[
"Jafar",
"Syed A.",
"",
"Shitz"
],
[
"Shamai",
"Shlomo",
"",
"Shitz"
]
] |
We provide achievability as well as converse results for the degrees of freedom region of a MIMO $X$ channel, i.e., a system with two transmitters, two receivers, each equipped with multiple antennas, where independent messages need to be conveyed over fixed channels from each transmitter to each receiver. With M=1 antennas at each node, we find that the total (sum rate) degrees of freedom are bounded above and below as $1 \leq\eta_X^\star \leq {4/3}$. If $M>1$ and channel matrices are non-degenerate then the precise degrees of freedom $\eta_X^\star = {4/3}M$. Simple zero forcing without dirty paper encoding or successive decoding, suffices to achieve the ${4/3}M$ degrees of freedom. With equal number of antennas at all nodes, we explore the increase in degrees of freedom when some of the messages are made available to a transmitter or receiver in the manner of cognitive radio. With a cognitive transmitter we show that the number of degrees of freedom $\eta = {3/2}M$ (for $M>1$) on the MIMO $X$ channel. The same degrees of freedom are obtained on the MIMO $X$ channel with a cognitive receiver as well. In contrast to the $X$ channel result, we show that for the MIMO \emph{interference} channel, the degrees of freedom are not increased even if both the transmitter and the receiver of one user know the other user's message. However, the interference channel can achieve the full $2M$ degrees of freedom if \emph{each} user has either a cognitive transmitter or a cognitive receiver. Lastly, if the channels vary with time/frequency then the $X$ channel with single antennas $(M=1)$ at all nodes has exactly 4/3 degrees of freedom with no shared messages and exactly 3/2 degrees of freedom with a cognitive transmitter or a cognitive receiver.
|
1905.13000
|
Nicol\`o Pagliana
|
Nicol\`o Pagliana and Lorenzo Rosasco
|
Implicit Regularization of Accelerated Methods in Hilbert Spaces
| null | null | null | null |
cs.LG math.OC math.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study learning properties of accelerated gradient descent methods for
linear least-squares in Hilbert spaces. We analyze the implicit regularization
properties of Nesterov acceleration and a variant of heavy-ball in terms of
corresponding learning error bounds. Our results show that acceleration can
provide faster bias decay than gradient descent, but also suffers from more
unstable behavior. As a result, acceleration cannot in general be expected to
improve learning accuracy with respect to gradient descent, but rather to
achieve the same accuracy with reduced computations. Our theoretical results
are validated by numerical simulations. Our analysis is based on studying
suitable polynomials induced by the accelerated dynamics and combining spectral
techniques with concentration inequalities.
|
[
{
"created": "Thu, 30 May 2019 12:37:23 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2019 10:41:40 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2019 07:34:58 GMT",
"version": "v3"
},
{
"created": "Mon, 16 Dec 2019 03:47:45 GMT",
"version": "v4"
}
] |
2019-12-17
|
[
[
"Pagliana",
"Nicolò",
""
],
[
"Rosasco",
"Lorenzo",
""
]
] |
We study learning properties of accelerated gradient descent methods for linear least-squares in Hilbert spaces. We analyze the implicit regularization properties of Nesterov acceleration and a variant of heavy-ball in terms of corresponding learning error bounds. Our results show that acceleration can provide faster bias decay than gradient descent, but also suffers from more unstable behavior. As a result, acceleration cannot in general be expected to improve learning accuracy with respect to gradient descent, but rather to achieve the same accuracy with reduced computations. Our theoretical results are validated by numerical simulations. Our analysis is based on studying suitable polynomials induced by the accelerated dynamics and combining spectral techniques with concentration inequalities.
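Illustrating the kind of comparison the abstract describes: on a toy finite-dimensional least-squares problem, an accelerated method (here the classical heavy-ball iteration with its textbook parameters, one of the variants the abstract mentions) reaches a far smaller residual than plain gradient descent in the same number of iterations. The spectrum, problem size, and iteration count are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

n = 50
lam = np.linspace(1.0, 100.0, n)   # eigenvalues of the Hessian A^T A
A = np.diag(np.sqrt(lam))          # least-squares design with known spectrum
b = A @ np.ones(n)                 # target generated by x_true = ones
L, mu = lam.max(), lam.min()

def grad(x):
    return A.T @ (A @ x - b)

# plain gradient descent with step 1/L
x = np.zeros(n)
for _ in range(200):
    x -= (1.0 / L) * grad(x)
err_gd = np.linalg.norm(A @ x - b)

# heavy-ball with the classical optimal parameters for a quadratic
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
x_hb, x_prev = np.zeros(n), np.zeros(n)
for _ in range(200):
    x_hb, x_prev = x_hb - alpha * grad(x_hb) + beta * (x_hb - x_prev), x_hb
err_hb = np.linalg.norm(A @ x_hb - b)
```

The accelerated run drives the residual essentially to machine precision while gradient descent is still far from converged along the smallest eigenvalue, which is the "same accuracy with reduced computations" trade-off the abstract refers to; the paper's point is that in the statistical (noisy) setting this speed comes with less stability.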
|
2103.04059
|
Ali Cheraghian
|
Ali Cheraghian, Shafin Rahman, Pengfei Fang, Soumava Kumar Roy, Lars
Petersson, Mehrtash Harandi
|
Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental
Learning
|
Accepted at CVPR 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Few-shot class incremental learning (FSCIL) portrays the problem of learning
new concepts gradually, where only a few examples per concept are available to
the learner. Due to the limited number of examples for training, the techniques
developed for standard incremental learning cannot be applied verbatim to
FSCIL. In this work, we introduce a distillation algorithm to address the
problem of FSCIL and propose to make use of semantic information during
training. To this end, we make use of word embeddings as semantic information
which is cheap to obtain and which facilitates the distillation process.
Furthermore, we propose a method based on an attention mechanism on multiple
parallel embeddings of visual data to align visual and semantic vectors, which
reduces issues related to catastrophic forgetting. Via experiments on
MiniImageNet, CUB200, and CIFAR100 datasets, we establish new state-of-the-art
results by outperforming existing approaches.
|
[
{
"created": "Sat, 6 Mar 2021 08:07:26 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 02:27:53 GMT",
"version": "v2"
}
] |
2021-04-01
|
[
[
"Cheraghian",
"Ali",
""
],
[
"Rahman",
"Shafin",
""
],
[
"Fang",
"Pengfei",
""
],
[
"Roy",
"Soumava Kumar",
""
],
[
"Petersson",
"Lars",
""
],
[
"Harandi",
"Mehrtash",
""
]
] |
Few-shot class incremental learning (FSCIL) portrays the problem of learning new concepts gradually, where only a few examples per concept are available to the learner. Due to the limited number of examples for training, the techniques developed for standard incremental learning cannot be applied verbatim to FSCIL. In this work, we introduce a distillation algorithm to address the problem of FSCIL and propose to make use of semantic information during training. To this end, we make use of word embeddings as semantic information which is cheap to obtain and which facilitates the distillation process. Furthermore, we propose a method based on an attention mechanism on multiple parallel embeddings of visual data to align visual and semantic vectors, which reduces issues related to catastrophic forgetting. Via experiments on MiniImageNet, CUB200, and CIFAR100 datasets, we establish new state-of-the-art results by outperforming existing approaches.
|
1506.02515
|
Vadim Lebedev
|
Vadim Lebedev, Victor Lempitsky
|
Fast ConvNets Using Group-wise Brain Damage
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the idea of brain damage, i.e. the pruning of the coefficients of
a neural network, and suggest how brain damage can be modified and used to
speed up convolutional layers. The approach uses the fact that many efficient
implementations reduce generalized convolutions to matrix multiplications. The
suggested brain damage process prunes the convolutional kernel tensor in a
group-wise fashion by adding group-sparsity regularization to the standard
training process. After such group-wise pruning, convolutions can be reduced to
multiplications of thinned dense matrices, which leads to speedup. In the
comparison on AlexNet, the method achieves very competitive performance.
|
[
{
"created": "Mon, 8 Jun 2015 14:20:37 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Dec 2015 18:11:36 GMT",
"version": "v2"
}
] |
2015-12-08
|
[
[
"Lebedev",
"Vadim",
""
],
[
"Lempitsky",
"Victor",
""
]
] |
We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.
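A minimal NumPy sketch of the mechanism this abstract relies on: if a whole group of kernel coefficients (one input-channel/kernel-position fiber across all output channels) is zeroed, the corresponding column of the lowered weight matrix and row of the im2col patch matrix can be dropped, so the convolution becomes a thinned dense matmul with identical output. The shapes and the 50% pruning ratio are invented for the demo; the actual method learns which groups to prune via group-sparsity regularization during training.

```python
import numpy as np

rng = np.random.default_rng(1)
out_c, in_c, kh, kw = 8, 4, 3, 3
K = rng.standard_normal((out_c, in_c, kh, kw))   # conv kernel tensor
W = K.reshape(out_c, -1)                          # lowered (im2col) weight matrix

# one group = one (input-channel, kernel-position) fiber across all output
# channels, i.e. one column of W
norms = np.linalg.norm(W, axis=0)                 # group L2 norms
keep = norms > np.quantile(norms, 0.5)            # prune the weakest half
W_pruned = W * keep                               # zero out pruned groups

# convolution as matrix multiplication on a toy input
X = rng.standard_normal((in_c, 10, 10))
cols = np.array([X[:, y:y + kh, x:x + kw].ravel()
                 for y in range(8) for x in range(8)]).T   # (in_c*kh*kw, 64)

dense = W_pruned @ cols                  # pruned but still full-size matmul
thin = W_pruned[:, keep] @ cols[keep]    # thinned dense matmul after pruning
```

Because `thin` multiplies matrices with half the inner dimension yet produces the same output as `dense`, the speedup comes for free once the group sparsity exists.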
|
1307.5667
|
Masoumeh Vali
|
Masoumeh Vali
|
New Optimization Approach Using Clustering-Based Parallel Genetic
Algorithm
| null | null | null | null |
cs.NE math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many global optimization problems, a global point (minimum or maximum)
must be evaluated in a large space, which requires very high computational
effort. This paper presents a new approach to optimization problems based on
the subdivision labeling method (SLM); however, in higher dimensions this
method incurs high computational effort. A Clustering-Based Parallel Genetic
Algorithm (CBPGA) for optimization problems is one solution to this problem.
The initial population consists of crossing points, and subdivision at each
step is performed according to mutation. After all crossing points are
labeled, selection is based on the polytope that has a complete label. We
propose an algorithm based on a master-slave parallelization scheme. The SLM
algorithm is implemented with CBPGA and the experimental results are compared.
The numerical examples and results show that SLMCBPGA improves speedup and
efficiency.
|
[
{
"created": "Mon, 22 Jul 2013 12:07:22 GMT",
"version": "v1"
}
] |
2013-07-23
|
[
[
"Vali",
"Masoumeh",
""
]
] |
In many global optimization problems, a global point (minimum or maximum) must be evaluated in a large space, which requires very high computational effort. This paper presents a new approach to optimization problems based on the subdivision labeling method (SLM); however, in higher dimensions this method incurs high computational effort. A Clustering-Based Parallel Genetic Algorithm (CBPGA) for optimization problems is one solution to this problem. The initial population consists of crossing points, and subdivision at each step is performed according to mutation. After all crossing points are labeled, selection is based on the polytope that has a complete label. We propose an algorithm based on a master-slave parallelization scheme. The SLM algorithm is implemented with CBPGA and the experimental results are compared. The numerical examples and results show that SLMCBPGA improves speedup and efficiency.
|
cs/0603100
|
Alin Suciu PhD
|
Alin Suciu, Kalman Pusztai
|
Efficient Compression of Prolog Programs
| null | null | null | null |
cs.PL
| null |
We propose a special-purpose class of compression algorithms for efficient
compression of Prolog programs. It is a dictionary-based compression method,
specially designed for the compression of Prolog code, and therefore we name it
PCA (Prolog Compression Algorithm). According to the experimental results this
method provides better compression than state-of-the-art general-purpose
compression algorithms. Since the algorithm works with Prolog syntactic
entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is
straightforward and very easy to use in any Prolog application that needs
compression. Although the algorithm is designed for Prolog programs, the idea
can be easily applied for the compression of programs written in other (logic)
languages.
|
[
{
"created": "Sun, 26 Mar 2006 20:27:40 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Suciu",
"Alin",
""
],
[
"Pusztai",
"Kalman",
""
]
] |
We propose a special-purpose class of compression algorithms for efficient compression of Prolog programs. It is a dictionary-based compression method, specially designed for the compression of Prolog code, and therefore we name it PCA (Prolog Compression Algorithm). According to the experimental results this method provides better compression than state-of-the-art general-purpose compression algorithms. Since the algorithm works with Prolog syntactic entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is straightforward and very easy to use in any Prolog application that needs compression. Although the algorithm is designed for Prolog programs, the idea can be easily applied for the compression of programs written in other (logic) languages.
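The abstract does not spell out PCA's dictionary scheme, so the following is only a generic dictionary-coder sketch over Prolog-like tokens: repeated atoms and variables are replaced by short indices into a frequency-ordered dictionary. The tokenizer, the `#` escape convention, and the example clause are all invented for illustration and are not the authors' algorithm.

```python
import re
from collections import Counter

TOKEN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|\d+|\S")

def tokenize(src):
    return TOKEN.findall(src)

def compress(src):
    tokens = tokenize(src)
    # dictionary of repeated atoms/variables, most frequent first
    freq = Counter(t for t in tokens if t[0].isalpha() or t[0] == "_")
    dictionary = [t for t, c in freq.most_common() if c > 1]
    index = {t: i for i, t in enumerate(dictionary)}
    # sketch limitation: assumes '#' never occurs as a source token
    encoded = [f"#{index[t]}" if t in index else t for t in tokens]
    return dictionary, encoded

def decompress(dictionary, encoded):
    return [dictionary[int(t[1:])] if t.startswith("#") else t
            for t in encoded]

src = "append([],L,L). append([H|T],L,[H|R]) :- append(T,L,R)."
dictionary, encoded = compress(src)
```

Working on syntactic entities (atoms, variables) rather than raw bytes is what the abstract highlights: the coder above never splits an atom, so decoding trivially restores the token stream.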
|
1212.6602
|
Haizhang Zhang
|
Haizhang Zhang
|
Multidimensional Analytic Signals and the Bedrosian Identity
| null | null | null | null |
cs.IT math.CA math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analytic signal method via the Hilbert transform is a key tool in signal
analysis and processing, especially in time-frequency analysis. Imaging and
other applications to multidimensional signals call for extension of the method
to higher dimensions. We justify the usage of partial Hilbert transforms to
define multidimensional analytic signals from both engineering and mathematical
perspectives. The important associated Bedrosian identity $T(fg)=fTg$ for
partial Hilbert transforms $T$ is then studied. Characterizations and several
necessity theorems are established. We also make use of the identity to
construct basis functions for the time-frequency analysis.
|
[
{
"created": "Sat, 29 Dec 2012 09:18:52 GMT",
"version": "v1"
}
] |
2013-01-01
|
[
[
"Zhang",
"Haizhang",
""
]
] |
The analytic signal method via the Hilbert transform is a key tool in signal analysis and processing, especially in time-frequency analysis. Imaging and other applications to multidimensional signals call for extension of the method to higher dimensions. We justify the usage of partial Hilbert transforms to define multidimensional analytic signals from both engineering and mathematical perspectives. The important associated Bedrosian identity $T(fg)=fTg$ for partial Hilbert transforms $T$ is then studied. Characterizations and several necessity theorems are established. We also make use of the identity to construct basis functions for the time-frequency analysis.
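A one-dimensional numerical prototype of the Bedrosian identity discussed above (the paper itself works with partial Hilbert transforms in higher dimensions): for a low-pass $f$ and a high-pass $g$ with disjoint spectral supports, $H(fg) = f\,Hg$, while swapping the roles violates the spectral condition and the identity fails. The discrete periodic Hilbert transform below is implemented via the FFT; the signal length and frequencies are arbitrary.

```python
import numpy as np

def hilbert(x):
    """Periodic discrete Hilbert transform: multiply the spectrum by -i*sign(freq)."""
    f = np.fft.fftfreq(len(x))
    return np.real(np.fft.ifft(np.fft.fft(x) * (-1j) * np.sign(f)))

N = 1024
t = np.arange(N) / N
f_lo = np.cos(2 * np.pi * 3 * t)    # spectrum confined to low frequencies |k| <= 3
g_hi = np.cos(2 * np.pi * 20 * t)   # spectrum confined to high frequencies |k| >= 20

lhs = hilbert(f_lo * g_hi)          # H(fg)
rhs = f_lo * hilbert(g_hi)          # f * Hg  -- equal when spectra are disjoint

# swapping the roles violates the spectral condition, so the identity fails
swapped_holds = np.allclose(lhs, g_hi * hilbert(f_lo))
```

Here `hilbert(g_hi)` is exactly `sin(2*pi*20*t)`, so the identity reduces the Hilbert transform of the product to a simple modulation, which is why it matters for analytic-signal constructions.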
|
2404.14097
|
Gabriele Taentzer
|
Christoph Bockisch, Gabriele Taentzer, Daniel Neufeld
|
MMT: Mutation Testing of Java Bytecode with Model Transformation -- An
Illustrative Demonstration
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Mutation testing is an approach to check the robustness of test suites. The
program code is slightly changed by mutations to inject errors. A test suite is
robust enough if it finds such errors. Tools for mutation testing usually
integrate sets of mutation operators such as, for example, swapping arithmetic
operators; modern tools typically work with compiled code such as Java
bytecode. In this case, the mutations must be defined in such a way that the
mutated program still can be loaded and executed. The results of mutation tests
depend directly on the possible mutations. More advanced mutations and even
domain-specific mutations can pose another challenge to the test suite. Since
extending the classical approaches to more complex mutations is not well
supported and is difficult, we propose a model-driven approach where mutations
of Java bytecode can be flexibly defined by model transformation. The
corresponding tool called MMT has been extended with advanced mutation
operators for modifying object-oriented structures, Java-specific properties
and method calls of APIs, making it the only mutation testing tool for Java
bytecode that supports such mutations.
|
[
{
"created": "Mon, 22 Apr 2024 11:33:21 GMT",
"version": "v1"
}
] |
2024-04-23
|
[
[
"Bockisch",
"Christoph",
""
],
[
"Taentzer",
"Gabriele",
""
],
[
"Neufeld",
"Daniel",
""
]
] |
Mutation testing is an approach to check the robustness of test suites. The program code is slightly changed by mutations to inject errors. A test suite is robust enough if it finds such errors. Tools for mutation testing usually integrate sets of mutation operators such as, for example, swapping arithmetic operators; modern tools typically work with compiled code such as Java bytecode. In this case, the mutations must be defined in such a way that the mutated program still can be loaded and executed. The results of mutation tests depend directly on the possible mutations. More advanced mutations and even domain-specific mutations can pose another challenge to the test suite. Since extending the classical approaches to more complex mutations is not well supported and is difficult, we propose a model-driven approach where mutations of Java bytecode can be flexibly defined by model transformation. The corresponding tool called MMT has been extended with advanced mutation operators for modifying object-oriented structures, Java-specific properties and method calls of APIs, making it the only mutation testing tool for Java bytecode that supports such mutations.
|
2401.05912
|
Ivandr\'e Paraboni
|
Wesley Ramos dos Santos and Ivandre Paraboni
|
Prompt-based mental health screening from social media text
|
To appear in BrasNam-2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a method for prompt-based mental health screening from
a large and noisy dataset of social media text. Our method uses GPT-3.5
prompting to identify publications that may be more relevant to the task, and
then uses a straightforward bag-of-words text classifier to predict actual
user labels. Results are found to be on par with a BERT mixture-of-experts
classifier, while incurring only a fraction of its training costs.
|
[
{
"created": "Thu, 11 Jan 2024 13:44:28 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2024 12:18:51 GMT",
"version": "v2"
}
] |
2024-05-14
|
[
[
"Santos",
"Wesley Ramos dos",
""
],
[
"Paraboni",
"Ivandre",
""
]
] |
This article presents a method for prompt-based mental health screening from a large and noisy dataset of social media text. Our method uses GPT-3.5 prompting to distinguish publications that may be more relevant to the task, and then uses a straightforward bag-of-words text classifier to predict actual user labels. Results are found to be on par with a BERT mixture-of-experts classifier, while incurring only a fraction of its training costs.
|
2309.14303
|
Tuan Truong Vu
|
Quang Nguyen, Truong Vu, Anh Tran, Khoi Nguyen
|
Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for
Pixel-Level Semantic Segmentation
|
Accepted to NeurIPS 2023. Our project page:
https://dataset-diffusion.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Preparing training data for deep vision models is a labor-intensive task. To
address this, generative models have emerged as an effective solution for
generating synthetic data. While current generative models produce image-level
category labels, we propose a novel method for generating pixel-level semantic
segmentation labels using the text-to-image generative model Stable Diffusion
(SD). By utilizing the text prompts, cross-attention, and self-attention of SD,
we introduce three new techniques: class-prompt appending, class-prompt
cross-attention, and self-attention exponentiation. These techniques enable us
to generate segmentation maps corresponding to synthetic images. These maps
serve as pseudo-labels for training semantic segmenters, eliminating the need
for labor-intensive pixel-wise annotation. To account for the imperfections in
our pseudo-labels, we incorporate uncertainty regions into the segmentation,
allowing us to disregard loss from those regions. We conduct evaluations on two
datasets, PASCAL VOC and MSCOCO, and our approach significantly outperforms
concurrent work. Our benchmarks and code will be released at
https://github.com/VinAIResearch/Dataset-Diffusion
|
[
{
"created": "Mon, 25 Sep 2023 17:19:26 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Sep 2023 04:26:29 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Sep 2023 03:28:36 GMT",
"version": "v3"
},
{
"created": "Mon, 13 Nov 2023 05:11:52 GMT",
"version": "v4"
}
] |
2023-11-14
|
[
[
"Nguyen",
"Quang",
""
],
[
"Vu",
"Truong",
""
],
[
"Tran",
"Anh",
""
],
[
"Nguyen",
"Khoi",
""
]
] |
Preparing training data for deep vision models is a labor-intensive task. To address this, generative models have emerged as an effective solution for generating synthetic data. While current generative models produce image-level category labels, we propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion (SD). By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: class-prompt appending, class-prompt cross-attention, and self-attention exponentiation. These techniques enable us to generate segmentation maps corresponding to synthetic images. These maps serve as pseudo-labels for training semantic segmenters, eliminating the need for labor-intensive pixel-wise annotation. To account for the imperfections in our pseudo-labels, we incorporate uncertainty regions into the segmentation, allowing us to disregard loss from those regions. We conduct evaluations on two datasets, PASCAL VOC and MSCOCO, and our approach significantly outperforms concurrent work. Our benchmarks and code will be released at https://github.com/VinAIResearch/Dataset-Diffusion
|
2103.01618
|
Eugene d'Eon
|
Eugene d'Eon
|
An analytic BRDF for materials with spherical Lambertian scatterers
|
11 pages
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new analytic BRDF for porous materials composed of spherical
Lambertian scatterers. The BRDF has a single parameter: the albedo of the
Lambertian particles. The resulting appearance exhibits strong back scattering
and saturation effects that height-field-based models such as Oren-Nayar cannot
reproduce.
|
[
{
"created": "Tue, 2 Mar 2021 10:19:17 GMT",
"version": "v1"
}
] |
2021-03-03
|
[
[
"d'Eon",
"Eugene",
""
]
] |
We present a new analytic BRDF for porous materials composed of spherical Lambertian scatterers. The BRDF has a single parameter: the albedo of the Lambertian particles. The resulting appearance exhibits strong back scattering and saturation effects that height-field-based models such as Oren-Nayar cannot reproduce.
|
1904.01913
|
Sudhir R. Ghorpade
|
Sudhir R. Ghorpade and Trygve Johnsen
|
A Polymatroid Approach to Generalized Weights of Rank Metric Codes
|
22 pages; with minor revisions in the previous version
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the notion of a $(q,m)$-polymatroid, due to Shiromoto, and the
more general notion of $(q,m)$-demi-polymatroid, and show how generalized
weights can be defined for them. Further, we establish a duality for these
weights analogous to Wei duality for generalized Hamming weights of linear
codes. The corresponding results of Ravagnani for Delsarte rank metric codes,
and Martinez-Penas and Matsumoto for relative generalized rank weights are
derived as a consequence.
|
[
{
"created": "Wed, 3 Apr 2019 11:00:14 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2019 16:03:12 GMT",
"version": "v2"
}
] |
2019-05-28
|
[
[
"Ghorpade",
"Sudhir R.",
""
],
[
"Johnsen",
"Trygve",
""
]
] |
We consider the notion of a $(q,m)$-polymatroid, due to Shiromoto, and the more general notion of $(q,m)$-demi-polymatroid, and show how generalized weights can be defined for them. Further, we establish a duality for these weights analogous to Wei duality for generalized Hamming weights of linear codes. The corresponding results of Ravagnani for Delsarte rank metric codes, and Martinez-Penas and Matsumoto for relative generalized rank weights are derived as a consequence.
|
2101.11476
|
Alvaro Gomariz
|
Alvaro Gomariz, Raphael Egli, Tiziano Portenier, C\'esar
Nombela-Arrieta, Orcun Goksel
|
Utilizing Uncertainty Estimation in Deep Learning Segmentation of
Fluorescence Microscopy Images with Missing Markers
|
Accepted at the IEEE International Symposium on Biomedical Imaging
(ISBI) 2021. 4 pages and 4 figures
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Fluorescence microscopy images contain several channels, each indicating a
marker staining the sample. Since many different marker combinations are
utilized in practice, it has been challenging to apply deep learning based
segmentation models, which expect a predefined channel combination for all
training samples as well as at inference for future application. Recent work
circumvents this problem using a modality attention approach to be effective
across any possible marker combination. However, for combinations that do not
exist in a labeled training dataset, one cannot have any estimation of
potential segmentation quality if that combination is encountered during
inference. Without this, not only does one lack quality assurance, but one also
does not know where to put any additional imaging and labeling effort. We herein
propose a method to estimate segmentation quality on unlabeled images by (i)
estimating both aleatoric and epistemic uncertainties of convolutional neural
networks for image segmentation, and (ii) training a Random Forest model for
the interpretation of uncertainty features via regression to their
corresponding segmentation metrics. Additionally, we demonstrate that including
these uncertainty measures during training can provide an improvement on
segmentation performance.
|
[
{
"created": "Wed, 27 Jan 2021 15:06:04 GMT",
"version": "v1"
}
] |
2021-01-28
|
[
[
"Gomariz",
"Alvaro",
""
],
[
"Egli",
"Raphael",
""
],
[
"Portenier",
"Tiziano",
""
],
[
"Nombela-Arrieta",
"César",
""
],
[
"Goksel",
"Orcun",
""
]
] |
Fluorescence microscopy images contain several channels, each indicating a marker staining the sample. Since many different marker combinations are utilized in practice, it has been challenging to apply deep learning based segmentation models, which expect a predefined channel combination for all training samples as well as at inference for future application. Recent work circumvents this problem using a modality attention approach to be effective across any possible marker combination. However, for combinations that do not exist in a labeled training dataset, one cannot have any estimation of potential segmentation quality if that combination is encountered during inference. Without this, not only does one lack quality assurance, but one also does not know where to put any additional imaging and labeling effort. We herein propose a method to estimate segmentation quality on unlabeled images by (i) estimating both aleatoric and epistemic uncertainties of convolutional neural networks for image segmentation, and (ii) training a Random Forest model for the interpretation of uncertainty features via regression to their corresponding segmentation metrics. Additionally, we demonstrate that including these uncertainty measures during training can provide an improvement on segmentation performance.
|
2401.02097
|
Shao-Yu Chang
|
Jeffrey Zhang, Shao-Yu Chang, Kedan Li, David Forsyth
|
Preserving Image Properties Through Initializations in Diffusion Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Retail photography imposes specific requirements on images. For instance,
images may need uniform background colors, consistent model poses, centered
products, and consistent lighting. Minor deviations from these standards impact
a site's aesthetic appeal, making the images unsuitable for use. We show that
Stable Diffusion methods, as currently applied, do not respect these
requirements. The usual practice of training the denoiser with a very noisy
image and starting inference with a sample of pure noise leads to inconsistent
generated images during inference. This inconsistency occurs because it is easy
to tell the difference between samples of the training and inference
distributions. As a result, a network trained with centered retail product
images with uniform backgrounds generates images with erratic backgrounds. The
problem is easily fixed by initializing inference with samples from an
approximation of noisy images. However, in using such an approximation, the
joint distribution of text and noisy image at inference time still slightly
differs from that at training time. This discrepancy is corrected by training
the network with samples from the approximate noisy image distribution.
Extensive experiments on real application data show significant qualitative and
quantitative improvements in performance from adopting these procedures.
Finally, our procedure can interact well with other control-based methods to
further enhance the controllability of diffusion-based methods.
|
[
{
"created": "Thu, 4 Jan 2024 06:55:49 GMT",
"version": "v1"
}
] |
2024-01-05
|
[
[
"Zhang",
"Jeffrey",
""
],
[
"Chang",
"Shao-Yu",
""
],
[
"Li",
"Kedan",
""
],
[
"Forsyth",
"David",
""
]
] |
Retail photography imposes specific requirements on images. For instance, images may need uniform background colors, consistent model poses, centered products, and consistent lighting. Minor deviations from these standards impact a site's aesthetic appeal, making the images unsuitable for use. We show that Stable Diffusion methods, as currently applied, do not respect these requirements. The usual practice of training the denoiser with a very noisy image and starting inference with a sample of pure noise leads to inconsistent generated images during inference. This inconsistency occurs because it is easy to tell the difference between samples of the training and inference distributions. As a result, a network trained with centered retail product images with uniform backgrounds generates images with erratic backgrounds. The problem is easily fixed by initializing inference with samples from an approximation of noisy images. However, in using such an approximation, the joint distribution of text and noisy image at inference time still slightly differs from that at training time. This discrepancy is corrected by training the network with samples from the approximate noisy image distribution. Extensive experiments on real application data show significant qualitative and quantitative improvements in performance from adopting these procedures. Finally, our procedure can interact well with other control-based methods to further enhance the controllability of diffusion-based methods.
|
1903.05683
|
Mohammad Sadegh Rasooli
|
Mohammad Sadegh Rasooli and Michael Collins
|
Low-Resource Syntactic Transfer with Unsupervised Source Reordering
|
Accepted in NAACL 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a cross-lingual transfer method for dependency parsing that takes
into account the problem of word order differences between source and target
languages. Our model relies only on the Bible, a considerably smaller parallel
corpus than the parallel data commonly used in transfer methods. We use the
concatenation of projected trees from the Bible corpus, and the gold-standard
treebanks in multiple source languages along with cross-lingual word
representations. We demonstrate that reordering the source treebanks before
training on them for a target language improves the accuracy of languages
outside the European language family. Our experiments on 68 treebanks (38
languages) in the Universal Dependencies corpus achieve a high accuracy for all
languages. Among them, our experiments on 16 treebanks of 12 non-European
languages achieve an average UAS absolute improvement of 3.3% over a
state-of-the-art method.
|
[
{
"created": "Wed, 13 Mar 2019 19:01:00 GMT",
"version": "v1"
}
] |
2019-03-15
|
[
[
"Rasooli",
"Mohammad Sadegh",
""
],
[
"Collins",
"Michael",
""
]
] |
We describe a cross-lingual transfer method for dependency parsing that takes into account the problem of word order differences between source and target languages. Our model relies only on the Bible, a considerably smaller parallel corpus than the parallel data commonly used in transfer methods. We use the concatenation of projected trees from the Bible corpus, and the gold-standard treebanks in multiple source languages along with cross-lingual word representations. We demonstrate that reordering the source treebanks before training on them for a target language improves the accuracy of languages outside the European language family. Our experiments on 68 treebanks (38 languages) in the Universal Dependencies corpus achieve a high accuracy for all languages. Among them, our experiments on 16 treebanks of 12 non-European languages achieve an average UAS absolute improvement of 3.3% over a state-of-the-art method.
|
2212.04563
|
Harsiddh Kalariya
|
Harsiddh Kalariya, Kavish Shah and Vini Patel
|
An SLR on Edge Computing Security and possible threat protection
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile and Internet of Things devices are generating enormous amounts of
multi-modal data due to their exponential growth and accessibility. As a
result, these data sources must be directly analyzed in real time at the
network edge rather than relying on the cloud. Significant processing power at
the network's edge has made it possible to gather data and make decisions prior
to data being sent to the cloud. Moreover, security problems have soared as a
result of the rapid expansion of mobile devices, Internet of Things (IoT)
devices, and various network points. It is harder than ever to guarantee the
privacy of sensitive data, including customer information. This systematic
literature review shows that new technologies are a great weapon to combat the
attacks and threats to edge computing security.
|
[
{
"created": "Thu, 8 Dec 2022 21:10:20 GMT",
"version": "v1"
}
] |
2022-12-12
|
[
[
"Kalariya",
"Harsiddh",
""
],
[
"Shah",
"Kavish",
""
],
[
"Patel",
"Vini",
""
]
] |
Mobile and Internet of Things devices are generating enormous amounts of multi-modal data due to their exponential growth and accessibility. As a result, these data sources must be directly analyzed in real time at the network edge rather than relying on the cloud. Significant processing power at the network's edge has made it possible to gather data and make decisions prior to data being sent to the cloud. Moreover, security problems have soared as a result of the rapid expansion of mobile devices, Internet of Things (IoT) devices, and various network points. It is harder than ever to guarantee the privacy of sensitive data, including customer information. This systematic literature review shows that new technologies are a great weapon to combat the attacks and threats to edge computing security.
|
2110.10746
|
Maxime Peyrard
|
Maxime Peyrard, Wei Zhao, Steffen Eger, Robert West
|
Better than Average: Paired Evaluation of NLP Systems
|
Published in ACL 2021 (long paper)
| null |
10.18653/v1/2021.acl-long.179
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Evaluation in NLP is usually done by comparing the scores of competing
systems independently averaged over a common set of test instances. In this
work, we question the use of averages for aggregating evaluation scores into a
final number used to decide which system is best, since the average, as well as
alternatives such as the median, ignores the pairing arising from the fact that
systems are evaluated on the same test instances. We illustrate the importance
of taking the instance-level pairing of evaluation scores into account and
demonstrate, both theoretically and empirically, the advantages of aggregation
methods based on pairwise comparisons, such as the Bradley-Terry (BT) model, a
mechanism based on the estimated probability that a given system scores better
than another on the test set. By re-evaluating 296 real NLP evaluation setups
across four tasks and 18 evaluation metrics, we show that the choice of
aggregation mechanism matters and yields different conclusions as to which
systems are state of the art in about 30% of the setups. To facilitate the
adoption of pairwise evaluation, we release a practical tool for performing the
full analysis of evaluation scores with the mean, median, BT, and two variants
of BT (Elo and TrueSkill), alongside functionality for appropriate statistical
testing.
|
[
{
"created": "Wed, 20 Oct 2021 19:40:31 GMT",
"version": "v1"
}
] |
2021-10-22
|
[
[
"Peyrard",
"Maxime",
""
],
[
"Zhao",
"Wei",
""
],
[
"Eger",
"Steffen",
""
],
[
"West",
"Robert",
""
]
] |
Evaluation in NLP is usually done by comparing the scores of competing systems independently averaged over a common set of test instances. In this work, we question the use of averages for aggregating evaluation scores into a final number used to decide which system is best, since the average, as well as alternatives such as the median, ignores the pairing arising from the fact that systems are evaluated on the same test instances. We illustrate the importance of taking the instance-level pairing of evaluation scores into account and demonstrate, both theoretically and empirically, the advantages of aggregation methods based on pairwise comparisons, such as the Bradley-Terry (BT) model, a mechanism based on the estimated probability that a given system scores better than another on the test set. By re-evaluating 296 real NLP evaluation setups across four tasks and 18 evaluation metrics, we show that the choice of aggregation mechanism matters and yields different conclusions as to which systems are state of the art in about 30% of the setups. To facilitate the adoption of pairwise evaluation, we release a practical tool for performing the full analysis of evaluation scores with the mean, median, BT, and two variants of BT (Elo and TrueSkill), alongside functionality for appropriate statistical testing.
|
1312.7520
|
Muhammad Imran
|
Muhammad Imran
|
An Effective End-User Development Approach Through Domain-Specific
Mashups for Research Impact Evaluation
|
This PhD dissertation consists of 206 pages
| null | null | null |
cs.DL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last decade, there has been growing interest in the assessment of
the performance of researchers, research groups, universities and even
countries. The assessment of productivity is an instrument to select and
promote personnel, assign research grants and measure the results of research
projects. One particular assessment approach is bibliometrics i.e., the
quantitative analysis of scientific publications through citation and content
analysis. However, there is little consensus today on how research evaluation
should be performed, and it is commonly acknowledged that the quantitative
metrics available today are largely unsatisfactory. A number of different
scientific data sources available on the Web (e.g., DBLP, Google Scholar) are
used for such analysis purposes. Taking data from these diverse sources,
performing the analysis and visualizing results in different ways is not a
trivial and straightforward task. Moreover, people involved in such evaluation
processes are not always IT experts and hence not capable of crawling data
sources, merging them and computing the needed evaluation procedures. The recent
emergence of mashup tools has refueled research on end-user development, i.e.,
on enabling end-users without programming skills to produce their own
applications. We believe that the heart of the problem is that it is
impractical to design tools that are generic enough to cover a wide range of
application domains, powerful enough to enable the specification of non-trivial
logic, and simple enough to be actually accessible to non-programmers. This
thesis presents a novel approach for effective end-user development,
specifically for non-programmers. That is, we introduce a domain-specific
approach to mashups that "speaks the language of users", i.e., that is aware
of the terminology, concepts, rules, and conventions (the domain) the user is
comfortable with.
|
[
{
"created": "Sun, 29 Dec 2013 11:19:21 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Jan 2014 18:11:41 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Jan 2014 19:08:43 GMT",
"version": "v3"
}
] |
2014-01-07
|
[
[
"Imran",
"Muhammad",
""
]
] |
Over the last decade, there has been growing interest in the assessment of the performance of researchers, research groups, universities and even countries. The assessment of productivity is an instrument to select and promote personnel, assign research grants and measure the results of research projects. One particular assessment approach is bibliometrics i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. A number of different scientific data sources available on the Web (e.g., DBLP, Google Scholar) are used for such analysis purposes. Taking data from these diverse sources, performing the analysis and visualizing results in different ways is not a trivial and straightforward task. Moreover, people involved in such evaluation processes are not always IT experts and hence not capable of crawling data sources, merging them and computing the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to produce their own applications. We believe that the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. This thesis presents a novel approach for effective end-user development, specifically for non-programmers. That is, we introduce a domain-specific approach to mashups that "speaks the language of users", i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with.
|
1304.5705
|
Rajendra Bera
|
Rajendra K. Bera
|
A novice looks at emotional cognition
|
11 pages, 1 figure
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling emotional cognition is in a nascent stage and therefore wide open
for new ideas and discussions. In this paper the author looks at the modeling
problem by bringing in ideas from axiomatic mathematics, information theory,
computer science, molecular biology, non-linear dynamical systems and quantum
computing, and explains how ideas from these disciplines may have applications
in modeling emotional cognition.
|
[
{
"created": "Sun, 21 Apr 2013 08:08:38 GMT",
"version": "v1"
}
] |
2013-04-23
|
[
[
"Bera",
"Rajendra K.",
""
]
] |
Modeling emotional cognition is in a nascent stage and therefore wide open for new ideas and discussions. In this paper the author looks at the modeling problem by bringing in ideas from axiomatic mathematics, information theory, computer science, molecular biology, non-linear dynamical systems and quantum computing, and explains how ideas from these disciplines may have applications in modeling emotional cognition.
|
2206.09389
|
Nicolas Peltier
|
Mnacho Echenim and Nicolas Peltier
|
Two Results on Separation Logic With Theory Reasoning
|
ASL 2022 - Workshop on Advancing Separation Logic. arXiv admin note:
substantial text overlap with arXiv:2201.13227
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Two results are presented concerning the entailment problem in Separation
Logic with inductively defined predicate symbols and theory reasoning. First,
we show that the entailment problem is undecidable for rules with bounded
tree-width, if theory reasoning is considered. The result holds for a wide
class of theories, even with a very low expressive power. For instance it
applies to the natural numbers with the successor function, or with the usual
order. Second, we show that every entailment problem can be reduced to an
entailment problem containing no equality (neither in the formulas nor in the
recursive rules defining the semantics of the predicate symbols).
|
[
{
"created": "Sun, 19 Jun 2022 12:40:10 GMT",
"version": "v1"
}
] |
2022-06-22
|
[
[
"Echenim",
"Mnacho",
""
],
[
"Peltier",
"Nicolas",
""
]
] |
Two results are presented concerning the entailment problem in Separation Logic with inductively defined predicate symbols and theory reasoning. First, we show that the entailment problem is undecidable for rules with bounded tree-width, if theory reasoning is considered. The result holds for a wide class of theories, even with a very low expressive power. For instance it applies to the natural numbers with the successor function, or with the usual order. Second, we show that every entailment problem can be reduced to an entailment problem containing no equality (neither in the formulas nor in the recursive rules defining the semantics of the predicate symbols).
|
1407.0474
|
Suzhi Bi
|
Suzhi Bi, Chin Keong Ho, and Rui Zhang
|
Recent Advances in Joint Wireless Energy and Information Transfer
|
Conference submission accepted by ITW 2014
| null | null | null |
cs.NI cs.IT math.IT
|
http://creativecommons.org/licenses/by/3.0/
|
In this paper, we provide an overview of the recent advances in
microwave-enabled wireless energy transfer (WET) technologies and their
applications in wireless communications. Specifically, we divide our
discussions into three parts. First, we introduce the state-of-the-art WET
technologies and the signal processing techniques to maximize the energy
transfer efficiency. Then, we discuss an interesting paradigm named
simultaneous wireless information and power transfer (SWIPT), where energy and
information are jointly transmitted using the same radio waveform. Lastly, we
review the recent progress in wireless powered communication networks (WPCN),
where wireless devices communicate using the power harvested by means of WET.
Extensions and future directions are also discussed in each of these areas.
|
[
{
"created": "Wed, 2 Jul 2014 07:51:08 GMT",
"version": "v1"
}
] |
2014-07-03
|
[
[
"Bi",
"Suzhi",
""
],
[
"Ho",
"Chin Keong",
""
],
[
"Zhang",
"Rui",
""
]
] |
In this paper, we provide an overview of the recent advances in microwave-enabled wireless energy transfer (WET) technologies and their applications in wireless communications. Specifically, we divide our discussions into three parts. First, we introduce the state-of-the-art WET technologies and the signal processing techniques to maximize the energy transfer efficiency. Then, we discuss an interesting paradigm named simultaneous wireless information and power transfer (SWIPT), where energy and information are jointly transmitted using the same radio waveform. Lastly, we review the recent progress in wireless powered communication networks (WPCN), where wireless devices communicate using the power harvested by means of WET. Extensions and future directions are also discussed in each of these areas.
|
2108.07405
|
Xinyue Zhang
|
Xinyue Zhang, Nannan Wu, Zixu Zhen, Wenjun Wang
|
ANOMALYMAXQ:Anomaly-Structured Maximization to Query in Attributed
Network
| null | null | null | null |
cs.DS cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
The detection of anomaly subgraphs naturally appears in various real-life
tasks, yet label noise seriously interferes with the result. As a motivation
for our work, we focus on inaccurate supervision and use prior knowledge to
reduce the effects of noise, such as query graphs. Anomalies in attributed
networks exhibit structured properties, e.g., anomalies in money laundering
with a "ring structure" property. The main challenge is to quickly and
approximately query anomalies in attributed networks. We propose a novel
search method: 1) decomposing a query graph into stars; 2) sorting attributed
vertices; and 3) assembling anomaly stars under the root vertex sequence into
a near query. We present ANOMALYMAXQ and evaluate it on a 68,411-company
network (Tianyancha dataset), 7.72M patent networks (Company patents), and so
on. Extensive experiments show that our method has high robustness and fast
response time. When running the patent dataset, the average running time to
query the graph once is about 252 seconds.
|
[
{
"created": "Tue, 17 Aug 2021 02:13:22 GMT",
"version": "v1"
}
] |
2021-08-18
|
[
[
"Zhang",
"Xinyue",
""
],
[
"Wu",
"Nannan",
""
],
[
"Zhen",
"Zixu",
""
],
[
"Wang",
"Wenjun",
""
]
] |
The detection of anomaly subgraphs naturally appears in various real-life tasks, yet label noise seriously interferes with the result. As a motivation for our work, we focus on inaccurate supervision and use prior knowledge to reduce the effects of noise, such as query graphs. Anomalies in attributed networks exhibit structured properties, e.g., anomalies in money laundering with a "ring structure" property. The main challenge is to quickly and approximately query anomalies in attributed networks. We propose a novel search method: 1) decomposing a query graph into stars; 2) sorting attributed vertices; and 3) assembling anomaly stars under the root vertex sequence into a near query. We present ANOMALYMAXQ and evaluate it on a 68,411-company network (Tianyancha dataset), 7.72M patent networks (Company patents), and so on. Extensive experiments show that our method has high robustness and fast response time. When running the patent dataset, the average running time to query the graph once is about 252 seconds.
|
2001.01328
|
Xuechen Li
|
Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David Duvenaud
|
Scalable Gradients for Stochastic Differential Equations
|
AISTATS 2020; 25 pages, 6 figures in main text; clarify notation in
appendix
| null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adjoint sensitivity method scalably computes gradients of solutions to
ordinary differential equations. We generalize this method to stochastic
differential equations, allowing time-efficient and constant-memory computation
of gradients with high-order adaptive solvers. Specifically, we derive a
stochastic differential equation whose solution is the gradient, a
memory-efficient algorithm for caching noise, and conditions under which
numerical solutions converge. In addition, we combine our method with
gradient-based stochastic variational inference for latent stochastic
differential equations. We use our method to fit stochastic dynamics defined by
neural networks, achieving competitive performance on a 50-dimensional motion
capture dataset.
|
[
{
"created": "Sun, 5 Jan 2020 23:05:55 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jan 2020 07:00:00 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Jan 2020 18:15:19 GMT",
"version": "v3"
},
{
"created": "Mon, 24 Feb 2020 17:27:17 GMT",
"version": "v4"
},
{
"created": "Tue, 7 Jul 2020 05:40:07 GMT",
"version": "v5"
},
{
"created": "Sun, 18 Oct 2020 21:16:05 GMT",
"version": "v6"
}
] |
2020-10-20
|
[
[
"Li",
"Xuechen",
""
],
[
"Wong",
"Ting-Kam Leonard",
""
],
[
"Chen",
"Ricky T. Q.",
""
],
[
"Duvenaud",
"David",
""
]
] |
The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
|
2308.16602
|
Nesreen Mufid
|
Nesreen Mufid
|
Design Challenges for the Implementation of Smart Homes
| null | null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Home automation has for many years faced challenges that limit its spread
around the world. These challenges are caused by the high cost of owning such a
home, system inflexibility (the home cannot be monitored from outside), and
difficulties in achieving optimal security. Our main objective is to design and
implement a smart home model that is simple and affordable to users. The
proposed system provides the flexibility to monitor the home using the reliable
cellular network. The user will be able to see what is inside the home when he
or she is away. In addition, our model overcomes the security issue by
providing different sensors that detect smoke, gas, water leakage, and
burglary. Moreover, a camera in the home gives the user a full view when he or
she is outside. The user will be informed by an application on his or her phone
in case of fire, water leakage, or a break-in, giving the user a chance to take
action if such cases occur. Furthermore, the user can control the lighting
system of the home by turning the lights on and off remotely.
|
[
{
"created": "Thu, 31 Aug 2023 10:03:29 GMT",
"version": "v1"
}
] |
2023-09-01
|
[
[
"Mufid",
"Nesreen",
""
]
] |
Home automation has for many years faced challenges that limit its spread around the world. These challenges are caused by the high cost of owning such a home, system inflexibility (the home cannot be monitored from outside), and difficulties in achieving optimal security. Our main objective is to design and implement a smart home model that is simple and affordable to users. The proposed system provides the flexibility to monitor the home using the reliable cellular network. The user will be able to see what is inside the home when he or she is away. In addition, our model overcomes the security issue by providing different sensors that detect smoke, gas, water leakage, and burglary. Moreover, a camera in the home gives the user a full view when he or she is outside. The user will be informed by an application on his or her phone in case of fire, water leakage, or a break-in, giving the user a chance to take action if such cases occur. Furthermore, the user can control the lighting system of the home by turning the lights on and off remotely.
|
cs/0411020
|
Vedran Kordic
|
A. Albagul and Wahyudi
|
Dynamic Modelling and Adaptive Traction Control for Mobile Robots
| null |
International Journal of Advanced Robotic Systems, Volume 1,
Number 3, September 2004, pp.149-154
| null | null |
cs.RO
| null |
Mobile robots have received a great deal of research attention in recent years,
and a significant amount of work has been published on many aspects of mobile
robots. Most of this research is devoted to designing and developing control
techniques for robot motion and path planning. A large number of researchers
have used kinematic models to develop motion control strategies for mobile
robots, arguing that these models are valid if the robot has low speed, low
acceleration, and a light load. However, dynamic modelling of mobile robots is
very important, as they are designed to travel at higher speeds and perform
heavy-duty work. This paper presents and discusses a new approach to developing
a dynamic model and control strategy for a wheeled mobile robot, which is
modelled as a rigid body that rolls on two wheels and a castor. The motion
control strategy consists of two levels. The first level deals with the
dynamics of the system and is denoted the low-level controller. The second
level takes care of path planning and trajectory generation.
|
[
{
"created": "Mon, 8 Nov 2004 20:44:03 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Albagul",
"A.",
""
],
[
"Wahyudi",
"",
""
]
] |
Mobile robots have received a great deal of research attention in recent years, and a significant amount of work has been published on many aspects of mobile robots. Most of this research is devoted to designing and developing control techniques for robot motion and path planning. A large number of researchers have used kinematic models to develop motion control strategies for mobile robots, arguing that these models are valid if the robot has low speed, low acceleration, and a light load. However, dynamic modelling of mobile robots is very important, as they are designed to travel at higher speeds and perform heavy-duty work. This paper presents and discusses a new approach to developing a dynamic model and control strategy for a wheeled mobile robot, which is modelled as a rigid body that rolls on two wheels and a castor. The motion control strategy consists of two levels. The first level deals with the dynamics of the system and is denoted the low-level controller. The second level takes care of path planning and trajectory generation.
|
2102.03583
|
Vincent Neiger
|
Seung Gyu Hyun, Vincent Neiger, \'Eric Schost
|
Algorithms for Linearly Recurrent Sequences of Truncated Polynomials
|
8 pages, ISSAC 2021
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear recurrent sequences are those whose elements are defined as linear
combinations of preceding elements, and finding recurrence relations is a
fundamental problem in computer algebra. In this paper, we focus on sequences
whose elements are vectors over the ring $\mathbb{A} = \mathbb{K}[x]/(x^d)$ of
truncated polynomials. Finding the ideal of their recurrence relations has
applications such as the computation of minimal polynomials and determinants of
sparse matrices over $\mathbb{A}$. We present three methods for finding this
ideal: a Berlekamp-Massey-like approach due to Kurakin, one which computes the
kernel of some block-Hankel matrix over $\mathbb{A}$ via a minimal approximant
basis, and one based on bivariate Pad\'e approximation. We propose complexity
improvements for the first two methods, respectively by avoiding the
computation of redundant relations and by exploiting the Hankel structure to
compress the approximation problem. Then we confirm these improvements
empirically through a C++ implementation, and we discuss the above-mentioned
applications.
|
[
{
"created": "Sat, 6 Feb 2021 13:21:03 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Jun 2021 20:23:09 GMT",
"version": "v2"
}
] |
2021-06-10
|
[
[
"Hyun",
"Seung Gyu",
""
],
[
"Neiger",
"Vincent",
""
],
[
"Schost",
"Éric",
""
]
] |
Linear recurrent sequences are those whose elements are defined as linear combinations of preceding elements, and finding recurrence relations is a fundamental problem in computer algebra. In this paper, we focus on sequences whose elements are vectors over the ring $\mathbb{A} = \mathbb{K}[x]/(x^d)$ of truncated polynomials. Finding the ideal of their recurrence relations has applications such as the computation of minimal polynomials and determinants of sparse matrices over $\mathbb{A}$. We present three methods for finding this ideal: a Berlekamp-Massey-like approach due to Kurakin, one which computes the kernel of some block-Hankel matrix over $\mathbb{A}$ via a minimal approximant basis, and one based on bivariate Pad\'e approximation. We propose complexity improvements for the first two methods, respectively by avoiding the computation of redundant relations and by exploiting the Hankel structure to compress the approximation problem. Then we confirm these improvements empirically through a C++ implementation, and we discuss the above-mentioned applications.
|
1212.0114
|
Dilip Aldar
|
Bhalchandra B. Godbole, Dilip S. Aldar
|
Performance Improvement by Changing Modulation Methods for Software
Defined Radios
|
IJACSA
| null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes automatic switching of modulation methods to reconfigure
the transceivers of a Software Defined Radio (SDR) based wireless communication
system. The programmable architecture of software radio promotes a flexible
implementation of modulation methods. This flexibility also translates into
adaptivity, which is used here to optimize the throughput of a wireless network
operating under varying channel conditions. The technique is robust and
efficient, with a processing-time overhead that still allows the SDR to
maintain its real-time operating objectives, and is studied for digital
wireless communication systems. Tests and simulations using an AWGN channel
show that the SNR threshold is 5 dB for the case study.
|
[
{
"created": "Sat, 1 Dec 2012 13:47:25 GMT",
"version": "v1"
}
] |
2012-12-04
|
[
[
"Godbole",
"Bhalchandra B.",
""
],
[
"Aldar",
"Dilip S.",
""
]
] |
This paper describes automatic switching of modulation methods to reconfigure the transceivers of a Software Defined Radio (SDR) based wireless communication system. The programmable architecture of software radio promotes a flexible implementation of modulation methods. This flexibility also translates into adaptivity, which is used here to optimize the throughput of a wireless network operating under varying channel conditions. The technique is robust and efficient, with a processing-time overhead that still allows the SDR to maintain its real-time operating objectives, and is studied for digital wireless communication systems. Tests and simulations using an AWGN channel show that the SNR threshold is 5 dB for the case study.
|
1806.02300
|
Yuankai Huo
|
Yuankai Huo, Katherine Swett, Susan M. Resnick, Laurie E. Cutting,
Bennett A. Landman
|
Data-driven Probabilistic Atlases Capture Whole-brain Individual
Variation
| null | null | null | null |
cs.LG q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Probabilistic atlases provide essential spatial contextual information for
image interpretation, Bayesian modeling, and algorithmic processing. Such
atlases are typically constructed by grouping subjects with similar demographic
information. Importantly, use of the same scanner minimizes inter-group
variability. However, the generalizability and spatial specificity of such
approaches are more limited than one might like. Inspired by Commowick's
"Frankenstein's creature paradigm", which builds a person-specific anatomical
atlas, we propose a data-driven framework to build a person-specific
probabilistic atlas under a large-scale data scheme. The data-driven framework
clusters regions with similar features using a point distribution model to
learn different anatomical phenotypes. Regional structural atlases and
corresponding regional probabilistic atlases are used as indices and targets in
the dictionary. By indexing the dictionary, the whole-brain probabilistic
atlases adapt to each new subject quickly and can be used as spatial priors for
visualization and processing. The novelties of this approach are: (1) it
provides a new perspective on generating person-specific whole-brain
probabilistic atlases (132 regions) under a data-driven scheme across sites;
(2) the framework employs a large amount of heterogeneous data (2349 images);
(3) the proposed framework achieves low computational cost, since only one
affine registration and one Pearson correlation operation are required for a
new subject. Our method matches individual regions better, with higher Dice
similarity values when testing the probabilistic atlases. Importantly, the
advantage of the large-scale scheme is demonstrated by the better performance
of a large training set (1888 images) over a smaller one (720 images).
|
[
{
"created": "Wed, 6 Jun 2018 16:53:55 GMT",
"version": "v1"
}
] |
2018-06-07
|
[
[
"Huo",
"Yuankai",
""
],
[
"Swett",
"Katherine",
""
],
[
"Resnick",
"Susan M.",
""
],
[
"Cutting",
"Laurie E.",
""
],
[
"Landman",
"Bennett A.",
""
]
] |
Probabilistic atlases provide essential spatial contextual information for image interpretation, Bayesian modeling, and algorithmic processing. Such atlases are typically constructed by grouping subjects with similar demographic information. Importantly, use of the same scanner minimizes inter-group variability. However, the generalizability and spatial specificity of such approaches are more limited than one might like. Inspired by Commowick's "Frankenstein's creature paradigm", which builds a person-specific anatomical atlas, we propose a data-driven framework to build a person-specific probabilistic atlas under a large-scale data scheme. The data-driven framework clusters regions with similar features using a point distribution model to learn different anatomical phenotypes. Regional structural atlases and corresponding regional probabilistic atlases are used as indices and targets in the dictionary. By indexing the dictionary, the whole-brain probabilistic atlases adapt to each new subject quickly and can be used as spatial priors for visualization and processing. The novelties of this approach are: (1) it provides a new perspective on generating person-specific whole-brain probabilistic atlases (132 regions) under a data-driven scheme across sites; (2) the framework employs a large amount of heterogeneous data (2349 images); (3) the proposed framework achieves low computational cost, since only one affine registration and one Pearson correlation operation are required for a new subject. Our method matches individual regions better, with higher Dice similarity values when testing the probabilistic atlases. Importantly, the advantage of the large-scale scheme is demonstrated by the better performance of a large training set (1888 images) over a smaller one (720 images).
|
2404.09737
|
Daria Cherniuk
|
Daniil Merkulov, Daria Cherniuk, Alexander Rudikov, Ivan Oseledets,
Ekaterina Muravleva, Aleksandr Mikhalev, Boris Kashin
|
Quantization of Large Language Models with an Overdetermined Basis
| null | null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce an algorithm for data quantization based on the
principles of Kashin representation. This approach hinges on decomposing any
given vector, matrix, or tensor into two factors. The first factor maintains a
small infinity norm, while the second exhibits a similarly constrained norm
when multiplied by an orthogonal matrix. Surprisingly, the entries of factors
after decomposition are well-concentrated around several peaks, which allows us
to efficiently replace them with corresponding centroids for quantization
purposes. We study the theoretical properties of the proposed approach and
rigorously evaluate our compression algorithm in the context of next-word
prediction tasks and on a set of downstream tasks for text classification. Our
findings demonstrate that Kashin Quantization achieves competitive or superior
quality in model performance while ensuring data compression, marking a
significant advancement in the field of data quantization.
|
[
{
"created": "Mon, 15 Apr 2024 12:38:46 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Merkulov",
"Daniil",
""
],
[
"Cherniuk",
"Daria",
""
],
[
"Rudikov",
"Alexander",
""
],
[
"Oseledets",
"Ivan",
""
],
[
"Muravleva",
"Ekaterina",
""
],
[
"Mikhalev",
"Aleksandr",
""
],
[
"Kashin",
"Boris",
""
]
] |
In this paper, we introduce an algorithm for data quantization based on the principles of Kashin representation. This approach hinges on decomposing any given vector, matrix, or tensor into two factors. The first factor maintains a small infinity norm, while the second exhibits a similarly constrained norm when multiplied by an orthogonal matrix. Surprisingly, the entries of factors after decomposition are well-concentrated around several peaks, which allows us to efficiently replace them with corresponding centroids for quantization purposes. We study the theoretical properties of the proposed approach and rigorously evaluate our compression algorithm in the context of next-word prediction tasks and on a set of downstream tasks for text classification. Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance while ensuring data compression, marking a significant advancement in the field of data quantization.
|
1703.06520
|
Baoguang Shi
|
Baoguang Shi, Xiang Bai, Serge Belongie
|
Detecting Oriented Text in Natural Images by Linking Segments
|
To Appear in CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most state-of-the-art text detection methods are specific to horizontal Latin
text and are not fast enough for real-time applications. We introduce Segment
Linking (SegLink), an oriented text detection method. The main idea is to
decompose text into two locally detectable elements, namely segments and links.
A segment is an oriented box covering a part of a word or text line; a link
connects two adjacent segments, indicating that they belong to the same word or
text line. Both elements are detected densely at multiple scales by an
end-to-end trained, fully-convolutional neural network. Final detections are
produced by combining segments connected by links. Compared with previous
methods, SegLink improves along the dimensions of accuracy, speed, and ease of
training. It achieves an f-measure of 75.0% on the standard ICDAR 2015
Incidental (Challenge 4) benchmark, outperforming the previous best by a large
margin. It runs at over 20 FPS on 512x512 images. Moreover, without
modification, SegLink is able to detect long lines of non-Latin text, such as
Chinese.
|
[
{
"created": "Sun, 19 Mar 2017 21:43:41 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Mar 2017 03:18:55 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Apr 2017 17:40:43 GMT",
"version": "v3"
}
] |
2017-04-14
|
[
[
"Shi",
"Baoguang",
""
],
[
"Bai",
"Xiang",
""
],
[
"Belongie",
"Serge",
""
]
] |
Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; a link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0% on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.
|
2206.11489
|
Pihe Hu
|
Pihe Hu, Yu Chen, Longbo Huang
|
Nearly Minimax Optimal Reinforcement Learning with Linear Function
Approximation
|
This is an updated version of our ICML camera-ready version, which
has a technical error in building the over-optimistic value function. In this
version, this error is fixed using the technique of the "rare-switching"
value function from (He et al., 2022)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study reinforcement learning with linear function approximation where the
transition probability and reward functions are linear with respect to a
feature mapping $\boldsymbol{\phi}(s,a)$. Specifically, we consider the
episodic inhomogeneous linear Markov Decision Process (MDP), and propose a
novel computation-efficient algorithm, LSVI-UCB$^+$, which achieves an
$\widetilde{O}(Hd\sqrt{T})$ regret bound where $H$ is the episode length, $d$
is the feature dimension, and $T$ is the number of steps. LSVI-UCB$^+$ builds
on weighted ridge regression and upper confidence value iteration with a
Bernstein-type exploration bonus. Our statistical results are obtained with
novel analytical tools, including a new Bernstein self-normalized bound with
conservatism on elliptical potentials, and refined analysis of the correction
term. This is a minimax optimal algorithm for linear MDPs up to logarithmic
factors, which closes the $\sqrt{Hd}$ gap between the upper bound of
$\widetilde{O}(\sqrt{H^3d^3T})$ in (Jin et al., 2020) and lower bound of
$\Omega(Hd\sqrt{T})$ for linear MDPs.
|
[
{
"created": "Thu, 23 Jun 2022 06:04:21 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Jan 2023 17:25:20 GMT",
"version": "v2"
},
{
"created": "Sun, 29 Jan 2023 16:14:06 GMT",
"version": "v3"
}
] |
2023-01-31
|
[
[
"Hu",
"Pihe",
""
],
[
"Chen",
"Yu",
""
],
[
"Huang",
"Longbo",
""
]
] |
We study reinforcement learning with linear function approximation where the transition probability and reward functions are linear with respect to a feature mapping $\boldsymbol{\phi}(s,a)$. Specifically, we consider the episodic inhomogeneous linear Markov Decision Process (MDP), and propose a novel computation-efficient algorithm, LSVI-UCB$^+$, which achieves an $\widetilde{O}(Hd\sqrt{T})$ regret bound where $H$ is the episode length, $d$ is the feature dimension, and $T$ is the number of steps. LSVI-UCB$^+$ builds on weighted ridge regression and upper confidence value iteration with a Bernstein-type exploration bonus. Our statistical results are obtained with novel analytical tools, including a new Bernstein self-normalized bound with conservatism on elliptical potentials, and refined analysis of the correction term. This is a minimax optimal algorithm for linear MDPs up to logarithmic factors, which closes the $\sqrt{Hd}$ gap between the upper bound of $\widetilde{O}(\sqrt{H^3d^3T})$ in (Jin et al., 2020) and lower bound of $\Omega(Hd\sqrt{T})$ for linear MDPs.
|
2312.04606
|
Fengze Sun
|
Fengze Sun, Jianzhong Qi, Yanchuan Chang, Xiaoliang Fan, Shanika
Karunasekera, Egemen Tanin
|
Urban Region Representation Learning with Attentive Fusion
| null | null | null | null |
cs.LG cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An increasing number of related urban data sources have brought forth novel
opportunities for learning urban region representations, i.e., embeddings. The
embeddings describe latent features of urban regions and enable discovering
similar regions for urban planning applications. Existing methods learn an
embedding for a region using every different type of region feature data, and
subsequently fuse all learned embeddings of a region to generate a unified
region embedding. However, these studies often overlook the significance of the
fusion process. The typical fusion methods rely on simple aggregation, such as
summation and concatenation, thereby disregarding correlations within the fused
region embeddings.
To address this limitation, we propose a novel model named HAFusion. Our
model is powered by a dual-feature attentive fusion module named DAFusion,
which fuses embeddings from different region features to learn higher-order
correlations between the regions as well as between the different types of
region features. DAFusion is generic - it can be integrated into existing
models to enhance their fusion process. Further, motivated by the effective
fusion capability of an attentive module, we propose a hybrid attentive feature
learning module named HALearning to enhance the embedding learning from each
individual type of region features. Extensive experiments on three real-world
datasets demonstrate that our model HAFusion outperforms state-of-the-art
methods across three different prediction tasks. Using our learned region
embedding leads to consistent and up to 31% improvements in the prediction
accuracy.
|
[
{
"created": "Thu, 7 Dec 2023 11:05:06 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 05:56:32 GMT",
"version": "v2"
}
] |
2024-04-29
|
[
[
"Sun",
"Fengze",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Chang",
"Yanchuan",
""
],
[
"Fan",
"Xiaoliang",
""
],
[
"Karunasekera",
"Shanika",
""
],
[
"Tanin",
"Egemen",
""
]
] |
An increasing number of related urban data sources have brought forth novel opportunities for learning urban region representations, i.e., embeddings. The embeddings describe latent features of urban regions and enable discovering similar regions for urban planning applications. Existing methods learn an embedding for a region using every different type of region feature data, and subsequently fuse all learned embeddings of a region to generate a unified region embedding. However, these studies often overlook the significance of the fusion process. The typical fusion methods rely on simple aggregation, such as summation and concatenation, thereby disregarding correlations within the fused region embeddings. To address this limitation, we propose a novel model named HAFusion. Our model is powered by a dual-feature attentive fusion module named DAFusion, which fuses embeddings from different region features to learn higher-order correlations between the regions as well as between the different types of region features. DAFusion is generic - it can be integrated into existing models to enhance their fusion process. Further, motivated by the effective fusion capability of an attentive module, we propose a hybrid attentive feature learning module named HALearning to enhance the embedding learning from each individual type of region features. Extensive experiments on three real-world datasets demonstrate that our model HAFusion outperforms state-of-the-art methods across three different prediction tasks. Using our learned region embedding leads to consistent and up to 31% improvements in the prediction accuracy.
|
1710.08014
|
Wenguan Wang
|
Wenguan Wang and Jianbing Shen
|
Deep Cropping via Attention Box Prediction and Aesthetics Assessment
|
Accepted by ICCV2017
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We model the photo cropping problem as a cascade of attention box regression
and aesthetic quality classification, based on deep learning. A neural network
is designed that has two branches for predicting attention bounding box and
analyzing aesthetics, respectively. The predicted attention box is treated as
an initial crop window where a set of cropping candidates are generated around
it, without missing important information. Then, aesthetics assessment is
employed to select the final crop as the one with the best aesthetic quality.
With our network, cropping candidates share features within full-image
convolutional feature maps, thus avoiding repeated feature computation and
leading to higher computation efficiency. Via leveraging rich data for
attention prediction and aesthetics assessment, the proposed method produces
high-quality cropping results, even with the limited availability of training
data for photo cropping. The experimental results demonstrate the competitive
results and fast processing speed (5 fps with all steps).
|
[
{
"created": "Sun, 22 Oct 2017 21:03:01 GMT",
"version": "v1"
}
] |
2017-10-24
|
[
[
"Wang",
"Wenguan",
""
],
[
"Shen",
"Jianbing",
""
]
] |
We model the photo cropping problem as a cascade of attention box regression and aesthetic quality classification, based on deep learning. A neural network is designed that has two branches for predicting attention bounding box and analyzing aesthetics, respectively. The predicted attention box is treated as an initial crop window where a set of cropping candidates are generated around it, without missing important information. Then, aesthetics assessment is employed to select the final crop as the one with the best aesthetic quality. With our network, cropping candidates share features within full-image convolutional feature maps, thus avoiding repeated feature computation and leading to higher computation efficiency. Via leveraging rich data for attention prediction and aesthetics assessment, the proposed method produces high-quality cropping results, even with the limited availability of training data for photo cropping. The experimental results demonstrate the competitive results and fast processing speed (5 fps with all steps).
|
1805.04605
|
Adrian Dalca
|
Adrian V. Dalca, Guha Balakrishnan, John Guttag, Mert R. Sabuncu
|
Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration
|
MICCAI 2018 (Oral Presentation). Proceedings: LNCS 11070, pp 729-738
|
LNCS 11070, pp 729-738, Springer. 2018
|
10.1007/978-3-030-00928-1_82
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional deformable registration techniques achieve impressive results and
offer a rigorous theoretical treatment, but are computationally intensive since
they solve an optimization problem for each image pair. Recently,
learning-based methods have facilitated fast registration by learning spatial
deformation functions. However, these approaches use restricted deformation
models, require supervised labels, or do not guarantee a diffeomorphic
(topology-preserving) registration. Furthermore, learning-based registration
tools have not been derived from a probabilistic framework that can offer
uncertainty estimates. In this paper, we present a probabilistic generative
model and derive an unsupervised learning-based inference algorithm that makes
use of recent developments in convolutional neural networks (CNNs). We
demonstrate our method on a 3D brain registration task, and provide an
empirical analysis of the algorithm. Our approach results in state of the art
accuracy and very fast runtimes, while providing diffeomorphic guarantees and
uncertainty estimates. Our implementation is available online at
http://voxelmorph.csail.mit.edu .
|
[
{
"created": "Fri, 11 May 2018 22:12:01 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Sep 2018 13:28:36 GMT",
"version": "v2"
}
] |
2019-03-15
|
[
[
"Dalca",
"Adrian V.",
""
],
[
"Balakrishnan",
"Guha",
""
],
[
"Guttag",
"John",
""
],
[
"Sabuncu",
"Mert R.",
""
]
] |
Traditional deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task, and provide an empirical analysis of the algorithm. Our approach results in state of the art accuracy and very fast runtimes, while providing diffeomorphic guarantees and uncertainty estimates. Our implementation is available online at http://voxelmorph.csail.mit.edu .
|
2307.01448
|
Ming Zhong
|
Ming Zhong, Siru Ouyang, Minhao Jiang, Vivian Hu, Yizhu Jiao, Xuan
Wang, Jiawei Han
|
ReactIE: Enhancing Chemical Reaction Extraction with Weak Supervision
|
Findings of ACL 2023, Short Paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Structured chemical reaction information plays a vital role for chemists
engaged in laboratory work and advanced endeavors such as computer-aided drug
design. Despite the importance of extracting structured reactions from
scientific literature, data annotation for this purpose is cost-prohibitive due
to the significant labor required from domain experts. Consequently, the
scarcity of sufficient training data poses an obstacle to the progress of
related models in this domain. In this paper, we propose ReactIE, which
combines two weakly supervised approaches for pre-training. Our method utilizes
frequent patterns within the text as linguistic cues to identify specific
characteristics of chemical reactions. Additionally, we adopt synthetic data
from patent records as distant supervision to incorporate domain knowledge into
the model. Experiments demonstrate that ReactIE achieves substantial
improvements and outperforms all existing baselines.
|
[
{
"created": "Tue, 4 Jul 2023 02:52:30 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Zhong",
"Ming",
""
],
[
"Ouyang",
"Siru",
""
],
[
"Jiang",
"Minhao",
""
],
[
"Hu",
"Vivian",
""
],
[
"Jiao",
"Yizhu",
""
],
[
"Wang",
"Xuan",
""
],
[
"Han",
"Jiawei",
""
]
] |
Structured chemical reaction information plays a vital role for chemists engaged in laboratory work and advanced endeavors such as computer-aided drug design. Despite the importance of extracting structured reactions from scientific literature, data annotation for this purpose is cost-prohibitive due to the significant labor required from domain experts. Consequently, the scarcity of sufficient training data poses an obstacle to the progress of related models in this domain. In this paper, we propose ReactIE, which combines two weakly supervised approaches for pre-training. Our method utilizes frequent patterns within the text as linguistic cues to identify specific characteristics of chemical reactions. Additionally, we adopt synthetic data from patent records as distant supervision to incorporate domain knowledge into the model. Experiments demonstrate that ReactIE achieves substantial improvements and outperforms all existing baselines.
|