| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2012.09476 | Albert Atserias | Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordstr\"om, and Alexander Razborov | Clique Is Hard on Average for Regular Resolution | null | null | null | null | cs.CC | http://creativecommons.org/licenses/by/4.0/ | We prove that for $k \ll \sqrt[4]{n}$ regular resolution requires length $n^{\Omega(k)}$ to establish that an Erd\H{o}s-R\'enyi graph with appropriately chosen edge density does not contain a $k$-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional $n^{\Omega(k)}$ lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. | [{"created": "Thu, 17 Dec 2020 10:02:34 GMT", "version": "v1"}] | 2020-12-18 | [["Atserias", "Albert", ""], ["Bonacina", "Ilario", ""], ["de Rezende", "Susanna F.", ""], ["Lauria", "Massimo", ""], ["Nordström", "Jakob", ""], ["Razborov", "Alexander", ""]] | We prove that for $k \ll \sqrt[4]{n}$ regular resolution requires length $n^{\Omega(k)}$ to establish that an Erd\H{o}s-R\'enyi graph with appropriately chosen edge density does not contain a $k$-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional $n^{\Omega(k)}$ lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. |
| 1707.00644 | Cedomir Stefanovic | Gerhard Wunder, Cedomir Stefanovic, Petar Popovski, Lars Thiele | Compressive Coded Random Access for Massive MTC Traffic in 5G Systems | Presented at 49th Asilomar Conference on Signals, Systems and Computers 2015. arXiv admin note: text overlap with arXiv:1504.05318 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Massive MTC support is an important future market segment, but not yet efficiently supported in cellular systems. In this paper we follow-up on recent concepts combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a concept for sparse joint activity, channel and data detection in the context of the Coded ALOHA (FDMA) protocol. We will argue that a simple sparse activity and data detection is not sufficient (as many papers do) because control resources are in the order of the data. In addition, we will improve on the performance of such protocols in terms of the reduction of resources required for the user activity, channel estimation and data detection. We will mathematically analyze the system accordingly and provide expressions for the capture probabilities of the underlying sparse multiuser detector. Finally, we will provide structured CS algorithms for the joint estimation scheme and evaluate its performance. | [{"created": "Thu, 29 Jun 2017 19:37:33 GMT", "version": "v1"}] | 2017-07-04 | [["Wunder", "Gerhard", ""], ["Stefanovic", "Cedomir", ""], ["Popovski", "Petar", ""], ["Thiele", "Lars", ""]] | Massive MTC support is an important future market segment, but not yet efficiently supported in cellular systems. In this paper we follow-up on recent concepts combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a concept for sparse joint activity, channel and data detection in the context of the Coded ALOHA (FDMA) protocol. We will argue that a simple sparse activity and data detection is not sufficient (as many papers do) because control resources are in the order of the data. In addition, we will improve on the performance of such protocols in terms of the reduction of resources required for the user activity, channel estimation and data detection. We will mathematically analyze the system accordingly and provide expressions for the capture probabilities of the underlying sparse multiuser detector. Finally, we will provide structured CS algorithms for the joint estimation scheme and evaluate its performance. |
| 1302.5681 | Samantha Leung | Joseph Y. Halpern and Samantha Leung | Weighted Sets of Probabilities and Minimax Weighted Expected Regret: New Approaches for Representing Uncertainty and Making Decisions | Full version of an article [arXiv:1210.4853] that appeared in UAI '12 | null | null | null | cs.GT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a setting where an agent's uncertainty is represented by a set of probability measures, rather than a single measure. Measure-by-measure updating of such a set of measures upon acquiring new information is well-known to suffer from problems; agents are not always able to learn appropriately. To deal with these problems, we propose using weighted sets of probabilities: a representation where each measure is associated with a weight, which denotes its significance. We describe a natural approach to updating in such a situation and a natural approach to determining the weights. We then show how this representation can be used in decision-making, by modifying a standard approach to decision making -- minimizing expected regret -- to obtain minimax weighted expected regret (MWER). We provide an axiomatization that characterizes preferences induced by MWER both in the static and dynamic case. | [{"created": "Thu, 21 Feb 2013 20:11:12 GMT", "version": "v1"}] | 2016-11-04 | [["Halpern", "Joseph Y.", ""], ["Leung", "Samantha", ""]] | We consider a setting where an agent's uncertainty is represented by a set of probability measures, rather than a single measure. Measure-by-measure updating of such a set of measures upon acquiring new information is well-known to suffer from problems; agents are not always able to learn appropriately. To deal with these problems, we propose using weighted sets of probabilities: a representation where each measure is associated with a weight, which denotes its significance. We describe a natural approach to updating in such a situation and a natural approach to determining the weights. We then show how this representation can be used in decision-making, by modifying a standard approach to decision making -- minimizing expected regret -- to obtain minimax weighted expected regret (MWER). We provide an axiomatization that characterizes preferences induced by MWER both in the static and dynamic case. |
| 1911.02241 | Haoyuan Pan | Haoyuan Pan, Soung Chang Liew | Information Update: TDMA or FDMA? | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies information freshness in information update systems operated with TDMA and FDMA. Information freshness is characterized by a recently introduced metric, age of information (AoI), defined as the time elapsed since the generation of the last successfully received update. In an update system with multiple users sharing the same wireless channel to send updates to a common receiver, how to divide the channel among users affects information freshness. We investigate the AoI performances of two fundamental multiple access schemes, TDMA and FDMA. We first derive the time-averaged AoI by estimating the packet error rate of short update packets based on Gallager's random coding bound. For time-critical systems, we further define a new AoI metric, termed bounded AoI, which corresponds to an AoI threshold for the instantaneous AoI. Specifically, the instantaneous AoI is below the bounded AoI a large percentage of the time. We give a theoretical upper bound for bounded AoI. Our simulation results are consistent with our theoretical analysis. Although TDMA outperforms FDMA in terms of average AoI, FDMA is more robust against varying channel conditions since it gives a more stable bounded AoI across different received powers. Overall, our findings give insight to the design of practical multiple access systems with AoI requirements. | [{"created": "Wed, 6 Nov 2019 07:59:07 GMT", "version": "v1"}] | 2019-11-07 | [["Pan", "Haoyuan", ""], ["Liew", "Soung Chang", ""]] | This paper studies information freshness in information update systems operated with TDMA and FDMA. Information freshness is characterized by a recently introduced metric, age of information (AoI), defined as the time elapsed since the generation of the last successfully received update. In an update system with multiple users sharing the same wireless channel to send updates to a common receiver, how to divide the channel among users affects information freshness. We investigate the AoI performances of two fundamental multiple access schemes, TDMA and FDMA. We first derive the time-averaged AoI by estimating the packet error rate of short update packets based on Gallager's random coding bound. For time-critical systems, we further define a new AoI metric, termed bounded AoI, which corresponds to an AoI threshold for the instantaneous AoI. Specifically, the instantaneous AoI is below the bounded AoI a large percentage of the time. We give a theoretical upper bound for bounded AoI. Our simulation results are consistent with our theoretical analysis. Although TDMA outperforms FDMA in terms of average AoI, FDMA is more robust against varying channel conditions since it gives a more stable bounded AoI across different received powers. Overall, our findings give insight to the design of practical multiple access systems with AoI requirements. |
| 2304.08802 | Stein Stroobants | Stein Stroobants, Julien Dupeyroux, Guido C.H.E. de Croon | Neuromorphic computing for attitude estimation onboard quadrotors | null | Neuromorphic Computing and Engineering 2.3 (2022): 034005 | 10.1088/2634-4386/ac7ee0 | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Compelling evidence has been given for the high energy efficiency and update rates of neuromorphic processors, with performance beyond what standard Von Neumann architectures can achieve. Such promising features could be advantageous in critical embedded systems, especially in robotics. To date, the constraints inherent in robots (e.g., size and weight, battery autonomy, available sensors, computing resources, processing time, etc.), and particularly in aerial vehicles, severely hamper the performance of fully-autonomous on-board control, including sensor processing and state estimation. In this work, we propose a spiking neural network (SNN) capable of estimating the pitch and roll angles of a quadrotor in highly dynamic movements from 6-degree of freedom Inertial Measurement Unit (IMU) data. With only 150 neurons and a limited training dataset obtained using a quadrotor in a real world setup, the network shows competitive results as compared to state-of-the-art, non-neuromorphic attitude estimators. The proposed architecture was successfully tested on the Loihi neuromorphic processor on-board a quadrotor to estimate the attitude when flying. Our results show the robustness of neuromorphic attitude estimation and pave the way towards energy-efficient, fully autonomous control of quadrotors with dedicated neuromorphic computing systems. | [{"created": "Tue, 18 Apr 2023 08:07:22 GMT", "version": "v1"}] | 2023-04-19 | [["Stroobants", "Stein", ""], ["Dupeyroux", "Julien", ""], ["de Croon", "Guido C. H. E.", ""]] | Compelling evidence has been given for the high energy efficiency and update rates of neuromorphic processors, with performance beyond what standard Von Neumann architectures can achieve. Such promising features could be advantageous in critical embedded systems, especially in robotics. To date, the constraints inherent in robots (e.g., size and weight, battery autonomy, available sensors, computing resources, processing time, etc.), and particularly in aerial vehicles, severely hamper the performance of fully-autonomous on-board control, including sensor processing and state estimation. In this work, we propose a spiking neural network (SNN) capable of estimating the pitch and roll angles of a quadrotor in highly dynamic movements from 6-degree of freedom Inertial Measurement Unit (IMU) data. With only 150 neurons and a limited training dataset obtained using a quadrotor in a real world setup, the network shows competitive results as compared to state-of-the-art, non-neuromorphic attitude estimators. The proposed architecture was successfully tested on the Loihi neuromorphic processor on-board a quadrotor to estimate the attitude when flying. Our results show the robustness of neuromorphic attitude estimation and pave the way towards energy-efficient, fully autonomous control of quadrotors with dedicated neuromorphic computing systems. |
| 2208.06233 | Simone M\"uller | Simone M\"uller and Dieter Kranzlm\"uller | Dynamic Sensor Matching based on Geomagnetic Inertial Navigation | Page 16-25 | Journal of WSCG, 2022, Vol.30., No.1-2, ISSN 1213-6972 | 10.24132/JWSCG.2022.3 | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Optical sensors can capture dynamic environments and derive depth information in near real-time. The quality of these digital reconstructions is determined by factors like illumination, surface and texture conditions, sensing speed and other sensor characteristics as well as the sensor-object relations. Improvements can be obtained by using dynamically collected data from multiple sensors. However, matching the data from multiple sensors requires a shared world coordinate system. We present a concept for transferring multi-sensor data into a commonly referenced world coordinate system: the earth's magnetic field. The steady presence of our planetary magnetic field provides a reliable world coordinate system, which can serve as a reference for a position-defined reconstruction of dynamic environments. Our approach is evaluated using magnetic field sensors of the ZED 2 stereo camera from Stereolabs, which provides orientation relative to the North Pole similar to a compass. With the help of inertial measurement unit informations, each camera's position data can be transferred into the unified world coordinate system. Our evaluation reveals the level of quality possible using the earth magnetic field and allows a basis for dynamic and real-time-based applications of optical multi-sensors for environment detection. | [{"created": "Fri, 12 Aug 2022 12:04:04 GMT", "version": "v1"}, {"created": "Tue, 30 Jan 2024 09:28:35 GMT", "version": "v2"}] | 2024-01-31 | [["Müller", "Simone", ""], ["Kranzlmüller", "Dieter", ""]] | Optical sensors can capture dynamic environments and derive depth information in near real-time. The quality of these digital reconstructions is determined by factors like illumination, surface and texture conditions, sensing speed and other sensor characteristics as well as the sensor-object relations. Improvements can be obtained by using dynamically collected data from multiple sensors. However, matching the data from multiple sensors requires a shared world coordinate system. We present a concept for transferring multi-sensor data into a commonly referenced world coordinate system: the earth's magnetic field. The steady presence of our planetary magnetic field provides a reliable world coordinate system, which can serve as a reference for a position-defined reconstruction of dynamic environments. Our approach is evaluated using magnetic field sensors of the ZED 2 stereo camera from Stereolabs, which provides orientation relative to the North Pole similar to a compass. With the help of inertial measurement unit informations, each camera's position data can be transferred into the unified world coordinate system. Our evaluation reveals the level of quality possible using the earth magnetic field and allows a basis for dynamic and real-time-based applications of optical multi-sensors for environment detection. |
| 1902.00618 | Chi Jin | Chi Jin, Praneeth Netrapalli, Michael I. Jordan | What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? | This paper has been published at ICML2020. This new version made a correction to Proposition 19, and added more related works | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minimax optimization has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs), adversarial training and multi-agent reinforcement learning. As most of these applications involve continuous nonconvex-nonconcave formulations, a very basic question arises---"what is a proper definition of local optima?" Most previous work answers this question using classical notions of equilibria from simultaneous games, where the min-player and the max-player act simultaneously. In contrast, most applications in machine learning, including GANs and adversarial training, correspond to sequential games, where the order of which player acts first is crucial (since minimax is in general not equal to maximin due to the nonconvex-nonconcave nature of the problems). The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting---local minimax, as well as to present its properties and existence results. Finally, we establish a strong connection to a basic local search algorithm---gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points up to some degenerate points. | [{"created": "Sat, 2 Feb 2019 02:08:28 GMT", "version": "v1"}, {"created": "Mon, 3 Jun 2019 06:56:15 GMT", "version": "v2"}, {"created": "Sat, 15 Aug 2020 05:13:24 GMT", "version": "v3"}] | 2020-08-18 | [["Jin", "Chi", ""], ["Netrapalli", "Praneeth", ""], ["Jordan", "Michael I.", ""]] | Minimax optimization has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs), adversarial training and multi-agent reinforcement learning. As most of these applications involve continuous nonconvex-nonconcave formulations, a very basic question arises---"what is a proper definition of local optima?" Most previous work answers this question using classical notions of equilibria from simultaneous games, where the min-player and the max-player act simultaneously. In contrast, most applications in machine learning, including GANs and adversarial training, correspond to sequential games, where the order of which player acts first is crucial (since minimax is in general not equal to maximin due to the nonconvex-nonconcave nature of the problems). The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting---local minimax, as well as to present its properties and existence results. Finally, we establish a strong connection to a basic local search algorithm---gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points up to some degenerate points. |
| 2310.13104 | Zhiru Zhu | Zhiru Zhu, Raul Castro Fernandez | Making Differential Privacy Easier to Use for Data Controllers and Data Analysts using a Privacy Risk Indicator and an Escrow-Based Platform | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy (DP) enables private data analysis but is hard to use in practice. For data controllers who decide what output to release, choosing the amount of noise to add to the output is a non-trivial task because of the difficulty of interpreting the privacy parameter $\epsilon$. For data analysts who submit queries, it is hard to understand the impact of the noise introduced by DP on their tasks. To address these two challenges: 1) we define a privacy risk indicator that indicates the impact of choosing $\epsilon$ on individuals' privacy and use that to design an algorithm to choose $\epsilon$ and release output based on controllers' privacy preferences; 2) we introduce a utility signaling protocol that helps analysts interpret the impact of DP on their downstream tasks. We implement the algorithm and the protocol inside a new platform built on top of a data escrow, which allows controllers to control dataflows while maintaining high performance. We demonstrate our contributions through an IRB-approved user study, extensive experimental evaluations, and comparison with other DP platforms. All in all, our work contributes to making DP easier to use by lowering adoption barriers. | [{"created": "Thu, 19 Oct 2023 19:01:27 GMT", "version": "v1"}, {"created": "Sat, 2 Mar 2024 23:21:52 GMT", "version": "v2"}] | 2024-03-05 | [["Zhu", "Zhiru", ""], ["Fernandez", "Raul Castro", ""]] | Differential privacy (DP) enables private data analysis but is hard to use in practice. For data controllers who decide what output to release, choosing the amount of noise to add to the output is a non-trivial task because of the difficulty of interpreting the privacy parameter $\epsilon$. For data analysts who submit queries, it is hard to understand the impact of the noise introduced by DP on their tasks. To address these two challenges: 1) we define a privacy risk indicator that indicates the impact of choosing $\epsilon$ on individuals' privacy and use that to design an algorithm to choose $\epsilon$ and release output based on controllers' privacy preferences; 2) we introduce a utility signaling protocol that helps analysts interpret the impact of DP on their downstream tasks. We implement the algorithm and the protocol inside a new platform built on top of a data escrow, which allows controllers to control dataflows while maintaining high performance. We demonstrate our contributions through an IRB-approved user study, extensive experimental evaluations, and comparison with other DP platforms. All in all, our work contributes to making DP easier to use by lowering adoption barriers. |
| 2003.10477 | Yiding Yang | Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang | Distilling Knowledge from Graph Convolutional Networks | Accepted by CVPR 2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where the input samples like images lie in a grid domain, and have largely overlooked graph convolutional networks (GCN) that handle non-grid data. In this paper, we propose to our best knowledge the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student are extracted as distributions, and hence minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves the state-of-the-art knowledge distillation performance for GCN models. Code is publicly available at https://github.com/ihollywhy/DistillGCN.PyTorch. | [{"created": "Mon, 23 Mar 2020 18:23:11 GMT", "version": "v1"}, {"created": "Wed, 25 Mar 2020 03:09:30 GMT", "version": "v2"}, {"created": "Sat, 28 Mar 2020 19:30:02 GMT", "version": "v3"}, {"created": "Sun, 10 Jan 2021 03:55:05 GMT", "version": "v4"}] | 2021-01-12 | [["Yang", "Yiding", ""], ["Qiu", "Jiayan", ""], ["Song", "Mingli", ""], ["Tao", "Dacheng", ""], ["Wang", "Xinchao", ""]] | Existing knowledge distillation methods focus on convolutional neural networks (CNNs), where the input samples like images lie in a grid domain, and have largely overlooked graph convolutional networks (GCN) that handle non-grid data. In this paper, we propose to our best knowledge the first dedicated approach to distilling knowledge from a pre-trained GCN model. To enable the knowledge transfer from the teacher GCN to the student, we propose a local structure preserving module that explicitly accounts for the topological semantics of the teacher. In this module, the local structure information from both the teacher and the student are extracted as distributions, and hence minimizing the distance between these distributions enables topology-aware knowledge transfer from the teacher, yielding a compact yet high-performance student model. Moreover, the proposed approach is readily extendable to dynamic graph models, where the input graphs for the teacher and the student may differ. We evaluate the proposed method on two different datasets using GCN models of different architectures, and demonstrate that our method achieves the state-of-the-art knowledge distillation performance for GCN models. Code is publicly available at https://github.com/ihollywhy/DistillGCN.PyTorch. |
| 2403.19897 | Seyma Yucer | Seyma Yucer, Amir Atapour Abarghouei, Noura Al Moubayed, Toby P. Breckon | Disentangling Racial Phenotypes: Fine-Grained Control of Race-related Facial Phenotype Characteristics | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Achieving an effective fine-grained appearance variation over 2D facial images, whilst preserving facial identity, is a challenging task due to the high complexity and entanglement of common 2D facial feature encoding spaces. Despite these challenges, such fine-grained control, by way of disentanglement is a crucial enabler for data-driven racial bias mitigation strategies across multiple automated facial analysis tasks, as it allows to analyse, characterise and synthesise human facial diversity. In this paper, we propose a novel GAN framework to enable fine-grained control over individual race-related phenotype attributes of the facial images. Our framework factors the latent (feature) space into elements that correspond to race-related facial phenotype representations, thereby separating phenotype aspects (e.g. skin, hair colour, nose, eye, mouth shapes), which are notoriously difficult to annotate robustly in real-world facial data. Concurrently, we also introduce a high quality augmented, diverse 2D face image dataset drawn from CelebA-HQ for GAN training. Unlike prior work, our framework only relies upon 2D imagery and related parameters to achieve state-of-the-art individual control over race-related phenotype attributes with improved photo-realistic output. | [{"created": "Fri, 29 Mar 2024 00:36:38 GMT", "version": "v1"}] | 2024-04-01 | [["Yucer", "Seyma", ""], ["Abarghouei", "Amir Atapour", ""], ["Moubayed", "Noura Al", ""], ["Breckon", "Toby P.", ""]] | Achieving an effective fine-grained appearance variation over 2D facial images, whilst preserving facial identity, is a challenging task due to the high complexity and entanglement of common 2D facial feature encoding spaces. Despite these challenges, such fine-grained control, by way of disentanglement is a crucial enabler for data-driven racial bias mitigation strategies across multiple automated facial analysis tasks, as it allows to analyse, characterise and synthesise human facial diversity. In this paper, we propose a novel GAN framework to enable fine-grained control over individual race-related phenotype attributes of the facial images. Our framework factors the latent (feature) space into elements that correspond to race-related facial phenotype representations, thereby separating phenotype aspects (e.g. skin, hair colour, nose, eye, mouth shapes), which are notoriously difficult to annotate robustly in real-world facial data. Concurrently, we also introduce a high quality augmented, diverse 2D face image dataset drawn from CelebA-HQ for GAN training. Unlike prior work, our framework only relies upon 2D imagery and related parameters to achieve state-of-the-art individual control over race-related phenotype attributes with improved photo-realistic output. |
| 2112.05118 | Praveena Satkunarajah | Praveena Satkunarajah and Kat Agres | Web Platform for Visualisation of Kinematic Data captured from a Motor Tele-rehabilitation System | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Stroke can have a severe impact on an individual's quality of life, leading to consequences such as motor loss and communication problems, especially among the elderly. Studies have shown that early and easy access to stroke rehabilitation can improve an elderly individual's quality of life, and that telerehabilitation is a solution that facilitates this improvement. In this work, we visualize movement to music during rehabilitation exercises captured by the Kinect motion sensor, using a dedicated Serious Game called `Move to the Music'(MoMu). Our system provides a quantitative view of progress made by patients during a motor rehabilitation regime for healthcare professionals to track remotely (tele-rehab). | [{"created": "Thu, 9 Dec 2021 18:53:18 GMT", "version": "v1"}] | 2021-12-10 | [["Satkunarajah", "Praveena", ""], ["Agres", "Kat", ""]] | Stroke can have a severe impact on an individual's quality of life, leading to consequences such as motor loss and communication problems, especially among the elderly. Studies have shown that early and easy access to stroke rehabilitation can improve an elderly individual's quality of life, and that telerehabilitation is a solution that facilitates this improvement. In this work, we visualize movement to music during rehabilitation exercises captured by the Kinect motion sensor, using a dedicated Serious Game called `Move to the Music'(MoMu). Our system provides a quantitative view of progress made by patients during a motor rehabilitation regime for healthcare professionals to track remotely (tele-rehab). |
2212.05765
|
Sunjae Yoon
|
Sunjae Yoon, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo
|
Information-Theoretic Text Hallucination Reduction for Video-grounded
Dialogue
|
12 pages, Accepted in EMNLP 2022
| null |
10.18653/v1/2022.emnlp-main.280
| null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-grounded Dialogue (VGD) aims to decode an answer sentence to a question
regarding a given video and dialogue context. Despite the recent success of
multi-modal reasoning to generate answer sentences, existing dialogue systems
still suffer from a text hallucination problem, which denotes indiscriminate
text-copying from input texts without an understanding of the question. This is
due to learning spurious correlations from the fact that answer sentences in
the dataset usually include the words of input texts, thus the VGD system
excessively relies on copying words from input texts by hoping those words to
overlap with ground-truth texts. Hence, we design Text Hallucination Mitigating
(THAM) framework, which incorporates Text Hallucination Regularization (THR)
loss derived from the proposed information-theoretic text hallucination
measurement approach. Applying THAM with current dialogue systems validates the
effectiveness on VGD benchmarks (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows
enhanced interpretability.
|
[
{
"created": "Mon, 12 Dec 2022 08:38:28 GMT",
"version": "v1"
}
] |
2023-10-10
|
[
[
"Yoon",
"Sunjae",
""
],
[
"Yoon",
"Eunseop",
""
],
[
"Yoon",
"Hee Suk",
""
],
[
"Kim",
"Junyeong",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
Video-grounded Dialogue (VGD) aims to decode an answer sentence to a question regarding a given video and dialogue context. Despite the recent success of multi-modal reasoning to generate answer sentences, existing dialogue systems still suffer from a text hallucination problem, which denotes indiscriminate text-copying from input texts without an understanding of the question. This is due to learning spurious correlations from the fact that answer sentences in the dataset usually include the words of input texts; thus, the VGD system relies excessively on copying words from input texts, hoping that those words overlap with the ground-truth texts. Hence, we design the Text Hallucination Mitigating (THAM) framework, which incorporates Text Hallucination Regularization (THR) loss derived from the proposed information-theoretic text hallucination measurement approach. Applying THAM with current dialogue systems validates the effectiveness on VGD benchmarks (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows enhanced interpretability.
|
2202.07350
|
Antonia Marcu
|
Dominic Belcher, Antonia Marcu, Adam Pr\"ugel-Bennett
|
Generalisation and the Risk--Entropy Curve
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we show that the expected generalisation performance of a
learning machine is determined by the distribution of risks or equivalently its
logarithm -- a quantity we term the risk entropy -- and the fluctuations in a
quantity we call the training ratio. We show that the risk entropy can be
empirically inferred for deep neural network models using Markov Chain Monte
Carlo techniques. Results are presented for different deep neural networks on a
variety of problems. The asymptotic behaviour of the risk entropy acts in an
analogous way to the capacity of the learning machine, but the generalisation
performance experienced in practical situations is determined by the behaviour
of the risk entropy before the asymptotic regime is reached. This performance
is strongly dependent on the distribution of the data (features and targets)
and not just on the capacity of the learning machine.
|
[
{
"created": "Tue, 15 Feb 2022 12:19:10 GMT",
"version": "v1"
}
] |
2022-02-16
|
[
[
"Belcher",
"Dominic",
""
],
[
"Marcu",
"Antonia",
""
],
[
"Prügel-Bennett",
"Adam",
""
]
] |
In this paper we show that the expected generalisation performance of a learning machine is determined by the distribution of risks or equivalently its logarithm -- a quantity we term the risk entropy -- and the fluctuations in a quantity we call the training ratio. We show that the risk entropy can be empirically inferred for deep neural network models using Markov Chain Monte Carlo techniques. Results are presented for different deep neural networks on a variety of problems. The asymptotic behaviour of the risk entropy acts in an analogous way to the capacity of the learning machine, but the generalisation performance experienced in practical situations is determined by the behaviour of the risk entropy before the asymptotic regime is reached. This performance is strongly dependent on the distribution of the data (features and targets) and not just on the capacity of the learning machine.
|
2408.00056
|
Anna Beer
|
Anna Beer, Martin Heinrigs, Claudia Plant, Ira Assent
|
Temporal Subspace Clustering for Molecular Dynamics Data
|
Accepted as a research paper at BIOKDD 2024
| null | null | null |
cs.LG cs.IR physics.chem-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce MOSCITO (MOlecular Dynamics Subspace Clustering with Temporal
Observance), a subspace clustering method for molecular dynamics data. MOSCITO groups
those timesteps of a molecular dynamics trajectory together into clusters in
which the molecule has similar conformations. In contrast to state-of-the-art
methods, MOSCITO takes advantage of sequential relationships found in time
series data. Unlike existing work, MOSCITO does not need a two-step procedure
with tedious post-processing, but directly models essential properties of the
data. Interpreting clusters as Markov states allows us to evaluate the
clustering performance based on the resulting Markov state models. In
experiments on 60 trajectories and 4 different proteins, we show that MOSCITO
achieves state-of-the-art performance with a novel single-step method.
Moreover, by modeling temporal aspects, MOSCITO obtains
better segmentation of trajectories, especially for small numbers of clusters.
|
[
{
"created": "Wed, 31 Jul 2024 17:13:34 GMT",
"version": "v1"
}
] |
2024-08-02
|
[
[
"Beer",
"Anna",
""
],
[
"Heinrigs",
"Martin",
""
],
[
"Plant",
"Claudia",
""
],
[
"Assent",
"Ira",
""
]
] |
We introduce MOSCITO (MOlecular Dynamics Subspace Clustering with Temporal Observance), a subspace clustering method for molecular dynamics data. MOSCITO groups those timesteps of a molecular dynamics trajectory together into clusters in which the molecule has similar conformations. In contrast to state-of-the-art methods, MOSCITO takes advantage of sequential relationships found in time series data. Unlike existing work, MOSCITO does not need a two-step procedure with tedious post-processing, but directly models essential properties of the data. Interpreting clusters as Markov states allows us to evaluate the clustering performance based on the resulting Markov state models. In experiments on 60 trajectories and 4 different proteins, we show that MOSCITO achieves state-of-the-art performance with a novel single-step method. Moreover, by modeling temporal aspects, MOSCITO obtains better segmentation of trajectories, especially for small numbers of clusters.
|
2205.14895
|
Jiangtong Li
|
Jiangtong Li, Li Niu, Liqing Zhang
|
From Representation to Reasoning: Towards both Evidence and Commonsense
Reasoning for Video Question-Answering
|
To appear in CVPR 2022
| null | null | null |
cs.CV cs.CL cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Video understanding has achieved great success in representation learning,
such as video captioning, video object grounding, and descriptive video
question answering. However, current methods still struggle with video reasoning,
including evidence reasoning and commonsense reasoning. To facilitate deeper
video understanding towards video reasoning, we present the task of
Causal-VidQA, which includes four types of questions ranging from scene
description (description) to evidence reasoning (explanation) and commonsense
reasoning (prediction and counterfactual). For commonsense reasoning, we set up
a two-step solution by answering the question and providing a proper reason.
Through extensive experiments on existing VideoQA methods, we find that the
state-of-the-art methods are strong in descriptions but weak in reasoning. We
hope that Causal-VidQA can guide the research of video understanding from
representation learning to deeper reasoning. The dataset and related resources
are available at \url{https://github.com/bcmi/Causal-VidQA.git}.
|
[
{
"created": "Mon, 30 May 2022 07:26:54 GMT",
"version": "v1"
}
] |
2022-05-31
|
[
[
"Li",
"Jiangtong",
""
],
[
"Niu",
"Li",
""
],
[
"Zhang",
"Liqing",
""
]
] |
Video understanding has achieved great success in representation learning, such as video captioning, video object grounding, and descriptive video question answering. However, current methods still struggle with video reasoning, including evidence reasoning and commonsense reasoning. To facilitate deeper video understanding towards video reasoning, we present the task of Causal-VidQA, which includes four types of questions ranging from scene description (description) to evidence reasoning (explanation) and commonsense reasoning (prediction and counterfactual). For commonsense reasoning, we set up a two-step solution by answering the question and providing a proper reason. Through extensive experiments on existing VideoQA methods, we find that the state-of-the-art methods are strong in descriptions but weak in reasoning. We hope that Causal-VidQA can guide the research of video understanding from representation learning to deeper reasoning. The dataset and related resources are available at \url{https://github.com/bcmi/Causal-VidQA.git}.
|
1905.06292
|
Junseok Kwon
|
Dong Wook Shu and Sung Woo Park and Junseok Kwon
|
3D Point Cloud Generative Adversarial Network Based on Tree Structured
Graph Convolutions
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel generative adversarial network (GAN) for 3D
point cloud generation, called tree-GAN. To achieve state-of-the-art
performance for multi-class 3D point cloud generation, a tree-structured graph
convolution network (TreeGCN) is introduced as a generator for tree-GAN.
Because TreeGCN performs graph convolutions within a tree, it can use ancestor
information to boost the representation power for features. To evaluate GANs
for 3D point clouds accurately, we develop a novel evaluation metric called
Frechet point cloud distance (FPD). Experimental results demonstrate that the
proposed tree-GAN outperforms state-of-the-art GANs in terms of both
conventional metrics and FPD, and can generate point clouds for different
semantic parts without prior knowledge.
|
[
{
"created": "Wed, 15 May 2019 16:51:18 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2019 02:26:59 GMT",
"version": "v2"
}
] |
2019-05-17
|
[
[
"Shu",
"Dong Wook",
""
],
[
"Park",
"Sung Woo",
""
],
[
"Kwon",
"Junseok",
""
]
] |
In this paper, we propose a novel generative adversarial network (GAN) for 3D point cloud generation, called tree-GAN. To achieve state-of-the-art performance for multi-class 3D point cloud generation, a tree-structured graph convolution network (TreeGCN) is introduced as a generator for tree-GAN. Because TreeGCN performs graph convolutions within a tree, it can use ancestor information to boost the representation power for features. To evaluate GANs for 3D point clouds accurately, we develop a novel evaluation metric called Frechet point cloud distance (FPD). Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and FPD, and can generate point clouds for different semantic parts without prior knowledge.
|
2207.05557
|
Tao Huang
|
Tao Huang, Lang Huang, Shan You, Fei Wang, Chen Qian, Chang Xu
|
LightViT: Towards Light-Weight Convolution-Free Vision Transformers
|
13 pages, 7 figures, 9 tables
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision transformers (ViTs) are usually considered to be less light-weight
than convolutional neural networks (CNNs) due to the lack of inductive bias.
Recent works thus resort to convolutions as a plug-and-play module and embed
them in various ViT counterparts. In this paper, we argue that the
convolutional kernels perform information aggregation to connect all tokens;
however, they would actually be unnecessary for light-weight ViTs if this
explicit aggregation could function in a more homogeneous way. Inspired by
this, we present LightViT as a new family of light-weight ViTs to achieve
better accuracy-efficiency balance upon the pure transformer blocks without
convolution. Concretely, we introduce a global yet efficient aggregation scheme
into both self-attention and feed-forward network (FFN) of ViTs, where
additional learnable tokens are introduced to capture global dependencies; and
bi-dimensional channel and spatial attentions are imposed over token
embeddings. Experiments show that our model achieves significant improvements
on image classification, object detection, and semantic segmentation tasks. For
example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G
FLOPs, outperforming PVTv2-B0 by 8.2% while 11% faster on GPU. Code is
available at https://github.com/hunto/LightViT.
|
[
{
"created": "Tue, 12 Jul 2022 14:27:57 GMT",
"version": "v1"
}
] |
2022-07-13
|
[
[
"Huang",
"Tao",
""
],
[
"Huang",
"Lang",
""
],
[
"You",
"Shan",
""
],
[
"Wang",
"Fei",
""
],
[
"Qian",
"Chen",
""
],
[
"Xu",
"Chang",
""
]
] |
Vision transformers (ViTs) are usually considered to be less light-weight than convolutional neural networks (CNNs) due to the lack of inductive bias. Recent works thus resort to convolutions as a plug-and-play module and embed them in various ViT counterparts. In this paper, we argue that the convolutional kernels perform information aggregation to connect all tokens; however, they would actually be unnecessary for light-weight ViTs if this explicit aggregation could function in a more homogeneous way. Inspired by this, we present LightViT as a new family of light-weight ViTs to achieve better accuracy-efficiency balance upon the pure transformer blocks without convolution. Concretely, we introduce a global yet efficient aggregation scheme into both self-attention and feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies; and bi-dimensional channel and spatial attentions are imposed over token embeddings. Experiments show that our model achieves significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while 11% faster on GPU. Code is available at https://github.com/hunto/LightViT.
|
1901.09590
|
Ivana Bala\v{z}evi\'c
|
Ivana Bala\v{z}evi\'c, Carl Allen and Timothy M. Hospedales
|
TuckER: Tensor Factorization for Knowledge Graph Completion
| null | null |
10.18653/v1/D19-1522
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge graphs are structured representations of real-world facts. However,
they typically contain only a small subset of all possible facts. Link
prediction is the task of inferring missing facts based on existing ones. We
propose TuckER, a relatively straightforward but powerful linear model based on
Tucker decomposition of the binary tensor representation of knowledge graph
triples. TuckER outperforms previous state-of-the-art models across standard
link prediction datasets, acting as a strong baseline for more elaborate
models. We show that TuckER is a fully expressive model, derive sufficient
bounds on its embedding dimensionalities and demonstrate that several
previously introduced linear models can be viewed as special cases of TuckER.
|
[
{
"created": "Mon, 28 Jan 2019 10:42:26 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Aug 2019 15:36:04 GMT",
"version": "v2"
}
] |
2019-11-07
|
[
[
"Balažević",
"Ivana",
""
],
[
"Allen",
"Carl",
""
],
[
"Hospedales",
"Timothy M.",
""
]
] |
Knowledge graphs are structured representations of real-world facts. However, they typically contain only a small subset of all possible facts. Link prediction is the task of inferring missing facts based on existing ones. We propose TuckER, a relatively straightforward but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms previous state-of-the-art models across standard link prediction datasets, acting as a strong baseline for more elaborate models. We show that TuckER is a fully expressive model, derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of TuckER.
|
2306.01743
|
Quazi Adibur Rahman Adib
|
Nazmuddoha Ansary, Quazi Adibur Rahman Adib, Tahsin Reasat, Asif
Shahriyar Sushmit, Ahmed Imtiaz Humayun, Sazia Mehnaz, Kanij Fatema, Mohammad
Mamun Or Rashid, Farig Sadeque
|
Unicode Normalization and Grapheme Parsing of Indic Languages
|
Published at LREC-COLING 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Writing systems of Indic languages have orthographic syllables, also known as
complex graphemes, as unique horizontal units. A prominent feature of these
languages is these complex grapheme units that comprise consonants/consonant
conjuncts, vowel diacritics, and consonant diacritics, which together make up a
unique language. Unicode-based writing schemes of these languages often
disregard this feature of these languages and encode words as linear sequences
of Unicode characters using an intricate scheme of connector characters and
font interpreters. Due to this way of using a few dozen Unicode glyphs to write
thousands of different unique glyphs (complex graphemes), there are serious
ambiguities that lead to malformed words. In this paper, we propose two
libraries: i) a normalizer for normalizing inconsistencies caused by a
Unicode-based encoding scheme for Indic languages and ii) a grapheme parser for
Abugida text. It deconstructs words into visually distinct orthographic
syllables or complex graphemes and their constituents. Our proposed normalizer
is a more efficient and effective tool than the previously used IndicNLP
normalizer. Moreover, our parser and normalizer are also suitable tools for
general Abugida text processing as they performed well in our robust word-based
and NLP experiments. We report the pipeline for the scripts of 7 languages in
this work and develop the framework for the integration of more scripts.
|
[
{
"created": "Thu, 11 May 2023 14:34:08 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 12:48:00 GMT",
"version": "v2"
}
] |
2024-05-28
|
[
[
"Ansary",
"Nazmuddoha",
""
],
[
"Adib",
"Quazi Adibur Rahman",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Sushmit",
"Asif Shahriyar",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
],
[
"Mehnaz",
"Sazia",
""
],
[
"Fatema",
"Kanij",
""
],
[
"Rashid",
"Mohammad Mamun Or",
""
],
[
"Sadeque",
"Farig",
""
]
] |
Writing systems of Indic languages have orthographic syllables, also known as complex graphemes, as unique horizontal units. A prominent feature of these languages is these complex grapheme units that comprise consonants/consonant conjuncts, vowel diacritics, and consonant diacritics, which together make up a unique language. Unicode-based writing schemes of these languages often disregard this feature of these languages and encode words as linear sequences of Unicode characters using an intricate scheme of connector characters and font interpreters. Due to this way of using a few dozen Unicode glyphs to write thousands of different unique glyphs (complex graphemes), there are serious ambiguities that lead to malformed words. In this paper, we propose two libraries: i) a normalizer for normalizing inconsistencies caused by a Unicode-based encoding scheme for Indic languages and ii) a grapheme parser for Abugida text. It deconstructs words into visually distinct orthographic syllables or complex graphemes and their constituents. Our proposed normalizer is a more efficient and effective tool than the previously used IndicNLP normalizer. Moreover, our parser and normalizer are also suitable tools for general Abugida text processing as they performed well in our robust word-based and NLP experiments. We report the pipeline for the scripts of 7 languages in this work and develop the framework for the integration of more scripts.
|
2104.02822
|
Cenk Baykal
|
Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus
|
Low-Regret Active learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop an online learning algorithm for identifying unlabeled data points
that are most informative for training (i.e., active learning). By formulating
the active learning problem as the prediction with sleeping experts problem, we
provide a regret minimization framework for identifying relevant data with
respect to any given definition of informativeness. Motivated by the successes
of ensembles in active learning, we define regret with respect to an omnipotent
algorithm that has access to an infinitely large ensemble. At the core of our
work is an efficient algorithm for sleeping experts that is tailored to achieve
low regret on easy instances while remaining resilient to adversarial ones. Low
regret implies that we can be provably competitive with an ensemble method
\emph{without the computational burden of having to train an ensemble}. This
stands in contrast to state-of-the-art active learning methods that are
overwhelmingly based on greedy selection, and hence cannot ensure good
performance across problem instances with high amounts of noise. We present
empirical results demonstrating that our method (i) instantiated with an
informativeness measure consistently outperforms its greedy counterpart and
(ii) reliably outperforms uniform sampling on real-world scenarios.
|
[
{
"created": "Tue, 6 Apr 2021 22:53:45 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Jun 2021 01:22:38 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Feb 2022 23:04:50 GMT",
"version": "v3"
}
] |
2022-02-24
|
[
[
"Baykal",
"Cenk",
""
],
[
"Liebenwein",
"Lucas",
""
],
[
"Feldman",
"Dan",
""
],
[
"Rus",
"Daniela",
""
]
] |
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning). By formulating the active learning problem as the prediction with sleeping experts problem, we provide a regret minimization framework for identifying relevant data with respect to any given definition of informativeness. Motivated by the successes of ensembles in active learning, we define regret with respect to an omnipotent algorithm that has access to an infinitely large ensemble. At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on easy instances while remaining resilient to adversarial ones. Low regret implies that we can be provably competitive with an ensemble method \emph{without the computational burden of having to train an ensemble}. This stands in contrast to state-of-the-art active learning methods that are overwhelmingly based on greedy selection, and hence cannot ensure good performance across problem instances with high amounts of noise. We present empirical results demonstrating that our method (i) instantiated with an informativeness measure consistently outperforms its greedy counterpart and (ii) reliably outperforms uniform sampling on real-world scenarios.
|
2307.04229
|
Murat Kuscu Dr
|
Ali Abdali, Murat Kuscu
|
Frequency-Domain Model of Microfluidic Molecular Communication Channels
with Graphene BioFET-based Receivers
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular Communication (MC) is a bio-inspired communication paradigm
utilizing molecules for information transfer. Research on this unconventional
communication technique has recently started to transition from theoretical
investigations to practical testbed implementations, primarily harnessing
microfluidics and sensor technologies. Developing accurate models for
input-output relationships on these platforms, which mirror real-world
scenarios, is crucial for assessing modulation and detection techniques,
devising optimized MC methods, and understanding the impact of physical
parameters on performance. In this study, we consider a practical microfluidic
MC system equipped with a graphene field effect transistor biosensor
(bioFET)-based MC receiver as the model system, and develop an analytical
end-to-end frequency-domain model. The model provides practical insights into
the dispersion and distortion of received signals, thus potentially informing
the design of new frequency-domain MC techniques, such as modulation and
detection methods. The accuracy of the developed model is verified through
particle-based spatial stochastic simulations of pulse transmission in
microfluidic channels and ligand-receptor binding reactions on the receiver
surface.
|
[
{
"created": "Sun, 9 Jul 2023 17:09:52 GMT",
"version": "v1"
}
] |
2023-07-11
|
[
[
"Abdali",
"Ali",
""
],
[
"Kuscu",
"Murat",
""
]
] |
Molecular Communication (MC) is a bio-inspired communication paradigm utilizing molecules for information transfer. Research on this unconventional communication technique has recently started to transition from theoretical investigations to practical testbed implementations, primarily harnessing microfluidics and sensor technologies. Developing accurate models for input-output relationships on these platforms, which mirror real-world scenarios, is crucial for assessing modulation and detection techniques, devising optimized MC methods, and understanding the impact of physical parameters on performance. In this study, we consider a practical microfluidic MC system equipped with a graphene field effect transistor biosensor (bioFET)-based MC receiver as the model system, and develop an analytical end-to-end frequency-domain model. The model provides practical insights into the dispersion and distortion of received signals, thus potentially informing the design of new frequency-domain MC techniques, such as modulation and detection methods. The accuracy of the developed model is verified through particle-based spatial stochastic simulations of pulse transmission in microfluidic channels and ligand-receptor binding reactions on the receiver surface.
|
1811.03482
|
Aftab Hussain
|
Aftab Hussain and Omar Asadi and Debra J. Richardson
|
A holistic look at requirements engineering practices in the gaming
industry
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we present an account of the status of requirements engineering
in the gaming industry. We survey recent papers in the area, discuss
characterizations of the gaming industry by portraying its relations with the
broader market, and outline some research directions in requirements
engineering for the gaming industry.
|
[
{
"created": "Thu, 8 Nov 2018 15:10:09 GMT",
"version": "v1"
}
] |
2018-11-09
|
[
[
"Hussain",
"Aftab",
""
],
[
"Asadi",
"Omar",
""
],
[
"Richardson",
"Debra J.",
""
]
] |
In this work we present an account of the status of requirements engineering in the gaming industry. We survey recent papers in the area, discuss characterizations of the gaming industry by portraying its relations with the broader market, and outline some research directions in requirements engineering for the gaming industry.
|
1706.03833
|
Siamak Talebi
|
Adel Ahmadi and Siamak Talebi
|
Fast Maximum-Likelihood Decoder for 4*4 Quasi-Orthogonal Space-Time
Block Code
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter introduces two fast maximum-likelihood (ML) detection methods for
4*4 quasi-orthogonal space-time block code (QOSTBC). The first algorithm with a
relatively simple design exploits structure of quadrature amplitude modulation
(QAM) constellations to achieve its goal and the second algorithm, though
somewhat more complex, can be applied to any arbitrary constellation. Both
decoders utilize a novel decomposition technique for the ML metric, which divides
the metric into independent positive parts and a positive interference part.
Search spaces of symbols are substantially reduced by employing the independent
parts and statistics of noise. Finally, the members of search spaces are
successively evaluated until the metric is minimized. Simulation results
confirm that the proposed decoder is superior to some of the most recently
published methods in terms of complexity level. More specifically, the results
verified that application of the new algorithm with 1024-QAM would require
reduced computational complexity compared to state-of-the-art solution with
16-QAM.
|
[
{
"created": "Mon, 12 Jun 2017 20:08:56 GMT",
"version": "v1"
}
] |
2017-06-14
|
[
[
"Ahmadi",
"Adel",
""
],
[
"Talebi",
"Siamak",
""
]
] |
This letter introduces two fast maximum-likelihood (ML) detection methods for 4*4 quasi-orthogonal space-time block code (QOSTBC). The first algorithm with a relatively simple design exploits structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and the second algorithm, though somewhat more complex, can be applied to any arbitrary constellation. Both decoders utilize a novel decomposition technique for the ML metric, which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Finally, the members of search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder is superior to some of the most recently published methods in terms of complexity level. More specifically, the results verified that application of the new algorithm with 1024-QAM would require reduced computational complexity compared to state-of-the-art solution with 16-QAM.
|
2312.15258
|
Mingwei Li
|
Mingwei Li, Jiachen Tao, Zongxin Yang, Yi Yang
|
Human101: Training 100+FPS Human Gaussians in 100s from 1 View
|
Website: https://github.com/longxiang-ai/Human101
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing the human body from single-view videos plays a pivotal role in
the virtual reality domain. One prevalent application scenario necessitates the
rapid reconstruction of high-fidelity 3D digital humans while simultaneously
ensuring real-time rendering and interaction. Existing methods often struggle
to fulfill both requirements. In this paper, we introduce Human101, a novel
framework adept at producing high-fidelity dynamic 3D human reconstructions
from 1-view videos by training 3D Gaussians in 100 seconds and rendering at
100+ FPS. Our method leverages the strengths of 3D Gaussian Splatting, which
provides an explicit and efficient representation of 3D humans. Standing apart
from prior NeRF-based pipelines, Human101 ingeniously applies a Human-centric
Forward Gaussian Animation method to deform the parameters of 3D Gaussians,
thereby enhancing rendering speed (i.e., rendering 1024-resolution images at an
impressive 60+ FPS and rendering 512-resolution images at 100+ FPS).
Experimental results indicate that our approach substantially eclipses current
methods, clocking up to a 10 times surge in frames per second and delivering
comparable or superior rendering quality. Code and demos will be released at
https://github.com/longxiang-ai/Human101.
|
[
{
"created": "Sat, 23 Dec 2023 13:41:56 GMT",
"version": "v1"
}
] |
2023-12-27
|
[
[
"Li",
"Mingwei",
""
],
[
"Tao",
"Jiachen",
""
],
[
"Yang",
"Zongxin",
""
],
[
"Yang",
"Yi",
""
]
] |
Reconstructing the human body from single-view videos plays a pivotal role in the virtual reality domain. One prevalent application scenario necessitates the rapid reconstruction of high-fidelity 3D digital humans while simultaneously ensuring real-time rendering and interaction. Existing methods often struggle to fulfill both requirements. In this paper, we introduce Human101, a novel framework adept at producing high-fidelity dynamic 3D human reconstructions from 1-view videos by training 3D Gaussians in 100 seconds and rendering at 100+ FPS. Our method leverages the strengths of 3D Gaussian Splatting, which provides an explicit and efficient representation of 3D humans. Standing apart from prior NeRF-based pipelines, Human101 ingeniously applies a Human-centric Forward Gaussian Animation method to deform the parameters of 3D Gaussians, thereby enhancing rendering speed (i.e., rendering 1024-resolution images at an impressive 60+ FPS and rendering 512-resolution images at 100+ FPS). Experimental results indicate that our approach substantially eclipses current methods, clocking up to a 10 times surge in frames per second and delivering comparable or superior rendering quality. Code and demos will be released at https://github.com/longxiang-ai/Human101.
|
2305.15064
|
Siqi Ouyang
|
Siqi Ouyang and Lei Li
|
AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With
Large Language Models
|
EMNLP 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent large language models (LLMs) are promising for making decisions in
grounded environments. However, LLMs frequently fail in complex decision-making
tasks due to the misalignment between the pre-trained knowledge in LLMs and the
actual rules in the environment. Existing methods require either costly
gradient computation or lengthy in-context demonstrations. In this paper, we
propose AutoPlan, an approach to guide LLM-based agents to accomplish
interactive decision-making tasks. AutoPlan augments the LLM prompt with a
task-solving plan and optimizes it through iterative experience collection and
reflection. Our experiments show that AutoPlan, though using no in-context
demonstrations, achieves success rates on par with the baselines using
human-written demonstrations on ALFWorld and even outperforms them by 8% on
HotpotQA. The code is available at https://github.com/owaski/AutoPlan.
|
[
{
"created": "Wed, 24 May 2023 11:52:23 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Oct 2023 18:27:33 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Oct 2023 16:44:39 GMT",
"version": "v3"
}
] |
2023-10-27
|
[
[
"Ouyang",
"Siqi",
""
],
[
"Li",
"Lei",
""
]
] |
Recent large language models (LLMs) are promising for making decisions in grounded environments. However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8% on HotpotQA. The code is available at https://github.com/owaski/AutoPlan.
|
2103.16446
|
Richard Plant
|
Richard Plant, Amir Hussain
|
CovidTracker: A comprehensive Covid-related social media dataset for NLP
tasks
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Covid-19 pandemic presented an unprecedented global public health
emergency, and concomitantly an unparalleled opportunity to investigate public
responses to adverse social conditions. The widespread ability to post messages
to social media platforms provided an invaluable outlet for such an outpouring
of public sentiment, including not only expressions of social solidarity, but
also the spread of misinformation and misconceptions around the effect and
potential risks of the pandemic. This archive of message content therefore
represents a key resource in understanding public responses to health crises,
analysis of which could help to inform public policy interventions to better
respond to similar events in future. We present a benchmark database of public
social media postings from the United Kingdom related to the Covid-19 pandemic
for academic research purposes, along with some initial analysis, including a
taxonomy of key themes organised by keyword. This release supports the findings
of a research study funded by the Scottish Government Chief Scientists' Office
that aims to investigate social sentiment in order to understand the response
to public health measures implemented during the pandemic.
|
[
{
"created": "Tue, 30 Mar 2021 15:44:48 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jun 2022 11:40:35 GMT",
"version": "v2"
}
] |
2022-06-20
|
[
[
"Plant",
"Richard",
""
],
[
"Hussain",
"Amir",
""
]
] |
The Covid-19 pandemic presented an unprecedented global public health emergency, and concomitantly an unparalleled opportunity to investigate public responses to adverse social conditions. The widespread ability to post messages to social media platforms provided an invaluable outlet for such an outpouring of public sentiment, including not only expressions of social solidarity, but also the spread of misinformation and misconceptions around the effect and potential risks of the pandemic. This archive of message content therefore represents a key resource in understanding public responses to health crises, analysis of which could help to inform public policy interventions to better respond to similar events in future. We present a benchmark database of public social media postings from the United Kingdom related to the Covid-19 pandemic for academic research purposes, along with some initial analysis, including a taxonomy of key themes organised by keyword. This release supports the findings of a research study funded by the Scottish Government Chief Scientists' Office that aims to investigate social sentiment in order to understand the response to public health measures implemented during the pandemic.
|
2204.05546
|
Yahao Liu
|
Yahao Liu, Jinhong Deng, Jiale Tao, Tong Chu, Lixin Duan, Wen Li
|
Undoing the Damage of Label Shift for Cross-domain Semantic Segmentation
|
Tech report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing works typically treat cross-domain semantic segmentation (CDSS) as a
data distribution mismatch problem and focus on aligning the marginal
distribution or conditional distribution. However, the label shift issue is
unfortunately overlooked, which actually commonly exists in the CDSS task, and
often causes a classifier bias in the learnt model. In this paper, we give an
in-depth analysis and show that the damage of label shift can be overcome by
aligning the data conditional distribution and correcting the posterior
probability. To this end, we propose a novel approach to undo the damage of the
label shift problem in CDSS. In implementation, we adopt class-level feature
alignment for conditional distribution alignment, as well as two simple yet
effective methods to rectify the classifier bias from source to target by
remolding the classifier predictions. We conduct extensive experiments on the
benchmark datasets of urban scenes, including GTA5 to Cityscapes and SYNTHIA to
Cityscapes, where our proposed approach outperforms previous methods by a large
margin. For instance, our model equipped with a self-training strategy reaches
59.3% mIoU on GTA5 to Cityscapes, pushing to a new state-of-the-art. The code
will be available at https://github.com/manmanjun/Undoing_UDA.
|
[
{
"created": "Tue, 12 Apr 2022 06:18:50 GMT",
"version": "v1"
}
] |
2022-04-13
|
[
[
"Liu",
"Yahao",
""
],
[
"Deng",
"Jinhong",
""
],
[
"Tao",
"Jiale",
""
],
[
"Chu",
"Tong",
""
],
[
"Duan",
"Lixin",
""
],
[
"Li",
"Wen",
""
]
] |
Existing works typically treat cross-domain semantic segmentation (CDSS) as a data distribution mismatch problem and focus on aligning the marginal distribution or conditional distribution. However, the label shift issue is unfortunately overlooked, which actually commonly exists in the CDSS task, and often causes a classifier bias in the learnt model. In this paper, we give an in-depth analysis and show that the damage of label shift can be overcome by aligning the data conditional distribution and correcting the posterior probability. To this end, we propose a novel approach to undo the damage of the label shift problem in CDSS. In implementation, we adopt class-level feature alignment for conditional distribution alignment, as well as two simple yet effective methods to rectify the classifier bias from source to target by remolding the classifier predictions. We conduct extensive experiments on the benchmark datasets of urban scenes, including GTA5 to Cityscapes and SYNTHIA to Cityscapes, where our proposed approach outperforms previous methods by a large margin. For instance, our model equipped with a self-training strategy reaches 59.3% mIoU on GTA5 to Cityscapes, pushing to a new state-of-the-art. The code will be available at https://github.com/manmanjun/Undoing_UDA.
|
2404.05694
|
Amin Dada
|
Ahmad Idrissi-Yaghir, Amin Dada, Henning Sch\"afer, Kamyar Arzideh,
Giulia Baldini, Jan Trienes, Max Hasin, Jeanette Bewersdorff, Cynthia S.
Schmidt, Marie Bauer, Kaleb E. Smith, Jiang Bian, Yonghui Wu, J\"org
Schl\"otterer, Torsten Zesch, Peter A. Horn, Christin Seifert, Felix Nensa,
Jens Kleesiek, Christoph M. Friedrich
|
Comprehensive Study on German Language Models for Clinical and
Biomedical Text Understanding
|
Accepted at LREC-COLING 2024
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent advances in natural language processing (NLP) can be largely
attributed to the advent of pre-trained language models such as BERT and
RoBERTa. While these models demonstrate remarkable performance on general
datasets, they can struggle in specialized domains such as medicine, where
unique domain-specific terminologies, domain-specific abbreviations, and
varying document structures are common. This paper explores strategies for
adapting these models to domain-specific requirements, primarily through
continuous pre-training on domain-specific data. We pre-trained several German
medical language models on 2.4B tokens derived from translated public English
medical data and 3B tokens of German clinical data. The resulting models were
evaluated on various German downstream tasks, including named entity
recognition (NER), multi-label classification, and extractive question
answering. Our results suggest that models augmented by clinical and
translation-based pre-training typically outperform general domain models in
medical contexts. We conclude that continuous pre-training has demonstrated the
ability to match or even exceed the performance of clinical models trained from
scratch. Furthermore, pre-training on clinical data or leveraging translated
texts have proven to be reliable methods for domain adaptation in medical NLP
tasks.
|
[
{
"created": "Mon, 8 Apr 2024 17:24:04 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 08:53:53 GMT",
"version": "v2"
}
] |
2024-05-09
|
[
[
"Idrissi-Yaghir",
"Ahmad",
""
],
[
"Dada",
"Amin",
""
],
[
"Schäfer",
"Henning",
""
],
[
"Arzideh",
"Kamyar",
""
],
[
"Baldini",
"Giulia",
""
],
[
"Trienes",
"Jan",
""
],
[
"Hasin",
"Max",
""
],
[
"Bewersdorff",
"Jeanette",
""
],
[
"Schmidt",
"Cynthia S.",
""
],
[
"Bauer",
"Marie",
""
],
[
"Smith",
"Kaleb E.",
""
],
[
"Bian",
"Jiang",
""
],
[
"Wu",
"Yonghui",
""
],
[
"Schlötterer",
"Jörg",
""
],
[
"Zesch",
"Torsten",
""
],
[
"Horn",
"Peter A.",
""
],
[
"Seifert",
"Christin",
""
],
[
"Nensa",
"Felix",
""
],
[
"Kleesiek",
"Jens",
""
],
[
"Friedrich",
"Christoph M.",
""
]
] |
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where unique domain-specific terminologies, domain-specific abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general domain models in medical contexts. We conclude that continuous pre-training has demonstrated the ability to match or even exceed the performance of clinical models trained from scratch. Furthermore, pre-training on clinical data or leveraging translated texts have proven to be reliable methods for domain adaptation in medical NLP tasks.
|
2405.10988
|
Runjie Yan
|
Runjie Yan, Kailu Wu, Kaisheng Ma
|
Flow Score Distillation for Diverse Text-to-3D Generation
|
Consistent Flow Distillation is an improved version of this paper
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in Text-to-3D generation have yielded remarkable
progress, particularly through methods that rely on Score Distillation Sampling
(SDS). While SDS exhibits the capability to create impressive 3D assets, it is
hindered by its inherent maximum-likelihood-seeking essence, resulting in
limited diversity in generation outcomes. In this paper, we discover that the
Denoising Diffusion Implicit Models (DDIM) generation process (i.e., the PF-ODE) can be
succinctly expressed using an analogue of SDS loss. One step further, one can
see SDS as a generalized DDIM generation process. Following this insight, we
show that the noise sampling strategy in the noise addition stage significantly
restricts the diversity of generation results. To address this limitation, we
present an innovative noise sampling approach and introduce a novel text-to-3D
method called Flow Score Distillation (FSD). Our validation experiments across
various text-to-image Diffusion Models demonstrate that FSD substantially
enhances generation diversity without compromising quality.
|
[
{
"created": "Thu, 16 May 2024 06:05:16 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Jul 2024 21:52:11 GMT",
"version": "v2"
}
] |
2024-07-30
|
[
[
"Yan",
"Runjie",
""
],
[
"Wu",
"Kailu",
""
],
[
"Ma",
"Kaisheng",
""
]
] |
Recent advancements in Text-to-3D generation have yielded remarkable progress, particularly through methods that rely on Score Distillation Sampling (SDS). While SDS exhibits the capability to create impressive 3D assets, it is hindered by its inherent maximum-likelihood-seeking essence, resulting in limited diversity in generation outcomes. In this paper, we discover that the Denoising Diffusion Implicit Models (DDIM) generation process (i.e., the PF-ODE) can be succinctly expressed using an analogue of SDS loss. One step further, one can see SDS as a generalized DDIM generation process. Following this insight, we show that the noise sampling strategy in the noise addition stage significantly restricts the diversity of generation results. To address this limitation, we present an innovative noise sampling approach and introduce a novel text-to-3D method called Flow Score Distillation (FSD). Our validation experiments across various text-to-image Diffusion Models demonstrate that FSD substantially enhances generation diversity without compromising quality.
|
2108.11212
|
Bernhard Scholz
|
Xiaowen Hu, Joshua Karp, David Zhao, Abdul Zreika, Xi Wu, and Bernhard
Scholz
|
The Choice Construct in the Souffle Language
|
This is an extended technical report of work that will be published
at APLAS'21
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Datalog has become a popular implementation language for solving large-scale,
real-world problems, including bug finders, network analysis tools, and
disassemblers. These applications express complex behaviour with hundreds of
relations and rules that often require a non-deterministic choice for tuples in
relations to express worklist algorithms. This work is an experience report
that describes the implementation of a choice construct in the Datalog engine
Souffle. With the choice construct, we can express worklist algorithms such as
spanning trees in a few lines of code. We highlight the differences between
rule-based choice as described in prior work, and relation-based choice
introduced by this work. We show that a choice construct enables certain
worklist algorithms to be computed up to 10kx faster than having no choice
construct.
|
[
{
"created": "Wed, 25 Aug 2021 13:01:43 GMT",
"version": "v1"
}
] |
2021-08-26
|
[
[
"Hu",
"Xiaowen",
""
],
[
"Karp",
"Joshua",
""
],
[
"Zhao",
"David",
""
],
[
"Zreika",
"Abdul",
""
],
[
"Wu",
"Xi",
""
],
[
"Scholz",
"Bernhard",
""
]
] |
Datalog has become a popular implementation language for solving large-scale, real-world problems, including bug finders, network analysis tools, and disassemblers. These applications express complex behaviour with hundreds of relations and rules that often require a non-deterministic choice for tuples in relations to express worklist algorithms. This work is an experience report that describes the implementation of a choice construct in the Datalog engine Souffle. With the choice construct, we can express worklist algorithms such as spanning trees in a few lines of code. We highlight the differences between rule-based choice as described in prior work, and relation-based choice introduced by this work. We show that a choice construct enables certain worklist algorithms to be computed up to 10kx faster than having no choice construct.
|
2402.04893
|
Daniel Gratzer
|
Daniel Gratzer, H\r{a}kon Gylterud, Anders M\"ortberg, and Elisabeth
Stenholm
|
The Category of Iterative Sets in Homotopy Type Theory and Univalent
Foundations
| null | null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
When working in Homotopy Type Theory and Univalent Foundations, the
traditional role of the category of sets, Set, is replaced by the category hSet
of homotopy sets (h-sets); types with h-propositional identity types. Many of
the properties of Set hold for hSet ((co)completeness, exactness, local
cartesian closure, etc.). Notably, however, the univalence axiom implies that
Ob(hSet) is not itself an h-set, but an h-groupoid. This is expected in
univalent foundations, but it is sometimes useful to also have a stricter
universe of sets, for example when constructing internal models of type theory.
In this work, we equip the type of iterative sets V0, due to Gylterud (2018) as
a refinement of the pioneering work of Aczel (1978) on universes of sets in
type theory, with the structure of a Tarski universe and show that it satisfies
many of the good properties of h-sets. In particular, we organize V0 into a
(non-univalent strict) category and prove that it is locally cartesian closed.
This enables us to organize it into a category with families with the structure
necessary to model extensional type theory internally in HoTT/UF. We do this in
a rather minimal univalent type theory with W-types, in particular we do not
rely on any HITs, or other complex extensions of type theory. Furthermore, the
construction of V0 and the model is fully constructive and predicative, while
still being very convenient to work with as the decoding from V0 into h-sets
commutes definitionally for all type constructors. Almost all of the paper has
been formalized in Agda using the agda-unimath library of univalent
mathematics.
|
[
{
"created": "Wed, 7 Feb 2024 14:24:06 GMT",
"version": "v1"
}
] |
2024-02-08
|
[
[
"Gratzer",
"Daniel",
""
],
[
"Gylterud",
"Håkon",
""
],
[
"Mörtberg",
"Anders",
""
],
[
"Stenholm",
"Elisabeth",
""
]
] |
When working in Homotopy Type Theory and Univalent Foundations, the traditional role of the category of sets, Set, is replaced by the category hSet of homotopy sets (h-sets); types with h-propositional identity types. Many of the properties of Set hold for hSet ((co)completeness, exactness, local cartesian closure, etc.). Notably, however, the univalence axiom implies that Ob(hSet) is not itself an h-set, but an h-groupoid. This is expected in univalent foundations, but it is sometimes useful to also have a stricter universe of sets, for example when constructing internal models of type theory. In this work, we equip the type of iterative sets V0, due to Gylterud (2018) as a refinement of the pioneering work of Aczel (1978) on universes of sets in type theory, with the structure of a Tarski universe and show that it satisfies many of the good properties of h-sets. In particular, we organize V0 into a (non-univalent strict) category and prove that it is locally cartesian closed. This enables us to organize it into a category with families with the structure necessary to model extensional type theory internally in HoTT/UF. We do this in a rather minimal univalent type theory with W-types, in particular we do not rely on any HITs, or other complex extensions of type theory. Furthermore, the construction of V0 and the model is fully constructive and predicative, while still being very convenient to work with as the decoding from V0 into h-sets commutes definitionally for all type constructors. Almost all of the paper has been formalized in Agda using the agda-unimath library of univalent mathematics.
|
2211.06800
|
Stephanie Schoch
|
Stephanie Schoch, Haifeng Xu, Yangfeng Ji
|
CS-Shapley: Class-wise Shapley Values for Data Valuation in
Classification
|
Accepted to NeurIPS 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data valuation, or the valuation of individual datum contributions, has seen
growing interest in machine learning due to its demonstrable efficacy for tasks
such as noisy label detection. In particular, due to the desirable axiomatic
properties, several Shapley value approximation methods have been proposed. In
these methods, the value function is typically defined as the predictive
accuracy over the entire development set. However, this limits the ability to
differentiate between training instances that are helpful or harmful to their
own classes. Intuitively, instances that harm their own classes may be noisy or
mislabeled and should receive a lower valuation than helpful instances. In this
work, we propose CS-Shapley, a Shapley value with a new value function that
discriminates between training instances' in-class and out-of-class
contributions. Our theoretical analysis shows the proposed value function is
(essentially) the unique function that satisfies two desirable properties for
evaluating data values in classification. Further, our experiments on two
benchmark evaluation tasks (data removal and noisy label detection) and four
classifiers demonstrate the effectiveness of CS-Shapley over existing methods.
Lastly, we evaluate the "transferability" of data values estimated from one
classifier to others, and our results suggest Shapley-based data valuation is
transferable for application across different models.
|
[
{
"created": "Sun, 13 Nov 2022 03:32:33 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Schoch",
"Stephanie",
""
],
[
"Xu",
"Haifeng",
""
],
[
"Ji",
"Yangfeng",
""
]
] |
Data valuation, or the valuation of individual datum contributions, has seen growing interest in machine learning due to its demonstrable efficacy for tasks such as noisy label detection. In particular, due to the desirable axiomatic properties, several Shapley value approximation methods have been proposed. In these methods, the value function is typically defined as the predictive accuracy over the entire development set. However, this limits the ability to differentiate between training instances that are helpful or harmful to their own classes. Intuitively, instances that harm their own classes may be noisy or mislabeled and should receive a lower valuation than helpful instances. In this work, we propose CS-Shapley, a Shapley value with a new value function that discriminates between training instances' in-class and out-of-class contributions. Our theoretical analysis shows the proposed value function is (essentially) the unique function that satisfies two desirable properties for evaluating data values in classification. Further, our experiments on two benchmark evaluation tasks (data removal and noisy label detection) and four classifiers demonstrate the effectiveness of CS-Shapley over existing methods. Lastly, we evaluate the "transferability" of data values estimated from one classifier to others, and our results suggest Shapley-based data valuation is transferable for application across different models.
|
2404.04120
|
Rui Wang
|
Rui Wang, Chuanfu Shen, Manuel J. Marin-Jimenez, George Q. Huang and
Shiqi Yu
|
Cross-Modality Gait Recognition: Bridging LiDAR and Camera Modalities
for Human Identification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Current gait recognition research mainly focuses on identifying pedestrians
captured by the same type of sensor, neglecting the fact that individuals may
be captured by different sensors in order to adapt to various environments. A
more practical approach should involve cross-modality matching across different
sensors. Hence, this paper focuses on investigating the problem of
cross-modality gait recognition, with the objective of accurately identifying
pedestrians across diverse vision sensors. We present CrossGait inspired by the
feature alignment strategy, capable of cross retrieving diverse data
modalities. Specifically, we investigate the cross-modality recognition task by
initially extracting features within each modality and subsequently aligning
these features across modalities. To further enhance the cross-modality
performance, we propose a Prototypical Modality-shared Attention Module that
learns modality-shared features from two modality-specific features.
Additionally, we design a Cross-modality Feature Adapter that transforms the
learned modality-specific features into a unified feature space. Extensive
experiments conducted on the SUSTech1K dataset demonstrate the effectiveness of
CrossGait: (1) it exhibits promising cross-modality ability in retrieving
pedestrians across various modalities from different sensors in diverse scenes,
and (2) CrossGait not only learns modality-shared features for cross-modality
gait recognition but also maintains modality-specific features for
single-modality recognition.
|
[
{
"created": "Thu, 4 Apr 2024 10:12:55 GMT",
"version": "v1"
}
] |
2024-04-08
|
[
[
"Wang",
"Rui",
""
],
[
"Shen",
"Chuanfu",
""
],
[
"Marin-Jimenez",
"Manuel J.",
""
],
[
"Huang",
"George Q.",
""
],
[
"Yu",
"Shiqi",
""
]
] |
Current gait recognition research mainly focuses on identifying pedestrians captured by the same type of sensor, neglecting the fact that individuals may be captured by different sensors in order to adapt to various environments. A more practical approach should involve cross-modality matching across different sensors. Hence, this paper focuses on investigating the problem of cross-modality gait recognition, with the objective of accurately identifying pedestrians across diverse vision sensors. We present CrossGait inspired by the feature alignment strategy, capable of cross retrieving diverse data modalities. Specifically, we investigate the cross-modality recognition task by initially extracting features within each modality and subsequently aligning these features across modalities. To further enhance the cross-modality performance, we propose a Prototypical Modality-shared Attention Module that learns modality-shared features from two modality-specific features. Additionally, we design a Cross-modality Feature Adapter that transforms the learned modality-specific features into a unified feature space. Extensive experiments conducted on the SUSTech1K dataset demonstrate the effectiveness of CrossGait: (1) it exhibits promising cross-modality ability in retrieving pedestrians across various modalities from different sensors in diverse scenes, and (2) CrossGait not only learns modality-shared features for cross-modality gait recognition but also maintains modality-specific features for single-modality recognition.
|
2212.08764
|
Benjamin Hall
|
Benjamin Hall, Andrew Goeden, Sahan Reddy, Timothy Gallion, Charles
Koduru, M. Hassan Tanveer
|
Occupancy Grid Based Reactive Planner
|
5 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a perception and path planning pipeline for autonomous
racing in an unknown bounded course. The pipeline was initially created for the
2021 evGrandPrix autonomous division and was further improved for the 2022
event, both of which resulted in first-place finishes. Using a simple
LiDAR-based perception pipeline feeding into an occupancy grid based expansion
algorithm, we determine a goal point to drive toward. Combined with the
occupancy grid algorithm for navigating the track, this pipeline reliably and
consistently achieved laps around a cone-defined course at an average speed of
6.85 m/s over a distance of 434.2 meters, for a total lap time of 63.4 seconds.
|
[
{
"created": "Sat, 17 Dec 2022 00:21:28 GMT",
"version": "v1"
}
] |
2022-12-20
|
[
[
"Hall",
"Benjamin",
""
],
[
"Goeden",
"Andrew",
""
],
[
"Reddy",
"Sahan",
""
],
[
"Gallion",
"Timothy",
""
],
[
"Koduru",
"Charles",
""
],
[
"Tanveer",
"M. Hassan",
""
]
] |
This paper proposes a perception and path planning pipeline for autonomous racing in an unknown bounded course. The pipeline was initially created for the 2021 evGrandPrix autonomous division and was further improved for the 2022 event, both of which resulted in first-place finishes. Using a simple LiDAR-based perception pipeline feeding into an occupancy grid based expansion algorithm, we determine a goal point to drive toward. Combined with the occupancy grid algorithm for navigating the track, this pipeline reliably and consistently achieved laps around a cone-defined course at an average speed of 6.85 m/s over a distance of 434.2 meters, for a total lap time of 63.4 seconds.
|
1911.05755
|
Nicholas Schmidt
|
Nicholas Schmidt and Bryce Stephens
|
An Introduction to Artificial Intelligence and Solutions to the Problems
of Algorithmic Discrimination
|
16 pages
| null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
There is substantial evidence that Artificial Intelligence (AI) and Machine
Learning (ML) algorithms can generate bias against minorities, women, and other
protected classes. Federal and state laws have been enacted to protect
consumers from discrimination in credit, housing, and employment, where
regulators and agencies are tasked with enforcing these laws. Additionally,
there are laws in place to ensure that consumers understand why they are denied
access to services and products, such as consumer loans. In this article, we
provide an overview of the potential benefits and risks associated with the use
of algorithms and data, and focus specifically on fairness. While our
observations generalize to many contexts, we focus on the fairness concerns
raised in consumer credit and the legal requirements of the Equal Credit and
Opportunity Act. We propose a methodology for evaluating algorithmic fairness
and minimizing algorithmic bias that aligns with the provisions of federal and
state anti-discrimination statutes that outlaw overt, disparate treatment, and,
specifically, disparate impact discrimination. We argue that while the use of
AI and ML algorithms heightens potential discrimination risks, these risks can
be evaluated and mitigated, but doing so requires a deep understanding of these
algorithms and the contexts and domains in which they are being used.
|
[
{
"created": "Fri, 8 Nov 2019 22:29:56 GMT",
"version": "v1"
}
] |
2021-08-23
|
[
[
"Schmidt",
"Nicholas",
""
],
[
"Stephens",
"Bryce",
""
]
] |
There is substantial evidence that Artificial Intelligence (AI) and Machine Learning (ML) algorithms can generate bias against minorities, women, and other protected classes. Federal and state laws have been enacted to protect consumers from discrimination in credit, housing, and employment, where regulators and agencies are tasked with enforcing these laws. Additionally, there are laws in place to ensure that consumers understand why they are denied access to services and products, such as consumer loans. In this article, we provide an overview of the potential benefits and risks associated with the use of algorithms and data, and focus specifically on fairness. While our observations generalize to many contexts, we focus on the fairness concerns raised in consumer credit and the legal requirements of the Equal Credit and Opportunity Act. We propose a methodology for evaluating algorithmic fairness and minimizing algorithmic bias that aligns with the provisions of federal and state anti-discrimination statutes that outlaw overt, disparate treatment, and, specifically, disparate impact discrimination. We argue that while the use of AI and ML algorithms heightens potential discrimination risks, these risks can be evaluated and mitigated, but doing so requires a deep understanding of these algorithms and the contexts and domains in which they are being used.
|
0909.1151
|
Antoine Seilles
|
Jean Sallantin (LIRMM), Antoine Seilles (LIRMM)
|
n-Opposition theory to structure debates
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
2007 was the first international congress on the "square of oppositions". A
first attempt to structure debate using n-opposition theory was presented along
with the results of a first experiment on the web. Our proposal for this paper
is to define relations between arguments through a structure of opposition
(square of oppositions is one structure of opposition). We will be trying to
answer the following questions: How to organize debates on the web 2.0? How to
structure them in a logical way? What is the role of n-opposition theory in
this context? We present in this paper results of three experiments
(Betapolitique 2007, ECAP 2008, Intermed 2008).
|
[
{
"created": "Mon, 7 Sep 2009 06:41:06 GMT",
"version": "v1"
}
] |
2009-09-08
|
[
[
"Sallantin",
"Jean",
"",
"LIRMM"
],
[
"Seilles",
"Antoine",
"",
"LIRMM"
]
] |
2007 was the first international congress on the "square of oppositions". A first attempt to structure debate using n-opposition theory was presented along with the results of a first experiment on the web. Our proposal for this paper is to define relations between arguments through a structure of opposition (square of oppositions is one structure of opposition). We will be trying to answer the following questions: How to organize debates on the web 2.0? How to structure them in a logical way? What is the role of n-opposition theory in this context? We present in this paper results of three experiments (Betapolitique 2007, ECAP 2008, Intermed 2008).
|
2403.06563
|
Hui Su
|
Hui Su, Zhi Tian, Xiaoyu Shen, Xunliang Cai
|
Unraveling the Mystery of Scaling Laws: Part I
| null | null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scaling law principles indicate a power-law correlation between loss and
variables such as model size, dataset size, and computational resources
utilized during training. These principles play a vital role in optimizing
various aspects of model pre-training, ultimately contributing to the success
of large language models such as GPT-4, Llama and Gemini. However, the original
scaling law paper by OpenAI did not disclose the complete details necessary to
derive the precise scaling law formulas, and their conclusions are only based
on models containing up to 1.5 billion parameters. Though some subsequent works
attempt to unveil these details and scale to larger models, they often neglect
the training dependency of important factors such as the learning rate, context
length and batch size, leading to their failure to establish a reliable formula
for predicting the test loss trajectory. In this technical report, we confirm
that the scaling law formulations proposed in the original OpenAI paper remain
valid when scaling the model size up to 33 billion, but the constant
coefficients in these formulas vary significantly with the experiment setup. We
meticulously identify influential factors and provide transparent, step-by-step
instructions to estimate all constant terms in scaling-law formulas by training
on models with only 1M~60M parameters. Using these estimated formulas, we
showcase the capability to accurately predict various attributes for models
with up to 33B parameters before their training, including (1) the minimum
possible test loss; (2) the minimum required training steps and processed
tokens to achieve a specific loss; (3) the critical batch size with an optimal
time/computation trade-off at any loss value; and (4) the complete test loss
trajectory with arbitrary batch size.
|
[
{
"created": "Mon, 11 Mar 2024 10:05:29 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2024 17:08:43 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Apr 2024 06:39:34 GMT",
"version": "v3"
}
] |
2024-04-08
|
[
[
"Su",
"Hui",
""
],
[
"Tian",
"Zhi",
""
],
[
"Shen",
"Xiaoyu",
""
],
[
"Cai",
"Xunliang",
""
]
] |
Scaling law principles indicate a power-law correlation between loss and variables such as model size, dataset size, and computational resources utilized during training. These principles play a vital role in optimizing various aspects of model pre-training, ultimately contributing to the success of large language models such as GPT-4, Llama and Gemini. However, the original scaling law paper by OpenAI did not disclose the complete details necessary to derive the precise scaling law formulas, and their conclusions are only based on models containing up to 1.5 billion parameters. Though some subsequent works attempt to unveil these details and scale to larger models, they often neglect the training dependency of important factors such as the learning rate, context length and batch size, leading to their failure to establish a reliable formula for predicting the test loss trajectory. In this technical report, we confirm that the scaling law formulations proposed in the original OpenAI paper remain valid when scaling the model size up to 33 billion, but the constant coefficients in these formulas vary significantly with the experiment setup. We meticulously identify influential factors and provide transparent, step-by-step instructions to estimate all constant terms in scaling-law formulas by training on models with only 1M~60M parameters. Using these estimated formulas, we showcase the capability to accurately predict various attributes for models with up to 33B parameters before their training, including (1) the minimum possible test loss; (2) the minimum required training steps and processed tokens to achieve a specific loss; (3) the critical batch size with an optimal time/computation trade-off at any loss value; and (4) the complete test loss trajectory with arbitrary batch size.
|
1604.07243
|
Mattia Desana
|
Mattia Desana and Christoph Schn\"orr
|
Learning Arbitrary Sum-Product Network Leaves with
Expectation-Maximization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sum-Product Networks with complex probability distributions at the leaves have
been shown to be powerful tractable-inference probabilistic models. However,
while learning the internal parameters has been amply studied, learning complex
leaf distributions is an open problem with only a few results available in special
cases. In this paper we derive an efficient method to learn a very large class
of leaf distributions with Expectation-Maximization. The EM updates have the
form of simple weighted maximum likelihood problems, allowing us to use any
distribution that can be learned with maximum likelihood, even approximately.
The algorithm has cost linear in the model size and converges even if only
partial optimizations are performed. We demonstrate this approach with
experiments on twenty real-life datasets for density estimation, using tree
graphical models as leaves. Our model outperforms state-of-the-art methods for
parameter learning despite using SPNs with far fewer parameters.
|
[
{
"created": "Mon, 25 Apr 2016 13:22:55 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Dec 2016 16:42:59 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Jun 2017 14:08:22 GMT",
"version": "v3"
}
] |
2017-06-15
|
[
[
"Desana",
"Mattia",
""
],
[
"Schnörr",
"Christoph",
""
]
] |
Sum-Product Networks with complex probability distributions at the leaves have been shown to be powerful tractable-inference probabilistic models. However, while learning the internal parameters has been amply studied, learning complex leaf distributions is an open problem with only a few results available in special cases. In this paper we derive an efficient method to learn a very large class of leaf distributions with Expectation-Maximization. The EM updates have the form of simple weighted maximum likelihood problems, allowing us to use any distribution that can be learned with maximum likelihood, even approximately. The algorithm has cost linear in the model size and converges even if only partial optimizations are performed. We demonstrate this approach with experiments on twenty real-life datasets for density estimation, using tree graphical models as leaves. Our model outperforms state-of-the-art methods for parameter learning despite using SPNs with far fewer parameters.
|
2406.07217
|
Mark Vero
|
Hanna Yukhymenko, Robin Staab, Mark Vero, Martin Vechev
|
A Synthetic Dataset for Personal Attribute Inference
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, powerful Large Language Models (LLMs) have become easily accessible
to hundreds of millions of users worldwide. However, their strong capabilities
and vast world knowledge do not come without associated privacy risks. In this
work, we focus on the emerging privacy threat LLMs pose - the ability to
accurately infer personal information from online texts. Despite the growing
importance of LLM-based author profiling, research in this area has been
hampered by a lack of suitable public datasets, largely due to ethical and
privacy concerns associated with real personal data. In this work, we take two
steps to address this problem: (i) we construct a simulation framework for the
popular social media platform Reddit using LLM agents seeded with synthetic
personal profiles; (ii) using this framework, we generate SynthPAI, a diverse
synthetic dataset of over 7800 comments manually labeled for personal
attributes. We validate our dataset with a human study showing that humans
barely outperform random guessing on the task of distinguishing our synthetic
comments from real ones. Further, we verify that our dataset enables meaningful
personal attribute inference research by showing across 18 state-of-the-art
LLMs that our synthetic comments allow us to draw the same conclusions as
real-world data. Together, this indicates that our dataset and pipeline provide
a strong and privacy-preserving basis for future research toward understanding
and mitigating the inference-based privacy threats LLMs pose.
|
[
{
"created": "Tue, 11 Jun 2024 12:50:53 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Yukhymenko",
"Hanna",
""
],
[
"Staab",
"Robin",
""
],
[
"Vero",
"Mark",
""
],
[
"Vechev",
"Martin",
""
]
] |
Recently, powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users worldwide. However, their strong capabilities and vast world knowledge do not come without associated privacy risks. In this work, we focus on the emerging privacy threat LLMs pose - the ability to accurately infer personal information from online texts. Despite the growing importance of LLM-based author profiling, research in this area has been hampered by a lack of suitable public datasets, largely due to ethical and privacy concerns associated with real personal data. In this work, we take two steps to address this problem: (i) we construct a simulation framework for the popular social media platform Reddit using LLM agents seeded with synthetic personal profiles; (ii) using this framework, we generate SynthPAI, a diverse synthetic dataset of over 7800 comments manually labeled for personal attributes. We validate our dataset with a human study showing that humans barely outperform random guessing on the task of distinguishing our synthetic comments from real ones. Further, we verify that our dataset enables meaningful personal attribute inference research by showing across 18 state-of-the-art LLMs that our synthetic comments allow us to draw the same conclusions as real-world data. Together, this indicates that our dataset and pipeline provide a strong and privacy-preserving basis for future research toward understanding and mitigating the inference-based privacy threats LLMs pose.
|
2311.00335
|
Yuval Shavitt
|
Liron David and Yuval Shavitt
|
BGP Typo: A Longitudinal Study and Remedies
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
BGP is the protocol that keeps the Internet connected. Operators use it by
announcing Address Prefixes (APs), namely IP address blocks, that they own or
that they agree to serve as transit for. BGP enables ISPs to devise complex
policies to control which AP announcements to accept (import policy), the route
selection, and which APs to announce and to whom (export policy). In addition,
BGP is also used for coarse traffic engineering of incoming traffic via the
prepend mechanism.
However, there are no widespread good tools for managing BGP, and much of the
complex configuration is done by home-brewed scripts or simply by manually
configuring routers with a bare-bones terminal interface. This process generates
many configuration mistakes.
In this study, we examine typos that propagate in BGP announcements and can
be found in many of the public databases. We classify them and quantify their
presence, and surprisingly find tens of ASNs and hundreds of APs affected by
typos at any given time. In addition, we suggest a simple algorithm that can
detect (and clean) most of them with almost no false positives.
|
[
{
"created": "Wed, 1 Nov 2023 07:06:43 GMT",
"version": "v1"
}
] |
2023-11-02
|
[
[
"David",
"Liron",
""
],
[
"Shavitt",
"Yuval",
""
]
] |
BGP is the protocol that keeps the Internet connected. Operators use it by announcing Address Prefixes (APs), namely IP address blocks, that they own or that they agree to serve as transit for. BGP enables ISPs to devise complex policies to control which AP announcements to accept (import policy), the route selection, and which APs to announce and to whom (export policy). In addition, BGP is also used for coarse traffic engineering of incoming traffic via the prepend mechanism. However, there are no widespread good tools for managing BGP, and much of the complex configuration is done by home-brewed scripts or simply by manually configuring routers with a bare-bones terminal interface. This process generates many configuration mistakes. In this study, we examine typos that propagate in BGP announcements and can be found in many of the public databases. We classify them and quantify their presence, and surprisingly find tens of ASNs and hundreds of APs affected by typos at any given time. In addition, we suggest a simple algorithm that can detect (and clean) most of them with almost no false positives.
|
1608.06495
|
Dan Xu
|
Nannan Li, Dan Xu, Zhenqiang Ying, Zhihao Li, Ge Li
|
Searching Action Proposals via Spatial Actionness Estimation and
Temporal Path Inference and Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the problem of searching action proposals in
unconstrained video clips. Our approach starts from actionness estimation on
frame-level bounding boxes, and then aggregates the bounding boxes belonging to
the same actor across frames via linking, associating, and tracking to generate
spatial-temporal continuous action paths. To achieve the target, a novel
actionness estimation method is first proposed by utilizing both human
appearance and motion cues. Then, the association of the action paths is
formulated as a maximum set coverage problem with the results of actionness
estimation as a prior. To further improve performance, we design an
improved optimization objective for the problem and provide a greedy search
algorithm to solve it. Finally, a tracking-by-detection scheme is designed to
further refine the searched action paths. Extensive experiments on two
challenging datasets, UCF-Sports and UCF-101, show that the proposed approach
advances state-of-the-art proposal generation performance in terms of both
accuracy and proposal quantity.
|
[
{
"created": "Tue, 23 Aug 2016 13:08:30 GMT",
"version": "v1"
}
] |
2016-08-24
|
[
[
"Li",
"Nannan",
""
],
[
"Xu",
"Dan",
""
],
[
"Ying",
"Zhenqiang",
""
],
[
"Li",
"Zhihao",
""
],
[
"Li",
"Ge",
""
]
] |
In this paper, we address the problem of searching action proposals in unconstrained video clips. Our approach starts from actionness estimation on frame-level bounding boxes, and then aggregates the bounding boxes belonging to the same actor across frames via linking, associating, and tracking to generate spatial-temporal continuous action paths. To achieve the target, a novel actionness estimation method is first proposed by utilizing both human appearance and motion cues. Then, the association of the action paths is formulated as a maximum set coverage problem with the results of actionness estimation as a prior. To further improve performance, we design an improved optimization objective for the problem and provide a greedy search algorithm to solve it. Finally, a tracking-by-detection scheme is designed to further refine the searched action paths. Extensive experiments on two challenging datasets, UCF-Sports and UCF-101, show that the proposed approach advances state-of-the-art proposal generation performance in terms of both accuracy and proposal quantity.
|
1704.01041
|
Yan Shuo Tan
|
Yan Shuo Tan, Roman Vershynin
|
Polynomial Time and Sample Complexity for Non-Gaussian Component
Analysis: Spectral Methods
| null | null | null | null |
cs.LG math.PR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of Non-Gaussian Component Analysis (NGCA) is about finding a
maximal low-dimensional subspace $E$ in $\mathbb{R}^n$ so that data points
projected onto $E$ follow a non-gaussian distribution. Although this is an
appropriate model for some real-world data analysis problems, there has been
little progress on this problem over the last decade.
In this paper, we attempt to address this state of affairs in two ways.
First, we give a new characterization of standard gaussian distributions in
high dimensions, which leads to effective tests for non-gaussianness. Second, we
propose a simple algorithm, \emph{Reweighted PCA}, as a method for solving the
NGCA problem. We prove that for a general unknown non-gaussian distribution,
this algorithm recovers at least one direction in $E$, with sample and time
complexity depending polynomially on the dimension of the ambient space. We
conjecture that the algorithm actually recovers the entire $E$.
|
[
{
"created": "Tue, 4 Apr 2017 14:46:00 GMT",
"version": "v1"
}
] |
2017-04-05
|
[
[
"Tan",
"Yan Shuo",
""
],
[
"Vershynin",
"Roman",
""
]
] |
The problem of Non-Gaussian Component Analysis (NGCA) is about finding a maximal low-dimensional subspace $E$ in $\mathbb{R}^n$ so that data points projected onto $E$ follow a non-gaussian distribution. Although this is an appropriate model for some real-world data analysis problems, there has been little progress on this problem over the last decade. In this paper, we attempt to address this state of affairs in two ways. First, we give a new characterization of standard gaussian distributions in high dimensions, which leads to effective tests for non-gaussianness. Second, we propose a simple algorithm, \emph{Reweighted PCA}, as a method for solving the NGCA problem. We prove that for a general unknown non-gaussian distribution, this algorithm recovers at least one direction in $E$, with sample and time complexity depending polynomially on the dimension of the ambient space. We conjecture that the algorithm actually recovers the entire $E$.
|
2103.09382
|
Chuang Niu
|
Chuang Niu and Hongming Shan and Ge Wang
|
SPICE: Semantic Pseudo-labeling for Image Clustering
| null |
IEEE Transactions on Image Processing, 2022
|
10.1109/TIP.2022.3221290
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The similarity among samples and the discrepancy between clusters are two
crucial aspects of image clustering. However, current deep clustering methods
suffer from the inaccurate estimation of either feature similarity or semantic
discrepancy. In this paper, we present a Semantic Pseudo-labeling-based Image
ClustEring (SPICE) framework, which divides the clustering network into a
feature model for measuring the instance-level similarity and a clustering head
for identifying the cluster-level discrepancy. We design two semantics-aware
pseudo-labeling algorithms, prototype pseudo-labeling, and reliable
pseudo-labeling, which enable accurate and reliable self-supervision over
clustering. Without using any ground-truth label, we optimize the clustering
network in three stages: 1) train the feature model through contrastive
learning to measure the instance similarity, 2) train the clustering head with
the prototype pseudo-labeling algorithm to identify cluster semantics, and 3)
jointly train the feature model and clustering head with the reliable
pseudo-labeling algorithm to improve the clustering performance. Extensive
experimental results demonstrate that SPICE achieves significant improvements
(~10%) over existing methods and establishes the new state-of-the-art
clustering results on six image benchmark datasets in terms of three popular
metrics. Importantly, SPICE significantly reduces the gap between unsupervised
and fully-supervised classification; e.g., there is only a 2% (91.8% vs 93.8%)
accuracy difference on CIFAR-10. Our code has been made publicly available at
https://github.com/niuchuangnn/SPICE.
|
[
{
"created": "Wed, 17 Mar 2021 00:52:27 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Oct 2021 14:11:41 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jan 2022 14:18:19 GMT",
"version": "v3"
}
] |
2022-11-23
|
[
[
"Niu",
"Chuang",
""
],
[
"Shan",
"Hongming",
""
],
[
"Wang",
"Ge",
""
]
] |
The similarity among samples and the discrepancy between clusters are two crucial aspects of image clustering. However, current deep clustering methods suffer from the inaccurate estimation of either feature similarity or semantic discrepancy. In this paper, we present a Semantic Pseudo-labeling-based Image ClustEring (SPICE) framework, which divides the clustering network into a feature model for measuring the instance-level similarity and a clustering head for identifying the cluster-level discrepancy. We design two semantics-aware pseudo-labeling algorithms, prototype pseudo-labeling, and reliable pseudo-labeling, which enable accurate and reliable self-supervision over clustering. Without using any ground-truth label, we optimize the clustering network in three stages: 1) train the feature model through contrastive learning to measure the instance similarity, 2) train the clustering head with the prototype pseudo-labeling algorithm to identify cluster semantics, and 3) jointly train the feature model and clustering head with the reliable pseudo-labeling algorithm to improve the clustering performance. Extensive experimental results demonstrate that SPICE achieves significant improvements (~10%) over existing methods and establishes the new state-of-the-art clustering results on six image benchmark datasets in terms of three popular metrics. Importantly, SPICE significantly reduces the gap between unsupervised and fully-supervised classification; e.g., there is only a 2% (91.8% vs 93.8%) accuracy difference on CIFAR-10. Our code has been made publicly available at https://github.com/niuchuangnn/SPICE.
|
2010.11940
|
Youngwoon Lee
|
Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max
Pflueger, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert
|
Motion Planner Augmented Reinforcement Learning for Robot Manipulation
in Obstructed Environments
|
Published at the Conference on Robot Learning (CoRL) 2020
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning (RL) agents are able to learn contact-rich
manipulation tasks by maximizing a reward signal, but require large amounts of
experience, especially in environments with many obstacles that complicate
exploration. In contrast, motion planners use explicit models of the agent and
environment to plan collision-free paths to faraway goals, but suffer from
inaccurate models in tasks that require contacts with the environment. To
combine the benefits of both approaches, we propose motion planner augmented RL
(MoPA-RL) which augments the action space of an RL agent with the long-horizon
planning capabilities of motion planners. Based on the magnitude of the action,
our approach smoothly transitions between directly executing the action and
invoking a motion planner. We evaluate our approach on various simulated
manipulation tasks and compare it to alternative action spaces in terms of
learning efficiency and safety. The experiments demonstrate that MoPA-RL
increases learning efficiency, leads to faster exploration, and results in
safer policies that avoid collisions with the environment. Videos and code are
available at https://clvrai.com/mopa-rl .
|
[
{
"created": "Thu, 22 Oct 2020 17:59:09 GMT",
"version": "v1"
}
] |
2020-10-23
|
[
[
"Yamada",
"Jun",
""
],
[
"Lee",
"Youngwoon",
""
],
[
"Salhotra",
"Gautam",
""
],
[
"Pertsch",
"Karl",
""
],
[
"Pflueger",
"Max",
""
],
[
"Sukhatme",
"Gaurav S.",
""
],
[
"Lim",
"Joseph J.",
""
],
[
"Englert",
"Peter",
""
]
] |
Deep reinforcement learning (RL) agents are able to learn contact-rich manipulation tasks by maximizing a reward signal, but require large amounts of experience, especially in environments with many obstacles that complicate exploration. In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from inaccurate models in tasks that require contacts with the environment. To combine the benefits of both approaches, we propose motion planner augmented RL (MoPA-RL) which augments the action space of an RL agent with the long-horizon planning capabilities of motion planners. Based on the magnitude of the action, our approach smoothly transitions between directly executing the action and invoking a motion planner. We evaluate our approach on various simulated manipulation tasks and compare it to alternative action spaces in terms of learning efficiency and safety. The experiments demonstrate that MoPA-RL increases learning efficiency, leads to faster exploration, and results in safer policies that avoid collisions with the environment. Videos and code are available at https://clvrai.com/mopa-rl .
|
1803.09570
|
Leander Tentrup
|
Peter Faymonville and Bernd Finkbeiner and Markus N. Rabe and Leander
Tentrup
|
Encodings of Bounded Synthesis
|
Appeared in the proceedings of TACAS 2017
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The reactive synthesis problem is to compute a system satisfying a given
specification in temporal logic. Bounded synthesis is the approach to bound the
maximum size of the system that we accept as a solution to the reactive
synthesis problem. As a result, bounded synthesis is decidable whenever the
corresponding verification problem is decidable, and can be applied in settings
where classic synthesis fails, such as in the synthesis of distributed systems.
In this paper, we study the constraint solving problem behind bounded
synthesis. We consider different reductions of the bounded synthesis problem of
linear-time temporal logic (LTL) to constraint systems given as boolean
formulas (SAT), quantified boolean formulas (QBF), and dependency quantified
boolean formulas (DQBF). The reductions represent different trade-offs between
conciseness and algorithmic efficiency. In the SAT encoding, both inputs and
states of the system are represented explicitly; in QBF, inputs are symbolic
and states are explicit; in DQBF, both inputs and states are symbolic. We
evaluate the encodings systematically using benchmarks from the reactive
synthesis competition (SYNTCOMP) and state-of-the-art solvers. Our key, and
perhaps surprising, empirical finding is that QBF clearly dominates both SAT
and DQBF.
|
[
{
"created": "Mon, 26 Mar 2018 13:16:59 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Faymonville",
"Peter",
""
],
[
"Finkbeiner",
"Bernd",
""
],
[
"Rabe",
"Markus N.",
""
],
[
"Tentrup",
"Leander",
""
]
] |
The reactive synthesis problem is to compute a system satisfying a given specification in temporal logic. Bounded synthesis is the approach to bound the maximum size of the system that we accept as a solution to the reactive synthesis problem. As a result, bounded synthesis is decidable whenever the corresponding verification problem is decidable, and can be applied in settings where classic synthesis fails, such as in the synthesis of distributed systems. In this paper, we study the constraint solving problem behind bounded synthesis. We consider different reductions of the bounded synthesis problem of linear-time temporal logic (LTL) to constraint systems given as boolean formulas (SAT), quantified boolean formulas (QBF), and dependency quantified boolean formulas (DQBF). The reductions represent different trade-offs between conciseness and algorithmic efficiency. In the SAT encoding, both inputs and states of the system are represented explicitly; in QBF, inputs are symbolic and states are explicit; in DQBF, both inputs and states are symbolic. We evaluate the encodings systematically using benchmarks from the reactive synthesis competition (SYNTCOMP) and state-of-the-art solvers. Our key, and perhaps surprising, empirical finding is that QBF clearly dominates both SAT and DQBF.
|
2109.07703
|
Guanxiong Chen
|
Guanxiong Chen, Haoyu Yang and Ian M. Mitchell
|
ROS-X-Habitat: Bridging the ROS Ecosystem with Embodied AI
|
Camera-ready version submitted to Canadian Conference on Computer and
Robot Vision (CRV) 2022
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce ROS-X-Habitat, a software interface that bridges the AI Habitat
platform for embodied learning-based agents with other robotics resources via
ROS. This interface not only offers standardized communication protocols
between embodied agents and simulators, but also enables physically and
photorealistic simulation that benefits the training and/or testing of
vision-based embodied agents. With this interface, roboticists can evaluate
their own Habitat RL agents in another ROS-based simulator or use Habitat Sim
v2 as the test bed for their own robotic algorithms. Through in silico
experiments, we demonstrate that ROS-X-Habitat has minimal impact on the
navigation performance and simulation speed of a Habitat RGBD agent; that a
standard set of ROS mapping, planning and navigation tools can run in Habitat
Sim v2; and that a Habitat agent can run in the standard ROS simulator Gazebo.
|
[
{
"created": "Thu, 16 Sep 2021 03:53:52 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Sep 2021 04:27:25 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Apr 2022 06:11:42 GMT",
"version": "v3"
}
] |
2022-05-02
|
[
[
"Chen",
"Guanxiong",
""
],
[
"Yang",
"Haoyu",
""
],
[
"Mitchell",
"Ian M.",
""
]
] |
We introduce ROS-X-Habitat, a software interface that bridges the AI Habitat platform for embodied learning-based agents with other robotics resources via ROS. This interface not only offers standardized communication protocols between embodied agents and simulators, but also enables physically realistic and photorealistic simulation that benefits the training and/or testing of vision-based embodied agents. With this interface, roboticists can evaluate their own Habitat RL agents in another ROS-based simulator or use Habitat Sim v2 as the test bed for their own robotic algorithms. Through in silico experiments, we demonstrate that ROS-X-Habitat has minimal impact on the navigation performance and simulation speed of a Habitat RGBD agent; that a standard set of ROS mapping, planning and navigation tools can run in Habitat Sim v2; and that a Habitat agent can run in the standard ROS simulator Gazebo.
|
2304.04854
|
Aleksandr Dekhovich
|
Aleksandr Dekhovich, Marcel H.F. Sluiter, David M.J. Tax and Miguel A.
Bessa
|
iPINNs: Incremental learning for Physics-informed neural networks
| null | null | null | null |
cs.LG cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
Physics-informed neural networks (PINNs) have recently become a powerful tool
for solving partial differential equations (PDEs). However, finding a set of
neural network parameters that lead to fulfilling a PDE can be challenging and
non-unique due to the complexity of the loss landscape that needs to be
traversed. Although a variety of multi-task learning and transfer learning
approaches have been proposed to overcome these issues, there is no incremental
training procedure for PINNs that can effectively mitigate such training
challenges. We propose incremental PINNs (iPINNs) that can learn multiple tasks
(equations) sequentially without additional parameters for new tasks and
improve performance for every equation in the sequence. Our approach learns
multiple PDEs starting from the simplest one by creating its own subnetwork for
each PDE and allowing each subnetwork to overlap with previously learned
subnetworks. We demonstrate that previous subnetworks are a good initialization
for a new equation if PDEs share similarities. We also show that iPINNs achieve
lower prediction error than regular PINNs for two different scenarios: (1)
learning a family of equations (e.g., 1-D convection PDE); and (2) learning
PDEs resulting from a combination of processes (e.g., 1-D reaction-diffusion
PDE). The ability to learn all problems with a single network together with
learning more complex PDEs with better generalization than regular PINNs will
open new avenues in this field.
|
[
{
"created": "Mon, 10 Apr 2023 20:19:20 GMT",
"version": "v1"
}
] |
2023-04-12
|
[
[
"Dekhovich",
"Aleksandr",
""
],
[
"Sluiter",
"Marcel H. F.",
""
],
[
"Tax",
"David M. J.",
""
],
[
"Bessa",
"Miguel A.",
""
]
] |
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that lead to fulfilling a PDE can be challenging and non-unique due to the complexity of the loss landscape that needs to be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, there is no incremental training procedure for PINNs that can effectively mitigate such training challenges. We propose incremental PINNs (iPINNs) that can learn multiple tasks (equations) sequentially without additional parameters for new tasks and improve performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks. We demonstrate that previous subnetworks are a good initialization for a new equation if PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs for two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction-diffusion PDE). The ability to learn all problems with a single network together with learning more complex PDEs with better generalization than regular PINNs will open new avenues in this field.
|
1804.08559
|
Srijan Kumar
|
Srijan Kumar, Neil Shah
|
False Information on Web and Social Media: A Survey
|
To appear in the book titled Social Media Analytics: Advances and
Applications, by CRC press, 2018
| null | null | null |
cs.SI cs.CL cs.CY cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
False information can be created and spread easily through the web and social
media platforms, resulting in widespread real-world impact. Characterizing how
false information proliferates on social platforms and why it succeeds in
deceiving readers are critical to develop efficient detection algorithms and
tools for early detection. A recent surge of research in this area has aimed to
address the key issues using methods based on feature engineering, graph
mining, and information modeling. The majority of the research has primarily
focused on two broad categories of false information: opinion-based (e.g., fake
reviews), and fact-based (e.g., false news and hoaxes). Therefore, in this
work, we present a comprehensive survey spanning diverse aspects of false
information, namely (i) the actors involved in spreading false information,
(ii) rationale behind successfully deceiving readers, (iii) quantifying the
impact of false information, (iv) measuring its characteristics across
different dimensions, and finally, (v) algorithms developed to detect false
information. In doing so, we create a unified framework to describe these
recent methods and highlight a number of important directions for future
research.
|
[
{
"created": "Mon, 23 Apr 2018 16:52:49 GMT",
"version": "v1"
}
] |
2018-04-24
|
[
[
"Kumar",
"Srijan",
""
],
[
"Shah",
"Neil",
""
]
] |
False information can be created and spread easily through the web and social media platforms, resulting in widespread real-world impact. Characterizing how false information proliferates on social platforms and why it succeeds in deceiving readers are critical to develop efficient detection algorithms and tools for early detection. A recent surge of research in this area has aimed to address the key issues using methods based on feature engineering, graph mining, and information modeling. The majority of the research has primarily focused on two broad categories of false information: opinion-based (e.g., fake reviews), and fact-based (e.g., false news and hoaxes). Therefore, in this work, we present a comprehensive survey spanning diverse aspects of false information, namely (i) the actors involved in spreading false information, (ii) rationale behind successfully deceiving readers, (iii) quantifying the impact of false information, (iv) measuring its characteristics across different dimensions, and finally, (v) algorithms developed to detect false information. In doing so, we create a unified framework to describe these recent methods and highlight a number of important directions for future research.
|
2301.09496
|
Lorenzo Simone
|
Lorenzo Simone and Davide Bacciu
|
ECGAN: Self-supervised generative adversarial network for
electrocardiography
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
High-quality synthetic data can support the development of effective
predictive models for biomedical tasks, especially in rare diseases or when
subject to compelling privacy constraints. These limitations, for instance,
negatively impact open access to electrocardiography datasets about
arrhythmias. This work introduces a self-supervised approach to the generation
of synthetic electrocardiography time series which is shown to promote
morphological plausibility. Our model (ECGAN) allows conditioning the
generative process for specific rhythm abnormalities, enhancing synchronization
and diversity across samples with respect to literature models. A dedicated
sample quality assessment framework is also defined, leveraging arrhythmia
classifiers. The empirical results highlight a substantial improvement against
state-of-the-art generative models for sequences and audio synthesis.
|
[
{
"created": "Mon, 23 Jan 2023 15:48:02 GMT",
"version": "v1"
}
] |
2023-01-24
|
[
[
"Simone",
"Lorenzo",
""
],
[
"Bacciu",
"Davide",
""
]
] |
High-quality synthetic data can support the development of effective predictive models for biomedical tasks, especially in rare diseases or when subject to compelling privacy constraints. These limitations, for instance, negatively impact open access to electrocardiography datasets about arrhythmias. This work introduces a self-supervised approach to the generation of synthetic electrocardiography time series which is shown to promote morphological plausibility. Our model (ECGAN) allows conditioning the generative process for specific rhythm abnormalities, enhancing synchronization and diversity across samples with respect to literature models. A dedicated sample quality assessment framework is also defined, leveraging arrhythmia classifiers. The empirical results highlight a substantial improvement against state-of-the-art generative models for sequences and audio synthesis.
|
2203.08945
|
Alexander Levine
|
Alexander Levine, Soheil Feizi
|
Provable Adversarial Robustness for Fractional Lp Threat Models
|
AISTATS 2022 accepted paper
| null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, researchers have extensively studied adversarial robustness
in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm
bounded adversarial attacks. However, attacks bounded by fractional L_p "norms"
(quasi-norms defined by the L_p distance with 0<p<1) have yet to be thoroughly
considered. We proactively propose a defense with several desirable properties:
it provides provable (certified) robustness, scales to ImageNet, and yields
deterministic (rather than high-probability) certified guarantees when applied
to quantized data (e.g., images). Our technique for fractional L_p robustness
constructs expressive, deep classifiers that are globally Lipschitz with
respect to the L_p^p metric, for any 0<p<1. However, our method is even more
general: we can construct classifiers which are globally Lipschitz with respect
to any metric defined as the sum of concave functions of components. Our
approach builds on a recent work, Levine and Feizi (2021), which provides a
provable defense against L_1 attacks. However, we demonstrate that our proposed
guarantees are highly non-vacuous, compared to the trivial solution of using
(Levine and Feizi, 2021) directly and applying norm inequalities. Code is
available at https://github.com/alevine0/fractionalLpRobustness.
|
[
{
"created": "Wed, 16 Mar 2022 21:11:41 GMT",
"version": "v1"
}
] |
2022-03-18
|
[
[
"Levine",
"Alexander",
""
],
[
"Feizi",
"Soheil",
""
]
] |
In recent years, researchers have extensively studied adversarial robustness in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm bounded adversarial attacks. However, attacks bounded by fractional L_p "norms" (quasi-norms defined by the L_p distance with 0<p<1) have yet to be thoroughly considered. We proactively propose a defense with several desirable properties: it provides provable (certified) robustness, scales to ImageNet, and yields deterministic (rather than high-probability) certified guarantees when applied to quantized data (e.g., images). Our technique for fractional L_p robustness constructs expressive, deep classifiers that are globally Lipschitz with respect to the L_p^p metric, for any 0<p<1. However, our method is even more general: we can construct classifiers which are globally Lipschitz with respect to any metric defined as the sum of concave functions of components. Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks. However, we demonstrate that our proposed guarantees are highly non-vacuous, compared to the trivial solution of using (Levine and Feizi, 2021) directly and applying norm inequalities. Code is available at https://github.com/alevine0/fractionalLpRobustness.
|
2002.12249
|
Carlo Tiseo
|
Carlo Tiseo, Wolfgang Merkt, Wouter Wolfslag, Sethu Vijayakumar and
Michael Mistry
|
Safe and Compliant Control of Redundant Robots Using Superimposition of
Passive Task-Space Controllers
|
Nonlinear Dyn (2023)
| null |
10.1007/s11071-023-09045-x
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safe and compliant control of dynamic systems in interaction with the
environment, e.g., in shared workspaces, continues to represent a major
challenge. Mismatches in the dynamic model of the robots, numerical
singularities, and the intrinsic environmental unpredictability are all
contributing factors. Online optimization of impedance controllers has recently
shown great promise in addressing this challenge, however, their performance is
not sufficiently robust to be deployed in challenging environments. This work
proposes a compliant control method for redundant manipulators based on a
superimposition of multiple passive task-space controllers in a hierarchy. Our
control framework of passive controllers is inherently stable, numerically
well-conditioned (as no matrix inversions are required), and computationally
inexpensive (as no optimization is used). We leverage and introduce a novel
stiffness profile for a recently proposed passive controller with smooth
transitions between the divergence and convergence phases making it
particularly suitable when multiple passive controllers are combined through
superimposition. Our experimental results demonstrate that the proposed method
achieves sub-centimeter tracking performance during demanding dynamic tasks
with fast-changing references, while remaining safe to interact with and robust
to singularities. The proposed framework achieves such results without knowledge
of the robot dynamics and thanks to its passivity is intrinsically stable. The
data further show that the robot can fully take advantage of the redundancy to
maintain the primary task accuracy while compensating for unknown environmental
interactions, which is not possible from current frameworks that require
accurate contact information.
|
[
{
"created": "Thu, 27 Feb 2020 16:46:58 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Sep 2020 11:59:25 GMT",
"version": "v2"
}
] |
2023-11-28
|
[
[
"Tiseo",
"Carlo",
""
],
[
"Merkt",
"Wolfgang",
""
],
[
"Wolfslag",
"Wouter",
""
],
[
"Vijayakumar",
"Sethu",
""
],
[
"Mistry",
"Michael",
""
]
] |
Safe and compliant control of dynamic systems in interaction with the environment, e.g., in shared workspaces, continues to represent a major challenge. Mismatches in the dynamic model of the robots, numerical singularities, and the intrinsic environmental unpredictability are all contributing factors. Online optimization of impedance controllers has recently shown great promise in addressing this challenge, however, their performance is not sufficiently robust to be deployed in challenging environments. This work proposes a compliant control method for redundant manipulators based on a superimposition of multiple passive task-space controllers in a hierarchy. Our control framework of passive controllers is inherently stable, numerically well-conditioned (as no matrix inversions are required), and computationally inexpensive (as no optimization is used). We leverage and introduce a novel stiffness profile for a recently proposed passive controller with smooth transitions between the divergence and convergence phases making it particularly suitable when multiple passive controllers are combined through superimposition. Our experimental results demonstrate that the proposed method achieves sub-centimeter tracking performance during demanding dynamic tasks with fast-changing references, while remaining safe to interact with and robust to singularities. The proposed framework achieves such results without knowledge of the robot dynamics and thanks to its passivity is intrinsically stable. The data further show that the robot can fully take advantage of the redundancy to maintain the primary task accuracy while compensating for unknown environmental interactions, which is not possible from current frameworks that require accurate contact information.
|
2305.11725
|
Fangyu Lei
|
Fangyu Lei, Xiang Li, Yifan Wei, Shizhu He, Yiming Huang, Jun Zhao,
Kang Liu
|
S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid
Question Answering
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Answering multi-hop questions over hybrid factual knowledge from the given
text and table (TextTableQA) is a challenging task. Existing models mainly
adopt a retriever-reader framework, which has several deficiencies, such as
noisy labeling in training retriever, insufficient utilization of heterogeneous
information over text and table, and deficient ability for different reasoning
operations. In this paper, we propose a three-stage TextTableQA framework
S3HQA, which comprises a retriever, a selector, and a reasoner. We use a retriever
with refinement training to solve the noisy labeling problem. Then, a hybrid
selector considers the linked relationships between heterogeneous data to
select the most relevant factual knowledge. For the final stage, instead of
adapting a reading comprehension module like in previous methods, we employ a
generation-based reasoner to obtain answers. This includes two approaches: a
row-wise generator and an LLM prompting generator~(first time used in this
task). The experimental results demonstrate that our method achieves
competitive results in the few-shot setting. When trained on the full dataset,
our approach outperforms all baseline methods, ranking first on the HybridQA
leaderboard.
|
[
{
"created": "Fri, 19 May 2023 15:01:48 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jun 2024 09:53:44 GMT",
"version": "v2"
}
] |
2024-06-26
|
[
[
"Lei",
"Fangyu",
""
],
[
"Li",
"Xiang",
""
],
[
"Wei",
"Yifan",
""
],
[
"He",
"Shizhu",
""
],
[
"Huang",
"Yiming",
""
],
[
"Zhao",
"Jun",
""
],
[
"Liu",
"Kang",
""
]
] |
Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which has several deficiencies, such as noisy labeling in training retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework S3HQA, which comprises a retriever, a selector, and a reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adapting a reading comprehension module like in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator~(first time used in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.
|
1308.1971
|
Ji Zhu
|
Ji Zhu and Bruce Hajek
|
Tree dynamics for peer-to-peer streaming
| null | null | null | null |
cs.DS cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an asynchronous distributed algorithm to manage multiple
trees for peer-to-peer streaming in a flow level model. It is assumed that
videos are cut into substreams, with or without source coding, to be
distributed to all nodes. The algorithm guarantees that each node receives
sufficiently many substreams within delay logarithmic in the number of peers.
The algorithm works by constantly updating the topology so that each substream
is distributed through trees to as many nodes as possible without interference.
Competition among trees for limited upload capacity is managed so that both
coverage and balance are achieved. The algorithm is robust in that it
efficiently eliminates cycles and maintains tree structures in a distributed
way. The algorithm favors nodes with higher degree, so it not only works for
live streaming and video on demand, but also in the case a few nodes with large
degree act as servers and other nodes act as clients.
A proof of convergence of the algorithm is given assuming instantaneous
update of depth information, and for the case of a single tree it is shown that
the convergence time is stochastically tightly bounded by a small constant
times the log of the number of nodes. These theoretical results are
complemented by simulations showing that the algorithm works well even when
most assumptions for the theoretical tractability do not hold.
|
[
{
"created": "Thu, 8 Aug 2013 20:38:25 GMT",
"version": "v1"
}
] |
2013-08-12
|
[
[
"Zhu",
"Ji",
""
],
[
"Hajek",
"Bruce",
""
]
] |
This paper presents an asynchronous distributed algorithm to manage multiple trees for peer-to-peer streaming in a flow level model. It is assumed that videos are cut into substreams, with or without source coding, to be distributed to all nodes. The algorithm guarantees that each node receives sufficiently many substreams within delay logarithmic in the number of peers. The algorithm works by constantly updating the topology so that each substream is distributed through trees to as many nodes as possible without interference. Competition among trees for limited upload capacity is managed so that both coverage and balance are achieved. The algorithm is robust in that it efficiently eliminates cycles and maintains tree structures in a distributed way. The algorithm favors nodes with higher degree, so it not only works for live streaming and video on demand, but also in the case a few nodes with large degree act as servers and other nodes act as clients. A proof of convergence of the algorithm is given assuming instantaneous update of depth information, and for the case of a single tree it is shown that the convergence time is stochastically tightly bounded by a small constant times the log of the number of nodes. These theoretical results are complemented by simulations showing that the algorithm works well even when most assumptions for the theoretical tractability do not hold.
|
2005.00937
|
Ben Chugg
|
Ben Chugg, William S. Evans, Kelvin Wong
|
Simultaneous Visibility Representations of Undirected Pairs of Graphs
|
22 pages, 8 figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of determining if a pair of undirected graphs
$\langle G_\mathsf{V}, G_\mathsf{H} \rangle$, which share the same vertex set,
has a representation using opaque geometric shapes for vertices, and
vertical/horizontal visibility between shapes to determine the edges of
$G_\mathsf{V}$/$G_\mathsf{H}$. While such a simultaneous visibility
representation of two graphs can be determined efficiently if the direction of
the required visibility for each edge is provided (and the vertex shapes are
sufficiently simple), it was unclear if edge direction is critical for
efficiency. We show that the problem is $\mathsf{NP}$-complete without that
information, even for graphs that are only slightly more complex than paths. In
addition, we characterize which pairs of paths have simultaneous visibility
representations using fixed orientation L-shapes. This narrows the range of
possible graph families for which determining simultaneous visibility
representation is non-trivial yet not $\mathsf{NP}$-hard.
|
[
{
"created": "Sat, 2 May 2020 23:01:09 GMT",
"version": "v1"
},
{
"created": "Tue, 25 May 2021 16:54:42 GMT",
"version": "v2"
}
] |
2021-05-26
|
[
[
"Chugg",
"Ben",
""
],
[
"Evans",
"William S.",
""
],
[
"Wong",
"Kelvin",
""
]
] |
We consider the problem of determining if a pair of undirected graphs $\langle G_\mathsf{V}, G_\mathsf{H} \rangle$, which share the same vertex set, has a representation using opaque geometric shapes for vertices, and vertical/horizontal visibility between shapes to determine the edges of $G_\mathsf{V}$/$G_\mathsf{H}$. While such a simultaneous visibility representation of two graphs can be determined efficiently if the direction of the required visibility for each edge is provided (and the vertex shapes are sufficiently simple), it was unclear if edge direction is critical for efficiency. We show that the problem is $\mathsf{NP}$-complete without that information, even for graphs that are only slightly more complex than paths. In addition, we characterize which pairs of paths have simultaneous visibility representations using fixed orientation L-shapes. This narrows the range of possible graph families for which determining simultaneous visibility representation is non-trivial yet not $\mathsf{NP}$-hard.
|
1610.02991
|
Kazutoshi Sasahara
|
Rishemjit Kaur and Kazutoshi Sasahara
|
Quantifying moral foundations from various topics on Twitter
conversations
|
16 pages, 5 figures, 4 tables, The Proceedings of the 2016 IEEE
International Conference on Big Data
| null |
10.1109/BigData.2016.7840889
| null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Moral foundations theory explains variations in moral behavior using innate
moral foundations: Care, Fairness, Ingroup, Authority, and Purity, along with
experimental supports. However, little is known about the roles of and
relationships between those foundations in everyday moral situations. To
address these, we quantify moral foundations from a large amount of online
conversations (tweets) about moral topics on the social media site Twitter. We
measure moral loadings using latent semantic analysis of tweets related to
topics on abortion, homosexuality, immigration, religion, and immorality in
general, showing how the five moral foundations function in spontaneous
conversations about moral-violating situations. The results indicate that
although the five foundations are mutually related, Purity is the most
distinctive foundation and Care is the most dominant foundation in everyday
conversations on immorality. Our study shows a new possibility of natural
language processing and social big data for moral psychology.
|
[
{
"created": "Mon, 10 Oct 2016 16:36:02 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Nov 2016 19:01:01 GMT",
"version": "v2"
}
] |
2017-11-21
|
[
[
"Kaur",
"Rishemjit",
""
],
[
"Sasahara",
"Kazutoshi",
""
]
] |
Moral foundations theory explains variations in moral behavior using innate moral foundations: Care, Fairness, Ingroup, Authority, and Purity, along with experimental supports. However, little is known about the roles of and relationships between those foundations in everyday moral situations. To address these, we quantify moral foundations from a large amount of online conversations (tweets) about moral topics on the social media site Twitter. We measure moral loadings using latent semantic analysis of tweets related to topics on abortion, homosexuality, immigration, religion, and immorality in general, showing how the five moral foundations function in spontaneous conversations about moral violating situations. The results indicate that although the five foundations are mutually related, Purity is the most distinctive foundation and Care is the most dominant foundation in everyday conversations on immorality. Our study shows a new possibility of natural language processing and social big data for moral psychology.
|
2001.07747
|
Maciej Besta
|
Robert Gerstenberger, Maciej Besta, and Torsten Hoefler
|
Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One
Sided
|
Best Paper Award at ACM/IEEE Supercomputing'13 (1/92), also Best
Student Paper finalist (8/92); source code of foMPI can be downloaded from
http://spcl.inf.ethz.ch/Research/Parallel_Programming/foMPI
|
Proceedings of the ACM/IEEE International Conference on High
Performance Computing, Networking, Storage and Analysis, pages 53:1--53:12,
November 2013
|
10.1145/2503210.2503286
| null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern interconnects offer remote direct memory access (RDMA) features. Yet,
most applications rely on explicit message passing for communications despite
their unwanted overheads. The MPI-3.0 standard defines a programming interface
for exploiting RDMA networks directly; however, its scalability and
practicability have to be demonstrated in practice. In this work, we develop
scalable bufferless protocols that implement the MPI-3.0 specification. Our
protocols support scaling to millions of cores with negligible memory
consumption while providing highest performance and minimal overheads. To arm
programmers, we provide a spectrum of performance models for all critical
functions and demonstrate the usability of our library and models with several
application studies with up to half a million processes. We show that our
design is comparable to, or better than UPC and Fortran Coarrays in terms of
latency, bandwidth, and message rate. We also demonstrate application
performance improvements with comparable programming complexity.
|
[
{
"created": "Tue, 21 Jan 2020 19:20:03 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jun 2020 13:20:08 GMT",
"version": "v2"
}
] |
2020-07-01
|
[
[
"Gerstenberger",
"Robert",
""
],
[
"Besta",
"Maciej",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communications despite their unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than UPC and Fortran Coarrays in terms of latency, bandwidth, and message rate. We also demonstrate application performance improvements with comparable programming complexity.
|
2305.05911
|
Yang Yu
|
Ziqian Zhang, Lei Yuan, Lihe Li, Ke Xue, Chengxing Jia, Cong Guan,
Chao Qian, Yang Yu
|
Fast Teammate Adaptation in the Presence of Sudden Policy Change
|
In: Proceedings of the 39th Conference on Uncertainty in Artificial
Intelligence (UAI'23), Pittsburgh, PA, 2023
| null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In cooperative multi-agent reinforcement learning (MARL), where an agent
coordinates with teammate(s) for a shared goal, it may sustain non-stationarity
caused by the policy change of teammates. Prior works mainly concentrate on the
policy change during the training phase or teammates altering across episodes,
ignoring the fact that teammates may suffer from policy change suddenly within
an episode, which might lead to miscoordination and poor performance as a
result. We formulate the problem as an open Dec-POMDP, where we control some
agents to coordinate with uncontrolled teammates, whose policies could be
changed within one episode. Then we develop a new framework, fast teammates
adaptation (Fastap), to address the problem. Concretely, we first train
versatile teammates' policies and assign them to different clusters via the
Chinese Restaurant Process (CRP). Then, we train the controlled agent(s) to
coordinate with the sampled uncontrolled teammates by capturing their
identifications as context for fast adaptation. Finally, each agent applies its
local information to anticipate the teammates' context for decision-making
accordingly. This process proceeds alternately, leading to a robust policy that
can adapt to any teammates during the decentralized execution phase. We show in
multiple multi-agent benchmarks that Fastap can achieve superior performance
than multiple baselines in stationary and non-stationary scenarios.
|
[
{
"created": "Wed, 10 May 2023 05:42:47 GMT",
"version": "v1"
}
] |
2023-05-11
|
[
[
"Zhang",
"Ziqian",
""
],
[
"Yuan",
"Lei",
""
],
[
"Li",
"Lihe",
""
],
[
"Xue",
"Ke",
""
],
[
"Jia",
"Chengxing",
""
],
[
"Guan",
"Cong",
""
],
[
"Qian",
"Chao",
""
],
[
"Yu",
"Yang",
""
]
] |
In cooperative multi-agent reinforcement learning (MARL), where an agent coordinates with teammate(s) for a shared goal, it may sustain non-stationarity caused by the policy change of teammates. Prior works mainly concentrate on the policy change during the training phase or teammates altering across episodes, ignoring the fact that teammates may suffer from policy change suddenly within an episode, which might lead to miscoordination and poor performance as a result. We formulate the problem as an open Dec-POMDP, where we control some agents to coordinate with uncontrolled teammates, whose policies could be changed within one episode. Then we develop a new framework, fast teammates adaptation (Fastap), to address the problem. Concretely, we first train versatile teammates' policies and assign them to different clusters via the Chinese Restaurant Process (CRP). Then, we train the controlled agent(s) to coordinate with the sampled uncontrolled teammates by capturing their identifications as context for fast adaptation. Finally, each agent applies its local information to anticipate the teammates' context for decision-making accordingly. This process proceeds alternately, leading to a robust policy that can adapt to any teammates during the decentralized execution phase. We show in multiple multi-agent benchmarks that Fastap can achieve superior performance than multiple baselines in stationary and non-stationary scenarios.
|
1705.00290
|
A V Sreejith
|
A. Baskar, A. V. Sreejith, R. S. Thinniyam
|
Modulo quantifiers over functional vocabularies extending addition
|
There are many errors
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that first order logic (FO) and first order logic extended with
modulo counting quantifiers (FOMOD) over purely functional vocabularies which
extend addition satisfy the Crane Beach property (CBP) if the logic satisfies
a normal form (called positional normal form). This not only shows why logics
over the addition vocabulary have the CBP but also gives new CBP results, for
example for the vocabulary which extends addition with the exponentiation
function. The above results can also be viewed from the perspective of circuit
complexity. Showing the existence of regular languages not definable in
FOMOD[<, +, *] is equivalent to the separation of the circuit complexity
classes ACC0 and NC1. Our theorem shows that a weaker logic, namely,
FOMOD[<,+,2^x] cannot define all regular languages.
|
[
{
"created": "Sun, 30 Apr 2017 09:34:26 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2017 05:55:18 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Jun 2017 07:50:57 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Jun 2017 03:22:55 GMT",
"version": "v4"
},
{
"created": "Mon, 31 Dec 2018 04:49:43 GMT",
"version": "v5"
},
{
"created": "Sat, 3 Jul 2021 16:48:11 GMT",
"version": "v6"
}
] |
2021-07-06
|
[
[
"Baskar",
"A.",
""
],
[
"Sreejith",
"A. V.",
""
],
[
"Thinniyam",
"R. S.",
""
]
] |
We show that first order logic (FO) and first order logic extended with modulo counting quantifiers (FOMOD) over purely functional vocabularies which extend addition satisfy the Crane Beach property (CBP) if the logic satisfies a normal form (called positional normal form). This not only shows why logics over the addition vocabulary have the CBP but also gives new CBP results, for example for the vocabulary which extends addition with the exponentiation function. The above results can also be viewed from the perspective of circuit complexity. Showing the existence of regular languages not definable in FOMOD[<, +, *] is equivalent to the separation of the circuit complexity classes ACC0 and NC1. Our theorem shows that a weaker logic, namely, FOMOD[<,+,2^x] cannot define all regular languages.
|
2303.12384
|
Jiuming Liu
|
Jiuming Liu, Guangming Wang, Zhe Liu, Chaokang Jiang, Marc Pollefeys,
Hesheng Wang
|
RegFormer: An Efficient Projection-Aware Transformer Network for
Large-Scale Point Cloud Registration
|
Accepted by ICCV2023. Codes are released at
https://github.com/IRMVLab/RegFormer
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although point cloud registration has achieved remarkable advances in
object-level and indoor scenes, large-scale registration methods are rarely
explored. Challenges mainly arise from the huge point number, complex
distribution, and outliers of outdoor LiDAR scans. In addition, most existing
registration works generally adopt a two-stage paradigm: They first find
correspondences by extracting discriminative local features and then leverage
estimators (e.g., RANSAC) to filter outliers, which are highly dependent on
well-designed descriptors and post-processing choices. To address these
problems, we propose an end-to-end transformer network (RegFormer) for
large-scale point cloud alignment without any further post-processing.
Specifically, a projection-aware hierarchical transformer is proposed to
capture long-range dependencies and filter outliers by extracting point
features globally. Our transformer has linear complexity, which guarantees high
efficiency even for large-scale scenes. Furthermore, to effectively reduce
mismatches, a bijective association transformer is designed for regressing the
initial transformation. Extensive experiments on KITTI and NuScenes datasets
demonstrate that our RegFormer achieves competitive performance in terms of
both accuracy and efficiency.
|
[
{
"created": "Wed, 22 Mar 2023 08:47:37 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2023 07:04:04 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Aug 2023 02:39:22 GMT",
"version": "v3"
}
] |
2023-08-11
|
[
[
"Liu",
"Jiuming",
""
],
[
"Wang",
"Guangming",
""
],
[
"Liu",
"Zhe",
""
],
[
"Jiang",
"Chaokang",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Wang",
"Hesheng",
""
]
] |
Although point cloud registration has achieved remarkable advances in object-level and indoor scenes, large-scale registration methods are rarely explored. Challenges mainly arise from the huge point number, complex distribution, and outliers of outdoor LiDAR scans. In addition, most existing registration works generally adopt a two-stage paradigm: They first find correspondences by extracting discriminative local features and then leverage estimators (e.g., RANSAC) to filter outliers, which are highly dependent on well-designed descriptors and post-processing choices. To address these problems, we propose an end-to-end transformer network (RegFormer) for large-scale point cloud alignment without any further post-processing. Specifically, a projection-aware hierarchical transformer is proposed to capture long-range dependencies and filter outliers by extracting point features globally. Our transformer has linear complexity, which guarantees high efficiency even for large-scale scenes. Furthermore, to effectively reduce mismatches, a bijective association transformer is designed for regressing the initial transformation. Extensive experiments on KITTI and NuScenes datasets demonstrate that our RegFormer achieves competitive performance in terms of both accuracy and efficiency.
|
2206.08919
|
Teng Wang
|
Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo
Yin, Ping Luo
|
VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing vision-language pre-training (VLP) methods primarily rely on paired
image-text datasets, which are either annotated with enormous human labor, or
crawled from the internet followed by elaborate data cleaning techniques. To
reduce the dependency on well-aligned image-text pairs, it is promising to
directly leverage the large-scale text-only and image-only corpora. This paper
proposes a data augmentation method, namely cross-modal CutMix (CMC), for
implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC
transforms natural sentences from the textual view into a multi-modal view,
where visually-grounded words in a sentence are randomly replaced by diverse
image patches with similar semantics. There are several appealing properties
of the proposed CMC. First, it enhances the data diversity while keeping the
semantic meaning intact for tackling problems where the aligned data are
scarce; Second, by attaching cross-modal noise on uni-modal data, it guides
models to learn token-level interactions across modalities for better
denoising. Furthermore, we present a new unpaired VLP method, dubbed as
VLMixer, that integrates CMC with contrastive learning to pull together the
uni-modal and multi-modal views for better instance-level alignments among
different modalities. Extensive experiments on five downstream tasks show that
VLMixer could surpass previous state-of-the-art unpaired VLP methods.
|
[
{
"created": "Fri, 17 Jun 2022 17:56:47 GMT",
"version": "v1"
}
] |
2022-06-20
|
[
[
"Wang",
"Teng",
""
],
[
"Jiang",
"Wenhao",
""
],
[
"Lu",
"Zhichao",
""
],
[
"Zheng",
"Feng",
""
],
[
"Cheng",
"Ran",
""
],
[
"Yin",
"Chengguo",
""
],
[
"Luo",
"Ping",
""
]
] |
Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor, or crawled from the internet followed by elaborate data cleaning techniques. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage the large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences from the textual view into a multi-modal view, where visually-grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. There are several appealing properties of the proposed CMC. First, it enhances the data diversity while keeping the semantic meaning intact for tackling problems where the aligned data are scarce; Second, by attaching cross-modal noise on uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed as VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignments among different modalities. Extensive experiments on five downstream tasks show that VLMixer could surpass previous state-of-the-art unpaired VLP methods.
|
2404.03543
|
JiaWei Guo
|
Jiawei Guo, Ziming Li, Xueling Liu, Kaijing Ma, Tianyu Zheng,
Zhouliang Yu, Ding Pan, Yizhi LI, Ruibo Liu, Yue Wang, Shuyue Guo, Xingwei
Qu, Xiang Yue, Ge Zhang, Wenhu Chen, Jie Fu
|
CodeEditorBench: Evaluating Code Editing Capability of Large Language
Models
| null | null | null | null |
cs.SE cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, an evaluation
framework designed to rigorously assess the performance of LLMs in code editing
tasks, including debugging, translating, polishing, and requirement switching.
Unlike existing benchmarks focusing solely on code generation, CodeEditorBench
emphasizes real-world scenarios and practical aspects of software development.
We curate diverse coding challenges and scenarios from five sources, covering
various programming languages, complexity levels, and editing tasks. Evaluation
of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and
GPT-4), outperform open-source models in CodeEditorBench, highlighting
differences in model performance based on problem types and prompt
sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by
providing a robust platform for assessing code editing capabilities. We will
release all prompts and datasets to enable the community to expand the dataset
and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to
the advancement of LLMs in code editing and provide a valuable resource for
researchers and practitioners.
|
[
{
"created": "Thu, 4 Apr 2024 15:49:49 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Apr 2024 04:29:25 GMT",
"version": "v2"
}
] |
2024-04-09
|
[
[
"Guo",
"Jiawei",
""
],
[
"Li",
"Ziming",
""
],
[
"Liu",
"Xueling",
""
],
[
"Ma",
"Kaijing",
""
],
[
"Zheng",
"Tianyu",
""
],
[
"Yu",
"Zhouliang",
""
],
[
"Pan",
"Ding",
""
],
[
"LI",
"Yizhi",
""
],
[
"Liu",
"Ruibo",
""
],
[
"Wang",
"Yue",
""
],
[
"Guo",
"Shuyue",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Yue",
"Xiang",
""
],
[
"Zhang",
"Ge",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Fu",
"Jie",
""
]
] |
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4), outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
|
2106.05532
|
Anjana Arunkumar
|
Swaroop Mishra, Anjana Arunkumar
|
How Robust are Model Rankings: A Leaderboard Customization Approach for
Equitable Evaluation
|
AAAI 2021
| null | null | null |
cs.CL cs.AI cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Models that top leaderboards often perform unsatisfactorily when deployed in
real world applications; this has necessitated rigorous and expensive
pre-deployment model testing. A hitherto unexplored facet of model performance
is: Are our leaderboards doing equitable evaluation? In this paper, we
introduce a task-agnostic method to probe leaderboards by weighting samples
based on their `difficulty' level. We find that leaderboards can be
adversarially attacked and top performing models may not always be the best
models. We subsequently propose alternate evaluation metrics. Our experiments
on 10 models show changes in model ranking and an overall reduction in
previously reported performance -- thus rectifying the overestimation of AI
systems' capabilities. Inspired by behavioral testing principles, we further
develop a prototype of a visual analytics tool that enables leaderboard
revamping through customization, based on an end user's focus area. This helps
users analyze models' strengths and weaknesses, and guides them in the
selection of a model best suited for their application scenario. In a user
study, members of various commercial product development teams, covering 5
focus areas, find that our prototype reduces pre-deployment development and
testing effort by 41% on average.
|
[
{
"created": "Thu, 10 Jun 2021 06:47:35 GMT",
"version": "v1"
}
] |
2021-06-11
|
[
[
"Mishra",
"Swaroop",
""
],
[
"Arunkumar",
"Anjana",
""
]
] |
Models that top leaderboards often perform unsatisfactorily when deployed in real world applications; this has necessitated rigorous and expensive pre-deployment model testing. A hitherto unexplored facet of model performance is: Are our leaderboards doing equitable evaluation? In this paper, we introduce a task-agnostic method to probe leaderboards by weighting samples based on their `difficulty' level. We find that leaderboards can be adversarially attacked and top performing models may not always be the best models. We subsequently propose alternate evaluation metrics. Our experiments on 10 models show changes in model ranking and an overall reduction in previously reported performance -- thus rectifying the overestimation of AI systems' capabilities. Inspired by behavioral testing principles, we further develop a prototype of a visual analytics tool that enables leaderboard revamping through customization, based on an end user's focus area. This helps users analyze models' strengths and weaknesses, and guides them in the selection of a model best suited for their application scenario. In a user study, members of various commercial product development teams, covering 5 focus areas, find that our prototype reduces pre-deployment development and testing effort by 41% on average.
|
1609.08116
|
Justin Miller
|
Justin Miller and Jonathan P. How
|
Predictive Positioning and Quality Of Service Ridesharing for Campus
Mobility On Demand Systems
|
8 pages, 5 figures
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous Mobility On Demand (MOD) systems can utilize fleet management
strategies in order to provide a high customer quality of service (QoS).
Previous works on autonomous MOD systems have developed methods for rebalancing
single capacity vehicles, where QoS is maintained through large fleet sizing.
This work focuses on MOD systems utilizing a small number of vehicles, such as
those found on a campus, where additional vehicles cannot be introduced as
demand for rides increases. A predictive positioning method is presented for
improving customer QoS by identifying key locations to position the fleet in
order to minimize expected customer wait time. Ridesharing is introduced as a
means for improving customer QoS as arrival rates increase. However, with
ridesharing, perceived QoS is dependent on an often unknown customer preference.
To address this challenge, a customer ratings model, which learns customer
preference from a 5-star rating, is developed and incorporated directly into a
ridesharing algorithm. The predictive positioning and ridesharing methods are
applied to simulation of a real-world campus MOD system. A combined predictive
positioning and ridesharing approach is shown to reduce customer service times
by up to 29% and the customer ratings model is shown to provide the best
overall MOD fleet management performance over a range of customer preferences.
|
[
{
"created": "Mon, 26 Sep 2016 18:52:46 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Mar 2017 23:07:01 GMT",
"version": "v2"
}
] |
2017-03-08
|
[
[
"Miller",
"Justin",
""
],
[
"How",
"Jonathan P.",
""
]
] |
Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing, perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29% and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.
|
2109.10312
|
Bohan Wu
|
Bohan Wu, Suraj Nair, Li Fei-Fei, Chelsea Finn
|
Example-Driven Model-Based Reinforcement Learning for Solving
Long-Horizon Visuomotor Tasks
|
Equal advising and contribution for last two authors
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we study the problem of learning a repertoire of low-level
skills from raw images that can be sequenced to complete long-horizon
visuomotor tasks. Reinforcement learning (RL) is a promising approach for
acquiring short-horizon skills autonomously. However, the focus of RL
algorithms has largely been on the success of those individual skills, more so
than learning and grounding a large repertoire of skills that can be sequenced
to complete extended multi-stage tasks. The latter demands robustness and
persistence, as errors in skills can compound over time, and may require the
robot to have a number of primitive skills in its repertoire, rather than just
one. To this end, we introduce EMBER, a model-based RL method for learning
primitive skills that are suitable for completing long-horizon visuomotor
tasks. EMBER learns and plans using a learned model, critic, and success
classifier, where the success classifier serves both as a reward function for
RL and as a grounding mechanism to continuously detect if the robot should
retry a skill when unsuccessful or under perturbations. Further, the learned
model is task-agnostic and trained using data from all skills, enabling the
robot to efficiently learn a number of distinct primitives. These visuomotor
primitive skills and their associated pre- and post-conditions can then be
directly combined with off-the-shelf symbolic planners to complete long-horizon
tasks. On a Franka Emika robot arm, we find that EMBER enables the robot to
complete three long-horizon visuomotor tasks at 85% success rate, such as
organizing an office desk, a file cabinet, and drawers, which require
sequencing up to 12 skills, involve 14 unique learned primitives, and demand
generalization to novel objects.
|
[
{
"created": "Tue, 21 Sep 2021 16:48:07 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Sep 2022 04:20:27 GMT",
"version": "v2"
}
] |
2022-09-20
|
[
[
"Wu",
"Bohan",
""
],
[
"Nair",
"Suraj",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Finn",
"Chelsea",
""
]
] |
In this paper, we study the problem of learning a repertoire of low-level skills from raw images that can be sequenced to complete long-horizon visuomotor tasks. Reinforcement learning (RL) is a promising approach for acquiring short-horizon skills autonomously. However, the focus of RL algorithms has largely been on the success of those individual skills, more so than learning and grounding a large repertoire of skills that can be sequenced to complete extended multi-stage tasks. The latter demands robustness and persistence, as errors in skills can compound over time, and may require the robot to have a number of primitive skills in its repertoire, rather than just one. To this end, we introduce EMBER, a model-based RL method for learning primitive skills that are suitable for completing long-horizon visuomotor tasks. EMBER learns and plans using a learned model, critic, and success classifier, where the success classifier serves both as a reward function for RL and as a grounding mechanism to continuously detect if the robot should retry a skill when unsuccessful or under perturbations. Further, the learned model is task-agnostic and trained using data from all skills, enabling the robot to efficiently learn a number of distinct primitives. These visuomotor primitive skills and their associated pre- and post-conditions can then be directly combined with off-the-shelf symbolic planners to complete long-horizon tasks. On a Franka Emika robot arm, we find that EMBER enables the robot to complete three long-horizon visuomotor tasks at 85% success rate, such as organizing an office desk, a file cabinet, and drawers, which require sequencing up to 12 skills, involve 14 unique learned primitives, and demand generalization to novel objects.
|
1705.04906
|
James Cusick
|
James J. Cusick
|
Achieving and Managing Availability SLAs with ITIL Driven Processes,
DevOps, and Workflow Tools
|
8 pages, 1 Table, 2 Figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
System and application availability continues to be a fundamental
characteristic of IT services. In recent years the IT Operations team at
Wolters Kluwer CT Corporation has placed special focus on this area. Using a
combination of goals, metrics, processes, organizational models, communication
methods, corrective maintenance, root cause analysis, preventative engineering,
automated alerting, and workflow automation, significant progress has been made
in meeting availability SLAs or Service Level Agreements. This paper presents
the background of this work, approach, details of its implementation, and
results. A special focus is provided on the use of a classical ITIL view as
operationalized in an Agile and DevOps environment.
Keywords: System Availability, Software Reliability, ITIL, Workflow
Automation, Process Engineering, Production Support, Customer Support, Product
Support, Change Management, Release Management, Incident Management, Problem
Management, Organizational Design, Scrum, Agile, DevOps, Service Level
Agreements, Software Measurement, Microsoft SharePoint.
|
[
{
"created": "Sun, 14 May 2017 01:32:37 GMT",
"version": "v1"
}
] |
2017-05-16
|
[
[
"Cusick",
"James J.",
""
]
] |
System and application availability continues to be a fundamental characteristic of IT services. In recent years the IT Operations team at Wolters Kluwer CT Corporation has placed special focus on this area. Using a combination of goals, metrics, processes, organizational models, communication methods, corrective maintenance, root cause analysis, preventative engineering, automated alerting, and workflow automation, significant progress has been made in meeting availability SLAs or Service Level Agreements. This paper presents the background of this work, approach, details of its implementation, and results. A special focus is provided on the use of a classical ITIL view as operationalized in an Agile and DevOps environment. Keywords: System Availability, Software Reliability, ITIL, Workflow Automation, Process Engineering, Production Support, Customer Support, Product Support, Change Management, Release Management, Incident Management, Problem Management, Organizational Design, Scrum, Agile, DevOps, Service Level Agreements, Software Measurement, Microsoft SharePoint.
|
1003.1319
|
Zolt\'an K\'asa
|
Shariefuddin Pirzada, Guofei Zhou
|
On k-hypertournament losing scores
| null |
Acta Univ. Sapientiae, Informatica 2,1 (2010) 5-9
| null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a new and short proof of a theorem on k-hypertournament losing scores
due to Zhou et al. [G. Zhou, T. Yao, K. Zhang, On score sequences of
k-tournaments, European J. Comb., 21, 8 (2000) 993-1000.]
|
[
{
"created": "Fri, 5 Mar 2010 19:58:38 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2010 13:06:44 GMT",
"version": "v2"
}
] |
2010-03-13
|
[
[
"Pirzada",
"Shariefuddin",
""
],
[
"Zhou",
"Guofei",
""
]
] |
We give a new and short proof of a theorem on k-hypertournament losing scores due to Zhou et al. [G. Zhou, T. Yao, K. Zhang, On score sequences of k-tournaments, European J. Comb., 21, 8 (2000) 993-1000.]
|
2108.10986
|
Melika Golestani
|
Melika Golestani, Seyedeh Zahra Razavi, Zeinab Borhanifard, Farnaz
Tahmasebian, and Hesham Faili
|
Using BERT Encoding and Sentence-Level Language Model for Sentence
Ordering
|
12 pages, 2 figures, The 24th International Conference of Text,
Speech and Dialogue (TSD2021)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Discovering the logical sequence of events is one of the cornerstones in
Natural Language Understanding. One approach to learn the sequence of events is
to study the order of sentences in a coherent text. Sentence ordering can be
applied in various tasks such as retrieval-based Question Answering, document
summarization, storytelling, text generation, and dialogue systems.
Furthermore, we can learn to model text coherence by learning how to order a
set of shuffled sentences. Previous research has relied on RNN, LSTM, and
BiLSTM architecture for learning text language models. However, these networks
have performed poorly due to the lack of attention mechanisms. We propose an
algorithm for sentence ordering in a corpus of short stories. Our proposed
method uses a language model based on Universal Transformers (UT) that captures
sentences' dependencies by employing an attention mechanism. Our method
improves the previous state-of-the-art in terms of Perfect Match Ratio (PMR)
score in the ROCStories dataset, a corpus of nearly 100K short human-made
stories. The proposed model includes three components: Sentence Encoder,
Language Model, and Sentence Arrangement with Brute Force Search. The first
component generates sentence embeddings using SBERT-WK pre-trained model
fine-tuned on the ROCStories data. Then a Universal Transformer network
generates a sentence-level language model. For decoding, the network generates
a candidate sentence as the following sentence of the current sentence. We use
cosine similarity as a scoring function to assign scores to the candidate
embedding and the embeddings of other sentences in the shuffled set. Then a
Brute Force Search is employed to maximize the sum of similarities between
pairs of consecutive sentences.
|
[
{
"created": "Tue, 24 Aug 2021 23:03:36 GMT",
"version": "v1"
}
] |
2021-08-26
|
[
[
"Golestani",
"Melika",
""
],
[
"Razavi",
"Seyedeh Zahra",
""
],
[
"Borhanifard",
"Zeinab",
""
],
[
"Tahmasebian",
"Farnaz",
""
],
[
"Faili",
"Hesham",
""
]
] |
Discovering the logical sequence of events is one of the cornerstones in Natural Language Understanding. One approach to learn the sequence of events is to study the order of sentences in a coherent text. Sentence ordering can be applied in various tasks such as retrieval-based Question Answering, document summarization, storytelling, text generation, and dialogue systems. Furthermore, we can learn to model text coherence by learning how to order a set of shuffled sentences. Previous research has relied on RNN, LSTM, and BiLSTM architecture for learning text language models. However, these networks have performed poorly due to the lack of attention mechanisms. We propose an algorithm for sentence ordering in a corpus of short stories. Our proposed method uses a language model based on Universal Transformers (UT) that captures sentences' dependencies by employing an attention mechanism. Our method improves the previous state-of-the-art in terms of Perfect Match Ratio (PMR) score in the ROCStories dataset, a corpus of nearly 100K short human-made stories. The proposed model includes three components: Sentence Encoder, Language Model, and Sentence Arrangement with Brute Force Search. The first component generates sentence embeddings using SBERT-WK pre-trained model fine-tuned on the ROCStories data. Then a Universal Transformer network generates a sentence-level language model. For decoding, the network generates a candidate sentence as the following sentence of the current sentence. We use cosine similarity as a scoring function to assign scores to the candidate embedding and the embeddings of other sentences in the shuffled set. Then a Brute Force Search is employed to maximize the sum of similarities between pairs of consecutive sentences.
|
2310.10902
|
Yue Niu
|
Yue Niu, Rajgopal Kannan, Ajitesh Srivastava, Viktor Prasanna
|
Reuse Kernels or Activations? A Flexible Dataflow for Low-latency
Spectral CNN Acceleration
|
11 pages, 11 figures Accepted to ACM/SIGDA International Symposium on
Field-Programmable Gate Arrays (FPGA) 2020
| null | null | null |
cs.AR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spectral-domain CNNs have been shown to be more efficient than traditional
spatial CNNs in terms of reducing computation complexity. However, they come
with a `kernel explosion' problem that, even after compression (pruning),
imposes a high memory burden and off-chip bandwidth requirement for kernel
access. This creates a performance gap between the potential acceleration
offered by compression and actual FPGA implementation performance, especially
for low-latency CNN inference. In this paper, we develop a principled approach
to overcoming this performance gap and designing a low-latency, low-bandwidth,
spectral sparse CNN accelerator on FPGAs. First, we analyze the
bandwidth-storage tradeoff of sparse convolutional layers and locate
communication bottlenecks. We then develop a dataflow for flexibly optimizing
data reuse in different layers to minimize off-chip communication. Finally, we
propose a novel scheduling algorithm to optimally schedule the on-chip memory
access of multiple sparse kernels and minimize read conflicts. On a
state-of-the-art FPGA platform, our design reduces data transfers by 42\% with
DSP utilization up to 90\% and achieves inference latency of 9 ms for VGG16,
compared to the baseline state-of-the-art latency of 68 ms.
|
[
{
"created": "Tue, 17 Oct 2023 00:21:07 GMT",
"version": "v1"
}
] |
2023-10-18
|
[
[
"Niu",
"Yue",
""
],
[
"Kannan",
"Rajgopal",
""
],
[
"Srivastava",
"Ajitesh",
""
],
[
"Prasanna",
"Viktor",
""
]
] |
Spectral-domain CNNs have been shown to be more efficient than traditional spatial CNNs in terms of reducing computation complexity. However, they come with a `kernel explosion' problem that, even after compression (pruning), imposes a high memory burden and off-chip bandwidth requirement for kernel access. This creates a performance gap between the potential acceleration offered by compression and actual FPGA implementation performance, especially for low-latency CNN inference. In this paper, we develop a principled approach to overcoming this performance gap and designing a low-latency, low-bandwidth, spectral sparse CNN accelerator on FPGAs. First, we analyze the bandwidth-storage tradeoff of sparse convolutional layers and locate communication bottlenecks. We then develop a dataflow for flexibly optimizing data reuse in different layers to minimize off-chip communication. Finally, we propose a novel scheduling algorithm to optimally schedule the on-chip memory access of multiple sparse kernels and minimize read conflicts. On a state-of-the-art FPGA platform, our design reduces data transfers by 42\% with DSP utilization up to 90\% and achieves inference latency of 9 ms for VGG16, compared to the baseline state-of-the-art latency of 68 ms.
|
1902.00208
|
Matias Bender
|
Mat\'ias Bender (PolSys), Jean-Charles Faug\`ere (PolSys), Elias
Tsigaridas (PolSys)
|
Gr{\"o}bner Basis over Semigroup Algebras: Algorithms and Applications
for Sparse Polynomial Systems
| null | null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gr{\"o}bner bases is one the most powerful tools in algorithmic non-linear
algebra. Their computation is an intrinsically hard problem with a complexity
at least single exponential in the number of variables. However, in most of the
cases, the polynomial systems coming from applications have some kind of
structure. For example , several problems in computer-aided design, robotics,
vision, biology , kinematics, cryptography, and optimization involve sparse
systems where the input polynomials have a few non-zero terms. Our approach to
exploit sparsity is to embed the systems in a semigroup algebra and to compute
Gr{\"o}bner bases over this algebra. Up to now, the algorithms that follow this
approach benefit from the sparsity only in the case where all the polynomials
have the same sparsity structure, that is the same Newton polytope. We
introduce the first algorithm that overcomes this restriction. Under regularity
assumptions, it performs no redundant computations. Further, we extend this
algorithm to compute Gr{\"o}bner basis in the standard algebra and solve sparse
polynomials systems over the torus $(C*)^n$. The complexity of the algorithm
depends on the Newton polytopes.
|
[
{
"created": "Fri, 1 Feb 2019 07:42:59 GMT",
"version": "v1"
}
] |
2019-02-04
|
[
[
"Bender",
"Matías",
"",
"PolSys"
],
[
"Faugère",
"Jean-Charles",
"",
"PolSys"
],
[
"Tsigaridas",
"Elias",
"",
"PolSys"
]
] |
Gr{\"o}bner bases is one the most powerful tools in algorithmic non-linear algebra. Their computation is an intrinsically hard problem with a complexity at least single exponential in the number of variables. However, in most of the cases, the polynomial systems coming from applications have some kind of structure. For example , several problems in computer-aided design, robotics, vision, biology , kinematics, cryptography, and optimization involve sparse systems where the input polynomials have a few non-zero terms. Our approach to exploit sparsity is to embed the systems in a semigroup algebra and to compute Gr{\"o}bner bases over this algebra. Up to now, the algorithms that follow this approach benefit from the sparsity only in the case where all the polynomials have the same sparsity structure, that is the same Newton polytope. We introduce the first algorithm that overcomes this restriction. Under regularity assumptions, it performs no redundant computations. Further, we extend this algorithm to compute Gr{\"o}bner basis in the standard algebra and solve sparse polynomials systems over the torus $(C*)^n$. The complexity of the algorithm depends on the Newton polytopes.
|
2202.09163
|
Claudia Schon
|
Claudia Schon
|
Selection Strategies for Commonsense Knowledge
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selection strategies are broadly used in first-order logic theorem proving to
select those parts of a large knowledge base that are necessary to prove a
theorem at hand. Usually, these selection strategies do not take the meaning of
symbol names into account. In knowledge bases with commonsense knowledge,
symbol names are usually chosen to have a meaning and this meaning provides
valuable information for selection strategies. We introduce the vector-based
selection strategy, a purely statistical selection technique for commonsense
knowledge based on word embeddings. We compare different commonsense knowledge
selection techniques for the purpose of theorem proving and demonstrate the
usefulness of vector-based selection with a case study.
|
[
{
"created": "Fri, 18 Feb 2022 12:28:09 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Feb 2022 06:39:05 GMT",
"version": "v2"
}
] |
2022-02-22
|
[
[
"Schon",
"Claudia",
""
]
] |
Selection strategies are broadly used in first-order logic theorem proving to select those parts of a large knowledge base that are necessary to prove a theorem at hand. Usually, these selection strategies do not take the meaning of symbol names into account. In knowledge bases with commonsense knowledge, symbol names are usually chosen to have a meaning and this meaning provides valuable information for selection strategies. We introduce the vector-based selection strategy, a purely statistical selection technique for commonsense knowledge based on word embeddings. We compare different commonsense knowledge selection techniques for the purpose of theorem proving and demonstrate the usefulness of vector-based selection with a case study.
|
2405.15240
|
Peng Kuang
|
Zhibo Wang, Peng Kuang, Zhixuan Chu, Jingyi Wang, Kui Ren
|
Towards Real World Debiasing: A Fine-grained Analysis On Spurious
Correlation
|
9 pages of main paper, 10 pages of appendix
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Spurious correlations in training data significantly hinder the
generalization capability of machine learning models when faced with
distribution shifts in real-world scenarios. To tackle the problem, numerous
debias approaches have been proposed and benchmarked on datasets intentionally
designed with severe biases. However, it remains to be asked: \textit{1. Do
existing benchmarks really capture biases in the real world? 2. Can existing
debias methods handle biases in the real world?} To answer the questions, we
revisit biased distributions in existing benchmarks and real-world datasets,
and propose a fine-grained framework for analyzing dataset bias by
disentangling it into the magnitude and prevalence of bias. We observe and
theoretically demonstrate that existing benchmarks poorly represent real-world
biases. We further introduce two novel biased distributions to bridge this gap,
forming a nuanced evaluation framework for real-world debiasing. Building upon
these results, we evaluate existing debias methods with our evaluation
framework. Results show that existing methods are incapable of handling
real-world biases. Through in-depth analysis, we propose a simple yet effective
approach that can be easily applied to existing debias methods, named Debias in
Destruction (DiD). Empirical results demonstrate the superiority of DiD,
improving the performance of existing methods on all types of biases within the
proposed evaluation framework.
|
[
{
"created": "Fri, 24 May 2024 06:06:41 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 12:14:05 GMT",
"version": "v2"
}
] |
2024-05-31
|
[
[
"Wang",
"Zhibo",
""
],
[
"Kuang",
"Peng",
""
],
[
"Chu",
"Zhixuan",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Ren",
"Kui",
""
]
] |
Spurious correlations in training data significantly hinder the generalization capability of machine learning models when faced with distribution shifts in real-world scenarios. To tackle the problem, numerous debias approaches have been proposed and benchmarked on datasets intentionally designed with severe biases. However, it remains to be asked: \textit{1. Do existing benchmarks really capture biases in the real world? 2. Can existing debias methods handle biases in the real world?} To answer the questions, we revisit biased distributions in existing benchmarks and real-world datasets, and propose a fine-grained framework for analyzing dataset bias by disentangling it into the magnitude and prevalence of bias. We observe and theoretically demonstrate that existing benchmarks poorly represent real-world biases. We further introduce two novel biased distributions to bridge this gap, forming a nuanced evaluation framework for real-world debiasing. Building upon these results, we evaluate existing debias methods with our evaluation framework. Results show that existing methods are incapable of handling real-world biases. Through in-depth analysis, we propose a simple yet effective approach that can be easily applied to existing debias methods, named Debias in Destruction (DiD). Empirical results demonstrate the superiority of DiD, improving the performance of existing methods on all types of biases within the proposed evaluation framework.
|
2301.01494
|
Takaaki Fukai
|
Takaaki Fukai (1), Kento Sato (2), Takahiro Hirofuchi (1) ((1)
National Institute of Advanced Industrial Science and Technology, Tokyo,
Japan, (2) RIKEN Center for Computational Science, Kobe, Japan)
|
Analyzing I/O Performance of a Hierarchical HPC Storage System for
Distributed Deep Learning
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, deep learning is an essential technology in our lives. To solve more
complex problems with deep learning, both the sizes of training datasets and
neural networks are increasing. To train a model with large datasets and
networks, distributed deep neural network (DDNN) training techniques are
necessary. For
large-scale DDNN training, HPC clusters are a promising computation
environment. In large-scale DDNN on HPC clusters, I/O performance is critical
because it is becoming a bottleneck. Most flagship-class HPC clusters have
hierarchical storage systems. For designing future HPC storage systems, it is
necessary to quantify the performance improvement effect of the hierarchical
storage system on the workloads. This paper demonstrates the quantitative
performance analysis of the hierarchical storage system for a DDNN workload in a
flagship-class supercomputer. Our analysis shows how much performance
improvement and volume increment of the storage will be required to meet the
performance goal.
|
[
{
"created": "Wed, 4 Jan 2023 08:58:30 GMT",
"version": "v1"
}
] |
2023-01-05
|
[
[
"Fukai",
"Takaaki",
""
],
[
"Sato",
"Kento",
""
],
[
"Hirofuchi",
"Takahiro",
""
]
] |
Today, deep learning is an essential technology in our lives. To solve more complex problems with deep learning, both the sizes of training datasets and neural networks are increasing. To train a model with large datasets and networks, distributed deep neural network (DDNN) training techniques are necessary. For large-scale DDNN training, HPC clusters are a promising computation environment. In large-scale DDNN on HPC clusters, I/O performance is critical because it is becoming a bottleneck. Most flagship-class HPC clusters have hierarchical storage systems. For designing future HPC storage systems, it is necessary to quantify the performance improvement effect of the hierarchical storage system on the workloads. This paper demonstrates the quantitative performance analysis of the hierarchical storage system for a DDNN workload in a flagship-class supercomputer. Our analysis shows how much performance improvement and volume increment of the storage will be required to meet the performance goal.
|
2302.05528
|
Ni Wang
|
Ni Wang, Gautham P. Das, Alan G. Millard
|
Learning cooperative behaviours in adversarial multi-agent systems
|
23rd Annual Conference, Towards Autonomous Robotic Systems 2022
|
Lecture Notes in Computer Science(), vol 13546. Springer, Cham.
2022
|
10.1007/978-3-031-15908-4_15
| null |
cs.AI cs.MA cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This work extends an existing virtual multi-agent platform called RoboSumo to
create TripleSumo -- a platform for investigating multi-agent cooperative
behaviors in continuous action spaces, with physical contact in an adversarial
environment. In this paper we investigate a scenario in which two agents,
namely `Bug' and `Ant', must team up and push another agent `Spider' out of the
arena. To tackle this goal, the newly added agent `Bug' is trained during an
ongoing match between `Ant' and `Spider'. `Bug' must develop awareness of the
other agents' actions, infer the strategy of both sides, and eventually learn
an action policy to cooperate. The reinforcement learning algorithm Deep
Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward
structure combining dense and sparse rewards. The cooperative behavior is
quantitatively evaluated by the mean probability of winning the match and mean
number of steps needed to win.
|
[
{
"created": "Fri, 10 Feb 2023 22:12:29 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Wang",
"Ni",
""
],
[
"Das",
"Gautham P.",
""
],
[
"Millard",
"Alan G.",
""
]
] |
This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo -- a platform for investigating multi-agent cooperative behaviors in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely `Bug' and `Ant', must team up and push another agent `Spider' out of the arena. To tackle this goal, the newly added agent `Bug' is trained during an ongoing match between `Ant' and `Spider'. `Bug' must develop awareness of the other agents' actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behavior is quantitatively evaluated by the mean probability of winning the match and mean number of steps needed to win.
|
1901.10909
|
Zhiling Long
|
Zhiling Long, Yazeed Alaudah, Muhammad Ali Qureshi, Motaz Al Farraj,
Zhen Wang, Asjad Amin, Mohamed Deriche, and Ghassan AlRegib
|
Characterization of migrated seismic volumes using texture attributes: a
comparative study
| null |
Proceedings of the SEG 85th Annual Meeting, New Orleans, LA, Oct.
2015
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we examine several typical texture attributes developed in the
image processing community in recent years with respect to their capability of
characterizing a migrated seismic volume. These attributes are generated in
either the frequency or the spatial domain, including steerable pyramid, curvelet, local
binary pattern, and local radius index. The comparative study is performed
within an image retrieval framework. We evaluate these attributes in terms of
retrieval accuracy. It is our hope that this comparative study will help
acquaint the seismic interpretation community with the many available powerful
image texture analysis techniques, providing more alternative attributes for
their seismic exploration.
|
[
{
"created": "Wed, 30 Jan 2019 15:42:19 GMT",
"version": "v1"
}
] |
2019-01-31
|
[
[
"Long",
"Zhiling",
""
],
[
"Alaudah",
"Yazeed",
""
],
[
"Qureshi",
"Muhammad Ali",
""
],
[
"Farraj",
"Motaz Al",
""
],
[
"Wang",
"Zhen",
""
],
[
"Amin",
"Asjad",
""
],
[
"Deriche",
"Mohamed",
""
],
[
"AlRegib",
"Ghassan",
""
]
] |
In this paper, we examine several typical texture attributes developed in the image processing community in recent years with respect to their capability of characterizing a migrated seismic volume. These attributes are generated in either the frequency or the spatial domain, including steerable pyramid, curvelet, local binary pattern, and local radius index. The comparative study is performed within an image retrieval framework. We evaluate these attributes in terms of retrieval accuracy. It is our hope that this comparative study will help acquaint the seismic interpretation community with the many available powerful image texture analysis techniques, providing more alternative attributes for their seismic exploration.
|
2205.08329
|
Sandra Lagen
|
Sandra Lag\'en, Xavier Gelabert, Andreas Hansson, Manuel Requena,
Lorenza Giupponi
|
Fronthaul Compression Control for shared Fronthaul Access Networks
|
paper to appear in IEEE Communications Magazine
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a widely held belief that future Radio Access Network (RAN)
architectures will be characterized by increased levels of virtualization,
whereby base station functionalities, traditionally residing at a single
location, will be scattered across different logical entities while being
interfaced via high-speed fronthaul (FH) links. For the deployment of such FH
links, operators are faced with the challenge of maintaining acceptable radio
access performance while at the same time keeping deployment costs low. A
common practice is to exploit statistical multiplexing by allowing several
cells to utilize the same FH link. As a result, in order to cope with the
resulting aggregated traffic, different techniques can be used to reduce the
required FH data rates. Herein, we focus on FH compression control strategies
for multiple-cell/multiple-user scenarios sharing a common FH link. We propose
various methods for sounding reference signal (SRS) handling and analyze
different FH-aware modulation data compression and scheduling strategies.
Considering a full system setup, including the radio and FH access networks,
numerical evaluation is conducted using a 5G NR system-level simulator
implemented in ns-3. Simulation results show that, under stringent FH capacity
constraints, optimized modulation compression strategies provide significant
user-perceived throughput gains over baseline strategies (between 5.2x and
6.9x). On top of them, SRS handling methods achieve additional 2% to 41% gains.
|
[
{
"created": "Tue, 17 May 2022 13:23:21 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Lagén",
"Sandra",
""
],
[
"Gelabert",
"Xavier",
""
],
[
"Hansson",
"Andreas",
""
],
[
"Requena",
"Manuel",
""
],
[
"Giupponi",
"Lorenza",
""
]
] |
There is a widely held belief that future Radio Access Network (RAN) architectures will be characterized by increased levels of virtualization, whereby base station functionalities, traditionally residing at a single location, will be scattered across different logical entities while being interfaced via high-speed fronthaul (FH) links. For the deployment of such FH links, operators are faced with the challenge of maintaining acceptable radio access performance while at the same time keeping deployment costs low. A common practice is to exploit statistical multiplexing by allowing several cells to utilize the same FH link. As a result, in order to cope with the resulting aggregated traffic, different techniques can be used to reduce the required FH data rates. Herein, we focus on FH compression control strategies for multiple-cell/multiple-user scenarios sharing a common FH link. We propose various methods for sounding reference signal (SRS) handling and analyze different FH-aware modulation data compression and scheduling strategies. Considering a full system setup, including the radio and FH access networks, numerical evaluation is conducted using a 5G NR system-level simulator implemented in ns-3. Simulation results show that, under stringent FH capacity constraints, optimized modulation compression strategies provide significant user-perceived throughput gains over baseline strategies (between 5.2x and 6.9x). On top of them, SRS handling methods achieve additional 2% to 41% gains.
|
2303.11577
|
Wenqian Chen
|
Wenqian Chen, Panos Stinis
|
Feature-adjacent multi-fidelity physics-informed machine learning for
partial differential equations
|
12 figures
| null | null |
PNNL-SA-182880
|
cs.LG cs.NA math.NA physics.comp-ph physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physics-informed neural networks have emerged as an alternative method for
solving partial differential equations. However, for complex problems, the
training of such networks can still require high-fidelity data which can be
expensive to generate. To reduce or even eliminate the dependency on
high-fidelity data, we propose a novel multi-fidelity architecture which is
based on a feature space shared by the low- and high-fidelity solutions. In the
feature space, the projections of the low-fidelity and high-fidelity solutions
are adjacent by constraining their relative distance. The feature space is
represented with an encoder and its mapping to the original solution space is
effected through a decoder. The proposed multi-fidelity approach is validated
on forward and inverse problems for steady and unsteady problems described by
partial differential equations.
|
[
{
"created": "Tue, 21 Mar 2023 03:51:15 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2023 06:07:28 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Mar 2023 05:29:33 GMT",
"version": "v3"
}
] |
2023-03-28
|
[
[
"Chen",
"Wenqian",
""
],
[
"Stinis",
"Panos",
""
]
] |
Physics-informed neural networks have emerged as an alternative method for solving partial differential equations. However, for complex problems, the training of such networks can still require high-fidelity data which can be expensive to generate. To reduce or even eliminate the dependency on high-fidelity data, we propose a novel multi-fidelity architecture which is based on a feature space shared by the low- and high-fidelity solutions. In the feature space, the projections of the low-fidelity and high-fidelity solutions are adjacent by constraining their relative distance. The feature space is represented with an encoder and its mapping to the original solution space is effected through a decoder. The proposed multi-fidelity approach is validated on forward and inverse problems for steady and unsteady problems described by partial differential equations.
|
1910.05874
|
Yeonjong Shin
|
Yeonjong Shin
|
Effects of Depth, Width, and Initialization: A Convergence Analysis of
Layer-wise Training for Deep Linear Neural Networks
| null | null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep neural networks have been used in various machine learning applications
and achieved tremendous empirical successes. However, training deep neural
networks is a challenging task. Many alternatives have been proposed in place
of end-to-end back-propagation. Layer-wise training is one of them, which
trains a single layer at a time, rather than trains the whole layers
simultaneously. In this paper, we study a layer-wise training using a block
coordinate gradient descent (BCGD) for deep linear networks. We establish a
general convergence analysis of BCGD and find the optimal learning rate, which
results in the fastest decrease in the loss. More importantly, the optimal
learning rate can directly be applied in practice, as it does not require any
prior knowledge. Thus, tuning the learning rate is not needed at all. Also, we
identify the effects of depth, width, and initialization in the training
process. We show that when the orthogonal-like initialization is employed, the
width of intermediate layers plays no role in gradient-based training, as long
as the width is greater than or equal to both the input and output dimensions.
We show that under some conditions, the deeper the network is, the faster the
convergence is guaranteed. This implies that in an extreme case, the global
optimum is achieved after updating each weight matrix only once. Besides, we
found that the use of deep networks could drastically accelerate convergence
compared to that of a depth-1 network, even when the computational
cost is considered. Numerical examples are provided to justify our theoretical
findings and demonstrate the performance of layer-wise training by BCGD.
|
[
{
"created": "Mon, 14 Oct 2019 00:50:55 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Sep 2020 21:21:37 GMT",
"version": "v2"
}
] |
2020-09-09
|
[
[
"Shin",
"Yeonjong",
""
]
] |
Deep neural networks have been used in various machine learning applications and achieved tremendous empirical successes. However, training deep neural networks is a challenging task. Many alternatives have been proposed in place of end-to-end back-propagation. Layer-wise training is one of them, which trains a single layer at a time, rather than trains the whole layers simultaneously. In this paper, we study a layer-wise training using a block coordinate gradient descent (BCGD) for deep linear networks. We establish a general convergence analysis of BCGD and find the optimal learning rate, which results in the fastest decrease in the loss. More importantly, the optimal learning rate can directly be applied in practice, as it does not require any prior knowledge. Thus, tuning the learning rate is not needed at all. Also, we identify the effects of depth, width, and initialization in the training process. We show that when the orthogonal-like initialization is employed, the width of intermediate layers plays no role in gradient-based training, as long as the width is greater than or equal to both the input and output dimensions. We show that under some conditions, the deeper the network is, the faster the convergence is guaranteed. This implies that in an extreme case, the global optimum is achieved after updating each weight matrix only once. Besides, we found that the use of deep networks could drastically accelerate convergence compared to that of a depth-1 network, even when the computational cost is considered. Numerical examples are provided to justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD.
|
2302.13754
|
Katharina Ensinger
|
Katharina Ensinger, Sebastian Ziesche, Barbara Rakitsch, Michael
Tiemann, Sebastian Trimpe
|
Combining Slow and Fast: Complementary Filtering for Dynamics Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling an unknown dynamical system is crucial in order to predict the
future behavior of the system. A standard approach is training recurrent models
on measurement data. While these models typically provide exact short-term
predictions, accumulating errors yield deteriorated long-term behavior. In
contrast, models with reliable long-term predictions can often be obtained,
either by training a robust but less detailed model, or by leveraging
physics-based simulations. In both cases, inaccuracies in the models yield a
lack of short-time details. Thus, different models with contrastive properties
on different time horizons are available. This observation immediately raises
the question: Can we obtain predictions that combine the best of both worlds?
Inspired by sensor fusion tasks, we interpret the problem in the frequency
domain and leverage classical methods from signal processing, in particular
complementary filters. This filtering technique combines two signals by
applying a high-pass filter to one signal, and low-pass filtering the other.
Essentially, the high-pass filter extracts high frequencies, whereas the
low-pass filter extracts low frequencies. Applying this concept to dynamics
model learning enables the construction of models that yield accurate long- and
short-term predictions. Here, we propose two methods, one being purely
learning-based and the other one being a hybrid model that requires an
additional physics-based simulator.
|
[
{
"created": "Mon, 27 Feb 2023 13:32:47 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Mar 2023 14:29:48 GMT",
"version": "v2"
}
] |
2023-03-02
|
[
[
"Ensinger",
"Katharina",
""
],
[
"Ziesche",
"Sebastian",
""
],
[
"Rakitsch",
"Barbara",
""
],
[
"Tiemann",
"Michael",
""
],
[
"Trimpe",
"Sebastian",
""
]
] |
Modeling an unknown dynamical system is crucial in order to predict the future behavior of the system. A standard approach is training recurrent models on measurement data. While these models typically provide exact short-term predictions, accumulating errors yield deteriorated long-term behavior. In contrast, models with reliable long-term predictions can often be obtained, either by training a robust but less detailed model, or by leveraging physics-based simulations. In both cases, inaccuracies in the models yield a lack of short-time details. Thus, different models with contrastive properties on different time horizons are available. This observation immediately raises the question: Can we obtain predictions that combine the best of both worlds? Inspired by sensor fusion tasks, we interpret the problem in the frequency domain and leverage classical methods from signal processing, in particular complementary filters. This filtering technique combines two signals by applying a high-pass filter to one signal, and low-pass filtering the other. Essentially, the high-pass filter extracts high frequencies, whereas the low-pass filter extracts low frequencies. Applying this concept to dynamics model learning enables the construction of models that yield accurate long- and short-term predictions. Here, we propose two methods, one being purely learning-based and the other one being a hybrid model that requires an additional physics-based simulator.
|
2004.00893
|
Jianxiong Guo
|
Jianxiong Guo, Weili Wu
|
A k-hop Collaborate Game Model: Extended to Community Budgets and
Adaptive Non-Submodularity
| null |
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022
|
10.1109/TSMC.2021.3129276
| null |
cs.SI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Revenue maximization (RM) is one of the most important problems on online
social networks (OSNs), which attempts to find a small subset of users in OSNs
that maximizes the expected revenue. It has been researched intensively
before. However, most of the existing literature was based on non-adaptive
seeding strategies and on simple information diffusion models, such as the
IC/LT-model, and considered a single influenced user as the measurement unit
to quantify the revenue. This changed when the Collaborate Game model
appeared, which considers an activity as the basic object for computing the
revenue. An activity initiated by a user can only influence those users whose
distance is within k hops from the initiator. Based on that, we adopt an
adaptive seeding strategy and formulate the Revenue Maximization under the
Size Budget (RMSB) problem. Taking into account the product's promotion, we
extend RMSB to the Revenue Maximization under the Community Budget (RMCB)
problem, where the influence can be distributed over the whole network. The
objective function of RMSB and RMCB is adaptive monotone and not adaptive
submodular, but in some special cases, it is adaptive submodular. We study the
RMSB and RMCB problems under both the special submodular cases and the general
non-submodular cases, and propose RMSBSolver and RMCBSolver to solve them with
strong theoretical guarantees, respectively. In particular, we give a
data-dependent approximation ratio for the RMSB problem under the general
non-submodular cases. Finally, we evaluate our proposed algorithms by
conducting experiments on real datasets, and show the effectiveness and
accuracy of our solutions.
|
[
{
"created": "Thu, 2 Apr 2020 09:20:22 GMT",
"version": "v1"
}
] |
2022-06-08
|
[
[
"Guo",
"Jianxiong",
""
],
[
"Wu",
"Weili",
""
]
] |
Revenue maximization (RM) is one of the most important problems on online social networks (OSNs), which attempts to find a small subset of users in OSNs that makes the expected revenue maximized. It has been researched intensively before. However, most of the existing literature was based on a non-adaptive seeding strategy and on simple information diffusion models, such as the IC/LT-model, and considered the single influenced user as the measurement unit to quantify the revenue. This was the case until the Collaborate Game model appeared, which considers an activity as the basic object for computing the revenue. An activity initiated by a user can only influence those users whose distance is within k hops from the initiator. Based on that, we adopt an adaptive seeding strategy and formulate the Revenue Maximization under the Size Budget (RMSB) problem. Taking the product's promotion into account, we extend RMSB to the Revenue Maximization under the Community Budget (RMCB) problem, where the influence can be distributed over the whole network. The objective function of RMSB and RMCB is adaptive monotone and not adaptive submodular, but in some special cases, it is adaptive submodular. We study the RMSB and RMCB problems under both the special submodular cases and the general non-submodular cases, and propose RMSBSolver and RMCBSolver to solve them with strong theoretical guarantees, respectively. In particular, we give a data-dependent approximation ratio for the RMSB problem under the general non-submodular cases. Finally, we evaluate our proposed algorithms by conducting experiments on real datasets, and show the effectiveness and accuracy of our solutions.
|
2209.09298
|
Yunwen Lei
|
Yunwen Lei, Rong Jin, Yiming Ying
|
Stability and Generalization Analysis of Gradient Methods for Shallow
Neural Networks
|
to appear in Neural Information Processing Systems (NeurIPS 2022)
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
While significant theoretical progress has been achieved, unveiling the
generalization mystery of overparameterized neural networks still remains
largely elusive. In this paper, we study the generalization behavior of shallow
neural networks (SNNs) by leveraging the concept of algorithmic stability. We
consider gradient descent (GD) and stochastic gradient descent (SGD) to train
SNNs, for both of which we develop consistent excess risk bounds by balancing
the optimization and generalization via early-stopping. As compared to existing
analysis on GD, our new analysis requires a relaxed overparameterization
assumption and also applies to SGD. The key for the improvement is a better
estimation of the smallest eigenvalues of the Hessian matrices of the empirical
risks and the loss function along the trajectories of GD and SGD by providing a
refined estimation of their iterates.
|
[
{
"created": "Mon, 19 Sep 2022 18:48:00 GMT",
"version": "v1"
}
] |
2022-09-21
|
[
[
"Lei",
"Yunwen",
""
],
[
"Jin",
"Rong",
""
],
[
"Ying",
"Yiming",
""
]
] |
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds by balancing the optimization and generalization via early-stopping. As compared to existing analysis on GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key for the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD by providing a refined estimation of their iterates.
|
1909.12415
|
Jinyu Li
|
Jinyu Li, Rui Zhao, Hu Hu, and Yifan Gong
|
Improving RNN Transducer Modeling for End-to-End Speech Recognition
|
Accepted by IEEE ASRU workshop, 2019
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last few years, an emerging trend in automatic speech recognition
research is the study of end-to-end (E2E) systems. Connectionist Temporal
Classification (CTC), Attention Encoder-Decoder (AED), and RNN Transducer
(RNN-T) are the three most popular methods. Among these three methods, RNN-T
has the advantage of supporting online streaming, which is challenging for
AED, and it does not make CTC's frame-independence assumption. In this paper,
we improve RNN-T training in two aspects. First, we optimize the training
algorithm of RNN-T to reduce memory consumption so that we can use larger
training minibatches for faster training. Second, we propose better model
structures that yield RNN-T models with very good accuracy but a small
footprint. Trained with 30 thousand hours of anonymized and transcribed
Microsoft production data, the best RNN-T model, with an even smaller model
size (216 Megabytes), achieves up to 11.8% relative word error rate (WER)
reduction from the baseline RNN-T model. This best RNN-T model is
significantly better than the device hybrid model of similar size, achieving
up to 15.0% relative WER reduction, and obtains similar WERs as the server
hybrid model of 5120 Megabytes in size.
|
[
{
"created": "Thu, 26 Sep 2019 22:09:09 GMT",
"version": "v1"
}
] |
2019-09-30
|
[
[
"Li",
"Jinyu",
""
],
[
"Zhao",
"Rui",
""
],
[
"Hu",
"Hu",
""
],
[
"Gong",
"Yifan",
""
]
] |
In the last few years, an emerging trend in automatic speech recognition research is the study of end-to-end (E2E) systems. Connectionist Temporal Classification (CTC), Attention Encoder-Decoder (AED), and RNN Transducer (RNN-T) are the three most popular methods. Among these three methods, RNN-T has the advantage of supporting online streaming, which is challenging for AED, and it does not make CTC's frame-independence assumption. In this paper, we improve RNN-T training in two aspects. First, we optimize the training algorithm of RNN-T to reduce memory consumption so that we can use larger training minibatches for faster training. Second, we propose better model structures that yield RNN-T models with very good accuracy but a small footprint. Trained with 30 thousand hours of anonymized and transcribed Microsoft production data, the best RNN-T model, with an even smaller model size (216 Megabytes), achieves up to 11.8% relative word error rate (WER) reduction from the baseline RNN-T model. This best RNN-T model is significantly better than the device hybrid model of similar size, achieving up to 15.0% relative WER reduction, and obtains similar WERs as the server hybrid model of 5120 Megabytes in size.
|
2108.07931
|
Lin Ning
|
Lin Ning, Karan Singhal, Ellie X. Zhou, Sushant Prakash
|
Learning Federated Representations and Recommendations with Limited
Negatives
| null | null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep retrieval models are widely used for learning entity representations and
recommendations. Federated learning provides a privacy-preserving way to train
these models without requiring centralization of user data. However, federated
deep retrieval models usually perform much worse than their centralized
counterparts due to non-IID (independent and identically distributed) training
data on clients, an intrinsic property of federated learning that limits
negatives available for training. We demonstrate that this issue is distinct
from the commonly studied client drift problem. This work proposes
batch-insensitive losses as a way to alleviate the non-IID negatives issue for
federated movie recommendations. We explore a variety of techniques and
identify that batch-insensitive losses can effectively improve the performance
of federated deep retrieval models, increasing the relative recall of the
federated model by up to 93.15% and reducing the relative gap in recall between
it and a centralized model from 27.22% - 43.14% to 0.53% - 2.42%. We also
open-source our code framework to accelerate further research and applications
of federated deep retrieval models.
|
[
{
"created": "Wed, 18 Aug 2021 02:01:52 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 04:50:31 GMT",
"version": "v2"
}
] |
2021-11-03
|
[
[
"Ning",
"Lin",
""
],
[
"Singhal",
"Karan",
""
],
[
"Zhou",
"Ellie X.",
""
],
[
"Prakash",
"Sushant",
""
]
] |
Deep retrieval models are widely used for learning entity representations and recommendations. Federated learning provides a privacy-preserving way to train these models without requiring centralization of user data. However, federated deep retrieval models usually perform much worse than their centralized counterparts due to non-IID (independent and identically distributed) training data on clients, an intrinsic property of federated learning that limits negatives available for training. We demonstrate that this issue is distinct from the commonly studied client drift problem. This work proposes batch-insensitive losses as a way to alleviate the non-IID negatives issue for federated movie recommendations. We explore a variety of techniques and identify that batch-insensitive losses can effectively improve the performance of federated deep retrieval models, increasing the relative recall of the federated model by up to 93.15% and reducing the relative gap in recall between it and a centralized model from 27.22% - 43.14% to 0.53% - 2.42%. We also open-source our code framework to accelerate further research and applications of federated deep retrieval models.
|
2205.02593
|
Ruixin Hong
|
Ruixin Hong, Hongming Zhang, Xintong Yu, Changshui Zhang
|
METGEN: A Module-Based Entailment Tree Generation Framework for Answer
Explanation
|
NAACL 2022 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowing the reasoning chains from knowledge to the predicted answers can help
construct an explainable question answering (QA) system. Advances on QA
explanation propose to explain the answers with entailment trees composed of
multiple entailment steps. While current work proposes to generate entailment
trees with end-to-end generative models, the steps in the generated trees are
not constrained and could be unreliable. In this paper, we propose METGEN, a
Module-based Entailment Tree GENeration framework that has multiple modules and
a reasoning controller. Given a question and several pieces of supporting
knowledge,
METGEN can iteratively generate the entailment tree by conducting single-step
entailment with separate modules and selecting the reasoning flow with the
controller. As each module is guided to perform a specific type of entailment
reasoning, the steps generated by METGEN are more reliable and valid.
Experiment results on the standard benchmark show that METGEN can outperform
previous state-of-the-art models with only 9% of the parameters.
|
[
{
"created": "Thu, 5 May 2022 12:06:02 GMT",
"version": "v1"
}
] |
2022-05-06
|
[
[
"Hong",
"Ruixin",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Yu",
"Xintong",
""
],
[
"Zhang",
"Changshui",
""
]
] |
Knowing the reasoning chains from knowledge to the predicted answers can help construct an explainable question answering (QA) system. Advances on QA explanation propose to explain the answers with entailment trees composed of multiple entailment steps. While current work proposes to generate entailment trees with end-to-end generative models, the steps in the generated trees are not constrained and could be unreliable. In this paper, we propose METGEN, a Module-based Entailment Tree GENeration framework that has multiple modules and a reasoning controller. Given a question and several pieces of supporting knowledge, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller. As each module is guided to perform a specific type of entailment reasoning, the steps generated by METGEN are more reliable and valid. Experiment results on the standard benchmark show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
|
2403.10638
|
Conor Artman
|
Conor M. Artman, Aditya Mate, Ezinne Nwankwo, Aliza Heching, Tsuyoshi
Id\'e, Ji\v{r}\'i Navr\'atil, Karthikeyan Shanmugam, Wei Sun, Kush R.
Varshney, Lauri Goldkind, Gidi Kroch, Jaclyn Sawyer, Ian Watson
|
A resource-constrained stochastic scheduling algorithm for homeless
street outreach and gleaning edible food
| null | null | null | null |
cs.LG cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We developed a common algorithmic solution addressing the problem of
resource-constrained outreach encountered by social change organizations with
different missions and operations: Breaking Ground -- an organization that
helps individuals experiencing homelessness in New York transition to permanent
housing and Leket -- the national food bank of Israel that rescues food from
farms and elsewhere to feed the hungry. Specifically, we developed an
estimation and optimization approach for partially-observed episodic restless
bandits under $k$-step transitions. The results show that our Thompson sampling
with Markov chain recovery (via Stein variational gradient descent) algorithm
significantly outperforms baselines for the problems of both organizations. We
carried out this work in a prospective manner with the express goal of devising
a flexible-enough but also useful-enough solution that can help overcome a lack
of sustainable impact in data science for social good.
|
[
{
"created": "Fri, 15 Mar 2024 19:12:28 GMT",
"version": "v1"
}
] |
2024-03-20
|
[
[
"Artman",
"Conor M.",
""
],
[
"Mate",
"Aditya",
""
],
[
"Nwankwo",
"Ezinne",
""
],
[
"Heching",
"Aliza",
""
],
[
"Idé",
"Tsuyoshi",
""
],
[
"Navrátil",
"Jiří",
""
],
[
"Shanmugam",
"Karthikeyan",
""
],
[
"Sun",
"Wei",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Goldkind",
"Lauri",
""
],
[
"Kroch",
"Gidi",
""
],
[
"Sawyer",
"Jaclyn",
""
],
[
"Watson",
"Ian",
""
]
] |
We developed a common algorithmic solution addressing the problem of resource-constrained outreach encountered by social change organizations with different missions and operations: Breaking Ground -- an organization that helps individuals experiencing homelessness in New York transition to permanent housing and Leket -- the national food bank of Israel that rescues food from farms and elsewhere to feed the hungry. Specifically, we developed an estimation and optimization approach for partially-observed episodic restless bandits under $k$-step transitions. The results show that our Thompson sampling with Markov chain recovery (via Stein variational gradient descent) algorithm significantly outperforms baselines for the problems of both organizations. We carried out this work in a prospective manner with the express goal of devising a flexible-enough but also useful-enough solution that can help overcome a lack of sustainable impact in data science for social good.
|
2311.13831
|
Juil Koo
|
Juil Koo, Chanho Park, Minhyuk Sung
|
Posterior Distillation Sampling
|
Project page: https://posterior-distillation-sampling.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Posterior Distillation Sampling (PDS), a novel optimization
method for parametric image editing based on diffusion models. Existing
optimization-based methods, which leverage the powerful 2D prior of diffusion
models to handle various parametric images, have mainly focused on generation.
Unlike generation, editing requires a balance between conforming to the target
attribute and preserving the identity of the source content. Recent 2D image
editing methods have achieved this balance by leveraging the stochastic latent
encoded in the generative process of diffusion models. To extend the editing
capabilities of diffusion models shown in pixel space to parameter space, we
reformulate the 2D image editing method into an optimization form named PDS.
PDS matches the stochastic latents of the source and the target, enabling the
sampling of targets in diverse parameter spaces that align with a desired
attribute while maintaining the source's identity. We demonstrate that this
optimization resembles running a generative process with the target attribute,
but aligning this process with the trajectory of the source's generative
process. Extensive editing results in Neural Radiance Fields and Scalable
Vector Graphics representations demonstrate that PDS is capable of sampling
targets to fulfill the aforementioned balance across various parameter spaces.
|
[
{
"created": "Thu, 23 Nov 2023 07:25:31 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2024 05:55:56 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Apr 2024 01:18:26 GMT",
"version": "v3"
}
] |
2024-04-03
|
[
[
"Koo",
"Juil",
""
],
[
"Park",
"Chanho",
""
],
[
"Sung",
"Minhyuk",
""
]
] |
We introduce Posterior Distillation Sampling (PDS), a novel optimization method for parametric image editing based on diffusion models. Existing optimization-based methods, which leverage the powerful 2D prior of diffusion models to handle various parametric images, have mainly focused on generation. Unlike generation, editing requires a balance between conforming to the target attribute and preserving the identity of the source content. Recent 2D image editing methods have achieved this balance by leveraging the stochastic latent encoded in the generative process of diffusion models. To extend the editing capabilities of diffusion models shown in pixel space to parameter space, we reformulate the 2D image editing method into an optimization form named PDS. PDS matches the stochastic latents of the source and the target, enabling the sampling of targets in diverse parameter spaces that align with a desired attribute while maintaining the source's identity. We demonstrate that this optimization resembles running a generative process with the target attribute, but aligning this process with the trajectory of the source's generative process. Extensive editing results in Neural Radiance Fields and Scalable Vector Graphics representations demonstrate that PDS is capable of sampling targets to fulfill the aforementioned balance across various parameter spaces.
|
2012.01489
|
Ekram Hossain
|
S. Hu, X. Chen, W. Ni, E. Hossain, and X. Wang
|
Distributed Machine Learning for Wireless Communication Networks:
Techniques, Architectures, and Applications
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed machine learning (DML) techniques, such as federated learning,
partitioned learning, and distributed reinforcement learning, have been
increasingly applied to wireless communications. This is due to improved
capabilities of terminal devices, explosively growing data volume, congestion
in the radio interfaces, and increasing concern of data privacy. The unique
features of wireless systems, such as large scale, geographically dispersed
deployment, user mobility, and massive amount of data, give rise to new
challenges in the design of DML techniques. There is a clear gap in the
existing literature in that the DML techniques are yet to be systematically
reviewed for their applicability to wireless systems. This survey bridges the
gap by providing a contemporary and comprehensive survey of DML techniques with
a focus on wireless networks. Specifically, we review the latest applications
of DML in power control, spectrum management, user association, and edge cloud
computing. The optimality, scalability, convergence rate, computation cost, and
communication overhead of DML are analyzed. We also discuss the potential
adversarial attacks faced by DML applications, and describe state-of-the-art
countermeasures to preserve privacy and security. Last but not least, we point
out a number of key issues yet to be addressed, and collate potentially
interesting and challenging topics for future research.
|
[
{
"created": "Wed, 2 Dec 2020 19:53:32 GMT",
"version": "v1"
}
] |
2020-12-04
|
[
[
"Hu",
"S.",
""
],
[
"Chen",
"X.",
""
],
[
"Ni",
"W.",
""
],
[
"Hossain",
"E.",
""
],
[
"Wang",
"X.",
""
]
] |
Distributed machine learning (DML) techniques, such as federated learning, partitioned learning, and distributed reinforcement learning, have been increasingly applied to wireless communications. This is due to improved capabilities of terminal devices, explosively growing data volume, congestion in the radio interfaces, and increasing concern of data privacy. The unique features of wireless systems, such as large scale, geographically dispersed deployment, user mobility, and massive amount of data, give rise to new challenges in the design of DML techniques. There is a clear gap in the existing literature in that the DML techniques are yet to be systematically reviewed for their applicability to wireless systems. This survey bridges the gap by providing a contemporary and comprehensive survey of DML techniques with a focus on wireless networks. Specifically, we review the latest applications of DML in power control, spectrum management, user association, and edge cloud computing. The optimality, scalability, convergence rate, computation cost, and communication overhead of DML are analyzed. We also discuss the potential adversarial attacks faced by DML applications, and describe state-of-the-art countermeasures to preserve privacy and security. Last but not least, we point out a number of key issues yet to be addressed, and collate potentially interesting and challenging topics for future research.
|
1904.03879
|
Shuhao Gu
|
Shuhao Gu, Yang Feng, Qun Liu
|
Improving Domain Adaptation Translation with Domain Invariant and
Specific Information
|
11 pages, accepted by NAACL 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In domain adaptation for neural machine translation, translation performance
can benefit from separating features into domain-specific features and common
features. In this paper, we propose a method to explicitly model the two kinds
of information in the encoder-decoder framework so as to exploit out-of-domain
data in in-domain training. In our method, we maintain a private encoder and a
private decoder for each domain which are used to model domain-specific
information. In the meantime, we introduce a common encoder and a common
decoder shared by all the domains which can only have domain-independent
information flow through. Besides, we add a discriminator to the shared encoder
and employ adversarial training for the whole model to reinforce the
performance of information separation and machine translation simultaneously.
Experiment results show that our method can outperform competitive baselines
greatly on multiple data sets.
|
[
{
"created": "Mon, 8 Apr 2019 08:00:25 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Sep 2019 12:32:26 GMT",
"version": "v2"
}
] |
2019-09-24
|
[
[
"Gu",
"Shuhao",
""
],
[
"Feng",
"Yang",
""
],
[
"Liu",
"Qun",
""
]
] |
In domain adaptation for neural machine translation, translation performance can benefit from separating features into domain-specific features and common features. In this paper, we propose a method to explicitly model the two kinds of information in the encoder-decoder framework so as to exploit out-of-domain data in in-domain training. In our method, we maintain a private encoder and a private decoder for each domain which are used to model domain-specific information. In the meantime, we introduce a common encoder and a common decoder shared by all the domains which can only have domain-independent information flow through. Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously. Experiment results show that our method can outperform competitive baselines greatly on multiple data sets.
|
2110.04045
|
Lei Chen
|
Lei Chen, Yilin Liu, Yue Li, Lingyun Yu, BoYu Gao, Maurizio Caon, Yong
Yue, and Hai-Ning Liang
|
Effect of Visual Cues on Pointing Tasks in Co-located Augmented Reality
Collaboration
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Visual cues are essential in computer-mediated communication. It is
especially important when communication happens in a collaboration scenario
that requires focusing several users' attention on a specific object among
other similar ones. This paper explores the effect of visual cues on pointing
tasks in co-located Augmented Reality (AR) collaboration. A user study (N =
32, 16 pairs) was conducted to compare two types of visual cues: Pointing Line
(PL) and Moving Track (MT). Both are head-based visual techniques. Through a
series of collaborative pointing tasks on objects with different states
(static and dynamic) and density levels (low, medium and high), the results
showed that PL was better on task performance and usability, but MT was rated
higher on social presence and user preference. Based on our results, some
design implications are provided for pointing tasks in co-located AR
collaboration.
|
[
{
"created": "Fri, 8 Oct 2021 11:44:15 GMT",
"version": "v1"
}
] |
2021-10-11
|
[
[
"Chen",
"Lei",
""
],
[
"Liu",
"Yilin",
""
],
[
"Li",
"Yue",
""
],
[
"Yu",
"Lingyun",
""
],
[
"Gao",
"BoYu",
""
],
[
"Caon",
"Maurizio",
""
],
[
"Yue",
"Yong",
""
],
[
"Liang",
"Hai-Ning",
""
]
] |
Visual cues are essential in computer-mediated communication. It is especially important when communication happens in a collaboration scenario that requires focusing several users' attention on a specific object among other similar ones. This paper explores the effect of visual cues on pointing tasks in co-located Augmented Reality (AR) collaboration. A user study (N = 32, 16 pairs) was conducted to compare two types of visual cues: Pointing Line (PL) and Moving Track (MT). Both are head-based visual techniques. Through a series of collaborative pointing tasks on objects with different states (static and dynamic) and density levels (low, medium and high), the results showed that PL was better on task performance and usability, but MT was rated higher on social presence and user preference. Based on our results, some design implications are provided for pointing tasks in co-located AR collaboration.
|
2311.02394
|
Robert Tjarko Lange
|
Robert Tjarko Lange, Yujin Tang, Yingtao Tian
|
NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning
Applications
|
22 pages, 20 figures, 37th Conference on Neural Information
Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks
| null | null | null |
cs.NE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the Deep Learning community has become interested in evolutionary
optimization (EO) as a means to address hard optimization problems, e.g.
meta-learning through long inner loop unrolls or optimizing non-differentiable
operators. One core reason for this trend has been the recent innovation in
hardware acceleration and compatible software - making distributed population
evaluations much easier than before. Unlike for gradient descent-based methods
though, there is a lack of hyperparameter understanding and best practices for
EO - arguably due to severely less 'graduate student descent' and benchmarking
being performed for EO methods. Additionally, classical benchmarks from the
evolutionary community provide few practical insights for Deep Learning
applications. This poses challenges for newcomers to hardware-accelerated EO
and hinders significant adoption. Hence, we establish a new benchmark of EO
methods (NeuroEvoBench) tailored toward Deep Learning applications and
exhaustively evaluate traditional and meta-learned EO. We investigate core
scientific questions including resource allocation, fitness shaping,
normalization, regularization & scalability of EO. The benchmark is
open-sourced at https://github.com/neuroevobench/neuroevobench under Apache-2.0
license.
|
[
{
"created": "Sat, 4 Nov 2023 12:42:38 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Lange",
"Robert Tjarko",
""
],
[
"Tang",
"Yujin",
""
],
[
"Tian",
"Yingtao",
""
]
] |
Recently, the Deep Learning community has become interested in evolutionary optimization (EO) as a means to address hard optimization problems, e.g. meta-learning through long inner loop unrolls or optimizing non-differentiable operators. One core reason for this trend has been the recent innovation in hardware acceleration and compatible software - making distributed population evaluations much easier than before. Unlike for gradient descent-based methods though, there is a lack of hyperparameter understanding and best practices for EO - arguably due to severely less 'graduate student descent' and benchmarking being performed for EO methods. Additionally, classical benchmarks from the evolutionary community provide few practical insights for Deep Learning applications. This poses challenges for newcomers to hardware-accelerated EO and hinders significant adoption. Hence, we establish a new benchmark of EO methods (NeuroEvoBench) tailored toward Deep Learning applications and exhaustively evaluate traditional and meta-learned EO. We investigate core scientific questions including resource allocation, fitness shaping, normalization, regularization & scalability of EO. The benchmark is open-sourced at https://github.com/neuroevobench/neuroevobench under Apache-2.0 license.
|
1303.4277
|
Radu Ciucanu
|
Iovka Boneva, Radu Ciucanu, Slawek Staworko
|
Simple Schemas for Unordered XML
|
16th International Workshop on the Web and Databases (WebDB 2013)
http://webdb2013.lille.inria.fr/
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider unordered XML, where the relative order among siblings is
ignored, and propose two simple yet practical schema formalisms: disjunctive
multiplicity schemas (DMS), and its restriction, disjunction-free multiplicity
schemas (MS). We investigate their computational properties and characterize
the complexity of the following static analysis problems: schema
satisfiability, membership of a tree to the language of a schema, schema
containment, twig query satisfiability, implication, and containment in the
presence of schema. Our research indicates that the proposed formalisms retain
much of the expressiveness of DTDs without an increase in computational
complexity.
|
[
{
"created": "Mon, 18 Mar 2013 15:03:43 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Mar 2013 18:55:16 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Jun 2013 11:35:08 GMT",
"version": "v3"
},
{
"created": "Thu, 20 Jun 2013 08:23:49 GMT",
"version": "v4"
}
] |
2013-06-21
|
[
[
"Boneva",
"Iovka",
""
],
[
"Ciucanu",
"Radu",
""
],
[
"Staworko",
"Slawek",
""
]
] |
We consider unordered XML, where the relative order among siblings is ignored, and propose two simple yet practical schema formalisms: disjunctive multiplicity schemas (DMS), and its restriction, disjunction-free multiplicity schemas (MS). We investigate their computational properties and characterize the complexity of the following static analysis problems: schema satisfiability, membership of a tree to the language of a schema, schema containment, twig query satisfiability, implication, and containment in the presence of schema. Our research indicates that the proposed formalisms retain much of the expressiveness of DTDs without an increase in computational complexity.
|
1812.03539
|
Rohit Garg
|
Rohit Garg, Shichao Yang and Sebastian Scherer
|
Monocular and Stereo Cues for Landing Zone Evaluation for Micro UAVs
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous and safe landing is important for unmanned aerial vehicles. We
present a monocular and stereo image based method for fast and accurate landing
zone evaluation for UAVs in various scenarios. Many existing methods rely on
Lidar or depth sensor to provide accurate and dense surface reconstruction. We
utilize stereo images to evaluate the slope and monocular images to compute
homography error. By combining them together, our approach works for both rigid
and non-rigid dynamic surfaces. Experiments on many outdoor scenes such as
water, grass and roofs, demonstrate the robustness and effectiveness of our
approach.
|
[
{
"created": "Sun, 9 Dec 2018 18:21:59 GMT",
"version": "v1"
}
] |
2018-12-11
|
[
[
"Garg",
"Rohit",
""
],
[
"Yang",
"Shichao",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
Autonomous and safe landing is important for unmanned aerial vehicles. We present a monocular and stereo image based method for fast and accurate landing zone evaluation for UAVs in various scenarios. Many existing methods rely on Lidar or depth sensor to provide accurate and dense surface reconstruction. We utilize stereo images to evaluate the slope and monocular images to compute homography error. By combining them together, our approach works for both rigid and non-rigid dynamic surfaces. Experiments on many outdoor scenes such as water, grass and roofs, demonstrate the robustness and effectiveness of our approach.
|
2005.09025
|
Alexander Badri-Spr\"owitz
|
Felix Ruppert and Alexander Badri-Spr\"owitz
|
FootTile: a Rugged Foot Sensor for Force and Center of Pressure Sensing
in Soft Terrain
| null | null |
10.1109/ICRA40945.2020.9197466
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present FootTile, a foot sensor for reaction force and
center of pressure sensing in challenging terrain. We compare our sensor design
to standard biomechanical devices, force plates and pressure plates. We show
that FootTile can accurately estimate force and pressure distribution during
legged locomotion. FootTile weighs 0.9g, has a sampling rate of 330Hz, a
footprint of 10 by 10mm and can easily be adapted in sensor range to the
required load case. In three experiments we validate: first the performance of
the individual sensor, second an array of FootTiles for center of pressure
sensing and third the ground reaction force estimation during locomotion in
granular substrate. We then go on to show the accurate sensing capabilities of
the waterproof sensor in liquid mud, as a showcase for real world rough terrain
use.
|
[
{
"created": "Mon, 18 May 2020 18:45:39 GMT",
"version": "v1"
}
] |
2022-03-07
|
[
[
"Ruppert",
"Felix",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
In this paper we present FootTile, a foot sensor for reaction force and center of pressure sensing in challenging terrain. We compare our sensor design to standard biomechanical devices, force plates and pressure plates. We show that FootTile can accurately estimate force and pressure distribution during legged locomotion. FootTile weighs 0.9g, has a sampling rate of 330Hz, a footprint of 10 by 10mm and can easily be adapted in sensor range to the required load case. In three experiments we validate: first the performance of the individual sensor, second an array of FootTiles for center of pressure sensing and third the ground reaction force estimation during locomotion in granular substrate. We then go on to show the accurate sensing capabilities of the waterproof sensor in liquid mud, as a showcase for real world rough terrain use.
|
1911.00178
|
Anindya De
|
Anindya De and Rocco A. Servedio
|
Kruskal-Katona for convex sets, with applications
| null | null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The well-known Kruskal-Katona theorem in combinatorics says that (under mild
conditions) every monotone Boolean function $f: \{0,1\}^n \to \{0,1\}$ has a
nontrivial "density increment." This means that the fraction of inputs of
Hamming weight $k+1$ for which $f=1$ is significantly larger than the fraction
of inputs of Hamming weight $k$ for which $f=1.$
We prove an analogous statement for convex sets. Informally, our main result
says that (under mild conditions) every convex set $K \subset \mathbb{R}^n$ has
a nontrivial density increment. This means that the fraction of the radius-$r$
sphere that lies within $K$ is significantly larger than the fraction of the
radius-$r'$ sphere that lies within $K$, for $r'$ suitably larger than $r$. For
centrally symmetric convex sets we show that our density increment result is
essentially optimal.
As a consequence of our Kruskal-Katona type theorem, we obtain the first
efficient weak learning algorithm for convex sets under the Gaussian
distribution. We show that any convex set can be weak learned to advantage
$\Omega(1/n)$ in $\mathsf{poly}(n)$ time under any Gaussian distribution and
that any centrally symmetric convex set can be weak learned to advantage
$\Omega(1/\sqrt{n})$ in $\mathsf{poly}(n)$ time. We also give an
information-theoretic lower bound showing that the latter advantage is
essentially optimal for $\mathsf{poly}(n)$ time weak learning algorithms. As
another consequence of our Kruskal-Katona theorem, we give the first nontrivial
Gaussian noise stability bounds for convex sets at high noise rates. Our
results extend the known correspondence between monotone Boolean functions over
$ \{0,1\}^n$ and convex bodies in Gaussian space.
|
[
{
"created": "Fri, 1 Nov 2019 01:39:40 GMT",
"version": "v1"
}
] |
2019-11-04
|
[
[
"De",
"Anindya",
""
],
[
"Servedio",
"Rocco A.",
""
]
] |
The well-known Kruskal-Katona theorem in combinatorics says that (under mild conditions) every monotone Boolean function $f: \{0,1\}^n \to \{0,1\}$ has a nontrivial "density increment." This means that the fraction of inputs of Hamming weight $k+1$ for which $f=1$ is significantly larger than the fraction of inputs of Hamming weight $k$ for which $f=1.$ We prove an analogous statement for convex sets. Informally, our main result says that (under mild conditions) every convex set $K \subset \mathbb{R}^n$ has a nontrivial density increment. This means that the fraction of the radius-$r$ sphere that lies within $K$ is significantly larger than the fraction of the radius-$r'$ sphere that lies within $K$, for $r'$ suitably larger than $r$. For centrally symmetric convex sets we show that our density increment result is essentially optimal. As a consequence of our Kruskal-Katona type theorem, we obtain the first efficient weak learning algorithm for convex sets under the Gaussian distribution. We show that any convex set can be weak learned to advantage $\Omega(1/n)$ in $\mathsf{poly}(n)$ time under any Gaussian distribution and that any centrally symmetric convex set can be weak learned to advantage $\Omega(1/\sqrt{n})$ in $\mathsf{poly}(n)$ time. We also give an information-theoretic lower bound showing that the latter advantage is essentially optimal for $\mathsf{poly}(n)$ time weak learning algorithms. As another consequence of our Kruskal-Katona theorem, we give the first nontrivial Gaussian noise stability bounds for convex sets at high noise rates. Our results extend the known correspondence between monotone Boolean functions over $ \{0,1\}^n$ and convex bodies in Gaussian space.
|
2009.05132
|
SeungKee Jeon
|
SeungKee Jeon
|
1st Place Solution to Google Landmark Retrieval 2020
|
3 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the 1st place solution to the Google Landmark Retrieval
2020 Competition on Kaggle. The solution is based on metric learning to
classify numerous landmark classes, and uses transfer learning with two train
datasets, fine-tuning on bigger images, adjusting loss weight for cleaner
samples, and ensemble to enhance the model's performance further. Finally, it
scored 0.38677 mAP@100 on the private leaderboard.
|
[
{
"created": "Mon, 24 Aug 2020 05:45:20 GMT",
"version": "v1"
}
] |
2020-09-14
|
[
[
"Jeon",
"SeungKee",
""
]
] |
This paper presents the 1st place solution to the Google Landmark Retrieval 2020 Competition on Kaggle. The solution is based on metric learning to classify numerous landmark classes, and uses transfer learning with two train datasets, fine-tuning on bigger images, adjusting loss weight for cleaner samples, and ensemble to enhance the model's performance further. Finally, it scored 0.38677 mAP@100 on the private leaderboard.
|
2011.08430
|
Yueyue Dai
|
Yueyue Dai (Member, IEEE), Ke Zhang, Sabita Maharjan (Senior Member,
IEEE), and Yan Zhang (Fellow, IEEE)
|
Deep Reinforcement Learning for Stochastic Computation Offloading in
Digital Twin Networks
|
10 pages
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid development of Industrial Internet of Things (IIoT) requires
industrial production towards digitalization to improve network efficiency.
Digital Twin is a promising technology to empower the digital transformation of
IIoT by creating virtual models of physical objects. However, the provision of
network efficiency in IIoT is very challenging due to resource-constrained
devices, stochastic tasks, and resources heterogeneity. Distributed resources
in IIoT networks can be efficiently exploited through computation offloading to
reduce energy consumption while enhancing data processing efficiency. In this
paper, we first propose a new paradigm Digital Twin Networks (DTN) to build
network topology and the stochastic task arrival model in IIoT systems. Then,
we formulate the stochastic computation offloading and resource allocation
problem to minimize the long-term energy efficiency. As the formulated problem
is a stochastic programming problem, we leverage Lyapunov optimization
technique to transform the original problem into a deterministic per-time slot
problem. Finally, we present Asynchronous Actor-Critic (AAC) algorithm to find
the optimal stochastic computation offloading policy. Illustrative results
demonstrate that our proposed scheme is able to significantly outperform the
benchmarks.
|
[
{
"created": "Tue, 17 Nov 2020 05:40:16 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 02:42:44 GMT",
"version": "v2"
}
] |
2020-11-19
|
[
[
"Dai",
"Yueyue",
"",
"Member, IEEE"
],
[
"Zhang",
"Ke",
"",
"Senior Member,\n IEEE"
],
[
"Maharjan",
"Sabita",
"",
"Senior Member,\n IEEE"
],
[
"Zhang",
"Yan",
"",
"Fellow, IEEE"
]
] |
The rapid development of Industrial Internet of Things (IIoT) requires industrial production towards digitalization to improve network efficiency. Digital Twin is a promising technology to empower the digital transformation of IIoT by creating virtual models of physical objects. However, the provision of network efficiency in IIoT is very challenging due to resource-constrained devices, stochastic tasks, and resources heterogeneity. Distributed resources in IIoT networks can be efficiently exploited through computation offloading to reduce energy consumption while enhancing data processing efficiency. In this paper, we first propose a new paradigm Digital Twin Networks (DTN) to build network topology and the stochastic task arrival model in IIoT systems. Then, we formulate the stochastic computation offloading and resource allocation problem to minimize the long-term energy efficiency. As the formulated problem is a stochastic programming problem, we leverage Lyapunov optimization technique to transform the original problem into a deterministic per-time slot problem. Finally, we present Asynchronous Actor-Critic (AAC) algorithm to find the optimal stochastic computation offloading policy. Illustrative results demonstrate that our proposed scheme is able to significantly outperform the benchmarks.
|
1508.04025
|
Minh-Thang Luong
|
Minh-Thang Luong, Hieu Pham, Christopher D. Manning
|
Effective Approaches to Attention-based Neural Machine Translation
|
11 pages, 7 figures, EMNLP 2015 camera-ready version, more training
details
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An attentional mechanism has lately been used to improve neural machine
translation (NMT) by selectively focusing on parts of the source sentence
during translation. However, there has been little work exploring useful
architectures for attention-based NMT. This paper examines two simple and
effective classes of attentional mechanism: a global approach which always
attends to all source words and a local one that only looks at a subset of
source words at a time. We demonstrate the effectiveness of both approaches
over the WMT translation tasks between English and German in both directions.
With local attention, we achieve a significant gain of 5.0 BLEU points over
non-attentional systems which already incorporate known techniques such as
dropout. Our ensemble model using different attention architectures has
established a new state-of-the-art result in the WMT'15 English to German
translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over
the existing best system backed by NMT and an n-gram reranker.
|
[
{
"created": "Mon, 17 Aug 2015 13:43:19 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Aug 2015 10:27:26 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Aug 2015 08:14:59 GMT",
"version": "v3"
},
{
"created": "Sat, 29 Aug 2015 09:03:04 GMT",
"version": "v4"
},
{
"created": "Sun, 20 Sep 2015 08:25:52 GMT",
"version": "v5"
}
] |
2015-09-22
|
[
[
"Luong",
"Minh-Thang",
""
],
[
"Pham",
"Hieu",
""
],
[
"Manning",
"Christopher D.",
""
]
] |
An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.
|
2408.08127
|
Emmanuel Deruty
|
Emmanuel Deruty, David Meredith and Stefan Lattner
|
The evolution of inharmonicity and noisiness in contemporary popular
music
|
43 pages, 23 figures
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Much of Western classical music uses instruments based on acoustic resonance.
Such instruments produce harmonic or quasi-harmonic sounds. On the other hand,
since the early 1970s, popular music has largely been produced in the recording
studio. As a result, popular music is not bound to be based on harmonic or
quasi-harmonic sounds. In this study, we use modified MPEG-7 features to
explore and characterise the way in which the use of noise and inharmonicity
has evolved in popular music since 1961. We set this evolution in the context
of other broad categories of music, including Western classical piano music,
Western classical orchestral music, and musique concr\`ete. We propose new
features that allow us to distinguish between inharmonicity resulting from
noise and inharmonicity resulting from interactions between relatively discrete
partials. When the history of contemporary popular music is viewed through the
lens of these new features, we find that the period since 1961 can be divided
into three phases. From 1961 to 1972, there was a steady increase in
inharmonicity but no significant increase in noise. From 1972 to 1986, both
inharmonicity and noise increased. Then, since 1986, there has been a steady
decrease in both inharmonicity and noise to today's popular music which is
significantly less noisy but more inharmonic than the music of the sixties. We
relate these observed trends to the development of music production practice
over the period and illustrate them with focused analyses of certain key
artists and tracks.
|
[
{
"created": "Thu, 15 Aug 2024 12:52:40 GMT",
"version": "v1"
}
] |
2024-08-16
|
[
[
"Deruty",
"Emmanuel",
""
],
[
"Meredith",
"David",
""
],
[
"Lattner",
"Stefan",
""
]
] |
Much of Western classical music uses instruments based on acoustic resonance. Such instruments produce harmonic or quasi-harmonic sounds. On the other hand, since the early 1970s, popular music has largely been produced in the recording studio. As a result, popular music is not bound to be based on harmonic or quasi-harmonic sounds. In this study, we use modified MPEG-7 features to explore and characterise the way in which the use of noise and inharmonicity has evolved in popular music since 1961. We set this evolution in the context of other broad categories of music, including Western classical piano music, Western classical orchestral music, and musique concr\`ete. We propose new features that allow us to distinguish between inharmonicity resulting from noise and inharmonicity resulting from interactions between relatively discrete partials. When the history of contemporary popular music is viewed through the lens of these new features, we find that the period since 1961 can be divided into three phases. From 1961 to 1972, there was a steady increase in inharmonicity but no significant increase in noise. From 1972 to 1986, both inharmonicity and noise increased. Then, since 1986, there has been a steady decrease in both inharmonicity and noise to today's popular music which is significantly less noisy but more inharmonic than the music of the sixties. We relate these observed trends to the development of music production practice over the period and illustrate them with focused analyses of certain key artists and tracks.
|
2001.01204
|
HangTai Li
|
Hangtai Li and Yingbo Liu and Rui Tan
|
Covert Association of Applications on Edge Devices by Processor Workload
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scheme of application (app) distribution systems involving incentivized
third-party app vendors is a desirable option for the emerging edge computing
systems. However, such a scheme also brings various security challenges as
faced by the current mobile app distribution systems. In this paper, we study a
threat named covert device association, in which the vendors of two apps
collude to figure out which of their app installations run on the same edge
device. If the two colluding apps are popular, the threat can be used to launch
various types of further attacks at scale. For example, the user of the
compromised edge device, who wishes to remain anonymous to one of the two apps,
will be de-anonymized if the user is not anonymous to the other app. Moreover,
the coalition of the two apps will have an escalated privilege set that is the
union of their individual privilege sets. In this paper, we implement the
threat by a reliable and ubiquitous covert channel based on the edge device
processor workload. The implementations on three edge devices (two smartphones
and an embedded compute board) running Android and Android Things do not
require any privileged permissions. Our implementations cover three attack
scenarios of 1) two apps running on the same Android phone, 2) an app and a web
session in the Tor browser running on the same Android phone, and 3) two apps
running on the same Android Things device. Experiments show that the covert
channel gives at least 0.25 bps data rate and the covert device association
takes at most 3.2 minutes.
|
[
{
"created": "Sun, 5 Jan 2020 10:23:01 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jan 2020 08:44:19 GMT",
"version": "v2"
}
] |
2020-01-13
|
[
[
"Li",
"Hangtai",
""
],
[
"Liu",
"Yingbo",
""
],
[
"Tan",
"Rui",
""
]
] |
The scheme of application (app) distribution systems involving incentivized third-party app vendors is a desirable option for the emerging edge computing systems. However, such a scheme also brings various security challenges as faced by the current mobile app distribution systems. In this paper, we study a threat named covert device association, in which the vendors of two apps collude to figure out which of their app installations run on the same edge device. If the two colluding apps are popular, the threat can be used to launch various types of further attacks at scale. For example, the user of the compromised edge device, who wishes to remain anonymous to one of the two apps, will be de-anonymized if the user is not anonymous to the other app. Moreover, the coalition of the two apps will have an escalated privilege set that is the union of their individual privilege sets. In this paper, we implement the threat by a reliable and ubiquitous covert channel based on the edge device processor workload. The implementations on three edge devices (two smartphones and an embedded compute board) running Android and Android Things do not require any privileged permissions. Our implementations cover three attack scenarios of 1) two apps running on the same Android phone, 2) an app and a web session in the Tor browser running on the same Android phone, and 3) two apps running on the same Android Things device. Experiments show that the covert channel gives at least 0.25 bps data rate and the covert device association takes at most 3.2 minutes.
|
1106.0257
|
R. Maclin
|
R. Maclin, D. Opitz
|
Popular Ensemble Methods: An Empirical Study
| null |
Journal Of Artificial Intelligence Research, Volume 11, pages
169-198, 1999
|
10.1613/jair.614
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An ensemble consists of a set of individually trained classifiers (such as
neural networks or decision trees) whose predictions are combined when
classifying novel instances. Previous research has shown that an ensemble is
often more accurate than any of the single classifiers in the ensemble. Bagging
(Breiman, 1996c) and Boosting (Freund and Shapire, 1996; Shapire, 1990) are two
relatively new but popular methods for producing ensembles. In this paper we
evaluate these methods on 23 data sets using both neural networks and decision
trees as our classification algorithm. Our results clearly indicate a number of
conclusions. First, while Bagging is almost always more accurate than a single
classifier, it is sometimes much less accurate than Boosting. On the other
hand, Boosting can create ensembles that are less accurate than a single
classifier -- especially when using neural networks. Analysis indicates that
the performance of the Boosting methods is dependent on the characteristics of
the data set being examined. In fact, further results show that Boosting
ensembles may overfit noisy data sets, thus decreasing its performance.
Finally, consistent with previous studies, our work suggests that most of the
gain in an ensemble's performance comes in the first few classifiers combined;
however, relatively large gains can be seen up to 25 classifiers when Boosting
decision trees.
|
[
{
"created": "Wed, 1 Jun 2011 16:41:44 GMT",
"version": "v1"
}
] |
2011-06-02
|
[
[
"Maclin",
"R.",
""
],
[
"Opitz",
"D.",
""
]
] |
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Shapire, 1996; Shapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing its performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
|
1504.02990
|
Shaoshi Yang Dr.
|
Haijing Liu, Hui Gao, Shaoshi Yang, Tiejun Lv
|
Low-Complexity Downlink User Selection for Massive MIMO Systems
|
11 pages, 27 figures, Accepted to publish on IEEE Systems Journal --
Special Issue on 5G Wireless Systems with Massive MIMO, Apr. 2015
| null |
10.1109/JSYST.2015.2422475
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a pair of low-complexity user selection schemes with
zero-forcing precoding for multiuser massive MIMO downlink systems, in which
the base station is equipped with a large-scale antenna array. First, we derive
approximations of the ergodic sum rates of the systems invoking the
conventional random user selection (RUS) and the location-dependant user
selection (LUS). Then, the optimal number of simultaneously served user
equipments (UEs), $K^*$, is investigated to maximize the sum rate
approximations. Upon exploiting $K^*$, we develop two user selection schemes,
namely $K^*$-RUS and $K^*$-LUS, where $K^*$ UEs are selected either randomly or
based on their locations. Both of the proposed schemes are independent of the
instantaneous channel state information of small-scale fading, therefore
enjoying the same extremely-low computational complexity as that of the
conventional RUS scheme. Moreover, both of our proposed schemes achieve
significant sum rate improvement over the conventional RUS. In addition, it is
worth noting that like the conventional RUS, the $K^*$-RUS achieves good
fairness among UEs.
|
[
{
"created": "Sun, 12 Apr 2015 16:58:08 GMT",
"version": "v1"
}
] |
2015-06-02
|
[
[
"Liu",
"Haijing",
""
],
[
"Gao",
"Hui",
""
],
[
"Yang",
"Shaoshi",
""
],
[
"Lv",
"Tiejun",
""
]
] |
In this paper we propose a pair of low-complexity user selection schemes with zero-forcing precoding for multiuser massive MIMO downlink systems, in which the base station is equipped with a large-scale antenna array. First, we derive approximations of the ergodic sum rates of the systems invoking the conventional random user selection (RUS) and the location-dependent user selection (LUS). Then, the optimal number of simultaneously served user equipments (UEs), $K^*$, is investigated to maximize the sum rate approximations. Upon exploiting $K^*$, we develop two user selection schemes, namely $K^*$-RUS and $K^*$-LUS, where $K^*$ UEs are selected either randomly or based on their locations. Both of the proposed schemes are independent of the instantaneous channel state information of small-scale fading, therefore enjoying the same extremely-low computational complexity as that of the conventional RUS scheme. Moreover, both of our proposed schemes achieve significant sum rate improvement over the conventional RUS. In addition, it is worth noting that like the conventional RUS, the $K^*$-RUS achieves good fairness among UEs.
|